Dataset schema (field name, value type, and observed value-length range):
title: string, 1–322 characters
pmid: string, 7–8
background_abstract: string, 7–2.19k
background_abstract_label: string, 7–30
methods_abstract: string, 18–2.32k
methods_abstract_label: string, 5–43
results_abstract: string, 35–2.65k
results_abstract_label: string, 6–30
conclusions_abstract: string, 33–1.83k
conclusions_abstract_label: string, 6–45
mesh_descriptor_names: sequence
pmcid: string, 5–8
background_title: string, 10–95
background_text: string, 107–140k
methods_title: string, 4–129
methods_text: string, 99–196k
results_title: string, 6–172
results_text: string, 112–685k
conclusions_title: string, 8–58
conclusions_text: string, 64–66.1k
other_sections_titles: sequence
other_sections_texts: sequence
other_sections_sec_types: sequence
all_sections_titles: sequence
all_sections_texts: sequence
all_sections_sec_types: sequence
keywords: sequence
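Given the column layout above, the following minimal sketch shows one way to load and inspect a record with these fields using the Hugging Face `datasets` library. The repository id and split name below are placeholders (assumptions), not taken from this card; the example record shown after the snippet illustrates the field contents.

```python
# Minimal loading sketch; "example/pubmed-structured-sections" is a placeholder
# repository id, not the real one for this dataset.
from datasets import load_dataset

ds = load_dataset("example/pubmed-structured-sections", split="train")
record = ds[0]

# Scalar string fields; some may be None for a given record (e.g. methods_text
# is null in the example record shown below).
print(record["title"], record["pmid"], record["pmcid"])
print(record["results_abstract_label"], record["results_abstract"][:120])

# Sequence fields are parallel lists of per-section titles and texts.
for sec_title, sec_text in zip(record["other_sections_titles"],
                               record["other_sections_texts"]):
    preview = "" if sec_text is None else sec_text[:80]
    print(f"{sec_title}: {preview}")
```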
Do two intravenous iron sucrose preparations have the same efficacy?
21355067
Intravenous (i.v.) iron sucrose similar (ISS) preparations are available but clinical comparisons with the originator iron sucrose (IS) are lacking.
BACKGROUND
The impact of switching from IS to ISS on anaemia and iron parameters was assessed in a sequential observational study comparing two periods of 27 weeks each in 75 stable haemodialysis (HD) patients receiving i.v. iron weekly and an i.v. erythropoiesis-stimulating agent (ESA) once every 2 weeks. Patients received IS in the first period (P1) and ISS in the second period (P2).
METHODS
Mean haemoglobin value was 11.78 ± 0.99 g/dL during P1 and 11.48 ± 0.98 g/dL during P2 (P = 0.01). Mean serum ferritin was similar for both treatment periods (P1, 534 ± 328 μg/L; P2, 495 ± 280 μg/L, P = 0.25) but mean TSAT during P1 (49.3 ± 10.9%) was significantly higher than during P2 (24.5 ± 9.4%, P <0.0001). The mean dose of i.v. iron per patient per week was 45.58 ± 32.55 mg in P1 and 61.36 ± 30.98 mg in P2 (+34.6%), while the mean ESA dose was 0.58 ± 0.52 and 0.66 ± 0.64 μg/kg/week, respectively (+13.8%). Total mean anaemia drug costs increased in P2 by 11.9% compared to P1.
RESULTS
The switch from the originator IS to an ISS preparation led to destabilization of a well-controlled population of HD patients and incurred an increase in total anaemia drug costs. Prospective comparative clinical studies are required to prove that ISS are as efficacious and safe as the originator i.v. IS.
CONCLUSIONS
[ "Anemia, Iron-Deficiency", "Cohort Studies", "Erythropoietin", "Female", "Ferric Compounds", "Ferric Oxide, Saccharated", "Ferritins", "Glucaric Acid", "Hematinics", "Hemoglobins", "Humans", "Injections, Intravenous", "Male", "Middle Aged", "Prognosis", "Renal Dialysis" ]
3193183
Introduction
Anaemia is a common comorbidity in chronic kidney disease (CKD). Its prevalence rises as renal function deteriorates, and it is estimated that up to 70% of patients are anaemic at the time of starting dialysis [1, 2]. Iron deficiency is an important contributor to renal anaemia, particularly in haemodialysis (HD) patients, as a result of chronic dialysis-related blood loss and increased iron demand due to the use of erythropoiesis-stimulating agents (ESA) [2]. Accordingly, intravenous (i.v.) iron supplementation is an important component of the therapeutic armamentarium for the management of anaemia in HD patients [3–5].
A number of iron carbohydrate compounds are available on the market; among these, the iron sucrose (IS) complex has a favourable safety profile when administered i.v. [6], as outlined in the European Best Practice Guidelines [3]. Iron carbohydrates such as IS are complex macromolecules, and their physico-chemical and biological properties depend closely on the manufacturing process, such that subtle structural modifications may affect the stability of the preparation [7, 8]. Stability is of paramount importance, since weakly bound iron may dissociate too rapidly and catalyse the generation of reactive oxygen species [7–10], which in turn induce oxidative stress and inflammation. An unstable iron complex could therefore have safety and efficacy implications, especially in chronically ill individuals such as HD patients, in whom dialysis and comorbidities already confer an increased burden of oxidative stress and inflammation [11, 12].
Several iron sucrose similar (ISS) preparations have been introduced for the treatment of iron deficiency anaemia in a number of countries worldwide [7, 11, 13] on the basis that they can be considered therapeutically equivalent to the originator i.v. IS (Vifor France SA; manufactured by Vifor International Inc., St. Gallen, Switzerland). However, recent analytical tests and studies in rat models have demonstrated differences between ISS preparations and the originator IS in terms of physico-chemical structure and their effects on inflammatory, haemodynamic and functional markers [7, 13]. To our knowledge, no head-to-head clinical study has yet compared ISS and IS in a patient population.
At the Centre Suzanne Levy, Paris, IS was used to correct iron deficiency anaemia in HD patients for 6 years prior to June 2009, at which point the centre switched to the ISS (Mylan SAS, Saint Priest, France; manufactured by Help SA Pharmaceuticals, Athens, Greece) for economic reasons. An observational retrospective and prospective analysis was undertaken to evaluate the impact of the switch from the originator IS to the ISS on haemoglobin (Hb) levels, iron parameters and dosing requirements for i.v. iron and ESA.
null
null
Results
[SUBTITLE] Patient population [SUBSECTION] Of 120 patients receiving dialysis at the centre, 75 patients were eligible for inclusion in the analysis. The demographics and clinical profile of the population were typical for adult recipients of chronic HD (Table 1) [16]. The mean number of dialysis sessions per patient was similar during P1 (75.1 ± 5.1) and P2 (75.1 ± 6.3, P = 0.97).
Table 1: Baseline demographic and clinical characteristics (n = 75). Continuous variables are shown as mean ± SD.
[SUBTITLE] Efficacy [SUBSECTION]
[SUBTITLE] Haemoglobin. [SUBSECTION] Mean Hb levels throughout the study are presented in Figure 1. Values during IS treatment were statistically significantly higher than during use of the ISS (P1, 11.78 ± 0.99 g/dL; P2, 11.48 ± 0.98 g/dL, P = 0.01). The difference was more pronounced when Hb values during P1 were compared with the first 3 months of P2 (11.78 ± 0.99 g/dL versus 11.39 ± 1.12 g/dL, P = 0.005). Mean Hb levels were similar during the first and last 3 months of P2 (11.39 ± 1.12 g/dL and 11.57 ± 1.09 g/dL, respectively, P = 0.15).
Figure 1: Mean haemoglobin levels over time by treatment period (g/dL).
The mixed model (random intercept) analysis showed that Hb levels were stable during the IS treatment period: the first slope (Hb measurements 1–14) was not statistically different from 0 (P = 0.105). The second slope (Hb measurements 14–15) was statistically different from 0 (P < 0.0001) with a coefficient of −0.4 g/dL per two observations, i.e. the switch from IS to the ISS was associated with a significant decrease in mean Hb of 0.4 g/dL (95% confidence interval −0.55 to −0.23). The third slope (Hb measurements 15–28) was also statistically different from 0 (P = 0.001), with an increase of 0.026 per two observations (0.34 g/dL over the period). The mixed model with random intercept and slopes also showed the second slope to be statistically different from 0 (P = 0.002) with a coefficient of −0.40 g/dL, but the third slope did not reach statistical significance (P = 0.167), although the trend was similar (0.020 per two observations).
The proportion of patients who had at least one Hb value within the target range (11.5–12.0 g/dL) was 85% during P1 and 79% during P2 (P = 0.25). The mean number of days spent outside the target Hb range was 78 days per patient (cumulative 5880 days) during P1 compared with 99 days per patient (cumulative 7470 days) during P2 (P = 0.02) (Figure 2).
Figure 2: Mean number of days spent outside the haemoglobin target range (11.5–12.0 g/dL) by treatment period.
[SUBTITLE] Iron parameters. [SUBSECTION] Mean serum ferritin was similar between the treatment periods (P1, 533.8 ± 327.5 μg/L; P2, 494.5 ± 279.9 μg/L, P = 0.25) but was statistically significantly higher in P1 than at the first evaluation performed in P2 (September; 457.7 ± 290.4 μg/L, P = 0.04) (Table 2). Mean TSAT during P1 (49.3 ± 10.9%) was statistically significantly higher than during P2 overall (24.5 ± 9.4%, P < 0.0001) or at Month 3 of P2 (23.3 ± 10.2%, P < 0.0001) (Table 2).
Table 2: Serum ferritin and TSAT during Period 1 and at Month 3 of Period 2 (September 2009).
[SUBTITLE] Intravenous iron consumption. [SUBSECTION] The mean dose of i.v. iron per patient was 1231 ± 879 mg in P1 and 1657 ± 866 mg in P2 (P = 0.001), a 34.6% increase in P2 versus P1. The corresponding weekly doses were 45.58 ± 32.55 mg per patient in P1 and 61.36 ± 30.98 mg in P2 (P = 0.001), and the cumulative dose for the total study population was 92 300 mg in P1 versus 124 250 mg in P2.
[SUBTITLE] ESA consumption. [SUBSECTION] The mean dose of ESA per patient was 980 ± 756 μg during P1 and 1103 ± 908 μg during P2 (P = 0.12), a 12.6% increase. The mean dose according to body weight was 0.58 ± 0.52 μg/kg/week in P1 and 0.66 ± 0.64 μg/kg/week in P2 (P = 0.13), a 13.8% increase. The cumulative dose of ESA for the total study population was 73 510 μg in P1 and 82 750 μg in P2 (+12.6%). The mean fortnightly i.v. iron dose and concomitant ESA dose throughout the study versus achieved Hb levels are shown in Figure 3.
Figure 3: Mean fortnightly dose of i.v. iron (mg) and ESA [darbepoetin-α (μg)] versus achieved haemoglobin levels (g/dL).
[SUBTITLE] Economic impact. [SUBSECTION] The cumulative cost of i.v. iron was 11 981€ during P1 and 12 674€ during P2. For ESA, the cumulative cost was 108 060€ during P1 and 121 643€ during P2. The total cost from the health care provider’s perspective (including both i.v. iron and ESA) was 120 040€ during P1 and 134 316€ during P2, a difference of 14 276€ (+11.9%) (Figure 4). The incremental cost of switching from IS to the ISS was estimated at about 368€ per patient per year (a short arithmetic check of these figures follows this field).
Figure 4: Anaemia drug expenditure.
[SUBTITLE] Safety [SUBSECTION] Mean values for serum phosphorus, Ca × P, 25-OH vitamin D and urea decreased significantly, and total bilirubin increased significantly, during P2 compared with P1 (Table 3). There were no significant differences between the two treatment periods regarding serum calcium, PTH, CRP, albumin, fibrinogen, γ-GT, alkaline phosphatase, ALT, AST, leucocytes, platelets, neutrophils or Kt/V. Mean CRP showed a non-significant increase from P1 [6.3 ± 6.9 mg/L (median 3.3 mg/L)] to the last 3 months of P2 [9.1 ± 14.6 mg/L (median 3.0 mg/L), P = 0.09] in 70 patients. A further CRP measurement performed in January 2010, on the last day of ISS use, gave a mean value of 13.7 mg/L.
Table 3: Laboratory values with statistically significant differences between Periods 1 and 2 (n = 75).
No adverse events related to the study drugs were observed and no hospitalization related to i.v. iron supplementation occurred. Adverse events that resulted in hospitalization were considered to be related to pre-existing conditions, including CKD. There was no significant difference in the proportion of hospitalized patients between P1 [12/75 (16.0%)] and P2 [11/75 (14.7%), P = 0.78]. Five patients were hospitalized during both P1 and P2; seven patients were hospitalized during P1 only and six during P2 only. The cumulative length of hospital stay was 191 days during P1 and 107 days during P2. The mean length of stay per hospitalized patient did not differ significantly between P1 and P2 (10.6 versus 5.9 days, P = 0.461, Wilcoxon test on paired data).
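As a quick check on the figures just reported, the short calculation below reproduces the quoted dose increases and the approximate 368€ yearly incremental cost per patient. The 189-day period length is an assumption derived from the stated 27 weeks per period.

```python
# Sanity check of the reported increases and the per-patient yearly incremental
# cost, using only numbers quoted in the results above.
iron_p1, iron_p2 = 45.58, 61.36        # mean i.v. iron dose, mg/patient/week
esa_p1, esa_p2 = 0.58, 0.66            # mean ESA dose, ug/kg/week
print(f"i.v. iron increase: {100 * (iron_p2 / iron_p1 - 1):.1f}%")  # ~34.6%
print(f"ESA increase:       {100 * (esa_p2 / esa_p1 - 1):.1f}%")    # ~13.8%

total_p1, total_p2 = 120_040, 134_316  # EUR, iron + ESA, 75 patients, 27 weeks each
patients, period_days = 75, 27 * 7     # 189 days per period (assumption: 27 x 7)
extra_per_patient_per_day = (total_p2 - total_p1) / patients / period_days
print(f"yearly incremental cost: ~{extra_per_patient_per_day * 365:.0f} EUR/patient")  # ~368
```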
null
null
[ "Study design", "Objectives", "Patients", "Evaluation", "Statistical analysis", "Patient population", "Efficacy", "Haemoglobin.", "Iron parameters.", "Intravenous iron consumption.", "ESA consumption.", "Economic impact.", "Safety" ]
[ "This was an observational, non-interventional, single-centre single-cohort study. The study compared two time periods of 27 weeks each: Period 1 (P1; 1 December 2008 to 7 June 2009) during which patients received IS and Period 2 (P2; 29 June 2009 to 3 January 2010) during which patients received the ISS. Data for 3 weeks in June 2009 were excluded as both medications were simultaneously available at the centre. The medical management of the patients did not otherwise change during the study period except for the switch from IS to ISS. The study was initiated in September 2009 in response to concerns about Hb levels following introduction of the ISS. Data up to this point were obtained retrospectively from the patients’ medical records; subsequent data were collected prospectively. As an observational study, it did not require approval of the relevant Ethics Committee according to French regulations.", "The primary objective was to compare the impact of the switch from IS to the ISS on Hb levels and iron parameters. Secondary objectives were to describe the level of consumption of i.v. iron and ESA drugs and to estimate anaemia drug expenditure for the two treatment periods.", "Patients undergoing chronic HD (three times a week) at the centre were included in the analysis if they underwent at least 60 dialysis sessions during both periods (a lower limit was allowed for patients attending for HD twice a week) and at least one prescription of i.v. iron during the study. IS (5-mL ampoules with 100 mg iron) or ISS (5-mL ampoules with 100 mg iron) were injected i.v. once a week at a dose of 25–100 mg iron. ESA (darbepoetin-α) was injected i.v. once every 2 weeks.", "Hb, serum calcium, serum phosphorus, leucocytes, platelets, neutrophils and urea (before the HD session) were assessed every 2 weeks. Transferrin saturation (TSAT), serum ferritin, alkaline phosphatase, parathyroid hormone (PTH), 25-OH vitamin D, albumin, urea (after the HD session), adequacy of dialysis (Kt/V), C-reactive protein (CRP), fibrinogen, gamma-glutamyl transpeptidase (γ-GT), aspartate aminotransferase (AST), alanine aminotransferase (ALT) and total bilirubin were measured every 3 months. All adverse events were reported according to applicable regulations and adverse events resulting in hospitalization were recorded.", "Between-period comparisons were performed using the paired Student’s t-test and the paired Wilcoxon test for continuous data, and the McNemar test (χ2) for qualitative data. A hierarchical (mixed-effects) model [14] was used to estimate trends in the mean values of Hb for the following three time periods:(1) baseline, i.e. the period of IS administration (Hb measurements 1–14),(2) immediately before and after the switch to ISS (between Hb measurements 14 and 15) and(3) after switch to ISS (measurements 15–28).\n(1) baseline, i.e. the period of IS administration (Hb measurements 1–14),\n(2) immediately before and after the switch to ISS (between Hb measurements 14 and 15) and\n(3) after switch to ISS (measurements 15–28).\nTo carry out the analysis, splines with three knots at 14, 15 and 28 weeks were used, which defined three slopes corresponding, respectively, to the trends in Hb levels for the three periods defined. 
Hence, the estimate of the first spline in the mixed-effects model corresponds to change in Hb during the baseline period; similarly, the estimate of the second spline represents the change in Hb values immediately following the introduction of the new product and the estimate of the third spline represents the change in Hb values thereafter (with increases in dose of i.v. iron and ESA).\nBoth random-intercept and random-coefficients (random intercept and slopes) models were fitted to account for the repeated measures of Hb within each patient.\nThe cost of i.v. iron medications and ESA over time was calculated for both periods. The unit cost of i.v. iron (public price) was 12.98€/ampoule (100 mg iron) for IS and 10.20€/ampoule (100 mg iron) for ISS. The cost of darbepoetin-α at the centre was 1.47€/μg. Results were extrapolated to obtain a yearly incremental cost per patient by multiplying the daily cost per patient by 365. A national drug expenditure impact was estimated based on the assumption that the total French dialysis population (estimated to be 37 000 patients in January 2009 [15]) would be switched from IS to the ISS. This extrapolation is only indicative as unit cost estimates were centre specific, especially concerning darbepoetin-α.", "Of 120 patients receiving dialysis at the centre, 75 patients were eligible for inclusion in the analysis. The demographics and clinical profile of the population were typical for adult recipients of chronic HD (Table 1) [16]. The mean number of dialysis sessions per patient performed was similar during P1 (75.1 ± 5.1) and P2 (75.1 ± 6.3, P = 0.97).\nBaseline demographic and clinical characteristics (n = 75)a\nContinuous variables are shown as mean ± SD.", "[SUBTITLE] Haemoglobin. [SUBSECTION] Mean Hb levels throughout the study are presented in Figure 1. Values during IS treatment were statistically significantly higher than during use of the ISS (P1, 11.78 ± 0.99 g/dL; P2, 11.48 ± 0.98 g/dL, P = 0.01). The difference was more pronounced when Hb values during P1 were compared to the first 3 months of P2 (11.78 ± 0.99 g/dL and 11.39 ± 1.12 g/dL, respectively, P = 0.005 versus P1). Mean Hb levels were similar during the first and last 3 months of P2 (11.39 ± 1.12 g/dL and 11.57 ± 1.09 g/dL, respectively, P = 0.15).\nMean haemoglobin levels over time by treatment period (grams per deciliter).\nThe mixed model (random intercept) analysis demonstrated that Hb levels were stable during the IS treatment period as the first slope (Hb measurements 1–14) was not statistically different from 0 (P = 0.105). The second slope (Hb measurements 14–15) was statistically different from 0 (P <0.0001) with a coefficient of −0.4 g/dL per two observations for the period, i.e. switch from IS to the ISS was associated with a significant decrease in mean Hb values of 0.4 g/dL (95% confidence interval −0.55 to −0.23). The third slope (Hb measurements 15–28) was also statistically different from 0 (P = 0.001) with an increase of 0.026 per two observations (0.34 g/dL). Results of the mixed model (random-intercept and slopes) also showed the second slope to be statistically different from 0 (P = 0.002) with a coefficient of −0.40 g/dL, but the third slope did not reach statistical significance (P = 0.167), although the trend was similar (0.020 per two observations).\nThe proportion of patients who had at least one Hb value within the target range (11.5–12.0 g/dL) was 85% during P1 and 79% during P2 (P = 0.25). 
The mean number of days spent outside the target Hb range was 78 days per patient (cumulative 5880 days) during P1 compared to a mean of 99 days per patient (cumulative 7470 days) during P2 (P = 0.02) (Figure 2).
Mean number of days spent outside the haemoglobin target range (11.5–12.0 g/dL) by treatment period.
[SUBTITLE] Iron parameters. [SUBSECTION] Mean serum ferritin was similar between the treatment periods (P1, 533.8 ± 327.5 μg/L; P2, 494.5 ± 279.9 μg/L, P = 0.25) but was statistically significantly higher in P1 versus the first evaluation performed in P2 (September) (457.7 ± 290.4 μg/L, P = 0.04) (Table 2). Mean TSAT during P1 (49.3 ± 10.9%) was statistically significantly higher than during P2 overall (24.5 ± 9.4%, P < 0.0001) or Month 3 of P2 (23.3 ± 10.2%, P < 0.0001) (Table 2).
Serum ferritin and TSAT during Period 1 and at Month 3 of Period 2 (September 2009)
[SUBTITLE] Intravenous iron consumption. [SUBSECTION] The mean dose of i.v. iron per patient was 1231 ± 879 mg in P1 and 1657 ± 866 mg in P2 (P = 0.001), representing a 34.6% increase in i.v. iron during P2 versus P1. Corresponding values for i.v. iron per patient per week were 45.58 ± 32.55 mg in P1 and 61.36 ± 30.98 mg in P2 (P = 0.001), and the cumulative dose for the total study population was 92 300 mg in P1 versus 124 250 mg in P2.
[SUBTITLE] ESA consumption. [SUBSECTION] The mean dose of ESA per patient was 980 ± 756 μg during P1 and 1103 ± 908 μg during P2 (P = 0.12), representing a 12.6% increase. The mean dose according to body weight was 0.58 ± 0.52 μg/kg/week in P1 and 0.66 ± 0.64 μg/kg/week during P2 (P = 0.13), representing a 13.8% increase. The cumulative dose of ESA in P1 was 73 510 μg for the total study population and 82 750 μg in P2 (+12.6%). The mean fortnightly i.v. iron dose and concomitant ESA dose throughout the study versus achieved Hb levels are shown in Figure 3.
Mean fortnightly dose of i.v. iron (milligrams) and ESA [darbepoetin-α (micrograms)] versus achieved haemoglobin levels (grams per deciliter).
[SUBTITLE] Economic impact. [SUBSECTION] The cumulative cost of i.v. iron was 11 981€ during P1 and 12 674€ during P2. For ESA, the cumulative cost was 108 060€ during P1 and 121 643€ during P2. The total cost from the health care provider’s perspective (including both i.v. iron and ESA) was 120 040€ and 134 316€ during P1 and P2, respectively, a difference of 14 276€ (+11.9%) (Figure 4). The incremental cost of switching from IS to the ISS for 1 year was estimated to be about 368€ per patient.
Anaemia drug expenditure.", "Mean Hb levels throughout the study are presented in Figure 1. Values during IS treatment were statistically significantly higher than during use of the ISS (P1, 11.78 ± 0.99 g/dL; P2, 11.48 ± 0.98 g/dL, P = 0.01). 
The difference was more pronounced when Hb values during P1 were compared to the first 3 months of P2 (11.78 ± 0.99 g/dL and 11.39 ± 1.12 g/dL, respectively, P = 0.005 versus P1). Mean Hb levels were similar during the first and last 3 months of P2 (11.39 ± 1.12 g/dL and 11.57 ± 1.09 g/dL, respectively, P = 0.15).\nMean haemoglobin levels over time by treatment period (grams per deciliter).\nThe mixed model (random intercept) analysis demonstrated that Hb levels were stable during the IS treatment period as the first slope (Hb measurements 1–14) was not statistically different from 0 (P = 0.105). The second slope (Hb measurements 14–15) was statistically different from 0 (P <0.0001) with a coefficient of −0.4 g/dL per two observations for the period, i.e. switch from IS to the ISS was associated with a significant decrease in mean Hb values of 0.4 g/dL (95% confidence interval −0.55 to −0.23). The third slope (Hb measurements 15–28) was also statistically different from 0 (P = 0.001) with an increase of 0.026 per two observations (0.34 g/dL). Results of the mixed model (random-intercept and slopes) also showed the second slope to be statistically different from 0 (P = 0.002) with a coefficient of −0.40 g/dL, but the third slope did not reach statistical significance (P = 0.167), although the trend was similar (0.020 per two observations).\nThe proportion of patients who had at least one Hb value within the target range (11.5–12.0 g/dL) was 85% during P1 and 79% during P2 (P = 0.25). The mean number of days spent outside the target Hb range was 78 days per patient (cumulative 5880 days) during P1 compared to a mean of 99 days per patient (cumulative 7470 days) during P2 (P = 0.02) (Figure 2).\nMean number of days spent outside the haemoglobin target range (11.5–12.0 g/dL) by treatment period.", "Mean serum ferritin was similar between the treatment periods (P1, 533.8 ± 327.5 μg/L; P2, 494.5 ± 279.9 μg/L, P = 0.25) but was statistically significantly higher in P1 versus the first evaluation performed in P2 (September) (457.7 ± 290.4 μg/L, P = 0.04) (Table 2). Mean TSAT during P1 (49.3 ± 10.9%) was statistically significantly higher than during P2 overall (24.5 ± 9.4%, P < 0.0001) or Month 3 of P2 (23.3 ± 10.2%, P < 0.0001) (Table 2).\nSerum ferritin and TSAT during Period 1 and at Month 3 of Period 2 (September 2009)", "The mean dose of i.v. iron per patient was 1231 ± 879 mg in P1 and 1657 ± 866 mg in P2 (P = 0.001), representing a 34.6% increase in i.v. iron during P2 versus P1. Corresponding values for i.v. iron per patient per week were 61.36 ± 30.98 mg in P2 and 45.58 ± 32.55 mg in P1 (P = 0.001), and the cumulative dose for the total study population was 92 300 mg in P1 versus 124 250 mg in P2.", "The mean dose of ESA per patient was 980 ± 756 and 1103 ± 908 μg, respectively, during P1 and P2 (P = 0.12) representing a 12.6% increase. The mean dose according to body weight was 0.58 ± 0.52 μg/kg/week in P1 and 0.66 ± 0.64 μg/kg/week during P2 (P = 0.13) representing a 13.8% increase. The cumulative dose of ESA in P1 was 73 510 μg for the total study population and 82 750 μg in P2 (+12.6%).\nThe mean fortnightly i.v. iron dose and concomitant ESA dose throughout the study versus achieved Hb levels are shown in Figure 3.\nMean fortnightly dose of i.v. iron (milligrams) and ESA [darbepoetin-α (micrograms)] versus achieved haemoglobin levels (grams per deciliter).", "The cumulative cost of i.v. iron was 11 981€ during P1 and 12 674€ during P2. 
For ESA, the cumulative cost was 108 060€ during P1 and 121 643€ during P2. The total cost from the health care provider’s perspective (including both i.v. iron and ESA) was 120 040€ and 134 316€ during P1 and P2, respectively, a difference of 14 276€ (+11.9% increase) (Figure 4). The incremental cost of switching from IS to the ISS for 1 year was estimated to be about 368€ per patient.\nAnaemia drug expenditure.", "Mean values for serum phosphorus, Ca × P, 25-OH vitamin D and urea decreased significantly, and total bilirubin increased significantly, during P2 compared to P1 (Table 3). There were no significant differences between the two treatment periods regarding serum calcium, PTH, CRP, albumin, fibrinogen, γ-GT, alkaline phosphatase, ALT, AST, leucocytes, platelets, neutrophils or Kt/V. Mean CRP showed a non-significant increase from P1 [6.3 ± 6.9 mg/L (median 3.3 mg/L)] to the last 3 months of P2 [9.1 ± 14.6 (median 3.0 mg/L), P = 0.09] in 70 patients. A further CRP measurement was performed in January 2010 on the last day of use of ISS and the mean value was 13.7 mg/L.\nLaboratory values with statistically significant differences between Periods 1 and 2 (n = 75)\nNo adverse events were observed in relation to the study drugs during the study and no hospitalization related to i.v. iron supplementation occurred. Adverse events that resulted in hospitalization were considered to be related to the pre-existing conditions including CKD. There was no significant difference concerning the proportion of hospitalized patients between P1 [12/75 (16.0%)] and P2 [11/75 (14.7%), P = 0.78]. Five patients were hospitalized twice during P1 and P2. Seven patients were hospitalized during P1 only and six patients were hospitalized during P2 only. The cumulative length of hospital stay was 191 days during P1 and 107 days during P2. The mean length of stay per hospitalized patient was similar during P1 and P2 (10.6 days versus 5.9 days, respectively, P = 0.461, Wilcoxon test on paired data)." ]
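The hierarchical model described in the statistical-analysis entry of the array above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: the piecewise-linear spline basis with knots at measurements 14 and 15, the column names (patient_id, t, hb) and the input file are assumptions, and statsmodels' MixedLM is used as one possible way to fit the random-intercept (and optionally random-slopes) model.

```python
# Illustrative sketch of a random-intercept mixed model with three linear
# spline segments (baseline 1-14, switch 14-15, post-switch 15-28).
# Column names and the CSV file are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def add_spline_basis(df: pd.DataFrame) -> pd.DataFrame:
    """Add piecewise-linear terms for the Hb measurement index t (1..28)."""
    t = df["t"].to_numpy(dtype=float)
    out = df.copy()
    out["s1"] = np.minimum(t, 14)         # slope over measurements 1-14 (baseline)
    out["s2"] = np.clip(t - 14, 0, 1)     # change between measurements 14 and 15 (switch)
    out["s3"] = np.clip(t - 15, 0, None)  # slope over measurements 15-28 (post-switch)
    return out

# df = add_spline_basis(pd.read_csv("hb_measurements.csv"))  # hypothetical input
# # Random intercept per patient:
# m1 = smf.mixedlm("hb ~ s1 + s2 + s3", df, groups=df["patient_id"]).fit()
# # Random intercept and slopes:
# m2 = smf.mixedlm("hb ~ s1 + s2 + s3", df, groups=df["patient_id"],
#                  re_formula="~s1 + s2 + s3").fit()
# print(m1.summary())  # the fixed-effect coefficient on s2 corresponds to the
#                      # Hb change at the switch (reported as about -0.4 g/dL)
```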
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Study design", "Objectives", "Patients", "Evaluation", "Statistical analysis", "Results", "Patient population", "Efficacy", "Haemoglobin.", "Iron parameters.", "Intravenous iron consumption.", "ESA consumption.", "Economic impact.", "Safety", "Discussion" ]
[ "Anaemia is a common comorbidity in chronic kidney disease (CKD). Its prevalence rises as renal function deteriorates, and it is estimated that up to 70% of patients are anaemic at the time of starting dialysis [1, 2]. Iron deficiency is an important contributor to renal anaemia, particularly in haemodialysis (HD) patients as a result of chronic dialysis-related blood loss and increased iron demand due to use of erythropoiesis-stimulating agents (ESA) [2]. Accordingly, intravenous (i.v.) iron supplementation is an important component in the therapeutic armamentarium for the management of anaemia in HD patients [3–5].\nThere are a number of iron carbohydrate compounds available on the market and among these compounds iron sucrose (IS) complex has a favourable safety profile when administered i.v. [6] as outlined in the European Best Practice Guidelines [3]. Iron carbohydrates such as IS are complex macromolecules, and their physico-chemical and biological properties are closely dependent on the manufacturing process such that subtle structural modifications may affect the stability of the preparation [7, 8]. Stability is of paramount importance, since weakly bound iron may dissociate too rapidly and catalyse the generation of reactive oxygen species [7–10] that in turn induce oxidative stress and inflammation. Potentially, an unstable iron complex could have safety and efficacy implications, especially in chronically ill individuals such as HD patients in whom dialysis and comorbidities already confer an increased burden of oxidative stress and inflammation [11, 12].\nSeveral iron sucrose similar (ISS) preparations have been introduced for the treatment of iron deficiency anaemia in a number of countries worldwide [7, 11, 13] on the basis that they can be considered therapeutically equivalent to the originator i.v. IS (Vifor France SA; manufactured by Vifor International Inc., St. Gallen, Switzerland). However, recent analytical tests and studies in rat models have demonstrated differences between ISS preparations and the originator IS in terms of physico-chemical structure and their effect on inflammatory, haemodynamic and functional markers [7, 13]. To our knowledge, no head-to-head clinical study has yet compared ISS and IS in a patient population.\nAt the Centre Suzanne Levy, Paris, IS was used to correct iron deficiency anaemia in HD patients for 6 years prior to June 2009 at which point the centre switched to the ISS (Mylan SAS, Saint Priest, France manufactured by Help SA Pharmaceuticals, Athens, Greece) due to economic reasons. An observational retrospective and prospective analysis was undertaken to evaluate the impact of the switch from the originator IS to the ISS on haemoglobin (Hb) levels, iron parameters and dosing requirements for i.v. iron and ESA.", "[SUBTITLE] Study design [SUBSECTION] This was an observational, non-interventional, single-centre single-cohort study. The study compared two time periods of 27 weeks each: Period 1 (P1; 1 December 2008 to 7 June 2009) during which patients received IS and Period 2 (P2; 29 June 2009 to 3 January 2010) during which patients received the ISS. Data for 3 weeks in June 2009 were excluded as both medications were simultaneously available at the centre. The medical management of the patients did not otherwise change during the study period except for the switch from IS to ISS. The study was initiated in September 2009 in response to concerns about Hb levels following introduction of the ISS. 
Data up to this point were obtained retrospectively from the patients’ medical records; subsequent data were collected prospectively. As an observational study, it did not require approval of the relevant Ethics Committee according to French regulations.\nThis was an observational, non-interventional, single-centre single-cohort study. The study compared two time periods of 27 weeks each: Period 1 (P1; 1 December 2008 to 7 June 2009) during which patients received IS and Period 2 (P2; 29 June 2009 to 3 January 2010) during which patients received the ISS. Data for 3 weeks in June 2009 were excluded as both medications were simultaneously available at the centre. The medical management of the patients did not otherwise change during the study period except for the switch from IS to ISS. The study was initiated in September 2009 in response to concerns about Hb levels following introduction of the ISS. Data up to this point were obtained retrospectively from the patients’ medical records; subsequent data were collected prospectively. As an observational study, it did not require approval of the relevant Ethics Committee according to French regulations.\n[SUBTITLE] Objectives [SUBSECTION] The primary objective was to compare the impact of the switch from IS to the ISS on Hb levels and iron parameters. Secondary objectives were to describe the level of consumption of i.v. iron and ESA drugs and to estimate anaemia drug expenditure for the two treatment periods.\nThe primary objective was to compare the impact of the switch from IS to the ISS on Hb levels and iron parameters. Secondary objectives were to describe the level of consumption of i.v. iron and ESA drugs and to estimate anaemia drug expenditure for the two treatment periods.\n[SUBTITLE] Patients [SUBSECTION] Patients undergoing chronic HD (three times a week) at the centre were included in the analysis if they underwent at least 60 dialysis sessions during both periods (a lower limit was allowed for patients attending for HD twice a week) and at least one prescription of i.v. iron during the study. IS (5-mL ampoules with 100 mg iron) or ISS (5-mL ampoules with 100 mg iron) were injected i.v. once a week at a dose of 25–100 mg iron. ESA (darbepoetin-α) was injected i.v. once every 2 weeks.\nPatients undergoing chronic HD (three times a week) at the centre were included in the analysis if they underwent at least 60 dialysis sessions during both periods (a lower limit was allowed for patients attending for HD twice a week) and at least one prescription of i.v. iron during the study. IS (5-mL ampoules with 100 mg iron) or ISS (5-mL ampoules with 100 mg iron) were injected i.v. once a week at a dose of 25–100 mg iron. ESA (darbepoetin-α) was injected i.v. once every 2 weeks.\n[SUBTITLE] Evaluation [SUBSECTION] Hb, serum calcium, serum phosphorus, leucocytes, platelets, neutrophils and urea (before the HD session) were assessed every 2 weeks. Transferrin saturation (TSAT), serum ferritin, alkaline phosphatase, parathyroid hormone (PTH), 25-OH vitamin D, albumin, urea (after the HD session), adequacy of dialysis (Kt/V), C-reactive protein (CRP), fibrinogen, gamma-glutamyl transpeptidase (γ-GT), aspartate aminotransferase (AST), alanine aminotransferase (ALT) and total bilirubin were measured every 3 months. 
All adverse events were reported according to applicable regulations and adverse events resulting in hospitalization were recorded.\nHb, serum calcium, serum phosphorus, leucocytes, platelets, neutrophils and urea (before the HD session) were assessed every 2 weeks. Transferrin saturation (TSAT), serum ferritin, alkaline phosphatase, parathyroid hormone (PTH), 25-OH vitamin D, albumin, urea (after the HD session), adequacy of dialysis (Kt/V), C-reactive protein (CRP), fibrinogen, gamma-glutamyl transpeptidase (γ-GT), aspartate aminotransferase (AST), alanine aminotransferase (ALT) and total bilirubin were measured every 3 months. All adverse events were reported according to applicable regulations and adverse events resulting in hospitalization were recorded.\n[SUBTITLE] Statistical analysis [SUBSECTION] Between-period comparisons were performed using the paired Student’s t-test and the paired Wilcoxon test for continuous data, and the McNemar test (χ2) for qualitative data. A hierarchical (mixed-effects) model [14] was used to estimate trends in the mean values of Hb for the following three time periods:(1) baseline, i.e. the period of IS administration (Hb measurements 1–14),(2) immediately before and after the switch to ISS (between Hb measurements 14 and 15) and(3) after switch to ISS (measurements 15–28).\n(1) baseline, i.e. the period of IS administration (Hb measurements 1–14),\n(2) immediately before and after the switch to ISS (between Hb measurements 14 and 15) and\n(3) after switch to ISS (measurements 15–28).\nTo carry out the analysis, splines with three knots at 14, 15 and 28 weeks were used, which defined three slopes corresponding, respectively, to the trends in Hb levels for the three periods defined. Hence, the estimate of the first spline in the mixed-effects model corresponds to change in Hb during the baseline period; similarly, the estimate of the second spline represents the change in Hb values immediately following the introduction of the new product and the estimate of the third spline represents the change in Hb values thereafter (with increases in dose of i.v. iron and ESA).\nBoth random-intercept and random-coefficients (random intercept and slopes) models were fitted to account for the repeated measures of Hb within each patient.\nThe cost of i.v. iron medications and ESA over time was calculated for both periods. The unit cost of i.v. iron (public price) was 12.98€/ampoule (100 mg iron) for IS and 10.20€/ampoule (100 mg iron) for ISS. The cost of darbepoetin-α at the centre was 1.47€/μg. Results were extrapolated to obtain a yearly incremental cost per patient by multiplying the daily cost per patient by 365. A national drug expenditure impact was estimated based on the assumption that the total French dialysis population (estimated to be 37 000 patients in January 2009 [15]) would be switched from IS to the ISS. This extrapolation is only indicative as unit cost estimates were centre specific, especially concerning darbepoetin-α.\nBetween-period comparisons were performed using the paired Student’s t-test and the paired Wilcoxon test for continuous data, and the McNemar test (χ2) for qualitative data. A hierarchical (mixed-effects) model [14] was used to estimate trends in the mean values of Hb for the following three time periods:(1) baseline, i.e. the period of IS administration (Hb measurements 1–14),(2) immediately before and after the switch to ISS (between Hb measurements 14 and 15) and(3) after switch to ISS (measurements 15–28).\n(1) baseline, i.e. 
Results

Patient population

Of 120 patients receiving dialysis at the centre, 75 patients were eligible for inclusion in the analysis. The demographics and clinical profile of the population were typical for adult recipients of chronic HD (Table 1) [16]. The mean number of dialysis sessions per patient was similar during P1 (75.1 ± 5.1) and P2 (75.1 ± 6.3, P = 0.97).

Table 1. Baseline demographic and clinical characteristics (n = 75). Continuous variables are shown as mean ± SD.

Efficacy

Haemoglobin

Mean Hb levels throughout the study are presented in Figure 1. Values during IS treatment were statistically significantly higher than during use of the ISS (P1, 11.78 ± 0.99 g/dL; P2, 11.48 ± 0.98 g/dL, P = 0.01). The difference was more pronounced when Hb values during P1 were compared with the first 3 months of P2 (11.78 ± 0.99 g/dL and 11.39 ± 1.12 g/dL, respectively, P = 0.005 versus P1). Mean Hb levels were similar during the first and last 3 months of P2 (11.39 ± 1.12 g/dL and 11.57 ± 1.09 g/dL, respectively, P = 0.15).

Figure 1. Mean haemoglobin levels over time by treatment period (g/dL).

The mixed model (random intercept) analysis demonstrated that Hb levels were stable during the IS treatment period, as the first slope (Hb measurements 1–14) was not statistically different from 0 (P = 0.105). The second slope (Hb measurements 14–15) was statistically different from 0 (P < 0.0001) with a coefficient of −0.4 g/dL per two observations for the period, i.e. the switch from IS to the ISS was associated with a significant decrease in mean Hb of 0.4 g/dL (95% confidence interval −0.55 to −0.23). The third slope (Hb measurements 15–28) was also statistically different from 0 (P = 0.001), with an increase of 0.026 per two observations (0.34 g/dL over the period). Results of the mixed model with random intercept and slopes also showed the second slope to be statistically different from 0 (P = 0.002) with a coefficient of −0.40 g/dL, but the third slope did not reach statistical significance (P = 0.167), although the trend was similar (0.020 per two observations).

The proportion of patients who had at least one Hb value within the target range (11.5–12.0 g/dL) was 85% during P1 and 79% during P2 (P = 0.25). The mean number of days spent outside the target Hb range was 78 days per patient (cumulative 5880 days) during P1 compared with a mean of 99 days per patient (cumulative 7470 days) during P2 (P = 0.02) (Figure 2).

Figure 2. Mean number of days spent outside the haemoglobin target range (11.5–12.0 g/dL) by treatment period.
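The report does not describe the exact algorithm behind the "days outside the target range" figures. Purely as an illustration, the sketch below shows one way such a metric could be derived from fortnightly Hb measurements, assuming each measured value is carried forward over the 14 days it covers; the input file and column names are hypothetical.

```python
# Hypothetical reconstruction of a "days outside target range" metric from
# fortnightly Hb values; the 14-day carry-forward window is an assumption.
import pandas as pd

TARGET_LOW, TARGET_HIGH = 11.5, 12.0   # g/dL target band
DAYS_PER_MEASUREMENT = 14              # Hb measured every 2 weeks

def days_outside_target(hb_values):
    """Count days outside the band, carrying each value forward 14 days."""
    return sum(DAYS_PER_MEASUREMENT
               for hb in hb_values
               if not (TARGET_LOW <= hb <= TARGET_HIGH))

df = pd.read_csv("hb_measurements.csv")          # hypothetical input file
per_patient = df.groupby(["patient", "period"])["hb"].apply(days_outside_target)
print(per_patient.groupby(level="period").mean())  # mean days per patient, by period
```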
Iron parameters

Mean serum ferritin was similar between the treatment periods (P1, 533.8 ± 327.5 μg/L; P2, 494.5 ± 279.9 μg/L, P = 0.25) but was statistically significantly higher in P1 than at the first evaluation performed in P2 (September; 457.7 ± 290.4 μg/L, P = 0.04) (Table 2). Mean TSAT during P1 (49.3 ± 10.9%) was statistically significantly higher than during P2 overall (24.5 ± 9.4%, P < 0.0001) or at Month 3 of P2 (23.3 ± 10.2%, P < 0.0001) (Table 2).

Table 2. Serum ferritin and TSAT during Period 1 and at Month 3 of Period 2 (September 2009).

Intravenous iron consumption

The mean dose of i.v. iron per patient was 1231 ± 879 mg in P1 and 1657 ± 866 mg in P2 (P = 0.001), representing a 34.6% increase in i.v. iron during P2 versus P1. The corresponding values for i.v. iron per patient per week were 45.58 ± 32.55 mg in P1 and 61.36 ± 30.98 mg in P2 (P = 0.001), and the cumulative dose for the total study population was 92 300 mg in P1 versus 124 250 mg in P2.

ESA consumption

The mean dose of ESA per patient was 980 ± 756 μg during P1 and 1103 ± 908 μg during P2 (P = 0.12), representing a 12.6% increase. The mean dose according to body weight was 0.58 ± 0.52 μg/kg/week in P1 and 0.66 ± 0.64 μg/kg/week in P2 (P = 0.13), representing a 13.8% increase. The cumulative dose of ESA for the total study population was 73 510 μg in P1 and 82 750 μg in P2 (+12.6%). The mean fortnightly i.v. iron dose and concomitant ESA dose throughout the study versus achieved Hb levels are shown in Figure 3.

Figure 3. Mean fortnightly dose of i.v. iron (mg) and ESA [darbepoetin-α (μg)] versus achieved haemoglobin levels (g/dL).

Economic impact

The cumulative cost of i.v. iron was 11 981€ during P1 and 12 674€ during P2. For ESA, the cumulative cost was 108 060€ during P1 and 121 643€ during P2. The total cost from the health care provider's perspective (including both i.v. iron and ESA) was 120 040€ in P1 and 134 316€ in P2, a difference of 14 276€ (+11.9%) (Figure 4). The incremental cost of switching from IS to the ISS for 1 year was estimated to be about 368€ per patient.

Figure 4. Anaemia drug expenditure.
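As a rough cross-check, the figures above can be reproduced from the cumulative doses reported in this section and the unit prices given in the Methods. The snippet below is purely illustrative arithmetic and is not part of the study analysis; all input values are taken from the text.

```python
# Reproducing the reported anaemia drug costs from cumulative doses and unit prices
# (illustrative arithmetic only; all inputs are the values quoted in the text).
IRON_PRICE_PER_AMPOULE = {"P1": 12.98, "P2": 10.20}   # € per 100 mg ampoule (IS, ISS)
ESA_PRICE_PER_UG = 1.47                               # € per μg darbepoetin-α

iron_mg = {"P1": 92_300, "P2": 124_250}               # cumulative i.v. iron, whole cohort
esa_ug = {"P1": 73_510, "P2": 82_750}                 # cumulative darbepoetin-α, whole cohort

totals = {p: iron_mg[p] / 100 * IRON_PRICE_PER_AMPOULE[p] + esa_ug[p] * ESA_PRICE_PER_UG
          for p in ("P1", "P2")}
print(totals)                                         # ≈ {'P1': 120 040, 'P2': 134 316}

extra = totals["P2"] - totals["P1"]                   # ≈ 14 276 € over 27 weeks, 75 patients
per_patient_per_year = extra / 75 / (27 * 7) * 365    # ≈ 368 € per patient per year
national_per_year = per_patient_per_year * 37_000     # ≈ 13.6 million € per year
print(round(per_patient_per_year), round(national_per_year / 1e6, 1))
```

The last two lines reproduce the ∼368€ per patient per year figure quoted above and the ∼13.6 million €/year national estimate discussed later in the text.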
Safety

Mean values for serum phosphorus, Ca × P, 25-OH vitamin D and urea decreased significantly, and total bilirubin increased significantly, during P2 compared with P1 (Table 3). There were no significant differences between the two treatment periods regarding serum calcium, PTH, CRP, albumin, fibrinogen, γ-GT, alkaline phosphatase, ALT, AST, leucocytes, platelets, neutrophils or Kt/V. Mean CRP showed a non-significant increase from P1 [6.3 ± 6.9 mg/L (median 3.3 mg/L)] to the last 3 months of P2 [9.1 ± 14.6 mg/L (median 3.0 mg/L), P = 0.09] in 70 patients. A further CRP measurement was performed in January 2010, on the last day of use of ISS, and the mean value was 13.7 mg/L.

Table 3. Laboratory values with statistically significant differences between Periods 1 and 2 (n = 75).

No adverse events related to the study drugs were observed during the study and no hospitalization related to i.v. iron supplementation occurred. Adverse events that resulted in hospitalization were considered to be related to pre-existing conditions, including CKD. There was no significant difference in the proportion of hospitalized patients between P1 [12/75 (16.0%)] and P2 [11/75 (14.7%), P = 0.78]. Five patients were hospitalized during both P1 and P2, seven patients were hospitalized during P1 only and six patients were hospitalized during P2 only. The cumulative length of hospital stay was 191 days during P1 and 107 days during P2. The mean length of stay per hospitalized patient did not differ significantly between P1 and P2 (10.6 days versus 5.9 days, respectively, P = 0.461, Wilcoxon test on paired data).
Discussion

This is the first study to evaluate the clinical impact of switching from the originator i.v. IS to an ISS. In this population of stable HD patients, the switch to an ISS was associated with a significant reduction in Hb level and reduced iron indices despite an increase in i.v. iron dose and ESA dose, and an overall increase in drug expenditure.

The decrease in Hb level during ISS therapy, coupled with a shorter proportion of time spent within the Hb target range, may have clinical implications for this high-risk population. Cardiac complications and impaired quality of life are well-documented consequences of poor anaemia control in the CKD and HD populations [4, 5, 17, 18]. Moreover, there was a marked and significant increase in the dose of iron required after the switch, accompanied by a numerical increase in ESA dose. Long-term exposure to higher doses of drugs such as i.v. iron and ESA increases the risk of toxicity [19] and is associated with increased oxidative stress and inflammation [9] that can exacerbate the existing pro-oxidant effects of co-morbidities, uraemia and the dialysis procedure [20].

The marked reduction in Hb that was already apparent at the start of the ISS treatment period reflects the immediate effect of switching the i.v. iron preparation. Towards the end of the 20-day interval in June 2009 when both the IS and ISS preparations were used, stock levels available at the centre suggest that two-thirds of patients were likely to have received ISS. Intravenous iron treatment exerts a rapid effect, with the administered iron being incorporated into red blood cells within days [21], such that the switch to ISS appears to have quickly led to lower Hb levels. We are not aware of any other factors that could have accounted for the change in Hb level from the end of Period 1 to the start of Period 2 in this single cohort of patients, for whom management was otherwise consistent throughout the study.
The discrepancy in Hb levels seen during administration of IS and ISS may be due to variations in the stability of the complexes and the bioavailability of iron from the two preparations. It can be hypothesized that the kinetics of iron dissociation differ between the two complexes, affecting the pattern of iron distribution and storage. This is supported by the halving of the mean TSAT level during ISS therapy compared with the original IS, which occurred despite regular administration of i.v. iron and an increased iron dose. The reduction in TSAT indicates that a lower proportion of iron was available for erythropoiesis. If iron is not utilized for erythropoiesis, it may be sequestered in other compartments of the body from which it is not available for erythropoiesis and could potentially cause harmful effects, which would be consistent with the increased levels of bilirubin and CRP observed during the ISS treatment period. Two recent studies found significant iron deposits in the liver, kidney and heart tissue of rats following administration of ISS preparations compared with the originator i.v. IS [7, 13]. Moreover, proinflammatory cytokine levels such as interleukin-6 and tumor necrosis factor-α were higher in the liver, heart and kidney tissues of rats treated with ISS [7, 13], possibly due to iron accumulation in the wrong compartment. These results may indicate reduced stability of ISS complexes and increased oxidative stress and inflammation secondary to the presence of weakly bound iron [7, 8, 13]. Further studies are required to examine this issue in more detail.

Both preparations showed a good safety profile, with no adverse events related to either study drug being observed. A longer term prospective controlled study is required to evaluate the impact of the ISS on clinical safety parameters and to investigate its interchangeability with the originator IS.

The lower levels of TSAT and serum ferritin during ISS therapy, particularly during the first 3 months after the switch, are likely to have contributed to increased ESA dosing, with economic implications. Besarab et al. [22] demonstrated that maintenance of TSAT between 30 and 50% through maintenance i.v. iron administration improves anaemia and reduces ESA dose requirements. Economically, in contrast to the expected reduction in drug cost due to the purchase of the less expensive ISS (−21%), total anaemia drug expenditure increased by 11.9% owing to higher dosing of ESA and i.v. iron. This resulted in an absolute drug expenditure increase of ∼368€/year per patient, equivalent to ∼13.6 million €/year if extrapolated to the French HD population.

In conclusion, the switch from the originator IS to an ISS preparation led to destabilization of a well-controlled population of HD patients, with a significant decrease in Hb levels and iron indices coupled with a need to increase anaemia drug consumption. The economic rationale for the switch to a less expensive iron preparation was negated by the increase in total drug costs. Prospective comparative studies are required to confirm the efficacy and safety of ISS and their interchangeability with the originator i.v. IS.
[ "intro", "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion" ]
[ "anaemia", "cost analysis", "haemodialysis", "intravenous iron", "iron sucrose" ]
FORM: an Australian method for formulating and grading recommendations in evidence-based clinical guidelines.
21356039
Clinical practice guidelines are an important element of evidence-based practice. Considering an often complicated body of evidence can be problematic for guideline developers, who in the past may have resorted to using levels of evidence of individual studies as a quasi-indicator for the strength of a recommendation. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.
BACKGROUND
In recognition of the complexities of clinical guidelines and the multiple factors that influence choice in health care, a working group of experienced guideline consultants was formed under the auspices of the Australian National Health and Medical Research Council (NHMRC) to produce and pilot a framework to formulate and grade guideline recommendations. Consultation with national and international experts and extensive piloting informed the process.
METHODS
The FORM framework consists of five components (evidence base, consistency, clinical impact, generalisability and applicability) which are used by guideline developers to structure their decisions on how to convey the strength of a recommendation through wording and grading via a considered judgement form. In parallel (but separate from the grading process) guideline developers are asked to consider implementation implications for each recommendation.
RESULTS
The framework has now been widely adopted by Australian guideline developers who find it to be a logical and intuitive way to formulate and grade recommendations in clinical practice guidelines.
CONCLUSIONS
[ "Australia", "Evidence-Based Medicine", "Humans", "National Health Programs", "Practice Guidelines as Topic" ]
3053308
null
null
Methods
In 2004, the NHMRC commissioned a review of existing frameworks for assessing evidence internationally [7]. This internal report provided a resource for a working party (comprising GAR consultants and NHMRC personnel - see Appendix 1 for members) to review existing practice, design and/or adapt a framework for grading a body of evidence and pilot this process with Australian guideline developers. The report identified nine possible systems for use in developing clinical practice guidelines. Of these, three were considered to be most useful for informing the development of an Australian guideline recommendation process. These frameworks were the Scottish Intercollegiate Guidelines Network (SIGN) system and considered judgement statement (SIGN50, revised 2008) [8]; the Strength of Recommendation Taxonomy (SORT) [9]; and the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) [10]. These systems were discussed at a face-to-face meeting of the working party with respect to their advantages and disadvantages and their compatibility with the existing advice in the NHMRC 'Guidelines for Guideline Development' handbooks. A consensus was reached about how these frameworks could be adapted in the new process. From the three systems, we combined elements to achieve our objectives, which were: to have a system that matched and complemented the current NHMRC evidence dimensions and documents as closely as possible; simplicity and clarity of approach; and to provide transparent methods of formulating and documenting judgments to give a graded set of recommendations.

The working party drafted a new framework for grading recommendations and this was refined by extensive email consultation and iteration within the group. The resulting draft framework was piloted by GAR consultants working with guideline developers between 2005 and 2009. There were five main methods to gather feedback:

• Known experts in the international guideline field were approached by NHMRC directly for comment on the draft system - this was a formal request and responses were semi-structured in that the experts were free to review in their own style.
• Key evidence-based assessment organisations in Australia and New Zealand were invited to register feedback on the website where the system was posted.
• All guideline development groups working within the NHMRC endorsement framework during this period used FORM under the guidance of the GARs - they were all invited to offer feedback during and after the process.
• The draft process was presented at key conferences and interactive workshops (e.g. the International Cochrane Colloquium [11]).
• The website was open for the 5 years (passive seeking) and included a structured feedback form.

Following this initial period of consultation (up until 2007) the FORM framework was further refined, taking account of the feedback received, and the public consultation period was extended to June 2009. During the development, trialing and refinement period from 2004 to 2009, the international guideline community continued to debate and evolve other systems of guideline production - these developments were monitored and helped to inform the Australian process. The revised version of FORM was subsequently endorsed by the Council of the NHMRC.
null
null
null
null
[ "Background", "The concept of a body of evidence", "Results", "Key components of FORM", "1. Evidence base", "2. Consistency", "3. Clinical impact", "4. Generalisability", "5. Applicability", "The FORM Matrix and Evidence Statement Form", "Feedback, piloting and users' experiences", "Discussion", "Limitations", "Conclusion", "Competing interests", "Authors' contributions", "Appendix 1", "Pre-publication history" ]
[ "Best practice in health care should be guided by the results of research on the safety and effectiveness of different courses of clinical action. This evidence needs to be assembled, justified and presented in the form of health advice for multiple stakeholders including health professionals, decision makers and consumers of health care. Clinical practice guidelines are recognised as one of the best ways to present recommended courses of action based on research evidence, although recommendations are often presented inconsistently [1]. Where such evidence is not available, guidelines may use consensus-based practice points and/or identify areas requiring further research. Both format and content can adversely affect the adoption and integration of guidelines into clinical practice [2].\nThe National Health and Medical Research Council (NHMRC) of Australia has been a world leader in developing and supporting the development of evidence-based health advice, including clinical practice guidelines. As early as 1999, the NHMRC commissioned and published 'Guidelines for Guideline Development' [3], anticipating the need for a comprehensive set of resources to help guideline developers produce high quality guidelines. This was followed by a more detailed series of handbooks on different aspects of finding and reviewing clinical research [4].\nAustralian guideline developers must comply with NHMRC standards in order to gain NHMRC approval. These standards (such as rigorous evidence-based methods, multidisciplinary panels and public consultation processes) have resulted in NHMRC approved guidelines being of higher quality than those developed outside NHMRC processes [5].\nBy 2004, it had become clear that the NHMRC standards required expansion and revision in response to the rapid growth and diversification of clinical practice guidelines in Australia and elsewhere. There were two main areas where a need for revision was identified. The first was the need to develop a set of levels (or hierarchy) of evidence which would cover the different individual study designs used to address the different types of questions formulated by guideline development panels. This work (covering interventions, diagnostic accuracy, prognosis, aetiology and screening) is outlined in Merlin et al [6]. The second area was the need to develop a new system, or adaptation of an existing system, of formulating and grading recommendations for clinical practice guidelines that incorporated an assessment of the 'body of evidence'.\n[SUBTITLE] The concept of a body of evidence [SUBSECTION] Many guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). 
Recommendations based on the body of evidence could then be graded according to the degree of confidence that implementing the suggested course of action would lead to improved patient health outcomes.\nIn recognition of this need, and in response to requests from methodological experts that consult for the NHMRC on guideline development (Guidelines Assessment Register [GAR] consultants) (see Appendix 1), the NHMRC undertook to revise and update its methodological approaches. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.\nMany guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). Recommendations based on the body of evidence could then be graded according to the degree of confidence that implementing the suggested course of action would lead to improved patient health outcomes.\nIn recognition of this need, and in response to requests from methodological experts that consult for the NHMRC on guideline development (Guidelines Assessment Register [GAR] consultants) (see Appendix 1), the NHMRC undertook to revise and update its methodological approaches. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.", "Many guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). 
The new FORM framework was loosely based on the SIGN considered judgement form [8]. It provides guideline developers with a structured process for considering the whole body of evidence relevant to a particular clinical question, in the context of the setting in which it is to be applied. FORM recognises that ascribing a level of evidence to each study, reflecting the risk of bias in its design, is only one small part of assessing evidence for a guideline recommendation. FORM provides a framework for assessing all the studies relevant to a recommendation against five criteria: the evidence base (i.e. number, level and risk of bias of included studies); the consistency of findings between studies; the clinical impact suggested by the evidence base; the generalisability of the results to the population for whom the guideline is intended; and the applicability of the results to the Australian (and/or local) health care setting. Under FORM, these five key components are individually assessed for each clinical question, giving a picture of both the internal and external validity of the evidence base under consideration.

[SUBTITLE] Key components of FORM [SUBSECTION]

[SUBTITLE] 1. Evidence base [SUBSECTION]
The evidence base is assessed in terms of the quantity and quality of the studies identified by a systematic literature review for the clinical question concerned ('included studies'). Study quality relates to an assessment of the risk of bias inherent in the conduct, design and reporting of results in the included studies. The guideline developers are free to choose the most relevant process or tool to assess risk of bias. To ensure that consideration is given to the full range of study designs required to assess the breadth of clinical questions in a guideline, the GAR consultants also developed levels of evidence to address different clinical questions (prognosis, diagnostic accuracy, aetiology etc). This has been comprehensively addressed by Merlin et al [6] (see also the NHMRC website: http://www.nhmrc.gov.au/guidelines/developers.htm).
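As a purely illustrative aid (FORM does not prescribe any tooling, and the class, function and field names below are our own assumptions), the quantity-and-quality information that feeds this component could be tallied along the following lines, with the actual A to D rating still left to the panel's judgement against the matrix in Table 1:

```python
from collections import Counter
from typing import NamedTuple

class IncludedStudy(NamedTuple):
    """One study identified by the systematic review (illustrative structure only)."""
    level: str         # level of evidence for the question type, e.g. "I", "II", "IV"
    risk_of_bias: str  # e.g. "low", "moderate", "high", from whichever appraisal tool was used

def summarise_evidence_base(studies: list[IncludedStudy]) -> str:
    """Tally included studies by level and risk of bias.

    FORM itself does not prescribe this calculation; the summary simply gathers the
    'quantity and quality' information the panel weighs when rating the evidence base.
    """
    counts = Counter((s.level, s.risk_of_bias) for s in studies)
    parts = [f"{n} x level {lvl} ({rob} risk of bias)" for (lvl, rob), n in sorted(counts.items())]
    return f"{len(studies)} included studies: " + "; ".join(parts)

# Example: two low-risk randomised trials and one high-risk case series.
studies = [IncludedStudy("II", "low"), IncludedStudy("II", "low"), IncludedStudy("IV", "high")]
print(summarise_evidence_base(studies))
```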
[SUBTITLE] 2. Consistency [SUBSECTION]
The consistency component of the 'body of evidence' assesses the extent to which the findings are consistent across the included studies (including across a range of study populations and study designs). This allows users to assess whether the results are likely to be replicable or only likely to occur under certain conditions. Consistency may be assessed where appropriate as statistical heterogeneity (applying an I-squared statistic, for example) or, more likely, will require the users to make a judgement about the overall direction of effects across multiple studies with reference to clinical heterogeneity. Possible sources of inconsistency (heterogeneity) in the results of studies include differences in study design, the quality of the studies (risk of bias), the populations studied, and varying definitions of the outcomes being assessed. Should results differ for certain subpopulations, this could then be reflected in the development of the recommendation.
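Where the included studies can be pooled quantitatively, the I-squared statistic mentioned above has a standard definition in the meta-analysis literature (the Higgins-Thompson formulation, quoted here for reference only; it is not part of FORM itself):

\[ I^{2} = \max\left(0,\; \frac{Q - (k - 1)}{Q}\right) \times 100\% \]

where \(Q\) is Cochran's heterogeneity statistic and \(k\) is the number of pooled studies. Values of roughly 25%, 50% and 75% are conventionally read as low, moderate and high heterogeneity, but under FORM such a figure only informs, and does not replace, the panel's judgement about consistency.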
[SUBTITLE] 3. Clinical impact [SUBSECTION]
Clinical impact is a measure of the likely benefit that application of the guideline would have across the target population, and involves a clinical judgement. Factors that need to be taken into account when estimating clinical impact include: the relevance of the evidence to the clinical question; the statistical precision and size (and clinical importance) of the effect reported in the evidence base; the relevance of the effect to patients, compared with other management options; the duration of therapy required to achieve the effect; and the balance of risks and benefits to the patient group, including potential harm. A hypothetical example incorporating both clinical importance and potential harm is the use of statins in the control of dyslipidaemia, where there is a very large body of evidence with low risk of bias indicating a substantial reduction in the risk of cardiovascular events. In this case a qualifying recommendation could be made to differentiate the small group of people who may experience adverse events as a result of statin therapy.

Clinical impact is arguably the most subjective of the five evidence components rated in the evidence statement. However, in assisting many guideline development groups to produce clinical practice guidelines using the FORM process, we have found that it is often clearer for clinicians than it is for methodological experts. Clinicians seem to grasp the net benefit concept quite easily, although robust discussions often occur before a consensus is reached regarding the rating of this component. A strength of FORM is that these discussions contribute to formulating appropriate recommendations, and the final conclusion can be documented so that users of the guideline can see how the developers arrived at the recommendation.

[SUBTITLE] 4. Generalisability [SUBSECTION]
The assessment of generalisability involves determining how precisely the available body of evidence answers the clinical question that was asked. Issues to be considered include: how well the participants and settings of the included studies match the patient population being targeted by the guideline; the clinical setting where the recommendation will be implemented; and other factors such as the stage of the disease (e.g. early versus advanced), the duration of illness and (for diagnostic accuracy questions) the prevalence of the disease in the study population as compared with the target population for the guideline.

[SUBTITLE] 5. Applicability [SUBSECTION]
This component addresses whether the evidence base is relevant to the Australian health care system generally, or to more local settings for specific recommendations (such as rural areas or cities). Factors that may reduce the direct application of study findings to the Australian or more local settings include organisational factors (e.g. availability of trained staff, clinic time, specialised equipment, tests or other resources) and cultural factors (e.g. attitudes to health issues, including those that may affect compliance with the recommendation).
[SUBTITLE] The FORM Matrix and Evidence Statement Form [SUBSECTION]
The FORM matrix forms part of the overall process, which is detailed in Additional file 1. Each of the components in the FORM matrix can be rated from A to D. The body of evidence supporting a recommendation rarely consists of the same rating for each of the five components. There may be a large number of studies with a low risk of bias and consistent findings, but which have only a limited clinical impact, and are not directly generalisable to the target population or applicable to the local (e.g. Australian) healthcare context. Alternatively, a body of evidence may consist of one or two randomised trials with small sample sizes that have a moderate risk of bias but have a very large clinical impact and are directly applicable to the local healthcare context and target population. By rating each of the five components separately, FORM allows for this mixture of components, while still reflecting the overall body of evidence supporting a guideline recommendation. The FORM Matrix provides guidance for users about how to rate each component of the body of evidence (see Table 1). The accompanying Evidence Statement Form is provided for guideline developers to complete for each clinical question, with room for additional information and dissenting opinions to be recorded. A recommendation to answer the clinical question is developed in two stages. First, a rating is assigned for each of the five components described above and an evidence statement is written in passive voice to reflect the findings of the evidence base.
Second, an overall recommendation or action statement is developed on the basis of the evidence statement, and an overall grade is assigned to this recommendation that reflects the level of confidence in the evidence supporting it.

Table 1. NHMRC body of evidence matrix. (SR = systematic review; 'several' = more than two studies. Footnotes: 1, level of evidence determined from the NHMRC Evidence Hierarchy; 2, if there is only one study, rank this component as 'not applicable'; 3, for example, results in adults that are clinically sensible to apply to children, or psychosocial outcomes for one cancer that may be applicable to patients with another cancer.)

Evidence statements may be developed by outcome measure for each intervention, and the multiple evidence statements for a single question can then be collapsed into a single recommendation. Guideline developers can produce a combined recommendation taking into account the balance of benefits and harms, or separate recommendations for benefits and harms if this is more appropriate. The FORM process allows considerable flexibility in developing the recommendation.

The overall grade for a recommendation should indicate the strength of the body of evidence underpinning it. This assists users of the clinical practice guidelines to make appropriate and informed clinical judgements. Grade A or B recommendations are generally based on a body of evidence that can be trusted to guide clinical practice, whereas Grade C or D recommendations must be applied carefully to individual clinical and organisational circumstances and should be interpreted with caution (see Table 2). A recommendation cannot be graded A or B unless the evidence base and consistency of the evidence are both rated A or B. In some cases, lower-graded evidence statements may not provide sufficient confidence to support an evidence-based recommendation at all. However, the framework allows Good Practice Points (GPP) to be included when developers feel it is important to provide non-evidence-based guidance.

Table 2. Definition of NHMRC grades of recommendations.

In formulating the recommendation, users are advised to address the specific clinical question and to use action statements. The wording of the recommendation should reflect the strength of the body of evidence: words such as 'must', 'should' or 'use' are included when the evidence underpinning the recommendation is strong, and words such as 'might', 'could' or 'consider' are used when the evidence base is weaker.

The following recommendations illustrate these points and are taken from the NHMRC Clinical Practice Guidelines for the Management of Melanoma in Australia and New Zealand (NHMRC 2008). They show that the evidence base, consistency and impact were high for dermoscopy, but not so high for total body photography (also indicated by the use of the verb 'recommended' in the first case and 'consider' in the second):

• Training and utilisation of dermoscopy is recommended for clinicians routinely examining pigmented skin lesions: Grade A;

• Consider the use of baseline total body photography as a tool for the early detection of melanoma in patients who are at high risk for developing primary melanoma: Grade C (p xxii [12]).

Developers are also asked to consider how the guideline will be implemented at the time that the recommendations are being formulated. The Evidence Statement Form asks developers to consider whether: the recommendation will result in changes in usual care; there are any resource implications associated with implementing the recommendation; implementation of the recommendation will require changes in the way care is currently organised; and the guideline development group is aware of any barriers to implementation. This information is used to inform the implementation plan for the Guideline.
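To make the grading constraint above concrete, the following sketch (illustrative only: the class, the function and the downgrade-to-C fallback are our assumptions, not part of the NHMRC documentation) shows how the five component ratings from an Evidence Statement Form might be checked against the one explicit rule stated above:

```python
from dataclasses import dataclass

@dataclass
class EvidenceStatement:
    """Component ratings (A-D) recorded for one clinical question; field names are illustrative."""
    evidence_base: str
    consistency: str
    clinical_impact: str
    generalisability: str
    applicability: str

def cap_overall_grade(proposed: str, statement: EvidenceStatement) -> str:
    """Apply the one explicit FORM constraint to a panel's proposed overall grade.

    FORM leaves the overall grade to the panel's judgement, but a recommendation
    cannot be graded A or B unless both the evidence base and the consistency
    components are rated A or B. The downgrade target of 'C' is an assumption.
    """
    strong_core = statement.evidence_base in ("A", "B") and statement.consistency in ("A", "B")
    if proposed in ("A", "B") and not strong_core:
        return "C"  # the panel would normally also revisit the wording of the recommendation
    return proposed

# Example: strong evidence base but inconsistent findings -> proposed grade B is capped at C.
example = EvidenceStatement(evidence_base="A", consistency="C", clinical_impact="B",
                            generalisability="B", applicability="A")
print(cap_overall_grade("B", example))  # prints "C"
```

In practice the overall grade is a consensus judgement recorded on the Evidence Statement Form rather than a computed value; the point of the sketch is simply that an A or B overall grade presupposes A or B ratings for both the evidence base and consistency components.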
[SUBTITLE] Feedback, piloting and users' experiences [SUBSECTION]
Over the trial and consultancy period for the FORM grading process, we obtained feedback from invited experts (see acknowledgements), from current guideline developers and from the public. The issues and suggestions raised were carefully considered at the face-to-face meeting of the GAR consultants in 2007 (see methods). Where appropriate, we amended the FORM methodology and/or supporting documents to incorporate the suggestions or address problems. This iterative process ensured that the development of FORM was responsive to the needs of its core user group - guideline developers - and was as clear and comprehensible as possible, even for developers with limited methodological expertise. It also allowed the FORM development process to keep abreast of the sometimes rapidly changing methodology underpinning guideline development internationally and to incorporate changes into FORM as appropriate. As developers of FORM, and as methodological experts assisting guideline developers, we (the authors) have been able to field-test the FORM process and gain first-hand feedback and direct experience of the problems and issues encountered. This has been invaluable in modifying FORM to be more effective and useful.

The following issues were identified in the first consultation and addressed in the second iteration of FORM where appropriate:

• deciding between grades - although this has become easier with time and familiarity
• determining and extracting relevant information from synthesised sources (such as existing systematic reviews) which are incompletely reported
• insufficient funding, human resources and/or time for the rigorous systematic literature reviews needed to underpin the evidence statements
• the need to accommodate subjectivity in the interpretation of the components and the final recommendation/s

In response to specific suggestions made in the first consultation period, we made the following modifications to the FORM supporting documentation:

• revision of the notes, matrix and form to be more user friendly
• the addition of 'explanatory notes' sections for developers to document reasons for particular decisions within the matrix
• the addition of 'dissenting opinions' and 'unresolved issues' sections to the Evidence Statement Form to keep decision making transparent and informed
• a flowchart to assist in navigation

Feedback from the second stage of consultation showed that the modifications were a major improvement, and guideline developers agreed that the FORM system of grading was an improvement on the previous system, in which recommendations were 'graded' according to the level of evidence from the NHMRC evidence hierarchy [3,6]. They also reported that the framework offers an opportunity to develop guidelines that improve dissemination and uptake in clinical practice. With increasing familiarity, users have found the framework fairly simple to use.

As methodological experts assisting guideline developers, we have found that the framework provides additional flexibility, especially when handling evidence with more than one outcome measure (for example overall survival, pain, readmission rates). Variable results and evidence statements for multiple outcomes can be captured by a single recommendation. Furthermore, the framework also allows a recommendation to be developed that balances the benefits and harms of an intervention (i.e. safety and effectiveness), with enough flexibility to keep them separate if that is felt to be important. More than 20 NHMRC guidelines have now been completed using FORM.
This has been invaluable in modifying FORM to be more effective and useful.\nThe following issues were identified in the first consultation and addressed in the second iteration of FORM where appropriate:\n• deciding between grades - but this has become easier with time and familiarity\n• determining and extracting relevant information from synthesised sources (such as existing systematic reviews) which are incompletely reported\n• insufficient funding, human resources and/or time for the rigorous systematic literature reviews needed to underpin the evidence statements\n• need to accommodate subjectivity in the interpretation of the components and the final recommendation/s\nIn response to specific suggestions made in the first consultation period, we made the following modifications to the FORM supporting documentation:\n• revision of the notes, matrix and form to be more user friendly\n• the addition of 'explanatory notes' sections for developers to document reasons for particular decisions within the matrix\n• the addition of a 'dissenting opinions' and 'unresolved issues' sections to the Evidence Statement Form to keep decision making transparent and informed\n• a flowchart to assist in navigation\nFeedback from the second stage of consultation showed that the modifications were a major improvement and that guideline developers agreed that the FORM system of grading was an improvement on the previous system where recommendations were 'graded' according to the level of evidence from the NHMRC evidence hierarchy [3,6]. They also reported that the framework offers an opportunity to develop guidelines that improve dissemination and uptake in clinical practice. With increasing familiarity users have found the framework fairly simple to use.\nAs methodological experts assisting guideline developers, we have found the framework provides additional flexibility, especially when handling evidence with more than one outcome measure (for example overall survival, pain, readmission rates). Variable results/evidence statements for multiple outcomes can be captured by a single recommendation. Furthermore, the framework also allows a recommendation to be developed that balances the benefits and harms of an intervention (i.e. safety and effectiveness), but with enough flexibility to keep them separate if it is felt to be important. More than 20 NHMRC guidelines have now been completed using FORM.", "The formulation and inclusion of recommendations is one of the defining differences between clinical practice guidelines and other evidence syntheses such as systematic reviews. A recent review of the adequacy of guideline recommendations has highlighted that over half of the recommendations (52.7%) give no indication of the strength of that recommendation [1].\nThe FORM process for formulation and grading of recommendations in clinical practice guidelines is logical, simple to use and intuitive. Its concurrent development with Australian levels of evidence [6] means that NHMRC can provide Australian (and other) guideline developers with an integrated framework for producing high-quality recommendations that represent best-practice and are implementable, acceptable and appropriate for the local health care system. The framework is also generic - the same processes can be used to formulate and grade recommendations for any type of clinical question, despite the differences in the type of evidence required to address that question (e.g. 
questions of diagnostic test accuracy, risk factors for disease progression or poor prognosis). Furthermore, health service providers can implement the evidence-based course of action with appropriate modification in light of the individual patient's values and preferences.\nIn areas like public health where there may never be high-level evidence supporting the use of different interventions, practice recommendations developed using other grading systems would consistently rate a lower grade than is felt appropriate by experts in those fields. Examples of such areas include large-scale dietary questions, passive smoking or exposure to environmental chemicals. This does not occur using the FORM methodology. Using the NHMRC levels of evidence for aetiology questions as an alternative to the levels for intervention questions [6] allows the evidence base component of our grading system to be rated higher than would otherwise occur and this would be reflected in the overall grade of recommendation.\nThe extensive pilot of FORM and subsequent uptake by both new and experienced guideline developers has shown that the framework is feasible and accepted. The component approach allows transparency in how recommendations are formulated, with users of the guidelines able to explicitly see the various contributions of factors such as quality of the evidence and clinical impact. A further strength is that implementation and resourcing issues are considered separately, which means that effective but potentially costly interventions are not penalised with a downgraded recommendation as the developers of this system felt that users' willingness to pay will vary according to the context of use. Arguably the greater ability to differentiate strength of recommendation (four levels) in FORM offers more precision for developers.\n[SUBTITLE] Limitations [SUBSECTION] The UK National Institute for Health and Clinical Excellence (NICE) has decided to discontinue summary grades for recommendations, on the grounds that their previous grading system was being misinterpreted. They have stated that they are not sure that the GRADE system's approach to summary labels overcomes this [13]. We are not aware of this sort of misinterpretation occurring with FORM, and believe that the benefits of grading outweigh the harms as clinicians are striving for clear-cut health advice to assist with their individual decision-making. However, ongoing monitoring and periodic review of the application and use of FORM needs to be considered.\nRecommendation formulation and grading can be particularly challenging when the evidence is scant and/or poor, or conflicting. NICE has outlined some strategies to address these challenges, including using consensus when no evidence is found for a particular clinical question and highlighting gaps in the evidence where evidence is scant or poor.[14] NICE reminds us that whenever guidelines are unable to rely on a solid evidence base other methods used for formulating recommendations must be transparent and set out clearly in the guideline. 
A particular strength of an explicit process such as FORM is that the path from evidence to recommendation is made clear.\nCurrent evidence frameworks are grappling with how to integrate other forms of evidence needed to answer qualitative questions such as optimal quality of life, and we anticipate that FORM will need to be periodically reassessed in the light of international debate about levels of evidence and grading recommendations.\nThe purpose of clinical practice guidelines is to change or guide health professionals' behaviour and to improve quality of care. Therefore, the ultimate test of guidelines and the processes used to develop and implement guidelines will be improved health outcomes and improved systems. One way of facilitating this is by developing recommendations that are transparently produced through a process that is user-friendly, weighs up multiple concepts when formulating a course of action (much as the clinician does for an individual patient), and provides clear advice on the confidence or uncertainty associated with the recommended course of action.", "The UK National Institute for Health and Clinical Excellence (NICE) has decided to discontinue summary grades for recommendations, on the grounds that their previous grading system was being misinterpreted. They have stated that they are not sure that the GRADE system's approach to summary labels overcomes this [13]. We are not aware of this sort of misinterpretation occurring with FORM, and believe that the benefits of grading outweigh the harms as clinicians are striving for clear-cut health advice to assist with their individual decision-making. However, ongoing monitoring and periodic review of the application and use of FORM needs to be considered.\nRecommendation formulation and grading can be particularly challenging when the evidence is scant and/or poor, or conflicting. NICE has outlined some strategies to address these challenges, including using consensus when no evidence is found for a particular clinical question and highlighting gaps in the evidence where evidence is scant or poor.[14] NICE reminds us that whenever guidelines are unable to rely on a solid evidence base other methods used for formulating recommendations must be transparent and set out clearly in the guideline. A particular strength of an explicit process such as FORM is that the path from evidence to recommendation is made clear.\nCurrent evidence frameworks are grappling with how to integrate other forms of evidence needed to answer qualitative questions such as optimal quality of life, and we anticipate that FORM will need to be periodically reassessed in the light of international debate about levels of evidence and grading recommendations.\nThe purpose of clinical practice guidelines is to change or guide health professionals' behaviour and to improve quality of care. Therefore, the ultimate test of guidelines and the processes used to develop and implement guidelines will be improved health outcomes and improved systems. One way of facilitating this is by developing recommendations that are transparently produced through a process that is user-friendly, weighs up multiple concepts when formulating a course of action (much as the clinician does for an individual patient), and provides clear advice on the confidence or uncertainty associated with the recommended course of action.", "FORM provides a contemporary and internationally relevant structure within which clinical guideline developers can consider current literature related to specific clinical questions. It has been developed through a unique partnership of government, academic, private consultancy and clinical personnel with considerable experience in evidence-based practice and development of clinical practice guidelines. Our work with over 20 guideline developers during the piloting of the FORM process has demonstrated it to be a logical, simple to use and intuitive system for formulating and grading recommendations in clinical practice guidelines.", "The authors declare that they have no competing interest. 
Meeting attendance fees were paid to the authors (or their institutions) by the NHMRC - a not-for-profit research organisation funded by the Australian Government.", "All authors were involved in the process of drafting and revising the grading process as described and all authors were involved in the preparation and final approval of the current manuscript.", "History of NHMRC Guidelines Assessment Register (GAR) and members of the Levels and Grades Working Party\nIn 2002, the NHMRC convened a register of methodological experts (Guidelines Assessment Register [GAR]) to assist external guideline developers in Australia through the process of identifying and synthesising evidence for guidelines in a way that complied with NHMRC specified requirements and would assist them in gaining NHMRC endorsement for their work. The main role of the GAR consultants was to oversee the methodological processes in external development of guidelines, particularly reviewing and classifying the quality of the evidence, and how these classifications correlated to the resultant recommendations. The expected outcome of the involvement of the GAR consultants was that consistently high quality guidelines would be submitted to HAC for approval, and that problems identified post hoc in guideline development could be pre-empted.\nKristina Coleman, Sarah Norris, Adele Weston (Health Technology Analysts Pty Ltd)\nKaren Grimmer-Somers, Susan Hillier (iCentre for Allied Health Evidence, University of South Australia)\nTracy Merlin (Adelaide Health Technology Assessment, Discipline of Public Health, University of Adelaide)\nPhilippa Middleton, Rebecca Tooher (ARCH, University of Adelaide)\nJanet Salisbury (Biotext Pty Ltd)", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/11/23/prepub\n" ]
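The Evidence Statement Form and body of evidence matrix described in the sections above lend themselves to a simple per-question record. The Python sketch below is illustrative only: the field names are informal shorthand rather than the official NHMRC form labels, and the component ratings and evidence statement text in the example are invented, with only the recommendation wording and its Grade A taken from the melanoma guideline quoted above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EvidenceStatement:
    # One record per clinical question, mirroring the Evidence Statement Form.
    clinical_question: str
    evidence_base: str          # A-D (quantity and risk of bias of included studies)
    consistency: str            # A-D, or "NA" when only one study is available
    clinical_impact: str        # A-D
    generalisability: str       # A-D
    applicability: str          # A-D (relevance to the Australian or local setting)
    evidence_statement: str     # written in passive voice
    recommendation: str         # action statement answering the question
    grade: str                  # overall grade A-D, or "GPP" for a good practice point
    dissenting_opinions: List[str] = field(default_factory=list)
    implementation_notes: List[str] = field(default_factory=list)

# Ratings and the evidence statement text are invented for illustration; only the
# recommendation wording and its Grade A come from the melanoma guideline above.
dermoscopy = EvidenceStatement(
    clinical_question="Should clinicians routinely examining pigmented skin lesions use dermoscopy?",
    evidence_base="A", consistency="A", clinical_impact="A",
    generalisability="B", applicability="B",
    evidence_statement="Dermoscopy is associated with improved detection of melanoma.",
    recommendation="Training and utilisation of dermoscopy is recommended for clinicians routinely examining pigmented skin lesions.",
    grade="A",
)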
[ "Background", "The concept of a body of evidence", "Methods", "Results", "Key components of FORM", "1. Evidence base", "2. Consistency", "3. Clinical impact", "4. Generalisability", "5. Applicability", "The FORM Matrix and Evidence Statement Form", "Feedback, piloting and users' experiences", "Discussion", "Limitations", "Conclusion", "Competing interests", "Authors' contributions", "Appendix 1", "Pre-publication history", "Supplementary Material" ]
[ "Best practice in health care should be guided by the results of research on the safety and effectiveness of different courses of clinical action. This evidence needs to be assembled, justified and presented in the form of health advice for multiple stakeholders including health professionals, decision makers and consumers of health care. Clinical practice guidelines are recognised as one of the best ways to present recommended courses of action based on research evidence, although recommendations are often presented inconsistently [1]. Where such evidence is not available, guidelines may use consensus-based practice points and/or identify areas requiring further research. Both format and content can adversely affect the adoption and integration of guidelines into clinical practice [2].\nThe National Health and Medical Research Council (NHMRC) of Australia has been a world leader in developing and supporting the development of evidence-based health advice, including clinical practice guidelines. As early as 1999, the NHMRC commissioned and published 'Guidelines for Guideline Development' [3], anticipating the need for a comprehensive set of resources to help guideline developers produce high quality guidelines. This was followed by a more detailed series of handbooks on different aspects of finding and reviewing clinical research [4].\nAustralian guideline developers must comply with NHMRC standards in order to gain NHMRC approval. These standards (such as rigorous evidence-based methods, multidisciplinary panels and public consultation processes) have resulted in NHMRC approved guidelines being of higher quality than those developed outside NHMRC processes [5].\nBy 2004, it had become clear that the NHMRC standards required expansion and revision in response to the rapid growth and diversification of clinical practice guidelines in Australia and elsewhere. There were two main areas where a need for revision was identified. The first was the need to develop a set of levels (or hierarchy) of evidence which would cover the different individual study designs used to address the different types of questions formulated by guideline development panels. This work (covering interventions, diagnostic accuracy, prognosis, aetiology and screening) is outlined in Merlin et al [6]. The second area was the need to develop a new system, or adaptation of an existing system, of formulating and grading recommendations for clinical practice guidelines that incorporated an assessment of the 'body of evidence'.\n[SUBTITLE] The concept of a body of evidence [SUBSECTION] Many guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). 
Recommendations based on the body of evidence could then be graded according to the degree of confidence that implementing the suggested course of action would lead to improved patient health outcomes.\nIn recognition of this need, and in response to requests from methodological experts that consult for the NHMRC on guideline development (Guidelines Assessment Register [GAR] consultants) (see Appendix 1), the NHMRC undertook to revise and update its methodological approaches. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.\nMany guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). Recommendations based on the body of evidence could then be graded according to the degree of confidence that implementing the suggested course of action would lead to improved patient health outcomes.\nIn recognition of this need, and in response to requests from methodological experts that consult for the NHMRC on guideline development (Guidelines Assessment Register [GAR] consultants) (see Appendix 1), the NHMRC undertook to revise and update its methodological approaches. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.", "Many guideline recommendations have been rated solely according to the level of evidence of the individual studies contributing to that recommendation. In the late 1990 s and early 2000 s, NHMRC prepared a series of handbooks to assist clinical practice guideline developers. These handbooks stated that other elements such as study quality, size and precision of study results, and relevance to local practice were also important [3,4]. They did not, however, go as far as providing a transparent logical framework for assessing these elements when formulating recommendations. What was needed was a method for considering all of these elements across all of the research studies addressing the clinical question as a whole (the 'body of evidence') like some other guideline development methodologies (such as those used by the Scottish Intercollegiate Guidelines Network or the National Institute for Health and Clinical Excellence). 
Recommendations based on the body of evidence could then be graded according to the degree of confidence that implementing the suggested course of action would lead to improved patient health outcomes.\nIn recognition of this need, and in response to requests from methodological experts that consult for the NHMRC on guideline development (Guidelines Assessment Register [GAR] consultants) (see Appendix 1), the NHMRC undertook to revise and update its methodological approaches. This paper reports on the production and trial of a methodology and associated processes to assist Australian guideline developers in considering a body of evidence and grading the resulting guideline recommendations.", "In 2004, the NHMRC commissioned a review of existing frameworks for assessing evidence internationally [7]. This internal report provided a resource for a working party (comprising GAR consultants and NHMRC personnel - see Appendix 1 for members) to review existing practice, design and/or adapt a framework for grading a body of evidence and pilot this process with Australian guideline developers.\nThe report identified nine possible systems for use in developing clinical practice guidelines. Of these, three were considered to be most useful for informing the development of an Australian guideline recommendation process. These frameworks were the Scottish Intercollegiate Guidelines Network (SIGN) system and considered judgement statement (SIGN50, revised 2008)[8]; the Strength of Recommendation Taxonomy (SORT) [9]; and the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) [10].\nThese systems were discussed at a face-to-face meeting of the working party with respect to their advantages and disadvantages and compatibility with the existing advice in the NHMRC 'Guidelines for Guideline Development' handbooks. A consensus was reached about how these frameworks could be adapted in the new process. From the three systems, we combined elements to achieve our objectives, which were: to have a system that matched and complemented the current NHMRC evidence dimensions and documents as closely as possible; simplicity and clarity of approach; and to provide transparent method/s of formulating and documenting judgments to give a graded set of recommendations. The working party drafted a new framework for grading recommendations and this was refined by extensive email consultation and iteration within the group.\nThe resulting draft framework was piloted by GAR consultants working with guideline developers between 2005 and 2009. 
There were five main methods to gather feedback:\n• Known experts in the international guideline field were approached by NHMRC directly for comment on the draft system - this was a formal request and responses were semi-structured in that the experts were free to review in their own style,\n• Key evidence-based assessment organisations in Australian and New Zealand were invited to register feedback on the website where the system was posted,\n• All guideline development groups working within the NHMRC endorsement framework during this period used FORM under the guidance of the GARs - they were all invited to offer feedback during and after the process.\n• The draft process was presented at key conferences and interactive workshops (eg International Cochrane Colloquium [11]),\n• The website was open for the 5 years (passive seeking) and included a structured feedback form.\nFollowing this initial period of consultation (up until 2007) the FORM's framework was further refined, taking account of the feedback received, and the public consultation period was extended to June 2009. During the development, trialing and refinement period from 2004 to 2009, the international guideline community continued to debate and evolve other systems of guideline production - these developments were monitored and helped to inform the Australian process. The revised version of FORM was subsequently endorsed by the Council of the NHMRC.", "The new FORM framework was loosely based on the SIGN considered judgement form [8]. It provides guideline developers with a structured process for considering the whole body of evidence relevant to a particular clinical question, in the context of the setting in which it is to be applied. FORM recognises that ascribing a level of evidence to each study that reflects the risk of bias in its design, is only one small part of assessing evidence for a guideline recommendation. FORM provides a framework for assessing all the studies relevant for a recommendation against five criteria: the evidence base (i.e. number, level and risk of bias in included studies); the consistency of findings between studies; the clinical impact suggested by the evidence base; the generalisability of the results to the population for whom the guideline is intended; and the applicability of the results to the Australian (and/or local) health care setting. Under FORM, these five key components are individually assessed for each clinical question giving a picture of both the internal and external validity of the evidence base under consideration.\n[SUBTITLE] Key components of FORM [SUBSECTION] [SUBTITLE] 1. Evidence base [SUBSECTION] The evidence base is assessed in terms of the quantity and quality of the studies identified by a systematic literature review for the clinical question concerned ('included studies'). Study quality relates to an assessment of the risk of bias inherent in the conduct, design and reporting of results in the included studies. The guideline developers are free to choose the most relevant process or tool to assess risk of bias. To ensure that consideration is given to the full range of study designs required to assess the breadth of clinical questions in a guideline, the GAR consultants also developed levels of evidence to address different clinical questions (prognosis, diagnostic accuracy, aetiology etc). 
This has been comprehensively addressed by Merlin et al [6] (see also, NHMRC website: http://www.nhmrc.gov.au/guidelines/developers.htm)\n[SUBTITLE] 2. Consistency [SUBSECTION] The consistency component of the 'body of evidence' assesses the extent to which the findings are consistent across the included studies (including across a range of study populations and study designs). This allows users to assess whether the results are likely to be replicable or only likely to occur under certain conditions. Consistency may be assessed where appropriate as statistical heterogeneity (applying an I-squared statistic for example) or more likely will require the users to make a judgment about the overall direction of effects across multiple studies with reference to clinical heterogeneity. Possible sources of inconsistency (heterogeneity) in the results of studies may be differences in the study design, the quality of the studies (risk of bias), the population studied, and varying definitions of outcomes being assessed. Should results differ for certain subpopulations, this could then be reflected in the development of the recommendation.\n[SUBTITLE] 3. Clinical impact [SUBSECTION] Clinical impact is a measure of the likely benefit that application of the guideline would have across the target population, and involves a clinical judgement. Factors that need to be taken into account when estimating clinical impact include: the relevance of the evidence to the clinical question; the statistical precision and size (and clinical importance) of the effect reported in the evidence-base; the relevance of the effect to patients, compared to other management options; the duration of therapy required to achieve the effect; and the balance of risks and benefits to the patient group, including potential harm. A hypothetical example of incorporating both clinical importance and potential harm may be for the use of statins in the control of dyslipidaemia where there is a very large body of evidence with low risk of bias indicating a substantial reduction in risk of cardiovascular events. In this case a qualifying recommendation could be made to differentiate the small group of people who may experience adverse events as a result of statin therapy.\nClinical impact is arguably the most subjective of the five evidence components rated in the evidence statement. However, we have found in assisting many guideline development groups to produce clinical practice guidelines using the FORM process that it is often clearer for clinicians than it is for methodological experts. Clinicians seem to grasp the net benefit concept quite easily, although often robust discussions occur before a consensus is reached regarding the rating of this component. A strength of FORM is that these discussions contribute to formulating appropriate recommendations, and the final conclusion can be documented so that users of the guideline can see how the developers arrived at the recommendation.\n[SUBTITLE] 4. Generalisability [SUBSECTION] The assessment of generalisability involves determining how precisely the available body of evidence answers the clinical question that was asked. Issues to be considered include: how well the participants and settings of the included studies match the patient population being targeted by the guideline; the clinical setting where the recommendation will be implemented; and other factors such as the stage of the disease (e.g. early versus advanced), the duration of illness and (for diagnostic accuracy questions) the prevalence of the disease in the study population as compared to the target population for the guideline.\n[SUBTITLE] 5. Applicability [SUBSECTION] This component addresses whether the evidence base is relevant to the Australian health care system generally, or to more local settings for specific recommendations (such as rural areas or cities). Factors that may reduce the direct application of study findings to the Australian or more local settings include organisational factors (e.g. availability of trained staff, clinic time, specialised equipment, tests or other resources) and cultural factors (e.g. attitudes to health issues, including those that may affect compliance with the recommendation).
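Where consistency is assessed as statistical heterogeneity, the I-squared statistic mentioned under the Consistency component above can be derived from Cochran's Q. The following minimal Python sketch uses invented study estimates purely for illustration and is not part of the FORM documentation.

# Minimal sketch: I-squared from a fixed-effect pooling of k study estimates
# (for example log odds ratios) and their standard errors. The numbers passed
# in at the bottom are invented for illustration.
def i_squared(estimates, standard_errors):
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))  # Cochran's Q
    degrees_of_freedom = len(estimates) - 1
    if q <= 0:
        return 0.0
    # Proportion of observed variability beyond what chance alone would explain.
    return max(0.0, (q - degrees_of_freedom) / q) * 100.0

print(i_squared([0.42, 0.38, 0.55, 0.12], [0.10, 0.15, 0.12, 0.20]))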
[SUBTITLE] The FORM Matrix and Evidence Statement Form [SUBSECTION] The FORM matrix forms part of the overall process which is detailed in Additional file 1. Each of the components in the FORM matrix can be rated from A to D. The body of evidence supporting a recommendation rarely consists of the same rating for each of the five components. There may be a large number of studies with a low risk of bias and consistent findings, but which have only a limited clinical impact, and are not directly generalisable to the target population or applicable to the local (e.g. Australian) healthcare context. Alternatively, a body of evidence may consist of one or two randomised trials with small sample sizes that have a moderate risk of bias but have a very large clinical impact and are directly applicable to the local healthcare context and target population. By rating each of the five components separately, FORM allows for this mixture of components, while still reflecting the overall body of evidence supporting a guideline recommendation. The FORM Matrix provides guidance for users about how to rate each component of the body of evidence (see Table 1). The accompanying Evidence Statement Form is provided for guideline developers to complete for each clinical question with room for additional information and dissenting opinions to be recorded. A recommendation to answer the clinical question is developed in two stages. First, a rating is assigned for each of the five components described above and an evidence statement is written in passive voice to reflect the findings of the evidence base. Second, an overall recommendation or action statement is developed on the basis of the evidence statement and an overall grade is assigned to this recommendation that reflects the level of confidence in the evidence supporting the recommendation.\nNHMRC Body of evidence matrix\nSR = systematic review; several = more than two studies\n1 Level of evidence determined from the NHMRC Evidence Hierarchy\n2 If there is only one study, rank this component as 'not applicable'\n3 For example, results in adults that are clinically sensible to apply to children OR psychosocial outcomes for one cancer that may be applicable to patients with another cancer\nEvidence statements may be developed by outcome measures for each intervention and then the multiple evidence statements for a single question can be collapsed into a single recommendation. Guideline developers can produce a combined recommendation taking into account the balance of benefits and harms or separate recommendations for benefits and harms, if this is more appropriate. The FORM process allows considerable flexibility in developing the recommendation.\nThe overall grades for recommendations should indicate the strength of the body of evidence underpinning the recommendation. This assists users of the clinical practice guidelines to make appropriate and informed clinical judgments. Grade A or B recommendations are generally based on a body of evidence that can be trusted to guide clinical practice, whereas Grade C or D recommendations must be applied carefully to individual clinical and organisational circumstances and should be interpreted with caution (see Table 2). A recommendation cannot be graded A or B unless the evidence base and consistency of the evidence are both rated A or B. In some cases, lower-graded evidence statements may not provide sufficient confidence to support an evidence-based recommendation at all. However, the framework allows Good Practice Points (GPP) to be included when developers feel it is important to provide non-evidence-based guidance.\nDefinition of NHMRC grades of recommendations\nIn formulating the recommendation users are advised to address the specific clinical question and to use action statements. The wording of the recommendation should reflect the strength of the body of evidence. Words such as 'must' or 'should' or 'use' are included when the evidence underpinning the recommendation is strong, and words such as 'might' or 'could' or 'consider' are used when the evidence base is weaker.\nThe following recommendations illustrate these points and are taken from the NHMRC Clinical Practice Guidelines for the Management of Melanoma in Australia and New Zealand (NHMRC 2008). These show that the evidence base, consistency and impact were high for dermoscopy, but not so high for total body photography (also indicated by the use of the verb 'recommended' in the first case and 'consider' in the second):\n• Training and utilisation of dermoscopy is recommended for clinicians routinely examining pigmented skin lesions: Grade A;\n• Consider the use of baseline total body photography as a tool for the early detection of melanoma in patients who are at high risk for developing primary melanoma: Grade C (p xxii [12]).\nDevelopers are also asked to consider how the guideline will be implemented at the time that the guideline recommendations are being formulated. The Evidence Statement Form requests developers to consider whether: the recommendation will result in changes in usual care; there are any resource implications associated with implementing the recommendation; the implementation of the recommendation will require changes in the way care is currently organised; and the guideline development group are aware of any barriers to the implementation of the recommendation. This information is used to inform the implementation plan for the Guideline.
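The two-stage process and the capping rule just described can be sketched in code. Only the A-or-B cap on the overall grade and the strong-versus-weak wording advice come from the text above; the 'take the worst component rating' aggregation and the grade-to-verb mapping are illustrative placeholders, since FORM leaves the overall judgement to the guideline development group.

RANK = {"A": 1, "B": 2, "C": 3, "D": 4}

# Strong wording for strong evidence, softer wording for weaker evidence; the
# grade-to-verb split used here is an illustration, not an NHMRC rule.
SUGGESTED_VERBS = {"A": "must / should / use", "B": "must / should / use",
                   "C": "consider / could", "D": "consider / might"}

def overall_grade(evidence_base, consistency, clinical_impact,
                  generalisability, applicability):
    ratings = [evidence_base, consistency, clinical_impact,
               generalisability, applicability]
    worst = max(ratings, key=lambda r: RANK[r])  # provisional grade: worst component
    # Capping rule from the text: no overall A or B unless evidence base and
    # consistency are both rated A or B.
    if RANK[evidence_base] > RANK["B"] or RANK[consistency] > RANK["B"]:
        worst = max(worst, "C", key=lambda r: RANK[r])
    return worst

grade = overall_grade("B", "B", "C", "B", "A")
print(grade, "-", SUGGESTED_VERBS[grade])   # C - consider / could

Under this sketch a question rated B for evidence base and consistency but C for clinical impact would carry no more than a Grade C, which matches the general pattern of the matrix but not necessarily any particular panel's judgement.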
Second, an overall recommendation or action statement is developed on the basis of the evidence statement and an overall grade is assigned to this recommendation that reflects the level of confidence in the evidence supporting the recommendation.\nNHMRC Body of evidence matrix\nSR = systematic review; several = more than two studies\n1 Level of evidence determined from the NHMRC Evidence Hierarchy\n2 If there is only one study, rank this component as 'not applicable'\n3 For example, results in adults that are clinically sensible to apply to children OR psychosocial outcomes for one cancer that may be applicable to patients with another cancer\nEvidence statements may be developed by outcome measures for each intervention and then the multiple evidence statements for a single question can be collapsed into a single recommendation. Guideline developers can produce a combined recommendation taking into account the balance of benefits and harms or separate recommendations for benefits and harms, if this is more appropriate. The FORM process allows considerable flexibility in developing the recommendation.\nThe overall grades for recommendations should indicate the strength of the body of evidence underpinning the recommendation. This assists users of the clinical practice guidelines to make appropriate and informed clinical judgments. Grade A or B recommendations are generally based on a body of evidence that can be trusted to guide clinical practice, whereas Grade C or D recommendations must be applied carefully to individual clinical and organisational circumstances and should be interpreted with caution (see Table 2). A recommendation cannot be graded A or B unless the evidence base and consistency of the evidence are both rated A or B. In some cases, lower-graded evidence statements may not provide sufficient confidence to support an evidence-based recommendation at all. However, the framework allows Good Practice Points (GPP) to be included when developers feel it is important to provide non-evidence-based guidance.\nDefinition of NHMRC grades of recommendations\nIn formulating the recommendation users are advised to address the specific clinical question and to use action statements. The wording of the recommendation should reflect the strength of the body of evidence. Words such as 'must' or 'should' or 'use' are included when the evidence underpinning the recommendation is strong, and words such as 'might' or 'could' or 'consider' are used when the evidence base is weaker.\nThe following recommendations illustrate these points and are taken from the NHMRC Clinical Practice Guidelines for the Management of Melanoma in Australia and New Zealand (NHMRC 2008). These show that the evidence base, consistency and impact were high for dermoscopy, but not so high for total body photography (also indicated by the use of the verb 'recommended' in the first case and 'consider' in the second):\n• Training and utilisation of dermoscopy is recommended for clinicians routinely examining pigmented skin lesions: Grade A;\n• Consider the use of baseline total body photography as a tool for the early detection of melanoma in patients who are at high risk for developing primary melanoma: Grade C (p xxii [12]).\nDevelopers are also asked to consider how the guideline will be implemented at the time that the guideline recommendations are being formulated. 
The Evidence Statement Form requests developers to consider whether: the recommendation will result in changes in usual care; there are any resource implications associated with implementing the recommendation; the implementation of the recommendation will require changes in the way care is currently organised; and the guideline development group are aware of any barriers to the implementation of the recommendation. This information is used to inform the implementation plan for the Guideline.\n[SUBTITLE] Feedback, piloting and users' experiences [SUBSECTION] Over the trial and consultancy period for the FORM grading process, we obtained feedback from invited experts (see acknowledgements), from current guideline developers and from the public. These issues and suggestions were carefully considered at the face-to-face meeting of the GAR consultants in 2007 (see methods). Where appropriate, we amended the FORM methodology and/or supporting documents to incorporate the suggestions or address problems. This iterative process ensured that the development of FORM was responsive to the needs of its core user group - guideline developers - and was as clear and comprehensible as possible, even for developers with limited methodological expertise. It also allowed the FORM development process to keep abreast of the sometimes rapidly changing methodology underpinning guideline development internationally and incorporate changes into FORM as appropriate. As developers of FORM and also methodological experts assisting guideline developers we (the authors) have been able to field-test the FORM process and gain first-hand feedback and direct experience about problems and issues encountered. This has been invaluable in modifying FORM to be more effective and useful.\nThe following issues were identified in the first consultation and addressed in the second iteration of FORM where appropriate:\n• deciding between grades - but this has become easier with time and familiarity\n• determining and extracting relevant information from synthesised sources (such as existing systematic reviews) which are incompletely reported\n• insufficient funding, human resources and/or time for the rigorous systematic literature reviews needed to underpin the evidence statements\n• need to accommodate subjectivity in the interpretation of the components and the final recommendation/s\nIn response to specific suggestions made in the first consultation period, we made the following modifications to the FORM supporting documentation:\n• revision of the notes, matrix and form to be more user friendly\n• the addition of 'explanatory notes' sections for developers to document reasons for particular decisions within the matrix\n• the addition of a 'dissenting opinions' and 'unresolved issues' sections to the Evidence Statement Form to keep decision making transparent and informed\n• a flowchart to assist in navigation\nFeedback from the second stage of consultation showed that the modifications were a major improvement and that guideline developers agreed that the FORM system of grading was an improvement on the previous system where recommendations were 'graded' according to the level of evidence from the NHMRC evidence hierarchy [3,6]. They also reported that the framework offers an opportunity to develop guidelines that improve dissemination and uptake in clinical practice. 
With increasing familiarity users have found the framework fairly simple to use.\nAs methodological experts assisting guideline developers, we have found the framework provides additional flexibility, especially when handling evidence with more than one outcome measure (for example overall survival, pain, readmission rates). Variable results/evidence statements for multiple outcomes can be captured by a single recommendation. Furthermore, the framework also allows a recommendation to be developed that balances the benefits and harms of an intervention (i.e. safety and effectiveness), but with enough flexibility to keep them separate if it is felt to be important. More than 20 NHMRC guidelines have now been completed using FORM.\nOver the trial and consultancy period for the FORM grading process, we obtained feedback from invited experts (see acknowledgements), from current guideline developers and from the public. These issues and suggestions were carefully considered at the face-to-face meeting of the GAR consultants in 2007 (see methods). Where appropriate, we amended the FORM methodology and/or supporting documents to incorporate the suggestions or address problems. This iterative process ensured that the development of FORM was responsive to the needs of its core user group - guideline developers - and was as clear and comprehensible as possible, even for developers with limited methodological expertise. It also allowed the FORM development process to keep abreast of the sometimes rapidly changing methodology underpinning guideline development internationally and incorporate changes into FORM as appropriate. As developers of FORM and also methodological experts assisting guideline developers we (the authors) have been able to field-test the FORM process and gain first-hand feedback and direct experience about problems and issues encountered. This has been invaluable in modifying FORM to be more effective and useful.\nThe following issues were identified in the first consultation and addressed in the second iteration of FORM where appropriate:\n• deciding between grades - but this has become easier with time and familiarity\n• determining and extracting relevant information from synthesised sources (such as existing systematic reviews) which are incompletely reported\n• insufficient funding, human resources and/or time for the rigorous systematic literature reviews needed to underpin the evidence statements\n• need to accommodate subjectivity in the interpretation of the components and the final recommendation/s\nIn response to specific suggestions made in the first consultation period, we made the following modifications to the FORM supporting documentation:\n• revision of the notes, matrix and form to be more user friendly\n• the addition of 'explanatory notes' sections for developers to document reasons for particular decisions within the matrix\n• the addition of a 'dissenting opinions' and 'unresolved issues' sections to the Evidence Statement Form to keep decision making transparent and informed\n• a flowchart to assist in navigation\nFeedback from the second stage of consultation showed that the modifications were a major improvement and that guideline developers agreed that the FORM system of grading was an improvement on the previous system where recommendations were 'graded' according to the level of evidence from the NHMRC evidence hierarchy [3,6]. 
They also reported that the framework offers an opportunity to develop guidelines that improve dissemination and uptake in clinical practice. With increasing familiarity users have found the framework fairly simple to use.\nAs methodological experts assisting guideline developers, we have found the framework provides additional flexibility, especially when handling evidence with more than one outcome measure (for example overall survival, pain, readmission rates). Variable results/evidence statements for multiple outcomes can be captured by a single recommendation. Furthermore, the framework also allows a recommendation to be developed that balances the benefits and harms of an intervention (i.e. safety and effectiveness), but with enough flexibility to keep them separate if it is felt to be important. More than 20 NHMRC guidelines have now been completed using FORM.", "[SUBTITLE] 1. Evidence base [SUBSECTION] The evidence base is assessed in terms of the quantity and quality of the studies identified by a systematic literature review for the clinical question concerned ('included studies'). Study quality relates to an assessment of the risk of bias inherent in the conduct, design and reporting of results in the included studies. The guideline developers are free to choose the most relevant process or tool to assess risk of bias. To ensure that consideration is given to the full range of study designs required to assess the breadth of clinical questions in a guideline, the GAR consultants also developed levels of evidence to address different clinical questions (prognosis, diagnostic accuracy, aetiology etc). This has been comprehensively addressed by Merlin et al [6] (see also, NHMRC website: http://www.nhmrc.gov.au/guidelines/developers.htm)\nThe evidence base is assessed in terms of the quantity and quality of the studies identified by a systematic literature review for the clinical question concerned ('included studies'). Study quality relates to an assessment of the risk of bias inherent in the conduct, design and reporting of results in the included studies. The guideline developers are free to choose the most relevant process or tool to assess risk of bias. To ensure that consideration is given to the full range of study designs required to assess the breadth of clinical questions in a guideline, the GAR consultants also developed levels of evidence to address different clinical questions (prognosis, diagnostic accuracy, aetiology etc). This has been comprehensively addressed by Merlin et al [6] (see also, NHMRC website: http://www.nhmrc.gov.au/guidelines/developers.htm)\n[SUBTITLE] 2. Consistency [SUBSECTION] The consistency component of the 'body of evidence' assesses the extent to which the findings are consistent across the included studies (including across a range of study populations and study designs). This allows users to assess whether the results are likely to be replicable or only likely to occur under certain conditions. Consistency may be assessed where appropriate as statistical heterogeneity (applying an I-squared statistic for example) or more likely will require the users to make a judgment about the overall direction of effects across multiple studies with reference to clinical heterogeneity. Possible sources of inconsistency (heterogeneity) in the results of studies may be differences in the study design, the quality of the studies (risk of bias), the population studied, and varying definitions of outcomes being assessed. 
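Where the included studies can be pooled quantitatively, the I-squared statistic mentioned above gives one concrete handle on statistical heterogeneity. The short sketch below shows a standard fixed-effect calculation of Cochran's Q and I-squared; the effect sizes and standard errors are invented for illustration, and the snippet is not part of the FORM documentation.

```python
import numpy as np

def cochran_q_and_i2(effects, std_errs):
    """Cochran's Q and I^2 for a set of study effect estimates (e.g. log hazard ratios)."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errs, dtype=float) ** 2   # inverse-variance weights
    pooled = np.sum(weights * effects) / np.sum(weights)     # fixed-effect pooled estimate
    q = np.sum(weights * (effects - pooled) ** 2)            # Cochran's Q
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0    # I^2 as a percentage
    return q, i2

# Invented log hazard ratios and standard errors from four hypothetical studies
q, i2 = cochran_q_and_i2([0.18, 0.25, 0.05, 0.40], [0.10, 0.12, 0.09, 0.15])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

In FORM, a figure like this would be weighed together with clinical heterogeneity before the consistency component is rated from A to D.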
Should results differ for certain subpopulations, this could then be reflected in the development of the recommendation.\nThe consistency component of the 'body of evidence' assesses the extent to which the findings are consistent across the included studies (including across a range of study populations and study designs). This allows users to assess whether the results are likely to be replicable or only likely to occur under certain conditions. Consistency may be assessed where appropriate as statistical heterogeneity (applying an I-squared statistic for example) or more likely will require the users to make a judgment about the overall direction of effects across multiple studies with reference to clinical heterogeneity. Possible sources of inconsistency (heterogeneity) in the results of studies may be differences in the study design, the quality of the studies (risk of bias), the population studied, and varying definitions of outcomes being assessed. Should results differ for certain subpopulations, this could then be reflected in the development of the recommendation.\n[SUBTITLE] 3. Clinical impact [SUBSECTION] Clinical impact is a measure of the likely benefit that application of the guideline would have across the target population, and involves a clinical judgement. Factors that need to be taken into account when estimating clinical impact include: the relevance of the evidence to the clinical question; the statistical precision and size (and clinical importance) of the effect reported in the evidence-base; the relevance of the effect to patients, compared to other management options; the duration of therapy required to achieve the effect; and the balance of risks and benefits to the patient group, including potential harm. A hypothetical example of incorporating both clinical importance and potential harm may be for the use of statins in the control of dyslipidaemia where there is a very large body of evidence with low risk of bias indicating a substantial reduction in risk of cardiovascular events. In this case a qualifying recommendation could be made to differentiate the small group of people who may experience adverse events as a result of statin therapy.\nClinical impact is arguably the most subjective of the five evidence components rated in the evidence statement. However, we have found in assisting many guideline development groups to produce clinical practice guidelines using the FORM process that it is often clearer for clinicians than it is for methodological experts. Clinicians seem to grasp the net benefit concept quite easily, although often robust discussions occur before a consensus is reached regarding the rating of this component. A strength of FORM is that these discussions contribute to formulating appropriate recommendations, and the final conclusion can be documented so that users of the guideline can see how the developers arrived at the recommendation.\nClinical impact is a measure of the likely benefit that application of the guideline would have across the target population, and involves a clinical judgement. 
Factors that need to be taken into account when estimating clinical impact include: the relevance of the evidence to the clinical question; the statistical precision and size (and clinical importance) of the effect reported in the evidence-base; the relevance of the effect to patients, compared to other management options; the duration of therapy required to achieve the effect; and the balance of risks and benefits to the patient group, including potential harm. A hypothetical example of incorporating both clinical importance and potential harm may be for the use of statins in the control of dyslipidaemia where there is a very large body of evidence with low risk of bias indicating a substantial reduction in risk of cardiovascular events. In this case a qualifying recommendation could be made to differentiate the small group of people who may experience adverse events as a result of statin therapy.\nClinical impact is arguably the most subjective of the five evidence components rated in the evidence statement. However, we have found in assisting many guideline development groups to produce clinical practice guidelines using the FORM process that it is often clearer for clinicians than it is for methodological experts. Clinicians seem to grasp the net benefit concept quite easily, although often robust discussions occur before a consensus is reached regarding the rating of this component. A strength of FORM is that these discussions contribute to formulating appropriate recommendations, and the final conclusion can be documented so that users of the guideline can see how the developers arrived at the recommendation.\n[SUBTITLE] 4. Generalisability [SUBSECTION] The assessment of generalisability involves determining how precisely the available body of evidence answers the clinical question that was asked. Issues to be considered include: how well the participants and settings of the included studies match the patient population being targeted by the guideline; the clinical setting where the recommendation will be implemented; and other factors such as the stage of the disease (e.g. early versus advanced), the duration of illness and (for diagnostic accuracy questions) the prevalence of the disease in the study population as compared to the target population for the guideline.\nThe assessment of generalisability involves determining how precisely the available body of evidence answers the clinical question that was asked. Issues to be considered include: how well the participants and settings of the included studies match the patient population being targeted by the guideline; the clinical setting where the recommendation will be implemented; and other factors such as the stage of the disease (e.g. early versus advanced), the duration of illness and (for diagnostic accuracy questions) the prevalence of the disease in the study population as compared to the target population for the guideline.\n[SUBTITLE] 5. Applicability [SUBSECTION] This component addresses whether the evidence base is relevant to the Australian health care system generally, or to more local settings for specific recommendations (such as rural areas or cities). Factors that may reduce the direct application of study findings to the Australian or more local settings include organisational factors (e.g. availability of trained staff, clinic time, specialised equipment, tests or other resources) and cultural factors (e.g. 
attitudes to health issues, including those that may affect compliance with the recommendation).", "The formulation and inclusion of recommendations is one of the defining differences between clinical practice guidelines and other evidence syntheses such as systematic reviews. A recent review of the adequacy of guideline recommendations has highlighted that over half of the recommendations (52.7%) give no indication of the strength of that recommendation [1].\nThe FORM process for formulation and grading of recommendations in clinical practice guidelines is logical, simple to use and intuitive. Its concurrent development with Australian levels of evidence [6] means that NHMRC can provide Australian (and other) guideline developers with an integrated framework for producing high-quality recommendations that represent best-practice and are implementable, acceptable and appropriate for the local health care system. The framework is also generic - the same processes can be used to formulate and grade recommendations for any type of clinical question, despite the differences in the type of evidence required to address that question (e.g.
questions of diagnostic test accuracy, risk factors for disease progression or poor prognosis). Furthermore, health service providers can implement the evidence-based course of action with appropriate modification in light of the individual patient's values and preferences.\nIn areas like public health where there may never be high-level evidence supporting the use of different interventions, practice recommendations developed using other grading systems would consistently rate a lower grade than is felt appropriate by experts in those fields. Examples of such areas include large-scale dietary questions, passive smoking or exposure to environmental chemicals. This does not occur using the FORM methodology. Using the NHMRC levels of evidence for aetiology questions as an alternative to the levels for intervention questions [6] allows the evidence base component of our grading system to be rated higher than would otherwise occur and this would be reflected in the overall grade of recommendation.\nThe extensive pilot of FORM and subsequent uptake by both new and experienced guideline developers has shown that the framework is feasible and accepted. The component approach allows transparency in how recommendations are formulated, with users of the guidelines able to explicitly see the various contributions of factors such as quality of the evidence and clinical impact. A further strength is that implementation and resourcing issues are considered separately, which means that effective but potentially costly interventions are not penalised with a downgraded recommendation as the developers of this system felt that users' willingness to pay will vary according to the context of use. Arguably the greater ability to differentiate strength of recommendation (four levels) in FORM offers more precision for developers.\n[SUBTITLE] Limitations [SUBSECTION] The UK National Institute for Health and Clinical Excellence (NICE) has decided to discontinue summary grades for recommendations, on the grounds that their previous grading system was being misinterpreted. They have stated that they are not sure that the GRADE system's approach to summary labels overcomes this [13]. We are not aware of this sort of misinterpretation occurring with FORM, and believe that the benefits of grading outweigh the harms as clinicians are striving for clear-cut health advice to assist with their individual decision-making. However, ongoing monitoring and periodic review of the application and use of FORM needs to be considered.\nRecommendation formulation and grading can be particularly challenging when the evidence is scant and/or poor, or conflicting. NICE has outlined some strategies to address these challenges, including using consensus when no evidence is found for a particular clinical question and highlighting gaps in the evidence where evidence is scant or poor.[14] NICE reminds us that whenever guidelines are unable to rely on a solid evidence base other methods used for formulating recommendations must be transparent and set out clearly in the guideline. 
A particular strength of an explicit process such as FORM is that the path from evidence to recommendation is made clear.\nCurrent evidence frameworks are grappling with how to integrate other forms of evidence needed to answer qualitative questions such as optimal quality of life, and we anticipate that FORM will need to be periodically reassessed in the light of international debate about levels of evidence and grading recommendations.\nThe purpose of clinical practice guidelines is to change or guide health professionals' behaviour and to improve quality of care. Therefore, the ultimate test of guidelines and the processes used to develop and implement guidelines will be improved health outcomes and improved systems. One way of facilitating this is by developing recommendations that are transparently produced through a process that is user-friendly, weighs up multiple concepts when formulating a course of action (much as the clinician does for an individual patient), and provides clear advice on the confidence or uncertainty associated with the recommended course of action.", "FORM provides a contemporary and internationally relevant structure within which clinical guideline developers can consider current literature related to specific clinical questions. It has been developed through a unique partnership of government, academic, private consultancy and clinical personnel with considerable experience in evidence-based practice and development of clinical practice guidelines. Our work with over 20 guideline developers during the piloting of the FORM process has demonstrated it to be a logical, simple to use and intuitive system for formulating and grading recommendations in clinical practice guidelines.", "The authors declare that they have no competing interest.
Meeting attendance fees were paid to the authors (or their institutions) by the NHMRC - a not-for-profit research organisation funded by the Australian Government.", "All authors were involved in the process of drafting and revising the grading process as described and all authors were involved in the preparation and final approval of the current manuscript.", "History of NHMRC Guidelines Assessment Register (GAR) and members of the Levels and Grades Working Party\nIn 2002, the NHMRC convened a register of methodological experts (Guidelines Assessment Register [GAR]) to assist external guideline developers in Australia through the process of identifying and synthesising evidence for guidelines in a way that complied with NHMRC specified requirements and would assist them in gaining NHMRC endorsement for their work. The main role of the GAR consultants was to oversee the methodological processes in external development of guidelines, particularly reviewing and classifying the quality of the evidence, and how these classifications correlated to the resultant recommendations. The expected outcome of the involvement of the GAR consultants was that consistently high quality guidelines would be submitted to HAC for approval, and that problems identified post hoc in guideline development could be pre-empted.\nKristina Coleman, Sarah Norris, Adele Weston (Health Technology Analysts Pty Ltd)\nKaren Grimmer-Somers, Susan Hillier (iCentre for Allied Health Evidence, University of South Australia)\nTracy Merlin (Adelaide Health Technology Assessment, Discipline of Public Health, University of Adelaide)\nPhilippa Middleton, Rebecca Tooher (ARCH, University of Adelaide)\nJanet Salisbury (Biotext Pty Ltd)", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/11/23/prepub\n", "NHMRC Evidence Statement (landscape Document). This file includes the FORM documents discussed - the matrix and trigger questions for guideline developers to complete the Evidence Statement.\nClick here for file" ]
[ null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
The contribution of psychological distress to socio-economic differences in cause-specific mortality: a population-based follow-up of 28 years.
21356041
Psychological factors associated with low social status have been proposed as one possible explanation for the socio-economic gradient in health. The aim of this study is to explore whether different indicators of psychological distress contribute to socio-economic differences in cause-specific mortality.
BACKGROUND
The data source is a nationally representative, repeated cross-sectional survey, "Health Behaviour and Health among the Finnish Adult Population" (AVTK). The survey results were linked with socio-economic register data from Statistics Finland (from the years 1979-2002) and mortality follow-up data up to 2006 from the Finnish National Cause of Death Register. The data included 32,451 men and 35,420 women (response rate 73.5%). Self-reported measures of depression, insomnia and stress were used as indicators of psychological distress. Socio-economic factors included education, employment status and household income. Mortality data consisted of unnatural causes of death (suicide, accidents and violence, and alcohol-related mortality) and coronary heart disease (CHD) mortality. Adjusted hazard ratios were calculated using the Cox regression model.
METHODS
In unnatural mortality, psychological distress accounted for some of the employment status (11-31%) and income level (4-16%) differences among both men and women, and for the differences related to the educational level (5-12%) among men; the educational level was associated statistically significantly with unnatural mortality only among men. Psychological distress had minor or no contribution to socio-economic differences in CHD mortality.
RESULTS
Psychological distress partly accounted for socio-economic disparities in unnatural mortality. Further studies are needed to explore the role and mechanisms of psychological distress associated with socio-economic differences in cause-specific mortality.
CONCLUSIONS
[ "Adolescent", "Adult", "Cause of Death", "Female", "Finland", "Follow-Up Studies", "Health Surveys", "Humans", "Male", "Middle Aged", "Mortality", "Proportional Hazards Models", "Social Class", "Stress, Psychological", "Young Adult" ]
3053248
null
null
Methods
[SUBTITLE] Data [SUBSECTION] The basic data source is the nationwide, repeated cross-sectional survey "Health Behaviour and Health among the Finnish Adult Population" (AVTK), conducted annually since 1978 by the National Public Health Institute of Finland [42]. The survey questionnaire is mailed to a random sample of 5,000 Finns aged 15-64 years. The simple random sample is drawn from the Finnish Population Information System, a computerized national register that contains basic on-line information about all Finnish citizens residing permanently in Finland. The survey years covered in this study are 1979-2002; the year 1985 has been excluded because personal identification codes are missing for that year. Respondents under 25 years of age have been excluded from this study because their socio-economic status is not yet established. We supplemented the survey data with education and household income variables from Statistics Finland register data for the years 1979-2002 and with follow-up data from the Finnish National Causes of Death Register for the years 1979-2006. The mortality data include the immediate, contributing and underlying causes of death, as well as the exact date of death. The data linkages were made using the personal identification codes assigned to all persons living permanently in Finland. After excluding respondents with missing data on the psychological distress variables (N = 1,129; 1.6%), the total number of cases was 67,871 (average annual response rate 73.5%), of whom 32,451 were men (average response rate 69%) and 35,420 were women (average response rate 78%). Our study was reviewed and supported by the Institutional Review Board of the National Institute for Health and Welfare (THL) (IRB 00007085, FWA 00014588).
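As a rough illustration of the cohort assembly described above, the sketch below links survey responses to register and mortality data by personal identification code and applies the stated exclusions. The file names, column names and year-matching logic are assumptions made for the example; they are not the actual AVTK or Statistics Finland file layouts, and the real data are not public.

```python
import pandas as pd

# Hypothetical extracts keyed by a person identifier
survey = pd.read_csv("avtk_survey_1979_2002.csv")         # one row per respondent, with person_id
registers = pd.read_csv("statfin_education_income.csv")   # education and household income by person_id and year
deaths = pd.read_csv("causes_of_death_1979_2006.csv")     # date and causes of death by person_id

cohort = (
    survey
    .query("survey_year != 1985")               # 1985 excluded: personal identification codes missing
    .query("age >= 25")                         # under-25s excluded: socio-economic status not established
    .merge(registers, on=["person_id", "survey_year"], how="left")
    .merge(deaths, on="person_id", how="left")  # mortality follow-up to the end of 2006
)

# Exclude respondents with missing psychological distress items (about 1.6% in the study)
cohort = cohort.dropna(subset=["depression", "insomnia", "stress"])
print(len(cohort), "respondents available for analysis")
```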
[SUBTITLE] Psychological distress variables [SUBSECTION] We questioned the respondents about 14 health problems or symptoms, among them depression and insomnia, by the following single question: "Have you had any of the following symptoms or health problems during the past 30 days?" (yes, if so). Stress was addressed in a separate question on a four-point scale (1 = unbearable situation, 4 = no stress at all); respondents were asked if they had symptoms of tension or had been under great stress or considerable strain during the past 30 days. We considered unbearable stress to have the most negative effect on health and mortality and to be associated with social disadvantage; therefore, we classified those reporting an unbearable situation as having stress. We investigated the correlation of the psychological distress measures in another paper using the same data and showed that single-question depression (males r = .58; females r = .55) and insomnia (males and females r = .38) correlated with the general mental health inventory (MHI-5) [30]. In this study, self-reported psychological distress is taken to reflect the subjective experience of psychological well-being, and it is used to explore the role of psychological distress in generating socio-economic differences in mortality at an extensive population level [43].
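A minimal sketch of how these three indicators might be coded for analysis is given below; the data-frame column names and response codings (1 = symptom reported in the past 30 days, and 1 = 'unbearable situation' on the stress item) are assumptions made for the example, not the actual AVTK coding scheme.

```python
import pandas as pd

def code_distress(df: pd.DataFrame) -> pd.DataFrame:
    """Derive binary distress indicators from the symptom checklist and the stress item."""
    out = df.copy()
    # Symptom checklist items: assume 1 = symptom reported for the past 30 days, 0 = not reported
    out["depression_bin"] = (out["depression"] == 1).astype(int)
    out["insomnia_bin"] = (out["insomnia"] == 1).astype(int)
    # Stress item on a 1-4 scale; only 'unbearable situation' (coded 1) counts as stress
    out["stress_bin"] = (out["stress"] == 1).astype(int)
    return out

# Example
example = pd.DataFrame({"depression": [1, 0], "insomnia": [0, 1], "stress": [1, 3]})
print(code_distress(example))
```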
[SUBTITLE] Socio-economic variables [SUBSECTION] Socio-economic variables included education and household income from the register data and employment status from the survey questionnaire. We collected the register data on education and income from the 1980 statistics for the survey years 1979-1983, from the 1985 statistics for the survey years 1984-1986 and annually from 1987 until 2000; for the survey years 2001-2002, we used the socio-economic data from the year 2000. The educational level was derived from the Register of Educational Qualifications and Degrees, which follows, as far as possible, the principles and categories of the revised UNESCO International Standard Classification of Education 1997 (ISCED 1997). We divided the respondents' educational qualifications into three categories: the lowest level included respondents with no education, an unknown education or lower secondary education; the intermediate level included respondents with upper secondary or post-secondary non-tertiary education; and the highest level included respondents with tertiary education. Household income has been found to be more strongly and consistently associated with health than individual income [44]. We calculated household income as the taxable total gross income of a household per year, without transfer payments, divided by the number of consumption units on the OECD equivalence scale. The first adult in the household was weighted as 1.0, other adults as 0.7 and children under 18 as 0.5 [45]. We further divided household income per consumption unit into tertiles within each study year in order to keep the variable comparable over time. Employment status during most of the year consisted of the categories employed and unemployed; the additional categories (housewives/husbands, students and retired people) were excluded from the analyses concerning mortality differences by employment status.
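The household income measure lends itself to a short worked example. The sketch below applies the weights given above (1.0 for the first adult, 0.7 for each additional adult, 0.5 for each child under 18) and then forms year-specific tertiles; the variable names are assumptions for illustration.

```python
import pandas as pd

def equivalised_income(gross_income, n_adults, n_children):
    """Household income per consumption unit: first adult 1.0, other adults 0.7, children under 18 0.5."""
    units = 1.0 + 0.7 * (n_adults - 1) + 0.5 * n_children
    return gross_income / units

# Two adults and one child sharing a gross household income of 30,000 per year:
# 30,000 / (1.0 + 0.7 + 0.5) = 30,000 / 2.2, roughly 13,636 per consumption unit
print(round(equivalised_income(30_000, 2, 1)))

def add_income_tertiles(df: pd.DataFrame) -> pd.DataFrame:
    """Year-specific income tertiles, keeping the variable comparable over time."""
    df = df.copy()
    df["income_tertile"] = (
        df.groupby("survey_year")["equivalised_income"]
          .transform(lambda s: pd.qcut(s, 3, labels=["low", "middle", "high"]))
    )
    return df
```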
[SUBTITLE] Mortality [SUBSECTION] In this study we analysed unnatural causes of death, namely suicide, accidents and violence, and alcohol-related mortality, and, for purposes of comparison, coronary heart disease (CHD) mortality as a general cause of death. We identified the causes of death using the International Classification of Diseases (ICD, WHO; 1974, 1978, 1992): the 8th revision was used for the years 1979-1986, the 9th revision for the years 1987-1995 and the 10th revision for the years from 1996 onwards. The codes for suicide were E950-E959 (ICD-8 and ICD-9) and X60-X84 (ICD-10). Accidents and violence comprised codes E800-E859, E861-E949 and E960-E999 (ICD-8), E800-E849, E852-E949 and E960-E999 (ICD-9), and V00-V99, W00-W99, X00-X44, X46-X59, X85-X99 and Y00-Y89 (ICD-10). Alcohol-related deaths included injuries, diseases and poisonings in which alcohol was the main cause of death (ICD-8 codes 291, 303, 571.0, 577 and E860; ICD-9 codes 291, 303, 357.5, 425.5, 535.3, 571.0-571.3, 577.0D-577.0F, 577.1C-577.1D and E851; ICD-10 codes F10, G31.2, G62.1, G72.1, I42.6, K29.2, K70, K86.0, O35.4 and X45). We grouped suicides, accidents and violence, and alcohol-related deaths together as 'unnatural' causes of death. CHD mortality was defined as codes 410-414 in ICD-8 and ICD-9 and I20-I25 in ICD-10.
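As a simplified sketch of how such code lists can be applied (ICD-10 only; the ICD-8 and ICD-9 deaths would be mapped analogously, and this helper is an assumption for illustration rather than the registry's own routine):

def icd10_cause_group(code):
    # Map an underlying-cause ICD-10 code to the outcome groups used in this study.
    code = code.upper().replace(".", "")
    letter, num = code[0], int(code[1:3])
    alcohol = {"F10", "G312", "G621", "G721", "I426", "K292", "K70", "K860", "O354", "X45"}
    if code[:4] in alcohol or code[:3] in alcohol:
        return "alcohol-related"
    if letter == "X" and 60 <= num <= 84:
        return "suicide"
    if (letter in ("V", "W")
            or (letter == "X" and (num <= 44 or 46 <= num <= 59 or 85 <= num <= 99))
            or (letter == "Y" and num <= 89)):
        return "accidents and violence"
    if letter == "I" and 20 <= num <= 25:
        return "CHD"
    return "other"

def is_unnatural(code):
    return icd10_cause_group(code) in {"suicide", "accidents and violence", "alcohol-related"}

# Example: an alcoholic liver disease death counts as unnatural, an infarction does not
assert is_unnatural("K70.3") and icd10_cause_group("I21.0") == "CHD"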
[SUBTITLE] Statistical methods [SUBSECTION] For preliminary analyses, we examined the associations between psychological distress and socio-economic position with a logistic regression model reporting odds ratios (OR) with 95% confidence intervals (CI) (see Additional file 2), and between psychological distress and mortality with a Cox proportional hazard model reporting hazard ratios (HR) with 95% confidence intervals (CI) (see Additional file 3) [46].
We conducted the main analyses using the Cox proportional hazard model (Tables 1, 2, 3). All analyses included age and study year as covariates; adjusting for study year accounted for variation over time. To take into account the non-linear association of age with unnatural mortality, the mortality analyses were also adjusted for age squared. We carried out all the statistical analyses separately for men and women, using the statistical package SPSS 17.0 for Windows (SPSS Corporation 2008).
Table 1. The effect of adjusting for self-reported psychological distress on educational level differences in excess mortality: hazard ratios (95% CIs) and percent reduction (%) in mortality among those with an intermediate or low education compared to those with a high education, after adjusting for psychological distress. *The confounders: age, age squared, study year.
Table 2. The effect of adjusting for self-reported psychological distress on employment status differences in excess mortality: hazard ratios (95% CIs) and percent reduction (%) in mortality among the unemployed compared to the employed, after adjusting for self-reported psychological distress. *The confounders: age, age squared, study year.
Table 3. The effect of adjusting for self-reported psychological distress on household income differences in excess mortality: hazard ratios (95% CIs) and percent reduction (%) in mortality among the intermediate and low household income levels compared to the high income level, after adjusting for self-reported psychological distress. *The confounders: age, age squared, study year.
In the main Cox proportional hazard analyses, we first fitted the base model to estimate the relative differences in mortality by the socio-economic variables, adjusted for age, age squared and study year. In the subsequent models, we additionally adjusted for each of the psychological distress variables separately and, finally, for all of them simultaneously, to see whether these variables contributed to the socio-economic disparities in mortality. To assess the impact of each adjustment on the base model hazard ratio, we calculated the percentage reduction of the HR as [(base model HR - adjusted model HR)/(base model HR - 1)] × 100 [7,47]. We interpreted this reduction in the hazard ratio as indicating how much of the association between each socio-economic variable and mortality was accounted for by the measures of psychological distress.
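To make the attenuation logic concrete, here is a minimal sketch using simulated data and the Python lifelines package (the study itself used SPSS); the exposure, covariate and distress variable names are hypothetical, and with random data the resulting numbers are meaningless - only the calculation is of interest:

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "followup_years": rng.exponential(15, n),    # time from survey to death or censoring
    "died":           rng.binomial(1, 0.10, n),  # event indicator for one cause group
    "age":            rng.uniform(25, 64, n),
    "study_year":     rng.integers(1979, 2003, n),
    "unemployed":     rng.binomial(1, 0.20, n),  # socio-economic exposure
    "depression":     rng.binomial(1, 0.15, n),  # one psychological distress indicator
})
df["age_sq"] = df["age"] ** 2                    # non-linear age term, as in the base model

def hazard_ratio(data, covariates, term="unemployed"):
    cph = CoxPHFitter()
    cph.fit(data[covariates + ["followup_years", "died"]],
            duration_col="followup_years", event_col="died")
    return float(np.exp(cph.params_[term]))

base_covs = ["unemployed", "age", "age_sq", "study_year"]
hr_base     = hazard_ratio(df, base_covs)
hr_adjusted = hazard_ratio(df, base_covs + ["depression"])

# Percentage reduction of the excess hazard: [(HR_base - HR_adjusted) / (HR_base - 1)] x 100
reduction = (hr_base - hr_adjusted) / (hr_base - 1) * 100
print(f"base HR {hr_base:.2f}, adjusted HR {hr_adjusted:.2f}, reduction {reduction:.0f}%")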
[ "Background", "Aim of the study", "Data", "Psychological distress variables", "Socio-economic variables", "Mortality", "Statistical methods", "Results", "Contribution of psychological distress to educational differences in mortality", "Contribution of psychological distress to employment status differences in mortality", "Contribution of psychological distress to household income level differences in mortality", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Socio-economic inequalities in mortality are well reported in Western European countries [1-5]. Excess coronary heart disease mortality [6,7] as well as unnatural mortality, namely suicide [8-10], alcohol-related deaths [9,11,12], and accidental and violent causes of death [13], have been reported in lower socio-economic groups. Socio-economic variation is also significant in Finland in these specific causes of death [10,11,14,15]. Over the past 40 years, the socio-economic inequalities in mortality have widened in several countries [16,17].\nIn their search for new explanations for socio-economic disparities, scholars exploring the links between socio-economic position and health are moving beyond the material and behavioural factors, which do not fully account for these disparities. Psychological indicators, such as negative emotions (including depression and anxiety) [18], stress [19,20] and insomnia [21], have been proposed as a plausible explanation for the socio-economic gradient in health. It is suggested that socio-economic differences in health are at least partly mediated by psychological distress stemming from socio-economic deprivation. Supposedly, not only absolute deprivation, but also relative deprivation, that is, one's position in the hierarchy vis-à-vis others, is important and associated with health [22].\nIn Williams' [23] conceptual framework, psychosocial factors (consisting of risky health practices, social ties, perception of control, stress and affective states) are seen as critical mediators between social structure and health status. Marmot and Brunner [22] proposed a model in which social structure is linked to health and disease via material, psychosocial and behavioural pathways. These approaches view psychological distress not as the property of an individual but as the response of the individual to the external environment acting upon him or her. According to Schnittker [24], the resources provided by socio-economic position are related to the inferences individuals draw about the self, and these psychological states might affect physical and mental health. Wilkinson states that the psychological pain resulting from low social status affects patterns of violence, disrespect, shame, poor social relations and depression [25].\nAn association between lower socio-economic position and poor mental health [26-30] has been reported, as well as an excess mortality associated with psychological distress, especially with death from unnatural causes and cardiovascular disease [31-37]. The evidence regarding the potential mediating role of psychological factors on the relationship between socio-economic position and health is not unambiguous. Schnittker [24] examined whether four psychological factors (self-esteem, mastery, neuroticism and depressive symptoms) mediated the relationship between socio-economic position and three indicators of health (self-rated health, functional limitations and chronic conditions) and found only weak mediating effects. The results provided the strongest evidence for mediation in cases of neuroticism or depressive symptoms. Marmot et al. [38] found evidence for the mediation of psychological well-being measures (control/self-efficacy) on the association between education and health. 
Likewise, a Hungarian population study found that depressive symptom severity mediated between relative socio-economic deprivation and higher self-rated morbidity rates [36].
To our knowledge, few studies have examined the question of whether psychological factors contribute to socio-economic differences in cause-specific mortality. It has been proposed that psychological factors play a mediating role in the socio-economic differences associated with cardiovascular mortality [7,39]. In a U.S. study [40], psychological distress as measured by hopelessness, depression and life dissatisfaction was not a significant contributor to socio-economic disparities in all-cause mortality. Van Oort et al. [41] found that, when independent of material factors, psychosocial factors contribute little to the explanation of educational inequalities in all-cause mortality.
[SUBTITLE] Aim of the study [SUBSECTION] The aim of this study was to explore whether self-reported psychological distress, measured by depression, stress and insomnia, mediates socio-economic differences (indicated by educational level, employment status and household income) in unnatural (suicide, accidents and violence, and alcohol-related) and coronary heart disease (CHD) mortality over the 28-year follow-up (see Additional file 1). In other words, the aim of the study is to examine the contribution of psychological distress to relative differences in cause-specific mortality by socio-economic position.
[SUBTITLE] Data [SUBSECTION] The basic data source is the nationwide, repeated cross-sectional survey, "Health Behaviour and Health among the Finnish Adult Population", conducted annually since 1978 by the National Public Health Institute of Finland [42]. The survey questionnaire is mailed to a random sample of 5,000 Finns aged 15-64 years. The simple random sample is drawn from the Finnish Population Information System, a computerized national register that contains basic on-line information about all Finnish citizens residing permanently in Finland. The survey years covered in this study are 1979-2002; the year 1985 was excluded because personal identification codes were missing for that year.
Respondents under 25 years of age were also excluded because their socio-economic status is not yet established.
We supplemented the survey data with education and household income variables from Statistics Finland register data for the years 1979-2002 and with follow-up data from the Finnish National Causes of Death Register for the years 1979-2006. The mortality data include the immediate, contributing and underlying causes of death, as well as the exact date of death. The data linkages were made using the personal identification codes assigned to all persons living permanently in Finland. After excluding respondents with missing data on the psychological distress variables (N = 1129, 1.6%), the total number of cases was 67,871 (average annual response rate 73.5%), of which 32,451 were men (average response rate 69%) and 35,420 were women (average response rate 78%). The study was reviewed and is supported by the Institutional Review Board of the National Institute for Health and Welfare (THL) (IRB 00007085, FWA 00014588).
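Schematically, and only as an assumption about the mechanics (the real linkage was performed with personal identification codes inside the register environment, not with files like these), the linkage and exclusion steps amount to keyed joins and a simple filter:

import pandas as pd

# Hypothetical extracts keyed by a pseudonymised personal identification code
survey    = pd.DataFrame({"pid": [1, 2, 3], "survey_age": [34, 52, 22], "depression": [0, 1, 0]})
registers = pd.DataFrame({"pid": [1, 2, 3], "education": ["high", "low", "intermediate"],
                          "equiv_income": [30500.0, 14200.0, 21800.0]})
deaths    = pd.DataFrame({"pid": [2], "death_date": ["1998-05-14"], "underlying_icd10": ["K70"]})

# Left joins keep every respondent; a missing death record means the person was censored at end of follow-up
linked = (survey
          .merge(registers, on="pid", how="left")
          .merge(deaths, on="pid", how="left"))

# Exclusion used in this study: respondents under 25 years of age at the survey are dropped
linked = linked[linked["survey_age"] >= 25]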
[SUBTITLE] Results [SUBSECTION] Table 4 describes the follow-up data by socio-economic position. The total number of deaths from unnatural causes was 716 for men and 222 for women, while the numbers for CHD mortality were 1,389 for men and 635 for women. Fourteen per cent of men and 18% of women reported depression, 18% of men and 19% of women reported insomnia, and 2.6% of men and 2.4% of women reported unbearable stress (not shown in the table).
Table 4. Description of the mortality follow-up data by socio-economic position among men and women.
The preliminary adjusted logistic regression analysis confirmed the associations between low socio-economic position and psychological distress for all indicators (see Additional file 2). The second preliminary analysis (see Additional file 3), based on the Cox proportional hazard model, demonstrated statistically significant hazard ratios for psychological distress for both unnatural and coronary heart disease mortality. Hazard ratios for psychological distress were higher for unnatural causes of death than for CHD mortality.
[SUBTITLE] Contribution of psychological distress to educational differences in mortality [SUBSECTION] In the main Cox proportional hazard model analyses, we examined the contribution of the psychological distress variables to excess mortality by socio-economic position for each of the socio-economic variables separately (Tables 1-3). The hazard ratios for educational level in Table 1 show the effect of adjusting for the psychological distress variables on the relative differences by educational level in unnatural and CHD mortality among men and women. In the base model for men, we found excess unnatural mortality in the intermediate and lowest educational levels, and excess CHD mortality in the lowest level of education.
However, adjusting for the measures of psychological distress, both separately and simultaneously, resulted in only a very modest reduction in the relative mortality differences by educational level for unnatural causes of death (5-12%), and essentially no change for CHD mortality (0-5%). In women, the level of education was statistically significantly associated only with CHD mortality, where the contribution of the psychological distress variables was equivalent to that among men.
[SUBTITLE] Contribution of psychological distress to employment status differences in mortality [SUBSECTION] In the base model presented in Table 2, unemployment was associated with increased mortality in both genders. For unnatural causes of death, adjusting for the psychological distress variables separately and simultaneously accounted for 11-31% of the excess mortality among unemployed men and 11-26% among unemployed women. Adjusting for all of the measures of psychological distress combined resulted in further reductions in the excess risk of unnatural mortality among the unemployed. Adjusting for psychological distress attenuated the association between employment status and CHD mortality by at most 9% among both men and women.
[SUBTITLE] Contribution of psychological distress to household income level differences in mortality [SUBSECTION] In the base model for mortality by household income level, we found a higher risk of both unnatural and CHD mortality in the lowest income group compared to the highest income group among both men and women (Table 3). After controlling for the psychological distress variables separately or combined, psychological distress accounted for 4-16% of the differences in unnatural mortality at the lowest income level in both men and women.
Again, the effect of adjusting for the psychological distress measures on CHD mortality differences by income level was weak (0-5%).
[SUBTITLE] Discussion [SUBSECTION] Based on our results, we can conclude that psychological distress partly accounted for the employment status and household income level differences in unnatural mortality (suicide, accidents and violence, and alcohol-related mortality) in both genders, and for the educational level differences in unnatural mortality among men; among women, no significant educational differences in unnatural mortality were found in the first place.
The contribution of the psychological distress variables to socio-economic differences in CHD mortality, on the other hand, was negligible.
The strength of our study is the nationally representative data from repeated population surveys, supplemented with extensive socio-economic register data and national causes of death register data, providing a prospective study design with a 28-year follow-up. However, the cross-sectional measurement of the socio-economic factors and psychological distress variables allows no conclusions about the direction of the association, that is, health selection versus causation, both of which may contribute to the associations between socio-economic position and psychological factors [18].
The response rate of the survey is similar to that of other population surveys [48]. However, in our non-respondent analysis of these data [49] we found lower response rates among the less educated. Total and cause-specific (for example, alcohol, external causes, suicide) excess mortality rates were higher among survey non-respondents, and this is partly explained by educational and income differences between respondents and non-respondents [50]. These results indicate that non-respondents have more severe illnesses, mental health problems and depression as well as unhealthy lifestyles, such as smoking and alcohol use. They also indicate that comparisons of results across socio-economic groups may be biased and, therefore, that the socio-economic differences may actually be stronger than those observed in these data. Additional analyses of respondents with missing data on the psychological distress variables (N = 1129, 1.6%), although based on a relatively small number, showed that those with missing data on the psychological distress measures were also more likely to be in the lower SES groups.
One principal limitation of the study is that the measures of psychological distress are very simple self-reported single-item questions. These measures may cover a variety of transient or chronic psychological symptoms, ranging in meaning from a temporary decrease in psychological well-being to deeply impairing, even life-threatening disorders. Therefore, the main purpose of these indicators is not to detect clinical disorders but to reflect the subjective experience of mental health, and to study mental well-being at an extensive population level [43]. Nevertheless, the single-item psychological distress variables demonstrated significant associations with cause-specific mortality, indicating that self-reported psychological distress has implications for health. Another limitation concerning the measures used in this study is the unemployed versus employed classification, which is a crude measure of employment status.
In previous studies, psychological factors have only weakly or moderately mediated the relationship between SES and all-cause mortality [40,41]. In this study, we analysed three different measures of psychological distress and found some mediation of the association between SES and unnatural mortality, and weak mediation of CHD mortality differences by employment status. It has been proposed that the excess CHD mortality among those in a lower socio-economic position depends on socio-economic differences in behavioural and biological risk factors, such as smoking, blood pressure and serum cholesterol levels [51]. A previous study based on the same data examined health behaviours as explanations for educational level differences in CHD mortality [47].
Health behaviours, most importantly smoking, physical activity and vegetable intake, explained about 50% of the educational differences in CHD mortality among men, but did not explain much of the differences among women. Compared with these results, the psychological factors examined in the present study did not add to the contribution made by behavioural factors in explaining socio-economic differences in CHD mortality. However, the finding that psychological distress explains some of the inequalities in suicide, accidents and violence, and alcohol-related mortality indicates that, for these specific causes of death, poor mental health has more severe consequences in the lower socio-economic status groups than in the higher SES groups. This is possibly due to poorer strategies for coping with psychological distress in the lower SES groups, including risky behaviour and, above all, heavy alcohol consumption, which may be aimed at relieving psychological symptoms.
Theories and models that propose psychosocial factors as mediators in the SES-health relationship also emphasize that health status is the result of complex causes: health behaviour, socio-demographic factors and early environmental, genetic, biomedical and medical factors are all seen as related to this phenomenon. Our results suggest that psychological distress may explain some of the cause-specific mortality disparities between socio-economic groups.
[SUBTITLE] Conclusions [SUBSECTION] Psychological distress partly accounted for socio-economic disparities in unnatural mortality, but notably less so for CHD mortality. Improving psychological well-being in lower socio-economic groups may reduce some of the socio-economic disparities in cause-specific mortality. In particular, the possible mental health problems of the unemployed should be taken into account when searching for means to decrease these inequalities. Further studies are needed to explicate the role and mechanisms of psychological distress in generating socio-economic differences, particularly in cause-specific mortality.
[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.
[SUBTITLE] Authors' contributions [SUBSECTION] KTM processed the data, carried out the statistical analyses and drafted the manuscript. TKML, TMH and AIO were involved in interpreting the data and drafting the manuscript. TPM supervised the first author and was involved in interpreting the data and drafting the manuscript. RSP was involved in data management, coordinated the study, supervised the first author and was involved in drafting the manuscript. All authors revised the text critically for important intellectual content and read and approved the final manuscript.
[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2458/11/138/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Aim of the study", "Methods", "Data", "Psychological distress variables", "Socio-economic variables", "Mortality", "Statistical methods", "Results", "Contribution of psychological distress to educational differences in mortality", "Contribution of psychological distress to employment status differences in mortality", "Contribution of psychological distress to household income level differences in mortality", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "Socio-economic inequalities in mortality are well reported in Western European countries [1-5]. Excess coronary heart disease mortality [6,7] as well as unnatural mortality, namely suicide [8-10], alcohol-related deaths [9,11,12], and accidental and violent causes of death [13], have been reported in lower socio-economic groups. Socio-economic variation is also significant in Finland in these specific causes of death [10,11,14,15]. Over the past 40 years, the socio-economic inequalities in mortality have widened in several countries [16,17].\nIn their search for new explanations for socio-economic disparities, scholars exploring the links between socio-economic position and health are moving beyond the material and behavioural factors, which do not fully account for these disparities. Psychological indicators, such as negative emotions (including depression and anxiety) [18], stress [19,20] and insomnia [21], have been proposed as a plausible explanation for the socio-economic gradient in health. It is suggested that socio-economic differences in health are at least partly mediated by psychological distress stemming from socio-economic deprivation. Supposedly, not only absolute deprivation, but also relative deprivation, that is, one's position in the hierarchy vis-à-vis others, is important and associated with health [22].\nIn Williams' [23] conceptual framework, psychosocial factors (consisting of risky health practices, social ties, perception of control, stress and affective states) are seen as critical mediators between social structure and health status. Marmot and Brunner [22] proposed a model in which social structure is linked to health and disease via material, psychosocial and behavioural pathways. These approaches view psychological distress not as the property of an individual but as the response of the individual to the external environment acting upon him or her. According to Schnittker [24], the resources provided by socio-economic position are related to the inferences individuals draw about the self, and these psychological states might affect physical and mental health. Wilkinson states that the psychological pain resulting from low social status affects patterns of violence, disrespect, shame, poor social relations and depression [25].\nAn association between lower socio-economic position and poor mental health [26-30] has been reported, as well as an excess mortality associated with psychological distress, especially with death from unnatural causes and cardiovascular disease [31-37]. The evidence regarding the potential mediating role of psychological factors on the relationship between socio-economic position and health is not unambiguous. Schnittker [24] examined whether four psychological factors (self-esteem, mastery, neuroticism and depressive symptoms) mediated the relationship between socio-economic position and three indicators of health (self-rated health, functional limitations and chronic conditions) and found only weak mediating effects. The results provided the strongest evidence for mediation in cases of neuroticism or depressive symptoms. Marmot et al. [38] found evidence for the mediation of psychological well-being measures (control/self-efficacy) on the association between education and health. 
Likewise, a Hungarian population study found that depressive symptom severity mediated between relative socio-economic deprivation and higher self-rated morbidity rates [36].\nTo our knowledge, few studies have examined the question of whether psychological factors contribute to socio-economic differences in cause-specific mortality. It has been proposed that psychological factors play a mediating role in the socio-economic differences associated with cardiovascular mortality [7,39]. In a U.S. study [40], psychological distress as measured by hopelessness, depression and life dissatisfaction was not a significant contributor to socio-economic disparities in all-cause mortality. Van Oort et al. [41] found that, when independent of material factors, psychosocial factors contribute little to the explanation of educational inequalities in all-cause mortality.\n[SUBTITLE] Aim of the study [SUBSECTION] The aim of this study was to explore whether self-reported psychological distress, measured by depression, stress and insomnia, mediates socio-economic differences (indicated by educational level, employment status and household income) in unnatural (suicide, accidents and violence, and alcohol-related mortality) and coronary heart disease (CHD) mortality in the 28-year follow-up (see Additional file 1). In other words, the aim of the study is to examine the contribution of psychological distress to relative differences in cause-specific mortality by socio-economic position.\nThe aim of this study was to explore whether self-reported psychological distress, measured by depression, stress and insomnia, mediates socio-economic differences (indicated by educational level, employment status and household income) in unnatural (suicide, accidents and violence, and alcohol-related mortality) and coronary heart disease (CHD) mortality in the 28-year follow-up (see Additional file 1). In other words, the aim of the study is to examine the contribution of psychological distress to relative differences in cause-specific mortality by socio-economic position.", "The aim of this study was to explore whether self-reported psychological distress, measured by depression, stress and insomnia, mediates socio-economic differences (indicated by educational level, employment status and household income) in unnatural (suicide, accidents and violence, and alcohol-related mortality) and coronary heart disease (CHD) mortality in the 28-year follow-up (see Additional file 1). In other words, the aim of the study is to examine the contribution of psychological distress to relative differences in cause-specific mortality by socio-economic position.", "[SUBTITLE] Data [SUBSECTION] The basic data source is the nationwide, repeated cross-sectional survey, \"Health Behaviour and Health among the Finnish Adult Population\", conducted annually since 1978 by the National Public Health Institute of Finland [42]. The survey questionnaire is mailed to a random sample of 5,000 Finns aged 15-64 years. The simple random sample was conducted by The Finnish Population Information System which is a computerized national register that contains basic on-line information about all Finnish citizens residing permanently in Finland. The survey years covered in this study are 1979-2002. The year 1985 has been excluded from the survey due to missing personal identification codes for that year. 
Respondents under 25 years of age have been excluded from this study because their socio-economic status is not established.\nWe have supplemented the survey data with education and household income variables from Statistics Finland Register Data from the years 1979-2002 and the Finnish National Causes of Death Register follow-up data from the years 1979-2006. The mortality data include immediate, contributing and underlying causes of death, as well as the exact date of death. The data linkages were derived by using the personal identification codes assigned to all persons living permanently in Finland. After excluding the missing data on psychological distress variables (N = 1129, 1,6%), the total number of cases was 67871 (average annual response rate 73.5%), out of which 32451 were men (average response rate 69%) and 35420 were women (average response rate 78%). Our study is reviewed and supported by The Institutional Review Board of National Institute for Health and Welfare, (THL) (IRB 00007085, FWA 00014588).\nThe basic data source is the nationwide, repeated cross-sectional survey, \"Health Behaviour and Health among the Finnish Adult Population\", conducted annually since 1978 by the National Public Health Institute of Finland [42]. The survey questionnaire is mailed to a random sample of 5,000 Finns aged 15-64 years. The simple random sample was conducted by The Finnish Population Information System which is a computerized national register that contains basic on-line information about all Finnish citizens residing permanently in Finland. The survey years covered in this study are 1979-2002. The year 1985 has been excluded from the survey due to missing personal identification codes for that year. Respondents under 25 years of age have been excluded from this study because their socio-economic status is not established.\nWe have supplemented the survey data with education and household income variables from Statistics Finland Register Data from the years 1979-2002 and the Finnish National Causes of Death Register follow-up data from the years 1979-2006. The mortality data include immediate, contributing and underlying causes of death, as well as the exact date of death. The data linkages were derived by using the personal identification codes assigned to all persons living permanently in Finland. After excluding the missing data on psychological distress variables (N = 1129, 1,6%), the total number of cases was 67871 (average annual response rate 73.5%), out of which 32451 were men (average response rate 69%) and 35420 were women (average response rate 78%). Our study is reviewed and supported by The Institutional Review Board of National Institute for Health and Welfare, (THL) (IRB 00007085, FWA 00014588).\n[SUBTITLE] Psychological distress variables [SUBSECTION] We questioned the respondents about 14 health problems or symptoms, among them depression and insomnia, by the following single question: \"Have you had any of the following symptoms or health problems during past 30 days?\" (Yes, if so). Stress was addressed in a separate question on a four-point scale (1 = unbearable situation, 4 = no stress at all); respondents were asked if they had symptoms of tension or had been under great stress or considerable strain during the past 30 days. We considered unbearable stress as having the most negative effect on health and mortality and being associated with social disadvantage. Therefore, we classified those reporting an unbearable situation as having stress. 
We investigated the correlation of the psychological distress measures in another paper using this same data and showed that single-question depression (males r = .58; females r = .55) and insomnia (males and females r = .38) correlated with the general mental health inventory (MHI-5) [30]. In this study, self-reported psychological distress is thought to reflect the subjective experience of psychological well-being, and it is used to explore the role of psychological distress in generating socio-economic differences in mortality at an extensive population level [43].\nWe questioned the respondents about 14 health problems or symptoms, among them depression and insomnia, by the following single question: \"Have you had any of the following symptoms or health problems during past 30 days?\" (Yes, if so). Stress was addressed in a separate question on a four-point scale (1 = unbearable situation, 4 = no stress at all); respondents were asked if they had symptoms of tension or had been under great stress or considerable strain during the past 30 days. We considered unbearable stress as having the most negative effect on health and mortality and being associated with social disadvantage. Therefore, we classified those reporting an unbearable situation as having stress. We investigated the correlation of the psychological distress measures in another paper using this same data and showed that single-question depression (males r = .58; females r = .55) and insomnia (males and females r = .38) correlated with the general mental health inventory (MHI-5) [30]. In this study, self-reported psychological distress is thought to reflect the subjective experience of psychological well-being, and it is used to explore the role of psychological distress in generating socio-economic differences in mortality at an extensive population level [43].\n[SUBTITLE] Socio-economic variables [SUBSECTION] Socio-economic variables included education and household income from the register data and employment status from the survey questionnaire. We collected the register data for education and income from 1980 statistics for the survey years 1979-1983, from 1985 statistics for the survey years 1984-1986 and annually from 1987 until 2000. For the survey years 2001-2002, we collected the socio-economic data from the year 2000.\nThe educational level was derived from the Register of Educational Qualifications and Degrees, which follows, as far as possible, the principles and categories of the revised UNESCO International Standard Classification of Education 1997 (ISCED 1997). We divided the respondent's educational qualification into three categories: the lowest level included respondents with no education, an unknown education or with lower secondary education; the intermediate level included respondents with upper secondary or post-secondary non-tertiary education; and the highest level included respondents with tertiary education.\nHousehold income has been found to be more strongly and consistently associated with health than individual income. [44] We calculated household income as taxable total gross income for a household per year without transfer payment, divided by the consumption unit of the OECD equivalence scale. The first adult in the household was weighted as 1.0, other adults as 0.7 and children under 18 as 0.5 [45]. 
We further divided household income per consumption unit into tertiles by every study year, in order to keep the comparability of the variable over time.\nEmployment status during most of the year consisted of the categories of employed and unemployed. Additional categories, that is housewives/husbands, students and retired people were excluded from the analyses concerning mortality differences by employment status.\nSocio-economic variables included education and household income from the register data and employment status from the survey questionnaire. We collected the register data for education and income from 1980 statistics for the survey years 1979-1983, from 1985 statistics for the survey years 1984-1986 and annually from 1987 until 2000. For the survey years 2001-2002, we collected the socio-economic data from the year 2000.\nThe educational level was derived from the Register of Educational Qualifications and Degrees, which follows, as far as possible, the principles and categories of the revised UNESCO International Standard Classification of Education 1997 (ISCED 1997). We divided the respondent's educational qualification into three categories: the lowest level included respondents with no education, an unknown education or with lower secondary education; the intermediate level included respondents with upper secondary or post-secondary non-tertiary education; and the highest level included respondents with tertiary education.\nHousehold income has been found to be more strongly and consistently associated with health than individual income. [44] We calculated household income as taxable total gross income for a household per year without transfer payment, divided by the consumption unit of the OECD equivalence scale. The first adult in the household was weighted as 1.0, other adults as 0.7 and children under 18 as 0.5 [45]. We further divided household income per consumption unit into tertiles by every study year, in order to keep the comparability of the variable over time.\nEmployment status during most of the year consisted of the categories of employed and unemployed. Additional categories, that is housewives/husbands, students and retired people were excluded from the analyses concerning mortality differences by employment status.\n[SUBTITLE] Mortality [SUBSECTION] In this study we analysed unnatural causes of death like suicide, accidents and violence, and alcohol-related mortality, and, for purposes of comparison, a general cause of death, coronary heart disease (CHD) mortality. We identified the causes of death using the International Classification of Diseases (ICD, WHO; 1974, 1978, 1992). The 8th revision was used for the years 1979-1986, the 9th revision for the years 1987-1995 and the 10th revision for the years subsequent to 1996. The classifications for suicide were E950-E959 (ICD-8 and ICD-9) and X60-X84 (ICD-10). Accidents and violence were ICD codes E800-E859, E861-E949, E960-E999 (ICD-8), E800-E849, E852-E949, E960-E999 (ICD-9) and V00-V99, W00-W99, X00-X44, X46-X59, X85-X99, Y00-Y89 (ICD-10). The definition of alcohol-related deaths included injuries, diseases and poisonings where alcohol was the main cause of the death (ICD-8 codes 291, 303, 571.0, 577, and E860; ICD-9 codes 291, 303, 357.5, 425.5, 535.3, 571.0-571.3, 577.0D-577.0F, 577.1C-577.1D and E851; ICD-10 codes F10, G31.2, G62.1, G72.1, I42.6, K29.2, K70, K86.0, O35.4 and X45). 
We grouped suicides, accidents and violence and alcohol-related deaths together and called them 'unnatural' causes of death. The classification for coronary heart disease (CHD) mortality was 410-414 for ICD-8 and 9 codes and I20-I25 for ICD-10 codes.\nIn this study we analysed unnatural causes of death like suicide, accidents and violence, and alcohol-related mortality, and, for purposes of comparison, a general cause of death, coronary heart disease (CHD) mortality. We identified the causes of death using the International Classification of Diseases (ICD, WHO; 1974, 1978, 1992). The 8th revision was used for the years 1979-1986, the 9th revision for the years 1987-1995 and the 10th revision for the years subsequent to 1996. The classifications for suicide were E950-E959 (ICD-8 and ICD-9) and X60-X84 (ICD-10). Accidents and violence were ICD codes E800-E859, E861-E949, E960-E999 (ICD-8), E800-E849, E852-E949, E960-E999 (ICD-9) and V00-V99, W00-W99, X00-X44, X46-X59, X85-X99, Y00-Y89 (ICD-10). The definition of alcohol-related deaths included injuries, diseases and poisonings where alcohol was the main cause of the death (ICD-8 codes 291, 303, 571.0, 577, and E860; ICD-9 codes 291, 303, 357.5, 425.5, 535.3, 571.0-571.3, 577.0D-577.0F, 577.1C-577.1D and E851; ICD-10 codes F10, G31.2, G62.1, G72.1, I42.6, K29.2, K70, K86.0, O35.4 and X45). We grouped suicides, accidents and violence and alcohol-related deaths together and called them 'unnatural' causes of death. The classification for coronary heart disease (CHD) mortality was 410-414 for ICD-8 and 9 codes and I20-I25 for ICD-10 codes.\n[SUBTITLE] Statistical methods [SUBSECTION] For preliminary analyses, we examined associations between psychological distress and socio-economic position with a logistic regression model reporting odds ratios (OR) with 95% confidence intervals (CI) (see Additional file 2) and between psychological distress and mortality with a Cox proportional hazard model reporting hazard ratios (HR) with 95% confidence intervals (CI) (see Additional file 3) [46].\nWe conducted the main analyses using the Cox proportional hazard model (Tables 1, 2, 3). All the analyses were performed with age and study year as covariates. Variation over time was taken into account by adjusting for the study year. To take into account the non-linear association of age with unnatural mortality, we adjusted mortality analyses for age squared. We carried out all the statistical analyses separately for men and women, using the statistical package SPSS 17.0 for Windows (SPSS Corporation 2008).\nThe effect of adjusting for self-reported psychological distress on educational level differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among those with an intermediate or low education compared to those with a high education after adjusting for psychological distress. *The confounders: age, age squared, study year.\nThe effect of adjusting for self-reported psychological distress on employment status differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among the unemployed compared to the employed after adjusting for self-reported psychological distress. 
[SUBTITLE] Statistical methods [SUBSECTION] For preliminary analyses, we examined associations between psychological distress and socio-economic position with a logistic regression model reporting odds ratios (OR) with 95% confidence intervals (CI) (see Additional file 2) and between psychological distress and mortality with a Cox proportional hazard model reporting hazard ratios (HR) with 95% confidence intervals (CI) (see Additional file 3) [46].\nWe conducted the main analyses using the Cox proportional hazard model (Tables 1, 2, 3). All the analyses were performed with age and study year as covariates. Variation over time was taken into account by adjusting for the study year. To take into account the non-linear association of age with unnatural mortality, we adjusted mortality analyses for age squared. We carried out all the statistical analyses separately for men and women, using the statistical package SPSS 17.0 for Windows (SPSS Corporation 2008).\nThe effect of adjusting for self-reported psychological distress on educational level differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among those with an intermediate or low education compared to those with a high education after adjusting for psychological distress. *The confounders: age, age squared, study year.\nThe effect of adjusting for self-reported psychological distress on employment status differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among the unemployed compared to the employed after adjusting for self-reported psychological distress. *The confounders: age, age squared, study year.\nThe effect of adjusting for self-reported psychological distress on household income differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among intermediate or low household income levels compared to the high income level after adjusting for self-reported psychological distress. *The confounders: age, age squared, study year.\nIn the main Cox proportional hazard analyses, we first carried out the base model to explore the relative differences in mortality outcomes by socio-economic variables adjusted for age, age squared and study year. In the following models, we adjusted for each of the psychological distress variables separately and, finally, for all of them simultaneously to see whether those variables contributed to the socio-economic disparities in mortality. To assess the impact of the adjustment of different variables on the base model hazard ratio, we calculated the percentage reduction of the HR as follows: [(base model HR - base model plus other factors HR)/(base model HR - 1)] × 100 [7,47]. We interpreted the reduction in the hazard ratio as indicating how much of the association between the individual socio-economic variables and mortality was accounted for by the measures of psychological distress.",
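As a concrete illustration of the percentage-reduction formula above, the following sketch uses invented numbers, not values from the study:

```python
# Percentage of the excess hazard explained by an adjustment variable:
# reduction = (HR_base - HR_adjusted) / (HR_base - 1) * 100
def hr_reduction(hr_base: float, hr_adjusted: float) -> float:
    return (hr_base - hr_adjusted) / (hr_base - 1.0) * 100.0

# Invented example: a base-model HR of 1.50 that falls to 1.40 after
# adding a distress measure corresponds to a 20% reduction in the excess risk.
print(round(hr_reduction(1.50, 1.40), 1))  # 20.0
```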
"The basic data source is the nationwide, repeated cross-sectional survey, \"Health Behaviour and Health among the Finnish Adult Population\", conducted annually since 1978 by the National Public Health Institute of Finland [42]. The survey questionnaire is mailed to a random sample of 5,000 Finns aged 15-64 years. The simple random sample was conducted by The Finnish Population Information System, which is a computerized national register that contains basic on-line information about all Finnish citizens residing permanently in Finland. The survey years covered in this study are 1979-2002. The year 1985 has been excluded from the survey due to missing personal identification codes for that year. Respondents under 25 years of age have been excluded from this study because their socio-economic status is not established.\nWe have supplemented the survey data with education and household income variables from Statistics Finland Register Data from the years 1979-2002 and the Finnish National Causes of Death Register follow-up data from the years 1979-2006. The mortality data include immediate, contributing and underlying causes of death, as well as the exact date of death. The data linkages were derived by using the personal identification codes assigned to all persons living permanently in Finland. After excluding the missing data on psychological distress variables (N = 1129; 1.6%), the total number of cases was 67871 (average annual response rate 73.5%), out of which 32451 were men (average response rate 69%) and 35420 were women (average response rate 78%). Our study is reviewed and supported by The Institutional Review Board of National Institute for Health and Welfare (THL) (IRB 00007085, FWA 00014588).", "We questioned the respondents about 14 health problems or symptoms, among them depression and insomnia, by the following single question: \"Have you had any of the following symptoms or health problems during past 30 days?\" (Yes, if so). Stress was addressed in a separate question on a four-point scale (1 = unbearable situation, 4 = no stress at all); respondents were asked if they had symptoms of tension or had been under great stress or considerable strain during the past 30 days. We considered unbearable stress as having the most negative effect on health and mortality and being associated with social disadvantage. Therefore, we classified those reporting an unbearable situation as having stress. We investigated the correlation of the psychological distress measures in another paper using this same data and showed that single-question depression (males r = .58; females r = .55) and insomnia (males and females r = .38) correlated with the general mental health inventory (MHI-5) [30]. 
In this study, self-reported psychological distress is thought to reflect the subjective experience of psychological well-being, and it is used to explore the role of psychological distress in generating socio-economic differences in mortality at an extensive population level [43].", "Socio-economic variables included education and household income from the register data and employment status from the survey questionnaire. We collected the register data for education and income from 1980 statistics for the survey years 1979-1983, from 1985 statistics for the survey years 1984-1986 and annually from 1987 until 2000. For the survey years 2001-2002, we collected the socio-economic data from the year 2000.\nThe educational level was derived from the Register of Educational Qualifications and Degrees, which follows, as far as possible, the principles and categories of the revised UNESCO International Standard Classification of Education 1997 (ISCED 1997). We divided the respondent's educational qualification into three categories: the lowest level included respondents with no education, an unknown education or with lower secondary education; the intermediate level included respondents with upper secondary or post-secondary non-tertiary education; and the highest level included respondents with tertiary education.\nHousehold income has been found to be more strongly and consistently associated with health than individual income. [44] We calculated household income as taxable total gross income for a household per year without transfer payment, divided by the consumption unit of the OECD equivalence scale. The first adult in the household was weighted as 1.0, other adults as 0.7 and children under 18 as 0.5 [45]. We further divided household income per consumption unit into tertiles by every study year, in order to keep the comparability of the variable over time.\nEmployment status during most of the year consisted of the categories of employed and unemployed. Additional categories, that is housewives/husbands, students and retired people were excluded from the analyses concerning mortality differences by employment status.", "In this study we analysed unnatural causes of death like suicide, accidents and violence, and alcohol-related mortality, and, for purposes of comparison, a general cause of death, coronary heart disease (CHD) mortality. We identified the causes of death using the International Classification of Diseases (ICD, WHO; 1974, 1978, 1992). The 8th revision was used for the years 1979-1986, the 9th revision for the years 1987-1995 and the 10th revision for the years subsequent to 1996. The classifications for suicide were E950-E959 (ICD-8 and ICD-9) and X60-X84 (ICD-10). Accidents and violence were ICD codes E800-E859, E861-E949, E960-E999 (ICD-8), E800-E849, E852-E949, E960-E999 (ICD-9) and V00-V99, W00-W99, X00-X44, X46-X59, X85-X99, Y00-Y89 (ICD-10). The definition of alcohol-related deaths included injuries, diseases and poisonings where alcohol was the main cause of the death (ICD-8 codes 291, 303, 571.0, 577, and E860; ICD-9 codes 291, 303, 357.5, 425.5, 535.3, 571.0-571.3, 577.0D-577.0F, 577.1C-577.1D and E851; ICD-10 codes F10, G31.2, G62.1, G72.1, I42.6, K29.2, K70, K86.0, O35.4 and X45). We grouped suicides, accidents and violence and alcohol-related deaths together and called them 'unnatural' causes of death. 
The classification for coronary heart disease (CHD) mortality was 410-414 for ICD-8 and 9 codes and I20-I25 for ICD-10 codes.", "For preliminary analyses, we examined associations between psychological distress and socio-economic position with a logistic regression model reporting odds ratios (OR) with 95% confidence intervals (CI) (see Additional file 2) and between psychological distress and mortality with a Cox proportional hazard model reporting hazard ratios (HR) with 95% confidence intervals (CI) (see Additional file 3) [46].\nWe conducted the main analyses using the Cox proportional hazard model (Tables 1, 2, 3). All the analyses were performed with age and study year as covariates. Variation over time was taken into account by adjusting for the study year. To take into account the non-linear association of age with unnatural mortality, we adjusted mortality analyses for age squared. We carried out all the statistical analyses separately for men and women, using the statistical package SPSS 17.0 for Windows (SPSS Corporation 2008).\nThe effect of adjusting for self-reported psychological distress on educational level differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among those with an intermediate or low education compared to those with a high education after adjusting for psychological distress. *The confounders: age, age squared, study year.\nThe effect of adjusting for self-reported psychological distress on employment status differences in excess mortality.\nHazard ratios (95% CIs) and percent reduction (%) in mortality among the unemployed compared to the employed after adjusting for self-reported psychological distress. *The confounders: age, age squared, study year\nThe effect of adjusting for self-reported psychological distress on household income differences in excess mortality.\nHazard ratios (95% CIs) and percent of reduction (%) in mortality among intermediate or low household income levels compared to high income level after adjustments of self-reported psychological distress.*The confounders: age, age squared, study year\nIn the main Cox proportional hazard analyses, we first carried out the base model to explore the relative differences in mortality outcomes by socio-economic variables adjusted for age, age squared and study year. In the following models, we adjusted for each of the psychological distress variables separately and, finally, for all of them simultaneously to see whether those variables contributed to the socio-economic disparities in mortality. To assess the impact of the adjustment of different variables on the base model hazard ratio, we calculated the percentage reduction of the HR as follows: [(base model HR-base model plus other factors HR)/(base model HR-1)] × 100 [7,47]. We interpreted the reduction in the hazard ratio to tell how much of the association between the individual socio-economic variables and mortality was accounted for by the measures of psychological distress.", "Table 4 describes the follow-up data by socio-economic position. The total number of deaths for unnatural causes was 716 for men and 222 for women, while the numbers for CHD mortality were 1,389 for men and 635 for women. 
Fourteen per cent of men and 18% of women reported depression, 18% of men and 19% of women reported insomnia, and 2.6% of men and 2.4% of women reported unbearable stress (not shown in the table).\nDescription of the mortality follow-up data by socio-economic position among men and women\nThe preliminary adjusted logistic regression analysis confirmed the associations between low socio-economic position and psychological distress for all indicators (see Additional file 2). The second preliminary analysis (see Additional file 3), based on the Cox proportional hazard model, demonstrated statistically significant hazard ratios for both unnatural and coronary heart disease mortality by psychological distress. Hazard ratios for psychological distress were higher for unnatural causes of death than for CHD mortality.\n[SUBTITLE] Contribution of psychological distress to educational differences in mortality [SUBSECTION] In the main Cox proportional hazard model analyses, we examined the contribution of the psychological distress variables to excess mortality by socio-economic position for each of the socio-economic variables separately (Tables 1-3). The hazard ratios for educational level in Table 1 present the effect of adjusting for psychological distress variables on the relative differences by educational level in unnatural and CHD mortality among men and women. In the base model for men, we found excess unnatural mortality in the intermediate and lowest educational levels, and excess CHD mortality in the lowest level of education. However, adjusting for measures of psychological distress, when considered both separately and simultaneously, resulted in a very modest reduction in the relative mortality difference by educational level (5-12%) in unnatural causes of death, and no change in CHD mortality (0-5%). In women, the level of education was statistically significantly associated only with CHD mortality, where the contribution of psychological distress variables was equivalent to men.\n[SUBTITLE] Contribution of psychological distress to employment status differences in mortality [SUBSECTION] In the base model presented in Table 2, unemployment was associated with increased mortality in both genders. For unnatural cause of death, adjusting for psychological distress variables separately and simultaneously accounted for 11-31% of the excess mortality in unemployed men and 11-26% in women. 
Adjusting for all of the measures of psychological distress combined resulted in further reductions to the excess risk of unnatural mortality among the unemployed. Adjusting for psychological distress attenuated the association between employment status and CHD mortality at the most 9% among both men and women.\n[SUBTITLE] Contribution of psychological distress to household income level differences in mortality [SUBSECTION] In the base model for mortality by household income level, we found a higher risk of mortality in the lowest income group compared to the highest income group among men and women in both unnatural and CHD mortality (Table 3). After controlling for the psychological distress variables separately or combined, psychological distress accounted for 4-16% of the differences in unnatural mortality among those at the lowest income level in both men and women. Again, the effect of adjustment for psychological distress measures in CHD mortality by income level appeared weak (0-5%).", "In the main Cox proportional hazard model analyses, we examined the contribution of the psychological distress variables to excess mortality by socio-economic position for each of the socio-economic variables separately (Tables 1-3). The hazard ratios for educational level in Table 1 present the effect of adjusting for psychological distress variables on the relative differences by educational level in unnatural and CHD mortality among men and women. In the base model for men, we found excess unnatural mortality in the intermediate and lowest educational levels, and excess CHD mortality in the lowest level of education. However, adjusting for measures of psychological distress, when considered both separately and simultaneously, resulted in a very modest reduction in the relative mortality difference by educational level (5-12%) in unnatural causes of death, and no change in CHD mortality (0-5%). In women, the level of education was statistically significantly associated only with CHD mortality, where the contribution of psychological distress variables was equivalent to men.", "In the base model presented in Table 2, unemployment was associated with increased mortality in both genders. 
For unnatural cause of death, adjusting for psychological distress variables separately and simultaneously accounted for 11-31% of the excess mortality in unemployed men and 11-26% in women. Adjusting for all of the measures of psychological distress combined resulted in further reductions to the excess risk of unnatural mortality among the unemployed. Adjusting for psychological distress attenuated the association between employment status and CHD mortality at the most 9% among both men and women.", "In the base model for mortality by household income level, we found a higher risk of mortality in the lowest income group compared to the highest income group among men and women in both unnatural and CHD mortality (Table 3). After controlling for the psychological distress variables separately or combined, psychological distress accounted for 4-16% of the differences in unnatural mortality among those at the lowest income level in both men and women. Again, the effect of adjustment for psychological distress measures in CHD mortality by income level appeared weak (0-5%).", "Based on our results, we can conclude that psychological distress partly accounted for employment status and household income level differences in unnatural mortality (suicide, accidents and violence, and alcohol-related mortality) in both genders, and for educational level differences in unnatural mortality among men; among women no significant educational differences were found in unnatural mortality in the first place. The contribution of psychological distress variables to socio-economic differences in CHD mortality, on the other hand, was negligible.\nThe strength of our study is the nationally representative data from repeated population surveys, which was supplemented with extensive socio-economic register data and national causes of death register data, providing for a prospective study design with a 28-year follow-up. However, the cross-sectional measure of socio-economic factors and psychological distress variables allows for no conclusions about the direction of the association, that is, health selection versus causation, which may both contribute to the associations between socio-economic position and psychological factors [18].\nThe response rate of the survey is similar to that of other population surveys [48]. However, in our non-respondent analysis of this data [49] we found lower response rates for the lower educated. Total and cause-specific (for example, alcohol, external causes, suicide) excess mortality rates were higher among survey non-respondents and this is partly explained by educational and income differences between respondents and non-respondents [50]. These results indicate that non-respondents have more severe illnesses, mental health problems and depression as well as unhealthy lifestyles, such as smoking and alcohol use. They also indicate that the comparability of the results of the different socio-economic groups may be biased and, therefore, the socio-economic differences may actually be stronger than those observed in this data. Additional analyses of respondents with missing data on psychological distress variables (N = 1129; 1.6%), although based on a relatively small number of cases, showed that those with missing data on psychological distress measures were also more likely to be in the lower SES groups.\nOne principal limitation of the study is that the measures of psychological distress are very simple self-reported single-item questions. 
These measures may cover a variety of transient or chronic psychological symptoms, a wide range of meanings from the temporary decrease of psychological well-being to deeply impaired, even life-threatening disorders. Therefore, the main focus of these indicators is not to detect clinical disorders but to reflect the subjective experience of mental health, and to study mental well-being at an extensive population level [43]. Nevertheless, single-item psychological distress variables demonstrated significant associations with cause-specific mortality, indicating that self-reported psychological distress has implications for health. Another limitation concerning measures used in this study is the unemployed versus employed classification, which is a crude measure of employment status.\nIn previous studies, psychological factors have only weakly or moderately mediated the relationship between SES and all-cause mortality [40,41]. In this study, we analysed three different measures of psychological distress and found some mediation for unnatural mortality and SES, and weak mediation for CHD mortality by employment status. It has been proposed that the excess CHD mortality among those in a lower socio-economic position is dependent on socio-economic differences in behavioural and biological risk factors, such as smoking, blood pressure and serum cholesterol levels [51]. A previous study based on the same data examined health behaviours as explanations for educational level differences in CHD mortality [47]. Health behaviours, most importantly smoking, physical activity and vegetable intake, explained about 50% of the educational differences in CHD mortality among men, but did not explain much of the differences among women. Compared to these results, psychological factors examined in the present study did not add to the contribution made by behavioural factors in explaining socio-economic differences in CHD mortality. However, psychological distress explaining some of the inequalities in suicide, accidents and violence, and alcohol-related mortality indicates that in these specific causes of death, poor mental health is related to more severe consequences in the lower socio-economic status groups than in the higher SES groups. This is possibly due to poorer strategies for coping with psychological distress in the lower SES groups. Such coping may include risky behaviour and, above all, heavy alcohol consumption, which may be aimed at relieving psychological symptoms.\nTheories and models which propose psychosocial factors as mediators in the SES-health relationship also emphasize that health status is the result of complex causes. Health behaviour, socio-demographic factors and early environmental, genetic, biomedical and medical factors are all seen as related to this phenomenon. Our results suggest that psychological distress may explain some of the cause-specific mortality disparities between socio-economic groups.", "Psychological distress partly accounted for socio-economic disparities in unnatural mortality, but notably less for CHD mortality. Improvement of psychological well-being in lower socio-economic groups may reduce some of the socio-economic disparities in cause-specific mortality. In particular, the possible mental health problems of the unemployed should be taken into account when searching for means to decrease these inequalities. 
Further studies are needed to explicate the role and mechanisms of psychological distress in generating socio-economic differences, particularly in cause-specific mortality.", "The authors declare that they have no competing interests.", "KTM processed the data, carried out the statistical analyses and drafted the manuscript. TKML, TMH, AIO were involved in interpreting the data and drafting the manuscript. TPM supervised the first author and was involved in interpreting the data and drafting the manuscript. RSP was involved in data management, coordinated the study, supervised the first author and was involved in drafting the manuscript. All authors revised the text critically for important intellectual content and read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/138/prepub\n", "Appendix figure S1. Conceptual framework of the study.\nAppendix table S1. Logistic regression model (Odds Ratios, 95% Confidence Intervals) for psychological distress by socio-economic position. Males and females. Adjusted for age and study year.\nAppendix table S2. Adjusted Cox proportional hazard model for the unnatural and CHD mortality in self-reported psychological distress. Adjusted for age, age squared and study year." ]
[ null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Associations of education with 30 year life course blood pressure trajectories: Framingham Offspring Study.
21356045
Education is inversely associated with cardiovascular disease incidence in developed countries. Blood pressure may be an explanatory biological mechanism. However few studies have investigated educational gradients in longitudinal blood pressure trajectories, particularly over substantial proportions of the life course. Study objectives were to determine whether low education was associated with increased blood pressure from multiple longitudinal assessments over 30 years. Furthermore, we aimed to separate antecedent effects of education, and other related factors, that might have caused baseline differences in blood pressure, from potential long-term effects of education on post-baseline blood pressure changes.
BACKGROUND
The study examined 3890 participants of the Framingham Offspring Study (mean age 36.7 years, 52.0% females at baseline) from 1971 through 2001 at up to 7 separate examinations using multivariable mixed linear models.
METHODS
Mixed linear models demonstrated that mean systolic blood pressure (SBP) over 30 years was higher for participants with ≤12 vs. ≥17 years education after adjusting for age (3.26 mmHg, 95% CI: 1.46, 5.05 in females, 2.26 mmHg, 95% CI: 0.87, 3.66 in males). Further adjustment for conventional covariates (antihypertensive medication, smoking, body mass index and alcohol) reduced differences in females and males (2.86, 95% CI: 1.13, 4.59, and 1.25, 95% CI: -0.16, 2.66 mmHg, respectively). Additional analyses adjusted for baseline SBP, to evaluate if there may be educational contributions to post-baseline SBP. In analyses adjusted for age and baseline SBP, females with ≤12 years education had 2.69 (95% CI: 1.09, 4.30) mmHg higher SBP over follow-up compared with ≥17 years education. Further adjustment for aforementioned covariates slightly reduced effect strength (2.53 mmHg, 95% CI: 0.93, 4.14). Associations were weaker in males, where those with ≤12 years education had 1.20 (95% CI: -0.07, 2.46) mmHg higher SBP over follow-up compared to males with ≥17 years of education, after adjustment for age and baseline blood pressure; effects were substantially reduced after adjusting for aforementioned covariates (0.34 mmHg, 95% CI: -0.90, 1.68). Sex-by-education interaction was marginally significant (p = 0.046).
RESULTS
Education was inversely associated with higher systolic blood pressure throughout a 30-year life course span, and associations may be stronger in females than males.
CONCLUSION
[ "Adult", "Aged", "Aged, 80 and over", "Educational Status", "Female", "Health Behavior", "Humans", "Hypertension", "Linear Models", "Longitudinal Studies", "Male", "Middle Aged", "United States" ]
3053249
null
null
Methods
[SUBTITLE] Study sample [SUBSECTION] The Framingham Heart Study is a community-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for coronary heart disease. The Framingham Offspring Study began in 1971 with recruitment of 5124 men and women who were offspring (or offspring's spouses) of the Original Cohort of the Framingham Heart Study. The design and selection criteria of the Framingham Offspring Study have been described elsewhere.[9] Participants were prospectively assessed during 7 examinations between 1971 and 2001. The consecutive examination dates were as follows: 1971-1975; 1979-1982; 1984-1987; 1987-1990; 1991-1995; 1996-1998, and 1998-2001. At each examination visit, participants underwent medical history, physical examination, anthropometry, and laboratory assessment of coronary heart disease risk factors, as previously described.[9] Framingham participants signed informed consent and the study is reviewed annually by the Boston University Medical Center Institutional Review Board. There were 5124 participants who completed Offspring Examination 1 (in 1971-1975), of which 4989 (97%) agreed for their data to be in the open-access dataset. Of these, 1099 subjects were excluded from the present analyses because of missing education data (primarily among the participants who did not attend exams 2 or 3 when education was assessed) or being <28 years of age (n = 60) when education was assessed. Participants were restricted to those aged ≥28 years at the time education was assessed in order to allow at least 10 years from expected completion of high school (at age 18 years, on average), during which the participants could obtain higher levels of education. Consequently, 3890 subjects were included in the data analyses. 
[SUBTITLE] Education [SUBSECTION] The participants' own education was measured directly from Framingham Offspring Study participants at Examinations 2 (1979-1982) and 3 (1984-1987). Examination 3 education was used whenever available, otherwise the Examination 2 measure was used. In the original data, education was recorded in 6 categories of completed years of education: 0-4, 5-8, 9-11, 12, 13-16, ≥17 years. For current analyses, the participants' own education was collapsed into 3 groups: ≤12 years (reflecting high school or less), 13-16 years (indicative of some post-secondary education including technical school and college degree) and ≥17 years education (approximating those with more than an undergraduate college degree). This grouping was motivated by both (i) statistical power considerations (to ensure adequate number of participants in each category) and (ii) substantive reasons, whereby the education categories represent educational milestones recognized to influence earnings, occupation type, and socioeconomic position in society. 
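The three-level collapse just described is straightforward to express in code; the following is an illustrative sketch only, with band labels and the function name ours rather than taken from the study data.

```python
# Illustrative sketch (not the authors' code) of collapsing the six recorded
# bands of completed years of education into the three analysis categories.
EDUCATION_BANDS = ["0-4", "5-8", "9-11", "12", "13-16", ">=17"]

def education_category(band: str) -> str:
    """Map a recorded band of completed years to the 3-level analysis variable."""
    if band in {"0-4", "5-8", "9-11", "12"}:
        return "<=12 years (high school or less)"
    if band == "13-16":
        return "13-16 years (some post-secondary / college degree)"
    if band == ">=17":
        return ">=17 years (more than an undergraduate degree)"
    raise ValueError(f"unknown education band: {band!r}")

for band in EDUCATION_BANDS:
    print(band, "->", education_category(band))
```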
"Time from baseline age" was calculated as the difference between age at a given examination and the baseline age. Covariates were measured at each exam. A binary indicator of current cigarette smoking was determined by self-report, defined as smoking regularly in the year prior to the examination (yes/no). Alcohol consumption was evaluated by self-reported average number of alcoholic drinks (e.g. beer, wine, cocktails) per week. Body mass index was calculated as the weight in kilograms divided by the square of the height in meters (kg/m2). Current antihypertensive medication use was self-reported and modeled as a binary variable (yes/no). "Baseline age" represented age at Examination 1. "Time from baseline age" was calculated as the difference between age at a given examination and the baseline age. [SUBTITLE] Statistical Analyses [SUBSECTION] Primary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. Because there are no well established tests to compare fit of models' based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with AR(1) covariance structure specified in REPEATED statement in SAS [13]. The second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent value measures for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. 
[SUBTITLE] Statistical Analyses [SUBSECTION] Primary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. Because there are no well-established tests to compare the fit of models based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with the AR(1) covariance structure specified in the REPEATED statement in SAS [13]. 
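In symbols (notation ours, not the paper's), the AR(1) residual assumption just described can be written for subject i at visits j and k as:

```latex
% AR(1) residual covariance for repeated blood pressure measures (notation ours):
% visits further apart are less strongly correlated.
\operatorname{Cov}(\varepsilon_{ij}, \varepsilon_{ik}) \;=\; \sigma^{2}\,\rho^{\lvert j-k \rvert},
\qquad 0 \le \rho < 1 .
```

Adjacent examinations therefore share correlation ρ, examinations two visits apart ρ², and so on.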
The second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent value measures for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. This approach allowed us to separate the antecedent effects of education, and other related factors, that might have resulted in the baseline differences in blood pressure, from the potential long-term effects of education on post-baseline changes in blood pressure. Preliminary analyses were carried out to determine the most accurate representation of the effects of baseline age, and time since baseline. In particular, we a priori expected that both between- and within-subject effects of aging on blood pressure may be non-linear. Accordingly, we gradually expanded the basic model, with linear effects of age and time, by adding and testing first quadratic and then cubic effects of each of the two variables, while adjusting for the use of anti-hypertensive medication and for education. Furthermore, because we expected that the impact of within-subject aging (time) on blood pressure may vary depending on the baseline age, we also tested linear and quadratic interactions between age and time. All the multivariable mixed models employed in the final analyses, described below, adjusted for only those non-linear effects of age or time, and those interactions between these effects, that were statistically significant, based on a Wald test with 2-tailed α = 0.05. Once the optimal representations of the effects of age and time, as well as of their interactions, were determined, these representations were used in the final analyses of the adjusted association of education with SBP and DBP. The final representation of the effects of baseline age and time from baseline for analyses on SBP was as follows: age+age2+time+time2+age*time+age*time2+age2*time. For DBP, it was as follows: age+age2+time+time2+age*time+age*time2+age2*time+age2*time2. All analyses were sex-specific, as a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP; p = 0.063 for DBP).
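Spelled out (notation ours; a_i denotes baseline age, t_ij time from baseline for subject i at visit j, and education enters through indicator terms for the two lower categories), the final SBP fixed-effects specification corresponds to:

```latex
% Fixed-effects part of the final SBP model (notation ours); the DBP model
% adds one further term, \beta_{8} a_{i}^{2} t_{ij}^{2}.
\mathrm{SBP}_{ij} = \beta_{0} + \beta_{E}\,\mathrm{edu}_{i}
  + \beta_{1} a_{i} + \beta_{2} a_{i}^{2}
  + \beta_{3} t_{ij} + \beta_{4} t_{ij}^{2}
  + \beta_{5} a_{i} t_{ij} + \beta_{6} a_{i} t_{ij}^{2} + \beta_{7} a_{i}^{2} t_{ij}
  + \varepsilon_{ij}
```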
null
null
null
null
[ "Background", "Study sample", "Education", "Blood Pressure", "Covariates", "Statistical Analyses", "Results", "Discussion", "Prior Literature", "Mechanisms", "Strengths and Weaknesses", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Education and other measures of socioeconomic position, such as occupation and income, are consistently inversely associated with incidence of cardiovascular disease in developed countries [1,2]. Elevated blood pressure, a major risk factor for cardiovascular disease, has been demonstrated in cross-sectional studies to be associated with low education and lower levels of other SEP measures [3]. Because of the limitations of cross-sectional studies, further investigation of whether educational attainment may be causally related to blood pressure can be achieved through prospective designs that measure longitudinal trajectories of blood pressure. Few studies have investigated longitudinal blood pressure trajectories, especially over a substantial proportion of the life course [4-7]. Furthermore, little is known about the effects of adjusting for potential explanatory/mediating mechanisms such as smoking, alcohol consumption, obesity, or use of antihypertensive medications [4-7].\nThe objectives of this study were to determine whether low educational attainment was associated with increased blood pressure from multiple longitudinal assessments over 30 years. Furthermore, we aimed to separate 'antecedant' effects of education, and other related factors, that might have caused baseline differences in blood pressure, from potential long-term effects of education on post-baseline changes in blood pressure. Analyses prioritized measures of systolic blood pressure (SBP) over diastolic blood pressure (DBP), as systolic hypertension is substantially more common than diastolic hypertension, and SBP contributes more to the global disease burden attributable to hypertension than DBP [8].", "The Framingham Heart Study is a community-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for coronary heart disease. The Framingham Offspring Study began in 1971 with recruitment of 5124 men and women who were offspring (or offspring's spouses) of the Original Cohort of the Framingham Heart Study. The design and selection criteria of the Framingham Offspring Study have been described elsewhere.[9] Participants were prospectively assessed during 7 examinations between 1971 and 2001. The consecutive examination dates were as follows: 1971-1975; 1979-1982; 1984-1987; 1987-1990; 1991-1995; 1996-1998, and 1998-2001. At each examination visit, participants underwent medical history, physical examination, anthropometry, and laboratory assessment of coronary heart disease risk factors, as previously described.[9] Framingham participants signed informed consent and the study is reviewed annually by the Boston University Medical Center Institutional Review Board.\nThere were 5124 participants who completed Offspring Examination 1 (in 1971-1975), of which 4989 (97%) agreed for their data to be in the open-access dataset. Of these, 1099 subjects were excluded from the present analyses because of missing education data (primarily among the participants who did not attend exams 2 or 3 when education was assessed) or being <28 years of age (n = 60) when education was assessed. Participants were restricted to those aged ≥28 years at the time education was assessed in order to allow at least 10 years from expected completion of high school (at age 18 years, on average), during which the participants could obtain higher levels of education. 
Consequently, 3890 subjects were included in the data analyses.", "The participants' own education was measured directly from Framingham Offspring Study participants at Examinations 2 (1979-1982) and 3 (1984-1987). Examination 3 education was used whenever available, otherwise the Examination 2 measure was used. In the original data, education was recorded in 6 categories of completed years of education: 0-4, 5-8, 9-11, 12, 13-16, ≥17 years. For current analyses, the participants' own education was collapsed into 3 groups: ≤12 years (reflecting high school or less), 13-16 years (indicative of some post-secondary education including technical school and college degree) and ≥17 years education (approximating those with more than an undergraduate college degree). This grouping was motivated by both (i) statistical power considerations (to ensure adequate number of participants in each category) and (ii) substantive reasons, whereby the education categories represent educational milestones recognized to influence earnings, occupation type, and socioeconomic position in society.", "Each participant rested for at least five minutes before blood pressure measurement. While the participant remained seated, a physician measured SBP and DBP each twice in the left arm with a mercury-column sphygmomanometer, according to a standardized protocol [10]. The average of the two readings was used for analyses.", "Covariates were measured at each exam. A binary indicator of current cigarette smoking was determined by self-report, defined as smoking regularly in the year prior to the examination (yes/no). Alcohol consumption was evaluated by self-reported average number of alcoholic drinks (e.g. beer, wine, cocktails) per week. Body mass index was calculated as the weight in kilograms divided by the square of the height in meters (kg/m2). Current antihypertensive medication use was self-reported and modeled as a binary variable (yes/no). \"Baseline age\" represented age at Examination 1. \"Time from baseline age\" was calculated as the difference between age at a given examination and the baseline age.", "Primary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). 
In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. Because there are no well established tests to compare fit of models' based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with AR(1) covariance structure specified in REPEATED statement in SAS [13].\nThe second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent value measures for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. This approach allowed us to separate the antecedent effects of education, and other related factors, that might have resulted in the baseline differences in blood pressure, from the potential long-term effects of education on post-baseline changes in blood pressure.\nPreliminary analyses were carried out to determine the most accurate representation of the effects of baseline age, and time since baseline. In particular, we a priori expected that both between- and within-subject effects of aging on blood pressure may be non-linear. Accordingly, we gradually expanded the basic model, with linear effects of age and time, by adding and testing first quadratic and then cubic effects of each of the two variables, while adjusting for the use of anti-hypertensive medication and for education. Furthermore, because we expected that the impact of within-subject aging (time) on blood pressure may vary depending on the baseline age, we also tested linear and quadratic interactions between age and time. All the multivariable mixed models employed in the final analyses, described below, adjusted for only those non-linear effects of age or time, and those interactions between these effects, that were statistically significant, based on Wald test with 2-tailed α = 0.05. Once the optimal representations of the effects of age and time, as well as of their interactions, were determined, these representations were used in the final analyses of the adjusted association of education with SBP and DBP. The final representation of the effects of baseline age and time from baseline for analyses on SBP was as follows: age+age2+time+time2+age*time+age*time2+age2*time. For DBP, it was as follows: age+age2+time+ time2+age*time+age*time2+age2*time+ age2*time2. All analyses were sex-specific, as a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP; p = 0.063 for DBP).", "Participants included in the current analyses had higher mean age (36.7 vs. 34.5 years, P < 0.001), and lower smoking rates (43.1% vs. 49.8%, P < 0.001) than excluded participants. 
Included and excluded participants had similar distributions of sex and baseline values of SBP, DBP, body mass index, alcohol consumption and antihypertensive medication use.\nIn females, unadjusted analyses demonstrated that education was inversely associated with baseline values of age, SBP, DBP, anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption (Table 1). In males, education was inversely associated with age, SBP, DBP, body mass index, alcohol consumption and smoking.\nBaseline characteristics (means and proportions) of the Framingham Heart Study Offspring Cohort, according to educational attainment. Values shown in parentheses are 95% confidence intervals.\nTo test the trend across education levels, we used t tests for continuous variables and Cochran-Armitage tests for categorical variables.\n†Pearson chi-square tests comparing males and females for the proportion taking anti-hypertensive medication and the proportion of current smokers demonstrated P-values of 0.41 and 0.49, respectively.\nUsing multivariable mixed linear models, mean SBP across the assessment times was higher for participants with low education compared with high education (Table 2, Figure 1), after adjusting for baseline age and time from baseline age, including their selected quadratic effects and two-way interactions, as shown in the footnotes of Table 2. Specifically, in these analyses, the mean difference across all 7 visits in SBP for ≤12 vs. ≥17 years education was 3.26 (95% CI: 1.46, 5.05) mmHg in females, and 2.26 (95% CI: 0.87, 3.66) mmHg in males (Table 2). Further adjustment for conventional time-dependent covariates representing current (up-dated) values of antihypertensive medication use, smoking, body mass index and alcohol consumption reduced the difference in females to 2.86 (95% CI: 1.13, 4.59) mmHg, and in males to 1.25 (95% CI: -0.16, 2.66) mmHg.\nMultivariable-adjusted mixed linear models, demonstrating associations between educational attainment and longitudinal trajectories of mean systolic blood pressure, Framingham Offspring Study, 1971-2001.\nPoint estimates (and 95% confidence intervals shown in parentheses) represent mean differences in systolic blood pressure (mmHg) between comparison and referent groups. Age adjustment refers to adjustment for baseline age and time from baseline age. Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time.\nConventional risk factors include antihypertensive medication, smoking, body mass index and alcohol consumption.\nMixed linear models adjusted for age, demonstrating associations of educational attainment with longitudinal trajectories of mean systolic blood pressure (SBP) in (A) females and (B) males. Age adjustment refers to adjustment for baseline age and time from baseline age. Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time. Error bars represent 95% confidence intervals. Framingham Offspring Study, 1971-2001.\nA second set of analyses adjusted for baseline SBP, in an effort to evaluate whether there were educational differences in the post-baseline values of blood pressure, independent of the baseline differences. In analyses adjusted for baseline age, time from baseline age, and baseline SBP, females with ≤12 years education had 2.69 (95% CI: 1.09, 4.30) mmHg higher SBP over follow-up compared with females with ≥17 years education (Table 2).
Further adjustment for conventional risk factors had minimal impact on the effect strength in females (effect reduced to 2.53 (95% CI: 0.93, 4.14) mmHg for ≤12 years vs. ≥17 years education). Associations were weaker in males, where those with ≤12 years education had 1.20 (95% CI: -0.07, 2.46) mmHg higher SBP over follow-up compared to males with ≥17 years of education, after adjustment for baseline age, time from baseline age, and baseline blood pressure; effects were substantially reduced after adjusting for conventional risk factors (Table 2). Because a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP), and because, of the covariates, only alcohol consumption showed differential associations with education between males and females (Table 1), analyses were repeated without adjusting for alcohol consumption. This approach evaluated whether gender differences in associations between education and SBP persisted with and without adjustment for alcohol consumption. Analyses adjusted for all aforementioned covariates with the exception of alcohol (i.e. adjusted for baseline age, time from baseline age, baseline SBP, antihypertensive medication, smoking, and body mass index) demonstrated persistent gender differences in associations: the mean difference across all 7 visits in SBP for ≤12 vs. ≥17 years education was 2.20 (95% CI: 0.59, 3.81) mmHg in females, and 0.60 (95% CI: -0.72, 1.92) mmHg in males, suggesting that gender differences in the association between alcohol consumption and education were not a substantial explanation for the gender differences in the observed associations between education and SBP.\nDBP was higher in female, and to a lesser extent in male, study participants of low compared to high educational attainment after adjusting for baseline age and time since baseline assessment, including the selected quadratic effects and two-way interactions described in Table 3. Specifically, the mean difference in DBP across all assessment times for ≤12 vs. ≥17 years education was 1.47 (95% CI: 0.43, 2.50) mmHg in females, and 0.66 (95% CI: -0.17, 1.50) mmHg in males (Table 3). Further adjustment for conventional risk factors including antihypertensive medication, smoking, body mass index and alcohol consumption reduced the association strength of low education with DBP somewhat in females, resulting in a smaller difference of 1.26 (95% CI: 0.25, 2.26) mmHg, and eliminated any association in males, with an adjusted difference of 0.05 (95% CI: -0.78, 0.86) mmHg. In analyses adjusted for baseline age, time from baseline age, and baseline DBP, there was no association between education and post-baseline values of DBP, among participants with the same baseline DBP, in either sex (Table 3).\nMultivariable-adjusted mixed linear models, demonstrating associations between educational attainment and longitudinal trajectories of mean diastolic blood pressure, Framingham Offspring Study, 1971-2001.\nPoint estimates (and 95% confidence intervals shown in parentheses) represent mean differences in diastolic blood pressure (mmHg) between comparison and referent groups. Age adjustment refers to adjustment for baseline age and time from baseline age.
Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time + age²*time².\nConventional risk factors include antihypertensive medication, smoking, body mass index and alcohol consumption.\nIn order to assess the consistency of the AR(1) assumption with our data, we estimated the Pearson correlation coefficients between measurements at different time points, for both SBP and DBP, separately for females and males. The Pearson correlation coefficients (shown in Tables 4, 5, 6, 7) indeed show an autoregressive correlation structure; that is, the correlation decreases systematically as the distance in time between two measurements increases. In addition, we compared the AIC of the AR(1) model with that based on another popular structure, the exchangeable structure. As expected, based on Tables 4, 5, 6, 7 and a priori considerations, the AR(1) structure (e.g. for males, in the model specified in the last column of Table 2, AIC = 73281) yielded a better fit (i.e. lower AIC) than the exchangeable structure (AIC = 73434).\nPearson correlation coefficients of systolic blood pressure among females for examinations 1-7.\nPearson correlation coefficients of systolic blood pressure among males for examinations 1-7.\nPearson correlation coefficients of diastolic blood pressure among females for examinations 1-7.\nPearson correlation coefficients of diastolic blood pressure among males for examinations 1-7.", "Findings in this paper demonstrated that education was inversely associated with longitudinal trajectories of mean SBP in females and males. Furthermore, especially in females, lower education was associated with higher post-baseline SBP even among participants with the same baseline SBP. This suggests that low education may have a long-term impact on changes over time in blood pressure in females. Adjusting for the time-varying values of conventional risk factors, measured at the same time as the blood pressure, typically reduced the strength of these associations. Associations of education with DBP were generally weaker than with SBP, for both females and males.\n[SUBTITLE] Prior Literature [SUBSECTION] Few studies have investigated sex-specific longitudinal trajectories of blood pressure, particularly over a substantial proportion of the life course. Diez Roux et al., in the ARIC cohort (n = 8555, aged 45 to 64 years at baseline and followed over 4 examinations spanning 9 years), found in white participants that education was marginally inversely associated with increases in blood pressure after adjusting for age, sex, center and medication use, in models that included interactions of time with sex and with baseline age [4]. The 5-year change in mean SBP was 6.0 mmHg for those with <high school degree and 5.3 mmHg for those with a college degree. Further adjustment for baseline SBP somewhat reduced the association strength, to 5.9 mmHg for <high school and 5.5 mmHg for participants with a college degree. Associations were weaker in black participants. Strand et al. demonstrated, in a large prospective study of 48,422 males and females aged 35-49 followed for 14 years using three examinations, that education was inversely associated with increases over time in SBP in males and females, after adjusting for year of birth [6]; socioeconomic disparities widened over time in females but not males.
In a study on the Framingham Offspring cohort that included only participants aged 20-29 years at baseline (many of whom may not have completed education yet), education was not significantly associated with mean 8-year change in SBP or DBP in males or females, after adjusting for age [5]. In the CARDIA study of 2913 participants aged 18-30 years at baseline education was significantly inversely associated with mean 15-year change in both SBP and DBP [7]. Specifically for SBP, those with <high school degree had a 15-year mean increase of 8.2 mmHg versus only 0.7 mmHg for participants with >college graduate degree. However the observed associations were not adjusted for covariates [7]. Although prior cross-sectional studies suggested that associations may be stronger in females than males [3], little is known about sex-specific associations between education and blood pressure trajectories, particularly over long periods of the life course (>20 years follow-up). Finally, little is known about the effects of adjusting for use of antihypertensive medications, body mass index, alcohol consumption, smoking or other potential mechanisms that may, at least partly, mediate the impact of lower education on longitudinal trajectories of blood pressure. This study added to the literature sex-specific information demonstrating that education is inversely associated with longitudinal trajectories of mean SBP in females and males over a substantial proportion of the life course (approximately 30 years) and that association may be stronger in females than males. Furthermore, in females, lower education was associated with a higher mean post-baseline SBP even among participants with the same baseline SBP, suggesting a possible long-term impact of lower education. Adjusting for up-dated values of conventional risk factors typically reduced strengths of association, but in females the impact of lower education remained statistically significant. For DBP, association strengths were generally weaker for both females and males.\n[SUBTITLE] Mechanisms [SUBSECTION] The primary candidate mechanisms by which education may influence longitudinal trajectories of blood pressure involve conventional risk factors for hypertension, including smoking, obesity, blood pressure medication use, and alcohol consumption. In this study, in females, education was inversely associated with anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption. In males, education was inversely associated with body mass index, alcohol consumption and smoking, and not associated with antihypertensive use. Furthermore, the estimated effects of education tended to somewhat decrease after adjusting for these potential mechanisms (particularly in males), suggesting that they may be at least partial explanatory pathways for the observed association between educational attainment and longitudinal trajectories of blood pressure. It is important to note that biases can be induced by adjusting for variables that may partly mediate the effect of exposure; therefore, these mechanistic findings should be interpreted with caution [14].
Furthermore, there remain plausible confounders unadjusted for, such as childhood socioeconomic circumstances (which are associated with adulthood education and blood pressure [15]), parental blood pressure (which may be associated with offspring education and has been related to offspring blood pressure [16]), intelligence (which is associated with educational attainment and CHD risk [17]), and early life obesity (that could affect upward social mobility via obesity discrimination particularly in women [18,19], and is related to blood pressure in adulthood [20]. Consequently, residual confounding remains a possibility.\nLow educational attainment has been demonstrated to predispose individuals to high strain jobs, characterized by high levels of demand and low levels of control, which have been associated with elevated blood pressure [21,22]. Other related mechanisms involve stress-induced sympathetic nervous system activation due to stressful conditions outside of work, that are also associated with low educational attainment. These may be particularly important for women. It has been shown that women with low education have higher risk of co-occurring psychosocial determinants of poor health, including single-parenting, depression, income below the poverty threshold, and unemployment, compared to men with low education [23]. Consequently, low socioeconomic position may be a stronger determinant of hypertension risk in women compared with men. This may be one of the explanations for why we found a significant interaction between sex and education, and somewhat stronger associations between education and blood pressure in women than men. The extent of health care available for people of low socioeconomic position is typically less than what is available for those with high socioeconomic position, hence limiting access to treatments of hypertension [24]. Furthermore, there is evidence that people of low socioeconomic position have less healthful diets, such as lower rates of fruit and vegetable consumption, and higher salt intake, which may be additional mechanisms contributing to disparities in blood pressure [25,26].\nIt has been demonstrated that although both SBP and DBP are positively associated with incidence of coronary heart disease, there are differences in the way SBP and DBP evolve over the life course. SBP tends to increase steadily with age, while DBP tends to increase until age 50 years, and to decrease steadily after that age [4,8,27]. The mechanisms responsible for the age-related increase in DBP among younger people likely involve an atherosclerotic increase in peripheral resistance, caused by narrowing of the smaller arteries and arterioles [8,28]. In contrast, for older individuals, structural damage and calcification due to arteriosclerosis in the larger conduit arteries can result in loss of arterial compliance, which can cause a rise in SBP, but a reduction in DBP [8,28]. As the burden of hypertension is greatest after the age of 50 years, and it is exceedingly uncommon to have diastolic hypertension without concurrent systolic hypertension in adults over the age of 50 years, it has been argued that SBP is by far the more important measure of the two in terms of predictive importance for population health [8].\nStudies generally show consistent inverse associations between educational attainment and longitudinal changes in SBP [4,6,7], with the exception of young participants aged 20-29 years at baseline, followed over 8 years in the study by Hubert et al. 
[5] However, findings are less consistent for DBP, where studies have shown inverse [7], null [5], or even positive [4] associations between educational attainment and longitudinal changes in DBP. Our findings demonstrated fairly robust inverse associations of education with SBP, and weaker inconsistent associations with DBP. The pathophysiological mechanisms (e.g. smoking, obesity, alcohol consumption) that cause steady increases over the life course for SBP but not DBP, and also tend to be inversely associated with socioeconomic position, may explain the more consistent findings for the inverse association between education and changes in SBP rather than DBP over the life course. However, adjustment for these variables in our study appeared to account for only a small amount of the association in females, and a larger amount of the (weaker) association in males, suggesting there may be other explanatory factors, particularly in females.\n[SUBTITLE] Strengths and Weaknesses [SUBSECTION] Strengths of this study include having access to data on approximately 30 years of longitudinal blood pressure measurements. Furthermore, follow-up rates of the Framingham Heart Study are considered to be high for observational studies, decreasing risk of bias due to loss-to-follow-up.
Finally, measurements of blood pressure were performed using methods and equipment providing good accuracy and precision, and analyses relied on statistical methods appropriate for longitudinal repeated-measures studies.\nWith regard to weaknesses, because the historical design of the Framingham Offspring Study reflected the population of Framingham, Massachusetts at study onset in 1948, the Original and Offspring cohorts are largely composed of white participants. Consequently, the generalizability of our findings to other races/ethnicities is uncertain. Furthermore, although we had up to 7 measurements for each covariate, we expect there to be reasonable residual confounding due to imperfect measurement of obesity (body mass index), and self-reported alcohol consumption, smoking and antihypertensive medication use.", "Few studies have investigated sex-specific longitudinal trajectories of blood pressure, particularly over a substantial proportion of the life course. Diez Roux et al. in the ARIC cohort (n = 8555) aged 45 to 64 years at baseline and followed using 4 examinations over a period of 9 years, found in white participants, that education was marginally inversely associated with increases in blood pressure after adjusting for age, sex, center, medication use, and reported interactions between time and sex, and interactions between time and baseline age [4]. The 5-year change in mean SBP was 6.0 mmHg for those with <high school degree and 5.3 mmHg for those with a college degree. Further adjustment for baseline SBP somewhat reduced the association strength, to 5.9 mmHg for <high school and 5.5 mmHg for participants with college degree. Associations were weaker in black participants. Strand et al. demonstrated, in a large prospective study of 48,422 males and females aged 35-49 followed for 14 years using three examinations, that education was inversely associated with increases over time in SBP in males and females, after adjusting for year of birth [6]; socioeconomic disparities widened over time in females but not males. In a study on the Framingham Offspring cohort that included only participants aged 20-29 years at baseline (many of whom may not have completed education yet), education was not significantly associated with mean 8-year change in SBP or DBP in males or females, after adjusting for age [5].
In the CARDIA study of 2913 participants aged 18-30 years at baseline education was significantly inversely associated with mean 15-year change in both SBP and DBP [7]. Specifically for SBP, those with <high school degree had a 15-year mean increase of 8.2 mmHg versus only 0.7 mmHg for participants with >college graduate degree. However the observed associations were not adjusted for covariates [7]. Although prior cross-sectional studies suggested that associations may be stronger in females than males [3], little is known about sex-specific associations between education and blood pressure trajectories, particularly over long periods of the life course (>20 years follow-up). Finally, little is known about the effects of adjusting for use of antihypertensive medications, body mass index, alcohol consumption, smoking or other potential mechanisms that may, at least partly, mediate the impact of lower education on longitudinal trajectories of blood pressure. This study added to the literature sex-specific information demonstrating that education is inversely associated with longitudinal trajectories of mean SBP in females and males over a substantial proportion of the life course (approximately 30 years) and that association may be stronger in females than males. Furthermore, in females, lower education was associated with a higher mean post-baseline SBP even among participants with the same baseline SBP, suggesting a possible long-term impact of lower education. Adjusting for up-dated values of conventional risk factors typically reduced strengths of association, but in females the impact of lower education remained statistically significant. For DBP, association strengths were generally weaker for both females and males.", "The primary candidate mechanisms by which education may influence longitudinal trajectories of blood pressure involve conventional risk factors for hypertension, including smoking, obesity, blood pressure medication use, and alcohol consumption. In this study, in females, education was inversely associated with anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption. In males, education was inversely associated with body mass index, alcohol consumption and smoking, and not associated with antihypertensive use. Furthermore, the estimated effects of education tended to somewhat decrease after adjusting for these potential mechanisms (particularly in males), suggesting that they may be at least partial explanatory pathways for the observed association between educational attainment and longitudinal trajectories of blood pressure. It is important to note that biases can be induced by adjusting for variables that may partly mediate the effect of exposure; therefore, these mechanistic findings should be interpreted with caution [14]. Furthermore, there remain plausible confounders unadjusted for, such as childhood socioeconomic circumstances (which are associated with adulthood education and blood pressure [15]), parental blood pressure (which may be associated with offspring education and has been related to offspring blood pressure [16]), intelligence (which is associated with educational attainment and CHD risk [17]), and early life obesity (that could affect upward social mobility via obesity discrimination particularly in women [18,19], and is related to blood pressure in adulthood [20]. 
Consequently, residual confounding remains a possibility.\nLow educational attainment has been demonstrated to predispose individuals to high strain jobs, characterized by high levels of demand and low levels of control, which have been associated with elevated blood pressure [21,22]. Other related mechanisms involve stress-induced sympathetic nervous system activation due to stressful conditions outside of work, that are also associated with low educational attainment. These may be particularly important for women. It has been shown that women with low education have higher risk of co-occurring psychosocial determinants of poor health, including single-parenting, depression, income below the poverty threshold, and unemployment, compared to men with low education [23]. Consequently, low socioeconomic position may be a stronger determinant of hypertension risk in women compared with men. This may be one of the explanations for why we found a significant interaction between sex and education, and somewhat stronger associations between education and blood pressure in women than men. The extent of health care available for people of low socioeconomic position is typically less than what is available for those with high socioeconomic position, hence limiting access to treatments of hypertension [24]. Furthermore, there is evidence that people of low socioeconomic position have less healthful diets, such as lower rates of fruit and vegetable consumption, and higher salt intake, which may be additional mechanisms contributing to disparities in blood pressure [25,26].\nIt has been demonstrated that although both SBP and DBP are positively associated with incidence of coronary heart disease, there are differences in the way SBP and DBP evolve over the life course. SBP tends to increase steadily with age, while DBP tends to increase until age 50 years, and to decrease steadily after that age [4,8,27]. The mechanisms responsible for the age-related increase in DBP among younger people likely involve an atherosclerotic increase in peripheral resistance, caused by narrowing of the smaller arteries and arterioles [8,28]. In contrast, for older individuals, structural damage and calcification due to arteriosclerosis in the larger conduit arteries can result in loss of arterial compliance, which can cause a rise in SBP, but a reduction in DBP [8,28]. As the burden of hypertension is greatest after the age of 50 years, and it is exceedingly uncommon to have diastolic hypertension without concurrent systolic hypertension in adults over the age of 50 years, it has been argued that SBP is by far the more important measure of the two in terms of predictive importance for population health [8].\nStudies generally show consistent inverse associations between educational attainment and longitudinal changes in SBP [4,6,7], with the exception of young participants aged 20-29 years at baseline, followed over 8 years in the study by Hubert et al. [5] However, findings are less consistent for DBP, where studies have shown inverse [7], null [5], or even positive [4] associations between educational attainment and longitudinal changes in DBP. Our findings demonstrated fairly robust inverse associations of education with SBP, and weaker inconsistent associations with DBP. The pathophysiological mechanisms (e.g. 
smoking, obesity, alcohol consumption) that cause steady increases over the life course for SBP but not DBP, and also tend to be inversely associated with socioeconomic position, may explain the more consistent findings for the inverse association between education and changes in SBP rather than DBP over the life course. However, adjustment for these variables in our study appeared to account for only a small amount of the association in females, and a larger amount of the (weaker) association in males, suggesting there may be other explanatory factors, particularly in females.", "Strengths of this study include having access to data on approximately 30 years of longitudinal blood pressure measurements. Furthermore, follow-up rates of the Framingham Heart Study are considered to be high for observational studies, decreasing risk of bias due to loss-to-follow-up. Finally, measurements of blood pressure were performed using methods and equipment providing good accuracy and precision, and analyses relied on statistical methods appropriate for longitudinal repeated-measures studies.\nWith regard to weaknesses, because the historical design of the Framingham Offspring Study reflected the population of Framingham, Massachusetts at study onset in 1948, the Original and Offspring cohorts are largely composed of white participants. Consequently, the generalizability of our findings to other races/ethnicities is uncertain. Furthermore, although we had up to 7 measurements for each covariate, we expect there to be reasonable residual confounding due to imperfect measurement of obesity (body mass index), and self-reported alcohol consumption, smoking and antihypertensive medication use.", "This study provides evidence that education is inversely associated with systolic blood pressure throughout a 30 year life course span, and associations may be stronger in females than males. These findings provide evidence that education may be a potential risk factor for elevated blood pressure across the life course.", "The authors declare that they have no competing interests.", "EL initially conceived of the study and further developed the study objectives in collaboration with all co-authors. EL wrote much of the initial draft of the manuscript. MA was the senior biostatistics advisor and drafted the analytic approach in the Methods section. YX performed the analyses and advised on the analytic approach. JL made substantial contributions to the analytic approach, and advised on the subject matter related to socioeconomic disparities in blood pressure. All authors were involved with drafting the final manuscript, and revising it as needed for important intellectual content. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/139/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study sample", "Education", "Blood Pressure", "Covariates", "Statistical Analyses", "Results", "Discussion", "Prior Literature", "Mechanisms", "Strengths and Weaknesses", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Education and other measures of socioeconomic position, such as occupation and income, are consistently inversely associated with incidence of cardiovascular disease in developed countries [1,2]. Elevated blood pressure, a major risk factor for cardiovascular disease, has been demonstrated in cross-sectional studies to be associated with low education and lower levels of other SEP measures [3]. Because of the limitations of cross-sectional studies, further investigation of whether educational attainment may be causally related to blood pressure can be achieved through prospective designs that measure longitudinal trajectories of blood pressure. Few studies have investigated longitudinal blood pressure trajectories, especially over a substantial proportion of the life course [4-7]. Furthermore, little is known about the effects of adjusting for potential explanatory/mediating mechanisms such as smoking, alcohol consumption, obesity, or use of antihypertensive medications [4-7].\nThe objectives of this study were to determine whether low educational attainment was associated with increased blood pressure from multiple longitudinal assessments over 30 years. Furthermore, we aimed to separate 'antecedant' effects of education, and other related factors, that might have caused baseline differences in blood pressure, from potential long-term effects of education on post-baseline changes in blood pressure. Analyses prioritized measures of systolic blood pressure (SBP) over diastolic blood pressure (DBP), as systolic hypertension is substantially more common than diastolic hypertension, and SBP contributes more to the global disease burden attributable to hypertension than DBP [8].", "[SUBTITLE] Study sample [SUBSECTION] The Framingham Heart Study is a community-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for coronary heart disease. The Framingham Offspring Study began in 1971 with recruitment of 5124 men and women who were offspring (or offspring's spouses) of the Original Cohort of the Framingham Heart Study. The design and selection criteria of the Framingham Offspring Study have been described elsewhere.[9] Participants were prospectively assessed during 7 examinations between 1971 and 2001. The consecutive examination dates were as follows: 1971-1975; 1979-1982; 1984-1987; 1987-1990; 1991-1995; 1996-1998, and 1998-2001. At each examination visit, participants underwent medical history, physical examination, anthropometry, and laboratory assessment of coronary heart disease risk factors, as previously described.[9] Framingham participants signed informed consent and the study is reviewed annually by the Boston University Medical Center Institutional Review Board.\nThere were 5124 participants who completed Offspring Examination 1 (in 1971-1975), of which 4989 (97%) agreed for their data to be in the open-access dataset. Of these, 1099 subjects were excluded from the present analyses because of missing education data (primarily among the participants who did not attend exams 2 or 3 when education was assessed) or being <28 years of age (n = 60) when education was assessed. Participants were restricted to those aged ≥28 years at the time education was assessed in order to allow at least 10 years from expected completion of high school (at age 18 years, on average), during which the participants could obtain higher levels of education. 
Consequently, 3890 subjects were included in the data analyses.\nThe Framingham Heart Study is a community-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for coronary heart disease. The Framingham Offspring Study began in 1971 with recruitment of 5124 men and women who were offspring (or offspring's spouses) of the Original Cohort of the Framingham Heart Study. The design and selection criteria of the Framingham Offspring Study have been described elsewhere.[9] Participants were prospectively assessed during 7 examinations between 1971 and 2001. The consecutive examination dates were as follows: 1971-1975; 1979-1982; 1984-1987; 1987-1990; 1991-1995; 1996-1998, and 1998-2001. At each examination visit, participants underwent medical history, physical examination, anthropometry, and laboratory assessment of coronary heart disease risk factors, as previously described.[9] Framingham participants signed informed consent and the study is reviewed annually by the Boston University Medical Center Institutional Review Board.\nThere were 5124 participants who completed Offspring Examination 1 (in 1971-1975), of which 4989 (97%) agreed for their data to be in the open-access dataset. Of these, 1099 subjects were excluded from the present analyses because of missing education data (primarily among the participants who did not attend exams 2 or 3 when education was assessed) or being <28 years of age (n = 60) when education was assessed. Participants were restricted to those aged ≥28 years at the time education was assessed in order to allow at least 10 years from expected completion of high school (at age 18 years, on average), during which the participants could obtain higher levels of education. Consequently, 3890 subjects were included in the data analyses.\n[SUBTITLE] Education [SUBSECTION] The participants' own education was measured directly from Framingham Offspring Study participants at Examinations 2 (1979-1982) and 3 (1984-1987). Examination 3 education was used whenever available, otherwise the Examination 2 measure was used. In the original data, education was recorded in 6 categories of completed years of education: 0-4, 5-8, 9-11, 12, 13-16, ≥17 years. For current analyses, the participants' own education was collapsed into 3 groups: ≤12 years (reflecting high school or less), 13-16 years (indicative of some post-secondary education including technical school and college degree) and ≥17 years education (approximating those with more than an undergraduate college degree). This grouping was motivated by both (i) statistical power considerations (to ensure adequate number of participants in each category) and (ii) substantive reasons, whereby the education categories represent educational milestones recognized to influence earnings, occupation type, and socioeconomic position in society.\nThe participants' own education was measured directly from Framingham Offspring Study participants at Examinations 2 (1979-1982) and 3 (1984-1987). Examination 3 education was used whenever available, otherwise the Examination 2 measure was used. In the original data, education was recorded in 6 categories of completed years of education: 0-4, 5-8, 9-11, 12, 13-16, ≥17 years. 
For current analyses, the participants' own education was collapsed into 3 groups: ≤12 years (reflecting high school or less), 13-16 years (indicative of some post-secondary education including technical school and college degree) and ≥17 years education (approximating those with more than an undergraduate college degree). This grouping was motivated by both (i) statistical power considerations (to ensure adequate number of participants in each category) and (ii) substantive reasons, whereby the education categories represent educational milestones recognized to influence earnings, occupation type, and socioeconomic position in society.\n[SUBTITLE] Blood Pressure [SUBSECTION] Each participant rested for at least five minutes before blood pressure measurement. While the participant remained seated, a physician measured SBP and DBP each twice in the left arm with a mercury-column sphygmomanometer, according to a standardized protocol [10]. The average of the two readings was used for analyses.\nEach participant rested for at least five minutes before blood pressure measurement. While the participant remained seated, a physician measured SBP and DBP each twice in the left arm with a mercury-column sphygmomanometer, according to a standardized protocol [10]. The average of the two readings was used for analyses.\n[SUBTITLE] Covariates [SUBSECTION] Covariates were measured at each exam. A binary indicator of current cigarette smoking was determined by self-report, defined as smoking regularly in the year prior to the examination (yes/no). Alcohol consumption was evaluated by self-reported average number of alcoholic drinks (e.g. beer, wine, cocktails) per week. Body mass index was calculated as the weight in kilograms divided by the square of the height in meters (kg/m2). Current antihypertensive medication use was self-reported and modeled as a binary variable (yes/no). \"Baseline age\" represented age at Examination 1. \"Time from baseline age\" was calculated as the difference between age at a given examination and the baseline age.\nCovariates were measured at each exam. A binary indicator of current cigarette smoking was determined by self-report, defined as smoking regularly in the year prior to the examination (yes/no). Alcohol consumption was evaluated by self-reported average number of alcoholic drinks (e.g. beer, wine, cocktails) per week. Body mass index was calculated as the weight in kilograms divided by the square of the height in meters (kg/m2). Current antihypertensive medication use was self-reported and modeled as a binary variable (yes/no). \"Baseline age\" represented age at Examination 1. \"Time from baseline age\" was calculated as the difference between age at a given examination and the baseline age.\n[SUBTITLE] Statistical Analyses [SUBSECTION] Primary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. 
To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. Because there are no well established tests to compare fit of models' based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with AR(1) covariance structure specified in REPEATED statement in SAS [13].\nThe second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent value measures for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. This approach allowed us to separate the antecedent effects of education, and other related factors, that might have resulted in the baseline differences in blood pressure, from the potential long-term effects of education on post-baseline changes in blood pressure.\nPreliminary analyses were carried out to determine the most accurate representation of the effects of baseline age, and time since baseline. In particular, we a priori expected that both between- and within-subject effects of aging on blood pressure may be non-linear. Accordingly, we gradually expanded the basic model, with linear effects of age and time, by adding and testing first quadratic and then cubic effects of each of the two variables, while adjusting for the use of anti-hypertensive medication and for education. Furthermore, because we expected that the impact of within-subject aging (time) on blood pressure may vary depending on the baseline age, we also tested linear and quadratic interactions between age and time. All the multivariable mixed models employed in the final analyses, described below, adjusted for only those non-linear effects of age or time, and those interactions between these effects, that were statistically significant, based on Wald test with 2-tailed α = 0.05. Once the optimal representations of the effects of age and time, as well as of their interactions, were determined, these representations were used in the final analyses of the adjusted association of education with SBP and DBP. 
The final representation of the effects of baseline age and time from baseline for analyses on SBP was as follows: age+age2+time+time2+age*time+age*time2+age2*time. For DBP, it was as follows: age+age2+time+ time2+age*time+age*time2+age2*time+ age2*time2. All analyses were sex-specific, as a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP; p = 0.063 for DBP).\nPrimary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. Because there are no well established tests to compare fit of models' based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with AR(1) covariance structure specified in REPEATED statement in SAS [13].\nThe second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent value measures for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. This approach allowed us to separate the antecedent effects of education, and other related factors, that might have resulted in the baseline differences in blood pressure, from the potential long-term effects of education on post-baseline changes in blood pressure.\nPreliminary analyses were carried out to determine the most accurate representation of the effects of baseline age, and time since baseline. In particular, we a priori expected that both between- and within-subject effects of aging on blood pressure may be non-linear. 
", "The Framingham Heart Study is a community-based, longitudinal, observational cohort study that was initiated in 1948 to prospectively investigate risk factors for coronary heart disease. The Framingham Offspring Study began in 1971 with recruitment of 5124 men and women who were offspring (or offspring's spouses) of the Original Cohort of the Framingham Heart Study. The design and selection criteria of the Framingham Offspring Study have been described elsewhere.[9] Participants were prospectively assessed during 7 examinations between 1971 and 2001. The consecutive examination dates were as follows: 1971-1975; 1979-1982; 1984-1987; 1987-1990; 1991-1995; 1996-1998, and 1998-2001. At each examination visit, participants underwent medical history, physical examination, anthropometry, and laboratory assessment of coronary heart disease risk factors, as previously described.[9] Framingham participants signed informed consent and the study is reviewed annually by the Boston University Medical Center Institutional Review Board.\nThere were 5124 participants who completed Offspring Examination 1 (in 1971-1975), of which 4989 (97%) agreed for their data to be in the open-access dataset. Of these, 1099 subjects were excluded from the present analyses because of missing education data (primarily among the participants who did not attend exams 2 or 3 when education was assessed) or being <28 years of age (n = 60) when education was assessed. Participants were restricted to those aged ≥28 years at the time education was assessed in order to allow at least 10 years from expected completion of high school (at age 18 years, on average), during which the participants could obtain higher levels of education. Consequently, 3890 subjects were included in the data analyses.", "The participants' own education was measured directly from Framingham Offspring Study participants at Examinations 2 (1979-1982) and 3 (1984-1987). Examination 3 education was used whenever available, otherwise the Examination 2 measure was used.
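As a small illustration of the rule just described (take the Examination 3 education report when present, otherwise fall back to the Examination 2 report), a hedged pandas sketch is given below; the column names and values are hypothetical and not taken from the study data.

```python
import pandas as pd

# Hypothetical per-participant education reports (years) from Examinations 2 and 3
df = pd.DataFrame({
    "id": [1, 2, 3],
    "educ_exam2": [12, 16, 14],
    "educ_exam3": [12, None, 17],   # participant 2 missed Examination 3
})

# Use the Examination 3 value whenever available, otherwise the Examination 2 measure
df["educ_years"] = df["educ_exam3"].fillna(df["educ_exam2"])
print(df)
```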
In the original data, education was recorded in 6 categories of completed years of education: 0-4, 5-8, 9-11, 12, 13-16, ≥17 years. For current analyses, the participants' own education was collapsed into 3 groups: ≤12 years (reflecting high school or less), 13-16 years (indicative of some post-secondary education including technical school and college degree) and ≥17 years education (approximating those with more than an undergraduate college degree). This grouping was motivated by both (i) statistical power considerations (to ensure adequate number of participants in each category) and (ii) substantive reasons, whereby the education categories represent educational milestones recognized to influence earnings, occupation type, and socioeconomic position in society.", "Each participant rested for at least five minutes before blood pressure measurement. While the participant remained seated, a physician measured SBP and DBP each twice in the left arm with a mercury-column sphygmomanometer, according to a standardized protocol [10]. The average of the two readings was used for analyses.", "Covariates were measured at each exam. A binary indicator of current cigarette smoking was determined by self-report, defined as smoking regularly in the year prior to the examination (yes/no). Alcohol consumption was evaluated by self-reported average number of alcoholic drinks (e.g. beer, wine, cocktails) per week. Body mass index was calculated as the weight in kilograms divided by the square of the height in meters (kg/m2). Current antihypertensive medication use was self-reported and modeled as a binary variable (yes/no). \"Baseline age\" represented age at Examination 1. \"Time from baseline age\" was calculated as the difference between age at a given examination and the baseline age.", "Primary analyses focused on associations of education (categorized as ≤12, 13-16, and ≥17 years, as described above) with longitudinal trajectories of SBP and DBP. Analyses relied on multivariable mixed linear models, which extend multiple linear regression to longitudinal analyses of repeated measures [11]. Accordingly, all effects reported in this study are likelihood-based estimates from mixed models, as are all 95% confidence intervals and test statistics used for inference about these estimates. The blood pressure measures from consecutive examinations represented repeated values of the continuous dependent variable. To model the dependence between repeated outcome measures, we used the autoregressive order 1 AR(1) covariance structure of the residuals, which assumed that blood pressure values measured at consecutive visits are correlated more strongly than those separated by longer time intervals [12]. All models adjusted for baseline age and time since baseline assessment (the former allowed us to adjust for potential cohort effects, such as increasing education over time in the United States). In further analyses, we additionally adjusted for several time-varying conventional risk factors for hypertension expected to be involved in potential mechanisms by which educational attainment may influence blood pressure (representing visit-specific binary indicators of current use of anti-hypertensive medication and current smoking, as well as time-varying continuous measures of alcohol consumption and body mass index). AR(1) is a standard choice for the covariance matrix in mixed model analyses of longitudinal data. 
Because there are no well-established tests to compare the fit of models based on alternative covariance structures, the AR(1) structure is usually selected a priori. Assessment of the consistency of the AR(1) assumption is shown in the Results section. We implemented this longitudinal analysis by using PROC MIXED, with the AR(1) covariance structure specified in the REPEATED statement in SAS [13].\nThe second class of models additionally adjusted all the education effects for baseline blood pressure. Accordingly, in these models, baseline blood pressure values were not used as the outcome measure, so that the number of dependent variable measurements for each subject was reduced by one. Adjustment for baseline blood pressure effectively implied that we compared post-baseline trajectories of blood pressure as if participants with different education had the same baseline blood pressure. This approach allowed us to separate the antecedent effects of education, and other related factors, that might have resulted in the baseline differences in blood pressure, from the potential long-term effects of education on post-baseline changes in blood pressure.\nPreliminary analyses were carried out to determine the most accurate representation of the effects of baseline age, and time since baseline. In particular, we a priori expected that both between- and within-subject effects of aging on blood pressure may be non-linear. Accordingly, we gradually expanded the basic model, with linear effects of age and time, by adding and testing first quadratic and then cubic effects of each of the two variables, while adjusting for the use of anti-hypertensive medication and for education. Furthermore, because we expected that the impact of within-subject aging (time) on blood pressure may vary depending on the baseline age, we also tested linear and quadratic interactions between age and time. All the multivariable mixed models employed in the final analyses, described below, adjusted for only those non-linear effects of age or time, and those interactions between these effects, that were statistically significant, based on a Wald test with 2-tailed α = 0.05. Once the optimal representations of the effects of age and time, as well as of their interactions, were determined, these representations were used in the final analyses of the adjusted association of education with SBP and DBP. The final representation of the effects of baseline age and time from baseline for analyses on SBP was as follows: age + age² + time + time² + age*time + age*time² + age²*time. For DBP, it was as follows: age + age² + time + time² + age*time + age*time² + age²*time + age²*time². All analyses were sex-specific, as a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP; p = 0.063 for DBP).", "Participants included in the current analyses had higher mean age (36.7 vs. 34.5 years, P < 0.001), and lower smoking rates (43.1% vs. 49.8%, P < 0.001) than excluded participants. Included and excluded participants had similar distributions of sex and baseline values of SBP, DBP, body mass index, alcohol consumption and antihypertensive medication use.\nIn females, unadjusted analyses demonstrated that education was inversely associated with baseline values of age, SBP, DBP, anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption (Table 1).
In males, education was inversely associated with age, SBP, DBP, body mass index, alcohol consumption and smoking.\nBaseline characteristics (means and proportions) of the Framingham Heart Study Offspring Cohort, according to educational attainment. Values shown in parentheses represent 95% confidence intervals.\nTo test the trend across education level, we used t tests for continuous variables, and Cochran-Armitage tests for categorical variables.\n†Pearson chi-square tests comparing males and females for proportion taking anti-hypertensive medication, and proportion current smokers demonstrated P-values of 0.41 and 0.49, respectively.\nUsing multivariable mixed linear models, mean SBP across the assessment times was higher for participants with low education compared with high education (Table 2, Figure 1), after adjusting for baseline age and time from baseline age, including their selected quadratic effects and two-way interactions, as shown in the footnotes of Table 2. Specifically, in these analyses, the mean difference across all 7 visits in SBP for ≤12 vs. ≥17 years education was 3.26 (95% CI: 1.46, 5.05) mmHg in females, and 2.26 (95% CI: 0.87, 3.66) mmHg in males (Table 2). Further adjustment for conventional time-dependent covariates representing current (updated) values of antihypertensive medication use, smoking, body mass index and alcohol consumption reduced the difference in females to 2.86 (95% CI: 1.13, 4.59) mmHg, and in males to 1.25 (95% CI: -0.16, 2.66) mmHg.\nMultivariable-adjusted mixed linear models, demonstrating associations between educational attainment and longitudinal trajectories of mean systolic blood pressure, Framingham Offspring Study, 1971-2001.\nPoint estimates (and 95% confidence intervals shown in parentheses) represent mean differences in systolic blood pressure (mmHg) between comparison and referent groups. Age adjustment refers to adjustment for baseline age and time from baseline age. Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time.\nConventional risk factors include antihypertensive medication, smoking, body mass index and alcohol consumption.\nMixed linear models adjusted for age, demonstrating associations of educational attainment with longitudinal trajectories of mean systolic blood pressure (SBP) in (A) females and (B) males. Age adjustment refers to adjustment for baseline age and time from baseline age. Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time. Error bars represent 95% confidence intervals. Framingham Offspring Study, 1971-2001.\nA second set of analyses adjusted for baseline SBP, in an effort to evaluate if there were educational differences in the post-baseline values of blood pressure, independent of the baseline differences. In analyses adjusted for baseline age, time from baseline age, and baseline SBP, females with ≤12 years education had 2.69 (95% CI: 1.09, 4.30) mmHg higher SBP over follow-up compared with females with ≥17 years education (Table 2). Further adjustment for conventional risk factors had minimal impact on the effect strength in females (effect reduced to 2.53 (95% CI: 0.93, 4.14) mmHg for ≤12 years vs. ≥17 years education).
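The adjusted differences above are reported as point estimates with 95% confidence intervals. Purely as an arithmetic illustration, under the usual normal approximation (estimate ± 1.96 × SE), the sketch below backs out the approximate standard error and z statistic implied by the first contrast quoted above; the function and its use are hypothetical and not part of the original analysis.

```python
def wald_ci_components(estimate, lower, upper, z=1.96):
    """Back out the approximate standard error and z statistic implied by a 95% CI."""
    se = (upper - lower) / (2 * z)
    return se, estimate / se

# ≤12 vs. ≥17 years education, SBP in females (age-adjusted model): 3.26 (1.46, 5.05) mmHg
se, z_stat = wald_ci_components(3.26, 1.46, 5.05)
print(round(se, 2), round(z_stat, 2))  # approximately 0.92 and 3.56
```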
Associations were weaker in males, where those with ≤12 years education had 1.20 (95% CI: -0.07, 2.46) mmHg higher SBP over follow-up compared to males with ≥17 years of education, after adjustment for baseline age, time from baseline age, and blood pressure; effects were substantially reduced after adjusting for conventional risk factors (Table 2). As a formal test for sex-by-education interaction suggested that the effects of education may differ between males and females (p = 0.046 for SBP), and of the covariates only alcohol consumption showed differential associations with education between males and females (Table 1), analyses were repeated without adjusting for alcohol consumption. This approach evaluated if gender differences in associations between education and SBP persisted with and without adjusting for alcohol consumption. Analyses adjusted for all aforementioned covariates with the exception of alcohol (i.e. adjusted for baseline age, time from baseline age, baseline SBP, antihypertensive medication, smoking, and body mass index) demonstrated persistent gender differences in associations, where the mean difference across all 7 visits in SBP for ≤12 vs. ≥17 years education was 2.20 (95% CI: 0.59, 3.81) mmHg in females, and 0.60 (95% CI: -0.72, 1.92) mmHg in males, suggesting that gender differences in the association between alcohol consumption and education were not a substantial explanation for gender differences in observed associations between education and SBP.\nDBP was higher in female, but less so in male, study participants of low compared to high educational attainment after adjusting for baseline age and time since baseline assessment including the selected quadratic effects and two-way interactions described in Table 3. Specifically, the mean difference in DBP across all assessment times, for ≤12 vs. ≥17 years education was 1.47 (95% CI: 0.43, 2.50) mmHg in females, and 0.66 (95% CI: -0.17, 1.50) mmHg in males (Table 3). Further adjustment for conventional risk factors including antihypertensive medication, smoking, body mass index and alcohol consumption reduced the association strength of low education with DBP somewhat in females, resulting in a smaller difference of 1.26 (95% CI: 0.25, 2.26) mmHg, and eliminated any association in males, with an adjusted difference of 0.05 (95% CI: -0.78, 0.86) mmHg. In analyses adjusted for baseline age, time from baseline age, and baseline DBP, there was no association between education and post-baseline values of DBP, among participants with the same baseline DBP, in either sex (Table 3).\nMultivariable-adjusted mixed linear models, demonstrating associations between educational attainment and longitudinal trajectories of mean diastolic blood pressure, Framingham Offspring Study, 1971-2001.\nPoint estimates (and 95% confidence intervals shown in parentheses) represent mean differences in diastolic blood pressure (mmHg) between comparison and referent groups. Age adjustment refers to adjustment for baseline age and time from baseline age. Modeling for baseline age and time from baseline was as follows: age + age² + time + time² + age*time + age*time² + age²*time + age²*time².\nConventional risk factors include antihypertensive medication, smoking, body mass index and alcohol consumption.\nIn order to assess the consistency of the AR(1) assumption with our data, we estimated the Pearson correlation coefficients between measurements at different time points, for both SBP and DBP, separately for females and males.
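To illustrate the kind of check described in the preceding sentence, the sketch below simulates long-format repeated blood pressure data with AR(1)-like dependence, reshapes it to one column per examination, and computes the visit-by-visit Pearson correlation matrix; with such data the correlations fall off as the gap between examinations grows. All names and values are synthetic and purely illustrative, not the study data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_subjects, n_visits, rho = 500, 7, 0.6

# Simulate AR(1)-correlated "SBP" values around a common mean (illustrative only)
records = []
for sid in range(n_subjects):
    e = rng.normal(size=n_visits)
    for v in range(1, n_visits):
        e[v] = rho * e[v - 1] + np.sqrt(1 - rho**2) * e[v]
    for v in range(n_visits):
        records.append({"id": sid, "exam": v + 1, "sbp": 125 + 10 * e[v]})

long_df = pd.DataFrame(records)

# One column per examination, then the visit-by-visit Pearson correlation matrix
wide = long_df.pivot(index="id", columns="exam", values="sbp")
print(wide.corr().round(2))
```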
The results of Pearson correlation coefficients (shown in Tables 4, 5, 6, 7) indeed show an autoregressive correlation structure, that is, the correlation decreases systematically as the distance in time between two measurements increases. In addition, we compared the AIC of the AR(1) model with that based on another popular structure: the exchangeable structure. As expected, based on Tables 4, 5, 6, 7 and a priori considerations, the AR(1) (e.g. for male in the model specified in the last column of Table 2, AIC = 73281) yielded better (i.e. lower AIC) than the exchangeable structure (AIC = 73434).\nPearson correlation coefficients of systolic blood pressure among females for examinations 1-7.\nPearson correlation coefficients of systolic blood pressure among males for examinations 1-7.\nPearson correlation coefficients of diastolic blood pressure among females for examinations 1-7.\nPearson correlation coefficients of diastolic blood pressure among males for examinations 1-7.", "Findings in this paper demonstrated that education was inversely associated with longitudinal trajectories of mean SBP in females and males. Furthermore, especially in females, lower education was associated with higher post-baseline SBP even among the participants with the same baseline SBP. This suggests that low education may have a long-term impact on changes over time in blood pressure in females. Adjusting for the time-varying values of conventional risk factors, measured at the same time as the blood pressure, typically reduced the strength of these associations. Associations of education with DBP were generally weaker than with SBP, for both females and males.\n[SUBTITLE] Prior Literature [SUBSECTION] Few studies have investigated sex-specific longitudinal trajectories of blood pressure, particularly over a substantial proportion of the life course. Diez Roux et al. in the ARIC cohort (n = 8555) aged 45 to 64 years at baseline and followed using 4 examinations over a period of 9 years, found in white participants, that education was marginally inversely associated with increases in blood pressure after adjusting for age, sex, center, medication use, and reported interactions between time and sex, and interactions between time and baseline age [4]. The 5-year change in mean SBP was 6.0 mmHg for those with <high school degree and 5.3 mmHg for those with a college degree. Further adjustment for baseline SBP somewhat reduced the association strength, to 5.9 mmHg for <high school and 5.5 mmHg for participants with college degree. Associations were weaker in black participants. Strand et al. demonstrated, in a large prospective study of 48,422 males and females aged 35-49 followed for 14 years using three examinations, that education was inversely associated with increases over time in SBP in males and females, after adjusting for year of birth [6]; socioeconomic disparities widened over time in females but not males. In a study on the Framingham Offspring cohort that included only participants aged 20-29 years at baseline (many of whom may not have completed education yet), education was not significantly associated with mean 8-year change in SBP or DBP in males or females, after adjusting for age [5]. In the CARDIA study of 2913 participants aged 18-30 years at baseline education was significantly inversely associated with mean 15-year change in both SBP and DBP [7]. 
Specifically for SBP, those with <high school degree had a 15-year mean increase of 8.2 mmHg versus only 0.7 mmHg for participants with >college graduate degree. However the observed associations were not adjusted for covariates [7]. Although prior cross-sectional studies suggested that associations may be stronger in females than males [3], little is known about sex-specific associations between education and blood pressure trajectories, particularly over long periods of the life course (>20 years follow-up). Finally, little is known about the effects of adjusting for use of antihypertensive medications, body mass index, alcohol consumption, smoking or other potential mechanisms that may, at least partly, mediate the impact of lower education on longitudinal trajectories of blood pressure. This study added to the literature sex-specific information demonstrating that education is inversely associated with longitudinal trajectories of mean SBP in females and males over a substantial proportion of the life course (approximately 30 years) and that association may be stronger in females than males. Furthermore, in females, lower education was associated with a higher mean post-baseline SBP even among participants with the same baseline SBP, suggesting a possible long-term impact of lower education. Adjusting for up-dated values of conventional risk factors typically reduced strengths of association, but in females the impact of lower education remained statistically significant. For DBP, association strengths were generally weaker for both females and males.
\n[SUBTITLE] Mechanisms [SUBSECTION] The primary candidate mechanisms by which education may influence longitudinal trajectories of blood pressure involve conventional risk factors for hypertension, including smoking, obesity, blood pressure medication use, and alcohol consumption. In this study, in females, education was inversely associated with anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption. In males, education was inversely associated with body mass index, alcohol consumption and smoking, and not associated with antihypertensive use. Furthermore, the estimated effects of education tended to somewhat decrease after adjusting for these potential mechanisms (particularly in males), suggesting that they may be at least partial explanatory pathways for the observed association between educational attainment and longitudinal trajectories of blood pressure. It is important to note that biases can be induced by adjusting for variables that may partly mediate the effect of exposure; therefore, these mechanistic findings should be interpreted with caution [14]. Furthermore, there remain plausible confounders unadjusted for, such as childhood socioeconomic circumstances (which are associated with adulthood education and blood pressure [15]), parental blood pressure (which may be associated with offspring education and has been related to offspring blood pressure [16]), intelligence (which is associated with educational attainment and CHD risk [17]), and early life obesity (that could affect upward social mobility via obesity discrimination particularly in women [18,19], and is related to blood pressure in adulthood [20]). Consequently, residual confounding remains a possibility.\nLow educational attainment has been demonstrated to predispose individuals to high strain jobs, characterized by high levels of demand and low levels of control, which have been associated with elevated blood pressure [21,22]. Other related mechanisms involve stress-induced sympathetic nervous system activation due to stressful conditions outside of work, which are also associated with low educational attainment.
These may be particularly important for women. It has been shown that women with low education have higher risk of co-occurring psychosocial determinants of poor health, including single-parenting, depression, income below the poverty threshold, and unemployment, compared to men with low education [23]. Consequently, low socioeconomic position may be a stronger determinant of hypertension risk in women compared with men. This may be one of the explanations for why we found a significant interaction between sex and education, and somewhat stronger associations between education and blood pressure in women than men. The extent of health care available for people of low socioeconomic position is typically less than what is available for those with high socioeconomic position, hence limiting access to treatments of hypertension [24]. Furthermore, there is evidence that people of low socioeconomic position have less healthful diets, such as lower rates of fruit and vegetable consumption, and higher salt intake, which may be additional mechanisms contributing to disparities in blood pressure [25,26].\nIt has been demonstrated that although both SBP and DBP are positively associated with incidence of coronary heart disease, there are differences in the way SBP and DBP evolve over the life course. SBP tends to increase steadily with age, while DBP tends to increase until age 50 years, and to decrease steadily after that age [4,8,27]. The mechanisms responsible for the age-related increase in DBP among younger people likely involve an atherosclerotic increase in peripheral resistance, caused by narrowing of the smaller arteries and arterioles [8,28]. In contrast, for older individuals, structural damage and calcification due to arteriosclerosis in the larger conduit arteries can result in loss of arterial compliance, which can cause a rise in SBP, but a reduction in DBP [8,28]. As the burden of hypertension is greatest after the age of 50 years, and it is exceedingly uncommon to have diastolic hypertension without concurrent systolic hypertension in adults over the age of 50 years, it has been argued that SBP is by far the more important measure of the two in terms of predictive importance for population health [8].\nStudies generally show consistent inverse associations between educational attainment and longitudinal changes in SBP [4,6,7], with the exception of young participants aged 20-29 years at baseline, followed over 8 years in the study by Hubert et al. [5] However, findings are less consistent for DBP, where studies have shown inverse [7], null [5], or even positive [4] associations between educational attainment and longitudinal changes in DBP. Our findings demonstrated fairly robust inverse associations of education with SBP, and weaker inconsistent associations with DBP. The pathophysiological mechanisms (e.g. smoking, obesity, alcohol consumption) that cause steady increases over the life course for SBP but not DBP, and also tend to be inversely associated with socioeconomic position, may explain the more consistent findings for the inverse association between education and changes in SBP rather than DBP over the life course. 
However, adjustment for these variables in our study appeared to account for only a small amount of the association in females, and a larger amount of the (weaker) association in males, suggesting there may be other explanatory factors, particularly in females.
\n[SUBTITLE] Strengths and Weaknesses [SUBSECTION] Strengths of this study include having access to data on approximately 30 years of longitudinal blood pressure measurements. Furthermore, follow-up rates of the Framingham Heart Study are considered to be high for observational studies, decreasing risk of bias due to loss-to-follow-up. Finally, measurements of blood pressure were performed using methods and equipment providing good accuracy and precision, and analyses relied on statistical methods appropriate for longitudinal repeated-measures studies.\nWith regard to weaknesses, because the historical design of the Framingham Offspring Study reflected the population of Framingham, Massachusetts at study onset in 1948, the Original and Offspring cohorts are largely composed of white participants. Consequently, the generalizability of our findings to other races/ethnicities is uncertain.
Furthermore, although we had up to 7 measurements for each covariate, we expect there to be reasonable residual confounding due to imperfect measurement of obesity (body mass index), and self-reported alcohol consumption, smoking and antihypertensive medication use.", "Few studies have investigated sex-specific longitudinal trajectories of blood pressure, particularly over a substantial proportion of the life course. Diez Roux et al. in the ARIC cohort (n = 8555) aged 45 to 64 years at baseline and followed using 4 examinations over a period of 9 years, found in white participants, that education was marginally inversely associated with increases in blood pressure after adjusting for age, sex, center, medication use, and reported interactions between time and sex, and interactions between time and baseline age [4]. The 5-year change in mean SBP was 6.0 mmHg for those with <high school degree and 5.3 mmHg for those with a college degree. Further adjustment for baseline SBP somewhat reduced the association strength, to 5.9 mmHg for <high school and 5.5 mmHg for participants with college degree. Associations were weaker in black participants. Strand et al. demonstrated, in a large prospective study of 48,422 males and females aged 35-49 followed for 14 years using three examinations, that education was inversely associated with increases over time in SBP in males and females, after adjusting for year of birth [6]; socioeconomic disparities widened over time in females but not males. In a study on the Framingham Offspring cohort that included only participants aged 20-29 years at baseline (many of whom may not have completed education yet), education was not significantly associated with mean 8-year change in SBP or DBP in males or females, after adjusting for age [5]. In the CARDIA study of 2913 participants aged 18-30 years at baseline education was significantly inversely associated with mean 15-year change in both SBP and DBP [7]. Specifically for SBP, those with <high school degree had a 15-year mean increase of 8.2 mmHg versus only 0.7 mmHg for participants with >college graduate degree. However the observed associations were not adjusted for covariates [7].
Although prior cross-sectional studies suggested that associations may be stronger in females than males [3], little is known about sex-specific associations between education and blood pressure trajectories, particularly over long periods of the life course (>20 years follow-up). Finally, little is known about the effects of adjusting for use of antihypertensive medications, body mass index, alcohol consumption, smoking or other potential mechanisms that may, at least partly, mediate the impact of lower education on longitudinal trajectories of blood pressure. This study added to the literature sex-specific information demonstrating that education is inversely associated with longitudinal trajectories of mean SBP in females and males over a substantial proportion of the life course (approximately 30 years) and that association may be stronger in females than males. Furthermore, in females, lower education was associated with a higher mean post-baseline SBP even among participants with the same baseline SBP, suggesting a possible long-term impact of lower education. Adjusting for up-dated values of conventional risk factors typically reduced strengths of association, but in females the impact of lower education remained statistically significant. For DBP, association strengths were generally weaker for both females and males.", "The primary candidate mechanisms by which education may influence longitudinal trajectories of blood pressure involve conventional risk factors for hypertension, including smoking, obesity, blood pressure medication use, and alcohol consumption. In this study, in females, education was inversely associated with anti-hypertensive medication use, body mass index and smoking, and directly associated with alcohol consumption. In males, education was inversely associated with body mass index, alcohol consumption and smoking, and not associated with antihypertensive use. Furthermore, the estimated effects of education tended to somewhat decrease after adjusting for these potential mechanisms (particularly in males), suggesting that they may be at least partial explanatory pathways for the observed association between educational attainment and longitudinal trajectories of blood pressure. It is important to note that biases can be induced by adjusting for variables that may partly mediate the effect of exposure; therefore, these mechanistic findings should be interpreted with caution [14]. Furthermore, there remain plausible confounders unadjusted for, such as childhood socioeconomic circumstances (which are associated with adulthood education and blood pressure [15]), parental blood pressure (which may be associated with offspring education and has been related to offspring blood pressure [16]), intelligence (which is associated with educational attainment and CHD risk [17]), and early life obesity (that could affect upward social mobility via obesity discrimination particularly in women [18,19], and is related to blood pressure in adulthood [20]. Consequently, residual confounding remains a possibility.\nLow educational attainment has been demonstrated to predispose individuals to high strain jobs, characterized by high levels of demand and low levels of control, which have been associated with elevated blood pressure [21,22]. Other related mechanisms involve stress-induced sympathetic nervous system activation due to stressful conditions outside of work, that are also associated with low educational attainment. These may be particularly important for women. 
It has been shown that women with low education have higher risk of co-occurring psychosocial determinants of poor health, including single-parenting, depression, income below the poverty threshold, and unemployment, compared to men with low education [23]. Consequently, low socioeconomic position may be a stronger determinant of hypertension risk in women compared with men. This may be one of the explanations for why we found a significant interaction between sex and education, and somewhat stronger associations between education and blood pressure in women than men. The extent of health care available for people of low socioeconomic position is typically less than what is available for those with high socioeconomic position, hence limiting access to treatments of hypertension [24]. Furthermore, there is evidence that people of low socioeconomic position have less healthful diets, such as lower rates of fruit and vegetable consumption, and higher salt intake, which may be additional mechanisms contributing to disparities in blood pressure [25,26].\nIt has been demonstrated that although both SBP and DBP are positively associated with incidence of coronary heart disease, there are differences in the way SBP and DBP evolve over the life course. SBP tends to increase steadily with age, while DBP tends to increase until age 50 years, and to decrease steadily after that age [4,8,27]. The mechanisms responsible for the age-related increase in DBP among younger people likely involve an atherosclerotic increase in peripheral resistance, caused by narrowing of the smaller arteries and arterioles [8,28]. In contrast, for older individuals, structural damage and calcification due to arteriosclerosis in the larger conduit arteries can result in loss of arterial compliance, which can cause a rise in SBP, but a reduction in DBP [8,28]. As the burden of hypertension is greatest after the age of 50 years, and it is exceedingly uncommon to have diastolic hypertension without concurrent systolic hypertension in adults over the age of 50 years, it has been argued that SBP is by far the more important measure of the two in terms of predictive importance for population health [8].\nStudies generally show consistent inverse associations between educational attainment and longitudinal changes in SBP [4,6,7], with the exception of young participants aged 20-29 years at baseline, followed over 8 years in the study by Hubert et al. [5] However, findings are less consistent for DBP, where studies have shown inverse [7], null [5], or even positive [4] associations between educational attainment and longitudinal changes in DBP. Our findings demonstrated fairly robust inverse associations of education with SBP, and weaker inconsistent associations with DBP. The pathophysiological mechanisms (e.g. smoking, obesity, alcohol consumption) that cause steady increases over the life course for SBP but not DBP, and also tend to be inversely associated with socioeconomic position, may explain the more consistent findings for the inverse association between education and changes in SBP rather than DBP over the life course. However, adjustment for these variables in our study appeared to account for only a small amount of the association in females, and a larger amount of the (weaker) association in males, suggesting there may be other explanatory factors, particularly in females.", "Strengths of this study include having access to data on approximately 30 years of longitudinal blood pressure measurements. 
Furthermore, follow-up rates of the Framingham Heart Study are considered to be high for observational studies, decreasing risk of bias due to loss-to-follow-up. Finally, measurements of blood pressure were performed using methods and equipment providing good accuracy and precision, and analyses relied on statistical methods appropriate for longitudinal repeated-measures studies.\nWith regard to weaknesses, because the historical design of the Framingham Offspring Study reflected the population of Framingham, Massachusetts at study onset in 1948, the Original and Offspring cohorts are largely composed of white participants. Consequently, the generalizability of our findings to other races/ethnicities is uncertain. Furthermore, although we had up to 7 measurements for each covariate, we expect there to be reasonable residual confounding due to imperfect measurement of obesity (body mass index), and self-reported alcohol consumption, smoking and antihypertensive medication use.", "This study provides evidence that education is inversely associated with systolic blood pressure throughout a 30 year life course span, and associations may be stronger in females than males. These findings provide evidence that education may be a potential risk factor for elevated blood pressure across the life course.", "The authors declare that they have no competing interests.", "EL initially conceived of the study and further developed the study objectives in collaboration with all co-authors. EL wrote much of the initial draft of the manuscript. MA was the senior biostatistics advisor and drafted the analytic approach in the Methods section. YX performed the analyses and advised on the analytic approach. JL made substantial contributions to the analytic approach, and advised on the subject matter related to socioeconomic disparities in blood pressure. All authors were involved with drafting the final manuscript, and revising it as needed for important intellectual content. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/139/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Carbon tetrachloride induced kidney and lung tissue damages and antioxidant activities of the aqueous rhizome extract of Podophyllum hexandrum.
21356055
The present study was conducted to evaluate the in vitro and in vivo antioxidant properties of an aqueous extract of Podophyllum hexandrum. The antioxidant potential of the plant extract in vitro was evaluated using two separate methods: inhibition of the superoxide radical and of the hydrogen peroxide radical. Carbon tetrachloride (CCl4) is a well-known toxicant, and exposure to this chemical is known to induce oxidative stress and cause tissue damage through the formation of free radicals.
BACKGROUND
Thirty-six albino rats were divided into six groups of 6 animals each; all animals were allowed food and water ad libitum. Group I (control) was given olive oil, while the remaining groups were injected intraperitoneally with a single dose of CCl4 (1 ml/kg) as a 50% (v/v) solution in olive oil. Group II received CCl4 only. Group III animals received vitamin E at a dose of 50 mg/kg body weight, and animals of groups IV, V and VI were given the extract of Podophyllum hexandrum at doses of 20, 30 and 50 mg/kg body weight. Antioxidant status in both kidney and lung tissues was estimated by determining the activities of the antioxidative enzymes glutathione reductase (GR), glutathione peroxidase (GPX), glutathione-S-transferase (GST) and superoxide dismutase (SOD), as well as by determining the levels of reduced glutathione (GSH) and thiobarbituric acid reactive substances (TBARS). In addition, the superoxide and hydrogen peroxide radical scavenging activity of the extract was also determined.
METHODS
Results showed that the extract possessed strong superoxide and hydrogen peroxide radical scavenging activity, comparable to that of the known antioxidant butylated hydroxytoluene (BHT). Our results also showed that CCl4 caused a marked increase in TBARS levels, whereas GSH, SOD, GR, GPX and GST levels were decreased in kidney and lung tissue homogenates of CCl4-treated rats. The aqueous extract of Podophyllum hexandrum successfully prevented these alterations in the experimental animals.
RESULTS
Our study demonstrated that the aqueous extract of Podophyllum hexandrum could protect kidney and lung tissue against CCl4-induced oxidative stress, probably by increasing antioxidant defense activities.
CONCLUSION
[ "Animals", "Antioxidants", "Carbon Tetrachloride", "Free Radical Scavengers", "Glutathione", "Kidney", "Kidney Diseases", "Lung", "Lung Diseases", "Male", "Oxidative Stress", "Phytotherapy", "Plant Extracts", "Podophyllum", "Rats", "Rats, Wistar", "Rhizome", "Thiobarbituric Acid Reactive Substances" ]
3056849
null
null
Methods
[SUBTITLE] Plant material collection and extraction [SUBSECTION] The rhizome of Podophyllum hexandrum was collected from higher reaches of Aharbal, Shopian, J&K, India in the month of May and June, identified by the Centre of Plant Taxonomy, Department of Botany, University of Kashmir, and authenticated by Dr. Irshad Ahmad Nawchoo (Department of Botany) and Akhter Hussain Malik (Curator, Centre for Plant Taxonomy, University of Kashmir). A reference specimen has been retained in the herbarium of the Department of Botany at the University of Kashmir under reference number KASH- bot/Ku/PH- 702- SAG. The plant material (rhizome) was dried in the shade at 30 ± 2°C. The dried rhizome material was ground into a powder using mortar and pestle and passed through a sieve of 0.3 mm mesh size. The powder obtained was extracted with water using a Soxhlet extractor (60-80°C). The extract was then concentrated with the help of rotary evaporator under reduced pressure and the solid extract was stored in refrigerator for future use. [SUBTITLE] Animals [SUBSECTION] Adult male albino rats of Wistar strain weighing 200-250 g used throughout this study were purchased from the Indian Institute of Integrative Medicine Jammu (IIIM). The animals had access to food and water ad libitum. The animals were maintained in a controlled environment under standard conditions of temperature and humidity with an alternating 12 hr light and dark cycle. The animals were maintained in accordance with the guidelines prescribed by the National Institute of Nutrition, Indian Council of Medical Research and the study was approved by the Animal Ethics Committee of the University of Kashmir.
[SUBTITLE] Experimental methods [SUBSECTION]
[SUBTITLE] Assessment of superoxide anion radical scavenging property [SUBSECTION] Superoxide anion radical generated by the xanthine/xanthine oxidase system was determined spectrophotometrically by monitoring the reduction product of nitroblue tetrazolium (NBT), using the method of Jung [13]. A reaction mixture containing 1.0 ml of 0.05 M phosphate buffer (pH 7.4), 0.04 ml of 0.15% BSA, 0.04 ml of 15.0 mM NBT and various concentrations of plant extract or known antioxidant was incubated at 25°C for 10 min; the reaction was then started by adding 0.04 ml of 1.5 U/ml xanthine oxidase and the mixture was incubated at 25°C for a further 20 min. The absorbance of the reaction mixture was measured at 560 nm. A decrease in absorbance indicates increased superoxide anion radical scavenging activity. The scavenging activity of the plant extract on the superoxide anion radical was expressed as:
% inhibition = [(A0 − A1)/A0] × 100
where A0 is the absorbance of the control and A1 is the absorbance in the presence of Podophyllum hexandrum extract or known antioxidant.
[SUBTITLE] Assessment of hydrogen peroxide scavenging activity [SUBSECTION] The ability of the Podophyllum hexandrum aqueous extract to scavenge hydrogen peroxide was evaluated according to the method of Ruch [14]. A solution of H2O2 (2 mmol/L) was prepared in phosphate buffer (pH 7.5). Plant extract (50-300 μg/ml) was added to the hydrogen peroxide solution (0.6 ml). The absorbance of hydrogen peroxide at 230 nm was determined after 15 minutes against a blank containing phosphate buffer without hydrogen peroxide. BHT was used as the known standard. The scavenging activity of the plant extract on H2O2 was expressed as:
% scavenged [H2O2] = [(A0 − A1)/A0] × 100
where A0 is the absorbance of the control and A1 is the absorbance in the presence of the plant extract or known standard.
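The percent-inhibition arithmetic is the same in both scavenging assays. The following minimal sketch (Python) shows the calculation from a control absorbance and a set of test absorbances; the concentrations and absorbance values are illustrative placeholders, not data from this study.

```python
# Minimal sketch of the % inhibition / % scavenging calculation used in both
# radical-scavenging assays: 100 * (A0 - A1) / A0, where A0 is the control
# absorbance and A1 the absorbance with extract (or standard) present.
# Concentrations and absorbance values below are illustrative only.

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Return % inhibition (or % scavenging) relative to the control."""
    return (a_control - a_sample) / a_control * 100.0

if __name__ == "__main__":
    a_control = 0.899                            # e.g. control absorbance at 560 nm
    samples = {50: 0.70, 100: 0.55, 300: 0.17}   # µg/ml extract -> absorbance (made up)
    for conc, a in samples.items():
        print(f"{conc:>4} µg/ml: {percent_inhibition(a_control, a):5.1f} % inhibition")
```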
[SUBTITLE] Dosage and treatment [SUBSECTION] Rats were divided into six groups of six rats each. The plant extract was given at oral doses of 20, 30 and 50 mg/kg/day, suspended in normal saline such that the final volume at each dose was 1 ml, and fed to the rats by gavage.
Group I - received olive oil vehicle only at 5 ml/kg/day.
Group II - received CCl4 in olive oil vehicle only.
Group III - received vitamin E (50 mg/kg/day).
Group IV - received 20 mg/kg/day extract orally for fifteen days.
Group V - received 30 mg/kg/day extract orally for fifteen days.
Group VI - received 50 mg/kg/day extract orally for fifteen days.
On the thirteenth day, animals from groups II-VI were injected intraperitoneally with CCl4 in olive oil vehicle at a dose of 1 ml/kg body weight. The rats were sacrificed 48 h after CCl4 administration, the kidneys and lungs were excised, and post mitochondrial supernatant of both tissues was prepared.
[SUBTITLE] Preparation of post mitochondrial supernatant (PMS) [SUBSECTION] Kidney and lung tissue was washed in ice-cold 1.15% KCl and homogenized in homogenizing buffer (50 mM Tris-HCl, 1.15% KCl, pH 7.4) using a Teflon homogenizer. The homogenate was centrifuged at 9,000 g for 20 minutes to remove debris, and the supernatant was further centrifuged at 15,000 g for 20 minutes at 4°C to obtain the PMS used for the biochemical assays. Protein concentration was estimated by the method of Lowry [15].
[SUBTITLE] Estimation of lipid peroxidation (PMS) [SUBSECTION] Lipid peroxidation in the tissues was estimated as thiobarbituric acid reactive substances (TBARS) by the method of Nichans and Samuelson [16]. In brief, 0.1 ml of tissue homogenate (PMS; Tris-HCl buffer, pH 7.5) was treated with 2 ml of TBA-TCA-HCl reagent (1:1:1; 0.37% thiobarbituric acid, 0.25 N HCl and 15% TCA), placed in a boiling water bath for 15 min, cooled, and centrifuged at room temperature for 10 min. The absorbance of the clear supernatant was measured against a reference blank at 535 nm.
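The TBARS read-out is usually converted to MDA equivalents with the Beer-Lambert law. The sketch below illustrates that conversion; the molar extinction coefficient of 1.56 × 10⁵ M⁻¹ cm⁻¹ for the MDA-TBA adduct, the 1 cm light path and the example volumes are conventional assumptions of mine, not values stated in the protocol above.

```python
# Sketch: convert A535 of the TBA-MDA adduct to nmol MDA per mg protein.
# ASSUMPTIONS (not given in the protocol above): extinction coefficient
# 1.56e5 M^-1 cm^-1 for the MDA-TBA adduct, 1 cm light path, and known
# reaction volume (ml) and protein content (mg) of the assayed aliquot.

EPSILON_MDA = 1.56e5   # M^-1 cm^-1, commonly used value (assumption)
PATH_CM = 1.0          # cuvette path length in cm (assumption)

def tbars_nmol_per_mg(a535: float, reaction_volume_ml: float, protein_mg: float) -> float:
    molar = a535 / (EPSILON_MDA * PATH_CM)               # mol/L of MDA-TBA adduct
    nmol = molar * (reaction_volume_ml / 1000.0) * 1e9   # nmol in the reaction
    return nmol / protein_mg

# Example: A535 = 0.25 in a 2.1 ml reaction containing 1.2 mg protein (made-up numbers)
print(round(tbars_nmol_per_mg(0.25, 2.1, 1.2), 2), "nmol MDA/mg protein")
```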
[SUBTITLE] Determination of total sulphydryl groups [SUBSECTION] Acid-soluble sulphydryl groups (non-protein thiols, of which more than 93% is reduced glutathione, GSH) form a yellow-coloured complex with DTNB that shows an absorption maximum at 412 nm. The assay followed the procedure of Moren [17]. Homogenate (500 μl) was precipitated with 100 μl of 25% TCA and centrifuged at 3,000 g for 10 minutes to settle the precipitate. A 100 μl aliquot of the supernatant was added to a test tube containing 2 ml of 0.6 mM DTNB and 0.9 ml of 0.2 mM sodium phosphate buffer (pH 7.4). The yellow colour was measured at 412 nm against a reagent blank containing 100 μl of 25% TCA in place of the supernatant. Sulphydryl content was calculated using the DTNB molar extinction coefficient of 13,100 M⁻¹ cm⁻¹.
[SUBTITLE] Glutathione peroxidase (GPX) [SUBSECTION] GPX activity was assayed by the method of Sharma [18]. The assay mixture consisted of 1.49 ml of sodium phosphate buffer (0.1 M, pH 7.4), 0.1 ml of 1 mM EDTA, 0.1 ml of 1 mM sodium azide, 0.1 ml of 1 mM GSH, 0.1 ml of 0.02 mM NADPH, 0.01 ml of 1 mM H2O2 and 0.1 ml of PMS in a total volume of 2 ml. Oxidation of NADPH was recorded spectrophotometrically at 340 nm and the enzyme activity was calculated as nmoles of NADPH oxidized/min/mg protein, using ε = 6.22 × 10³ M⁻¹ cm⁻¹.
[SUBTITLE] Glutathione reductase activity (GR) [SUBSECTION] GR activity was assayed by the method of Sharma [18]. The assay mixture consisted of 1.6 ml of sodium phosphate buffer (0.1 M, pH 7.4), 0.1 ml of 1 mM EDTA, 0.1 ml of 1 mM oxidized glutathione, 0.1 ml of 0.02 mM NADPH, 0.01 ml of 1 mM H2O2 and 0.1 ml of PMS in a total volume of 2 ml. The enzyme activity, measured at 340 nm, was calculated as nmoles of NADPH oxidized/min/mg protein using ε = 6.22 × 10³ M⁻¹ cm⁻¹.
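The DTNB and NADPH read-outs above reduce to the same Beer-Lambert calculation: an absorbance (or rate of absorbance change) divided by the quoted extinction coefficient, scaled to the assay volume and normalised to protein. A hedged sketch of that calculation follows; the extinction coefficients are those quoted in the protocols, while the absorbance values, volumes and protein amounts are placeholders.

```python
# Sketch: Beer-Lambert conversions behind the DTNB (412 nm) and NADPH (340 nm)
# read-outs. Extinction coefficients are those quoted in the protocols above;
# absorbance values, assay volumes and protein amounts are placeholders.

EPS_NADPH = 6.22e3   # M^-1 cm^-1, NADPH at 340 nm
EPS_DTNB  = 1.31e4   # M^-1 cm^-1, DTNB-thiol chromophore at 412 nm

def nmol_from_absorbance(absorbance: float, epsilon: float,
                         volume_ml: float, path_cm: float = 1.0) -> float:
    """nmol of chromophore in the assay volume, from Beer-Lambert."""
    return absorbance / (epsilon * path_cm) * (volume_ml / 1000.0) * 1e9

# GPX/GR: rate of NADPH oxidation, normalised to protein (made-up numbers)
delta_a340_per_min, assay_ml, protein_mg = 0.045, 2.0, 0.5
gpx = nmol_from_absorbance(delta_a340_per_min, EPS_NADPH, assay_ml) / protein_mg
print(f"GPX/GR: {gpx:.1f} nmol NADPH oxidized/min/mg protein")

# Total sulphydryls: end-point A412, normalised to protein (made-up numbers)
a412, thiol_assay_ml, thiol_protein_mg = 0.30, 3.0, 0.8
sh = nmol_from_absorbance(a412, EPS_DTNB, thiol_assay_ml) / thiol_protein_mg
print(f"Thiols: {sh:.1f} nmol SH/mg protein")
```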
[SUBTITLE] Glutathione-S-transferase (GST) activity [SUBSECTION] GST activity was assayed by the method of Haque [19]. The reaction mixture consisted of 1.67 ml of sodium phosphate buffer (0.1 M, pH 6.5), 0.2 ml of 1 mM GSH, 0.025 ml of 1 mM CDNB and 0.1 ml of PMS in a total volume of 2 ml. The change in absorbance was recorded at 340 nm and the enzyme activity was calculated as nmoles of CDNB conjugate formed/min/mg protein using ε = 9.6 × 10³ M⁻¹ cm⁻¹.
[SUBTITLE] Superoxide dismutase activity (SOD) [SUBSECTION] SOD activity was estimated by the method of Beauchamp and Fridovich [20]. The reaction mixture consisted of 0.5 ml of PMS, 1 ml of 50 mM sodium carbonate, 0.4 ml of 25 μM NBT and 0.2 ml of 0.1 mM EDTA. The reaction was initiated by the addition of 0.4 ml of 1 mM hydroxylamine hydrochloride, and the change in absorbance was recorded at 560 nm. A control was run simultaneously without tissue homogenate. One unit of SOD activity was expressed as the amount of enzyme required to inhibit the reduction of NBT by 50%.
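Because one unit of SOD is defined above as the amount of enzyme giving 50% inhibition of NBT reduction, the unit calculation is again a percent-inhibition problem. The sketch below estimates units from control and sample rates; in practice a dilution series is interpolated to the 50% point, a step not described above, so this is a simplification rather than the study's exact procedure.

```python
# Sketch: SOD units from inhibition of NBT reduction at 560 nm.
# One unit = amount of enzyme giving 50% inhibition of the control rate,
# so units ~= percent_inhibition / 50 for a single measurement. A dilution
# series interpolated to the 50% point is the usual practice; this is a
# deliberate simplification with made-up rates.

def percent_inhibition(rate_control: float, rate_sample: float) -> float:
    return (rate_control - rate_sample) / rate_control * 100.0

def sod_units(rate_control: float, rate_sample: float) -> float:
    return percent_inhibition(rate_control, rate_sample) / 50.0

# Example with made-up dA560/min rates:
print(round(sod_units(0.080, 0.030), 2), "units of SOD in the assayed aliquot")
```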
[SUBTITLE] Statistical analysis [SUBSECTION] Values are expressed as mean ± standard deviation (SD). The results were analysed with SPSS (version 12.0) and Origin 6 software and evaluated by one-way ANOVA followed by the Bonferroni t-test. Differences were considered statistically significant at P < 0.05.
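The study used SPSS and Origin for this analysis; an equivalent one-way ANOVA with Bonferroni-corrected pairwise comparisons can be sketched in Python with SciPy, as below. The group values are placeholders, not the study's data.

```python
# Sketch: one-way ANOVA followed by Bonferroni-corrected pairwise t-tests,
# mirroring the SPSS/Origin analysis described above. Data are placeholders.
from itertools import combinations
from scipy import stats

groups = {
    "control":        [0.10, 0.12, 0.11, 0.13, 0.12, 0.14],
    "CCl4":           [2.6, 2.9, 2.8, 2.7, 3.0, 2.8],
    "CCl4+extract50": [0.9, 1.1, 1.0, 0.8, 1.2, 1.0],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

pairs = list(combinations(groups, 2))
for g1, g2 in pairs:
    t, p = stats.ttest_ind(groups[g1], groups[g2])
    p_bonf = min(p * len(pairs), 1.0)   # Bonferroni correction
    print(f"{g1} vs {g2}: p_bonferroni = {p_bonf:.3g}")
```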
[ "Background", "Plant material collection and extraction", "Animals", "Experimental methods", "Assessment of superoxide anion radical scavenging property", "Assessment of Hydrogen peroxide scavenging activity", "Dosage and treatment", "Preparation of post mitochondrial supernatant (PMS)", "Estimation of lipid peroxidation (PMS)", "Determination of total sulphydryl groups", "Glutathione peroxidase (GPX)", "Glutathione Reductase activity (GR)", "Glutathione-S-transferase (GST) activity", "Super oxide dismutase activity (SOD)", "Statistical analysis", "Results", "Superoxide anion radical scavenging activity", "Hydrogen peroxide radical scavenging activity", "Effect of aqueous extract on lipid peroxidation in CCl4 treated rats", "Effect on (GPX activity)", "Effect on GR activity", "Effect on SOD activity", "Effect on GSH level", "Effect on GST activity", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Reactive oxygen species, including superoxide radicals (O2•-), hydrogen peroxide (H2O2) and hydroxyl radicals (OH•) are generated as byproducts of normal metabolism [1,2]. Cumulative oxidative damage leads to numerous diseases and disorders [3]. The enhanced production of free radicals and oxidative stress can also be induced by a variety of factors such as radiation or exposure to heavy metals and xenobiotics (e.g., carbon tetrachloride) [4]. Carbon tetrachloride (CCl4) intoxication in animals is an experimental model that mimics oxidative stress in many pathophysiological situations [5]. CCl4 intoxication in various studies has demonstrated that CCl4 causes free radical generation in many tissues such as liver, kidney, heart, lung, brain and blood [6]. The toxicity of CCl4 probably depends on formation of the trichloromethyl radical (CCl3•), which in the presence of oxygen interacts with it to form the more toxic trichloromethyl peroxyl radical (CCl3O2•) [7]. Studies also showed that various herbal extracts could protect organs against CCl4 induced oxidative stress by altering the levels of increased lipid peroxidation and enhancing the decreased activities of antioxidant enzymes, like superoxide dismutase (SOD), catalase (CAT) and glutathione-S-transferase (GST) as well as enhanced the decreased level of the reduced glutathione (GSH) [8]. In the modern medicine, plants occupy a significant berth as raw materials for some important drug preparations [9]. Podophyllum hexandrum (PH) has been extensively exploited in traditional Ayurvedic system of medicine for treatment of a number of ailments like Condyloma acuminata, Taenia capitis, monocytoid leukemia, Hodgkins disease, non-Hodgkin's Lymphoma, cancer of brain, lung, bladder and venereal warts [10]. PH is reported to contain a number of compounds with significant pharmacological properties, e.g. epipodophyllotoxin, podophyllotoxone, 4-methylpodophyllotoxin, aryltetrahydronaphthalene lignans, flavonoids such as quercetin, quercetin-3-glycoside, 4-demethylpodophyllotoxin glycoside, podophyllotoxinglycoside, kaempferol and kaempferol-3-glucoside [11,12]. In this particular study, protective role of aqueous extract of the Podophyllum hexandrum was evaluated against free radical mediated damages under in vitro and in vivo situations. In vitro assays were carried on superoxide radical scavenging activity and hydrogen peroxide radical scavenging activity. Here, kidney and lung toxicity was induced by administering a single dose of CCl4 into experimental adult male albino rats and radical scavenging activity of the extract was evaluated by measuring the levels of GSH and extent of lipid peroxidation in kidney and lung tissue homogenates and activity of antioxidant enzymes via SOD, GPX, GR and GST. In addition, study on the effect of a known antioxidant, vitamin E, was also included against CCl4 induced kidney and lung oxidative stress. The major aim of the present study was to examine the protective mechanisms of aqueous extract of PH in kidney and lung tissues in carbon tetrachloride intoxicated rats.", "The rhizome of Podophyllum hexandrum was collected from higher reaches of Aharbal, Shopian, J&K, India in the month of May and June, identified by the Centre of Plant Taxonomy, Department of Botany, University of Kashmir, and authenticated by Dr. Irshad Ahmad Nawchoo (Department of Botany) and Akhter Hussain Malik (Curator, Centre for Plant Taxonomy, University of Kashmir). 
[SUBTITLE] Results [SUBSECTION]
[SUBTITLE] Superoxide anion radical scavenging activity [SUBSECTION] The superoxide anion radical scavenging activity of varying amounts of the aqueous extract of Podophyllum hexandrum was determined with the xanthine-xanthine oxidase system. Table 1 shows the percentage inhibition of superoxide radical generation by 50-300 μg of extract in comparison with the same amounts of BHT. The aqueous extract of Podophyllum hexandrum exhibited somewhat lower superoxide radical scavenging activity than BHT. The percentage inhibition of superoxide generation at a concentration of 300 μg/ml was 81.64% for the aqueous extract and 85.71% for BHT, suggesting that Podophyllum hexandrum has strong superoxide radical scavenging activity at higher concentrations.
Table 1. Effect of Podophyllum hexandrum aqueous extract and the known antioxidant BHT on superoxide radical scavenging activity. Absorbance of the control at 560 nm = 0.899 ± 0.25; results are mean ± SD of 3 separate experiments.
[SUBTITLE] Hydrogen peroxide radical scavenging activity [SUBSECTION] Table 2 shows the scavenging effect of the Podophyllum hexandrum extract on H2O2 in comparison with the standard BHT, in an amount-dependent manner. The extract and BHT exhibited 67.51% and 72.17% scavenging activity on hydrogen peroxide at 300 μg/ml, respectively, again suggesting that the Podophyllum hexandrum extract possesses strong free radical scavenging activity, comparable to that of BHT.
Table 2. Effect of Podophyllum hexandrum aqueous extract and the known antioxidant BHT on hydrogen peroxide radical scavenging activity. Absorbance of the control at 230 nm = 0.665 ± 0.15; results are mean ± SD of 3 separate experiments.
[SUBTITLE] Effect of aqueous extract on lipid peroxidation in CCl4 treated rats [SUBSECTION] TBARS concentrations (expressed as MDA) in the kidney and lung tissue homogenates of all experimental animals are shown in Figures 1 and 2. After CCl4 administration, the MDA level increased significantly from 0.12 to 2.8 nmol/mg protein in kidney tissue homogenate and from 0.02 to 2.30 nmol/mg protein in lung tissue homogenate. Pretreatment with the aqueous extract of Podophyllum hexandrum for 15 days decreased the MDA level in a dose dependent manner in both tissue homogenates.
Vitamin E treated animals also showed a significant decrease in MDA levels compared with CCl4 treated animals.
Figure 1. Effect of the aqueous extract on lipid peroxidation in kidney tissue homogenate of CCl4 treated rats. Figure 2. Effect of the aqueous extract on lipid peroxidation in lung tissue homogenate of CCl4 treated rats. For both figures: $ p < 0.001 vs. normal control; # p < 0.001 vs. CCl4 group; @ p < 0.001 vs. vitamin E; data are mean ± SD for six animals per group, evaluated by one-way ANOVA followed by the Bonferroni t-test; differences were considered statistically significant at p < 0.05.
In order to investigate whether the antioxidant activities of Podophyllum hexandrum are mediated by an increase in antioxidant enzymes, we measured GPX, GR, SOD and GST activities in the kidney and lung tissues of rats treated with the Podophyllum hexandrum rhizome aqueous extract.
In the present study, treatment of rats with the Podophyllum hexandrum rhizome aqueous extract significantly increased kidney and lung tissue SOD, GPX, GR and GST activities.
[SUBTITLE] Effect on GPX activity [SUBSECTION] Figures 3 and 4 show that glutathione peroxidase activity in kidney and lung tissue was significantly decreased in CCl4 treated animals compared with controls. Pretreatment with the aqueous extract significantly increased GPX activity in a dose dependent manner. At the highest dose of plant extract (50 mg/kg), the activity increased to 49.30 from 11.20 in the CCl4 treated group in kidney tissue, and from 3.51 to 11.93 in lung tissue at the same dose. Vitamin E (50 mg/kg) treated animals also showed a significant increase in GPX activity in both tested organs.
Figure 3. Dose dependent effect of the aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in kidney tissue homogenate. Figure 4. Dose dependent effect of the aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in lung tissue homogenate. For both figures: $ p < 0.001 vs. normal control; # p < 0.001 vs. CCl4 group; @ p < 0.001 vs. vitamin E; data are mean ± SD for six animals per group, one-way ANOVA followed by the Bonferroni t-test; p < 0.05 considered significant.
[SUBTITLE] Effect on GR activity [SUBSECTION] Glutathione reductase (GR) activity was significantly decreased in CCl4 treated animals compared with the control group. A significant increase in glutathione reductase activity was observed in the aqueous extract treated groups in both tested organs; at the highest dose of plant extract (50 mg/kg) the activity increased many fold (Figures 5 and 6). Similar results were obtained with vitamin E.
Figure 5. Effect of the aqueous extract of Podophyllum hexandrum on glutathione reductase activity in kidney tissue. Figure 6. Effect of the aqueous extract of Podophyllum hexandrum on glutathione reductase activity in lung tissue. For both figures: $ p < 0.001 vs. normal control; # p < 0.001 vs. CCl4 group; @ p < 0.001 vs. vitamin E; data are mean ± SD for six animals per group, one-way ANOVA followed by the Bonferroni t-test; p < 0.05 considered significant.
[SUBTITLE] Effect on SOD activity [SUBSECTION] The effects of CCl4 and of CCl4 plus the aqueous extract of Podophyllum hexandrum on kidney and lung tissue SOD activity are presented in Figure 7. SOD activity in both tested organs was significantly decreased in the CCl4 treated group.
Administration of the extract restored the altered SOD activity significantly, increasing it in a dose dependent manner in both organs. Similar results were observed in the vitamin E treated group.

Figure 7 (kidney and lung tissue): effect of the aqueous extract of Podophyllum hexandrum on superoxide dismutase activity.

[SUBTITLE] Effect on GSH level [SUBSECTION] CCl4 administration markedly decreased the level of reduced glutathione from 54.73 (control) to 9.62 μg/mg protein in kidney tissue and from 31.93 (control) to 6.53 μg/mg protein in lung tissue, demonstrating oxidative stress.
Pretreatment with the aqueous extract of Podophyllum hexandrum significantly ameliorated the CCl4-induced depletion of GSH in both lung and kidney in a dose dependent manner (Figures 8 and 9).

Figures 8 and 9 (kidney and lung): effect of the aqueous extract of Podophyllum hexandrum on GSH levels in CCl4 induced kidney and lung damage in rats.

[SUBTITLE] Effect on GST activity [SUBSECTION] GST activity measured in the lung and kidney tissue homogenates of all experimental animals is shown in Figure 10. In kidney tissue homogenate, decreased GST activity was observed in CCl4 treated animals (17.71 nmoles) compared with the normal control group (38.28 nmoles). Pretreatment with the aqueous extract for 15 days prior to CCl4 intoxication enhanced this activity significantly in a dose dependent manner. In lung homogenate, GST activity in the CCl4 treated group (5.22 nmoles) was lower than in the normal group (9.11 nmoles), while GST activity was increased in the lung tissue homogenate of rats treated with the aqueous extract at 20 and 30 mg/kg bw for 15 days prior to CCl4 treatment.
In the vitamin E and 50 mg/kg bw plant extract pretreated groups, GST activity in lung tissue was close to the normal level.

Figure 10: effect of the aqueous extract on glutathione-S-transferase activity; the left panel shows kidney tissue and the right panel lung tissue against CCl4 induced damage (& denotes non-significant compared with the normal control).

[SUBTITLE] Superoxide anion radical scavenging activity [SUBSECTION] The superoxide anion radical scavenging activity of varying amounts of the aqueous extract of Podophyllum hexandrum was determined with the Xanthine-Xanthine oxidase system. Table 1 shows the percentage inhibition of superoxide radical generation by 50-300 μg of extract and the comparison with the same amounts of BHT. The aqueous extract of Podophyllum hexandrum exhibited somewhat lower superoxide radical scavenging activity than BHT. At 300 μg/ml, the percentage inhibition of superoxide generation by the aqueous extract and by BHT was 81.64% and 85.71%, respectively, suggesting that Podophyllum hexandrum has strong superoxide radical scavenging activity at higher concentrations.

Table 1. Effect of Podophyllum hexandrum aqueous extract and the known antioxidant BHT on superoxide radical scavenging activity. Absorbance of the control at 560 nm = 0.899 ± 0.25. Results represent mean ± SD of 3 separate experiments.

[SUBTITLE] Hydrogen peroxide radical scavenging activity [SUBSECTION] Table 2 shows the scavenging effect of the Podophyllum hexandrum extract on H2O2, compared with the standard BHT, in an amount dependent manner. The extract and BHT exhibited 67.51% and 72.17% scavenging activity on hydrogen peroxide at 300 μg/ml, respectively, again suggesting that the Podophyllum hexandrum extract possesses strong free radical scavenging activity, comparable to that of BHT.

Table 2. Effect of Podophyllum hexandrum aqueous extract and the known antioxidant BHT on hydrogen peroxide radical scavenging activity. Absorbance of the control at 230 nm = 0.665 ± 0.15. Results represent mean ± SD of 3 separate experiments.

[SUBTITLE] Effect of aqueous extract on lipid peroxidation in CCl4 treated rats [SUBSECTION] TBARS concentrations (expressed as MDA) in the kidney and lung tissue homogenates of all experimental animals are shown in Figures 1 and 2. After CCl4 administration, the MDA level increased significantly from 0.12 to 2.8 nmol/mg protein in kidney tissue homogenate and from 0.02 to 2.30 nmol/mg protein in lung tissue homogenate. Pretreatment with the aqueous extract of Podophyllum hexandrum for 15 days, however, decreased the MDA level in a dose dependent manner in both tissue homogenates. Vitamin E treated animals also showed a significant decrease in MDA levels compared with CCl4 treated animals.

Figures 1 and 2 (kidney and lung tissue homogenates): effect of the aqueous extract on lipid peroxidation in CCl4 treated rats.
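The size of these effects can be read directly from the values quoted above as simple ratios. The short Python sketch below only reproduces that arithmetic; the numbers are those reported in the text and the labels are ours.

```python
# Fold-differences computed from the tissue values quoted in the Results text.
# Each entry is (lower value, higher value) exactly as reported above.
reported = {
    "Kidney GPX: CCl4 group vs 50 mg/kg extract": (11.20, 49.30),
    "Lung GPX: CCl4 group vs 50 mg/kg extract": (3.51, 11.93),
    "Kidney GSH (ug/mg protein): CCl4 group vs control": (9.62, 54.73),
    "Lung GSH (ug/mg protein): CCl4 group vs control": (6.53, 31.93),
    "Kidney GST (nmoles): CCl4 group vs control": (17.71, 38.28),
    "Lung GST (nmoles): CCl4 group vs control": (5.22, 9.11),
    "Kidney MDA (nmol/mg protein): control vs CCl4 group": (0.12, 2.80),
    "Lung MDA (nmol/mg protein): control vs CCl4 group": (0.02, 2.30),
}

for label, (low, high) in reported.items():
    print(f"{label}: {high / low:.1f}-fold difference")
```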
In order to investigate whether the antioxidant activities of Podophyllum hexandrum are mediated by an increase in antioxidant enzymes, GPx, GR, SOD and GST activities were measured in the kidney and lung tissues of rats treated with the rhizome aqueous extract; as described above, pretreatment significantly increased all of these activities in both organs.
[SUBTITLE] Discussion [SUBSECTION] When administered, CCl4 is distributed and deposited in organs such as the liver, brain, kidney, lung and heart [21]. The reactive trichloromethyl radical (•CCl3) and the trichloromethyl peroxyl radical (CCl3O2•) are formed from the metabolic conversion of CCl4 by cytochrome P-450. As the O2 tension rises, a greater fraction of the •CCl3 present in the system reacts very rapidly with O2, and more reactive free radicals such as CCl3OO• are generated from •CCl3. These free radicals initiate peroxidation of membrane polyunsaturated fatty acids (PUFA), cell necrosis, GSH depletion, membrane damage and loss of antioxidant enzyme activity.

In this experimental study we investigated the protective effect of the aqueous extract of Podophyllum hexandrum against such free radical mediated damage. Free radicals, e.g. the superoxide radical, hydrogen peroxide and the hydroxyl radical, from both endogenous and exogenous sources, are implicated in the etiology of several degenerative diseases, such as coronary artery disease, stroke, rheumatoid arthritis, diabetes and cancer [22]. High consumption of fruits and vegetables is associated with a low risk of these diseases, which is attributed to antioxidant vitamins and other phytochemicals [23,24]. The extent of the initial damage caused by free radicals is further amplified by Fenton reaction generated hydroxyl radicals in the presence of superoxide and hydrogen peroxide [25]. Thus, the redox state and the concentration of iron ions in the cellular milieu play a crucial role in the amplification of damage [26], as they interact with membranes to generate alkoxyl and peroxyl radicals, thereby inflicting further damage on the cellular system [27].

Superoxide is biologically important since it can be decomposed to form stronger oxidative species such as singlet oxygen and hydroxyl radicals, which are very harmful to the cellular components of a biological system [28]. The superoxide radical is generated from O2 by multiple pathways [29,30].
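For reference, the iron catalysed amplification step referred to above is the classical Fenton/Haber-Weiss cycle. The reactions below are the standard textbook forms; they are included only to make the mechanism explicit and are not reproduced from this paper.

```latex
\begin{align*}
\mathrm{Fe^{2+} + H_2O_2} &\longrightarrow \mathrm{Fe^{3+} + OH^{-} + {}^{\bullet}OH} && \text{(Fenton reaction)}\\
\mathrm{Fe^{3+} + O_2^{\bullet-}} &\longrightarrow \mathrm{Fe^{2+} + O_2} && \text{(re-reduction of iron by superoxide)}\\
\mathrm{O_2^{\bullet-} + H_2O_2} &\xrightarrow{\ \mathrm{Fe}\ } \mathrm{O_2 + OH^{-} + {}^{\bullet}OH} && \text{(net iron catalysed Haber--Weiss reaction)}
\end{align*}
```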
Using the NBT assay system to generate superoxide radical, dose dependent inhibition was observed with increasing concentrations of the Podophyllum hexandrum rhizome aqueous extract, indicating that the extract possesses superoxide scavenging properties.

Hydrogen peroxide itself is not very reactive, but it can give rise to the highly reactive hydroxyl radical (•OH) through the Fenton reaction [31]. Earlier reports suggest that H2O2 can induce DNA breaks in intact cells and in purified DNA [32]. The H2O2-scavenging activity of the Podophyllum hexandrum aqueous extract and of the standard BHT increased in a dose dependent manner, with comparable results observed at the highest concentration. Similar results were reported by Duh [33] for Chrysanthemum morifolium, with a strong relationship between the phenolic content and the hydrogen peroxide scavenging activity of the aqueous extracts. Chaudhary et al. previously reported that Podophyllum hexandrum possesses strong antioxidant activity against superoxide and hydroxyl radicals under in vitro conditions [34], and Chawla et al. have also established the antioxidant potential of different extracts of Podophyllum hexandrum [35].

The level of kidney and lung MDA in the CCl4 treated group was significantly higher than in the control group. The increase in MDA level in both tissues suggests enhanced lipid peroxidation leading to tissue damage and failure of the antioxidant mechanisms to prevent the production of excessive free radicals. Our previous results showed that an ethanolic extract of Podophyllum hexandrum possesses strong hepatoprotective activity against CCl4 induced damage in albino rats [36]. Similar results were previously reported in kidney by Ogeturk [37] and in liver tissue by Yang [38] and Melin [39], who stated that CCl4 metabolized by cytochrome P-450 generates a highly reactive free radical that initiates lipid peroxidation of the endoplasmic reticulum membrane and causes a chain reaction. These reactive oxygen species can cause oxidative damage to DNA, proteins and lipids. Pretreatment with the Podophyllum hexandrum extract in this study, however, significantly prevented CCl4-induced lipid peroxidation in kidney and lung tissue. Our results are in conformity with the report by Padma and Setty [40] that administration of an aqueous extract of Phyllanthus fraternus significantly decreased carbon tetrachloride induced lipid peroxidation in different organs of rats under in vivo conditions.

GSH is involved in several defence processes against oxidative damage and protects cells against free radicals, peroxides and other toxic compounds [41]. Indeed, glutathione depletion increases the sensitivity of cells to various aggressions and also has several metabolic effects. It is widely known that a deficiency of GSH within living organisms can lead to tissue disorder and injury [42]. In our study, the kidney and lung GSH levels in the CCl4 treated group were significantly decreased compared with the control group. Likewise, we [36] and others, e.g. Ohta [43], have reported a significant decrease in the GSH content of different organs of rats injected with CCl4. Pretreatment with the Podophyllum hexandrum aqueous extract, however, increased the GSH level compared with the CCl4 groups, thus affording protection. The antioxidant effects are likely to be mediated by the restoration of the CCl4 induced decreases in SOD, GR, GPx and GST activities in various tissues of rats.
Treatment of rats with the Podophyllum hexandrum aqueous extract significantly increased rat lung and kidney SOD, GR, GST and GPx activities. Tirkey [44] recently conducted experiments to determine the effect of CCl4 on renal damage in rats and obtained similar results. All of these enzymes are major free radical scavenging enzymes whose activities have been shown to be reduced in a number of pathophysiological processes and diseases such as diabetes [45]. Thus, the activation of these enzymes by administration of the Podophyllum hexandrum aqueous extract clearly shows that Podophyllum, through its free radical scavenging activity, could exert a beneficial action against pathophysiological alterations caused by the presence of superoxide, hydrogen peroxide and hydroxyl radicals.

[SUBTITLE] Conclusion [SUBSECTION] Taking all the findings together, we conclude that the aqueous extract of Podophyllum hexandrum exhibits good antioxidant activity in both in vitro and in vivo experiments. The in vitro antioxidant tests proved that the plant possesses components with strong superoxide and hydrogen peroxide radical scavenging activity. The study also suggests that the extract has the potential to protect kidney and lung tissue against oxidative damage and could be used as an effective protector against CCl4 induced kidney and lung damage. Further work is needed to fully characterize the active principles present in the plant that are responsible for these functions and to elucidate their possible mode of action.

[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.

[SUBTITLE] Authors' contributions [SUBSECTION] SAG designed the study, conducted the experiments, analyzed the data and drafted the manuscript. EH and AH made substantial contributions to the design of the study, the collection of the data and the interpretation and analysis of the data; they also drafted the manuscript and gave final approval for its submission to the Journal for consideration of publication. AM made substantial contributions to the design of the study, the collection of the data and the interpretation and analysis of the data; he also drafted the manuscript and gave final approval for its submission. MAZ, the investigator-in-charge for the study, made substantial contributions to the design of the study and to the interpretation and analysis of the data; he also drafted the manuscript and gave final approval for its submission. YQ, ZM and BAZ made substantial contributions to the design of the study and also helped in the compilation of the manuscript. All authors read and approved the final manuscript.

[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6882/11/17/prepub
[ "Background", "Methods", "Plant material collection and extraction", "Animals", "Experimental methods", "Assessment of superoxide anion radical scavenging property", "Assessment of Hydrogen peroxide scavenging activity", "Dosage and treatment", "Preparation of post mitochondrial supernatant (PMS)", "Estimation of lipid peroxidation (PMS)", "Determination of total sulphydryl groups", "Glutathione peroxidase (GPX)", "Glutathione Reductase activity (GR)", "Glutathione-S-transferase (GST) activity", "Super oxide dismutase activity (SOD)", "Statistical analysis", "Results", "Superoxide anion radical scavenging activity", "Hydrogen peroxide radical scavenging activity", "Effect of aqueous extract on lipid peroxidation in CCl4 treated rats", "Effect on (GPX activity)", "Effect on GR activity", "Effect on SOD activity", "Effect on GSH level", "Effect on GST activity", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Reactive oxygen species, including superoxide radicals (O2•-), hydrogen peroxide (H2O2) and hydroxyl radicals (OH•) are generated as byproducts of normal metabolism [1,2]. Cumulative oxidative damage leads to numerous diseases and disorders [3]. The enhanced production of free radicals and oxidative stress can also be induced by a variety of factors such as radiation or exposure to heavy metals and xenobiotics (e.g., carbon tetrachloride) [4]. Carbon tetrachloride (CCl4) intoxication in animals is an experimental model that mimics oxidative stress in many pathophysiological situations [5]. CCl4 intoxication in various studies has demonstrated that CCl4 causes free radical generation in many tissues such as liver, kidney, heart, lung, brain and blood [6]. The toxicity of CCl4 probably depends on formation of the trichloromethyl radical (CCl3•), which in the presence of oxygen interacts with it to form the more toxic trichloromethyl peroxyl radical (CCl3O2•) [7]. Studies also showed that various herbal extracts could protect organs against CCl4 induced oxidative stress by altering the levels of increased lipid peroxidation and enhancing the decreased activities of antioxidant enzymes, like superoxide dismutase (SOD), catalase (CAT) and glutathione-S-transferase (GST) as well as enhanced the decreased level of the reduced glutathione (GSH) [8]. In the modern medicine, plants occupy a significant berth as raw materials for some important drug preparations [9]. Podophyllum hexandrum (PH) has been extensively exploited in traditional Ayurvedic system of medicine for treatment of a number of ailments like Condyloma acuminata, Taenia capitis, monocytoid leukemia, Hodgkins disease, non-Hodgkin's Lymphoma, cancer of brain, lung, bladder and venereal warts [10]. PH is reported to contain a number of compounds with significant pharmacological properties, e.g. epipodophyllotoxin, podophyllotoxone, 4-methylpodophyllotoxin, aryltetrahydronaphthalene lignans, flavonoids such as quercetin, quercetin-3-glycoside, 4-demethylpodophyllotoxin glycoside, podophyllotoxinglycoside, kaempferol and kaempferol-3-glucoside [11,12]. In this particular study, protective role of aqueous extract of the Podophyllum hexandrum was evaluated against free radical mediated damages under in vitro and in vivo situations. In vitro assays were carried on superoxide radical scavenging activity and hydrogen peroxide radical scavenging activity. Here, kidney and lung toxicity was induced by administering a single dose of CCl4 into experimental adult male albino rats and radical scavenging activity of the extract was evaluated by measuring the levels of GSH and extent of lipid peroxidation in kidney and lung tissue homogenates and activity of antioxidant enzymes via SOD, GPX, GR and GST. In addition, study on the effect of a known antioxidant, vitamin E, was also included against CCl4 induced kidney and lung oxidative stress. The major aim of the present study was to examine the protective mechanisms of aqueous extract of PH in kidney and lung tissues in carbon tetrachloride intoxicated rats.", "[SUBTITLE] Plant material collection and extraction [SUBSECTION] The rhizome of Podophyllum hexandrum was collected from higher reaches of Aharbal, Shopian, J&K, India in the month of May and June, identified by the Centre of Plant Taxonomy, Department of Botany, University of Kashmir, and authenticated by Dr. Irshad Ahmad Nawchoo (Department of Botany) and Akhter Hussain Malik (Curator, Centre for Plant Taxonomy, University of Kashmir). 
A reference specimen has been retained in the herbarium of the Department of Botany at the University of Kashmir under reference number KASH-bot/Ku/PH-702-SAG.

The plant material (rhizome) was dried in the shade at 30 ± 2°C. The dried rhizome material was ground into a powder using a mortar and pestle and passed through a sieve of 0.3 mm mesh size. The powder obtained was extracted with water using a Soxhlet extractor (60-80°C). The extract was then concentrated with the help of a rotary evaporator under reduced pressure, and the solid extract was stored in a refrigerator for future use.

[SUBTITLE] Animals [SUBSECTION] Adult male albino rats of the Wistar strain weighing 200-250 g were used throughout this study and were purchased from the Indian Institute of Integrative Medicine, Jammu (IIIM). The animals had access to food and water ad libitum and were maintained in a controlled environment under standard conditions of temperature and humidity with an alternating 12 h light and dark cycle. The animals were maintained in accordance with the guidelines prescribed by the National Institute of Nutrition, Indian Council of Medical Research, and the study was approved by the Animal Ethics Committee of the University of Kashmir.

[SUBTITLE] Experimental methods [SUBSECTION]

[SUBTITLE] Assessment of superoxide anion radical scavenging property [SUBSECTION] Superoxide anion radical generated by the Xanthine/Xanthine oxidase system was determined spectrophotometrically by monitoring the reduction product of nitroblue tetrazolium (NBT), using the method of Jung [13]. A reaction mixture containing 1.0 ml of 0.05 M phosphate buffer (pH 7.4), 0.04 ml of 0.15% BSA, 0.04 ml of 15.0 mM NBT and various concentrations of plant extract or known antioxidant was incubated at 25°C for 10 min; the reaction was then started by adding 0.04 ml of 1.5 U/ml Xanthine oxidase and the mixture was incubated at 25°C for a further 20 min. The absorbance of the reaction mixture was measured at 560 nm.
A decreased absorbance of the reaction mixture indicates increased superoxide anion radical scavenging activity. The scavenging activity of the plant extract on the superoxide anion radical was expressed as:

% inhibition = [(A0 − A1) / A0] × 100

where A0 is the absorbance of the control and A1 is the absorbance in the presence of the Podophyllum hexandrum extract or the known antioxidant.

[SUBTITLE] Assessment of Hydrogen peroxide scavenging activity [SUBSECTION] The ability of the Podophyllum hexandrum aqueous extract to scavenge hydrogen peroxide was evaluated according to the method of Ruch [14]. A solution of H2O2 (2 mmol) was prepared in phosphate buffer (pH 7.5). Plant extract (50-300 μg/ml) was added to the hydrogen peroxide solution (0.6 ml). The absorbance of hydrogen peroxide at 230 nm was determined after 15 minutes against a blank solution containing phosphate buffer without hydrogen peroxide. BHT was used as the known standard. The scavenging activity of the plant extract on H2O2 was expressed as:

% scavenged [H2O2] = [(A0 − A1) / A0] × 100

where A0 is the absorbance of the control and A1 is the absorbance in the presence of the plant extract or the known standard.
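Both scavenging assays therefore reduce to the same percentage calculation from a control absorbance and a sample absorbance. A minimal Python sketch of that calculation is given below; the function and the example sample absorbance are ours, not from the paper, while the control absorbance (0.899 at 560 nm) is the value reported with Table 1.

```python
def percent_scavenging(a_control: float, a_sample: float) -> float:
    """Percent inhibition / percent scavenging: [(A0 - A1) / A0] * 100,
    where A0 is the control absorbance and A1 the absorbance with extract or standard."""
    return (a_control - a_sample) / a_control * 100.0

# Example using the control absorbance reported for the superoxide assay (A0 = 0.899 at 560 nm)
# and a hypothetical sample absorbance of 0.165:
print(round(percent_scavenging(0.899, 0.165), 2))  # about 81.6 % inhibition
```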
[SUBTITLE] Dosage and treatment [SUBSECTION] Rats were divided into six groups of six rats each. The plant extract was administered at oral doses of 20, 30 and 50 mg/kg-day, suspended in normal saline such that the final volume at each dose was 1 ml, and fed to the rats by gavage.

Group I - received olive oil vehicle only at 5 ml/kg-day.
Group II - received CCl4 in olive oil vehicle only.
Group III - received vitamin E (50 mg/kg-day).
Group IV - received 20 mg/kg-day extract orally for fifteen days.
Group V - received 30 mg/kg-day extract orally for fifteen days.
Group VI - received 50 mg/kg-day extract orally for fifteen days.

On the thirteenth day, animals from groups II-VI were injected intraperitoneally with CCl4 in olive oil vehicle at a dose of 1 ml/kg bw. The rats were sacrificed 48 h after CCl4 administration, the kidney and lung tissue was isolated, and post mitochondrial supernatant of both tissues was prepared.

[SUBTITLE] Preparation of post mitochondrial supernatant (PMS) [SUBSECTION] Kidney and lung tissue was washed in ice-cold 1.15% KCl and homogenized in homogenizing buffer (50 mM Tris-HCl, 1.15% KCl, pH 7.4) using a Teflon homogenizer. The homogenate was centrifuged at 9,000 g for 20 minutes to remove debris. The supernatant was further centrifuged at 15,000 g for 20 minutes at 4°C to obtain the PMS, which was used for the various biochemical assays. Protein concentration was estimated by the method of Lowry [15].

[SUBTITLE] Estimation of lipid peroxidation (PMS) [SUBSECTION] Lipid peroxidation in tissues was estimated by the formation of thiobarbituric acid reactive substances (TBARS) by the method of Nichans and Samuelson [16]. In brief, 0.1 ml of tissue homogenate (PMS; Tris-HCl buffer, pH 7.5) was treated with 2 ml of TBA-TCA-HCl reagent (0.37% thiobarbituric acid, 0.25 N HCl and 15% TCA, 1:1:1 ratio), placed in a boiling water bath for 15 min, cooled and centrifuged at room temperature for 10 min. The absorbance of the clear supernatant was measured against a reference blank at 535 nm.
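The TBARS absorbance at 535 nm is usually converted to an MDA concentration with the molar extinction coefficient of the MDA-TBA adduct and the Beer-Lambert law. The paper does not state the coefficient, the light path or the final bookkeeping, so the sketch below uses the conventional ε = 1.56 × 10⁵ M⁻¹ cm⁻¹, a 1 cm path and the 2.1 ml reaction volume implied by the mixture described above; all of these are assumptions.

```python
def mda_nmol_per_mg_protein(a535: float, protein_mg: float,
                            assay_volume_ml: float = 2.1,
                            epsilon_m1_cm1: float = 1.56e5,
                            path_cm: float = 1.0) -> float:
    """Convert TBARS absorbance at 535 nm to nmol MDA per mg protein (Beer-Lambert law).
    The extinction coefficient, path length and volume are assumed values, not from the paper."""
    mda_molar = a535 / (epsilon_m1_cm1 * path_cm)              # mol/L of MDA-TBA adduct
    mda_nmol = mda_molar * (assay_volume_ml / 1000.0) * 1e9    # nmol in the reaction volume
    return mda_nmol / protein_mg

# Hypothetical example: A535 = 0.25 with 1.5 mg protein in the 0.1 ml homogenate aliquot
print(round(mda_nmol_per_mg_protein(0.25, 1.5), 2))
```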
[SUBTITLE] Determination of total sulphydryl groups [SUBSECTION] Acid soluble sulphydryl groups (non-protein thiols, of which more than 93% is reduced glutathione, GSH) form a yellow coloured complex with DTNB that shows an absorption maximum at 412 nm. The assay procedure followed that of Moren [17]. 500 μl of homogenate was precipitated with 100 μl of 25% TCA and centrifuged at 3000 g for 10 minutes to settle the precipitate. 100 μl of the resulting supernatant was added to a test tube containing 2 ml of 0.6 mM DTNB and 0.9 ml of 0.2 mM sodium phosphate buffer (pH 7.4). The yellow colour obtained was measured at 412 nm against a reagent blank containing 100 μl of 25% TCA in place of the supernatant. The sulphydryl content was calculated using the DTNB molar extinction coefficient of 13,100.

[SUBTITLE] Glutathione peroxidase (GPX) [SUBSECTION] GPX activity was assayed using the method of Sharma [18]. The assay mixture consisted of 1.49 ml of sodium phosphate buffer (0.1 M, pH 7.4), 0.1 ml of EDTA (1 mM), 0.1 ml of sodium azide (1 mM), 0.1 ml of 1 mM GSH, 0.1 ml of NADPH (0.02 mM), 0.01 ml of 1 mM H2O2 and 0.1 ml of PMS in a total volume of 2 ml. Oxidation of NADPH was recorded spectrophotometrically at 340 nm and the enzyme activity was calculated as nmoles of NADPH oxidized/min/mg protein, using ε = 6.22 × 10³ M⁻¹ cm⁻¹.

[SUBTITLE] Glutathione Reductase activity (GR) [SUBSECTION] GR activity was assayed by the method of Sharma [18]. The assay mixture consisted of 1.6 ml of sodium phosphate buffer (0.1 M, pH 7.4), 0.1 ml of EDTA (1 mM), 0.1 ml of 1 mM oxidized glutathione, 0.1 ml of NADPH (0.02 mM), 0.01 ml of 1 mM H2O2 and 0.1 ml of PMS in a total volume of 2 ml. The enzyme activity, measured at 340 nm, was calculated as nmoles of NADPH oxidized/min/mg protein using ε = 6.22 × 10³ M⁻¹ cm⁻¹.
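Both the GPX and GR activities are obtained from the rate of NADPH oxidation at 340 nm with the extinction coefficient quoted above (6.22 × 10³ M⁻¹ cm⁻¹), and the sulphydryl assay converts A412 with the DTNB coefficient of 13,100 M⁻¹ cm⁻¹. The Python sketch below applies the Beer-Lambert law to these numbers; a 1 cm light path is assumed, since it is not stated in the text, and the function names are ours.

```python
def nadph_specific_activity(delta_a340_per_min: float, protein_mg: float,
                            assay_volume_ml: float = 2.0,
                            epsilon_m1_cm1: float = 6.22e3,
                            path_cm: float = 1.0) -> float:
    """GPX/GR activity as nmol NADPH oxidized per min per mg protein,
    using the 6.22 x 10^3 M^-1 cm^-1 coefficient quoted in the text (1 cm path assumed)."""
    rate_molar_per_min = delta_a340_per_min / (epsilon_m1_cm1 * path_cm)   # mol L^-1 min^-1
    nmol_per_min = rate_molar_per_min * (assay_volume_ml / 1000.0) * 1e9   # nmol per min in the 2 ml assay
    return nmol_per_min / protein_mg

def sulphydryl_molar(a412: float, epsilon_m1_cm1: float = 13100.0, path_cm: float = 1.0) -> float:
    """Molar concentration of acid soluble -SH groups in the DTNB reaction mixture (1 cm path assumed)."""
    return a412 / (epsilon_m1_cm1 * path_cm)

# Hypothetical example: a change of 0.05 absorbance units per minute with 0.8 mg protein in the assay
print(round(nadph_specific_activity(0.05, 0.8), 1))   # nmol NADPH oxidized/min/mg protein
```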
[SUBTITLE] Glutathione-S-transferase (GST) activity [SUBSECTION] GST activity was assayed using the method of Haque [19]. The reaction mixture consisted of 1.67 ml of sodium phosphate buffer (0.1 M, pH 6.5), 0.2 ml of 1 mM GSH, 0.025 ml of 1 mM CDNB and 0.1 ml of PMS in a total volume of 2 ml. The change in absorbance was recorded at 340 nm and the enzyme activity was calculated as nmoles of CDNB conjugate formed/min/mg protein using ε = 9.6 × 10³ M⁻¹ cm⁻¹.

[SUBTITLE] Superoxide dismutase (SOD) activity [SUBSECTION] SOD activity was estimated by the method of Beauchamp and Fridovich [20]. The reaction mixture consisted of 0.5 ml of PMS, 1 ml of 50 mM sodium carbonate, 0.4 ml of 25 μM NBT and 0.2 ml of 0.1 mM EDTA. The reaction was initiated by the addition of 0.4 ml of 1 mM hydroxylamine hydrochloride and the change in absorbance was recorded at 560 nm. A control was run simultaneously without tissue homogenate. Units of SOD activity are expressed as the amount of enzyme required to inhibit the reduction of NBT by 50%.
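One common way to turn this definition into a number is to compute the percentage inhibition of NBT reduction relative to the enzyme free control and divide by 50, so that the amount of sample producing 50% inhibition corresponds to one unit. The authors do not spell out their bookkeeping, so the sketch below is only one plausible reading of that definition.

```python
def sod_units(delta_a560_control: float, delta_a560_sample: float) -> float:
    """SOD units under the '50% inhibition of NBT reduction' definition:
    percent inhibition relative to the enzyme free control, divided by 50.
    This is one common convention; the authors' exact calculation is not described."""
    percent_inhibition = (delta_a560_control - delta_a560_sample) / delta_a560_control * 100.0
    return percent_inhibition / 50.0

# Hypothetical example: the homogenate halves the rate of NBT reduction, i.e. one unit of SOD
print(sod_units(0.040, 0.020))  # 1.0
```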
[SUBTITLE] Statistical analysis [SUBSECTION] The values are expressed as mean ± standard deviation (SD). The results were analysed using SPSS (version 12.0) and Origin 6 software and evaluated by one-way ANOVA followed by the Bonferroni t-test. Statistical significance was considered when the value of P was < 0.05.\nThe rhizome of Podophyllum hexandrum was collected from the higher reaches of Aharbal, Shopian, J&K, India, in the months of May and June, identified by the Centre of Plant Taxonomy, Department of Botany, University of Kashmir, and authenticated by Dr. Irshad Ahmad Nawchoo (Department of Botany) and Akhter Hussain Malik (Curator, Centre for Plant Taxonomy, University of Kashmir). A reference specimen has been retained in the herbarium of the Department of Botany at the University of Kashmir under reference number KASH- bot/Ku/PH- 702- SAG.\nThe plant material (rhizome) was dried in the shade at 30 ± 2°C. The dried rhizome material was ground into a powder using a mortar and pestle and passed through a sieve of 0.3 mm mesh size. The powder obtained was extracted with water using a Soxhlet extractor (60-80°C). The extract was then concentrated with a rotary evaporator under reduced pressure and the solid extract was stored in a refrigerator for future use.\nAdult male albino rats of the Wistar strain weighing 200-250 g, used throughout this study, were purchased from the Indian Institute of Integrative Medicine Jammu (IIIM). The animals had access to food and water ad libitum and were maintained in a controlled environment under standard conditions of temperature and humidity with an alternating 12 hr light and dark cycle. The animals were maintained in accordance with the guidelines prescribed by the National Institute of Nutrition, Indian Council of Medical Research, and the study was approved by the Animal Ethics Committee of the University of Kashmir.
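The statistical treatment described above (one-way ANOVA across groups followed by Bonferroni-corrected pairwise comparisons) maps onto standard SciPy calls. A sketch with placeholder measurements, assuming six animals per group as in the design described below:

# One-way ANOVA followed by Bonferroni-corrected pairwise t-tests, mirroring the
# analysis described above. Group values are placeholders, not data from the study.
from itertools import combinations
from scipy import stats

groups = {
    "control":           [0.10, 0.12, 0.14, 0.11, 0.13, 0.12],
    "CCl4":              [2.70, 2.85, 2.95, 2.75, 2.80, 2.75],
    "CCl4 + extract 50": [0.60, 0.55, 0.65, 0.58, 0.62, 0.59],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

pairs = list(combinations(groups, 2))
alpha_corrected = 0.05 / len(pairs)          # Bonferroni adjustment
for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "significant" if p < alpha_corrected else "n.s."
    print(f"{a} vs {b}: p = {p:.3g} ({verdict} at the corrected alpha)")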
[SUBTITLE] Assessment of superoxide anion radical scavenging property [SUBSECTION] Superoxide anion radical generated by the xanthine/xanthine oxidase system was determined spectrophotometrically by monitoring the reduction of nitroblue tetrazolium (NBT), following the method of Jung [13]. A reaction mixture containing 1.0 ml of 0.05 M phosphate buffer (pH 7.4), 0.04 ml of 0.15% BSA, 0.04 ml of 15.0 mM NBT and various concentrations of plant extract or known antioxidant was incubated at 25°C for 10 min; the reaction was then started by adding 0.04 ml of 1.5 U/ml xanthine oxidase and the mixture was incubated again at 25°C for 20 min. The absorbance of the reaction mixture was measured at 560 nm; decreased absorbance indicates increased superoxide anion radical scavenging activity.\nThe scavenging activity of the plant extract on the superoxide anion radical was expressed as:\n% inhibition = [(A0 − A1)/A0] × 100\nwhere A0 is the absorbance of the control and A1 is the absorbance in the presence of the Podophyllum hexandrum extract or the known antioxidant.\n[SUBTITLE] Assessment of Hydrogen peroxide scavenging activity [SUBSECTION] The ability of the Podophyllum hexandrum aqueous extract to scavenge hydrogen peroxide was evaluated according to the method of Ruch [14]. A solution of H2O2 (2 mmol) was prepared in phosphate buffer (pH 7.5). Plant extract (50-300 μg/ml) was added to the hydrogen peroxide solution (0.6 ml). The absorbance of hydrogen peroxide at 230 nm was determined after 15 minutes against a blank solution containing phosphate buffer without hydrogen peroxide. BHT was used as the known standard. The scavenging activity of the plant extract on H2O2 was expressed as:\n% scavenged [H2O2] = [(A0 − A1)/A0] × 100\nwhere A0 is the absorbance of the control and A1 is the absorbance in the presence of the plant extract or the known standard.
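Both scavenging indices above are the same relative-decrease calculation. A small Python sketch; the control absorbance is the value quoted later for the superoxide assay (0.899 at 560 nm) and the sample readings are placeholders:

# Percentage inhibition / scavenging as defined above: [(A0 - A1) / A0] x 100.
def percent_inhibition(a_control, a_sample):
    return (a_control - a_sample) / a_control * 100.0

a0 = 0.899                                              # control A560 quoted in the results
samples = {50: 0.62, 100: 0.48, 200: 0.30, 300: 0.17}   # placeholder A560 per ug/ml of extract
for amount, a1 in samples.items():
    print(f"{amount} ug/ml extract: {percent_inhibition(a0, a1):.1f} % inhibition")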
Rats were divided into six groups, each containing six rats. The plant extract was administered at oral doses of 20, 30 and 50 mg/kg-day. The extract was suspended in normal saline such that the final volume at each dose was 1 ml, which was given to the rats by gavage.\nGroup I - Received the olive oil vehicle only at 5 ml/kg-day.\nGroup II - Received CCl4 in olive oil vehicle only.\nGroup III - Was administered vitamin E (50 mg/kg-day).\nGroup IV - Received 20 mg/kg-day extract orally for fifteen days.\nGroup V - Received 30 mg/kg-day extract orally for fifteen days.\nGroup VI - Received 50 mg/kg-day extract orally for fifteen days.\nOn the thirteenth day, animals from groups II-VI were injected intraperitoneally with CCl4 in olive oil vehicle at a dose of 1 ml/kg body weight. The rats were sacrificed 48 hr after CCl4 administration, the kidney and lung tissue was removed, and the post mitochondrial supernatant of both tissues was prepared as described above.
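A brief sketch of the dosing arithmetic implied by the design above: the amount of extract dissolved in the fixed 1 ml gavage volume, and the CCl4 volume injected intraperitoneally at 1 ml/kg. The body weight is an assumed mid-range value; the doses and volumes are those stated in the protocol:

# Dosing arithmetic for the treatment groups described above.
DOSES_MG_PER_KG = {"Group IV": 20, "Group V": 30, "Group VI": 50}
GAVAGE_VOLUME_ML = 1.0        # fixed final gavage volume per the protocol
CCL4_ML_PER_KG = 1.0          # i.p. dose given on day 13

def extract_mg_per_rat(dose_mg_per_kg, body_weight_g):
    return dose_mg_per_kg * body_weight_g / 1000.0

BODY_WEIGHT_G = 225           # assumed mid-range of the stated 200-250 g
for group, dose in DOSES_MG_PER_KG.items():
    mg = extract_mg_per_rat(dose, BODY_WEIGHT_G)
    print(f"{group}: {mg:.1f} mg extract in {GAVAGE_VOLUME_ML:.0f} ml saline "
          f"({mg / GAVAGE_VOLUME_ML:.1f} mg/ml)")

print(f"CCl4 volume for a {BODY_WEIGHT_G} g rat: {CCL4_ML_PER_KG * BODY_WEIGHT_G / 1000:.2f} ml i.p.")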
[SUBTITLE] Superoxide anion radical scavenging activity [SUBSECTION] The superoxide anion radical scavenging activity of varying amounts of the aqueous extract of Podophyllum hexandrum was determined with the xanthine-xanthine oxidase system. Table 1 shows the percentage inhibition of superoxide radical generation by 50-300 μg of extract in comparison with the same amounts of BHT. The aqueous extract of Podophyllum hexandrum exhibited somewhat lower superoxide radical scavenging activity than BHT; the percentage inhibition of superoxide generation at 300 μg/ml of aqueous extract and of BHT was found to be 81.64 and 85.71%, respectively, suggesting that Podophyllum hexandrum has strong superoxide radical scavenging activity at higher concentrations.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on superoxide radical scavenging activity. Absorbance of control at 560 nm = 0.899 ± 0.25. The results represent mean ± S.D of 3 separate experiments.\n[SUBTITLE] Hydrogen peroxide radical scavenging activity [SUBSECTION] Table 2 shows the scavenging effect of Podophyllum hexandrum extract on H2O2 and the comparison with standard BHT in an amount dependent manner. 
As shown in table the extract and BHT exhibited 67.51 and 72.17% scavenging activity on hydrogen peroxide at 300 μg/ml respectively, again suggests that Podophyllum hexandrum extract possess a strong free radical scavenging activity, comparable to that of BHT.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on hydrogen peroxide radical scavenging activity.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on hydrogen peroxide radical scavenging activity. Absorbance of control at 230 nm = 0.665 ± 0.15. The results represent mean ± S.D of 3 separate experiments.\n[SUBTITLE] Effect of aqueous extract on lipid peroxidation in CCl4 treated rats [SUBSECTION] TBARS concentrations (expressed as MDA) in the kidney and lung tissue homogenates of all the experimental animals are shown in Figure 1 and 2. After CCl4 administration, the MDA levels increased significantly from 0.12 to 2.8 nmol/mg protein in kidney tissue homogenate and in lung tissue homogenate, the MDA level increased from 0.02 to 2.30 nmol/mg protein. However pretreatment of aqueous extract of Podophyllum hexandrum for 15 days decreased the MDA level in a dose dependent manner in both tissue homogenates. Vitamin E treated animals also showed significant decrease in the MDA levels as compared to CCl4 treated animals.\nRepresents the effect of aqueous extract on kidney tissue homogenate lipid peroxidation in CCl4 treated rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract on lung tissue homogenate lipid peroxidation in CCl4 treated rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nIn order to investigate whether the antioxidant activities of Podophyllum hexandrum are mediated by an increase in antioxidant enzymes, we measured GPx, GR, SOD and GST activities in kidney and lung tissues of rats treated with Podophyllum hexandrum rhizome aqueous extract. In the present study, treatment of rats with Podophyllum hexandrum rhizome aqueous extract significantly increased rat, kidney and lung tissue SOD, GPX, GR and GST activities.\nTBARS concentrations (expressed as MDA) in the kidney and lung tissue homogenates of all the experimental animals are shown in Figure 1 and 2. After CCl4 administration, the MDA levels increased significantly from 0.12 to 2.8 nmol/mg protein in kidney tissue homogenate and in lung tissue homogenate, the MDA level increased from 0.02 to 2.30 nmol/mg protein. However pretreatment of aqueous extract of Podophyllum hexandrum for 15 days decreased the MDA level in a dose dependent manner in both tissue homogenates. Vitamin E treated animals also showed significant decrease in the MDA levels as compared to CCl4 treated animals.\nRepresents the effect of aqueous extract on kidney tissue homogenate lipid peroxidation in CCl4 treated rats. 
$; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract on lung tissue homogenate lipid peroxidation in CCl4 treated rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nIn order to investigate whether the antioxidant activities of Podophyllum hexandrum are mediated by an increase in antioxidant enzymes, we measured GPx, GR, SOD and GST activities in kidney and lung tissues of rats treated with Podophyllum hexandrum rhizome aqueous extract. In the present study, treatment of rats with Podophyllum hexandrum rhizome aqueous extract significantly increased rat, kidney and lung tissue SOD, GPX, GR and GST activities.\n[SUBTITLE] Effect on (GPX activity) [SUBSECTION] (Figure 3 and 4) shows that glutathione peroxidase activity in kidney and lung tissue was significantly decreased in CCl4 treated animals compared to control. Pretreatment with aqueous extract significantly increased the GPX activity in a dose dependent manner. At higher concentrations of plant extract (50 mg/kg dose level), the activity was increased to 49.30 from CCl4 treated group (11.20) in kidney tissue and the level was increased to 11.93 from 3.51 at the same concentration in lung tissue. Vitamin E (50 mg/kg) treated animals also showed significant increase in GPX activity in both the tested organs.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in Kidney tissue homogenate. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in lung tissue homogenate. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\n(Figure 3 and 4) shows that glutathione peroxidase activity in kidney and lung tissue was significantly decreased in CCl4 treated animals compared to control. Pretreatment with aqueous extract significantly increased the GPX activity in a dose dependent manner. 
At higher concentrations of plant extract (50 mg/kg dose level), the activity was increased to 49.30 from CCl4 treated group (11.20) in kidney tissue and the level was increased to 11.93 from 3.51 at the same concentration in lung tissue. Vitamin E (50 mg/kg) treated animals also showed significant increase in GPX activity in both the tested organs.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in Kidney tissue homogenate. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in lung tissue homogenate. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\n[SUBTITLE] Effect on GR activity [SUBSECTION] Glutathione reductase (GR) activity was significantly decreased in CCl4 treated animals when compared to control group. There was a significant increase in glutathione reductase activity observed in aqueous extract treated groups in both the tested organs. At the higher concentration of plant extract (50 mg/kg) the activity increased many fold (Figure 5 and 6). Similar results were obtained with vitamin E.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in kidney tissue. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in lung tissue. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nGlutathione reductase (GR) activity was significantly decreased in CCl4 treated animals when compared to control group. There was a significant increase in glutathione reductase activity observed in aqueous extract treated groups in both the tested organs. At the higher concentration of plant extract (50 mg/kg) the activity increased many fold (Figure 5 and 6). Similar results were obtained with vitamin E.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in kidney tissue. 
$; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in lung tissue. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\n[SUBTITLE] Effect on SOD activity [SUBSECTION] Effects of CCl4 and CCl4 plus aqueous extract of Podophyllum hexandrum treatments on kidney and lung tissue SOD activity is presented in Figure 7. The SOD activity of both the tested organs significantly decreased in CCl4 treated group. Administration of extract proved significantly better in restoring the altered activity of antioxidant enzyme like SOD, and increased the activity in a dose dependent manner in both organs. Similar results were observed in vitamin E treated group.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the superoxide dismutase activity in kidney and lung tissue organs. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nEffects of CCl4 and CCl4 plus aqueous extract of Podophyllum hexandrum treatments on kidney and lung tissue SOD activity is presented in Figure 7. The SOD activity of both the tested organs significantly decreased in CCl4 treated group. Administration of extract proved significantly better in restoring the altered activity of antioxidant enzyme like SOD, and increased the activity in a dose dependent manner in both organs. Similar results were observed in vitamin E treated group.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the superoxide dismutase activity in kidney and lung tissue organs. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\n[SUBTITLE] Effect on GSH level [SUBSECTION] CCl4 administration markedly decreased the levels of reduced glutathione in both the kidney (control = 54.73 microgram/mg protein) and lung tissue (control = 31.93 microgram/mg protein) to 9.62 (CCl4 group kidney) and 6.53 (CCl4 group lung) demonstrating oxidative stress. Pretreatment with the aqueous extract of Podophyllum hexandrum significantly ameliorated CCl4-induced depletion of GSH in both lung and kidney in a dose dependent manner (Figure 8 and 9).\nRepresents the effect of aqueous extract on GSH levels in CCl4 induced Kidney damages in rats. 
$; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on GSH levels in CCl4 induced lung damages in rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nCCl4 administration markedly decreased the levels of reduced glutathione in both the kidney (control = 54.73 microgram/mg protein) and lung tissue (control = 31.93 microgram/mg protein) to 9.62 (CCl4 group kidney) and 6.53 (CCl4 group lung) demonstrating oxidative stress. Pretreatment with the aqueous extract of Podophyllum hexandrum significantly ameliorated CCl4-induced depletion of GSH in both lung and kidney in a dose dependent manner (Figure 8 and 9).\nRepresents the effect of aqueous extract on GSH levels in CCl4 induced Kidney damages in rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on GSH levels in CCl4 induced lung damages in rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\n[SUBTITLE] Effect on GST activity [SUBSECTION] GST activity as measured from the lung and kidney tissue homogenates of all the experimental animals have been shown in Figure 10. In the kidney tissue homogenate decreased GST activity was observed in CCl4 treated animals (17.71 nmoles) compared to the normal control group (38.28 nmoles). Pretreatment with the aqueous extract for 15 days prior to CCl4 intoxication enhanced that activity significantly in a dose dependent manner. In the lung homogenate GST activity of CCl4 treated group (5.22 nmoles) was lower compared to that in the normal group (9.11 nmoles), while the GST activity was found to be increased in the lung tissue homogenate of rats treated with aqueous extract at the concentration of 20 and 30 mg/kg bw for 15 days prior to CCl4 treatment. GST activity in vitamin E and 50 mg/kg bw plant extract pretreated group was close to the normal level in lung tissue.\nEffect of aqueous extract on glutathione-S-transferase activity. Left panel shows the effect of extract on kidney tissue and right panel shows the effect on lung tissue against CCl4 induced damages. 
$; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E; &; non significant as compared with normal control. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nGST activity as measured from the lung and kidney tissue homogenates of all the experimental animals have been shown in Figure 10. In the kidney tissue homogenate decreased GST activity was observed in CCl4 treated animals (17.71 nmoles) compared to the normal control group (38.28 nmoles). Pretreatment with the aqueous extract for 15 days prior to CCl4 intoxication enhanced that activity significantly in a dose dependent manner. In the lung homogenate GST activity of CCl4 treated group (5.22 nmoles) was lower compared to that in the normal group (9.11 nmoles), while the GST activity was found to be increased in the lung tissue homogenate of rats treated with aqueous extract at the concentration of 20 and 30 mg/kg bw for 15 days prior to CCl4 treatment. GST activity in vitamin E and 50 mg/kg bw plant extract pretreated group was close to the normal level in lung tissue.\nEffect of aqueous extract on glutathione-S-transferase activity. Left panel shows the effect of extract on kidney tissue and right panel shows the effect on lung tissue against CCl4 induced damages. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E; &; non significant as compared with normal control. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.", "Superoxide anion radical scavenging activity of varying amount of aqueous extract of Podophyllum hexandrum was determined by Xanthine-Xanthine oxidase system. Table 1 shows the percentage inhibition of superoxide radical generation of 50-300 μg of extract and comparison with the same amount of BHT. The aqueous extract of Podophyllum hexandrum exhibited somewhat lesser superoxide radical scavenging activity than BHT. The percentage inhibition of superoxide generation at a concentration of 300 μg/ml of aqueous extract of Podophyllum hexandrum and BHT was however found as 81.64 and 85.71%, suggesting that Podophyllum hexandrum has strong superoxide radical scavenging activity at higher concentration.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on superoxide radical scavenging activity.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on superoxide radical scavenging activity. Absorbance of control at 560 nm = 0.899 ± 0.25. The results represent mean ± S.D of 3 separate experiments.", "Table 2 shows the scavenging effect of Podophyllum hexandrum extract on H2O2 and the comparison with standard BHT in an amount dependent manner. 
As shown in table the extract and BHT exhibited 67.51 and 72.17% scavenging activity on hydrogen peroxide at 300 μg/ml respectively, again suggests that Podophyllum hexandrum extract possess a strong free radical scavenging activity, comparable to that of BHT.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on hydrogen peroxide radical scavenging activity.\nEffect of Podophyllum hexandrum aqueous extract and known antioxidant (BHT) on hydrogen peroxide radical scavenging activity. Absorbance of control at 230 nm = 0.665 ± 0.15. The results represent mean ± S.D of 3 separate experiments.", "TBARS concentrations (expressed as MDA) in the kidney and lung tissue homogenates of all the experimental animals are shown in Figure 1 and 2. After CCl4 administration, the MDA levels increased significantly from 0.12 to 2.8 nmol/mg protein in kidney tissue homogenate and in lung tissue homogenate, the MDA level increased from 0.02 to 2.30 nmol/mg protein. However pretreatment of aqueous extract of Podophyllum hexandrum for 15 days decreased the MDA level in a dose dependent manner in both tissue homogenates. Vitamin E treated animals also showed significant decrease in the MDA levels as compared to CCl4 treated animals.\nRepresents the effect of aqueous extract on kidney tissue homogenate lipid peroxidation in CCl4 treated rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract on lung tissue homogenate lipid peroxidation in CCl4 treated rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nIn order to investigate whether the antioxidant activities of Podophyllum hexandrum are mediated by an increase in antioxidant enzymes, we measured GPx, GR, SOD and GST activities in kidney and lung tissues of rats treated with Podophyllum hexandrum rhizome aqueous extract. In the present study, treatment of rats with Podophyllum hexandrum rhizome aqueous extract significantly increased rat, kidney and lung tissue SOD, GPX, GR and GST activities.", "(Figure 3 and 4) shows that glutathione peroxidase activity in kidney and lung tissue was significantly decreased in CCl4 treated animals compared to control. Pretreatment with aqueous extract significantly increased the GPX activity in a dose dependent manner. At higher concentrations of plant extract (50 mg/kg dose level), the activity was increased to 49.30 from CCl4 treated group (11.20) in kidney tissue and the level was increased to 11.93 from 3.51 at the same concentration in lung tissue. Vitamin E (50 mg/kg) treated animals also showed significant increase in GPX activity in both the tested organs.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in Kidney tissue homogenate. 
$; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the dose dependent effect of aqueous extract of Podophyllum hexandrum on glutathione peroxidase activity against CCl4 induced toxicity in lung tissue homogenate. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.", "Glutathione reductase (GR) activity was significantly decreased in CCl4 treated animals when compared to control group. There was a significant increase in glutathione reductase activity observed in aqueous extract treated groups in both the tested organs. At the higher concentration of plant extract (50 mg/kg) the activity increased many fold (Figure 5 and 6). Similar results were obtained with vitamin E.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in kidney tissue. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the glutathione reductase activity in lung tissue. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.", "Effects of CCl4 and CCl4 plus aqueous extract of Podophyllum hexandrum treatments on kidney and lung tissue SOD activity is presented in Figure 7. The SOD activity of both the tested organs significantly decreased in CCl4 treated group. Administration of extract proved significantly better in restoring the altered activity of antioxidant enzyme like SOD, and increased the activity in a dose dependent manner in both organs. Similar results were observed in vitamin E treated group.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on the superoxide dismutase activity in kidney and lung tissue organs. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. 
Differences were considered to be statistically significant if p < 0.05.", "CCl4 administration markedly decreased the levels of reduced glutathione in both the kidney (control = 54.73 microgram/mg protein) and lung tissue (control = 31.93 microgram/mg protein) to 9.62 (CCl4 group kidney) and 6.53 (CCl4 group lung) demonstrating oxidative stress. Pretreatment with the aqueous extract of Podophyllum hexandrum significantly ameliorated CCl4-induced depletion of GSH in both lung and kidney in a dose dependent manner (Figure 8 and 9).\nRepresents the effect of aqueous extract on GSH levels in CCl4 induced Kidney damages in rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.\nRepresents the effect of aqueous extract of Podophyllum hexandrum on GSH levels in CCl4 induced lung damages in rats. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.", "GST activity as measured from the lung and kidney tissue homogenates of all the experimental animals have been shown in Figure 10. In the kidney tissue homogenate decreased GST activity was observed in CCl4 treated animals (17.71 nmoles) compared to the normal control group (38.28 nmoles). Pretreatment with the aqueous extract for 15 days prior to CCl4 intoxication enhanced that activity significantly in a dose dependent manner. In the lung homogenate GST activity of CCl4 treated group (5.22 nmoles) was lower compared to that in the normal group (9.11 nmoles), while the GST activity was found to be increased in the lung tissue homogenate of rats treated with aqueous extract at the concentration of 20 and 30 mg/kg bw for 15 days prior to CCl4 treatment. GST activity in vitamin E and 50 mg/kg bw plant extract pretreated group was close to the normal level in lung tissue.\nEffect of aqueous extract on glutathione-S-transferase activity. Left panel shows the effect of extract on kidney tissue and right panel shows the effect on lung tissue against CCl4 induced damages. $; p < 0.001, as compared with normal control group, #; p < 0.001 as compared with CCl4 group, @; p < 0.001 as compared with V.E; &; non significant as compared with normal control. The data were presented as means ± SD for six animals in each observation and evaluated by one-way ANOVA followed by Bonferroni t - test to detect inter group differences. Differences were considered to be statistically significant if p < 0.05.", "CCl4 when administrated is distributed and deposited to organs such as the liver, brain, kidney, lung and heart [21]. The reactive metabolite trichloromethyl radical (•CCl3) and trichloromethyl peroxide radical (CCl3O2•) has been formed from the metabolic conversion of CCl4 by cytochrome P-450. As O2 tension rises, a greater fraction of •CCl3 present in the system reacts very rapidly with O2 and more reactive free radicals, like CCl3OO• is generated from •CCl3. 
These free radicals initiate peroxidation of membrane polyunsaturated fatty acids (PUFA), cell necrosis, GSH depletion, membrane damage and loss of antioxidant enzyme activity.\nIn this experimental study we investigated the protective effect of the aqueous extract of Podophyllum hexandrum against CCl4-induced oxidative damage in the kidney and lung. Free radicals such as the superoxide radical, hydrogen peroxide and the hydroxyl radical, from both endogenous and exogenous sources, are implicated in the etiology of several degenerative diseases, such as coronary artery disease, stroke, rheumatoid arthritis, diabetes and cancer [22]. High consumption of fruits and vegetables is associated with a low risk for these diseases, which is attributed to antioxidant vitamins and other phytochemicals [23,24]. The extent of the initial damage caused by free radicals is further amplified by Fenton-reaction-generated hydroxyl radicals in the presence of superoxide and hydrogen peroxide [25]. Thus, the redox state and concentration of iron ions in the cellular milieu play a crucial role in the amplification of damage [26], as they interact with membranes to generate alkoxyl and peroxyl radicals, thereby inflicting further damage on the cellular system [27].\nSuperoxide is biologically important since it can be decomposed to form stronger oxidative species such as singlet oxygen and hydroxyl radicals, which are very harmful to the cellular components of a biological system [28]. The superoxide radical is generated from O2 by multiple pathways [29,30]. Using the NBT assay system to generate superoxide radical, dose-dependent inhibition was observed with increasing concentrations of Podophyllum hexandrum rhizome aqueous extract, indicating its scavenging potential.\nHydrogen peroxide itself is not very reactive, but it can give rise to the highly reactive •OH radical through the Fenton reaction [31]. Earlier reports suggest that H2O2 can induce DNA breaks in intact cells and in purified DNA [32]. The H2O2-scavenging activity of the Podophyllum hexandrum aqueous extract and of the standard BHT increased in a dose dependent manner, with comparable results observed at the highest concentration. Similar results were reported by Duh [33] for Chrysanthemum morifolium, with a strong relationship between the phenolic content and the hydrogen peroxide scavenging activity of the aqueous extracts. Chaudhary et al. previously reported that Podophyllum hexandrum possesses strong antioxidant activity against superoxide and hydroxyl radicals under in vitro conditions [34], and Chawla et al. have also established the antioxidant potential of different extracts of Podophyllum hexandrum [35].\nThe level of kidney and lung MDA in the CCl4 treated group was significantly higher than in the control group. The increase in MDA level in both tissues suggests enhanced peroxidation leading to tissue damage and failure of the antioxidant mechanisms to prevent the production of excessive free radicals. Our previous results have shown that an ethanolic extract of Podophyllum hexandrum possesses strong hepatoprotective activity against CCl4 induced damage in albino rats [36]. Similar results were previously reported in kidney by Ogeturk [37] and in liver tissue by Yang [38] and Melin [39], who stated that CCl4 metabolized by cytochrome P-450 generates a highly reactive free radical and initiates lipid peroxidation of the cell membrane of the endoplasmic reticulum, causing a chain reaction. These reactive oxygen species can cause oxidative damage to DNA, proteins and lipids. 
The level of kidney and lung MDA in the CCl4-treated group was significantly higher than in the control group. The increase in MDA level in both tissues suggests enhanced peroxidation leading to tissue damage and a failure of the antioxidant mechanisms to prevent the production of excessive free radicals. Our previous results have shown that the ethanolic extract of Podophyllum hexandrum possesses strong hepatoprotective activity against CCl4-induced damage in albino rats [36]. Similar results were previously reported in kidney by Ogeturk [37] and in liver tissue by Yang [38] and Melin [39], who stated that CCl4 metabolized by cytochrome P-450 generates a highly reactive free radical that initiates lipid peroxidation of the cell membrane of the endoplasmic reticulum and causes a chain reaction. These reactive oxygen species can cause oxidative damage to DNA, proteins and lipids. However, pretreatment with Podophyllum hexandrum extract in this study significantly prevented CCl4-induced lipid peroxidation in kidney and lung tissue. Our results are in agreement with the published report by Padma and Setty [40] that administration of an aqueous extract of Phyllanthus fraternus significantly decreased carbon tetrachloride-induced lipid peroxidation in different organs of rats under in vivo conditions.\nGSH, as we know, is involved in several defense processes against oxidative damage and protects cells against free radicals, peroxides and other toxic compounds [41]. Indeed, glutathione depletion increases the sensitivity of cells to various aggressions and also has several metabolic effects. It is widely known that a deficiency of GSH within living organisms can lead to tissue disorder and injury [42]. In our study, the kidney and lung GSH levels in the CCl4-treated group were significantly decreased compared with the control group. Likewise, we [36] and others, Ohta [43], have reported a significant decrease in the GSH content of different organs of rats injected with CCl4. Pretreatment with Podophyllum hexandrum aqueous extract, however, increased GSH levels as compared with the CCl4 groups, thus affording protection. The antioxidant effects are likely to be mediated by the restoration of the CCl4-induced decrease in SOD, GR, GPx and GST activities in various tissues of rats. Treatment of rats with Podophyllum hexandrum aqueous extract significantly increased rat lung and kidney SOD, GR, GST and GPx activities. Tirkey [44] recently conducted experiments to determine the effect of CCl4 on renal damage in rats and obtained similar results. All these enzymes are major free radical scavenging enzymes that have been shown to be reduced in a number of pathophysiological processes and diseases such as diabetes [45]. Thus, activation of these enzymes by the administration of Podophyllum hexandrum aqueous extract clearly shows that Podophyllum, through its free radical scavenging activity, could exert a beneficial action against pathophysiological alterations caused by the presence of superoxide, hydrogen peroxide and hydroxyl radicals.", "Combining all of the above, we conclude that the aqueous extract of Podophyllum hexandrum exhibits good antioxidant activity in both in vitro and in vivo experiments. The in vitro antioxidant tests showed that the plant possesses components with strong superoxide and hydrogen peroxide radical scavenging activity. The study also suggests that the extract has the potential to protect kidney and lung tissue against oxidative damage and could be used as an effective protector against CCl4-induced kidney and lung damage. Further work is needed to fully characterize the active principles in the plant responsible for these functions and to elucidate their possible mode of action.", "The authors declare that they have no competing interests.", "SAG: Designed the study, conducted the experiments, analyzed the data and drafted the manuscript. EH and AH: made substantial contributions to the design of the study, the collection of the data, and the interpretation and analysis of the data. They also drafted the manuscript and gave final approval for its submission to the Journal for consideration of publication. AM: made substantial contributions to the design of the study, the collection of the data, and the interpretation and analysis of the data.
He also drafted the manuscript and gave final approval for its submission to the Journal for consideration of publication. MAZ: was the investigator-in-charge for the study and made substantial contributions to the design of the study, as well as the interpretation and analysis of the data. He also drafted the manuscript and gave final approval for its submission to the Journal for consideration of publication. YQ, ZM and BAZ: made substantial contributions to the design of the study and also helped in the compilation of the manuscript. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6882/11/17/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
To what extent are adverse events found in patient records reported by patients and healthcare professionals via complaints, claims and incident reports?
21356056
Patient record review is believed to be the most useful method for estimating the rate of adverse events among hospitalised patients. However, the method has some practical and financial disadvantages. Some of these disadvantages might be overcome by using existing reporting systems in which patient safety issues are already reported, such as incidents reported by healthcare professionals and complaints and medico-legal claims filed by patients or their relatives. The aim of the study is to examine to what extent these hospital reporting systems cover the adverse events identified by patient record review.
BACKGROUND
We conducted a retrospective study using a database from a record review study of 5375 patient records in 14 hospitals in the Netherlands. Trained nurses and physicians using a method based on the protocol of The Harvard Medical Practice Study previously reviewed the records. Four reporting systems were linked with the database of reviewed records: 1) informal and 2) formal complaints by patients/relatives, 3) medico-legal claims by patients/relatives and 4) incident reports by healthcare professionals. For each adverse event identified in patient records the equivalent was sought in these reporting systems by comparing dates and descriptions of the events. The study focussed on the number of adverse event matches, overlap of adverse events detected by different sources, preventability and severity of consequences of reported and non-reported events and sensitivity and specificity of reports.
METHODS
In the sample of 5375 patient records, 498 adverse events were identified. Only 18 of the 498 (3.6%) adverse events identified by record review were found in one or more of the four reporting systems. There was some overlap: one adverse event had an equivalent in both a complaint and an incident report, and in three cases a patient/relative used two or three systems to complain about an adverse event. Healthcare professionals reported relatively more preventable adverse events than patients. Reports are not sensitive for adverse events, nor do they have a high positive predictive value.
RESULTS
In order to detect the same adverse events as identified by patient record review, one cannot rely on the existing reporting systems within hospitals.
CONCLUSIONS
[ "Humans", "Medical Audit", "Medical Errors", "Netherlands", "Retrospective Studies", "Risk Management" ]
3059299
null
null
Methods
[SUBTITLE] Study design and setting [SUBSECTION] A retrospective study was performed using a database of reviewed patient records collected during a retrospective patient record review study examining the incidence of adverse events [1,9]. This record review study was conducted in 21 hospitals, involving 20% of the acute care hospitals in the Netherlands. All 21 hospitals were invited to participate in our supplementary study on the completeness of incident reporting systems. Fourteen hospitals were willing to participate: two university hospitals, four tertiary teaching hospitals and eight general hospitals. Hospital sizes ranged from 201 to 985 beds. Seven hospitals declined because of practical reasons, such as simultaneous participation in other projects, time constraints, elimination of the reporting system after two years or having an anonymous reporting system (no patient information registered). Four reporting systems per hospital were linked with the adverse events database in these fourteen hospitals: 1) the reporting system of the complaint officer: for informal complaints by patients; 2) the reporting system of the complaint committee: for formal complaints by patients; 3) the reporting system of the liability officer: for medico-legal claims by patients and 4) the reporting system of incident reports: for incident reports by healthcare professionals. For each adverse event identified in the patient records, the equivalent was sought in the four reporting systems of the same hospitals by comparing the date and descriptions of the events. This study is a continuation of the Dutch patient record review study [1,9], for which ethical approval was granted by the VU University Medical Center in Amsterdam. The participating hospitals formally consented to take part in this study.
[SUBTITLE] Patient record review [SUBSECTION] In each hospital a stratified random sample was selected of 200 admissions of patients discharged from the hospital in 2004 (> 24 hours stay) and 200 (or less if the total of patients who died in 2004 was lower) admissions of patients deceased in the hospital in the year 2004, excluding admissions of psychiatry, obstetrics and children less than one year old. Between August 2005 and October 2006, 55 trained physicians reviewed the medical, nursing and, if available, the outpatient record of all sampled admissions that contained triggers for adverse events, for example an unplanned readmission, unplanned return to the operating room or unexpected death. The presence of one or more of the 18 predefined triggers was judged in advance by trained nurses. For each patient record, two physician reviewers independently determined the presence of one or more adverse events, their consequences, and the degree of preventability, based on a standardised procedure and review form. A preventable adverse event was defined as an adverse event resulting from an error in management due to failure to follow accepted practice at an individual or system level. Accepted practice was taken to be 'the current level of expected performance for the average practitioner or system that manages the condition in question'. The degree of preventability was measured on a six-point scale from "(Virtually) no evidence of preventability" to "(Virtually) certain evidence of preventability". The degree of severity of the consequences was rated on a seven-point scale from "No physical impairment or disability" to "Death". The methods of determining adverse events were based on the well-known protocol of The Harvard Medical Practice Study and have been described in detail elsewhere [1].
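Purely as an illustration of the sampling protocol described above, the following sketch draws a stratified sample from a hypothetical admissions table; it is not the original sampling code, and the column names ("hospital", "outcome", "specialty", "age_years") are assumptions.

```python
# Minimal sketch (hypothetical table, not the study's code): per hospital,
# 200 discharged admissions and up to 200 deceased admissions, excluding
# psychiatry, obstetrics and children under one year of age.
import pandas as pd

def draw_sample(admissions: pd.DataFrame, seed: int = 2004) -> pd.DataFrame:
    eligible = admissions[
        ~admissions["specialty"].isin(["psychiatry", "obstetrics"])
        & (admissions["age_years"] >= 1)
    ]
    # (the > 24 hours stay criterion for discharged patients is not modelled here)
    parts = []
    for (_hospital, _outcome), group in eligible.groupby(["hospital", "outcome"]):
        n = min(200, len(group))  # "or less" only mattered for deceased patients
        parts.append(group.sample(n=n, random_state=seed))
    return pd.concat(parts, ignore_index=True)
```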
[SUBTITLE] Linking record reviews with reporting systems (finding patient identity matches) [SUBSECTION] Between March 2007 and March 2008 fourteen hospitals provided datasets from each of the four reporting systems, containing identification characteristics of all patients involved in an incident report, complaint or claim in the year 2004: patient registration number, date of birth, sex and hospital. Reports without sufficient patient identifiers or reports of events which occurred in 2003 but were reported in 2004 were not eligible for matching. Because of the confidentiality of the information, the researchers had no access to the reporting systems in the hospitals. Sometimes exceptions were made when the incident reports, complaints or claims were not digitally registered. In these cases, the researchers got permission to access the reports to obtain the information required. A confidentiality agreement was signed by the researchers to maintain secrecy of the information. Data from the reporting systems were recorded anonymously and all information was kept confidential. The patient identification data from the four sources were then linked with the database of the sample from the record review. A case matched when a patient involved in the complaint, claim or incident report matched on identity with a patient from the record review sample. These matches were classified as patient identity matches (see Figure 1). Flow chart of study procedure.
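A minimal sketch of this linkage step is given below: reporting-system extracts are joined to the record-review sample on the identifiers named above to obtain "patient identity matches". This is an illustration, not the study's actual code, and the column names are assumptions.

```python
# Minimal sketch (hypothetical column names): linking reporting-system
# extracts to the record-review sample on patient identifiers.
import pandas as pd

ID_COLS = ["hospital", "patient_registration_number", "date_of_birth", "sex"]

def patient_identity_matches(reports: pd.DataFrame,
                             record_review_sample: pd.DataFrame) -> pd.DataFrame:
    """Return one row per report whose patient also occurs in the review sample."""
    eligible = reports.dropna(subset=ID_COLS)  # drop reports lacking identifiers
    return eligible.merge(
        record_review_sample[ID_COLS + ["admission_id"]],
        on=ID_COLS,
        how="inner",
    )
```

Note that, as in the study, such a merge only establishes that the same patient appears in both sources; whether the report concerns the same event is decided in the narrative-comparison step described next.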
[SUBTITLE] Comparison of narratives (finding adverse event matches) [SUBSECTION] It is possible that a complaint, claim or incident report (hereafter together named "reports") concerned a different issue from the adverse event identified by record review. For example, relatives of a patient may complain about how the doctors communicated with the patient and not about the adverse event that involved a wrong dose of medication. To be able to examine whether the identity matches related to the same issue, we obtained more information about these events, including the original description of the event, the date of occurrence and the consequences for the patient, by means of a questionnaire. Furthermore, we obtained information about the characteristics of the reporter (patient or healthcare professional). The questionnaires were sent to the officer of each reporting system in the hospitals. A researcher (ICD) compared the dates and descriptions of the adverse events from the record reviews with the dates and descriptions/narratives of the reports. The descriptions of the events did not need to correspond perfectly. The physicians who reviewed the patient records made a medical description of the adverse event, while patients generally do not use professional language when describing their dissatisfaction. Moreover, an adverse event can evolve from a chain of events in time, while a report often describes just a part of the situation. Allowing for the above-mentioned differences, cases were classified as a match if it was plausible that: 1) the descriptions were about the same event or 2) the description in the report was about an event that was related to the adverse event (for example, the adverse event was a consequence of the reported event). The comparisons were discussed with other researchers (LZ, CW). In case of doubt, and for the final verification, a physician was consulted (MD). Matched cases in this final stage, meaning that the topic of the report was related to an adverse event in the record review study, were classified as adverse event matches (see Figure 1).
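The study classified these adverse event matches by manual comparison of dates and narratives. Purely as an illustration, a simple date-window pre-screen such as the one sketched below could be used to shortlist candidate report/adverse-event pairs for that manual review; the fields and the 30-day window are assumptions, not part of the study's procedure.

```python
# Illustration only: shortlist candidate pairs (same patient, dates within a
# window) for manual narrative comparison; final classification stays manual.
from datetime import date, timedelta

def candidate_pairs(adverse_events, reports, window_days=30):
    window = timedelta(days=window_days)
    for ae in adverse_events:
        for rep in reports:
            if rep["patient_id"] == ae["patient_id"] and \
               abs(rep["date"] - ae["date"]) <= window:
                yield ae, rep

aes = [{"patient_id": 101, "date": date(2004, 5, 2),
        "description": "post-operative bleeding requiring re-operation"}]
reps = [{"patient_id": 101, "date": date(2004, 5, 4),
         "description": "complaint about information given after re-operation"}]
print(list(candidate_pairs(aes, reps)))
```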
[SUBTITLE] Statistical analysis [SUBSECTION] Usefulness of the reporting systems for detection of adverse events was determined by the number of adverse event matches (absolute number and percentage of all possible matches). The extent of overlap between reporting systems was determined by counting the number of adverse events that were detected in more than one reporting source. The characteristics of the reporters of each report were described as percentages of the following categories: professionals (physician, resident/physician, nurse and student nurse) and patients (patient, child, spouse, child & spouse and legal adviser). To test whether the degree of preventability and severity of the adverse events influence the likelihood that they are reported, we performed t-tests. Results were considered statistically significant at p < 0.05. For the non-significant differences we calculated the power. Data were analysed using SPSS 15.0. Each patient was identified as being positive or negative for an adverse event and for having one or more reports that matched an adverse event in the patient record review. Sensitivity and specificity interpret the report results retrospectively, whereas positive and negative predictive values establish the predictive properties of the reports for the future.
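As a reconstruction for illustration (the study used SPSS), the sketch below builds the patient-level 2x2 table from the counts given later in the Results (5375 records reviewed, 441 patients with one or more adverse events, 348 patients with one or more matching reports, 16 patients in both groups) and derives the sensitivity, specificity, PPV and NPV that are reported there.

```python
# Reconstruction for illustration, not the study's SPSS analysis:
# "having a matching report" as a test for "adverse event on record review".
total_patients       = 5375
patients_with_ae     = 441   # one or more adverse events found by record review
patients_with_report = 348   # one or more matching complaints/claims/incident reports
both                 = 16    # patients appearing in both groups

tp = both
fn = patients_with_ae - both
fp = patients_with_report - both
tn = total_patients - tp - fn - fp

sensitivity = tp / (tp + fn)   # ~3.6%
specificity = tn / (tn + fp)   # ~93.3%
ppv         = tp / (tp + fp)   # ~4.6%
npv         = tn / (tn + fn)   # ~91.5%
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"PPV {ppv:.1%}, NPV {npv:.1%}")
```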
null
null
null
null
[ "Background", "Incident reporting, complaints and claims in Dutch hospitals", "Study design and setting", "Patient record review", "Linking record reviews with reporting systems (finding patient identity matches)", "Comparison of narratives (finding adverse event matches)", "Statistical analysis", "Results", "Discussion", "Main findings and interpretation", "Comparison with previous research", "Strengths and limitations", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "For hospital managers and healthcare providers involved in patient safety issues it is important to have access to patient safety data to facilitate decisions on interventions aimed at improving the quality and safety of hospital care. Ideally there is real-time information about patient safety, capturing incidents that reflect actual or potential risks of adverse events. An adverse event is commonly defined as an unintended injury that results in temporary or permanent disability, death, or prolonged hospital stay and is caused by healthcare management rather than by the patient's underlying disease process [1].\nMany countries have performed retrospective patient record review studies to identify adverse events in their hospitals [2-9]. Patient record review is believed to be the most useful method for estimating the rate of adverse events among hospitalised patients [10]. However, the method has some practical disadvantages: it is time-consuming, labour intensive and expensive [11]. Moreover, retrospective record review does not provide real-time information. It is often not possible to gain additional information about the events from the people involved.\nHospitals would benefit from reporting sources, which can provide information on patient safety periodically and on demand. The value of incident reporting by healthcare professionals has been widely recognised. Moreover, it is increasingly believed that patients (or their relatives, hereafter together named \"patients\") can play an important role in signalling safety issues. In the report of the UK Department of Health \"An organisation with a memory\" [12], the importance of both sources is stressed.\nSo far there is little knowledge about the comprehensiveness of incident reporting systems in hospitals. A general weakness that is often mentioned in literature about incident reporting by healthcare providers is that incidents are considerably under-reported [13,14]. A few studies compared results from record review with either incident reports or patient complaints in small scale study designs [13,15,16]. They found little overlap in events detected by the methods they compared and concluded that incident reporting or patient complaints alone do not provide an adequate assessment of adverse events. As far as we know, no earlier studies have reported on the completeness of both incident reporting by healthcare professionals and reporting by patients (complaints as well as claims) in regard of detecting adverse events.\nThe aim of our study is to get insight into the extent in which four hospital reporting systems of incidents reported by healthcare professionals and complaints and claims reported by patients cover adverse events that were identified by patient record review. We want to know if the diverse reporting systems, which are already implemented in Dutch hospitals, are individually or cumulatively useful as a method for identifying adverse events (see Box 1 for some background information about incident reporting by healthcare professionals and the procedures for complaints/claims by patients in the Netherlands).\nMoreover, we are interested whether adverse events with a higher preventability and severity of consequences have a larger likelihood of being reported. It is our assumption that highly preventable adverse events might be reported relatively often, because the persons involved feel the hospital can learn from these events to prevent them in the future. 
Severe adverse events might have an increased chance of being reported, because of their visibility and impact on the patient. When highly preventable and severe adverse events are regularly reported in one or more reporting systems, the issue of under-reporting is less worrying than is stated in literature.\nOur study has four specific research questions: 1) How many adverse events established by patient record review are also identified in one or more reporting systems of incident reports, complaints and claims? 2) What is the amount of overlap in the detection of adverse events between the reporting systems? 3) Is the degree of preventability and severity of adverse events related to the likelihood they are reported? 4) What is the sensitivity and specificity of reports for adverse events?\n[SUBTITLE] Incident reporting, complaints and claims in Dutch hospitals [SUBSECTION] In the Netherlands, healthcare professionals have been increasingly stimulated to report incidents or near misses within their own hospital. For the healthcare professional in 2004 reporting incidents (especially adverse events and calamities) was not mandatory by law but a consequence of healthcare quality and patients' rights legislation. It often was mandatory by employment contract. The incident reporting system is by definition meant for all incidents, including adverse events. An incident can be defined as \"Any deviation from usual medical care that causes an injury to the patient or poses a risk of harm. Includes errors, preventable adverse events, and hazards.\" A near miss is defined as \"Serious error or mishap that has the potential to cause an adverse event but fails to do so because of chance or because it is intercepted.\" [17]. This means that an adverse event is always an incident but an incident does not has to be an adverse event. After an incident/near miss has happened, it can be reported by filling out an electronic or paper-based report form containing, among other things, a description of the event, the time and place of occurrence and the people involved. The incident reporting committee will register and analyse the incidents.\nPatients are encouraged to report their complaints without further definition.\nIn the Netherlands patients have various options to file a complaint or claim in the Netherlands when they are not satisfied with the care or cure they received. The choice of the path depends largely on the intention of the patient making the complaint or claim.\nThere are different reasons for patients to complaint, for example, patients want to have an explanation or an apology or initiate an investigation on the legitimacy of certain acts committed. Reasons to file a claim are for example to get financial compensation or to prevent recurrence of the incident to restore their sense of justice [18]. Patients can submit their grievances to the complaints officer or to the more formal complaint committee within the hospital. The purpose of an informal complaint is mediation or expressing a concern about the quality of care, whereas a formal complaint is made to instigate an investigation followed by a formal judgement about the legitimacy of the complaint (not juridical binding). A legal option outside the hospital is to submit a formal appeal to the medical board to obtain a verdict or when a financial compensation is wanted, to file a claim to the hospital board. The Netherlands does not have a no-fault system. Malpractice claims are judged by the insurance company of the hospital. 
If patients do not agree with the judgement of the insurance company on the liability or the financial compensation, patients can approach a civil court. It is possible for patients to use more than one path simultaneously or consecutively. Reporting systems are not set up with the intention to report adverse events in specific. Patient reports often concern the quality of care, whereas healthcare professionals frequently report about deviations from procedures.\nIn the Netherlands, healthcare professionals have been increasingly stimulated to report incidents or near misses within their own hospital. For the healthcare professional in 2004 reporting incidents (especially adverse events and calamities) was not mandatory by law but a consequence of healthcare quality and patients' rights legislation. It often was mandatory by employment contract. The incident reporting system is by definition meant for all incidents, including adverse events. An incident can be defined as \"Any deviation from usual medical care that causes an injury to the patient or poses a risk of harm. Includes errors, preventable adverse events, and hazards.\" A near miss is defined as \"Serious error or mishap that has the potential to cause an adverse event but fails to do so because of chance or because it is intercepted.\" [17]. This means that an adverse event is always an incident but an incident does not has to be an adverse event. After an incident/near miss has happened, it can be reported by filling out an electronic or paper-based report form containing, among other things, a description of the event, the time and place of occurrence and the people involved. The incident reporting committee will register and analyse the incidents.\nPatients are encouraged to report their complaints without further definition.\nIn the Netherlands patients have various options to file a complaint or claim in the Netherlands when they are not satisfied with the care or cure they received. The choice of the path depends largely on the intention of the patient making the complaint or claim.\nThere are different reasons for patients to complaint, for example, patients want to have an explanation or an apology or initiate an investigation on the legitimacy of certain acts committed. Reasons to file a claim are for example to get financial compensation or to prevent recurrence of the incident to restore their sense of justice [18]. Patients can submit their grievances to the complaints officer or to the more formal complaint committee within the hospital. The purpose of an informal complaint is mediation or expressing a concern about the quality of care, whereas a formal complaint is made to instigate an investigation followed by a formal judgement about the legitimacy of the complaint (not juridical binding). A legal option outside the hospital is to submit a formal appeal to the medical board to obtain a verdict or when a financial compensation is wanted, to file a claim to the hospital board. The Netherlands does not have a no-fault system. Malpractice claims are judged by the insurance company of the hospital. If patients do not agree with the judgement of the insurance company on the liability or the financial compensation, patients can approach a civil court. It is possible for patients to use more than one path simultaneously or consecutively. Reporting systems are not set up with the intention to report adverse events in specific. 
Patient reports often concern the quality of care, whereas healthcare professionals frequently report about deviations from procedures.", "In the Netherlands, healthcare professionals have been increasingly stimulated to report incidents or near misses within their own hospital. For the healthcare professional in 2004 reporting incidents (especially adverse events and calamities) was not mandatory by law but a consequence of healthcare quality and patients' rights legislation. It often was mandatory by employment contract. The incident reporting system is by definition meant for all incidents, including adverse events. An incident can be defined as \"Any deviation from usual medical care that causes an injury to the patient or poses a risk of harm. Includes errors, preventable adverse events, and hazards.\" A near miss is defined as \"Serious error or mishap that has the potential to cause an adverse event but fails to do so because of chance or because it is intercepted.\" [17]. This means that an adverse event is always an incident but an incident does not has to be an adverse event. After an incident/near miss has happened, it can be reported by filling out an electronic or paper-based report form containing, among other things, a description of the event, the time and place of occurrence and the people involved. The incident reporting committee will register and analyse the incidents.\nPatients are encouraged to report their complaints without further definition.\nIn the Netherlands patients have various options to file a complaint or claim in the Netherlands when they are not satisfied with the care or cure they received. The choice of the path depends largely on the intention of the patient making the complaint or claim.\nThere are different reasons for patients to complaint, for example, patients want to have an explanation or an apology or initiate an investigation on the legitimacy of certain acts committed. Reasons to file a claim are for example to get financial compensation or to prevent recurrence of the incident to restore their sense of justice [18]. Patients can submit their grievances to the complaints officer or to the more formal complaint committee within the hospital. The purpose of an informal complaint is mediation or expressing a concern about the quality of care, whereas a formal complaint is made to instigate an investigation followed by a formal judgement about the legitimacy of the complaint (not juridical binding). A legal option outside the hospital is to submit a formal appeal to the medical board to obtain a verdict or when a financial compensation is wanted, to file a claim to the hospital board. The Netherlands does not have a no-fault system. Malpractice claims are judged by the insurance company of the hospital. If patients do not agree with the judgement of the insurance company on the liability or the financial compensation, patients can approach a civil court. It is possible for patients to use more than one path simultaneously or consecutively. Reporting systems are not set up with the intention to report adverse events in specific. Patient reports often concern the quality of care, whereas healthcare professionals frequently report about deviations from procedures.", "A retrospective study was performed using a database of reviewed patient records collected during a retrospective patient record review study examining the incidence of adverse events [1,9]. 
This record review study was conducted in 21 hospitals, involving 20% of the acute care hospitals in the Netherlands. All 21 hospitals were invited to participate in our supplementary study on the completeness of incident reporting systems. Fourteen hospitals were willing to participate: two university hospitals, four tertiary teaching hospitals and eight general hospitals. Hospital sizes ranged from 201 to 985 beds. Seven hospitals declined because of practical reasons, such as simultaneous participation in other projects, time constraints, elimination of the reporting system after two years or having an anonymous reporting system (no patient information registered).\nFour reporting systems per hospital were linked with the adverse events database in these fourteen hospitals: 1) the reporting system of the complaint officer: for informal complaints by patients; 2) the reporting system of the complaint committee: for formal complaints by patients; 3) the reporting system of the liability officer: for medico-legal claims by patients and 4) the reporting system of incident reports: for incident reports by healthcare professionals. For each adverse event identified in the patient records, the equivalent was sought in the four reporting systems of the same hospitals by comparing the date and descriptions of the events.\nThis study is a continuation of the Dutch patient record review study [1,9], for which ethical approval was granted by the VU University Medical Center in Amsterdam. The participating hospitals formally consented to take part in this study.", "In each hospital a stratified random sample was selected of 200 admissions of patients discharged from the hospital in 2004 (> 24 hours stay) and 200 (or less if the total of patients who died in 2004 was lower) admissions of patients deceased in the hospital in the year 2004, excluding admissions of psychiatry, obstetrics and children less than one year old.\nBetween August 2005 and October 2006, 55 trained physicians reviewed the medical, nursing and, if available, the outpatient record of all sampled admissions that contained triggers for adverse events, for example an unplanned readmission, unplanned return to the operating room or unexpected death. The presence of one or more of the 18 predefined triggers was judged in advance by trained nurses. For each patient record two physician reviewers determined independently the presence of one or more, consequences, and degree of preventability of the adverse events, based on a standardised procedure and review form. A preventable adverse event was defined as an adverse event resulting from an error in management due to failure to follow accepted practice at an individual or system level. Accepted practice was taken to be 'the current level of expected performance for the average practitioner or system that manages the condition in question'. The degree of preventability was measured on a six-point scale from \"(Virtually) no evidence of preventability\" to \"(Virtually) certain evidence of preventability\".\nThe degree of severity of the consequences was rated on a seven-point scale from \"No physical impairment or disability\" to \"Death\". 
The methods of determining adverse events were based on the well-known protocol of The Harvard Medical Practice Study and have been described in detail elsewhere [1].", "Between March 2007 and March 2008 fourteen hospitals provided datasets from each of the four reporting systems, containing identification characteristics of all patients involved in an incident report, complaint or claim in the year 2004: patient registration number, date of birth and sex and hospital. Reports without sufficient patient identifiers or reports of events which occurred in 2003 but were reported in 2004 were non-eligible for matching. Because of the confidentiality of the information, the researchers had no access to the reporting systems in the hospitals. Sometimes exceptions were made when the incident reports, complaints or claims were not digitally registered. In these cases, the researchers got permission to access the reports to get the information required. A confidentiality agreement was signed by the researchers to maintain secrecy of the information. Data from the reporting systems were recorded anonymously and all information was kept confidential.\nThe patient identification data from the four sources were then linked with the database of the sample from the record review. A case matched when a patient involved in the complaint, claim or incident report matched on identity with a patient from the record review sample. These matches were classified as patient identity matches (see Figure 1).\nFlow chart of study procedure.", "It is possible that a complaint, claim or incident report - hereafter together named \"reports\"- concerned another issue than the adverse event identified by record review. For example, relatives of a patient complaining about how the doctors communicate with the patient and not about the adverse event that involved a wrong dose of medication. To be able to examine whether the identity matches related to the same issue, we obtained more information about these events, including the original description of the event, the date of occurrence and consequences for the patient, by means of a questionnaire.\nFurthermore we obtained information about the characteristics of the reporter (patient or healthcare professional). The questionnaires were sent to the officer of each reporting system in the hospitals. A researcher (ICD) compared the dates and descriptions of the adverse events from the record reviews with the dates and descriptions/narratives of the reports. The descriptions of the events did not need to correspond perfectly. The physicians who reviewed the patient records made a medical description of the adverse event, while patients generally do not use professional language when describing their dissatisfaction. Moreover, an adverse event can evolve from a chain of events in time, while a report often describes just a part of the situation. Allowing for the above mentioned differences, cases were classified as a match if it was plausible that: 1) the descriptions were about the same event or 2) the description in the report was about an event that was related to the adverse event (for example, the adverse event was a consequence of the reported event). The comparisons were discussed with other researchers (LZ, CW). 
In case of doubt, and for the final verification, a physician was consulted (MD).\nMatched cases in this final stage, meaning that the topic of the report was related to an adverse event in the record review study, were classified as adverse event matches (see Figure 1).", "Usefulness of the reporting systems for detection of adverse events was determined by the number of adverse event matches (absolute number and percentage of all possible matches). The extent of overlap between reporting systems was determined by counting the number of adverse events that were detected in more than one reporting source. The characteristics of the reporters of each report were described as percentages of the following categories: professionals (physician, resident/physician, nurse and student nurse) and patients (patient, child, spouse, child & spouse and legal adviser).\nTo test whether the degree of preventability and severity of the adverse events influence the likelihood that they are reported, we performed t-tests. Results were considered statistically significant at p < 0.05. For the non-significant differences we calculated the power. Data were analysed using SPSS 15.0.\nEach patient was identified as being positive or negative for an adverse event and for having one or more reports that matched an adverse event in the patient record review. Sensitivity and specificity interpret the report results retrospectively, whereas positive and negative predictive values establish the predictive properties of the reports for the future.", "In the record review study in 14 hospitals, a total of 5375 patient records were reviewed. The reviewers identified 498 adverse events. In a few cases more than one adverse event was identified in a single record. A flow chart of the study process is presented in Figure 1.\nThe fourteen hospitals received in total 10,668 reports in the four reporting systems in 2004. Of these, 1,236 reports were not eligible because they were anonymous, not related to a specific patient, or concerned an event that occurred in 2003. For our study 9,432 reports were eligible: 5,592 incident reports, 3,384 informal complaints, 186 formal complaints and 270 medico-legal claims.\nSince 498 adverse events were identified in the patient record study, patients and healthcare professionals in theory could have reported 498 adverse events as a complaint, claim or incident report in one or more reporting systems. After matching on patient identifiers in stage 1, an overlap was found between 422 (of 9,432) reports of patients and healthcare professionals and 348 (of 5375) patient records (patient identity matches).\nHealthcare professionals reported 353 incidents and patients submitted 61 informal complaints, 5 formal complaints and 3 medico-legal claims. Sometimes a patient was identified in more than one reporting source.\nDue to confidentiality agreements we could not disclose the patient records in which the adverse events were detected. Therefore we sent the officers of the reporting committees a questionnaire for each report that matched on patient identifiers with a patient record, asking for more information about these reports (n = 422 questionnaires). The response rate was 100%.\nIn stage 2 we excluded the patient records in which no adverse events were detected and the reports that matched on identity with these patient records.\nThe patient identity matches now involved 62 patient records with 70 adverse events in the patient record review and 83 reports in the reporting systems.
Comparison of the content of the 83 reports with the 70 adverse events in stage 3 showed that 18 adverse events of the patient record review study were reported in one or more reporting systems (adverse event matches); that is 3.6% (18/498) of all possible adverse event matches. (The result of the matching on adverse event identity is presented in Figure 2.)\nDegree of overlap in patients with AEs between patient record review and reporting systems (n = 5375).\nHealthcare professionals reported 10 adverse events in 12 reports (2.0% of all possible adverse event matches). Patients reported 14 adverse events in 11 reports (2.8% of all possible adverse event matches). The characteristics of the reporters show that (student) nurses made most of the adverse event reports (40% of all reports) (Figure 3). On behalf of the patient, in the majority of complaints the patients' children were involved (26% of all reports).\nCharacteristics of reporters (n = 23 reporters/reports). Characteristics of reporters of each report that matched with a patient record review on patient identifier and AEs.\nThere was only a small overlap in reporting systems (Figure 4). One adverse event was identified in both a patient complaint and an incident report. In another case a patient used two sources to complain and submit a claim (complaint officer and liability officer) and in two cases three sources were used (complaint officer, complaint committee and liability officer).\nDegree of overlap of AEs between registration sources (n = 18 AEs).\nThe mean preventability of adverse events that were reported was 3.61 (n = 18; SD = 1.72) compared to 3.01 (n = 480; SD = 1.72) of those that were not reported in any of the reporting systems (no significant difference, power = 30.6%). The mean preventability of reported adverse events was higher for events reported by healthcare professionals (mean = 4.60; n = 10; SD = 1.07) than for events reported by patients (mean = 2.38; n = 9; SD = 1.60), (p < 0.05).\nThe mean severity of adverse events that were reported was 5.33 (n = 18; SD = 2.24) compared to 4.49 (n = 480; SD = 2.45) of those that were not reported (no significant difference, power = 34.4%). The mean severity of reported adverse events was higher for events reported by patients (mean = 6.13; n = 9; SD = 1.64) than for events reported by healthcare professionals (mean = 4.70; n = 10; SD = 2.54), although not statistically significant (power = 31.3%).\nThe sensitivity and positive predictive value of the reports for adverse events were 3.6% and 4.6%. The specificity was 93.3% and the negative predictive value was 91.5%.", "[SUBTITLE] Main findings and interpretation [SUBSECTION] Among the 498 adverse events identified by patient record review, only 18 adverse events (3.6%) were also found in one or more of the reporting systems, meaning that the remaining 480 adverse events were not registered in any of the four registration systems.\nMost of the adverse events (n = 17) were detected in either a professional's report (n = 9) or a patient's report (n = 8); only one adverse event was detected in both systems. Patient reports and healthcare professional reports contributed almost equally to the detection of adverse events. Since healthcare professionals overall made over five times more reports than patients did, we can conclude that the chance of finding an adverse event is higher in the reporting systems for patient reports than in the incident reporting systems for healthcare professionals.
In incident reporting systems for professionals, most of the reports are made by nurses and, therefore, probably mainly concern nursing care. Adverse events, in contrast, often concern medical care provided by resident physicians and medical consultants. The fact that these professionals are the ones making few reports in reporting systems could explain the relatively low number of adverse event matches in incident reports. Patients, on the other hand, report about the whole sequence of care. Generally, both healthcare professionals and patients report about a broader group of incidents than mere adverse events. Often, their reports concern quality issues without patient harm.\nStatistical calculations showed that the degree of preventability and severity of the adverse events does not influence the likelihood they are reported. The results showed no statistically significant differences. However the non-significance might be the result of the low power.\nPatients and healthcare professionals reported relatively more preventable and severe adverse events compared to less preventable and severe adverse events. The degree of preventability of adverse events reported by healthcare professionals was relatively higher than of adverse events reported by patients. We found that healthcare professionals reported more preventable adverse events and patients more severe adverse events. The learning potential of the adverse event or its visible impact on the patient might contribute to reporting behaviour. But, unfortunately, not all highly preventable and severe adverse events are registered.\nBecause of the low number of adverse events detected in the reporting systems, under-reporting of especially preventable and severe adverse events remains to be a problem.\nPatients made only a few medico legal claims. In the Netherlands, there is not a real claim culture. In addition, patients can only file a claim when they are informed about the possibility to financially redress the harm that is suffered.\nOn the whole, the results show that in order to detect the same adverse events identified by record review, one cannot rely on the existing reporting systems within hospitals at present.\nWhy are so few adverse events reported by healthcare providers or patients? Healthcare professionals might be embarrassed or afraid of condemnation by their colleagues or the hospital management after reporting adverse events they were involved in, especially those with severe consequences. Furthermore, there are other barriers mentioned in literature that influence event reporting, including: fear of disciplinary action or potential litigation, time pressure, no feedback, the perception that reporting is unnecessary, unclear reporting procedures and a lack of clear definitions as to what constitutes a reportable event [19-25]. Moreover, resident physicians and medical consults do not generally perceive (surgical) complications to be \"reportable incidents\". They address complications, among other incidents, in Mortality and Morbidity meetings (M&M) [26]. There are also a number of possible reasons why patients do not report adverse events [19]. Patients may not be aware they have sustained harm from medical care, while it is not easy to disentangle medical injury from the development of the underlying illness. Moreover, patients can be unaware of the possibilities of making a complaint or claim. 
Or they can be unwilling to do so, because they do not want to commit the time and energy needed to take action, they are concerned the report will bring tension into the relationship with their doctor or they do not feel the need to complain, because their doctor clearly explained the event to their satisfaction (disclosure). Moreover patients might not feel the need to file a claim because in the Netherlands the healthcare and social security systems are well developed and patients do not feel the need for financial compensation.\nFinally, patients with grievances or concerns may choose to speak directly to their healthcare provider. Other patients that have experienced an adverse event can choose to step outside the hospital and submit an appeal to e.g. the medical disciplinary board or civil court.\nThe sensitivity and positive predictive value is low (3.6% and 4.6%), meaning that reports of patients and healthcare professionals are not useful as a predictive method to detect the same adverse events as a patient record study. Although specificity and negative predictive value seem high (93.3% and 91.5%), one has to bear in mind that an absence of a report does not imply that no adverse event occurred. We did not researched the types and consequences of the reports, therefore it might be possible that reports that did not match still concerns adverse events.\nAlthough incidents, complaints and claims do not detect the same (number of) adverse events as the patient record review, reports can still be useful in identifying issues concerning patient safety. In the sample 441 patients were involved in one or more adverse events and 348 patients were involved in one or more report. With an overlap of 16 patients 332 patients were still involved in situations that healthcare professionals or patients found important enough to report.\nAmong the 498 adverse events identified by patient record review only 18 adverse events (3.6%) were identified by record review, meaning that the majority of 480 adverse events was not registered in one of the four registration systems.\nMost of the adverse events (n = 17) were detected in either a professional's report (n = 9) or a patient's report (n = 8); only one adverse event was detected in both systems. Patient reports and healthcare professional reports contributed almost equally to the detection of adverse events. Since healthcare professionals overall made over five times more reports than patients did, we can conclude that there is a higher chance to find an adverse event in reporting systems of patient reports than in incident reporting systems for healthcare professionals. In incident reporting systems for professionals, most of the reports are made by nurses and, therefore, probably mainly concern nursing care. Adverse events, in contrast, often concern medical care provided by resident physicians and medical consultants. The fact that these professionals are the ones making few reports in reporting systems could explain the relatively low number of adverse event matches in incident reports. Patients, on the other hand, report about the whole sequence of care. Generally, both healthcare professionals and patients report about a broader group of incidents than mere adverse events. Often, their reports concern quality issues without patient harm.\nStatistical calculations showed that the degree of preventability and severity of the adverse events does not influence the likelihood they are reported. 
\n[SUBTITLE] Comparison with previous research [SUBSECTION] In contrast to our study design, other studies compared adverse events identified by record review with either incident reporting by healthcare professionals or patient complaints [13,15,16]. Olsen et al. compared local real-time record review with incident reporting and pharmacist surveillance [15]. As in our study, the authors found little overlap in the events detected by the different methods. In a study by Sari et al., data from record review were compared with data submitted to a routine incident reporting system for the same patients [13]. They found that the reporting system missed most patient safety incidents that were identified by record review and detected only 5% of those incidents that resulted in patient harm (these incidents probably included adverse events). In our study, 2% of the adverse events that were identified by record review were reported by healthcare professionals. The authors of both studies concluded that incident reporting alone does not provide an adequate assessment of adverse events and recommended that hospital staff and researchers use more than one method at the same time [13,15].\nBismark et al. compared results from a record review study with patient complaints: 0.4% of the adverse events resulted in complaints [16]. In our study, 1.8% of the adverse events were detected in complaints and claims. Bismark found that the odds of a complaint were higher for adverse events with higher preventability and severity [16]. Because of the low numbers, we could not compare these results.
\n[SUBTITLE] Strengths and limitations [SUBSECTION] No earlier study has compared record review with both registration systems of reports by healthcare professionals and patients (complaints as well as medico-legal claims). We studied a large number of adverse events and included multiple hospitals in our study design, increasing the likelihood that our results can be generalised to other hospitals.\nOur study has, however, several limitations. Firstly, complaints and claims relating to episodes of care in 2004 may have been submitted later, outside our study period (January-December 2004), especially those submitted to the Complaint Committee and Liability Officer. Moreover, in some hospitals resident physicians and medical consultants also report adverse outcomes of medical care during Morbidity and Mortality (M&M) meetings. It was not possible to include this as a source in our study, because in some hospitals M&M reports are written down on notepads and are not formally registered; in other hospitals, there was an M&M registration only for surgery, or no M&M registration at all. Finally, patient record review has recognized limitations regarding the estimation of adverse event rates [27,28]. Thomas et al. stated that estimates of adverse event rates from patient record review are highly sensitive to the degree of consensus and confidence among reviewers [28]. It is therefore possible that the reports contained adverse events that were not identified in the record review study. Because of these limitations, the number of adverse events that can be detected by patients and healthcare professionals might be underestimated.
", "Among the 498 adverse events identified by patient record review, only 18 adverse events (3.6%) were also identified in one of the four reporting systems, meaning that the majority, 480 adverse events, was not registered in any of the four registration systems.\nMost of the matched adverse events (n = 17) were detected in either a professional's report (n = 9) or a patient's report (n = 8); only one adverse event was detected in both systems. Patient reports and healthcare professional reports contributed almost equally to the detection of adverse events. Since healthcare professionals overall made over five times more reports than patients did, we can conclude that an individual patient report is more likely to match an adverse event than an individual report in the incident reporting systems for healthcare professionals. In incident reporting systems for professionals, most of the reports are made by nurses and, therefore, probably mainly concern nursing care. Adverse events, in contrast, often concern medical care provided by resident physicians and medical consultants. The fact that these professionals make few reports in the reporting systems could explain the relatively low number of adverse event matches in incident reports. Patients, on the other hand, report about the whole sequence of care. Generally, both healthcare professionals and patients report on a broader group of incidents than adverse events alone. Often, their reports concern quality issues without patient harm.\nStatistical calculations showed that the degree of preventability and severity of the adverse events does not influence the likelihood that they are reported: the results showed no statistically significant differences. However, the non-significance might be the result of low statistical power.\nPatients and healthcare professionals reported relatively more preventable and severe adverse events than less preventable and less severe ones. The degree of preventability of adverse events reported by healthcare professionals was relatively higher than that of adverse events reported by patients. We found that healthcare professionals reported more preventable adverse events and patients more severe adverse events. The learning potential of the adverse event or its visible impact on the patient might contribute to reporting behaviour. But, unfortunately, not all highly preventable and severe adverse events are registered.\nBecause of the low number of adverse events detected in the reporting systems, under-reporting of especially preventable and severe adverse events remains a problem.\nPatients made only a few medico-legal claims. In the Netherlands, there is no real claim culture. 
In addition, patients can only file a claim when they are informed about the possibility of financially redressing the harm that is suffered.\nOn the whole, the results show that, in order to detect the same adverse events identified by record review, one cannot rely on the existing reporting systems within hospitals at present.\nWhy are so few adverse events reported by healthcare providers or patients? Healthcare professionals might be embarrassed or afraid of condemnation by their colleagues or the hospital management after reporting adverse events they were involved in, especially those with severe consequences. Furthermore, there are other barriers mentioned in the literature that influence event reporting, including fear of disciplinary action or potential litigation, time pressure, lack of feedback, the perception that reporting is unnecessary, unclear reporting procedures and a lack of clear definitions as to what constitutes a reportable event [19-25]. Moreover, resident physicians and medical consultants do not generally perceive (surgical) complications to be \"reportable incidents\"; they address complications, among other incidents, in Morbidity and Mortality (M&M) meetings [26]. There are also a number of possible reasons why patients do not report adverse events [19]. Patients may not be aware that they have sustained harm from medical care, as it is not easy to disentangle medical injury from the development of the underlying illness. Moreover, patients can be unaware of the possibilities of making a complaint or claim. Or they can be unwilling to do so because they do not want to commit the time and energy needed to take action, they are concerned that the report will bring tension into the relationship with their doctor, or they do not feel the need to complain because their doctor clearly explained the event to their satisfaction (disclosure). Moreover, patients might not feel the need to file a claim because the Dutch healthcare and social security systems are well developed, so there is little need for financial compensation.\nFinally, patients with grievances or concerns may choose to speak directly to their healthcare provider. Other patients who have experienced an adverse event can choose to step outside the hospital and submit an appeal to, for example, the medical disciplinary board or a civil court.\nThe sensitivity and positive predictive value are low (3.6% and 4.6%), meaning that reports of patients and healthcare professionals are not useful as a predictive method for detecting the same adverse events as a patient record study. Although the specificity and negative predictive value seem high (93.3% and 91.5%), one has to bear in mind that the absence of a report does not imply that no adverse event occurred. We did not research the types and consequences of the reports; therefore, it is possible that reports that did not match still concern adverse events.\nAlthough incidents, complaints and claims do not detect the same (number of) adverse events as the patient record review, reports can still be useful in identifying issues concerning patient safety. In the sample, 441 patients were involved in one or more adverse events and 348 patients were involved in one or more reports. 
With an overlap of 16 patients 332 patients were still involved in situations that healthcare professionals or patients found important enough to report.", "Different from our study design, other studies compared adverse events identified with record review with either incident reporting by healthcare professionals or patient complaints [13,15,16]. Olsen et al. compared local real-time record review with incident reporting and pharmacist surveillance [15]. As in our study, the authors found little overlap in events detected by the different methods. In a study by Sari et al., data of record review were compared with data submitted to a routine incident reporting system of the same patients [13]. They found that the reporting system missed most patient safety incidents that were identified by record review and detected only 5% of those incidents that resulted in patient harm (these incidents probably included adverse events). In our study, 2% of the adverse events that were identified by record review were reported by healthcare professionals. The authors of both studies concluded that incident reporting alone does not provide an adequate assessment of adverse events and recommend hospital staff and researchers to use more than one method at the same time [13,15].\nBismark et al. compared results from a record review study with patient complaints: 0.4% of the adverse events resulted in complaints [16]. In our study, 1.8% of the adverse events were detected in complaints and claims. Bismark found that the odds of a complaint were higher for adverse events with higher preventability and severity [16]. Because of the low numbers we could not compare the results.", "No earlier study has compared record review with both registration systems of reports by healthcare professionals and patients (complaints as well as medico-legal claims). We studied a large number of adverse events and included multiple hospitals in our study design, increasing the likelihood that our results can be generalised to other hospitals.\nOur study has, however, several limitations. Firstly, complaints and claims relating to episodes of care in 2004 may have been submitted later, outside our study period (January-December 2004), especially those that are submitted to the Complaint Committee and Liability Officer. Moreover, in some hospitals resident physicians and medical consultants also make reports of adverse outcomes of medical care during Morbidity and Mortality meetings (M&M). It was not possible to include this as a source in our study, because in some hospitals M&M reports are written down on notepads and are not formally registered. In other hospitals, there was only an M&M registration for surgery or not an M&M registration at all. Finally, patient record review has recognized limitations regarding the estimation of adverse event rates [27,28]. Thomas et al. stated that estimates of adverse events rates from patient record review are highly sensitive to the degree of consensus and confidence among reviewers [28]. Therefore it is possible that reports contained adverse events that were not identified in the record review study. Because of these limitations, the number of adverse events that can be detected by patients and healthcare professionals might be underestimated.", "An examination of reports from healthcare professionals and patients in reporting systems is not sufficient to detect a substantial number of adverse events and thus cannot replace record review. Adverse events are seriously under-reported. 
Barriers to reporting should be reduced.\nThe results show that there is little overlap in the adverse events covered by healthcare professionals' and patients' reports: both groups reported different adverse events. There is under-reporting of adverse events by both groups, but using only the reports of either professionals or patients would have yielded even fewer adverse event matches.\nConsidering the large numbers of patients' and healthcare professionals' reports that were not related to an adverse event, incident reporting systems, complaints and claims could, however, carry a vast amount of valuable information on the quality of care that can be used for the improvement of hospital healthcare. In future research, it seems worthwhile to study what information these sources can offer regarding the quality and safety of hospital care.", "The authors declare that they have no competing interests.", "ICD contributed to the design of the study, coordinated the data collection, analysed and interpreted the data and drafted the manuscript. MS wrote the manuscript and contributed to the analysis and interpretation of the data. LZ contributed to the data analysis and critically read the manuscript. SL contributed to the data collection and critically read the manuscript. GvdW and CW contributed to the design of the study and interpretation of the data, and revised the manuscript critically for important intellectual content. All authors gave their approval of the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/49/prepub\n" ]
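To make the diagnostic-accuracy figures quoted in the discussion easy to verify, a minimal Python sketch follows. It is not part of the original article text; it simply reconstructs the patient-level two-by-two table implied by the counts reported in the Results (5,375 reviewed records, 441 patients with one or more adverse events, 348 patients with one or more reports, and an overlap of 16 patients). Variable names are illustrative only.

# Reconstructing the 2x2 table behind the reported sensitivity, specificity,
# PPV and NPV (3.6%, 93.3%, 4.6% and 91.5%); counts taken from the Results.
total_patients = 5375      # reviewed admissions
with_ae = 441              # patients with >= 1 adverse event on record review
with_report = 348          # patients with >= 1 incident report, complaint or claim
both = 16                  # patients appearing in both groups

tp = both                              # adverse event and a matching report
fn = with_ae - both                    # adverse event but no report
fp = with_report - both                # report but no adverse event
tn = total_patients - tp - fn - fp     # neither

sensitivity = tp / (tp + fn)   # 16/441      ~= 0.036
specificity = tn / (tn + fp)   # 4602/4934   ~= 0.933
ppv = tp / (tp + fp)           # 16/348      ~= 0.046
npv = tn / (tn + fn)           # 4602/5027   ~= 0.915
print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} ppv={ppv:.3f} npv={npv:.3f}")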
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Incident reporting, complaints and claims in Dutch hospitals", "Methods", "Study design and setting", "Patient record review", "Linking record reviews with reporting systems (finding patient identity matches)", "Comparison of narratives (finding adverse event matches)", "Statistical analysis", "Results", "Discussion", "Main findings and interpretation", "Comparison with previous research", "Strengths and limitations", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "For hospital managers and healthcare providers involved in patient safety issues it is important to have access to patient safety data to facilitate decisions on interventions aimed at improving the quality and safety of hospital care. Ideally there is real-time information about patient safety, capturing incidents that reflect actual or potential risks of adverse events. An adverse event is commonly defined as an unintended injury that results in temporary or permanent disability, death, or prolonged hospital stay and is caused by healthcare management rather than by the patient's underlying disease process [1].\nMany countries have performed retrospective patient record review studies to identify adverse events in their hospitals [2-9]. Patient record review is believed to be the most useful method for estimating the rate of adverse events among hospitalised patients [10]. However, the method has some practical disadvantages: it is time-consuming, labour intensive and expensive [11]. Moreover, retrospective record review does not provide real-time information. It is often not possible to gain additional information about the events from the people involved.\nHospitals would benefit from reporting sources, which can provide information on patient safety periodically and on demand. The value of incident reporting by healthcare professionals has been widely recognised. Moreover, it is increasingly believed that patients (or their relatives, hereafter together named \"patients\") can play an important role in signalling safety issues. In the report of the UK Department of Health \"An organisation with a memory\" [12], the importance of both sources is stressed.\nSo far there is little knowledge about the comprehensiveness of incident reporting systems in hospitals. A general weakness that is often mentioned in literature about incident reporting by healthcare providers is that incidents are considerably under-reported [13,14]. A few studies compared results from record review with either incident reports or patient complaints in small scale study designs [13,15,16]. They found little overlap in events detected by the methods they compared and concluded that incident reporting or patient complaints alone do not provide an adequate assessment of adverse events. As far as we know, no earlier studies have reported on the completeness of both incident reporting by healthcare professionals and reporting by patients (complaints as well as claims) in regard of detecting adverse events.\nThe aim of our study is to get insight into the extent in which four hospital reporting systems of incidents reported by healthcare professionals and complaints and claims reported by patients cover adverse events that were identified by patient record review. We want to know if the diverse reporting systems, which are already implemented in Dutch hospitals, are individually or cumulatively useful as a method for identifying adverse events (see Box 1 for some background information about incident reporting by healthcare professionals and the procedures for complaints/claims by patients in the Netherlands).\nMoreover, we are interested whether adverse events with a higher preventability and severity of consequences have a larger likelihood of being reported. It is our assumption that highly preventable adverse events might be reported relatively often, because the persons involved feel the hospital can learn from these events to prevent them in the future. 
Severe adverse events might have an increased chance of being reported, because of their visibility and impact on the patient. When highly preventable and severe adverse events are regularly reported in one or more reporting systems, the issue of under-reporting is less worrying than is stated in literature.\nOur study has four specific research questions: 1) How many adverse events established by patient record review are also identified in one or more reporting systems of incident reports, complaints and claims? 2) What is the amount of overlap in the detection of adverse events between the reporting systems? 3) Is the degree of preventability and severity of adverse events related to the likelihood they are reported? 4) What is the sensitivity and specificity of reports for adverse events?\n[SUBTITLE] Incident reporting, complaints and claims in Dutch hospitals [SUBSECTION] In the Netherlands, healthcare professionals have been increasingly stimulated to report incidents or near misses within their own hospital. For the healthcare professional in 2004 reporting incidents (especially adverse events and calamities) was not mandatory by law but a consequence of healthcare quality and patients' rights legislation. It often was mandatory by employment contract. The incident reporting system is by definition meant for all incidents, including adverse events. An incident can be defined as \"Any deviation from usual medical care that causes an injury to the patient or poses a risk of harm. Includes errors, preventable adverse events, and hazards.\" A near miss is defined as \"Serious error or mishap that has the potential to cause an adverse event but fails to do so because of chance or because it is intercepted.\" [17]. This means that an adverse event is always an incident but an incident does not has to be an adverse event. After an incident/near miss has happened, it can be reported by filling out an electronic or paper-based report form containing, among other things, a description of the event, the time and place of occurrence and the people involved. The incident reporting committee will register and analyse the incidents.\nPatients are encouraged to report their complaints without further definition.\nIn the Netherlands patients have various options to file a complaint or claim in the Netherlands when they are not satisfied with the care or cure they received. The choice of the path depends largely on the intention of the patient making the complaint or claim.\nThere are different reasons for patients to complaint, for example, patients want to have an explanation or an apology or initiate an investigation on the legitimacy of certain acts committed. Reasons to file a claim are for example to get financial compensation or to prevent recurrence of the incident to restore their sense of justice [18]. Patients can submit their grievances to the complaints officer or to the more formal complaint committee within the hospital. The purpose of an informal complaint is mediation or expressing a concern about the quality of care, whereas a formal complaint is made to instigate an investigation followed by a formal judgement about the legitimacy of the complaint (not juridical binding). A legal option outside the hospital is to submit a formal appeal to the medical board to obtain a verdict or when a financial compensation is wanted, to file a claim to the hospital board. The Netherlands does not have a no-fault system. Malpractice claims are judged by the insurance company of the hospital. 
If patients do not agree with the judgement of the insurance company on the liability or the financial compensation, they can approach a civil court. It is possible for patients to use more than one path simultaneously or consecutively. Reporting systems are not set up with the specific intention of capturing adverse events.
Patient reports often concern the quality of care, whereas healthcare professionals frequently report about deviations from procedures.", "In the Netherlands, healthcare professionals have been increasingly stimulated to report incidents or near misses within their own hospital. For the healthcare professional in 2004 reporting incidents (especially adverse events and calamities) was not mandatory by law but a consequence of healthcare quality and patients' rights legislation. It often was mandatory by employment contract. The incident reporting system is by definition meant for all incidents, including adverse events. An incident can be defined as \"Any deviation from usual medical care that causes an injury to the patient or poses a risk of harm. Includes errors, preventable adverse events, and hazards.\" A near miss is defined as \"Serious error or mishap that has the potential to cause an adverse event but fails to do so because of chance or because it is intercepted.\" [17]. This means that an adverse event is always an incident but an incident does not has to be an adverse event. After an incident/near miss has happened, it can be reported by filling out an electronic or paper-based report form containing, among other things, a description of the event, the time and place of occurrence and the people involved. The incident reporting committee will register and analyse the incidents.\nPatients are encouraged to report their complaints without further definition.\nIn the Netherlands patients have various options to file a complaint or claim in the Netherlands when they are not satisfied with the care or cure they received. The choice of the path depends largely on the intention of the patient making the complaint or claim.\nThere are different reasons for patients to complaint, for example, patients want to have an explanation or an apology or initiate an investigation on the legitimacy of certain acts committed. Reasons to file a claim are for example to get financial compensation or to prevent recurrence of the incident to restore their sense of justice [18]. Patients can submit their grievances to the complaints officer or to the more formal complaint committee within the hospital. The purpose of an informal complaint is mediation or expressing a concern about the quality of care, whereas a formal complaint is made to instigate an investigation followed by a formal judgement about the legitimacy of the complaint (not juridical binding). A legal option outside the hospital is to submit a formal appeal to the medical board to obtain a verdict or when a financial compensation is wanted, to file a claim to the hospital board. The Netherlands does not have a no-fault system. Malpractice claims are judged by the insurance company of the hospital. If patients do not agree with the judgement of the insurance company on the liability or the financial compensation, patients can approach a civil court. It is possible for patients to use more than one path simultaneously or consecutively. Reporting systems are not set up with the intention to report adverse events in specific. Patient reports often concern the quality of care, whereas healthcare professionals frequently report about deviations from procedures.", "[SUBTITLE] Study design and setting [SUBSECTION] A retrospective study was performed using a database of reviewed patient records collected during a retrospective patient record review study examining the incidence of adverse events [1,9]. 
This record review study was conducted in 21 hospitals, involving 20% of the acute care hospitals in the Netherlands. All 21 hospitals were invited to participate in our supplementary study on the completeness of incident reporting systems. Fourteen hospitals were willing to participate: two university hospitals, four tertiary teaching hospitals and eight general hospitals. Hospital sizes ranged from 201 to 985 beds. Seven hospitals declined for practical reasons, such as simultaneous participation in other projects, time constraints, elimination of the reporting system after two years, or having an anonymous reporting system (no patient information registered).\nFour reporting systems per hospital were linked with the adverse events database in these fourteen hospitals: 1) the reporting system of the complaint officer, for informal complaints by patients; 2) the reporting system of the complaint committee, for formal complaints by patients; 3) the reporting system of the liability officer, for medico-legal claims by patients; and 4) the incident reporting system, for incident reports by healthcare professionals. For each adverse event identified in the patient records, the equivalent was sought in the four reporting systems of the same hospital by comparing the dates and descriptions of the events.\nThis study is a continuation of the Dutch patient record review study [1,9], for which ethical approval was granted by the VU University Medical Center in Amsterdam. The participating hospitals formally consented to take part in this study.
\n[SUBTITLE] Patient record review [SUBSECTION] In each hospital a stratified random sample was selected of 200 admissions of patients discharged from the hospital in 2004 (> 24 hours stay) and 200 admissions (or fewer if the total number of patients who died in 2004 was lower) of patients deceased in the hospital in 2004, excluding admissions for psychiatry, obstetrics and children less than one year old.\nBetween August 2005 and October 2006, 55 trained physicians reviewed the medical, nursing and, if available, outpatient record of all sampled admissions that contained triggers for adverse events, for example an unplanned readmission, unplanned return to the operating room or unexpected death. The presence of one or more of the 18 predefined triggers was judged in advance by trained nurses. For each patient record, two physician reviewers independently determined the presence of one or more adverse events, their consequences and their degree of preventability, based on a standardised procedure and review form. A preventable adverse event was defined as an adverse event resulting from an error in management due to failure to follow accepted practice at an individual or system level. Accepted practice was taken to be 'the current level of expected performance for the average practitioner or system that manages the condition in question'. The degree of preventability was measured on a six-point scale from \"(Virtually) no evidence of preventability\" to \"(Virtually) certain evidence of preventability\".\nThe degree of severity of the consequences was rated on a seven-point scale from \"No physical impairment or disability\" to \"Death\". The methods of determining adverse events were based on the well-known protocol of The Harvard Medical Practice Study and have been described in detail elsewhere [1].
\n[SUBTITLE] Linking record reviews with reporting systems (finding patient identity matches) [SUBSECTION] Between March 2007 and March 2008, the fourteen hospitals provided datasets from each of the four reporting systems, containing identification characteristics of all patients involved in an incident report, complaint or claim in the year 2004: patient registration number, date of birth, sex and hospital. Reports without sufficient patient identifiers, or reports of events that occurred in 2003 but were reported in 2004, were not eligible for matching. Because of the confidentiality of the information, the researchers had no access to the reporting systems in the hospitals. Sometimes exceptions were made when the incident reports, complaints or claims were not digitally registered; in these cases, the researchers were given permission to access the reports to obtain the required information. A confidentiality agreement was signed by the researchers to maintain secrecy of the information. Data from the reporting systems were recorded anonymously and all information was kept confidential.\nThe patient identification data from the four sources were then linked with the database of the sample from the record review. A case matched when a patient involved in a complaint, claim or incident report matched on identity with a patient from the record review sample. These matches were classified as patient identity matches (see Figure 1).\nFlow chart of study procedure.\n[SUBTITLE] Comparison of narratives (finding adverse event matches) [SUBSECTION] It is possible that a complaint, claim or incident report - hereafter together named \"reports\" - concerned another issue than the adverse event identified by record review; for example, relatives of a patient may complain about how the doctors communicated with the patient and not about the adverse event that involved a wrong dose of medication. 
To be able to examine whether the identity matches related to the same issue, we obtained more information about these events, including the original description of the event, the date of occurrence and the consequences for the patient, by means of a questionnaire.\nFurthermore, we obtained information about the characteristics of the reporter (patient or healthcare professional). The questionnaires were sent to the officer of each reporting system in the hospitals. A researcher (ICD) compared the dates and descriptions of the adverse events from the record reviews with the dates and descriptions/narratives of the reports. The descriptions of the events did not need to correspond perfectly: the physicians who reviewed the patient records made a medical description of the adverse event, while patients generally do not use professional language when describing their dissatisfaction. Moreover, an adverse event can evolve from a chain of events in time, while a report often describes just a part of the situation. Allowing for the above-mentioned differences, cases were classified as a match if it was plausible that: 1) the descriptions were about the same event, or 2) the description in the report was about an event that was related to the adverse event (for example, the adverse event was a consequence of the reported event). The comparisons were discussed with other researchers (LZ, CW). In case of doubt, and for the final verification, a physician (MD) was consulted.\nMatched cases in this final stage - meaning that the topic of the report was related to an adverse event in the record review study - were classified as adverse event matches (see Figure 1).
\n[SUBTITLE] Statistical analysis [SUBSECTION] The usefulness of the reporting systems for the detection of adverse events was determined by the number of adverse event matches (absolute number and percentage of all possible matches). The extent of overlap between reporting systems was determined by counting the number of adverse events that were detected in more than one reporting source. The characteristics of the reporters of each report were described as percentages of the following categories: professionals (physician, resident physician, nurse and student nurse) and patients (patient, child, spouse, child & spouse and legal adviser).\nTo test whether the degree of preventability and severity of the adverse events influenced the likelihood of their being reported, we performed t-tests. Results were considered statistically significant at p < 0.05. For the non-significant differences we calculated the power. Data were analysed using SPSS 15.0.\nEach patient was identified as being positive or negative for an adverse event and for one or more reports that matched an adverse event in the patient record review. Sensitivity and specificity interpret the reporting results retrospectively, whereas positive and negative predictive values establish the predictive properties of the reports for the future.", "A retrospective study was performed using a database of reviewed patient records collected during a retrospective patient record review study examining the incidence of adverse events [1,9]. This record review study was conducted in 21 hospitals, involving 20% of the acute care hospitals in the Netherlands. All 21 hospitals were invited to participate in our supplementary study on the completeness of incident reporting systems. Fourteen hospitals were willing to participate: two university hospitals, four tertiary teaching hospitals and eight general hospitals. Hospital sizes ranged from 201 to 985 beds. 
Seven hospitals declined because of practical reasons, such as simultaneous participation in other projects, time constraints, elimination of the reporting system after two years or having an anonymous reporting system (no patient information registered).\nFour reporting systems per hospital were linked with the adverse events database in these fourteen hospitals: 1) the reporting system of the complaint officer: for informal complaints by patients; 2) the reporting system of the complaint committee: for formal complaints by patients; 3) the reporting system of the liability officer: for medico-legal claims by patients and 4) the reporting system of incident reports: for incident reports by healthcare professionals. For each adverse event identified in the patient records, the equivalent was sought in the four reporting systems of the same hospitals by comparing the date and descriptions of the events.\nThis study is a continuation of the Dutch patient record review study [1,9], for which ethical approval was granted by the VU University Medical Center in Amsterdam. The participating hospitals formally consented to take part in this study.", "In each hospital a stratified random sample was selected of 200 admissions of patients discharged from the hospital in 2004 (> 24 hours stay) and 200 (or less if the total of patients who died in 2004 was lower) admissions of patients deceased in the hospital in the year 2004, excluding admissions of psychiatry, obstetrics and children less than one year old.\nBetween August 2005 and October 2006, 55 trained physicians reviewed the medical, nursing and, if available, the outpatient record of all sampled admissions that contained triggers for adverse events, for example an unplanned readmission, unplanned return to the operating room or unexpected death. The presence of one or more of the 18 predefined triggers was judged in advance by trained nurses. For each patient record two physician reviewers determined independently the presence of one or more, consequences, and degree of preventability of the adverse events, based on a standardised procedure and review form. A preventable adverse event was defined as an adverse event resulting from an error in management due to failure to follow accepted practice at an individual or system level. Accepted practice was taken to be 'the current level of expected performance for the average practitioner or system that manages the condition in question'. The degree of preventability was measured on a six-point scale from \"(Virtually) no evidence of preventability\" to \"(Virtually) certain evidence of preventability\".\nThe degree of severity of the consequences was rated on a seven-point scale from \"No physical impairment or disability\" to \"Death\". The methods of determining adverse events were based on the well-known protocol of The Harvard Medical Practice Study and have been described in detail elsewhere [1].", "Between March 2007 and March 2008 fourteen hospitals provided datasets from each of the four reporting systems, containing identification characteristics of all patients involved in an incident report, complaint or claim in the year 2004: patient registration number, date of birth and sex and hospital. Reports without sufficient patient identifiers or reports of events which occurred in 2003 but were reported in 2004 were non-eligible for matching. Because of the confidentiality of the information, the researchers had no access to the reporting systems in the hospitals. 
Sometimes exceptions were made when the incident reports, complaints or claims were not digitally registered. In these cases, the researchers got permission to access the reports to get the information required. A confidentiality agreement was signed by the researchers to maintain secrecy of the information. Data from the reporting systems were recorded anonymously and all information was kept confidential.\nThe patient identification data from the four sources were then linked with the database of the sample from the record review. A case matched when a patient involved in the complaint, claim or incident report matched on identity with a patient from the record review sample. These matches were classified as patient identity matches (see Figure 1).\nFlow chart of study procedure.", "It is possible that a complaint, claim or incident report - hereafter together named \"reports\"- concerned another issue than the adverse event identified by record review. For example, relatives of a patient complaining about how the doctors communicate with the patient and not about the adverse event that involved a wrong dose of medication. To be able to examine whether the identity matches related to the same issue, we obtained more information about these events, including the original description of the event, the date of occurrence and consequences for the patient, by means of a questionnaire.\nFurthermore we obtained information about the characteristics of the reporter (patient or healthcare professional). The questionnaires were sent to the officer of each reporting system in the hospitals. A researcher (ICD) compared the dates and descriptions of the adverse events from the record reviews with the dates and descriptions/narratives of the reports. The descriptions of the events did not need to correspond perfectly. The physicians who reviewed the patient records made a medical description of the adverse event, while patients generally do not use professional language when describing their dissatisfaction. Moreover, an adverse event can evolve from a chain of events in time, while a report often describes just a part of the situation. Allowing for the above mentioned differences, cases were classified as a match if it was plausible that: 1) the descriptions were about the same event or 2) the description in the report was about an event that was related to the adverse event (for example, the adverse event was a consequence of the reported event). The comparisons were discussed with other researchers (LZ, CW). In doubt and for the final verification a physician was consulted (MD).\nMatched cases in this final stage-meaning that the topic of the report was related to an adverse event in the record review study-were classified as adverse event matches (see Figure 1).", "Usefulness of the reporting systems for detection of adverse events was determined by the number of adverse event matches (absolute number and percentage of all possible matches). The extent of overlap between reporting systems was determined by counting the number of adverse events that were detected in more than one reporting source. The characteristics of the reporters of each report were described in percentages of the following categories: professionals (physician, resident/physician, nurse and student nurse) and patients (patient, child, spouse, child & spouse and legal adviser).\nTo test whether the degree of preventability and severity of the adverse events influence the likelihood they are reported, we performed t-tests. 
Results were considered statistically significant at p < 0.05. For the non-significant differences we calculated the power. Data were analysed using SPSS 15.0.\nEach patient was identified as being positive or negative for an adverse event for one or more reports that matched an adverse event in the patient record review. Sensitivity and specificity interpret the reports' results retrospectively whereas positive predictive values and negative predictive values establish the predictive properties of the reports in the future.", "In the record review study in 14 hospitals, a total of 5,375 patient records were reviewed. The reviewers identified 498 adverse events. In a few medical cases more than one adverse event was identified. A flow chart of the study process is presented in Figure 1.\nThe fourteen hospitals received in total 10,668 reports in four reporting systems in 2004. Of these, 1,236 reports were not eligible, because they were anonymous, not related to a specific patient or the report concerned an event that occurred in 2003. For our study 9,432 reports were eligible: 5,592 incident reports, 3,384 informal complaints, 186 formal complaints and 270 medico-legal claims.\nSince 498 adverse events were identified in the patient record study, patients and healthcare professionals in theory could have reported 498 adverse events as a complaint, claim or incident report in one or more reporting systems. After matching on patient identifiers in stage 1, an overlap was found between 422 (of 9,432) reports of patients and healthcare professionals and 348 (of 5,375) patient records (patient identity matches).\nHealthcare professionals reported 353 incidents and patients submitted 61 informal complaints, 5 formal complaints and 3 medico-legal claims. Sometimes a patient was identified in more than one reporting source.\nDue to confidentiality agreements we could not disclose the patient records in which the adverse events were detected. Therefore we sent the officers of the reporting committees a questionnaire for each report that matched on patient identifiers with a patient record for more information about these reports (n = 422 questionnaires). The response rate was 100%.\nIn stage 2 we excluded the patient records in which no adverse events were detected and the reports that matched on identity with these patient records.\nThe patient identity matches now involved 62 patient records with 70 adverse events in the patient record review and 83 reports in the reporting systems. Comparison of the content of the 83 reports with the 70 adverse events in stage 3 showed that 18 adverse events of the patient record review study were reported in one or more reporting systems (adverse event matches); that is 3.6% (18/498) of all possible adverse event matches. (The result of the matching on adverse event identity is presented in Figure 2.)\nDegree of overlap in patients with AEs between patient record review and reporting systems (n = 5,375).\nHealthcare professionals reported 10 adverse events in 12 reports (2.0% of all possible adverse event matches). Patients reported 14 adverse events in 11 reports (2.8% of all possible adverse event matches). The characteristics of the reporters show that (student) nurses made most of the adverse event reports (40% of all reports) (Figure 3). On behalf of the patient, in the majority of complaints the patients' children were involved (26% of all reports).\nCharacteristics of reporters (n = 23 reporters/reports). 
Characteristics of reporters of each report that matched with a patient record review on patient identifier and AEs.\nThere was only a small overlap in reporting systems (Figure 4). One adverse event was identified in both a patient complaint and an incident report. In another case a patient used two sources to complain and submit a claim (complaint officer and liability officer) and in two cases three sources were used (complaint officer, complaint committee and liability officer).\nDegree of overlap of AEs between registration sources (n = 18 AEs).\nThe mean preventability of adverse events that were reported was 3.61 (n = 18; SD = 1.72) compared to 3.01 (n = 480; SD = 1.72) of those that were not reported in any of the reporting systems (no significant difference, power = 30.6%). The mean preventability of reported adverse events was higher for events reported by healthcare professionals (mean = 4.60; n = 10; SD = 1.07) than events reported by patients (mean = 2.38; n = 9; SD = 1.60), (p < 0.05).\nThe mean severity of adverse events that were reported was 5.33 (n = 18; SD = 2.24) compared to 4.49 (n = 480; SD = 2.45) of those that were not reported (no significant difference, power = 34.4%). The mean severity of reported adverse events was higher for events reported by patients (mean = 6.13; n = 9; SD = 1.64) than for events reported by healthcare professionals (mean = 4.70; n = 10; SD = 2.54), although not statistically significant (power = 31.3%).\nThe sensitivity and positive predictive value of the reports for adverse events were 3.6% and 4.6%, respectively. The specificity was 93.3% and the negative predictive value was 91.5%.", "[SUBTITLE] Main findings and interpretation [SUBSECTION] Among the 498 adverse events identified by patient record review, only 18 adverse events (3.6%) were also found in one or more of the reporting systems, meaning that the majority (480 adverse events) was not registered in any of the four registration systems.\nMost of the adverse events (n = 17) were detected in either a professional's report (n = 9) or a patient's report (n = 8); only one adverse event was detected in both systems. Patient reports and healthcare professional reports contributed almost equally to the detection of adverse events. Since healthcare professionals overall made over five times more reports than patients did, we can conclude that there is a higher chance of finding an adverse event in reporting systems of patient reports than in incident reporting systems for healthcare professionals. In incident reporting systems for professionals, most of the reports are made by nurses and, therefore, probably mainly concern nursing care. Adverse events, in contrast, often concern medical care provided by resident physicians and medical consultants. The fact that these professionals are the ones making few reports in reporting systems could explain the relatively low number of adverse event matches in incident reports. Patients, on the other hand, report about the whole sequence of care. Generally, both healthcare professionals and patients report about a broader group of incidents than mere adverse events. Often, their reports concern quality issues without patient harm.\nStatistical calculations showed that the degree of preventability and severity of the adverse events does not influence the likelihood they are reported. The results showed no statistically significant differences. 
However, the non-significance might be the result of the low power.\nPatients and healthcare professionals reported relatively more preventable and severe adverse events compared to less preventable and severe adverse events. The degree of preventability of adverse events reported by healthcare professionals was relatively higher than that of adverse events reported by patients. We found that healthcare professionals reported more preventable adverse events and patients more severe adverse events. The learning potential of the adverse event or its visible impact on the patient might contribute to reporting behaviour. But, unfortunately, not all highly preventable and severe adverse events are registered.\nBecause of the low number of adverse events detected in the reporting systems, under-reporting of especially preventable and severe adverse events remains a problem.\nPatients made only a few medico-legal claims. In the Netherlands, there is not a real claim culture. In addition, patients can only file a claim when they are informed about the possibility to financially redress the harm that is suffered.\nOn the whole, the results show that in order to detect the same adverse events identified by record review, one cannot rely on the existing reporting systems within hospitals at present.\nWhy are so few adverse events reported by healthcare providers or patients? Healthcare professionals might be embarrassed or afraid of condemnation by their colleagues or the hospital management after reporting adverse events they were involved in, especially those with severe consequences. Furthermore, there are other barriers mentioned in the literature that influence event reporting, including: fear of disciplinary action or potential litigation, time pressure, no feedback, the perception that reporting is unnecessary, unclear reporting procedures and a lack of clear definitions as to what constitutes a reportable event [19-25]. Moreover, resident physicians and medical consultants do not generally perceive (surgical) complications to be \"reportable incidents\". They address complications, among other incidents, in Mortality and Morbidity meetings (M&M) [26]. There are also a number of possible reasons why patients do not report adverse events [19]. Patients may not be aware they have sustained harm from medical care, while it is not easy to disentangle medical injury from the development of the underlying illness. Moreover, patients can be unaware of the possibilities of making a complaint or claim. Or they can be unwilling to do so, because they do not want to commit the time and energy needed to take action, they are concerned the report will bring tension into the relationship with their doctor or they do not feel the need to complain, because their doctor clearly explained the event to their satisfaction (disclosure). Moreover, patients might not feel the need to file a claim because in the Netherlands the healthcare and social security systems are well developed and patients do not feel the need for financial compensation.\nFinally, patients with grievances or concerns may choose to speak directly to their healthcare provider. Other patients that have experienced an adverse event can choose to step outside the hospital and submit an appeal to e.g. 
the medical disciplinary board or civil court.\nThe sensitivity and positive predictive value are low (3.6% and 4.6%), meaning that reports of patients and healthcare professionals are not useful as a predictive method to detect the same adverse events as a patient record study. Although specificity and negative predictive value seem high (93.3% and 91.5%), one has to bear in mind that an absence of a report does not imply that no adverse event occurred. We did not research the types and consequences of the reports; therefore, it is possible that reports that did not match still concern adverse events.\nAlthough incidents, complaints and claims do not detect the same (number of) adverse events as the patient record review, reports can still be useful in identifying issues concerning patient safety. In the sample 441 patients were involved in one or more adverse events and 348 patients were involved in one or more reports. With an overlap of 16 patients, 332 patients were still involved in situations that healthcare professionals or patients found important enough to report.\n[SUBTITLE] Comparison with previous research [SUBSECTION] Different from our study design, other studies compared adverse events identified by record review with either incident reporting by healthcare professionals or patient complaints [13,15,16]. Olsen et al. compared local real-time record review with incident reporting and pharmacist surveillance [15]. As in our study, the authors found little overlap in events detected by the different methods. In a study by Sari et al., data of record review were compared with data submitted to a routine incident reporting system of the same patients [13]. They found that the reporting system missed most patient safety incidents that were identified by record review and detected only 5% of those incidents that resulted in patient harm (these incidents probably included adverse events). In our study, 2% of the adverse events that were identified by record review were reported by healthcare professionals. The authors of both studies concluded that incident reporting alone does not provide an adequate assessment of adverse events and recommend hospital staff and researchers to use more than one method at the same time [13,15].\nBismark et al. compared results from a record review study with patient complaints: 0.4% of the adverse events resulted in complaints [16]. In our study, 1.8% of the adverse events were detected in complaints and claims. Bismark found that the odds of a complaint were higher for adverse events with higher preventability and severity [16]. Because of the low numbers we could not compare the results.\n[SUBTITLE] Strengths and limitations [SUBSECTION] No earlier study has compared record review with both registration systems of reports by healthcare professionals and patients (complaints as well as medico-legal claims). We studied a large number of adverse events and included multiple hospitals in our study design, increasing the likelihood that our results can be generalised to other hospitals.\nOur study has, however, several limitations. Firstly, complaints and claims relating to episodes of care in 2004 may have been submitted later, outside our study period (January-December 2004), especially those that are submitted to the Complaint Committee and Liability Officer. Moreover, in some hospitals resident physicians and medical consultants also make reports of adverse outcomes of medical care during Morbidity and Mortality meetings (M&M). It was not possible to include this as a source in our study, because in some hospitals M&M reports are written down on notepads and are not formally registered. In other hospitals, there was only an M&M registration for surgery or not an M&M registration at all. Finally, patient record review has recognized limitations regarding the estimation of adverse event rates [27,28]. Thomas et al. stated that estimates of adverse event rates from patient record review are highly sensitive to the degree of consensus and confidence among reviewers [28]. Therefore it is possible that reports contained adverse events that were not identified in the record review study. Because of these limitations, the number of adverse events that can be detected by patients and healthcare professionals might be underestimated.", "An examination of reports from healthcare professionals and patients in reporting systems is not sufficient to detect a substantial number of adverse events and thus cannot replace record review. Adverse events are seriously under-reported. Barriers to reporting should be reduced.\nThe results show that there is little overlap in adverse events covered by healthcare professionals' and patients' reports: both groups reported different adverse events. There is underreporting of adverse events by both groups, but using either reports of professionals or patients would have yielded even fewer adverse event matches.\nConsidering the large numbers of patients' and healthcare professionals' reports that were not related to an adverse event, incident reporting systems, complaints and claims could, however, carry a vast amount of valuable information on the quality of care that can be used for the improvement of hospital healthcare. 
In future research, it seems worthwhile to study what information these sources can offer regarding the quality and safety of hospital care.", "The authors declare that they have no competing interests.", "ICD contributed to the design of the study, coordinated the data collection, analysed and interpreted the data and drafted the manuscript. MS wrote the manuscript and contributed to the analysis and interpretation of the data. LZ contributed to the data analysis and critically read the manuscript. SL contributed to the data collection and critically read the manuscript. GvdW and CW contributed to the design of the study and interpretation of the data, and revised the manuscript critically for important intellectual content. All authors gave their approval of the final version of the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/49/prepub\n" ]
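The sensitivity, specificity and predictive values quoted in the results above follow directly from the patient-level overlap counts reported in the text (5,375 reviewed records, 441 patients with at least one adverse event on review, 348 patients with at least one report, and an overlap of 16 patients). A minimal Python sketch of that 2x2 calculation is shown below; the study itself used SPSS, and the variable names here are illustrative only.

```python
# Minimal sketch: patient-level agreement between record review (reference standard)
# and "any report" status, using only the counts quoted in the text above.

total_records = 5375   # reviewed patient records
ae_on_review = 441     # patients with >=1 adverse event found by record review
with_report = 348      # patients with >=1 complaint, claim or incident report
overlap = 16           # patients appearing in both

tp = overlap                          # reported and AE on review
fp = with_report - overlap            # reported, no AE on review
fn = ae_on_review - overlap           # AE on review, not reported
tn = total_records - tp - fp - fn     # neither

sensitivity = tp / (tp + fn)          # ~3.6%
specificity = tn / (tn + fp)          # ~93.3%
ppv = tp / (tp + fp)                  # ~4.6%
npv = tn / (tn + fn)                  # ~91.5%

print(f"sensitivity={sensitivity:.1%}, specificity={specificity:.1%}, "
      f"PPV={ppv:.1%}, NPV={npv:.1%}")
```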
[ null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Obesity prevalence estimates in a Canadian regional population of preschool children using variant growth references.
21356057
Childhood obesity is a public health problem in Canada. Accurate measurement of a health problem is crucial in defining its burden. The objective of this study is to compare the prevalence estimates of overweight and obesity in preschool children using three growth references.
BACKGROUND
Weights and heights were measured on 1026 preschool children born in Newfoundland and Labrador (NL), Canada, and body mass index was calculated. The prevalence of overweight and obesity was determined and statistical comparisons conducted among the three growth references: the Centres for Disease Control (CDC), the International Obesity Task Force (IOTF) and the World Health Organization (WHO).
METHODS
The CDC and IOTF produced similar estimates of the prevalence of overweight (19.1% versus 18.2%), while the WHO reported a higher prevalence (26.7%, p < .001). The CDC classified twice as many children as obese compared to the IOTF (16.6% versus 8.3%, p < .001) and a third more than the WHO (16.6% versus 11.3%, p < .01). There was a variable level of agreement between methods.
RESULTS
The CDC reported a much higher prevalence of obesity compared to the other references. The prevalence of childhood obesity is dependent on the growth reference used.
CONCLUSIONS
[ "Body Height", "Body Mass Index", "Body Weight", "Centers for Disease Control and Prevention, U.S.", "Child, Preschool", "Cross-Sectional Studies", "Female", "Growth Charts", "Humans", "Male", "Newfoundland and Labrador", "Obesity", "Overweight", "Prevalence", "United States", "World Health Organization" ]
3056808
null
null
Methods
The Memorial University Human Investigations Committee and the Health and Community Services Boards ethics committees approved this study. [SUBTITLE] Study Design & Population [SUBSECTION] This is a cross-sectional analysis of 1026 children (mean age 4.5 years) living in the province of Newfoundland and Labrador who participated in pre-Kindergarten Health Fairs prior to starting school in 2005. The Fairs were open to all children and provided immunizations and tests for vision, hearing and developmental problems. The population is described elsewhere [15]. Two trained research staff collected the information required for the current study. [SUBTITLE] Data Collection and Study Variables [SUBSECTION] Research assistants trained by a Pediatrician took direct anthropometric measures. Children were asked to take off their shoes for the height measure and to take off any outer clothing for the weight measure. Direct measures of weight (kg) were collected using a Tanita digital weighing scale calibrated to the hospital digital scale and rounded to one decimal place. An Invicta stadiometer was used to measure the height of the children (cm), rounded to one decimal place. Two measures were taken and the average recorded. Sex and age in years and months were collected and rounded to the nearest half year. [SUBTITLE] Defining overweight and obesity [SUBSECTION] For each child, BMI (kg/m2) was calculated and classified according to the cut-points published by the CDC, IOTF and the WHO. [SUBTITLE] The US Centre for Disease Control [SUBSECTION] The US CDC publishes age- and sex-specific BMI growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th) and obese (BMI ≥95th) [16]. [SUBTITLE] The International Obesity Task Force [SUBSECTION] In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity: a BMI ≥ 25 and a BMI ≥ 30, respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively. [SUBTITLE] The World Health Organization [SUBSECTION] In 2006, the WHO released new growth references for assessing and monitoring the growth of children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].
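To make the classification step above concrete, the following is a minimal sketch rather than the study's actual software. It assumes the child's BMI-for-age percentile has already been looked up in the age- and sex-specific tables of the chosen reference (those tables are not reproduced here), and it encodes only the cut-points quoted above; function names, the example values, and the uniform use of >= at the boundaries are illustrative assumptions.

```python
# Minimal sketch of BMI calculation and weight-status classification by
# BMI-for-age percentile under the three references described above.

def bmi(weight_kg: float, height_cm: float) -> float:
    """Body mass index in kg/m^2."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

# (overweight threshold, obesity threshold) as BMI-for-age percentiles,
# taken from the descriptions in the text; the exact boundary convention
# (> vs >=) differs slightly between references and is simplified here.
CUT_POINTS = {
    "CDC":  (85.0, 95.0),   # overweight >85th, obese >=95th
    "IOTF": (91.0, 99.0),   # overweight >=91st, obese >=99th
    "WHO":  (84.0, 97.7),   # overweight >+1 SD (>84th), obese >+2 SD (>97.7th)
}

def classify(percentile: float, reference: str) -> str:
    overweight_cut, obese_cut = CUT_POINTS[reference]
    if percentile >= obese_cut:
        return "obese"
    if percentile >= overweight_cut:
        return "overweight"
    return "not overweight"

# Hypothetical example: BMI of a 105 cm, 20.5 kg child, and classification of a
# child sitting at the 96th BMI-for-age percentile of each reference.
print(round(bmi(20.5, 105.0), 1))                          # 18.6
print({ref: classify(96.0, ref) for ref in CUT_POINTS})    # CDC obese; IOTF/WHO overweight
```

The toy example also illustrates the paper's central point: the same percentile can fall into different categories depending on the reference chosen.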
[SUBTITLE] Statistical Analysis [SUBSECTION] Continuous variables were normally distributed and reported using means and standard deviations. Categorical data were reported as whole numbers and percentages. Statistical comparisons were conducted using student t-tests for continuous data and chi-squared analysis for categorical data. Cohen's kappa statistic was calculated to determine the level of agreement between the growth references. A kappa greater than .80 signifies very good agreement, between .60-.80 a good level of agreement and that less than .50 little to moderate agreement [17]. A p-value <.05 was statistically significant. All data was analyzed using the Statistical Package for the Social Sciences (SPSS 15.0).
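As an illustration of the agreement statistic described above, here is a minimal sketch of unweighted Cohen's kappa; the study itself used SPSS, and the labels in the example are hypothetical, not the study's data.

```python
# Minimal sketch of (unweighted) Cohen's kappa for agreement between two
# growth references applied to the same children.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # chance agreement expected from the marginal category frequencies
    expected = sum(freq_a[c] * freq_b[c] for c in set(freq_a) | set(freq_b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of six children under two references
cdc  = ["obese", "overweight", "normal", "normal", "obese", "overweight"]
iotf = ["overweight", "overweight", "normal", "normal", "obese", "normal"]
print(round(cohens_kappa(cdc, iotf), 2))   # 0.5 for this toy example: little to moderate agreement
```

Libraries such as scikit-learn expose an equivalent cohen_kappa_score function; the manual form above simply keeps the sketch dependency-free.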
null
null
null
null
[ "Background", "Study Design & Population", "Data Collection and Study Variables", "Defining overweight and obesity", "The US Centre for Disease Control", "The International Obesity Task Force", "The World Health Organization", "Statistical Analysis", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Globally, obesity is a significant public health problem [1,2] and a number of studies report an increasing prevalence of overweight and obese children in Canada [3-6]. The health risks associated with excess body weight are well documented [7,8]. The age and sex specific body mass index (kg/m2) or BMI is the most common method for assessing weight status and health risk in children [9]. There are three sets of growth references commonly used to assess a child's weight status and health risk; BMI cut-points published by the US Centre for Disease Control and Prevention (CDC), the International Obesity Task Force (IOTF) and those published by the World Health Organization (WHO) [10-12]. Inconsistent prevalence estimates of childhood overweight and obesity based on variant growth references pose a challenge in defining the burden of childhood obesity at a population level. Recommendations are inconsistent on which method to use [13,14].\nThe purpose of this paper is to compare prevalence estimates of overweight and obesity among a regional preschool population living in the province of Newfoundland and Labrador, Canada using the CDC, IOTF and WHO BMI cut-points. A secondary objective is to assess the level of agreement between the growth references.", "This is cross-sectional analysis of 1026 children (mean age 4.5 years) living in the province of Newfoundland and Labrador who participated in pre-Kindergarten Health Fairs prior to starting school in 2005. The Fairs were open to all children and provided immunizations and tests for vision, hearing and developmental problems. The population is described elsewhere [15]. Two trained research staff collected the information required for the current study.", "Research assistants trained by a Pediatrician took direct anthropometric measures. Children were asked to take off their shoes for the height measure and to take off any over clothing for the weight measure. Direct measures of weight were collected using a Tanita digital weighing scale (kg) rounded to one decimal place calibrated to the hospital digital scale. An Invicta stadiometer (cm) was used to measure the height rounded to one decimal place of the children. Two measures were taken and the average recorded. Sex and age in years and months were collected and rounded to the nearest half year.", "For each child BMI (kg/m2) was calculated and classified according to the cut-points published by the CDC, IOTF and the WHO.\n[SUBTITLE] The US Centre for Disease Control [SUBSECTION] The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\nThe US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\n[SUBTITLE] The International Obesity Task Force [SUBSECTION] In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. 
These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\nIn 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\n[SUBTITLE] The World Health Organization [SUBSECTION] In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\nIn 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].", "The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].", "In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. 
These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.", "In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].", "Continuous variables were normally distributed and reported using means and standard deviations. Categorical data were reported as whole numbers and percentages. Statistical comparisons were conducted using student t-tests for continuous data and chi-squared analysis for categorical data. Cohen's kappa statistic was calculated to determine the level of agreement between the growth references. A kappa greater than .80 signifies very good agreement, between .60-.80 a good level of agreement and that less than .50 little to moderate agreement [17]. A p-value <.05 was statistically significant. All data was analyzed using the Statistical Package for the Social Sciences (SPSS 15.0).", "The anthropometric measures and characteristics of the children are presented in Table 1. There were no significant differences in these variables. Figure 1 illustrates that for the overweight group, there were significant differences between CDC and WHO (19.1% and 26.7%) and between IOTF and WHO (18.2% and 26.7%). When children were classified as obese, there were significant differences across the three references (CDC 16.6%, IOTF 8.3%, WHO 11.3%).\nCharacteristics of study sample (n = 1026)\nBMI: Body Mass Index, WHO: The World Health Organization, IOTF: The International Obesity Task Force, CDC: The Centre for Disease Control and Prevention, SD: Standard Deviation\nPrevalence of overweight and obesity using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < .05 significant, ns: not significant\nIn Figure 2, for boys, there were significant differences in overweight between IOTF and WHO (16.4% vs. 28.5%) and between CDC and WHO (18.6% vs. 28.5%). A similar relationship was found for girls. In Figure 3, there were significant differences in the obese category across all three standards for both boys (CDC 17.3%, IOTF 7.8%, WHO 11.7%) and girls (CDC 15.7%, IOTF 8.8%, WHO 10.8%). 
To determine the level of agreement between growth references, we calculated the kappa statistic (Table 2).\nPrevalence of overweight in boys and girls using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < .05 significant, ns: not significant\nPrevalence of obesity in boys and girls using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < .05 significant, ns: not significant\nComparison of agreement for categorizing weight categories between 1) CDC and WHO, 2) IOTF and WHO and 3) CDC and IOTF\nWHO: The World Health Organization, IOTF: The International Obesity Task Force, CDC: The Centre for Disease Control and Prevention, p < .05 significant", "There is a high prevalence of childhood overweight or obesity in this Canadian preschool population, regardless of the growth reference used. Approximately one in three preschool children was overweight or obese. Similar statistics are being reported in other developed countries [18-27]. The short- and long-term physical health risks for children associated with excess weight include hypertension, hyperinsulinemia, glucose intolerance, type II diabetes, dyslipidemia, increased risk of early cardiac disease and psychosocial difficulties. Childhood obesity is a significant public health concern and accurate measurement and classification is crucial in determining the burden of the health problem. In the current study, the CDC reference gave a significantly higher prevalence of childhood obesity compared to the other growth references. The CDC reference classified more than double the prevalence of obesity compared to the IOTF and approximately 30% more obese children than the WHO. Whether this is an accurate reflection or an overestimation of the prevalence of obesity in this population is difficult to tell. Conflicting recommendations are provided by various organizations that include the Canadian Pediatric Society [28] and the Clinical Practice Guidelines on the management and prevention of obesity in adults and children [13]. Although there was an overall good level of agreement between the different growth references, there were some noticeable inconsistencies in classification which may lead to challenges in assessing the weight status of individual children in a clinical environment.\nSeveral studies have been published comparing the prevalence estimates of overweight and obesity in children and youth using various growth references, and the findings are inconsistent [18-27]. One study recently published on a sample of Canadian children and youth compared the estimates of excess weight using the same three reference cut-points as in the current study and similar findings were reported. The prevalence estimate for childhood obesity was the same based on the WHO and CDC growth references (13%) but lower based on the IOTF cut-points (8%) [18]. Another study, on children between four and six years of age living in the province of Alberta, compared prevalence estimates using CDC and IOTF cut-points. The authors reported that an estimated 13.8% and 11.4% of children were overweight and obese according to the CDC cut-offs, while an estimated 11.5% and 6.8% of children were classified as overweight and obese using the IOTF cut-offs. Similar to the current study, the CDC growth references classified almost twice as many children as obese compared to the IOTF method. The level of agreement between the two methods was .69 (p < .01), significantly lower than that in the current study [22].\nIn an Italian study of 258 preschool children three to six years of age, the prevalence of overweight and obesity was compared using the CDC and IOTF methods as well as local Italian growth references published by Luciano [23]. All three sets of growth references gave similar estimates of overweight in boys (CDC 16.10%, IOTF 12.90%, Luciano 14.5%) and girls (CDC 15.70%, IOTF 15.7%, Luciano 10%). These findings were not dissimilar to the current study's findings on overweight in boys (CDC 18.6%, IOTF 16.4%) and girls (CDC 19.6%, IOTF 20.2%). However, in the Italian study the use of the CDC reference led to a prevalence estimate of obesity in boys that was ten times that of the other references (CDC 10.5%, IOTF 0.8%, Luciano 0.8%). In girls the CDC estimate was also significantly higher compared to the other two references (CDC 11.90%, IOTF 6.7%, Luciano 3%), although the difference was not as large as in the boys. Based on this study it appears that the reliability of the growth reference used may be affected by the underlying prevalence of childhood obesity in the population, as the Italian rates of childhood obesity tend to be much lower than those found in North America. Ethnic diversity will also have an impact. This provides further challenges when making international comparisons [23]. In contrast to the previous studies, in a study recently conducted on 604 Spanish children 6 to 10 years of age, researchers reported that when using the WHO criteria, the combined prevalence of overweight and obesity was 39%, significantly higher than both the CDC estimate of 20% and the IOTF estimate of 17%. In that study the CDC and IOTF demonstrated a high level of agreement (kappa >.80); however, the level of agreement between the CDC and the IOTF and the WHO was poor (kappa < .40). The authors concluded that it was the WHO criteria that overestimated the prevalence of childhood overweight and obesity, not the CDC as seen in the current study [19].\nThe significant differences in prevalence estimates of childhood obesity produced by the three growth references make it difficult to assess the extent of the problem and its associated health burden, to conduct research, to make population comparisons and to inform policy. The CDC and, to a more limited extent, the WHO references appear to allow for an earlier identification of a larger number of children affected by excess weight compared to the IOTF. If the CDC classifies children accurately, then using this reference may prompt health professionals to provide primary prevention to a pediatric population earlier to help reduce the risk of associated health conditions [9]. Although the association between increasing BMI and increased health risk has been substantiated in adults [29], it is important to raise a concern that none of these growth references have been convincingly linked to the future development of adverse health outcomes in children.\nAn important strength of our study is that we directly measured children's heights and weights and did not rely on self-reported survey data, which may underestimate the prevalence of overweight and obesity. Our study limitations included a non-random sample from one region in Canada. However, the prevalence estimates of overweight and obesity in the current study were very similar to a larger provincial study [5], providing confidence that our study findings are representative of this provincial population. The current research adds to the debate about the relative reliability and validity of the variant growth references used to monitor a child's growth and has implications for future research. It raises several research questions that still need to be answered. Have we determined the ideal definition of overweight and obesity in children so that prevention efforts can be initiated at the earliest opportunity? Are some of these references better suited to particular populations depending on the mix of ethnicity and race? Can we validate these growth references against a gold standard? Should references based on optimal growth be considered the gold standard? Are any of these standards associated with adverse health outcomes in children? It is clear that more research, including longitudinal studies, is needed in order to answer many of these questions.", "Childhood obesity is a public health problem and is associated with increased morbidity and mortality. Obesity tracks through the life cycle [2,30], suggesting that early identification and primary prevention are key to reversing and preventing the upward trend into adult obesity and its potential future burden of illness [31]. A consensus is urgently needed on the most valid and reliable growth reference to use to measure and monitor a child's growth for both clinical and research purposes.", "(NL): Newfoundland and Labrador; (BMI): body mass index; (CDC): Centre for Disease Control and Prevention; (IOTF): International Obesity Task Force; (WHO): World Health Organization.", "The authors declare that they have no competing interests.", "LT was the principal investigator on the project and was responsible for the intellectual conception and design of the study, including the data analysis and interpretation of the data. LT was also the lead investigator on both the initial funding application and the manuscript preparation, and provided final approval for submission. LN contributed to the conception and design of the study, including preparation of the funding application. LN also helped draft and revise the final manuscript. Both authors have approved the final version of the manuscript for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2431/11/21/prepub\n" ]
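The three sets of cut-points described above differ only in where the overweight and obesity thresholds sit on the BMI-for-age distribution. A minimal sketch of that classification logic is given below; it is illustrative only, the percentile and z-score inputs would in practice be looked up in the published CDC, IOTF and WHO reference tables (not reproduced here), and all function and variable names are assumptions rather than anything taken from the study.

```python
# Illustrative sketch of the three classification rules described above.
# Assumes the child's BMI-for-age percentile (CDC, IOTF) or z-score (WHO)
# has already been obtained from the corresponding published reference data.

def bmi(weight_kg: float, height_cm: float) -> float:
    """BMI = weight (kg) divided by height (m) squared."""
    height_m = height_cm / 100.0
    return weight_kg / (height_m ** 2)

def classify_cdc(bmi_percentile: float) -> str:
    # CDC: overweight > 85th percentile, obese >= 95th percentile
    if bmi_percentile >= 95:
        return "obese"
    return "overweight" if bmi_percentile > 85 else "not overweight"

def classify_iotf(bmi_percentile: float) -> str:
    # IOTF (preschool ages): overweight >= 91st, obese >= 99th percentile
    if bmi_percentile >= 99:
        return "obese"
    return "overweight" if bmi_percentile >= 91 else "not overweight"

def classify_who(bmi_z_score: float) -> str:
    # WHO: overweight = BMI-for-age between +1 and +2 SD, obese = above +2 SD
    if bmi_z_score > 2:
        return "obese"
    return "overweight" if bmi_z_score > 1 else "not overweight"

if __name__ == "__main__":
    # Hypothetical preschool child: 19.5 kg and 105 cm (about 17.7 kg/m2)
    print(round(bmi(19.5, 105.0), 1))
    # The same child can fall into different categories under each reference
    print(classify_cdc(90.0), classify_iotf(90.0), classify_who(1.3))
```

A child sitting just above the CDC and WHO overweight thresholds but below the IOTF 91st percentile, as in the hypothetical example, is exactly the kind of case that drives the disagreement between references reported in the Results.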
[ "Background", "Methods", "Study Design & Population", "Data Collection and Study Variables", "Defining overweight and obesity", "The US Centre for Disease Control", "The International Obesity Task Force", "The World Health Organization", "Statistical Analysis", "Results", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Globally, obesity is a significant public health problem [1,2] and a number of studies report an increasing prevalence of overweight and obese children in Canada [3-6]. The health risks associated with excess body weight are well documented [7,8]. The age and sex specific body mass index (kg/m2) or BMI is the most common method for assessing weight status and health risk in children [9]. There are three sets of growth references commonly used to assess a child's weight status and health risk; BMI cut-points published by the US Centre for Disease Control and Prevention (CDC), the International Obesity Task Force (IOTF) and those published by the World Health Organization (WHO) [10-12]. Inconsistent prevalence estimates of childhood overweight and obesity based on variant growth references pose a challenge in defining the burden of childhood obesity at a population level. Recommendations are inconsistent on which method to use [13,14].\nThe purpose of this paper is to compare prevalence estimates of overweight and obesity among a regional preschool population living in the province of Newfoundland and Labrador, Canada using the CDC, IOTF and WHO BMI cut-points. A secondary objective is to assess the level of agreement between the growth references.", "The Memorial University Human Investigations Committee and the Health and Community Services Boards ethics committees approved this study.\n[SUBTITLE] Study Design & Population [SUBSECTION] This is cross-sectional analysis of 1026 children (mean age 4.5 years) living in the province of Newfoundland and Labrador who participated in pre-Kindergarten Health Fairs prior to starting school in 2005. The Fairs were open to all children and provided immunizations and tests for vision, hearing and developmental problems. The population is described elsewhere [15]. Two trained research staff collected the information required for the current study.\nThis is cross-sectional analysis of 1026 children (mean age 4.5 years) living in the province of Newfoundland and Labrador who participated in pre-Kindergarten Health Fairs prior to starting school in 2005. The Fairs were open to all children and provided immunizations and tests for vision, hearing and developmental problems. The population is described elsewhere [15]. Two trained research staff collected the information required for the current study.\n[SUBTITLE] Data Collection and Study Variables [SUBSECTION] Research assistants trained by a Pediatrician took direct anthropometric measures. Children were asked to take off their shoes for the height measure and to take off any over clothing for the weight measure. Direct measures of weight were collected using a Tanita digital weighing scale (kg) rounded to one decimal place calibrated to the hospital digital scale. An Invicta stadiometer (cm) was used to measure the height rounded to one decimal place of the children. Two measures were taken and the average recorded. Sex and age in years and months were collected and rounded to the nearest half year.\nResearch assistants trained by a Pediatrician took direct anthropometric measures. Children were asked to take off their shoes for the height measure and to take off any over clothing for the weight measure. Direct measures of weight were collected using a Tanita digital weighing scale (kg) rounded to one decimal place calibrated to the hospital digital scale. An Invicta stadiometer (cm) was used to measure the height rounded to one decimal place of the children. 
Two measures were taken and the average recorded. Sex and age in years and months were collected and rounded to the nearest half year.\n[SUBTITLE] Defining overweight and obesity [SUBSECTION] For each child BMI (kg/m2) was calculated and classified according to the cut-points published by the CDC, IOTF and the WHO.\n[SUBTITLE] The US Centre for Disease Control [SUBSECTION] The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\nThe US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\n[SUBTITLE] The International Obesity Task Force [SUBSECTION] In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\nIn 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\n[SUBTITLE] The World Health Organization [SUBSECTION] In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\nIn 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). 
Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\nFor each child BMI (kg/m2) was calculated and classified according to the cut-points published by the CDC, IOTF and the WHO.\n[SUBTITLE] The US Centre for Disease Control [SUBSECTION] The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\nThe US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\n[SUBTITLE] The International Obesity Task Force [SUBSECTION] In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\nIn 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\n[SUBTITLE] The World Health Organization [SUBSECTION] In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. 
Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\nIn 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\n[SUBTITLE] Statistical Analysis [SUBSECTION] Continuous variables were normally distributed and reported using means and standard deviations. Categorical data were reported as whole numbers and percentages. Statistical comparisons were conducted using student t-tests for continuous data and chi-squared analysis for categorical data. Cohen's kappa statistic was calculated to determine the level of agreement between the growth references. A kappa greater than .80 signifies very good agreement, between .60-.80 a good level of agreement and that less than .50 little to moderate agreement [17]. A p-value <.05 was statistically significant. All data was analyzed using the Statistical Package for the Social Sciences (SPSS 15.0).\nContinuous variables were normally distributed and reported using means and standard deviations. Categorical data were reported as whole numbers and percentages. Statistical comparisons were conducted using student t-tests for continuous data and chi-squared analysis for categorical data. Cohen's kappa statistic was calculated to determine the level of agreement between the growth references. A kappa greater than .80 signifies very good agreement, between .60-.80 a good level of agreement and that less than .50 little to moderate agreement [17]. A p-value <.05 was statistically significant. All data was analyzed using the Statistical Package for the Social Sciences (SPSS 15.0).", "This is cross-sectional analysis of 1026 children (mean age 4.5 years) living in the province of Newfoundland and Labrador who participated in pre-Kindergarten Health Fairs prior to starting school in 2005. The Fairs were open to all children and provided immunizations and tests for vision, hearing and developmental problems. The population is described elsewhere [15]. Two trained research staff collected the information required for the current study.", "Research assistants trained by a Pediatrician took direct anthropometric measures. Children were asked to take off their shoes for the height measure and to take off any over clothing for the weight measure. Direct measures of weight were collected using a Tanita digital weighing scale (kg) rounded to one decimal place calibrated to the hospital digital scale. 
An Invicta stadiometer (cm) was used to measure the height rounded to one decimal place of the children. Two measures were taken and the average recorded. Sex and age in years and months were collected and rounded to the nearest half year.", "For each child BMI (kg/m2) was calculated and classified according to the cut-points published by the CDC, IOTF and the WHO.\n[SUBTITLE] The US Centre for Disease Control [SUBSECTION] The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\nThe US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].\n[SUBTITLE] The International Obesity Task Force [SUBSECTION] In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\nIn 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.\n[SUBTITLE] The World Health Organization [SUBSECTION] In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].\nIn 2006, the WHO released new growth references for assessing and monitoring the growth in children. 
These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].", "The US CDC publishes BMI age and sex-specific growth references derived from five nationally representative surveys of American children conducted between 1963 and 1994 [10]. Using software downloaded with permission from the CDC, children were classified as overweight (BMI >85th ) and obese (BMI ≥95th) [16].", "In 2000, the IOTF published BMI cut-points for defining overweight and obesity in children between 2 and 18 years [11]. These references are based on children living in six countries (i.e., United States, Brazil, Great Britain, Hong Kong, Netherlands, Singapore) and can be extrapolated to the widely accepted definitions for adult overweight and obesity; a BMI ≥ 25 and a BMI ≥ 30 respectively, shown to be predictive of adverse health outcomes in adults. Using these cut-points, a preschool child is considered overweight with a BMI ≥ 91st and obese with a BMI ≥ 99th percentile, respectively.", "In 2006, the WHO released new growth references for assessing and monitoring the growth in children. These were generated from data collected from the WHO Multicentre Growth Reference Study in six countries (i.e., Brazil, Ghana, India, Oman, Norway, United States). Children under study were raised in optimal conditions that included living in a nonsmoking environment. The children were exclusively or predominantly breastfed for more than four months, fed complementary foods by six months, had continuation of breastfeeding until at least 12 months, were immunized and had access to and received required healthcare. Using the BMI-for-age z-scores, children were classified as overweight with a BMI between one and two standard deviations (SD) above the mean and obese with a BMI more than two SDs above the mean. Overweight in this population is classified as a BMI >84th percentile while a BMI >97.7th percentile classifies a child as obese [12].", "Continuous variables were normally distributed and reported using means and standard deviations. Categorical data were reported as whole numbers and percentages. Statistical comparisons were conducted using student t-tests for continuous data and chi-squared analysis for categorical data. Cohen's kappa statistic was calculated to determine the level of agreement between the growth references. A kappa greater than .80 signifies very good agreement, between .60-.80 a good level of agreement and that less than .50 little to moderate agreement [17]. A p-value <.05 was statistically significant. All data was analyzed using the Statistical Package for the Social Sciences (SPSS 15.0).", "The anthropometric measures and characteristics of the children are presented in Table 1. 
There were no significant differences in these variables. Figure 1 illustrates that for the overweight group, there were significant differences between CDC and WHO (19.1% and 26.7%) and between IOTF and WHO (18.2% and 26.7%). When children were classified as obese, there were significant differences across the three references (CDC 16.6%, IOTF 8.3%, WHO 11.3%).\nCharacteristics of study sample (n = 1026)\nBMI: Body Mass Index, WHO: The World Health Organization, IOTF: The International Obesity Task Force, CDC: The Centre for Disease Control and Prevention, SD: Standard Deviation\nPrevalence of overweight and obesity using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < .05 significant, ns: not significant\nIn Figure 2, for boys, there were significant differences in overweight between IOTF and WHO (16.4% vs. 28.5%) and between CDC and WHO (18.6% vs. 28.5%). A similar relationship was found for girls. In Figure 3, there were significant differences in the obese category across all three standards for both boys (CDC 17.3%, IOTF 7.8%, WHO 11.7%) and girls (CDC 15.7%, IOTF 8.8%, WHO 10.8%). To determine the level of agreement between growth references, we calculated the kappa statistic (Table 2).\nPrevalence of overweight in boys and girls using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < .05 significant, ns: not significant\nPrevalence of obesity in boys and girls using CDC, IOTF and WHO growth references. CDC: The Centres for Disease Control and Prevention, IOTF: The International Obesity Task Force, WHO: The World Health Organization. p < . 05 significant, ns: not significant\nComparison of agreement for categorizing weight categories between 1) CDC and WHO 2) IOTF and WHO and 3) CDC and IOTF\nWHO: The World Health Organization, IOTF: The International Obesity Task Force, CDC: The Centre for Disease Control and Prevention, p < .05 significant", "There is a high prevalence of childhood overweight or obesity in this Canadian preschool population, irregardless of growth reference used. Approximately one in three preschool children was overweight or obese. Similar statistics are being reported in other developed counties [18-27]. The short and long term physical health risks for children associated with excess weight include hypertension, hyperinsulinemia, glucose intolerance, type II diabetes, dyslipidemia, increased risk of early cardiac disease and psychosocial difficulties. Childhood obesity is a significant public health concern and accurate measurement and classification is crucial in determining the burden of the health problem. In the current study, the CDC reported a significantly higher prevalence of childhood obesity compared to the other growth references. The CDC reported more than double the prevalence of obesity compared to the IOTF and approximately 30% more obese than the WHO. Whether this is an accurate reflection or an overestimation of the prevalence of obesity in this population, it is difficult to tell. Conflicting recommendations are provided by various organizations that include the Canadian Pediatric Society [28] and the Clinical Practice Guidelines on the management and prevention of obesity in adults and children [13]. 
Although there was an overall good level of agreement between the different growth references there was some noticeable inconsistencies in classification which may lead to challenges in assessing the weight status of individual children in a clinical environment.\nSeveral studies are published comparing the prevalence estimates of overweight and obesity in children and youth using various growth references, and the findings are inconsistent [18-27]. One study recently published on a sample of Canadian children and youth compared the estimates of excess weight using the same three reference cut-points as in the current study and similar findings were reported. The prevalence estimate for the childhood obesity estimate was the same based on the WHO and CDC growth references (13%) but lower based on the IOTF cut-points (8%) [18]. Another study on children between four and six years of age living in the province of Alberta, compared prevalence estimates using CDC and IOTF cut-points. The authors reported that an estimated 13.8% and 11.4% of children were overweight and obese according to the CDC cut-offs, while an estimated 11.5% and 6.8% of children were classified as overweight and obese using the IOTF cut-offs. Similar to the current study, the CDC growth references classified almost twice as many children as obese compared to the IOTF method. The level of agreement between the two methods was .69 (p < .01), significantly lower then that in the current study [22].\nIn an Italian study on 258 preschool children three to six year of age, the prevalence of overweight and obesity were compared using the CDC and IOTF methods as well as local Italian growth references published by Luciano [23]. All three sets of growth references gave similar estimates of overweight in boys (CDC 16.10%, IOTF 12.90%, Luciano 14.5%) and girls (CDC 15.70%, IOTF 15.7%, Luciano 10%). These findings were not dissimilar to the current study's findings on overweight in boys (CDC 18.6%, IOTF 16.4%) and overweight in girls (CDC 19.6%, IOTF 20.2%). However in the Italian study, the use of the CDC reference led to a prevalence estimate of obesity in boys that was ten times that of the other references (CDC 10.5%, IOTF 0.8%, Luciano 0.8%). In girls the CDC estimate was also significantly higher compared to the other two references (CDC 11.90%, IOTF 6.7%, Luciano 3%), although not as large a difference as in the boys. Based on this study it appears that the reliability of the growth reference used may be affected by the underlying prevalence of childhood obesity in the population, as the Italian rates of childhood obesity tend to be much lower than those found in North America. Ethnic diversity will also have an impact. This provides further challenges when making international comparisons [23]. In contrast to the previous studies, a study recently conducted on 604 Spanish children 6 to 10 years of age, researchers reported that when using the WHO criteria, the combined prevalence of overweight and obesity was 39%, significantly higher than both the CDC estimate of 20% and the IOTF estimate of 17%. In this study, the CDC and IOTF demonstrated a high level of agreement (kappa >.80), however the level of agreement between the CDC and the IOTF and the WHO was poor (kappa < .40). 
In this study, the authors concluded that it was the WHO criteria that overestimated the prevalence of childhood overweight and obesity, not the CDC as seen in the current study [19].\nThe significant difference in prevalence estimates of childhood obesity produced by the three growth references make it difficult; to assess the extent of the problem and its associated health burden, to conduct research, to make population comparisons and to inform policy. The CDC and to a more limited extent the WHO references appear to allow for an earlier identification of a larger number of children affected by excess weight compared to the IOTF. If the CDC classifies children accurately than using this reference may prompt health professionals to provide primary prevention to a pediatric population earlier to help reduce the risk of associated health conditions [9]. Although the association between increasing BMI and increased health risk has been substantiated in adults [29] it is important to raise a concern that none of these growth references have been convincingly linked to the future development of adverse health outcomes in children.\nAn important strength of our study is that we directly measured children's heights and weights and did not rely on self reported survey data which may underestimate the prevalence of overweight and obesity. Our study limitations included a non random sample from one region in Canada. However, the prevalence estimates of overweight and obesity in the current study were very similar to a larger provincial study [5], providing confidence that our study findings are representative of this provincial population. The current research adds to the debate about the relative reliability and validity of the variant growth references used to monitor a child's growth and has implications for future research. It raises several research questions that still need to be answered. Have we determined the ideal definition of overweight and obesity in children so that prevention efforts can be initiated at the earliest opportunity? Are some of these references better suited to particular populations depending on the mix of ethnicity and race? Can we validate these growth references against a gold standard? Should references based on optimal growth be considered the gold standard? Are any of these standards associated with adverse health outcomes in children? It is clear that more research including longitudinal studies is needed in order to answer many of these questions.", "Childhood obesity is a public health problem and is associated with increased morbidity and mortality. Obesity tracks through the life cycle [2,30] suggesting that early identification and primary prevention is key to reversing and preventing the upward trend into adult obesity and its potential future burden of illness [31]. A consensus is urgently needed on the most valid and reliable growth reference to use to measure and monitor a child's growth for both clinical and research purposes.", "(NL): Newfoundland and Labrador; (BMI): body mass index; (CDC): Centre of Disease Control; (IOTF): International Obesity Task Force; (WHO); World Health Organization.", "The authors declare that they have no competing interests.", "LT was the principal investigator on the project and responsible for the intellectual conception and design of the study including the data analysis and interpretation of the data. 
LT was also the lead investigator on both the initial funding application the manuscript preparation and provided final approval for submission. LN contributed to the conception and design of the study including preparation of the funding application. LN also helped draft and revise the final manuscript. Both authors have approved the final version of the manuscript for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2431/11/21/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Radiographs and low field MRI (0.2T) as predictors of efficacy in a weight loss trial in obese women with knee osteoarthritis.
21356061
To study the predictive value of baseline radiographs and low-field (0.2T) MRI scans for the symptomatic outcome of clinically significant weight loss in obese patients with knee osteoarthritis.
BACKGROUND
In this study we hypothesize that imaging variables assessed with radiographs and MRI scans pre-treatment can predict the symptomatic changes following a recommended clinically significant weight reduction. Patients were recruited from the Department of Rheumatology, Frederiksberg Hospital, Denmark. Eligibility criteria were: age >18 years; primary osteoarthritis according to ACR; BMI > 28 kg/m2; motivation for weight loss. Subjects were randomly assigned to either intervention by low-energy diet (LED) for 8 weeks followed by another 24 weeks of dietary instruction or a control group. MRI scans and radiographs were scored for structural changes and these parameters were examined as independent predictors of changes in osteoarthritis symptoms after 32 weeks. The outcome assessor and statistician were blinded to group allocation.
METHODS
No significant correlations were found between imaging variables and changes in the Western Ontario and McMaster Universities Index of Osteoarthritis (Spearman's test, r < 0.33 and P > 0.07). Only the LED group achieved a weight loss, with a mean difference of 16.3 kg (95% CI: 13.4-19.2; P < 0.0001) compared to the control group. The total WOMAC index showed a significant difference favouring LED, with a group mean difference of -321.3 mm (95% CI: -577.5 to -65.1 mm; P = 0.01). No significant adverse events were reported.
RESULTS
Stage of joint destruction, assessed on either radiographs or low-field MRI (0.2T), does not preclude symptomatic relief following a clinically relevant weight loss in elderly obese female patients with knee osteoarthritis.
CONCLUSION
[ "Aged", "Diet, Reducing", "Disability Evaluation", "Female", "Humans", "Magnetic Resonance Imaging", "Middle Aged", "Obesity", "Osteoarthritis, Knee", "Predictive Value of Tests", "Radiography", "Treatment Outcome", "Weight Loss" ]
3065448
null
null
Methods
[SUBTITLE] Participants [SUBSECTION] Following approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practise. Eligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥28 kg/m2, expression of a clear, unequivocal motivation for weight loss; fluent in Danish language. Only pain medication was monitored in our project: All participants were asked not to change the previous medications for pain, i.e. maintain the same medication at same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics. Following approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practise. Eligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥28 kg/m2, expression of a clear, unequivocal motivation for weight loss; fluent in Danish language. Only pain medication was monitored in our project: All participants were asked not to change the previous medications for pain, i.e. maintain the same medication at same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics. [SUBTITLE] Imaging acquisition [SUBSECTION] Baseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil. The imaging protocol used was: A gradient echo scout followed by a saggital STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). 
Two successive T1-weighted 3 D gradient-echo sequences were acquired in the axial and saggital plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm 3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160 mm, acquisition time 3 min 20 s with two signals acquired). Finally a saggital T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s). Bi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv corresponding to 0.2% of the annual background radiation on earth (average background dose for humans are 2.4 mSv annually. Baseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil. The imaging protocol used was: A gradient echo scout followed by a saggital STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). Two successive T1-weighted 3 D gradient-echo sequences were acquired in the axial and saggital plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm 3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160 mm, acquisition time 3 min 20 s with two signals acquired). Finally a saggital T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s). Bi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv corresponding to 0.2% of the annual background radiation on earth (average background dose for humans are 2.4 mSv annually. [SUBTITLE] Imaging evaluation [SUBSECTION] MRI scans were scored separately for four structural parameters and summed as a "Total MRI Score" to see if this construct would perform better as an imaging biomarker. Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount. 
Cartilage abnormalities were assessed using the T2* and the 3 D T1 weighted Gradient echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. [22], and the specific grades were as follows; grade 0, normal cartilage; grade 1, focal blistering and an intra-cartilaginous area of low signal intensity with an intact surface; grade 2, irregularities on the surface or bottom and a < 50% loss of thickness; grade 3, deep ulceration, with a > 50% loss of thickness; grade 4, full-thickness chondral wear, with exposure of the subchondral bone. BMLs were defined as poorly marginated areas of increased signal intensity in the subchondral bone on the STIR images, and they were scored according to the description by Torres et al. [23]. The grades were defined as follows; grade 0, normal; grade 1, < 25% of the chamber, grade 2, 25-50% of the chamber and grade 3, > 50% of the chamber. The degree of synovitis was scored according to Rhodes et al. on a scale ranging from 0-3 where 0 = normal, 1 = diffuse, even thickening, 2 = nodular thickening and 3 = gross nodular thickening [24]. The amount of effusion was graded from 0-3 where 0 = physiological amount, 1 = small amount, in the retropatellar space, 2 = moderate amount, slight convexity of the suprapatellar bursa and 3 = large, capsular distension with bulging of the extensor retinaculum [17,25]. Maximum global score was 12 for cartilage abnormalities and 9 for BMLs, effusion and synovitis. Minimum score was 0 for all assessed structural parameters. The radiographs analysed using the Kellgren Lawrence scoring method (K/L), as this is a recommended and reliable method for baseline assessments of KOA using the fixed flexion protocol with antero-posterior and lateral radiographs [11]. One experienced investigator (MB), who was blinded to randomization, analyzed all radiographs and MRI scans in a random order. MRI scans were scored separately for four structural parameters and summed as a "Total MRI Score" to see if this construct would perform better as an imaging biomarker. Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount. Cartilage abnormalities were assessed using the T2* and the 3 D T1 weighted Gradient echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. [22], and the specific grades were as follows; grade 0, normal cartilage; grade 1, focal blistering and an intra-cartilaginous area of low signal intensity with an intact surface; grade 2, irregularities on the surface or bottom and a < 50% loss of thickness; grade 3, deep ulceration, with a > 50% loss of thickness; grade 4, full-thickness chondral wear, with exposure of the subchondral bone. BMLs were defined as poorly marginated areas of increased signal intensity in the subchondral bone on the STIR images, and they were scored according to the description by Torres et al. [23]. The grades were defined as follows; grade 0, normal; grade 1, < 25% of the chamber, grade 2, 25-50% of the chamber and grade 3, > 50% of the chamber. The degree of synovitis was scored according to Rhodes et al. on a scale ranging from 0-3 where 0 = normal, 1 = diffuse, even thickening, 2 = nodular thickening and 3 = gross nodular thickening [24]. 
[SUBTITLE] Interventions [SUBSECTION] Subjects were randomly assigned to either a control group or an intervention group that was treated with a dietary regime. This consisted of a low-energy diet (LED) for eight weeks followed by 24 weeks of a conventional hypo-energetic, high-protein diet. As previously described [5], the intervention diet consisted of nutrition powder (Speasy, Dansk Droge A/S) dissolved in water and taken as six daily meals, giving the patient 3.4 MJ/day. This fulfilled the recommendations for daily intake of high-quality protein [26]: 37 energy percent (E%) from soy protein providing the essential amino acids, 47 E% from carbohydrate, 16 E% from vegetable fat (primarily from rapeseed oil), and fibres from oat bran (15 g/day). The LED group received nutritional instruction and behavioural therapy from an experienced dietician at weekly sessions (1.5 h/week) throughout the eight weeks. This was done to reinforce and continuously stimulate the patients' intention to lose weight, and to promote a high degree of compliance. Patients in the control group attended a thorough two-hour session at baseline (given by the same dietician who treated the LED group). The patients were given nutritional advice and recommended ordinary foods in amounts that would provide approximately 5 MJ/day. After this initial session, all patients in the control group received a booklet with ideas for a diet plan and a variety of good advice for reducing body weight. Finally, the subjects in the control group were put on a waiting list for later recall to a dietary plan similar to that of the intervention group. The follow-up visit was at t = 32 weeks.
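As a rough check of the stated macronutrient split, the following sketch converts the 3.4 MJ/day LED and its energy percentages into approximate gram amounts; the kJ-per-gram conversion factors are standard approximations assumed here, not values reported in the study.

```python
# Back-of-the-envelope check of the LED macronutrient split described above.
# The 17/17/37 kJ per gram conversion factors are Atwater-type approximations
# and are an assumption of this sketch, not figures taken from the study.

DAILY_ENERGY_KJ = 3400          # 3.4 MJ/day on the low-energy diet
ENERGY_PERCENT = {"protein": 37, "carbohydrate": 47, "fat": 16}
KJ_PER_GRAM = {"protein": 17, "carbohydrate": 17, "fat": 37}

for nutrient, e_pct in ENERGY_PERCENT.items():
    energy_kj = DAILY_ENERGY_KJ * e_pct / 100
    grams = energy_kj / KJ_PER_GRAM[nutrient]
    print(f"{nutrient}: {energy_kj:.0f} kJ/day ~ {grams:.0f} g/day")

# protein: 1258 kJ/day ~ 74 g/day
# carbohydrate: 1598 kJ/day ~ 94 g/day
# fat: 544 kJ/day ~ 15 g/day
```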
[SUBTITLE] Biometric examinations [SUBSECTION] At baseline and after half a year (t = 32 weeks), the body weight (without coats, shoes, etc.) of all patients was recorded on a decimal scale (TANITA BW-800, 'Frederiksberg Vægtfabrik', Copenhagen, Denmark).
[SUBTITLE] Symptom assessment [SUBSECTION] The patient-important outcome, the patients' experience of KOA symptoms, was assessed with the Western Ontario and McMaster Universities' (WOMAC) OA index, a validated disease-specific questionnaire comprising three self-reported items: five pain-related questions of 100 mm VAS each (500 mm VAS in total); seventeen disability-related questions of 100 mm VAS each (1700 mm VAS in total); and two stiffness-related questions of 100 mm VAS each (200 mm VAS in total). The patients mark their present level of symptoms, within each of the items described above, by placing a vertical line on a 100 mm horizontal line. The total WOMAC is a measure of the global level of KOA symptoms, with 0 mm representing no disease and 2400 mm representing the worst possible state of disease [27]. This was done at baseline and again at follow-up (t = 32 weeks).
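A minimal sketch of how the WOMAC VAS items described above aggregate into subscale and total scores is given below; the question counts and the 0-100 mm range per question come from the text, while the function names and the example responses are purely illustrative.

```python
# Minimal sketch of the WOMAC aggregation described above (illustrative only).

WOMAC_ITEMS = {"pain": 5, "disability": 17, "stiffness": 2}  # questions per subscale

def womac_scores(responses_mm: dict[str, list[float]]) -> dict[str, float]:
    """Sum 0-100 mm VAS responses into subscale scores and the total (0-2400 mm)."""
    scores = {}
    for subscale, n_questions in WOMAC_ITEMS.items():
        values = responses_mm[subscale]
        assert len(values) == n_questions
        assert all(0 <= v <= 100 for v in values)
        scores[subscale] = sum(values)
    scores["total"] = sum(scores[s] for s in WOMAC_ITEMS)
    return scores

# Example patient (arbitrary numbers, for illustration only):
example = womac_scores({
    "pain": [60, 55, 40, 70, 45],
    "disability": [50] * 17,
    "stiffness": [65, 55],
})
print(example)  # {'pain': 270, 'disability': 850, 'stiffness': 120, 'total': 1240}
```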
[SUBTITLE] Randomization, allocation concealment, implementation and blinding [SUBSECTION] A method of restricted randomization called minimization was used, stratifying patients according to (i) gender, (ii) BMI and (iii) age. This was done for every sixteen patients included, and ensured homogeneity between the groups [28]. Each randomization list was drawn up by the statistician and given to the secretariat. In order to implement the random allocation, the sequence was concealed until interventions were assigned: the secretariat informed the patients about when to meet with the dietician (i.e. only implicitly referring to group allocation). The code was not revealed to the researchers before data collection, imaging assessments and laboratory analyses were complete. The statistician and the assessor of radiographs and MRI scans were blinded.
[SUBTITLE] Statistical methods [SUBSECTION] Clinical outcomes were analyzed as differences from baseline values (x32 - x0), and weight loss (kg) was also analyzed as a relative measure, i.e. the percentage change from baseline ((x32 - x0)/x0 × 100%). We performed a distribution-free Spearman test of rank correlation when examining the possible relationship between imaging variables and clinical outcomes of the dietary interventions. Further analyses of significant results were carried out according to the data type. The Spearman correlation coefficient was interpreted as follows: < 0.3, none; 0.31-0.5, weak; 0.51-0.7, strong; 0.71-0.9, very strong; and > 0.9, excellent. A P-value less than 0.05 (two-tailed) or a 95% confidence interval (CI) not including the null hypothesis was regarded as statistically significant. All analyses were performed with SAS version 9.1 for Windows (Chicago, IL, USA).
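For readers who want to reproduce the flavour of this analysis outside SAS, the sketch below (Python with SciPy, assumed here rather than taken from the study) computes the relative change from baseline and a Spearman rank correlation, and applies the interpretation bands quoted above to invented toy data.

```python
# Illustrative re-implementation of the analysis rules described above
# (not the authors' SAS code): relative change from baseline and Spearman
# rank correlation between a baseline imaging score and symptom change.
from scipy.stats import spearmanr

def percent_change(x0: float, x32: float) -> float:
    """Percentage change from baseline, ((x32 - x0) / x0) * 100%."""
    return (x32 - x0) / x0 * 100.0

def interpret_rho(rho: float) -> str:
    """Interpretation bands for |rho| as given in the text."""
    r = abs(rho)
    if r <= 0.3:
        return "none"
    if r <= 0.5:
        return "weak"
    if r <= 0.7:
        return "strong"
    if r <= 0.9:
        return "very strong"
    return "excellent"

# Toy data (illustrative only): baseline imaging score vs change in WOMAC pain.
baseline_score = [2, 5, 7, 1, 9, 4, 6, 3]
delta_womac_pain = [-120, -80, -30, -150, -10, -90, -60, -110]

rho, p = spearmanr(baseline_score, delta_womac_pain)
print(f"rho = {rho:.2f}, p = {p:.3f}, interpretation: {interpret_rho(rho)}")
print(f"relative weight change example: {percent_change(100.0, 84.4):.1f}%")  # -15.6%
```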
[ "Background", "Participants", "Imaging acquisition", "Imaging evaluation", "Interventions", "Biometric examinations", "Symptom assessment", "Randomization, allocation concealment, implementation and blinding", "Statistical methods", "Results", "Population characteristics", "Assessed imaging variables as predictors of symptomatic changes following weight loss", "Results regarding the weight loss program", "Adverse events", "Discussion", "Conclusion", "Competing intersts", "Authors' contributions", "Pre-publication history" ]
[ "Knee osteoarthritis (KOA) is a multi factorial disease characterized by joint-stiffness, pain and loss of function [1]. With an increasing prevalence of elderly and obese citizens, the problems of KOA is likely to escalate in the future [2-4], and as new potential treatments arise, there is a need to examine MRI evaluated structural changes in clinical trials.\nA drug/treatment that can efficiently halt the degenerative nature of KOA (DMOAD) has not yet been presented, but in obese KOA patients recent studies have shown a direct relationship between weight loss and the level of symptomatic improvement [5]. This result supports earlier epidemiological findings that weight loss reduces the risk of development and progression of KOA, and that KOA related symptoms tend to worsen in obese patients [6-9]. As a consequence, overweight KOA patients are now recommended to commence weight reduction as a first line therapy [3,4]. Which diet to choose is still debated as there is no evidence to support one diet composition over others; the single most important factor is to establish a continuous energy deficit [10].\nConventional radiography is the simplest and least expensive imaging method for assessing KOA, and the K/L score remains the most widely applied system when diagnosing KOA [11,12] in clinical trials. Both low- and high field MRI provides additional information to radiographs, as these modalities have a unique ability to image all knee joint related structures [13].\nThe MRI modality withholds a possibility for semi-quantitative scoring of synovial thickening, joint effusion, bone marrow lesions (BMLs) and cartilage abnormalities. These structures are essential because the synovium, joint capsule and subchondral bone are highly innervated and appear to represent some of the main origins of KOA-related pain, whereas the cartilage status is suggested to be more a marker of joint strain and thereby a surrogate marker for KOA symptoms [14]. All of these structural changes have been shown to correlate with clinical symptoms and/or progression of disease [15-20], and they therefore seem relevant to examine in this intervention study.\nIn this study we hypothesize that imaging variables assessed with radiographs and MRI scans pre-treatment can predict the symptomatic changes following a recommended clinically significant weight reduction [5].", "Following approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. 
The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practise.\nEligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥28 kg/m2, expression of a clear, unequivocal motivation for weight loss; fluent in Danish language.\nOnly pain medication was monitored in our project: All participants were asked not to change the previous medications for pain, i.e. maintain the same medication at same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics.", "Baseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil.\nThe imaging protocol used was:\nA gradient echo scout followed by a saggital STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). Two successive T1-weighted 3 D gradient-echo sequences were acquired in the axial and saggital plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm 3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160 mm, acquisition time 3 min 20 s with two signals acquired). Finally a saggital T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s).\nBi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv corresponding to 0.2% of the annual background radiation on earth (average background dose for humans are 2.4 mSv annually.", "MRI scans were scored separately for four structural parameters and summed as a \"Total MRI Score\" to see if this construct would perform better as an imaging biomarker. Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount.\nCartilage abnormalities were assessed using the T2* and the 3 D T1 weighted Gradient echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. 
[SUBTITLE] Results [SUBSECTION]
[SUBTITLE] Population characteristics [SUBSECTION] 32 patients were invited to participate; 31 of these were interested and 30 patients had baseline measurements performed. The patient not randomized was excluded owing to withdrawal of consent before the randomization procedure. The 30 enrolled patients were randomly assigned to either the LED or the conventional hypo-energetic diet. Following randomization of 15 patients to each group, all patients completed the trial, and we subsequently analyzed the intention-to-treat (ITT) population based on these 30 patients (Figure 1: trial profile). All patients were women, the average age was 62 years (SD 6.8) and the average BMI was 37 kg/m² (SD 6.0) (see Table 1, characteristics of all participants; table footnotes: 1, sum of visual analogue scale scores, i.e. WOMAC pain index of 500 mm, disability index of 1700 mm, stiffness index of 200 mm and total index of 2400 mm, see the text under Symptom assessment; 2, non-Gaussian distribution, presented as median (interquartile range)). At baseline we registered data regarding blood analyses and patient-reported outcomes (PROs); there were no statistically significant differences between patients in the LED and control groups at baseline (data not shown). Use of pain medication was also monitored by PROs, and the data revealed unaltered use during the trial period. Radiographs were scored using the K/L score, and the three joint compartments were scored separately in order to assess whether KOA at specific locations had any influence on our hypothesis; there were no group differences (data not shown). 63% of the patients had a medial K/L score ≥ 2 and 13% had a K/L score of 0 (no group differences). The assessment of MRI scans revealed that 37% of the patients had a BML score ≥ 1 for all three compartments and that 93% of the patients had some degree of cartilage abnormalities (score ≥ 1). For effusion and synovitis, 30% and 40% of the patients, respectively, had a score of zero.
[SUBTITLE] Assessed imaging variables as predictors of symptomatic changes following weight loss [SUBSECTION] Imaging variables as predictors of symptomatic outcome were examined by Spearman correlation analysis (Table 2: correlation of baseline imaging variables with change in symptoms; footnote 1, measured by the WOMAC index). The analysis did not show a significant correlation between any imaging variable and the following outcomes: Δ WOMAC pain (mm) and Δ WOMAC disability (mm) (r ≤ 0.33; p > 0.05).
[SUBTITLE] Results regarding the weight loss program [SUBSECTION] Results from the intention-to-treat population are displayed in Table 3 (results based on changes for the whole intention-to-treat population; footnote 1, sum of visual analogue scale scores). The LED and control groups changed their mean body weight (SE) by -15.6 (3.6)% and 0.4 (3.2)%, respectively (data not shown). In terms of responders, 40% vs. 13% of the patients in the LED and control groups, respectively, achieved a pain reduction of more than 50% in the WOMAC pain index, and 33% vs. 7% of the patients in the LED and control groups achieved a > 50% reduction in the WOMAC total index. The WOMAC disability index showed improvement in the LED group compared with the control group, with a mean difference of -266 mm (95% CI: -468.9 to -63.1; p < 0.01) (data not shown).
[SUBTITLE] Adverse events [SUBSECTION] No significant adverse events were reported.
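The responder analysis reported above can be expressed compactly; the sketch below illustrates the > 50% reduction criterion on invented numbers and is not a re-analysis of the study data.

```python
# Illustration of the responder definition used above: a responder is a patient
# whose WOMAC (pain or total) index fell by more than 50% from baseline.
# The numbers below are invented for illustration; they are not study data.

def is_responder(baseline_mm: float, followup_mm: float, threshold: float = 0.5) -> bool:
    """True if the score dropped by more than `threshold` (50%) of baseline."""
    return baseline_mm > 0 and (baseline_mm - followup_mm) / baseline_mm > threshold

womac_pain_pairs = [(300, 120), (250, 200), (400, 150), (350, 340)]  # (baseline, week 32)
rate = sum(is_responder(b, f) for b, f in womac_pain_pairs) / len(womac_pain_pairs)
print(f"responder rate: {rate:.0%}")  # 50%
```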
[SUBTITLE] Discussion [SUBSECTION] We found that KOA-related structural changes seen on radiographs and MRI scans at baseline did not rule out improvement of symptoms following a clinically significant weight loss and could not predict the symptomatic outcome of the diet intervention in this elderly sample of obese female KOA patients. This result was found in an intervention group in which 90% of the patients experienced a significant weight reduction (> 10%), and 33% of the patients experienced a > 50% reduction in their overall symptoms of KOA. The results correspond to prior studies investigating short-term effects of weight loss and the long-term outcome of total knee joint replacement [5,29]. We believe that these findings could be valuable for the future design of trials examining the benefit of weight loss in KOA patients, as they indicate that none of the examined structural parameters, individually or combined, could predict the symptomatic outcome of a significant weight loss in obese women with KOA. A prior study investigating synovitis at baseline and clinical symptoms after two months found no association [30], while Hill et al. found a change in synovitis to be associated with a change in symptoms of pain [18]. Furthermore, several cross-sectional studies have investigated MRI-assessed items in relation to, for example, clinical symptoms, and in a meta-analysis BMLs and effusion/synovitis were found to be associated with knee pain [31]. The study has several limitations. It includes only 30 patients; secondly, radiographs and low-field MRI are not the most advanced diagnostic tools for imaging assessment of osteoarthritis. A follow-up period of 32 weeks might also influence our findings. The main disadvantage of low-field MRI is the poorer image quality due to the low signal-to-noise ratio, which can only be compensated for by increasing the number of excitations, the slice thickness and/or the field of view, or by reducing the matrix and/or the receiver bandwidth.
All of these measures would increase scan time and/or decrease the in-plane resolution. Smaller cartilage abnormalities are not as well detected by low-field MRI as by medium- or high-field MRI [32], but unfortunately a recent review that could have brought new insight to the subject could not complete a meta-analysis because of study heterogeneity [33]. We applied a near-isotropic, sub-millimetre 3D GRE sequence and assessed images in several planes in order to achieve the highest possible diagnostic accuracy [34,35]. Finally, we did not include analyses of multiple different scoring methods for radiographs and MRI scans, but the current approach was chosen inspired by several previous publications investigating this topic [17,22-25,36]. The newest evidence supports this approach, as BMLs, synovitis and effusion seem to be the most important MRI-assessed items likely to be associated with knee pain in KOA [31].
[SUBTITLE] Conclusion [SUBSECTION] In conclusion, the present study reveals that baseline joint status assessed by low-field MRI scans (0.2 T) and bi-plane standing radiographs did not influence the long-term improvement in the WOMAC disability and WOMAC total indexes following a clinically relevant weight loss. The present study also demonstrated that an initial diet intervention program was able to induce a sustainable weight loss in KOA patients over a period of half a year (32 weeks).
[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.
[SUBTITLE] Authors' contributions [SUBSECTION] HRG made all the analyses and interpretation of data, drafted the manuscript and approved the final version. MB contributed to the conception and design, analysed all MRI scans and radiographs, revised the manuscript several times and approved the final version. RC contributed to the conception and design, especially the statistics, revised the manuscript and approved the final version. AA contributed the overall design idea, revised the manuscript and approved the final version. HB contributed to the conception and design, revised the manuscript and approved the final version.
[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2474/12/56/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Participants", "Imaging acquisition", "Imaging evaluation", "Interventions", "Biometric examinations", "Symptom assessment", "Randomization, allocation concealment, implementation and blinding", "Statistical methods", "Results", "Population characteristics", "Assessed imaging variables as predictors of symptomatic changes following weight loss", "Results regarding the weight loss program", "Adverse events", "Discussion", "Conclusion", "Competing intersts", "Authors' contributions", "Pre-publication history" ]
[ "Knee osteoarthritis (KOA) is a multi factorial disease characterized by joint-stiffness, pain and loss of function [1]. With an increasing prevalence of elderly and obese citizens, the problems of KOA is likely to escalate in the future [2-4], and as new potential treatments arise, there is a need to examine MRI evaluated structural changes in clinical trials.\nA drug/treatment that can efficiently halt the degenerative nature of KOA (DMOAD) has not yet been presented, but in obese KOA patients recent studies have shown a direct relationship between weight loss and the level of symptomatic improvement [5]. This result supports earlier epidemiological findings that weight loss reduces the risk of development and progression of KOA, and that KOA related symptoms tend to worsen in obese patients [6-9]. As a consequence, overweight KOA patients are now recommended to commence weight reduction as a first line therapy [3,4]. Which diet to choose is still debated as there is no evidence to support one diet composition over others; the single most important factor is to establish a continuous energy deficit [10].\nConventional radiography is the simplest and least expensive imaging method for assessing KOA, and the K/L score remains the most widely applied system when diagnosing KOA [11,12] in clinical trials. Both low- and high field MRI provides additional information to radiographs, as these modalities have a unique ability to image all knee joint related structures [13].\nThe MRI modality withholds a possibility for semi-quantitative scoring of synovial thickening, joint effusion, bone marrow lesions (BMLs) and cartilage abnormalities. These structures are essential because the synovium, joint capsule and subchondral bone are highly innervated and appear to represent some of the main origins of KOA-related pain, whereas the cartilage status is suggested to be more a marker of joint strain and thereby a surrogate marker for KOA symptoms [14]. All of these structural changes have been shown to correlate with clinical symptoms and/or progression of disease [15-20], and they therefore seem relevant to examine in this intervention study.\nIn this study we hypothesize that imaging variables assessed with radiographs and MRI scans pre-treatment can predict the symptomatic changes following a recommended clinically significant weight reduction [5].", "[SUBTITLE] Participants [SUBSECTION] Following approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. 
The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practise.\nEligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥28 kg/m2, expression of a clear, unequivocal motivation for weight loss; fluent in Danish language.\nOnly pain medication was monitored in our project: All participants were asked not to change the previous medications for pain, i.e. maintain the same medication at same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics.\nFollowing approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practise.\nEligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥28 kg/m2, expression of a clear, unequivocal motivation for weight loss; fluent in Danish language.\nOnly pain medication was monitored in our project: All participants were asked not to change the previous medications for pain, i.e. maintain the same medication at same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics.\n[SUBTITLE] Imaging acquisition [SUBSECTION] Baseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil.\nThe imaging protocol used was:\nA gradient echo scout followed by a saggital STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). Two successive T1-weighted 3 D gradient-echo sequences were acquired in the axial and saggital plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm 3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160 mm, acquisition time 3 min 20 s with two signals acquired). 
Finally a saggital T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s).\nBi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv corresponding to 0.2% of the annual background radiation on earth (average background dose for humans are 2.4 mSv annually.\nBaseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil.\nThe imaging protocol used was:\nA gradient echo scout followed by a saggital STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). Two successive T1-weighted 3 D gradient-echo sequences were acquired in the axial and saggital plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm 3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160 mm, acquisition time 3 min 20 s with two signals acquired). Finally a saggital T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s).\nBi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv corresponding to 0.2% of the annual background radiation on earth (average background dose for humans are 2.4 mSv annually.\n[SUBTITLE] Imaging evaluation [SUBSECTION] MRI scans were scored separately for four structural parameters and summed as a \"Total MRI Score\" to see if this construct would perform better as an imaging biomarker. Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount.\nCartilage abnormalities were assessed using the T2* and the 3 D T1 weighted Gradient echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. 
[22], and the specific grades were as follows; grade 0, normal cartilage; grade 1, focal blistering and an intra-cartilaginous area of low signal intensity with an intact surface; grade 2, irregularities on the surface or bottom and a < 50% loss of thickness; grade 3, deep ulceration, with a > 50% loss of thickness; grade 4, full-thickness chondral wear, with exposure of the subchondral bone. BMLs were defined as poorly marginated areas of increased signal intensity in the subchondral bone on the STIR images, and they were scored according to the description by Torres et al. [23]. The grades were defined as follows; grade 0, normal; grade 1, < 25% of the chamber, grade 2, 25-50% of the chamber and grade 3, > 50% of the chamber. The degree of synovitis was scored according to Rhodes et al. on a scale ranging from 0-3 where 0 = normal, 1 = diffuse, even thickening, 2 = nodular thickening and 3 = gross nodular thickening [24]. The amount of effusion was graded from 0-3 where 0 = physiological amount, 1 = small amount, in the retropatellar space, 2 = moderate amount, slight convexity of the suprapatellar bursa and 3 = large, capsular distension with bulging of the extensor retinaculum [17,25].\nMaximum global score was 12 for cartilage abnormalities and 9 for BMLs, effusion and synovitis. Minimum score was 0 for all assessed structural parameters.\nThe radiographs analysed using the Kellgren Lawrence scoring method (K/L), as this is a recommended and reliable method for baseline assessments of KOA using the fixed flexion protocol with antero-posterior and lateral radiographs [11]. One experienced investigator (MB), who was blinded to randomization, analyzed all radiographs and MRI scans in a random order.\nMRI scans were scored separately for four structural parameters and summed as a \"Total MRI Score\" to see if this construct would perform better as an imaging biomarker. Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount.\nCartilage abnormalities were assessed using the T2* and the 3 D T1 weighted Gradient echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. [22], and the specific grades were as follows; grade 0, normal cartilage; grade 1, focal blistering and an intra-cartilaginous area of low signal intensity with an intact surface; grade 2, irregularities on the surface or bottom and a < 50% loss of thickness; grade 3, deep ulceration, with a > 50% loss of thickness; grade 4, full-thickness chondral wear, with exposure of the subchondral bone. BMLs were defined as poorly marginated areas of increased signal intensity in the subchondral bone on the STIR images, and they were scored according to the description by Torres et al. [23]. The grades were defined as follows; grade 0, normal; grade 1, < 25% of the chamber, grade 2, 25-50% of the chamber and grade 3, > 50% of the chamber. The degree of synovitis was scored according to Rhodes et al. on a scale ranging from 0-3 where 0 = normal, 1 = diffuse, even thickening, 2 = nodular thickening and 3 = gross nodular thickening [24]. The amount of effusion was graded from 0-3 where 0 = physiological amount, 1 = small amount, in the retropatellar space, 2 = moderate amount, slight convexity of the suprapatellar bursa and 3 = large, capsular distension with bulging of the extensor retinaculum [17,25].\nMaximum global score was 12 for cartilage abnormalities and 9 for BMLs, effusion and synovitis. 
Minimum score was 0 for all assessed structural parameters.\nThe radiographs analysed using the Kellgren Lawrence scoring method (K/L), as this is a recommended and reliable method for baseline assessments of KOA using the fixed flexion protocol with antero-posterior and lateral radiographs [11]. One experienced investigator (MB), who was blinded to randomization, analyzed all radiographs and MRI scans in a random order.\n[SUBTITLE] Interventions [SUBSECTION] Subjects were randomly assigned to either a control group or an intervention group who was treated with a dietary regime. This consisted of a low-energy diet (LED) for eight weeks followed by 24 weeks of conventional hypo-energetic and high protein diet. As previously described [5] the intervention-diet consisted of nutrition powder (Speasy, Dansk Droge A/S) dissolved in water and it was taken as six daily meals, giving the patient 3.4 MJ/day. This fulfilled the recommendations of daily intake of high quality protein [26]; 37 energy percent (E%) from soy protein providing the essential amino acids, 47 E% from carbohydrate, 16 E% from vegetable fat (primarily from rapeseed oil), and fibres from oat-bran (15 g/day). The LED group received nutritional instruction and behavioural therapy by an experienced dietician at weekly sessions (1.5 h/week) throughout the eight weeks. This was done to reinforce and continuously stimulate the patients' intention to loose weight, and to promote a high degree of compliance.\nPatients in the control group attended a thorough two-hour session at baseline (by the same dietician who treated the LED group). The patients were given nutritional advice and recommended ordinary foods in amounts that would provide the patients with approximately 5 MJ/day. After this initial session all the patients in the control group received ideas for a diet plan in a booklet providing the participants with a variety of 'good-advices' when trying to reduce body weight. Finally the subjects in the control group were put on a waiting list for later recall to a similar dietary plan as in the intervention group. The follow-up visit was at t = 32 weeks.\nSubjects were randomly assigned to either a control group or an intervention group who was treated with a dietary regime. This consisted of a low-energy diet (LED) for eight weeks followed by 24 weeks of conventional hypo-energetic and high protein diet. As previously described [5] the intervention-diet consisted of nutrition powder (Speasy, Dansk Droge A/S) dissolved in water and it was taken as six daily meals, giving the patient 3.4 MJ/day. This fulfilled the recommendations of daily intake of high quality protein [26]; 37 energy percent (E%) from soy protein providing the essential amino acids, 47 E% from carbohydrate, 16 E% from vegetable fat (primarily from rapeseed oil), and fibres from oat-bran (15 g/day). The LED group received nutritional instruction and behavioural therapy by an experienced dietician at weekly sessions (1.5 h/week) throughout the eight weeks. This was done to reinforce and continuously stimulate the patients' intention to loose weight, and to promote a high degree of compliance.\nPatients in the control group attended a thorough two-hour session at baseline (by the same dietician who treated the LED group). The patients were given nutritional advice and recommended ordinary foods in amounts that would provide the patients with approximately 5 MJ/day. 
After this initial session all the patients in the control group received ideas for a diet plan in a booklet providing the participants with a variety of 'good-advices' when trying to reduce body weight. Finally the subjects in the control group were put on a waiting list for later recall to a similar dietary plan as in the intervention group. The follow-up visit was at t = 32 weeks.\n[SUBTITLE] Biometric examinations [SUBSECTION] At baseline and after half a year (t = 32 weeks) the body weights (without coats, shoes etc.) of all patients' were recorded on a decimal scale (TANITA BW-800, 'Frederiksberg Vægtfabrik', Copenhagen, Denmark).\nAt baseline and after half a year (t = 32 weeks) the body weights (without coats, shoes etc.) of all patients' were recorded on a decimal scale (TANITA BW-800, 'Frederiksberg Vægtfabrik', Copenhagen, Denmark).\n[SUBTITLE] Symptom assessment [SUBSECTION] The patient important outcome being their experience of KOA symptoms were assessed by the Western Ontario and McMaster Universities' (WOMAC) OA index, a validated disease-specific questionnaire comprised of three self-reported items; five pain-related questions of each 100 mm VAS (500 mm VAS in total); seventeen disability-related questions of each 100 mm VAS (1700 mm VAS in total); two stiffness-related questions of each 100 mm VAS (200 mm VAS in total). The patients mark their present level of symptoms, within each of the above described items, by placing a vertical line on a 100 mm horizontal line. The total WOMAC is a measure of the global KOA level of symptoms; 0 mm WOMAC representing no disease, and 2400 mm WOMAC representing worst possible state of disease [27]. This was done at baseline and again at follow-up (t = 32 weeks).\nThe patient important outcome being their experience of KOA symptoms were assessed by the Western Ontario and McMaster Universities' (WOMAC) OA index, a validated disease-specific questionnaire comprised of three self-reported items; five pain-related questions of each 100 mm VAS (500 mm VAS in total); seventeen disability-related questions of each 100 mm VAS (1700 mm VAS in total); two stiffness-related questions of each 100 mm VAS (200 mm VAS in total). The patients mark their present level of symptoms, within each of the above described items, by placing a vertical line on a 100 mm horizontal line. The total WOMAC is a measure of the global KOA level of symptoms; 0 mm WOMAC representing no disease, and 2400 mm WOMAC representing worst possible state of disease [27]. This was done at baseline and again at follow-up (t = 32 weeks).\n[SUBTITLE] Randomization, allocation concealment, implementation and blinding [SUBSECTION] A method of restricted randomization called minimization was used with stratifying patients according to (i) gender, (ii) BMI and (iii) age. This was done for every sixteen patients included, and ensured homogeneity between the groups [28].\nEach randomization list was drawn up by the statistician and given to the secretariat. In order to implement the random allocation, the sequence was concealed until interventions were assigned: The secretariat informed the patients about when to meet with the dietician (i.e. only implicitly referring to group allocation). 
The code was not revealed to the researchers before data collection, imaging assessments and laboratory analyses were complete.\nThe statistician and the assessor of radiographs and MRI scans were blinded.\nA method of restricted randomization called minimization was used with stratifying patients according to (i) gender, (ii) BMI and (iii) age. This was done for every sixteen patients included, and ensured homogeneity between the groups [28].\nEach randomization list was drawn up by the statistician and given to the secretariat. In order to implement the random allocation, the sequence was concealed until interventions were assigned: The secretariat informed the patients about when to meet with the dietician (i.e. only implicitly referring to group allocation). The code was not revealed to the researchers before data collection, imaging assessments and laboratory analyses were complete.\nThe statistician and the assessor of radiographs and MRI scans were blinded.\n[SUBTITLE] Statistical methods [SUBSECTION] Clinical outcomes were analyzed as differences from baseline values (x32 - x0), and weight loss (kg) was also analyzed as a relative measure, being the percentage change from baseline ((x32 - x0)/x0 *100%). We performed a distribution-free Spearman's test of rank correlation when examining the possible relationship between imaging variables and clinical outcomes of the dietary interventions. Further analyses on significant results were carried out according to the data type. The Spearman correlation coefficient was interpreted as follows: < 0.3: none; 0.31-0.5: weak; 0.51-0.7: strong; 0.71-0.9: very strong and > 0.9: excellent. A P-value less than 0.05 (two-tailed) or a 95% confidence interval (CI) not including the null hypothesis was regarded as statistically significant. All the analyses were performed on SAS version 9.1 for Windows (Chicago, IL, USA).\nClinical outcomes were analyzed as differences from baseline values (x32 - x0), and weight loss (kg) was also analyzed as a relative measure, being the percentage change from baseline ((x32 - x0)/x0 *100%). We performed a distribution-free Spearman's test of rank correlation when examining the possible relationship between imaging variables and clinical outcomes of the dietary interventions. Further analyses on significant results were carried out according to the data type. The Spearman correlation coefficient was interpreted as follows: < 0.3: none; 0.31-0.5: weak; 0.51-0.7: strong; 0.71-0.9: very strong and > 0.9: excellent. A P-value less than 0.05 (two-tailed) or a 95% confidence interval (CI) not including the null hypothesis was regarded as statistically significant. All the analyses were performed on SAS version 9.1 for Windows (Chicago, IL, USA).", "Following approval from the local ethical committee ((KF) 01-104/02 and 11-149/03), female patients were recruited from the outpatients' clinic, Department of Rheumatology, Frederiksberg Hospital, Denmark. They were all invited from the waiting list of the first diet study from the Parker Institute (Christensen, 2005 86/id). All patients signed and approved the informed consent and standing knee radiographs, MRI and clinical examinations were performed on the same day at baseline. 
The study was carried out in accordance with the Helsinki Declaration II and the European Guidelines for Good Clinical Practice.\nEligibility criteria were: age above eighteen years; primary KOA diagnosed according to the clinical classification of KOA [21]; no history or active presence of other rheumatic diseases that might be responsible for secondary KOA; no substantial abnormalities in haematological, hepatic, renal, cardiac or endocrine functions (including diabetes mellitus); body-mass index (BMI) ≥ 28 kg/m2; expression of a clear, unequivocal motivation for weight loss; and fluency in the Danish language.\nOnly pain medication was monitored in our project: all participants were asked not to change their previous medications for pain, i.e. to maintain the same medication at the same dosage. The GP was informed of the project and asked to monitor other medications, including antidiabetics.", "Baseline MRI was obtained of a single knee, using a dedicated extremity scanner (E-Saote E-scanner, 0.2 Tesla, Software release 9.6B). In case of bilateral symptoms we examined the most symptomatic one. All MRI scans were performed in the same department of radiology by a team of two radiographers applying a standardized technique. Knees were placed in a receive-only cylinder coil.\nThe imaging protocol used was:\nA gradient echo scout followed by a sagittal STIR with 4 mm slices (TR 1460, TE 24, FOV 160 × 160, matrix 256 × 256, acquisition time 5 min 10 s). Two successive T1-weighted 3D gradient-echo sequences were acquired in the axial and sagittal plane with respectively 104 and 52 adjoining 1.4 mm thick slices (TR 60 ms, TE 24 ms, 45° flip angle, field of view 150 mm, matrix 192 × 160 and voxel size 0.78 × 1.07 × 1.4 mm3, acquisition time 6 min). Coronal T1-weighted spin-echo with 15 contiguous 4 mm thick slices (TR 520 ms, TE 15 ms, field of view 160 mm, matrix 192 × 160, acquisition time 3 min 20 s with two signals acquired). Finally, a sagittal T2*-weighted two-dimensional gradient-echo sequence was acquired with 25 contiguous 4 mm thick sections (TR 60 ms, TE 24 ms, 45° flip angle, field of view 160 mm, matrix 192 × 160, acquisition time 4 min 50 s).\nBi-plane weight-bearing semi-flexed radiographs were taken of the index knee; one in the posteroanterior and one in the lateral view (in case of bilateral symptoms we used the most symptomatic knee). They were obtained at inclusion/baseline, using a Philips Optimus apparatus, and the same radiographers using a standardized protocol carried out all examinations. The ionizing radiation dose per examination was 0.006 mSv, corresponding to 0.2% of the annual background radiation on earth (the average background dose for humans is 2.4 mSv annually).", "MRI scans were scored separately for four structural parameters and summed as a \"Total MRI Score\" to see if this construct would perform better as an imaging biomarker (a simple tallying sketch appears further below). Cartilage abnormalities, BMLs and synovitis were scored for the medial, lateral and patellofemoral chamber and effusion was graded according to the total amount.\nCartilage abnormalities were assessed using the T2* and the 3D T1-weighted gradient-echo sequences. These abnormalities were graded 0-4 according to the description by Ding et al. 
[22], and the specific grades were as follows; grade 0, normal cartilage; grade 1, focal blistering and an intra-cartilaginous area of low signal intensity with an intact surface; grade 2, irregularities on the surface or bottom and a < 50% loss of thickness; grade 3, deep ulceration, with a > 50% loss of thickness; grade 4, full-thickness chondral wear, with exposure of the subchondral bone. BMLs were defined as poorly marginated areas of increased signal intensity in the subchondral bone on the STIR images, and they were scored according to the description by Torres et al. [23]. The grades were defined as follows; grade 0, normal; grade 1, < 25% of the chamber, grade 2, 25-50% of the chamber and grade 3, > 50% of the chamber. The degree of synovitis was scored according to Rhodes et al. on a scale ranging from 0-3 where 0 = normal, 1 = diffuse, even thickening, 2 = nodular thickening and 3 = gross nodular thickening [24]. The amount of effusion was graded from 0-3 where 0 = physiological amount, 1 = small amount, in the retropatellar space, 2 = moderate amount, slight convexity of the suprapatellar bursa and 3 = large, capsular distension with bulging of the extensor retinaculum [17,25].\nMaximum global score was 12 for cartilage abnormalities and 9 for BMLs, effusion and synovitis. Minimum score was 0 for all assessed structural parameters.\nThe radiographs analysed using the Kellgren Lawrence scoring method (K/L), as this is a recommended and reliable method for baseline assessments of KOA using the fixed flexion protocol with antero-posterior and lateral radiographs [11]. One experienced investigator (MB), who was blinded to randomization, analyzed all radiographs and MRI scans in a random order.", "Subjects were randomly assigned to either a control group or an intervention group who was treated with a dietary regime. This consisted of a low-energy diet (LED) for eight weeks followed by 24 weeks of conventional hypo-energetic and high protein diet. As previously described [5] the intervention-diet consisted of nutrition powder (Speasy, Dansk Droge A/S) dissolved in water and it was taken as six daily meals, giving the patient 3.4 MJ/day. This fulfilled the recommendations of daily intake of high quality protein [26]; 37 energy percent (E%) from soy protein providing the essential amino acids, 47 E% from carbohydrate, 16 E% from vegetable fat (primarily from rapeseed oil), and fibres from oat-bran (15 g/day). The LED group received nutritional instruction and behavioural therapy by an experienced dietician at weekly sessions (1.5 h/week) throughout the eight weeks. This was done to reinforce and continuously stimulate the patients' intention to loose weight, and to promote a high degree of compliance.\nPatients in the control group attended a thorough two-hour session at baseline (by the same dietician who treated the LED group). The patients were given nutritional advice and recommended ordinary foods in amounts that would provide the patients with approximately 5 MJ/day. After this initial session all the patients in the control group received ideas for a diet plan in a booklet providing the participants with a variety of 'good-advices' when trying to reduce body weight. Finally the subjects in the control group were put on a waiting list for later recall to a similar dietary plan as in the intervention group. The follow-up visit was at t = 32 weeks.", "At baseline and after half a year (t = 32 weeks) the body weights (without coats, shoes etc.) 
of all patients' were recorded on a decimal scale (TANITA BW-800, 'Frederiksberg Vægtfabrik', Copenhagen, Denmark).", "The patient important outcome being their experience of KOA symptoms were assessed by the Western Ontario and McMaster Universities' (WOMAC) OA index, a validated disease-specific questionnaire comprised of three self-reported items; five pain-related questions of each 100 mm VAS (500 mm VAS in total); seventeen disability-related questions of each 100 mm VAS (1700 mm VAS in total); two stiffness-related questions of each 100 mm VAS (200 mm VAS in total). The patients mark their present level of symptoms, within each of the above described items, by placing a vertical line on a 100 mm horizontal line. The total WOMAC is a measure of the global KOA level of symptoms; 0 mm WOMAC representing no disease, and 2400 mm WOMAC representing worst possible state of disease [27]. This was done at baseline and again at follow-up (t = 32 weeks).", "A method of restricted randomization called minimization was used with stratifying patients according to (i) gender, (ii) BMI and (iii) age. This was done for every sixteen patients included, and ensured homogeneity between the groups [28].\nEach randomization list was drawn up by the statistician and given to the secretariat. In order to implement the random allocation, the sequence was concealed until interventions were assigned: The secretariat informed the patients about when to meet with the dietician (i.e. only implicitly referring to group allocation). The code was not revealed to the researchers before data collection, imaging assessments and laboratory analyses were complete.\nThe statistician and the assessor of radiographs and MRI scans were blinded.", "Clinical outcomes were analyzed as differences from baseline values (x32 - x0), and weight loss (kg) was also analyzed as a relative measure, being the percentage change from baseline ((x32 - x0)/x0 *100%). We performed a distribution-free Spearman's test of rank correlation when examining the possible relationship between imaging variables and clinical outcomes of the dietary interventions. Further analyses on significant results were carried out according to the data type. The Spearman correlation coefficient was interpreted as follows: < 0.3: none; 0.31-0.5: weak; 0.51-0.7: strong; 0.71-0.9: very strong and > 0.9: excellent. A P-value less than 0.05 (two-tailed) or a 95% confidence interval (CI) not including the null hypothesis was regarded as statistically significant. All the analyses were performed on SAS version 9.1 for Windows (Chicago, IL, USA).", "[SUBTITLE] Population characteristics [SUBSECTION] 32 patients were invited to participate, 31 of these were interested and 30 patients had baseline measurements performed. The patient not randomized was excluded due to withdrawal of consent before the randomization procedure. The 30 enrolled patients were randomly assigned to either LED or conventional hypo energetic diet. Following randomization of 15 patients to each group, all patients completed the trial and we subsequently analyzed the ITT population based on these 30 patients (Figure 1).\nTrial profile.\nAll patients were women, average age was 62 years (SD 6.8) and average BMI was 37 kg/m 2 (SD 6.0) (see table 1). At baseline we registered data regarding blood analyses and Patient-Reported Outcomes (PROs); there were no statistically significant differences between patients in the LED and control group at baseline (data not shown). 
Use of pain medication was also monitored by PROs and data revealed an unaltered use during the trial period.\nCharacteristics of all participants\n1 Sum of visual analogue scale scores; WOMAC-pain-index of 500 mm, -disability-index of 1700 mm, -stiffness-index of 200 mm and -total-index of 2400 mm (see text in article under Symptom assessment)\n2 Showed a non-Gaussian distribution, thus presented as median (interquartile range)\nRadiographs were scored using the K/L score, and the three joint compartments were scored separately in order to assess whether KOA at specific locations had any influence on our hypothesis. There were no group differences (data not shown). 63% of the patients had a medial K/L score ≥ 2 and 13% had a K/L score of 0 (no group differences). The assessment of MRI scans revealed that 37% of the patients had a BML score ≥ 1 for all three compartments and that 93% of the patients had some degree of cartilage abnormalities (score ≥ 1). For effusion and synovitis, 30% and 40% had a score of zero, respectively.\n[SUBTITLE] Assessed imaging variables as predictors of symptomatic changes following weight loss [SUBSECTION] Imaging variables as predictors of symptomatic outcome were examined by a Spearman correlation analysis (Table 2). The analysis did not show significant correlation between any imaging variables and the following outcomes: Δ WOMAC pain (mm) and Δ WOMAC disability (mm) (r ≤ 0.33; p > 0.05).\nCorrelation of baseline imaging variables with change in symptoms\n1 Measured by the WOMAC-index\n[SUBTITLE] Results regarding the weight loss program [SUBSECTION] Results from the intention-to-treat population are displayed in Table 3. The LED and control group changed their mean body weight (SE) by -15.6 (3.6)% and 0.4 (3.2)%, respectively (data not shown).\nResults based on changes for the whole intention-to-treat population\n1 Sum of visual analogue scale scores\nIn terms of responders, 40% vs. 13% of the patients in the LED and control group, respectively, achieved a pain reduction of more than 50% in the WOMAC-pain index, and 33% vs. 7% of the patients in the LED and control group achieved > 50% in the WOMAC total index. The WOMAC disability index showed improvement in the LED group when compared with the control group, MD of -266 mm (95% CI: -468.9 to -63.1; p < 0.01) (data not shown).\n[SUBTITLE] Adverse events [SUBSECTION] No significant adverse events were reported.", "32 patients were invited to participate, 31 of these were interested and 30 patients had baseline measurements performed. The patient not randomized was excluded due to withdrawal of consent before the randomization procedure. The 30 enrolled patients were randomly assigned to either LED or conventional hypo-energetic diet. Following randomization of 15 patients to each group, all patients completed the trial and we subsequently analyzed the ITT population based on these 30 patients (Figure 1).\nTrial profile.\nAll patients were women, average age was 62 years (SD 6.8) and average BMI was 37 kg/m2 (SD 6.0) (see Table 1). At baseline we registered data regarding blood analyses and Patient-Reported Outcomes (PROs); there were no statistically significant differences between patients in the LED and control group at baseline (data not shown). Use of pain medication was also monitored by PROs and data revealed an unaltered use during the trial period.\nCharacteristics of all participants\n1 Sum of visual analogue scale scores; WOMAC-pain-index of 500 mm, -disability-index of 1700 mm, -stiffness-index of 200 mm and -total-index of 2400 mm (see text in article under Symptom assessment)\n2 Showed a non-Gaussian distribution, thus presented as median (interquartile range)\nRadiographs were scored using the K/L score, and the three joint compartments were scored separately in order to assess whether KOA at specific locations had any influence on our hypothesis. There were no group differences (data not shown). 
63% of the patients had a medial K/L score ≥ 2 and 13% had a K/L score of 0 (no group differences). The assessment of MRI scans revealed that 37% of the patients had a BML score ≥ 1 for all three compartments and that 93% of the patients had some degree of cartilage abnormalities (score ≥ 1). For effusion and synovitis 30 and 40% had a score of zero respectively.", "Imaging variables as predictors of symptomatic outcome were examined by a Spearman correlation analysis (Table 2). The analysis did not show significant correlation between any imaging variables and the following outcomes; Δ WOMAC pain (mm) and Δ WOMAC disability (mm) (r ≤ 0.33; p > 0.05).\nCorrelation of baseline imaging variables with change in symptoms\n1 Measured by the WOMAC-index", "Results from the intention-to-treat population are displayed in Table 3. The LED and control group changed their mean body weight (SE) by -15.6 (3.6)% and 0.4 (3.2)% respectively (data not shown).\nResults based on changes for the whole intention-to-treat population\n1 Sum of visual analogue scale scores\nIn terms of responders, 40% vs. 13% of the patients in the LED and control group, respectively, achieved a pain reduction of more than 50% in the WOMAC-pain index and 33% vs. 7% of the patients in the LED and control group achieved > 50% in the WOMAC total index.. The WOMAC disability index showed improvement in the LED group when compared with the control group, MD of - 266 mm (95%CI: -468.9 to -63.1; p < 0.01) (data not shown).", "No significant adverse events were reported.", "We found that KOA related structural changes seen on radiographs and MRI scans, at baseline, did not rule out improvement of symptoms following a clinically significant weight loss and could not predict the symptomatic outcome of the diet intervention in this elderly sample of female obese KOA patients. This result was found in an intervention group in which 90% of the patients experienced a significant weight reduction (> 10%), and 33% of the patients experienced > 50% reduction in their overall symptoms of KOA. The results correspond to prior studies investigating short-term effects of weight-loss and long-term outcome of total knee joint replacement [5,29]. We believe that these findings could be valuable for the future design of trials examining the benefit of weight loss in KOA patients, as it indicates that none of the examined structural parameters, individually or combined, could predict the symptomatic outcome of a significant weight loss in obese women with KOA.\nA prior study investigating synovitis at baseline and clinical symptoms after two months, found no association [30], while Hill et al found a change in synovitis to be associated with change in symptoms of pain [18]. Furthermore, several cross-sectional studies have investigated MRI assessed items in relation to e.g. clinical symptoms, and in a meta-analysis BMLs and effusion/synovitis were found to be associated with knee pain [31].\nThe study has several limitations. It includes only 30 patients, secondly, the use of radiographs and low-field MRI are not the most advanced diagnostic tools regarding imaging assessment of osteoarthritis. Also, a follow-up period of 32 weeks might influence our findings.\nThe main disadvantage of low field MRI is the poorer image quality due to low SNR, which can only be compensated for by increasing either number of excitations, slice thickness and/or Field of Window or by reducing matrix and/or receiver bandwidth. 
All of these adjustments increase scan time and/or decrease the in-plane resolution. Smaller cartilage abnormalities are not as well detected by low field MRI when compared to medium or high field MRI [32], but unfortunately a recent review that could have brought new insight to the subject could not complete a meta-analysis due to study heterogeneity [33]. We applied a near-isotropic, sub-millimetre 3D GRE sequence and assessed images in several planes in order to achieve the highest possible diagnostic accuracy [34,35].\nFinally, we did not include analysis of multiple different scoring methods for radiographs and MRI scans, but the current approach was chosen inspired by several previous publications investigating this topic [17,22-25,36]. The newest evidence supports this approach, as BMLs, synovitis and effusion seem to be the most important MRI-assessed items likely to be associated with knee pain in KOA [31].", "In conclusion, the present study reveals that baseline joint status assessed by low field MRI scans (0.2 T) and bi-plane standing radiographs did not influence the long-term improvement in the WOMAC disability and WOMAC total indexes following a clinically relevant weight loss. The present study also demonstrated that an initial diet intervention program was able to induce a sustainable weight loss in KOA patients over a period of half a year (32 weeks).", "The authors declare that they have no competing interests.", "HRG made all the analyses and interpretation of data, drafted the manuscript and approved the final version. MB contributed to the conception and design, analysed all MRI and radiographs, revised the manuscript several times and approved the final version. RC contributed to the conception and design, especially the statistics, revised the manuscript and approved the final version. AA contributed with the overall design idea, revised the manuscript and approved the final version. HB contributed to the conception and design, revised the manuscript and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2474/12/56/prepub\n" ]
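The statistical methods described above (change from baseline, percentage weight change, and Spearman rank correlation with the stated interpretation bands) can be mirrored in a short sketch. The study itself used SAS 9.1; the Python version below uses invented example values only, and the WOMAC total mentioned in the symptom assessment is simply the sum of the pain, disability and stiffness subscales (0-2400 mm).

```python
# Illustrative sketch (not the study's SAS code) of the outcome computations
# in the Statistical methods; every numeric value here is hypothetical.
import numpy as np
from scipy.stats import spearmanr

weight_t0  = np.array([ 98.0, 105.5,  88.2, 112.3,  95.0, 101.7])   # kg at baseline
weight_t32 = np.array([ 82.5, 103.0,  75.4,  96.8,  93.5,  85.9])   # kg at 32 weeks
womac_pain_t0  = np.array([310., 250., 400., 280., 350., 300.])     # mm (0-500)
womac_pain_t32 = np.array([150., 240., 180., 200., 330., 120.])
baseline_total_mri = np.array([14, 6, 20, 9, 17, 11])               # summed MRI grades

delta_pain = womac_pain_t32 - womac_pain_t0                  # x32 - x0
pct_weight_change = (weight_t32 - weight_t0) / weight_t0 * 100.0

# Distribution-free Spearman rank correlation between a baseline imaging
# variable and the symptomatic change.
rho, p = spearmanr(baseline_total_mri, delta_pain)

def interpret_rho(r):
    """Interpretation bands used in the Methods (applied to |rho|)."""
    r = abs(r)
    if r <= 0.30: return "none"
    if r <= 0.50: return "weak"
    if r <= 0.70: return "strong"
    if r <= 0.90: return "very strong"
    return "excellent"

print(pct_weight_change.round(1), round(rho, 2), round(p, 3), interpret_rho(rho))
```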
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
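The minimization procedure described in the randomization subsection can be sketched as follows. The trial's exact implementation (factor weights, handling of blocks of sixteen, tie-breaking) is not reported, so this Pocock-Simon-style example is an assumption-laden illustration rather than the study's actual allocation code; the category cut-offs in the usage line are hypothetical.

```python
# Covariate-adaptive "minimization" allocation stratified by gender, BMI
# category and age category (sketch only; weights and tie-breaking assumed).
import random
from collections import defaultdict

ARMS = ("LED", "control")
FACTORS = ("gender", "bmi_cat", "age_cat")

# counts[arm][(factor, level)] = number of patients already allocated
counts = {arm: defaultdict(int) for arm in ARMS}

def allocate(patient):
    """Assign the arm that minimizes marginal imbalance across the three factors."""
    imbalance = {}
    for arm in ARMS:
        total = 0
        for f in FACTORS:
            level = patient[f]
            # Hypothetical counts if this patient were added to `arm`.
            hypothetical = {a: counts[a][(f, level)] + (1 if a == arm else 0) for a in ARMS}
            total += max(hypothetical.values()) - min(hypothetical.values())
        imbalance[arm] = total
    best = min(imbalance.values())
    arm = random.choice([a for a in ARMS if imbalance[a] == best])  # random tie-break
    for f in FACTORS:
        counts[arm][(f, patient[f])] += 1
    return arm

# Example: a 62-year-old woman with BMI 37 kg/m2 (cut-offs are hypothetical).
print(allocate({"gender": "F", "bmi_cat": ">=35", "age_cat": "60-69"}))
```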
[]
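For the MRI scoring described earlier, a minimal tally of per-compartment grades into a "Total MRI Score" might look like the sketch below. The grade ranges follow the scoring description, but the individual values and the exact way the four parameters were combined are assumptions.

```python
# Hypothetical tally of one knee's MRI grades into a "Total MRI Score".
cartilage = {"medial": 2, "lateral": 1, "patellofemoral": 3}   # each 0-4 -> max 12
bml       = {"medial": 1, "lateral": 0, "patellofemoral": 2}   # each 0-3 -> max 9
synovitis = {"medial": 0, "lateral": 1, "patellofemoral": 1}   # each 0-3 -> max 9
effusion  = 2                                                  # single global grade 0-3

total_mri_score = (sum(cartilage.values()) + sum(bml.values())
                   + sum(synovitis.values()) + effusion)
print(total_mri_score)   # 13 for this hypothetical knee
```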
Risk factors for tuberculosis treatment failure, default, or relapse and outcomes of retreatment in Morocco.
21356062
Patients with tuberculosis require retreatment if they fail or default from initial treatment or if they relapse following initial treatment success. Outcomes among patients receiving a standard World Health Organization Category II retreatment regimen are suboptimal, resulting in increased risk of morbidity, drug resistance, and transmission. In this study, we evaluated the risk factors for initial treatment failure, default, or early relapse leading to the need for tuberculosis retreatment in Morocco. We also assessed retreatment outcomes and drug susceptibility testing use for retreatment patients in urban centers in Morocco, where tuberculosis incidence is stubbornly high.
BACKGROUND
Patients with smear- or culture-positive pulmonary tuberculosis presenting for retreatment were identified using clinic registries in nine urban public clinics in Morocco. Demographic and outcomes data were collected from clinical charts and reference laboratories. To identify factors that had put these individuals at risk for failure, default, or early relapse in the first place, initial treatment records were also abstracted (if retreatment began within two years of initial treatment), and patient characteristics were compared with controls who successfully completed initial treatment without early relapse.
METHODS
291 patients presenting for retreatment were included; 93% received a standard Category II regimen. Retreatment was successful in 74% of relapse patients, 48% of failure patients, and 41% of default patients. 25% of retreatment patients defaulted, higher than previous estimates. Retreatment failure was most common among patients who had failed initial treatment (24%), and default from retreatment was most frequent among patients with initial treatment default (57%). Drug susceptibility testing was performed in only 10% of retreatment patients. Independent risk factors for failure, default, or early relapse after initial treatment included male gender (aOR = 2.29, 95% CI 1.10-4.77), positive sputum smear after 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), and hospitalization (OR 2.09, 95% CI 1.01-4.34). Higher weight at treatment initiation was protective. Male sex, substance use, missed doses, and hospitalization appeared to be risk factors for default, but subgroup analyses were limited by small numbers.
RESULTS
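As a reading aid for the adjusted odds ratios reported above, the sketch below shows the standard arithmetic linking a logistic-regression coefficient to an OR and its 95% CI (OR = exp(beta), CI = exp(beta ± 1.96·SE)). The beta and SE are back-calculated from the reported male-sex estimate purely for illustration and are not taken from the authors' model output.

```python
# Worked illustration of OR and 95% CI from a log-odds coefficient (hypothetical values).
import math

beta, se = 0.829, 0.374                      # back-calculated, for illustration only
or_point = math.exp(beta)                    # ~2.29
ci = (math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se))   # ~(1.10, 4.77)
print(f"OR = {or_point:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")

# Likewise, a coefficient of about -0.04 per kg gives exp(-0.04) ≈ 0.96, i.e. roughly
# 4% lower odds of the composite outcome per additional kilogram of baseline weight.
```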
Outcomes of retreatment with a Category II regimen are suboptimal and vary by subgroup. Default among patients receiving tuberculosis retreatment is unacceptably high in urban areas in Morocco, and patients who fail initial tuberculosis treatment are at especially high risk of retreatment failure. Strategies to address risk factors for initial treatment default and to identify patients at risk for failure (including expanded use of drug susceptibility testing) are important given suboptimal retreatment outcomes in these groups.
CONCLUSIONS
[ "Adult", "Cohort Studies", "Female", "Humans", "Male", "Middle Aged", "Morocco", "Outcome Assessment, Health Care", "Retreatment", "Retrospective Studies", "Risk Factors", "Treatment Failure", "Tuberculosis", "Young Adult" ]
3053250
null
null
Methods
[SUBTITLE] Sites [SUBSECTION] Morocco's National Tuberculosis Program (NTP) is well-established. TB care is provided free of charge by the Ministry of Health. Regional TB programs are divided into sectors, and each sector has a public health center (CDTMR) staffed by a specialist in TB care. TB clinical care is provided at CDTMR, while TB medications are delivered via DOTS at local clinics or dispensaries. M. tuberculosis culture and DST are performed at the National Reference Laboratory at the National Institute of Hygiene (INH) in Rabat or at the regional reference laboratory at the Institute Pasteur in Casablanca (IPM). National guidelines recommend DST for all retreatment patients. We chose nine urban CDTMR in high-incidence settings in Rabat, Casablanca, Fes, Tangier, and El Jedida as sites. [SUBTITLE] Design [SUBSECTION] A retrospective cohort study of retreatment cases focusing on patient characteristics, treatment outcomes, and drug susceptibility was conducted. To determine the risk factors for failure, relapse, or default after initial treatment that led to the need for retreatment, a nested case-control study was performed. [SUBTITLE] Study population [SUBSECTION] Moroccan patients with smear- or culture-confirmed pulmonary TB presenting for retreatment between June 2007 and August 2008 were identified using clinic registries. Those who had failed initial treatment, had relapse after completing initial treatment, or defaulted after at least two months of initial treatment were included in the cohort study. Among patients in the cohort study, those whose initial treatment had ended within two years of starting their retreatment regimen were eligible to be included in the nested case-control study as cases. The rationale for the two-year limit was to make it more likely that initial treatment charts would be available, to ensure that standards of practice for initial TB treatment would be similar across cases, and to increase the likelihood that recurrent disease represented relapse rather than reinfection. Controls were chosen from among patients with successful initial treatment without failure, default, or early relapse selected from the same center and treatment week. [SUBTITLE] Definitions [SUBSECTION] As per national guidelines, a patient with positive sputum smears for acid-fast bacilli after 5 months of continuous Category I treatment had treatment failure. A patient with initial treatment success after TB therapy of sufficient length (9 months for severe disease, 6 months for all others) that developed recurrent TB had treatment relapse. Treatment default was defined as interruption of treatment for ≥2 consecutive months. Treatment success was defined as treatment completion or cure. Retreatment patients were those receiving their first retreatment regimen after relapse, failure, or default. [SUBTITLE] Data collection [SUBSECTION] DST results were reviewed at INH and IPM to identify sites that used DST testing services. At clinics chosen as study sites, the TB registry and medical records of included patients were reviewed. For retreatment patients, information from initial TB treatment was collected if initial treatment was within two years of retreatment, and this information was used for risk factor analyses. This study was approved by the Ministry of Health of Morocco and by the Institutional Review Board of the Johns Hopkins University School of Medicine. [SUBTITLE] Drug Susceptibility Testing [SUBSECTION] Smear microscopy and culture were performed using standard methods. Specimens demonstrating growth of M. tuberculosis on Lowenstein-Jensen medium were tested for susceptibility to isoniazid, rifampin, ethambutol, and streptomycin using the proportion method. Critical concentrations were as follows: RIF 40 mcg/mL, INH 0.2 mcg/mL, streptomycin 4 mcg/mL, and ethambutol 2 mcg/mL. DST quality control was provided by the Supranational Laboratory at the Institute Pasteur of Algiers, as per standard laboratory operating procedures. [SUBTITLE] Statistical analysis [SUBSECTION] Data analyses were performed in EpiInfo™ (Version 3.3.2, Centers for Disease Control, Atlanta, GA) except for multivariate logistic regressions, which were performed using STATA software, version 10.0 (StataCorp LP, College Station, Texas). Demographic and clinical characteristics of cases and controls during initial TB treatment were compared using Pearson's χ2 or Fisher's exact tests for categorical variables and student's t tests for continuous variables. Variables known to be risk factors for relapse, failure, or default as well as factors found to be associated with these outcomes in univariate analyses were included in multivariable logistic regression models. Significance tests were two-sided with p-values of ≤ 0.05 considered statistically significant.
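A minimal sketch of the kind of analysis described in the Statistical analysis subsection above is given below. The study used EpiInfo and Stata 10; this Python version, with an invented miniature data set and hypothetical variable names, only illustrates the same steps (a chi-squared/Fisher comparison of a categorical factor and a multivariable logistic regression yielding adjusted ORs with 95% CIs).

```python
# Hypothetical case-control analysis sketch; data and variable names are invented.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact
import statsmodels.api as sm

df = pd.DataFrame({
    # 1 = failure, default, or early relapse after initial treatment; 0 = success
    "case":         [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
    "male":         [1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1],
    "smear_pos_3m": [1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0],
    "weight_kg":    [48, 62, 55, 50, 58, 60, 62, 58, 65, 54, 55, 63, 59, 61, 64, 57],
})

# Univariate comparison of a categorical factor: Pearson chi-squared, with
# Fisher's exact test as the small-sample alternative.
table = pd.crosstab(df["case"], df["male"])
chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# Multivariable logistic regression: adjusted odds ratios with 95% CIs.
X = sm.add_constant(df[["male", "smear_pos_3m", "weight_kg"]])
fit = sm.Logit(df["case"], X).fit(disp=False)
print(np.exp(fit.params))      # adjusted ORs
print(np.exp(fit.conf_int()))  # 95% CIs on the OR scale
print(p_chi2, p_fisher)
```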
null
null
null
null
[ "Background", "Sites", "Design", "Study population", "Definitions", "Data collection", "Drug Susceptibility Testing", "Statistical analysis", "Results", "Tuberculosis retreatment patients: population description, treatment outcomes, and DST results", "Risk factors for relapse, failure, or default from initial TB treatment: results of the nested case-control study", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) continues to be a global public health problem, with an estimated 9.4 million incident cases of TB and 1.8 million deaths in 2008[1]. Drug resistance and obstacles to successful directly observed therapy short-course (DOTS) impede disease control. Among patients being retreated for TB because of initial treatment failure, default from initial treatment, or relapse following initial treatment, drug resistance is common and retreatment outcomes inferior[2,3].\nPatients who fail, default from, or relapse after completion of standard first-line TB treatment and present for retreatment were previously grouped together by the World Health Organization (WHO) as Category II cases, and, in settings where individual drug susceptibility testing (DST) was not universally accessible, these patients were often treated with a standard retreatment regimen of first-line agents (a regimen that adds a single drug to the standard initial TB treatment regimen)[4]. Retreatment outcomes, however, are often poor, especially in patients with treatment failure or default[5]. DST may help identify those patients with multidrug-resistant (MDR) TB so that the appropriate antibiotics can be administered. Identifying local patient characteristics that confer higher risk of relapse, failure, or default from primary TB treatment may help inform country-specific prevention strategies aiming to reduce the need for retreatment, resulting in cost savings and diminished morbidity and transmission.\nIn Morocco, the incidence of TB in 2008 was 81 per 100,000 overall but was significantly higher in several urban centers, or \"hot spots\". Of the roughly 28,000 new TB cases nationally each year, 12% are retreatment cases[5]. National TB treatment guidelines in 2007 and 2008 recommended a Category I treatment regimen -- 2 months of isoniazid, rifampin, pyrazinamide, and streptomycin followed by 4 months of rifampin and isoniazid (2SHRZ/4RH) -- for new smear-positive cases and a Category II regimen -- 2HRZES/1RHEZ/5RHE (E = ethambutol) -- for retreatment cases. Beginning in 2009, ethambutol replaced streptomycin in Category I regimens. Among retreatment cases in Morocco, 12.2% are infected with Mycobacterium tuberculosis strains that are resistant to rifampin and isoniazid, or MDR TB[6]. National guidelines suggest drug susceptibility testing (DST) for all retreatment patients.\nThis retrospective cohort study examines the efficacy of standard TB retreatment in urban centers in Morocco, evaluates the uptake of DST testing for retreatment patients, and, using a nested case-control design, explores the risk factors that led to the need for retreatment in the first place.", "Morocco's National Tuberculosis Program (NTP) is well-established. TB care is provided free of charge by the Ministry of Health. Regional TB programs are divided into sectors, and each sector has a public health center (CDTMR) staffed by a specialist in TB care. TB clinical care is provided at CDTMR, while TB medications are delivered via DOTS at local clinics or dispensaries. M. tuberculosis culture and DST are performed at the National Reference Laboratory at the National Institute of Hygiene (INH) in Rabat or at the regional reference laboratory at the Institute Pasteur in Casablanca (IPM). National guidelines recommend DST for all retreatment patients. 
We chose nine urban CDTMR in high-incidence settings in Rabat, Casablanca, Fes, Tangier, and El Jedida as sites.", "A retrospective cohort study of retreatment cases focusing on patient characteristics, treatment outcomes, and drug susceptibility was conducted. To determine the risk factors for failure, relapse, or default after initial treatment that led to the need for retreatment, a nested case-control study was performed.", "Moroccan patients with smear- or culture-confirmed pulmonary TB presenting for retreatment between June 2007 and August 2008 were identified using clinic registries. Those who had failed initial treatment, had relapse after completing initial treatment, or defaulted after at least two months of initial treatment were included in the cohort study. Among patients in the cohort study, those whose initial treatment had ended within two years of starting their retreatment regimen were eligible to be included in the nested case-control study as cases. The rationale for the two-year limit was to make it more likely that initial treatment charts would be available, to ensure that standards of practice for initial TB treatment would be similar across cases, and to increase the likelihood that recurrent disease represented relapse rather than reinfection. Controls were chosen from among patients with successful initial treatment without failure, default, or early relapse selected from the same center and treatment week.", "As per national guidelines, a patient with positive sputum smears for acid-fast bacilli after 5 months of continuous Category I treatment had treatment failure. A patient with initial treatment success after TB therapy of sufficient length (9 months for severe disease, 6 months for all others) that developed recurrent TB had treatment relapse. Treatment default was defined as interruption of treatment for ≥2 consecutive months. Treatment success was defined as treatment completion or cure. Retreatment patients were those receiving their first retreatment regimen after relapse, failure, or default.", "DST results were reviewed at INH and IPM to identify sites that used DST testing services. At clinics chosen as study sites, the TB registry and medical records of included patients were reviewed. For retreatment patients, information from initial TB treatment was collected if initial treatment was within two years of retreatment, and this information was used for risk factor analyses. This study was approved by the Ministry of Health of Morocco and by the Institutional Review Board of the Johns Hopkins University School of Medicine.", "Smear microscopy and culture were performed using standard methods. Specimens demonstrating growth of M. tuberculosis on Lowenstein-Jensen medium were tested for susceptibility to isoniazid, rifampin, ethambutol, and streptomycin using the proportion method. Critical concentrations were as follows: RIF 40 mcg/mL, INH 0.2 mcg/mL, streptomycin 4 mcg/mL, and ethambutol 2 mcg/mL. DST quality control was provided by the Supranational Laboratory at the Institute Pasteur of Algiers, as per standard laboratory operating procedures.", "Data analyses were performed in EpiInfo™ (Version 3.3.2, Centers for Disease Control, Atlanta, GA) except for multivariate logistic regressions, which were performed using STATA software, version 10.0 (StataCorp LP, College Station, Texas). 
Demographic and clinical characteristics of cases and controls during initial TB treatment were compared using Pearson's χ2 or Fisher's exact tests for categorical variables and Student's t-tests for continuous variables. Variables known to be risk factors for relapse, failure, or default as well as factors found to be associated with these outcomes in univariate analyses were included in multivariable logistic regression models. Significance tests were two-sided with p-values of ≤ 0.05 considered statistically significant.", "[SUBTITLE] Tuberculosis retreatment patients: population description, treatment outcomes, and DST results [SUBSECTION] 291 patients with smear- or culture-positive pulmonary TB presenting for retreatment were identified and included in the study. Of retreatment cases, 232 (80%) had relapsed after completing an initial treatment regimen, 21 (7%) had failed an initial treatment regimen, and 38 (13%) had defaulted from initial treatment. The mean age was 37 years (IQR 27-46), and 78% were men. HIV and diabetes mellitus were rare (recorded for 1 and 3 patients, respectively).\nA standard Category II retreatment regimen was used in 272 (93%) of 291 patients. Retreatment outcomes were as follows: 172 (59%) cured, 26 (9%) completed treatment, 7 (2%) died, 73 (25%) defaulted, and 11 (4%) failed. Retreatment was successful in 173 (74%) of relapse patients, 10 (48%) of failure patients, and 15 (41%) of default patients (p < 0.01). Not surprisingly, retreatment failure was more common among patients who had failed initial treatment (24%) than among relapse (3%) or default (0%) patients (p < 0.01). Default was very common among retreatment patients at 25% overall and was particularly high among patients with initial treatment default (57%) (vs. 20% among relapse patients and 24% among failure patients; p < 0.01). Among relapse patients, the median time from the end of initial treatment to diagnosis of relapse was 7.0 years (range 9 months to 44 years).\nOnly 30 (10%) of 291 retreatment patients had DST testing performed with results available in the chart. Of three patients being retreated because of initial treatment failure who were tested, all had resistance to HRS; none of five being retreated because of default from initial treatment who were tested had pan-sensitive M. tuberculosis; and 3 of 22 (14%) patients who were being retreated because of relapse within two years after initial TB treatment had MDR-TB.\n[SUBTITLE] Risk factors for relapse, failure, or default from initial TB treatment: results of the nested case-control study [SUBSECTION] Of 291 retreatment patients, 104 (36%) had started a retreatment regimen within two years of completing or stopping initial TB treatment and were, thus, eligible for the risk factor analysis. Of these, initial treatment charts were available for 83 (80%), and 80 were suitable for data extraction (cases, n = 80). 266 patients with initial treatment success were included as controls (controls, n = 266). (Table 1).\nPatient and disease characteristics of individuals receiving standard initial tuberculosis treatment, comparing patients with treatment relapse, failure, or default (cases) to patients with treatment success without relapse (controls)\n§N = 54 for cases, 190 for controls, as marital status was not consistently recorded in the charts\n*Patients with no evidence of comorbid conditions in chart review were combined with those for whom the absence of comorbidities was expressly noted.\n‡N = 70 for cases, 237 for controls, as x-rays were not performed and recorded for all patients.\n√Patients with no evidence of tobacco, alcohol, or illicit drug use in chart review were combined with those for whom the absence of tobacco, alcohol, or illicit use was expressly noted.\n†N = 62 for cases (15 of 27 patients with default did not have sputum smear conversion data, 3 patients with relapse were culture-positive, smear-negative cases); N = 251 for controls.\n**N = 65 for cases (12 default patients, 1 failure, and 2 relapse patients did not have 2-month weight data), 263 for controls at 2 months; n = 50 for cases, 257 for controls after 5-6 months of treatment\nIn a multivariable logistic regression analysis, patients undergoing initial treatment for TB were at higher risk of a composite endpoint of failure, default, or relapse within two years if they were male (OR = 2.29, 95% CI 1.10-4.77), failed to have sputum smear conversion to negative by 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), or required hospitalization during treatment (OR 2.09, 95% CI 1.01-4.34). There was a trend towards increased risk of this composite endpoint among those with poor weight gain (less than 10% by two months of treatment) or missed doses during the intensive phase of treatment, but these differences did not reach statistical significance. Odds of the composite outcome were 4% lower for each 1 kilogram increase in weight at treatment initiation (OR 0.96, 95% CI 0.93-0.99). Alcohol use and HIV were uncommon (2% and <1%, respectively).\nRisk factors appeared to differ by subgroup, though analyses were limited by sample size (Table 2). Risk factors for treatment default included male sex, substance use, missed doses during the intensive phase, and hospitalization. Risk factors for failure or relapse were harder to identify.\nRisk factors for relapse, failure, or default from initial TB treatment, a subgroup analysis.\n** Three-month sputum smear conversion to negative did not occur in any failure patient, so this variable could not be included in the logistic regression models.\n†Tobacco, alcohol, or illicit drug use.", "291 patients with smear- or culture-positive pulmonary TB presenting for retreatment were identified and included in the study. Of retreatment cases, 232 (80%) had relapsed after completing an initial treatment regimen, 21 (7%) had failed an initial treatment regimen, and 38 (13%) had defaulted from initial treatment. The mean age was 37 years (IQR 27-46), and 78% were men. HIV and diabetes mellitus were rare (recorded for 1 and 3 patients, respectively).\nA standard Category II retreatment regimen was used in 272 (93%) of 291 patients. Retreatment outcomes were as follows: 172 (59%) cured, 26 (9%) completed treatment, 7 (2%) died, 73 (25%) defaulted, and 11 (4%) failed. Retreatment was successful in 173 (74%) of relapse patients, 10 (48%) of failure patients, and 15 (41%) of default patients (p < 0.01). Not surprisingly, retreatment failure was more common among patients who had failed initial treatment (24%) than among relapse (3%) or default (0%) patients (p < 0.01). Default was very common among retreatment patients at 25% overall and was particularly high among patients with initial treatment default (57%) (vs. 20% among relapse patients and 24% among failure patients; p < 0.01). Among relapse patients, the median time from the end of initial treatment to diagnosis of relapse was 7.0 years (range 9 months to 44 years).\nOnly 30 (10%) of 291 retreatment patients had DST testing performed with results available in the chart. Of three patients being retreated because of initial treatment failure who were tested, all had resistance to HRS; none of five being retreated because of default from initial treatment who were tested had pan-sensitive M. tuberculosis; and 3 of 22 (14%) patients who were being retreated because of relapse within two years after initial TB treatment had MDR-TB.", "Of 291 retreatment patients, 104 (36%) had started a retreatment regimen within two years of completing or stopping initial TB treatment and were, thus, eligible for the risk factor analysis. Of these, initial treatment charts were available for 83 (80%), and 80 were suitable for data extraction (cases, n = 80). 266 patients with initial treatment success were included as controls (controls, n = 266). 
(Table 1).\nPatient and disease characteristics of individuals receiving standard initial tuberculosis treatment, comparing patients with treatment relapse, failure, or default (cases) to patients with treatment success without relapse (controls)\n§N = 54 for cases, 190 for controls, as marital status was not consistently recorded in the charts\n*Patients with no evidence of comorbid conditions in chart review were combined with those for whom the absence of cormorbidities was expressly noted.\n‡N = 70 for cases, 237 for controls, as x-rays were not performed and recorded for all patients.\n√Patients with no evidence of tobacco, alcohol, or illicit drug use in chart review were combined with those for whom the absence of tobacco, alcohol, or illicit use was expressly noted.\n†N = 62 for cases (15 of 27 patients with default did not have sputum smear conversion data, 3 patients with relapse were culture-positive, smear-negative cases); N = 251 for controls.\n**N = 65 for cases (12 default patients, 1 failure, and 2 relapse patients did not have 2-month weight data), 263 for controls at 2 months; n = 50 for cases, 257 for controls after 5-6 months of treatment\nIn a multivariable logistic regression analysis, patients undergoing initial treatment for TB were at higher risk of a composite endpoint of failure, default, or relapse within two years if they were male (OR = 2.29, 95% CI 1.10-4.77), failed to have sputum smear conversion to negative by 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), or required hospitalization during treatment (OR 2.09, 95% CI 1.01-4.34). There was a trend towards increased risk of this composite endpoint among those with poor weight gain (less than 10% by two months of treatment) or missed doses during the intensive phase of treatment, but these differences did not reach statistical significance. Odds of the composite outcome were 4% lower for each 1 kilogram increase in weight at treatment initiation (OR 0.96, 95% CI 0.93-0.99). Alcohol use and HIV were uncommon (2% and <1%, respectively).\nRisk factors appeared to differ by subgroup, though analyses were limited by sample size (Table 2). Risk factors for treatment default included male sex, substance use, missed doses during the intensive phase, and hospitalization. Risk factors for failure or relapse were harder to identify.\nRisk factors for relapse, failure, or default from initial TB treatment, a subgroup analysis.\n** Three-month sputum smear conversion to negative was negative in all failure patients so could not be included in logistic regression models.\n†Tobacco, alcohol, or illicit drug use.", "In this study of 291 patients undergoing retreatment for TB, outcomes differed considerably by group -- 74% of patients with relapse, 48% of patients with failure, and 41% of patients with default had treatment success -- similar to previous studies[5,7]. Default from retreatment was extremely common at 25%, higher than previous country-wide estimates[5]. This may reflect temporal changes in treatment completion but more likely represents differences in study populations, as we focused on TB \"hot spots\", or urban centers with comparatively high TB incidence. Recent studies have demonstrated that, in urban settings, adherence is linked to patient knowledge about TB and provision of disease-specific education by the health care provider to the patient[8]. In busy urban clinics, time for education may be limited. 
Default from retreatment was most frequent among those who had defaulted from initial treatment, while failure was most common among those with previous failure. Although retreatment guidelines are often the same for patients with failure, default from, or relapse after initial treatment,[4] these results suggest that groups may benefit from different management strategies[9,10]. For example, treatment failure is commonly due to drug resistance, while recurrence may be due to poor adherence, high mycobacterial burden (such as in cavitary disease), or exogenous reinfection. Default patients may require intensified case management and education, rather than more intensive treatment.\nThe present study shows that, even when available, drug susceptibility testing is underutilized. It was performed in only 10% of retreatment patients. All 3 failure patients who underwent DST testing had MDR-TB, while 3 of 22 of relapse patients and 0 of 5 default patients tested did. While these DST results were only available for three failure patients and, therefore, not representative, these data and those from other studies suggest that MDR risk is not uniform among retreatment subgroups, with increased prevalence of MDR among patients with initial treatment failure[2,11-13]. According to a population-based study conducted among retreatment cases in Morocco, 12.2% had MDR-TB, but the study did not divide retreatment patients into failure, relapse, or default subgroups[6]. Taken together, these findings support use of DST in all retreatment patients, earlier DST testing in those with clinical and microbiological indications of impending treatment failure, and use of second-line drugs for retreatment of patients with initial treatment failure until DST results are known. In Morocco, DOTS coverage is 100%, and concerted efforts to dramatically enhance DST use are underway.\nPublished medical risk factors for failure or relapse include HIV infection, diabetes mellitus, low body weight, cavitation on chest x-ray, high bacterial burden, short treatment duration, drug resistance, and positive culture after two months of treatment[14-16]. Sociodemographic factors include unemployment, drug abuse, alcoholism, smoking, and poor treatment adherence. Treatment default is known to be associated with substance abuse, foreign birth, male gender, previous default, low socioeconomic status, psychiatric illness, unemployment, migration, side effects, [17,18] long distance to the clinic, social stigma, and poorly-implemented DOTS but, of course, differ by setting [16,19-21]. In our study population, HIV infection is rare; among TB patients, less than 1% are HIV-infected (unpublished data, Morocco NTP). Further, alcohol use in Morocco is uncommon, and smoking is extremely uncommon among women. Moreover, in the urban clinics studied, the majority of patients are non-immigrants, the clinics are geographically accessible, and DOTS coverage is 100%. Thus, many traditional risk factors for poor TB treatment outcomes are less prominent in Morocco, making it harder to prospectively identify patients at risk. However, continued sputum smear positivity after 3 months of treatment is a strong predictor of subsequent poor outcomes, and should prompt DST testing in all patients. 
As missed treatment doses may herald impending default, enhanced communication between the local clinics that dispense TB treatment and physicians at the regional health centers that prescribe it may be one country-specific strategy to help pinpoint those individuals who are missing doses and are at high risk of defaulting altogether. Small sample sizes limited our ability to evaluate subgroups, but even so, we were able to identify male sex, substance use (tobacco, alcohol, or illicit drug use), and missed doses during the intensive phase as likely risk factors for treatment default. Higher odds of hospitalization probably reflected the need for hospitalization to ensure adherence rather than increased disease severity. Further exploring risk factors for treatment default may help control programs identify those likely to benefit from targeted interventions such as health education, substance abuse counseling, enhanced tracking, or reinforcement of DOTS supervision[22,23].\nAs a retrospective chart review, our risk factor evaluation was limited by the availability of data present in clinical charts. Information about tobacco, alcohol, and illicit drug use was not routinely recorded, for example. Classifying those with missing substance use data as nonusers likely biased analyses of this risk factor toward the null; however, estimates of smoking, tobacco, and illicit drug use were similar to national substance use statistics[24]. Also, in cases of recurrence, it was not possible to distinguish between relapse and reinfection, so we limited our risk factor analysis to those who had received initial TB treatment within two years of recurrence and were, thus, more likely to have true relapse. Our ability to identify independent risk factors in subgroup analyses was limited by small sample sizes; questions regarding risk factors in these subgroups would best be answered in larger, prospective studies. Finally, DST testing was not universally performed in retreatment patients, so selection bias is possible, as clinicians are more likely to send those at high risk of resistance for testing. In our study, 20% of retreatment patients with DST had MDR-TB, compared with 12% in a national prevalence survey[6].", "Patients presenting for TB retreatment - those with relapse, failure of initial treatment, or default -- are often grouped together and treated with a standard Category II retreatment regimen. However, these groups have distinct demographic and clinical characteristics, important differences in retreatment outcomes, and likely have different rates of drug resistant M. tuberculosis. Default from retreatment is common in high-incidence urban centers in Morocco, pointing to the need for strategies to address adherence. DST is essential for identifying retreatment patients with drug resistance, but even when available, it is underutilized, likely due to practical constraints. Preventing the need for retreatment in the first place is the best strategy given the individual and public health consequences of poor initial TB treatment outcomes, so strategies to identify and address country-specific risk factors are warranted to maximize treatment success.", "The authors declare that they have no competing interests.", "KED designed the study and helped with data analysis, interpretation of results, and drafting of the manuscript. OL assisted with access to and interpretation of laboratory drug susceptibility testing results. 
IG conceived of the study and assisted with study design, interpretation of results, and editing of the manuscript and provided invaluable clinical insight. JK helped with data collection and study design and implementation. MDE assisted with access to laboratory drug susceptibility testing results. IC performed the data analysis and assisted with interpretation of results. REA assisted with study design and study implementation and provided oversight of key study personnel at INH. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/140/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Sites", "Design", "Study population", "Definitions", "Data collection", "Drug Susceptibility Testing", "Statistical analysis", "Results", "Tuberculosis retreatment patients: population description, treatment outcomes, and DST results", "Risk factors for relapse, failure, or default from initial TB treatment: results of the nested case-control study", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Tuberculosis (TB) continues to be a global public health problem, with an estimated 9.4 million incident cases of TB and 1.8 million deaths in 2008[1]. Drug resistance and obstacles to successful directly observed therapy short-course (DOTS) impede disease control. Among patients being retreated for TB because of initial treatment failure, default from initial treatment, or relapse following initial treatment, drug resistance is common and retreatment outcomes inferior[2,3].\nPatients who fail, default from, or relapse after completion of standard first-line TB treatment and present for retreatment were previously grouped together by the World Health Organization (WHO) as Category II cases, and, in settings where individual drug susceptibility testing (DST) was not universally accessible, these patients were often treated with a standard retreatment regimen of first-line agents (a regimen that adds a single drug to the standard initial TB treatment regimen)[4]. Retreatment outcomes, however, are often poor, especially in patients with treatment failure or default[5]. DST may help identify those patients with multidrug-resistant (MDR) TB so that the appropriate antibiotics can be administered. Identifying local patient characteristics that confer higher risk of relapse, failure, or default from primary TB treatment may help inform country-specific prevention strategies aiming to reduce the need for retreatment, resulting in cost savings and diminished morbidity and transmission.\nIn Morocco, the incidence of TB in 2008 was 81 per 100,000 overall but was significantly higher in several urban centers, or \"hot spots\". Of the roughly 28,000 new TB cases nationally each year, 12% are retreatment cases[5]. National TB treatment guidelines in 2007 and 2008 recommended a Category I treatment regimen -- 2 months of isoniazid, rifampin, pyrazinamide, and streptomycin followed by 4 months of rifampin and isoniazid (2SHRZ/4RH) -- for new smear-positive cases and a Category II regimen -- 2HRZES/1RHEZ/5RHE (E = ethambutol) -- for retreatment cases. Beginning in 2009, ethambutol replaced streptomycin in Category I regimens. Among retreatment cases in Morocco, 12.2% are infected with Mycobacterium tuberculosis strains that are resistant to rifampin and isoniazid, or MDR TB[6]. National guidelines suggest drug susceptibility testing (DST) for all retreatment patients.\nThis retrospective cohort study examines the efficacy of standard TB retreatment in urban centers in Morocco, evaluates the uptake of DST testing for retreatment patients, and, using a nested case-control design, explores the risk factors that led to the need for retreatment in the first place.", "[SUBTITLE] Sites [SUBSECTION] Morocco's National Tuberculosis Program (NTP) is well-established. TB care is provided free of charge by the Ministry of Health. Regional TB programs are divided into sectors, and each sector has a public health center (CDTMR) staffed by a specialist in TB care. TB clinical care is provided at CDTMR, while TB medications are delivered via DOTS at local clinics or dispensaries. M. tuberculosis culture and DST are performed at the National Reference Laboratory at the National Institute of Hygiene (INH) in Rabat or at the regional reference laboratory at the Institute Pasteur in Casablanca (IPM). National guidelines recommend DST for all retreatment patients. 
We chose nine urban CDTMR in high-incidence settings in Rabat, Casablanca, Fes, Tangier, and El Jedida as sites.\nMorocco's National Tuberculosis Program (NTP) is well-established. TB care is provided free of charge by the Ministry of Health. Regional TB programs are divided into sectors, and each sector has a public health center (CDTMR) staffed by a specialist in TB care. TB clinical care is provided at CDTMR, while TB medications are delivered via DOTS at local clinics or dispensaries. M. tuberculosis culture and DST are performed at the National Reference Laboratory at the National Institute of Hygiene (INH) in Rabat or at the regional reference laboratory at the Institute Pasteur in Casablanca (IPM). National guidelines recommend DST for all retreatment patients. We chose nine urban CDTMR in high-incidence settings in Rabat, Casablanca, Fes, Tangier, and El Jedida as sites.\n[SUBTITLE] Design [SUBSECTION] A retrospective cohort study of retreatment cases focusing on patient characteristics, treatment outcomes, and drug susceptibility was conducted. To determine the risk factors for failure, relapse, or default after initial treatment that led to the need for retreatment, a nested case-control study was performed.\nA retrospective cohort study of retreatment cases focusing on patient characteristics, treatment outcomes, and drug susceptibility was conducted. To determine the risk factors for failure, relapse, or default after initial treatment that led to the need for retreatment, a nested case-control study was performed.\n[SUBTITLE] Study population [SUBSECTION] Moroccan patients with smear- or culture-confirmed pulmonary TB presenting for retreatment between June 2007 and August 2008 were identified using clinic registries. Those who had failed initial treatment, had relapse after completing initial treatment, or defaulted after at least two months of initial treatment were included in the cohort study. Among patients in the cohort study, those whose initial treatment had ended within two years of starting their retreatment regimen were eligible to be included in the nested case-control study as cases. The rationale for the two-year limit was to make it more likely that initial treatment charts would be available, to ensure that standards of practice for initial TB treatment would be similar across cases, and to increase the likelihood that recurrent disease represented relapse rather than reinfection. Controls were chosen from among patients with successful initial treatment without failure, default, or early relapse selected from the same center and treatment week.\nMoroccan patients with smear- or culture-confirmed pulmonary TB presenting for retreatment between June 2007 and August 2008 were identified using clinic registries. Those who had failed initial treatment, had relapse after completing initial treatment, or defaulted after at least two months of initial treatment were included in the cohort study. Among patients in the cohort study, those whose initial treatment had ended within two years of starting their retreatment regimen were eligible to be included in the nested case-control study as cases. The rationale for the two-year limit was to make it more likely that initial treatment charts would be available, to ensure that standards of practice for initial TB treatment would be similar across cases, and to increase the likelihood that recurrent disease represented relapse rather than reinfection. 
Controls were chosen from among patients with successful initial treatment without failure, default, or early relapse selected from the same center and treatment week.\n[SUBTITLE] Definitions [SUBSECTION] As per national guidelines, a patient with positive sputum smears for acid-fast bacilli after 5 months of continuous Category I treatment had treatment failure. A patient with initial treatment success after TB therapy of sufficient length (9 months for severe disease, 6 months for all others) that developed recurrent TB had treatment relapse. Treatment default was defined as interruption of treatment for ≥2 consecutive months. Treatment success was defined as treatment completion or cure. Retreatment patients were those receiving their first retreatment regimen after relapse, failure, or default.\nAs per national guidelines, a patient with positive sputum smears for acid-fast bacilli after 5 months of continuous Category I treatment had treatment failure. A patient with initial treatment success after TB therapy of sufficient length (9 months for severe disease, 6 months for all others) that developed recurrent TB had treatment relapse. Treatment default was defined as interruption of treatment for ≥2 consecutive months. Treatment success was defined as treatment completion or cure. Retreatment patients were those receiving their first retreatment regimen after relapse, failure, or default.\n[SUBTITLE] Data collection [SUBSECTION] DST results were reviewed at INH and IPM to identify sites that used DST testing services. At clinics chosen as study sites, the TB registry and medical records of included patients were reviewed. For retreatment patients, information from initial TB treatment was collected if initial treatment was within two years of retreatment, and this information was used for risk factor analyses. This study was approved by the Ministry of Health of Morocco and by the Institutional Review Board of the Johns Hopkins University School of Medicine.\nDST results were reviewed at INH and IPM to identify sites that used DST testing services. At clinics chosen as study sites, the TB registry and medical records of included patients were reviewed. For retreatment patients, information from initial TB treatment was collected if initial treatment was within two years of retreatment, and this information was used for risk factor analyses. This study was approved by the Ministry of Health of Morocco and by the Institutional Review Board of the Johns Hopkins University School of Medicine.\n[SUBTITLE] Drug Susceptibility Testing [SUBSECTION] Smear microscopy and culture were performed using standard methods. Specimens demonstrating growth of M. tuberculosis on Lowenstein-Jensen medium were tested for susceptibility to isoniazid, rifampin, ethambutol, and streptomycin using the proportion method. Critical concentrations were as follows: RIF 40 mcg/mL, INH 0.2 mcg/mL, streptomycin 4 mcg/mL, and ethambutol 2 mcg/mL. DST quality control was provided by the Supranational Laboratory at the Institute Pasteur of Algiers, as per standard laboratory operating procedures.\nSmear microscopy and culture were performed using standard methods. Specimens demonstrating growth of M. tuberculosis on Lowenstein-Jensen medium were tested for susceptibility to isoniazid, rifampin, ethambutol, and streptomycin using the proportion method. Critical concentrations were as follows: RIF 40 mcg/mL, INH 0.2 mcg/mL, streptomycin 4 mcg/mL, and ethambutol 2 mcg/mL. 
DST quality control was provided by the Supranational Laboratory at the Institute Pasteur of Algiers, as per standard laboratory operating procedures.\n[SUBTITLE] Statistical analysis [SUBSECTION] Data analyses were performed in EpiInfo™ (Version 3.3.2, Centers for Disease Control, Atlanta, GA) except for multivariate logistic regressions, which were performed using STATA software, version 10.0 (StataCorp LP, College Station, Texas). Demographic and clinical characteristics of cases and controls during initial TB treatment were compared using Pearson's χ2 or Fisher's exact tests for categorical variables and student's t tests for continuous variables. Variables known to be risk factors for relapse, failure, or default as well as factors found to be associated with these outcomes in univariate analyses were included in multivariable logistic regression models. Significance tests were two-sided with p-values of ≤ 0.05 considered statistically significant.\nData analyses were performed in EpiInfo™ (Version 3.3.2, Centers for Disease Control, Atlanta, GA) except for multivariate logistic regressions, which were performed using STATA software, version 10.0 (StataCorp LP, College Station, Texas). Demographic and clinical characteristics of cases and controls during initial TB treatment were compared using Pearson's χ2 or Fisher's exact tests for categorical variables and student's t tests for continuous variables. Variables known to be risk factors for relapse, failure, or default as well as factors found to be associated with these outcomes in univariate analyses were included in multivariable logistic regression models. Significance tests were two-sided with p-values of ≤ 0.05 considered statistically significant.", "Morocco's National Tuberculosis Program (NTP) is well-established. TB care is provided free of charge by the Ministry of Health. Regional TB programs are divided into sectors, and each sector has a public health center (CDTMR) staffed by a specialist in TB care. TB clinical care is provided at CDTMR, while TB medications are delivered via DOTS at local clinics or dispensaries. M. tuberculosis culture and DST are performed at the National Reference Laboratory at the National Institute of Hygiene (INH) in Rabat or at the regional reference laboratory at the Institute Pasteur in Casablanca (IPM). National guidelines recommend DST for all retreatment patients. We chose nine urban CDTMR in high-incidence settings in Rabat, Casablanca, Fes, Tangier, and El Jedida as sites.", "A retrospective cohort study of retreatment cases focusing on patient characteristics, treatment outcomes, and drug susceptibility was conducted. To determine the risk factors for failure, relapse, or default after initial treatment that led to the need for retreatment, a nested case-control study was performed.", "Moroccan patients with smear- or culture-confirmed pulmonary TB presenting for retreatment between June 2007 and August 2008 were identified using clinic registries. Those who had failed initial treatment, had relapse after completing initial treatment, or defaulted after at least two months of initial treatment were included in the cohort study. Among patients in the cohort study, those whose initial treatment had ended within two years of starting their retreatment regimen were eligible to be included in the nested case-control study as cases. 
The rationale for the two-year limit was to make it more likely that initial treatment charts would be available, to ensure that standards of practice for initial TB treatment would be similar across cases, and to increase the likelihood that recurrent disease represented relapse rather than reinfection. Controls were chosen from among patients with successful initial treatment without failure, default, or early relapse selected from the same center and treatment week.", "As per national guidelines, a patient with positive sputum smears for acid-fast bacilli after 5 months of continuous Category I treatment had treatment failure. A patient with initial treatment success after TB therapy of sufficient length (9 months for severe disease, 6 months for all others) that developed recurrent TB had treatment relapse. Treatment default was defined as interruption of treatment for ≥2 consecutive months. Treatment success was defined as treatment completion or cure. Retreatment patients were those receiving their first retreatment regimen after relapse, failure, or default.", "DST results were reviewed at INH and IPM to identify sites that used DST testing services. At clinics chosen as study sites, the TB registry and medical records of included patients were reviewed. For retreatment patients, information from initial TB treatment was collected if initial treatment was within two years of retreatment, and this information was used for risk factor analyses. This study was approved by the Ministry of Health of Morocco and by the Institutional Review Board of the Johns Hopkins University School of Medicine.", "Smear microscopy and culture were performed using standard methods. Specimens demonstrating growth of M. tuberculosis on Lowenstein-Jensen medium were tested for susceptibility to isoniazid, rifampin, ethambutol, and streptomycin using the proportion method. Critical concentrations were as follows: RIF 40 mcg/mL, INH 0.2 mcg/mL, streptomycin 4 mcg/mL, and ethambutol 2 mcg/mL. DST quality control was provided by the Supranational Laboratory at the Institute Pasteur of Algiers, as per standard laboratory operating procedures.", "Data analyses were performed in EpiInfo™ (Version 3.3.2, Centers for Disease Control, Atlanta, GA) except for multivariate logistic regressions, which were performed using STATA software, version 10.0 (StataCorp LP, College Station, Texas). Demographic and clinical characteristics of cases and controls during initial TB treatment were compared using Pearson's χ2 or Fisher's exact tests for categorical variables and student's t tests for continuous variables. Variables known to be risk factors for relapse, failure, or default as well as factors found to be associated with these outcomes in univariate analyses were included in multivariable logistic regression models. Significance tests were two-sided with p-values of ≤ 0.05 considered statistically significant.", "[SUBTITLE] Tuberculosis retreatment patients: population description, treatment outcomes, and DST results [SUBSECTION] 291 patients with smear- or culture-positive pulmonary TB presenting for retreatment were identified and included in the study. Of retreatment cases, 232 (80%) had relapsed after completing an initial treatment regimen, 21 (7%) had failed an initial treatment regimen, and 38 (13%) had defaulted from initial treatment. The mean age was 37 years (IQR 27-46), and 78% were men. 
HIV and diabetes mellitus were rare (recorded for 1 and 3 patients, respectively).\nA standard Category II retreatment regimen was used in 272 (93%) of 291 patients. Retreatment outcomes were as follows: 172 (59%) cured, 26 (9%) completed treatment, 7 (2%) died, 73 (25%) defaulted, and 11 (4%) failed. Retreatment was successful in 173 (74%) of relapse patients, 10 (48%) of failure patients, and 15 (41%) of default patients.(p < 0.01) Not surprisingly, retreatment failure was more common among patients who had failed initial treatment (24%) than among relapse (3%) or default (0%) patients (p < 0.01). Default was very common among retreatment patients at 25% overall and was particularly high among patients with initial treatment default (57%) (vs. 20% among relapse patients and 24% among failure patients (p < 0.01)). Among relapse patients, the median time from the end of initial treatment to diagnosis of relapse was 7.0 years (range 9 months to 44 years).\nOnly 30 (10%) of 291 retreatment patients had DST testing performed with results available in the chart. Of three patients being retreated because of initial treatment failure who were tested, all had resistance to HRS, none of five being retreated because of default from initial treatment who were tested had pan-sensitive M. tuberculosis, and 3 of 22 (14%) patients who were being retreated because of relapse within two years after initial TB treatment had MDR-TB.\n291 patients with smear- or culture-positive pulmonary TB presenting for retreatment were identified and included in the study. Of retreatment cases, 232 (80%) had relapsed after completing an initial treatment regimen, 21 (7%) had failed an initial treatment regimen, and 38 (13%) had defaulted from initial treatment. The mean age was 37 years (IQR 27-46), and 78% were men. HIV and diabetes mellitus were rare (recorded for 1 and 3 patients, respectively).\nA standard Category II retreatment regimen was used in 272 (93%) of 291 patients. Retreatment outcomes were as follows: 172 (59%) cured, 26 (9%) completed treatment, 7 (2%) died, 73 (25%) defaulted, and 11 (4%) failed. Retreatment was successful in 173 (74%) of relapse patients, 10 (48%) of failure patients, and 15 (41%) of default patients.(p < 0.01) Not surprisingly, retreatment failure was more common among patients who had failed initial treatment (24%) than among relapse (3%) or default (0%) patients (p < 0.01). Default was very common among retreatment patients at 25% overall and was particularly high among patients with initial treatment default (57%) (vs. 20% among relapse patients and 24% among failure patients (p < 0.01)). Among relapse patients, the median time from the end of initial treatment to diagnosis of relapse was 7.0 years (range 9 months to 44 years).\nOnly 30 (10%) of 291 retreatment patients had DST testing performed with results available in the chart. Of three patients being retreated because of initial treatment failure who were tested, all had resistance to HRS, none of five being retreated because of default from initial treatment who were tested had pan-sensitive M. 
tuberculosis, and 3 of 22 (14%) patients who were being retreated because of relapse within two years after initial TB treatment had MDR-TB.\n[SUBTITLE] Risk factors for relapse, failure, or default from initial TB treatment: results of the nested case-control study [SUBSECTION] Of 291 retreatment patients, 104 (36%) had started a retreatment regimen within two years of completing or stopping initial TB treatment and were, thus, eligible for the risk factor analysis. Of these, initial treatment charts were available for 83 (80%), and 80 were suitable for data extraction (cases, n = 80). 266 patients with initial treatment success were included as controls (controls, n = 266). (Table 1).\nPatient and disease characteristics of individuals receiving standard initial tuberculosis treatment, comparing patients with treatment relapse, failure, or default (cases) to patients with treatment success without relapse (controls)\n§N = 54 for cases, 190 for controls, as marital status was not consistently recorded in the charts\n*Patients with no evidence of comorbid conditions in chart review were combined with those for whom the absence of cormorbidities was expressly noted.\n‡N = 70 for cases, 237 for controls, as x-rays were not performed and recorded for all patients.\n√Patients with no evidence of tobacco, alcohol, or illicit drug use in chart review were combined with those for whom the absence of tobacco, alcohol, or illicit use was expressly noted.\n†N = 62 for cases (15 of 27 patients with default did not have sputum smear conversion data, 3 patients with relapse were culture-positive, smear-negative cases); N = 251 for controls.\n**N = 65 for cases (12 default patients, 1 failure, and 2 relapse patients did not have 2-month weight data), 263 for controls at 2 months; n = 50 for cases, 257 for controls after 5-6 months of treatment\nIn a multivariable logistic regression analysis, patients undergoing initial treatment for TB were at higher risk of a composite endpoint of failure, default, or relapse within two years if they were male (OR = 2.29, 95% CI 1.10-4.77), failed to have sputum smear conversion to negative by 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), or required hospitalization during treatment (OR 2.09, 95% CI 1.01-4.34). There was a trend towards increased risk of this composite endpoint among those with poor weight gain (less than 10% by two months of treatment) or missed doses during the intensive phase of treatment, but these differences did not reach statistical significance. Odds of the composite outcome were 4% lower for each 1 kilogram increase in weight at treatment initiation (OR 0.96, 95% CI 0.93-0.99). Alcohol use and HIV were uncommon (2% and <1%, respectively).\nRisk factors appeared to differ by subgroup, though analyses were limited by sample size (Table 2). Risk factors for treatment default included male sex, substance use, missed doses during the intensive phase, and hospitalization. Risk factors for failure or relapse were harder to identify.\nRisk factors for relapse, failure, or default from initial TB treatment, a subgroup analysis.\n** Three-month sputum smear conversion to negative was negative in all failure patients so could not be included in logistic regression models.\n†Tobacco, alcohol, or illicit drug use.\nOf 291 retreatment patients, 104 (36%) had started a retreatment regimen within two years of completing or stopping initial TB treatment and were, thus, eligible for the risk factor analysis. 
Of these, initial treatment charts were available for 83 (80%), and 80 were suitable for data extraction (cases, n = 80). 266 patients with initial treatment success were included as controls (controls, n = 266). (Table 1).\nPatient and disease characteristics of individuals receiving standard initial tuberculosis treatment, comparing patients with treatment relapse, failure, or default (cases) to patients with treatment success without relapse (controls)\n§N = 54 for cases, 190 for controls, as marital status was not consistently recorded in the charts\n*Patients with no evidence of comorbid conditions in chart review were combined with those for whom the absence of cormorbidities was expressly noted.\n‡N = 70 for cases, 237 for controls, as x-rays were not performed and recorded for all patients.\n√Patients with no evidence of tobacco, alcohol, or illicit drug use in chart review were combined with those for whom the absence of tobacco, alcohol, or illicit use was expressly noted.\n†N = 62 for cases (15 of 27 patients with default did not have sputum smear conversion data, 3 patients with relapse were culture-positive, smear-negative cases); N = 251 for controls.\n**N = 65 for cases (12 default patients, 1 failure, and 2 relapse patients did not have 2-month weight data), 263 for controls at 2 months; n = 50 for cases, 257 for controls after 5-6 months of treatment\nIn a multivariable logistic regression analysis, patients undergoing initial treatment for TB were at higher risk of a composite endpoint of failure, default, or relapse within two years if they were male (OR = 2.29, 95% CI 1.10-4.77), failed to have sputum smear conversion to negative by 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), or required hospitalization during treatment (OR 2.09, 95% CI 1.01-4.34). There was a trend towards increased risk of this composite endpoint among those with poor weight gain (less than 10% by two months of treatment) or missed doses during the intensive phase of treatment, but these differences did not reach statistical significance. Odds of the composite outcome were 4% lower for each 1 kilogram increase in weight at treatment initiation (OR 0.96, 95% CI 0.93-0.99). Alcohol use and HIV were uncommon (2% and <1%, respectively).\nRisk factors appeared to differ by subgroup, though analyses were limited by sample size (Table 2). Risk factors for treatment default included male sex, substance use, missed doses during the intensive phase, and hospitalization. Risk factors for failure or relapse were harder to identify.\nRisk factors for relapse, failure, or default from initial TB treatment, a subgroup analysis.\n** Three-month sputum smear conversion to negative was negative in all failure patients so could not be included in logistic regression models.\n†Tobacco, alcohol, or illicit drug use.", "291 patients with smear- or culture-positive pulmonary TB presenting for retreatment were identified and included in the study. Of retreatment cases, 232 (80%) had relapsed after completing an initial treatment regimen, 21 (7%) had failed an initial treatment regimen, and 38 (13%) had defaulted from initial treatment. The mean age was 37 years (IQR 27-46), and 78% were men. HIV and diabetes mellitus were rare (recorded for 1 and 3 patients, respectively).\nA standard Category II retreatment regimen was used in 272 (93%) of 291 patients. Retreatment outcomes were as follows: 172 (59%) cured, 26 (9%) completed treatment, 7 (2%) died, 73 (25%) defaulted, and 11 (4%) failed. 
Retreatment was successful in 173 (74%) of relapse patients, 10 (48%) of failure patients, and 15 (41%) of default patients.(p < 0.01) Not surprisingly, retreatment failure was more common among patients who had failed initial treatment (24%) than among relapse (3%) or default (0%) patients (p < 0.01). Default was very common among retreatment patients at 25% overall and was particularly high among patients with initial treatment default (57%) (vs. 20% among relapse patients and 24% among failure patients (p < 0.01)). Among relapse patients, the median time from the end of initial treatment to diagnosis of relapse was 7.0 years (range 9 months to 44 years).\nOnly 30 (10%) of 291 retreatment patients had DST testing performed with results available in the chart. Of three patients being retreated because of initial treatment failure who were tested, all had resistance to HRS, none of five being retreated because of default from initial treatment who were tested had pan-sensitive M. tuberculosis, and 3 of 22 (14%) patients who were being retreated because of relapse within two years after initial TB treatment had MDR-TB.", "Of 291 retreatment patients, 104 (36%) had started a retreatment regimen within two years of completing or stopping initial TB treatment and were, thus, eligible for the risk factor analysis. Of these, initial treatment charts were available for 83 (80%), and 80 were suitable for data extraction (cases, n = 80). 266 patients with initial treatment success were included as controls (controls, n = 266). (Table 1).\nPatient and disease characteristics of individuals receiving standard initial tuberculosis treatment, comparing patients with treatment relapse, failure, or default (cases) to patients with treatment success without relapse (controls)\n§N = 54 for cases, 190 for controls, as marital status was not consistently recorded in the charts\n*Patients with no evidence of comorbid conditions in chart review were combined with those for whom the absence of cormorbidities was expressly noted.\n‡N = 70 for cases, 237 for controls, as x-rays were not performed and recorded for all patients.\n√Patients with no evidence of tobacco, alcohol, or illicit drug use in chart review were combined with those for whom the absence of tobacco, alcohol, or illicit use was expressly noted.\n†N = 62 for cases (15 of 27 patients with default did not have sputum smear conversion data, 3 patients with relapse were culture-positive, smear-negative cases); N = 251 for controls.\n**N = 65 for cases (12 default patients, 1 failure, and 2 relapse patients did not have 2-month weight data), 263 for controls at 2 months; n = 50 for cases, 257 for controls after 5-6 months of treatment\nIn a multivariable logistic regression analysis, patients undergoing initial treatment for TB were at higher risk of a composite endpoint of failure, default, or relapse within two years if they were male (OR = 2.29, 95% CI 1.10-4.77), failed to have sputum smear conversion to negative by 3 months of treatment (OR 7.14, 95% CI 4.04-13.2), or required hospitalization during treatment (OR 2.09, 95% CI 1.01-4.34). There was a trend towards increased risk of this composite endpoint among those with poor weight gain (less than 10% by two months of treatment) or missed doses during the intensive phase of treatment, but these differences did not reach statistical significance. Odds of the composite outcome were 4% lower for each 1 kilogram increase in weight at treatment initiation (OR 0.96, 95% CI 0.93-0.99). 
Alcohol use and HIV were uncommon (2% and <1%, respectively).\nRisk factors appeared to differ by subgroup, though analyses were limited by sample size (Table 2). Risk factors for treatment default included male sex, substance use, missed doses during the intensive phase, and hospitalization. Risk factors for failure or relapse were harder to identify.\nRisk factors for relapse, failure, or default from initial TB treatment, a subgroup analysis.\n** Three-month sputum smear conversion to negative was negative in all failure patients so could not be included in logistic regression models.\n†Tobacco, alcohol, or illicit drug use.", "In this study of 291 patients undergoing retreatment for TB, outcomes differed considerably by group -- 74% of patients with relapse, 48% of patients with failure, and 41% of patients with default had treatment success -- similar to previous studies[5,7]. Default from retreatment was extremely common at 25%, higher than previous country-wide estimates[5]. This may reflect temporal changes in treatment completion but more likely represents differences in study populations, as we focused on TB \"hot spots\", or urban centers with comparatively high TB incidence. Recent studies have demonstrated that, in urban settings, adherence is linked to patient knowledge about TB and provision of disease-specific education by the health care provider to the patient[8]. In busy urban clinics, time for education may be limited. Default from retreatment was most frequent among those who had defaulted from initial treatment, while failure was most common among those with previous failure. Although retreatment guidelines are often the same for patients with failure, default from, or relapse after initial treatment,[4] these results suggest that groups may benefit from different management strategies[9,10]. For example, treatment failure is commonly due to drug resistance, while recurrence may be due to poor adherence, high mycobacterial burden (such as in cavitary disease), or exogenous reinfection. Default patients may require intensified case management and education, rather than more intensive treatment.\nThe present study shows that, even when available, drug susceptibility testing is underutilized. It was performed in only 10% of retreatment patients. All 3 failure patients who underwent DST testing had MDR-TB, while 3 of 22 of relapse patients and 0 of 5 default patients tested did. While these DST results were only available for three failure patients and, therefore, not representative, these data and those from other studies suggest that MDR risk is not uniform among retreatment subgroups, with increased prevalence of MDR among patients with initial treatment failure[2,11-13]. According to a population-based study conducted among retreatment cases in Morocco, 12.2% had MDR-TB, but the study did not divide retreatment patients into failure, relapse, or default subgroups[6]. Taken together, these findings support use of DST in all retreatment patients, earlier DST testing in those with clinical and microbiological indications of impending treatment failure, and use of second-line drugs for retreatment of patients with initial treatment failure until DST results are known. 
In Morocco, DOTS coverage is 100%, and concerted efforts to dramatically enhance DST use are underway.\nPublished medical risk factors for failure or relapse include HIV infection, diabetes mellitus, low body weight, cavitation on chest x-ray, high bacterial burden, short treatment duration, drug resistance, and positive culture after two months of treatment[14-16]. Sociodemographic factors include unemployment, drug abuse, alcoholism, smoking, and poor treatment adherence. Treatment default is known to be associated with substance abuse, foreign birth, male gender, previous default, low socioeconomic status, psychiatric illness, unemployment, migration, side effects, [17,18] long distance to the clinic, social stigma, and poorly-implemented DOTS but, of course, differ by setting [16,19-21]. In our study population, HIV infection is rare; among TB patients, less than 1% are HIV-infected (unpublished data, Morocco NTP). Further, alcohol use in Morocco is uncommon, and smoking is extremely uncommon among women. Moreover, in the urban clinics studied, the majority of patients are non-immigrants, the clinics are geographically accessible, and DOTS coverage is 100%. Thus, many traditional risk factors for poor TB treatment outcomes are less prominent in Morocco, making it harder to prospectively identify patients at risk. However, continued sputum smear positivity after 3 months of treatment is a strong predictor of subsequent poor outcomes, and should prompt DST testing in all patients. As missed treatment doses may herald impending default, enhanced communication between the local clinics that dispense TB treatment and physicians at the regional health centers that prescribe it may be one country-specific strategy to help pinpoint those individuals who are missing doses and are at high risk of defaulting altogether. Small sample sizes limited our ability to evaluate subgroups, but even so, we were able to identify male sex, substance use (tobacco, alcohol, or illicit drug use), and missed doses during the intensive phase as likely risk factors for treatment default. Higher odds of hospitalization probably reflected the need for hospitalization to ensure adherence rather than increased disease severity. Further exploring risk factors for treatment default may help control programs identify those likely to benefit from targeted interventions such as health education, substance abuse counseling, enhanced tracking, or reinforcement of DOTS supervision[22,23].\nAs a retrospective chart review, our risk factor evaluation was limited by the availability of data present in clinical charts. Information about tobacco, alcohol, and illicit drug use was not routinely recorded, for example. Classifying those with missing substance use data as nonusers likely biased analyses of this risk factor toward the null; however, estimates of smoking, tobacco, and illicit drug use were similar to national substance use statistics[24]. Also, in cases of recurrence, it was not possible to distinguish between relapse and reinfection, so we limited our risk factor analysis to those who had received initial TB treatment within two years of recurrence and were, thus, more likely to have true relapse. Our ability to identify independent risk factors in subgroup analyses was limited by small sample sizes; questions regarding risk factors in these subgroups would best be answered in larger, prospective studies. 
Finally, DST testing was not universally performed in retreatment patients, so selection bias is possible, as clinicians are more likely to send those at high risk of resistance for testing. In our study, 20% of retreatment patients with DST had MDR-TB, compared with 12% in a national prevalence survey[6].", "Patients presenting for TB retreatment - those with relapse, failure of initial treatment, or default -- are often grouped together and treated with a standard Category II retreatment regimen. However, these groups have distinct demographic and clinical characteristics, important differences in retreatment outcomes, and likely have different rates of drug resistant M. tuberculosis. Default from retreatment is common in high-incidence urban centers in Morocco, pointing to the need for strategies to address adherence. DST is essential for identifying retreatment patients with drug resistance, but even when available, it is underutilized, likely due to practical constraints. Preventing the need for retreatment in the first place is the best strategy given the individual and public health consequences of poor initial TB treatment outcomes, so strategies to identify and address country-specific risk factors are warranted to maximize treatment success.", "The authors declare that they have no competing interests.", "KED designed the study and helped with data analysis, interpretation of results, and drafting of the manuscript. OL assisted with access to and interpretation of laboratory drug susceptibility testing results. IG conceived of the study and assisted with study design, interpretation of results, and editing of the manuscript and provided invaluable clinical insight. JK helped with data collection and study design and implementation. MDE assisted with access to laboratory drug susceptibility testing results. IC performed the data analysis and assisted with interpretation of results. REA assisted with study design and study implementation and provided oversight of key study personnel at INH. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/140/prepub\n" ]
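The outcome definitions used for the initial treatment episode (failure, relapse, default, success) amount to a small decision rule. The following sketch in Python simply encodes those definitions with hypothetical field names to make the classification logic explicit; it is illustrative only and does not cover outcomes the definitions above do not address (such as death or transfer out).

from dataclasses import dataclass
from typing import Optional

@dataclass
class InitialTreatmentRecord:
    smear_positive_at_5_months: bool      # acid-fast bacilli smear status after 5 months of Category I treatment
    longest_interruption_months: float    # longest continuous interruption of treatment
    completed_full_course: bool           # 6 months, or 9 months for severe disease
    recurrent_tb_after_success: bool      # recurrent TB after documented treatment success

def classify_initial_outcome(r: InitialTreatmentRecord) -> Optional[str]:
    # Definitions as quoted from the national guidelines above.
    if r.longest_interruption_months >= 2:
        return "default"                  # interruption of treatment for >=2 consecutive months
    if r.smear_positive_at_5_months:
        return "failure"                  # still smear-positive after 5 months of continuous treatment
    if r.completed_full_course:
        return "relapse" if r.recurrent_tb_after_success else "success"
    return None                           # not classifiable from these fields alone

print(classify_initial_outcome(InitialTreatmentRecord(False, 0, True, True)))    # relapse
print(classify_initial_outcome(InitialTreatmentRecord(True, 0.5, False, False))) # failure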
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
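As a complement to the statistical analysis described for this study (chi-square comparisons and multivariable logistic regression, performed by the authors in EpiInfo and STATA), the sketch below shows in Python how the same kinds of analyses could be set up. The contingency table uses the retreatment outcome counts reported above; the regression part uses synthetic, hypothetical data and illustrative variable names, because patient-level records are not available here, so it illustrates the type of model described rather than reproducing the authors' analysis.

import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.formula.api as smf

# 1. Retreatment success by reason for retreatment (counts taken from the results above):
#    relapse 173/232 successful, failure 10/21, default 15/38.
outcome_table = np.array([
    [173, 232 - 173],   # relapse: success, no success
    [10, 21 - 10],      # failure
    [15, 38 - 15],      # default
])
chi2, p, dof, _ = chi2_contingency(outcome_table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p < 0.01, as reported

# 2. Multivariable logistic regression for the composite endpoint
#    (synthetic, hypothetical data; variable names are illustrative only).
rng = np.random.default_rng(0)
n = 346  # 80 cases + 266 controls, as in the nested case-control analysis
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "no_smear_conversion_3m": rng.integers(0, 2, n),
    "hospitalised": rng.integers(0, 2, n),
    "baseline_weight_kg": rng.normal(55, 8, n),
})
# Simulate the composite outcome (failure, default, or relapse within two years)
linear_predictor = (-1.5 + 0.8 * df["male"] + 2.0 * df["no_smear_conversion_3m"]
                    + 0.7 * df["hospitalised"] - 0.04 * (df["baseline_weight_kg"] - 55))
df["composite_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-linear_predictor))).astype(int)

model = smf.logit(
    "composite_outcome ~ male + no_smear_conversion_3m + hospitalised + baseline_weight_kg",
    data=df,
).fit(disp=False)
print(np.exp(model.params))  # odds ratios; the weight coefficient will be roughly 0.96 per kg

For interpretation, the reported OR of 0.96 per kilogram of baseline weight implies that, holding the other covariates fixed, a patient 10 kg heavier at treatment initiation would have roughly 0.96^10 ≈ 0.66 times the odds of the composite outcome.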
Multidisciplinary cancer care in Spain, or when the function creates the organ: qualitative interview study.
21356063
The Spanish National Health System recognised multidisciplinary care as a health priority in 2006, when a national strategy for promoting quality in cancer care was first published. This institutional effort is being implemented on a co-operative basis within the context of Spain's decentralised health care system, so a high degree of variability is to be expected. This study aimed to explore the views of professionals working with multidisciplinary cancer teams and to identify which barriers to effective teamwork should be considered to ensure implementation of health policy.
BACKGROUND
Qualitative interview study with semi-structured, one-to-one interviews. Data were examined inductively, using content analysis to generate categories and an explanatory framework. Thirty-nine professionals performing their tasks, wholly or in part, in different multidisciplinary cancer teams were interviewed. The breakdown of participants' medical specialisations was as follows: medical oncologists (n = 10); radiation oncologists (n = 8); surgeons (n = 7); pathologists or radiologists (n = 6); oncology nurses (n = 5); and others (n = 3).
METHODS
Teams could be classified into three models of professional co-operation in multidisciplinary cancer care, namely, advisory committee, formal co-adaptation and integrated care process. The following barriers to implementation were identified: the existence of different gateways for the same patient profile; variability in the development and use of clinical protocols and guidelines; the role of the hospital executive board; outcomes assessment; and the recording and documenting of clinical decisions in a multidisciplinary team setting. All of these play a key role in the development of cancer teams and their ability to improve quality of care.
RESULTS
Cancer team development results from a specific adaptation to the hospital environment. Nevertheless, health policy plays an important role in promoting an organisational approach that changes the way in which professionals develop their clinical practice.
CONCLUSION
[ "Decision Making", "Humans", "Interdisciplinary Communication", "Interviews as Topic", "Models, Organizational", "National Health Programs", "Oncology Service, Hospital", "Patient Care Team", "Spain" ]
3053251
null
null
Methods
[SUBTITLE] Study design and setting [SUBSECTION] A qualitative research method was used in order to describe health professionals' points of view and experiences of multidisciplinary cancer care, and explore the barriers to be considered in future policy development. As an MDT approach in cancer had not been previously studied in the context of the Spanish health system, a pilot test was deemed appropriate. The aim was to identify a set of analytical categories which, along with a review of the literature, would define a theoretical basis for the interviews. The pilot scheme was undertaken by three teams at different hospitals in Catalonia, focusing on different tumours (breast, lung and colorectal). In order to ensure the relevance and appropriateness of the categories yielded by the test, health professionals from different disciplines were then asked to give their considered opinion. The methodology of analysis consisted of semi-structured interviews conducted in situ from October 2008 to January 2009 with professionals involved in the diagnosis and treatment of cancer patients at public hospitals of differing levels of importance, situated in the most populated regions of Spain, namely, Andalusia, Catalonia, Madrid, Galicia and Valencia. Participants were interviewed by an experienced, qualitative researcher. A qualitative research method was used in order to describe health professionals' points of view and experiences of multidisciplinary cancer care, and explore the barriers to be considered in future policy development. As an MDT approach in cancer had not been previously studied in the context of the Spanish health system, a pilot test was deemed appropriate. The aim was to identify a set of analytical categories which, along with a review of the literature, would define a theoretical basis for the interviews. The pilot scheme was undertaken by three teams at different hospitals in Catalonia, focusing on different tumours (breast, lung and colorectal). In order to ensure the relevance and appropriateness of the categories yielded by the test, health professionals from different disciplines were then asked to give their considered opinion. The methodology of analysis consisted of semi-structured interviews conducted in situ from October 2008 to January 2009 with professionals involved in the diagnosis and treatment of cancer patients at public hospitals of differing levels of importance, situated in the most populated regions of Spain, namely, Andalusia, Catalonia, Madrid, Galicia and Valencia. Participants were interviewed by an experienced, qualitative researcher. [SUBTITLE] Recruitment [SUBSECTION] A total of thirty-nine health professionals were recruited. They were deemed eligible in any case where they performed their professional task, wholly or in part, in an MDT. For the selection of informants and composition of the theoretical sample, three inclusion criteria were established. To ensure that the views of different health professionals could be explored, the first criterion laid down that five medical specialisations had to be covered in each region: these were medical oncology, radiation oncology, surgery, pathology or radiology, and oncology nursing (Table 1). The second criterion reinforced a systematic approach to the phenomenon under study, by insisting on the presence of three opinions from each region (table 2). 
The third criterion took the form of a restriction on interviewing professionals belonging to the same team, with the aim of preventing biased versions on the subject of study and so contribute to the internal validity of our research. Detailed breakdown of the 39 professionals interviewed Profiles required for selection of key informants A total of thirty-nine health professionals were recruited. They were deemed eligible in any case where they performed their professional task, wholly or in part, in an MDT. For the selection of informants and composition of the theoretical sample, three inclusion criteria were established. To ensure that the views of different health professionals could be explored, the first criterion laid down that five medical specialisations had to be covered in each region: these were medical oncology, radiation oncology, surgery, pathology or radiology, and oncology nursing (Table 1). The second criterion reinforced a systematic approach to the phenomenon under study, by insisting on the presence of three opinions from each region (table 2). The third criterion took the form of a restriction on interviewing professionals belonging to the same team, with the aim of preventing biased versions on the subject of study and so contribute to the internal validity of our research. Detailed breakdown of the 39 professionals interviewed Profiles required for selection of key informants [SUBTITLE] Interviews [SUBSECTION] A semi-structured list of questions ensured that critical points were covered in every interview. To elicit beliefs and experiences, participants were given the necessary flexibility to enable them to volunteer information on topics that were relevant to them. The selected health professionals were interviewed on a one-to-one basis for 60-100 minutes at their hospital offices. Interviews started with a general question on the cancer team's approach and ended with a question on how multidisciplinary care could be promoted by health policy measures. No notes were taken by the researcher during the interview; instead, all interviews were audio-taped and transcribed in full by the researcher. These data were then compiled into a documentary record and rendered anonymous to protect confidentiality. Every transcription was checked against its corresponding audio record and accuracy was found to be good. A preliminary analysis was conducted after each interview. A semi-structured list of questions ensured that critical points were covered in every interview. To elicit beliefs and experiences, participants were given the necessary flexibility to enable them to volunteer information on topics that were relevant to them. The selected health professionals were interviewed on a one-to-one basis for 60-100 minutes at their hospital offices. Interviews started with a general question on the cancer team's approach and ended with a question on how multidisciplinary care could be promoted by health policy measures. No notes were taken by the researcher during the interview; instead, all interviews were audio-taped and transcribed in full by the researcher. These data were then compiled into a documentary record and rendered anonymous to protect confidentiality. Every transcription was checked against its corresponding audio record and accuracy was found to be good. A preliminary analysis was conducted after each interview. [SUBTITLE] Analysis [SUBSECTION] Interview data were examined inductively, using content analysis to generate categories and an explanatory framework. 
Grounded theory methodology was considered appropriate for describing the organisation and culture of health professionals belonging to multidisciplinary cancer teams. As our study was theoretical and aimed at incorporating the organisational context in which cancer teams practised, we used axial coding, as described by Strauss and Corbin [9]. Data were electronically coded with ATLAS.ti [10]. Whereas the thematic analysis enabled language use to be understood and professionals' beliefs to be communicated, the method of constant comparison ensured that recurring views and experiences were captured. The coding process and emerging themes were derived, on the one hand, from a priori issues drawn from the pilot test and previous research, and on the other, from issues raised by participants. Examples of codes were "nature of agreements", "access of a patient to the team during his/her journey" and "impact of management on care performance". The consistency of coding and interpretation was checked during analysis by reviewing the transcripts at different moments in time. This process allowed the data to be labelled and indexed for subsequent exploration and identification. Accordingly, a thematic framework based on models of, and barriers to, effective multidisciplinary teamwork was identified. A specific effort was made to capture this stage of interpretation, i.e., by mapping, creating typologies [11] and finding associations among themes. Moreover, the preliminary findings were discussed at a workshop (held by the scientific societies), to which some of the health professionals interviewed and a number of social science researchers were invited. These discussions were useful for reinforcing the team types identified for the Spanish health system and clarifying some barrier-related aspects.
[SUBTITLE] Ethical considerations [SUBSECTION] The data for this study were based on professionally conducted interviews which, other than the consent of the professionals themselves, required no formal ethical approval from any research committee. However, as the subject of our study touches on hospital organisation and on relations between specialisations and health professionals, confidentiality was of the essence. Accordingly, the strategy pursued was to prioritise a selection of participants whose opinion on these issues was deemed crucial, and so contact was made with regional cancer control policymakers. The implementation of the study began through contact with the heads of the regional cancer plans, who proposed a short list of health professionals selected in accordance with our criteria (see Table 2). Likewise, the resulting list was endorsed by the scientific societies of medical and radiation oncology. The health professionals concerned were then sent a letter of invitation explaining the research goals and a confidentiality agreement. On being advised by telephone and receiving an assurance as to the confidentiality of any information provided, all the professionals selected for study consented to the interviews being recorded. The consent form was formally signed at the meeting, with any doubts as to the designated purpose and method of research being discussed with the professionals. No one refused the invitation to participate.
[ "Background", "Study design and setting", "Recruitment", "Interviews", "Analysis", "Ethical considerations", "Results", "1.- Advisory committee", "2.- Formal co-adaptation", "3.- Integrated care process", "Critical factors in multidisciplinary cancer team development", "Existence of different gateways for the same patient profile", "Variability in the development and use of clinical protocols and guidelines", "The role of the hospital executive board", "Outcome assessment", "Recording and documenting of clinical decisions in an MDT setting", "Discussion", "Impact on decision making", "Strengths and limitations", "Conclusions", "List of abreviations", "Competing interests", "Ethical approval", "Authors' contributions", "Pre-publication history" ]
[ "The increasing complexity of cancer care makes organisation of clinical decision-making one of the key elements in high-quality cancer care [1]. This raises the question of how good professional co-operation is to be achieved in day-to-day clinical practice. Pre-eminent among the different answers to this question is multidisciplinary teamwork, an approach that has emerged in parallel with the accelerated process of specialisation in cancer among health professionals. Recent reviews of the literature have associated a multidisciplinary approach to cancer care with better adherence to clinical practice guidelines [2], increased patient access to clinical trials [3] and enhanced co-ordination of hospital services [4,5]. These outcomes, together with the role assigned to multidisciplinary care in various cancer plans, indicate its strategic role for health systems in general and for quality of care in particular in the organisation of cancer services, something that was highlighted at the Lisbon round-table held under the Portuguese EU Presidency (2007) [6].\nThe development of multidisciplinary cancer care involves a redistribution of health professionals' tasks when it comes to clinical decision-making and patient care processes. The development of specific organisational frameworks for dealing with different types of cancer care must be seen in the context of a services-based hospital structure. This process modifies a highly sensitive aspect, i.e., the way in which professionals interact and are co-ordinated. The following are the main aspects defining a multidisciplinary approach in the organisation of cancer care:\n- professional specialisation by disease (including diagnostic disciplines);\n- standardisation of the process and clinical criteria in guidelines and pathways;\n- redistribution of tasks at a multidisciplinary-team level; and,\n- trend towards identifying and allocating specific resources according to disease/organ\nThe Spanish National Health System (SNHS) recognised multidisciplinary care as being a health priority as far back as 2006, when a national strategy for promoting quality in cancer care was first published [7]. As one of its basic principles, this Cancer Strategy document stipulates that cancer patients should be diagnosed and treated in the context of a multidisciplinary team (MDT), and goes on to identify tumour boards as the main mechanism for deciding and planning therapy. This institutional effort is being implemented on a co-operative basis within the context of Spain's decentralised health care system. The priority assigned by the respective regional health services to multidisciplinary care since health service management became operationally decentralised (2002) and the specific mode of organisation introduced at each hospital, together determine the starting point of the implementation of an MDT approach. A high degree of variability is thus to be expected.\nThis study addresses the question of how multidisciplinary cancer care has been implemented and the critical factors linked to this process, with special stress laid on the knowledge of policy required to ensure effective team work. The study was undertaken against a common backdrop of a growing cancer care burden and a rapidly expanding range of potentially effective treatments, which involves \"therapeutic dilemmas about treatment options\" [8]. 
A qualitative approach was chosen in order to better understand the perceptions and beliefs of all the professional partners involved at the different public hospitals spread across the nation's various health care regions.
[SUBTITLE] Results [SUBSECTION] We identified three models of development of multidisciplinary cancer care into which all teams could be classified (Table 3).
Their internal consistency means that they can be seen as models of co-operation and, despite being general categories, any given team model may well contain elements of the other two. Moreover, the qualitative classification described here may even occur within the same hospital for different teams, with each of these assuming responsibility for a different tumour.

Table 3. Models of co-operation in multidisciplinary cancer care

Rather than targeting specific forms of organisation (tumour board, cancer unit), this approach focuses instead on team capabilities, based on each team's work method and the overall scope (breadth and depth) of the tasks performed, elements of analysis that emerged during the study and indicate the nature of interaction among professionals. While all cancer teams fulfilled the role of assessing patients and complying with and updating clinical protocols, obvious differences in their respective abilities to achieve quality of care were nevertheless observed.

[SUBTITLE] 1. Advisory committee [SUBSECTION] This is a group of professionals, largely made up of specialists in different therapeutic fields, which meets regularly on an informal basis to discuss cases considered clinically complex. Since the patients may already have received some of the treatments (usually surgery), the multidisciplinary meeting is aimed at referring them to other professionals for further treatment. This approach implies rigorous respect for the autonomy of the clinician and for overlapping boundaries between the team and the multidisciplinary meeting: patient assessment is made without other health care performance considerations. Judging by our results, this model continues to enjoy an important presence in the system (40%).

[SUBTITLE] 2. Formal co-adaptation [SUBSECTION] Owing to the high degree of interaction (mutual adaptation) among the professionals involved, consensus plays a key role in this model. The team acts as the reference framework for professionals, who share their views on the diagnosis, treatment and monitoring of a specific cancer type. The meeting is open to all professionals involved in patient management, and it is here that the roles of tumour board co-ordinator and chair appear. The key factor for fostering such an approach is agreement on the need for joint decision-making to precede application of any treatment, and for all cases to be dealt with at the multidisciplinary meeting. Both aspects are hampered, however, by hospital service inertia when it comes to disease management. This formula accounts for half (50%) of all MDT meetings in the health care system.
[SUBTITLE] 3. Integrated care process [SUBSECTION] The teams that work under this model share wide aims on patient management, including co-ordination of clinical research and economic evaluation of treatment. As this model provides teams with early access to patients, the latter's preferences, along with knowledge of their co-morbidities and psychosocial context, are incorporated into the multidisciplinary discussions. This occurs through systematic follow-up of patients throughout their journey, from suspicion of cancer, to diagnosis, therapeutic decision-making and follow-up. The presence of professional team roles has an impact on the entire care process and on progress towards achieving seamless care. The hospital executive board plays a pivotal role by consolidating meeting times and ensuring health professionals' attendance as well as their commitment to the MDT. This model is not frequent in the system (10%).

[SUBTITLE] Critical factors in multidisciplinary cancer team development [SUBSECTION] For many clinicians, development of multidisciplinary care has involved a cultural change, as can be seen from the pathway by which recommendations proposed by cancer team members for the clinical management of patients have gradually become binding decisions. The key critical factors identified for this change are as follows:

[SUBTITLE] Existence of different gateways for the same patient profile [SUBSECTION] Many clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis.
Although a shared clinical protocol on such patients for the whole hospital, plus an agreement to submit the patient to the MDT meeting, would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer, or to gastroenterology for colorectal cancer, in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspected cases has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the "property" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with a high risk of cancer should be referred and who the specialists of reference are. Moreover, where the department taking on the gateway unification of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.

"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies." (Breast surgeon)

"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable." (Medical oncologist)
[SUBTITLE] Variability in the development and use of clinical protocols and guidelines [SUBSECTION] Evidence-based decisions are a source of concern to professionals, and the updating of clinical protocols by the team reflects this concern. Many clinicians felt, however, that this goal was conditioned by the implementation and dissemination of cancer clinical guidelines in the Spanish health system. They argued that the absence of common guidelines for the whole country, and the lack of co-ordination strategies for implementing the few that did exist, resulted in reduced use and a lack of systematic assessment of existing levels of adherence.
Owing to this perceived situation, clinical protocols at a hospital level are very often based on foreign guidelines, and efforts made to produce Spanish ones are of little use. This in turn leads to three common situations which impact at a team level. Firstly, hospitals that refer cases (e.g., because of their clinical complexity) have protocols based on different guidelines that are not standardised across the health care system. Secondly, multidisciplinary cancer care displays different levels of development, so that patients in one hospital may be referred to a specific department, but not to the tumour board, in another, the point here being the absence of pre-specified criteria for referral among levels of care. Thus, some decisions are made without the scientific consensus of an MDT. Finally, a cancer team might change the original treatment plan for a patient referred from a lower-level hospital. This was a concern voiced by several clinicians, since decisions sometimes tend to differ widely, causing confusion and a loss of trust on the patient's part. This perception was not shared by health professionals who work in cancer networks.

"When a surgeon comes along saying that he has operated on a given patient without the consensus of the committee but -according to him- this course of action was 'in line with the evidence'... this is unacceptable. It's an issue to be addressed by the respective cancer plans. When guidelines are reviewed, this should be the starting point, i.e., the tumour committee report should be seen before the surgical report." (Radiation oncologist)

[SUBTITLE] The role of the hospital executive board [SUBSECTION] Most health professionals believed that, while they had not been hampered by the hospital executive board, neither had they been specifically supported to better organise clinical pathways and MDT activity. In their view, the main problem was that MDT work time was not recognised as a health care activity (or "real work", to quote them). Half of those interviewed felt that hospital managers knew little about their tasks, goals, level of involvement and management problems. These professionals identified two clear priorities for hospital executive boards, namely: to protect multidisciplinary meetings and work time; and to promote new professional roles, such as nurse case-managers or administrative support. Those with management responsibilities stated that cancer teams were not reflected in the organisational chart but were very important in terms of quality of care, and more innovative and responsive insofar as health care organisation achievement was concerned.

"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. Yet such attendance should be accorded health care and scientific value, i.e., so many hours correspond to committee work, which is equal to time spent seeing patients in a medical practice." (Colorectal surgeon)

"Your personal efforts are not appreciated, regardless of whether you've taken part in drawing up a protocol or whether you've devoted one day or three weeks to the job... And, as no stress is laid on the importance of teamwork, there are pockets of resistance that don't change." (Radiologist)
[SUBTITLE] Outcome assessment [SUBSECTION] The main goal of any multidisciplinary cancer team is to enhance the effectiveness of diagnosis and treatment of a specific disease. Assessment of the MDTs that had been put in place revealed relevant differences among the views held by the professionals themselves. To most of them, evaluation was the ultimate consequence of the efforts of some hospital units or specialised professionals that regularly collect clinical data, or a study aimed at assessing MDT outcomes. Others, in contrast, described process evaluation involving initial inter-departmental consensus on indicators, development of a specific data-collection methodology, and periodic analysis of results using a shared database. Above all, this situation defines different approaches to the possibility of taking clinical outcomes and process indicators and linking these to actions aimed at improving cancer care. There were two recurring arguments associated with possible ways of achieving organisational change: the first centred on the key role to be played by the health care service in reaching a technical definition of, and agreement on, a minimum set of indicators for the entire hospital system and a proposed level of transparency vis-à-vis outcomes; the second addressed the pervasive "culture of efficiency" currently prevailing in hospital departments, insisting on the need to limit its influence and instead give increased relevance to clinical and process indicators. An experience that has had remarkable success in various health care regions, and has served as the basis for the evaluation of each MDT, is the implementation of a fast-track colorectal, breast and lung cancer diagnosis and treatment programme, a driving force in promoting integration among services and MDTs.
Its implementation has shown the key role that health care policy could play in enhancing the organisation of cancer care.

"The problem is that each specialisation has developed its own indicators of toxicity, clinical results, etc. There should be at least one database in which the team's outcomes are reflected. This is something that the hospital ought to demand. We could then say in real time, 'this, or that, is what's happening in prostate cancer'." (Medical oncologist)

"Yesterday I saw 37 patients: I can't devote myself to recording that much information in the database without any support... It's difficult for everyone's survival to be ascertained under such conditions. We tend to move within a 'dead' database context, that's to say, we get together at the end of the year to see how things have gone..." (Colorectal surgeon)
[SUBTITLE] Recording and documenting of clinical decisions in an MDT setting [SUBSECTION] The more formalised MDTs become, the more important easy access to, and transparency of, decisions and the rationale behind them become. The reason for this is that recording decisions reflects the outcome of consensus building and the value that professionals attribute to their work. Half of all clinicians interviewed stated that they noted their decisions on the electronic clinical record. Not only does such action clearly define the end-point of the decision-making process, it also renders it more transparent, something that, in turn, generates a positive perception of the entire hospital environment. In contrast, there are many cases where team decisions do not extend beyond the strict limits of the tumour board, as shown by the first comment below. The major weaknesses in recording clinical decisions stem from the lack of standardisation achieved in tumour-board minute-taking, due to the absence of common forms, failure to identify clear recording responsibilities and, very often, lack of administrative support. What this tends to mean is that only the decisions affecting off-protocol patients are recorded, thus hindering the possibility of establishing a reference database for a specific cancer. One last very important aspect for any team is the recording of decisions made in those cases where there is no consensus. Though infrequent, this situation is thought to play a relevant role in terms of medico-legal implications.

"In one of the hospital teams, there are professionals who find it difficult to accept consensus-based decisions. Accordingly, we consider it appropriate that, in addition to the decision being recorded in the digital clinical history, a file should be circulated to all team members so that decisions taken with respect to all patients are 'known' to them..." (Medical oncologist)

"We keep a number of formal records, I mean to say that there are several specialists who record details of patients in their files... but there is no single overall record." (Nurse case manager)

"There is an element of administration (which should be the task of a secretary) entailed in the drafting and signing of minutes. This is generally performed by a physician, but if he's absent for any reason, then no-one does it. It's always the same old story: it's all a matter of personal involvement." (Pathologist)
The major weaknesses in recording clinical decisions stem from the lack of standardisation achieved in tumour-board Minute-taking, due to absence of common forms, failure to identify clear recording responsibilities and, very often, lack of administrative support. What this tends to mean is that only the decisions affecting of-protocol patients are recorded, thus hindering the possibility of establishing a reference database for a specific cancer. One last very important aspect for any team is the recording of decisions made in those cases where there is no consensus. Though infrequent, this situation is thought to play a relevant role in terms of medico-legal implications.\n\"In one of the hospital teams, there are professionals who find it difficult to accept consensus-based decisions. Accordingly, we consider it appropriate that, in addition to the decision being recorded in the digital clinical history, a file should be circulated to all team members so that decisions taken with respect to all patients are 'known' to them...\" (Medical oncologist)\n\"We keep a number of formal records, I mean to say that there are several specialists who record details of patients in their files... but there is no single overall record.\" (Nurse case manager)\n\"There is an element of administration (which should be the task of a secretary) entailed in the drafting and signing of Minutes. This is generally performed by a physician, but if he's absent for any reason, then no-one does it. It's always the same old story: it's all a matter of personal involvement.\" (Pathologist)\nFor many clinicians, development of multidisciplinary care has involved a cultural change, as can be seen from the pathway which resulted from recommendations that were proposed by cancer team members for clinical management of patients and have gradually become binding decisions. The key critical factors identified for this change are as follows:\n[SUBTITLE] Existence of different gateways for the same patient profile [SUBSECTION] Many clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis. Although a shared clinical protocol on such patients for the whole hospital plus an agreement to submit the patient to the MDT meeting would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer or to gastroenterology for colorectal cancer in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspects has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the \"property\" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with high risk of cancer should be referred and who the specialists of reference are. 
Moreover, where the department taking on the gateway unification process of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.\n\"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies.\" (Breast surgeon)\n\"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable.\" (Medical oncologist)\nMany clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis. Although a shared clinical protocol on such patients for the whole hospital plus an agreement to submit the patient to the MDT meeting would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer or to gastroenterology for colorectal cancer in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspects has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the \"property\" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with high risk of cancer should be referred and who the specialists of reference are. Moreover, where the department taking on the gateway unification process of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.\n\"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies.\" (Breast surgeon)\n\"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable.\" (Medical oncologist)\n[SUBTITLE] Variability in the development and use of clinical protocols and guidelines [SUBSECTION] Evidence-based decisions are a source of concern to professionals, and the updating of clinical protocols by the team reflects this concern. Many clinicians felt, however, that this goal was conditioned by the implementation and dissemination of cancer clinical guidelines in the Spanish Health System. 
They argued that the absence of common guidelines for the whole country and the lack of co-ordination strategies for implementing the few that did exist resulted in reduced use and a lack of systematic assessment of existing levels of adherence. Owing to this perceived situation, clinical protocols at a hospital level are very often based on foreign guidelines, and efforts made to produce Spanish ones are of little use. This in turn leads to three common situations which impact at a team level. Firstly, hospitals that refer cases (e.g., because of their clinical complexity) have protocols based on different guidelines that are not standardised across the health care system. Secondly, multidisciplinary cancer care displays different levels of development, so that patients in one hospital may be referred to a specific department, but not to the tumour board, in another, the point here being the absence of pre-specified criteria for referral among levels of care. Thus, some decisions are made without the scientific consensus of an MDT. Finally, a cancer team might change the original treatment plan for a patient referred from a lower level hospital. This was a concern voiced by several clinicians, since decisions sometimes tend to differ widely, causing confusion and lack of trust in the patient. This perception was not shared by health professionals who work for cancer networks.\n\"When a surgeon comes along saying that he has operated on a given patient without the consensus of the committee but -according to him- this course of action was 'in line with the evidence'... this is unacceptable. It's an issue to be addressed by the respective cancer plans. When guidelines are reviewed, this should be the starting point, i.e., the tumour committee report should be seen before the surgical report.\" (Radiation oncologist)\nEvidence-based decisions are a source of concern to professionals, and the updating of clinical protocols by the team reflects this concern. Many clinicians felt, however, that this goal was conditioned by the implementation and dissemination of cancer clinical guidelines in the Spanish Health System. They argued that the absence of common guidelines for the whole country and the lack of co-ordination strategies for implementing the few that did exist resulted in reduced use and a lack of systematic assessment of existing levels of adherence. Owing to this perceived situation, clinical protocols at a hospital level are very often based on foreign guidelines, and efforts made to produce Spanish ones are of little use. This in turn leads to three common situations which impact at a team level. Firstly, hospitals that refer cases (e.g., because of their clinical complexity) have protocols based on different guidelines that are not standardised across the health care system. Secondly, multidisciplinary cancer care displays different levels of development, so that patients in one hospital may be referred to a specific department, but not to the tumour board, in another, the point here being the absence of pre-specified criteria for referral among levels of care. Thus, some decisions are made without the scientific consensus of an MDT. Finally, a cancer team might change the original treatment plan for a patient referred from a lower level hospital. This was a concern voiced by several clinicians, since decisions sometimes tend to differ widely, causing confusion and lack of trust in the patient. 
This perception was not shared by health professionals who work for cancer networks.\n\"When a surgeon comes along saying that he has operated on a given patient without the consensus of the committee but -according to him- this course of action was 'in line with the evidence'... this is unacceptable. It's an issue to be addressed by the respective cancer plans. When guidelines are reviewed, this should be the starting point, i.e., the tumour committee report should be seen before the surgical report.\" (Radiation oncologist)\n[SUBTITLE] The role of the hospital executive board [SUBSECTION] Most health professionals believed that, while they had not been hampered by the hospital executive board, neither had they been specifically supported to better organise clinical pathways and MDT activity. In their view, the main problem was that MDT work time was not recognised as a health care activity (or \"real work\" to quote them). Half of those interviewed felt that hospital managers knew little about their tasks, goals, level of involvement and management problems. These professionals identified two clear priorities for hospital executive boards, namely: to protect multidisciplinary meetings and work time; and to promote new professional roles, such as nurse case-managers or administrative support. Those with management responsibilities stated that cancer teams were not reflected in the organisational chart but were very important in terms of quality of care, and more innovative and responsive insofar as health care organisation achievement was concerned.\n\"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. Yet such attendance should be accorded health care and scientific value, i.e., so many hours correspond to committee work, which is equal to time spent seeing patients in a medical practice.\" (Colorectal surgeon)\n\"Your personal efforts are not appreciated, regardless of whether you've taken part in drawing up a protocol or whether you've devoted one day or three weeks to the job... And, as no stress is laid on the importance of teamwork, there are pockets of resistance that don't change\". (Radiologist)\nMost health professionals believed that, while they had not been hampered by the hospital executive board, neither had they been specifically supported to better organise clinical pathways and MDT activity. In their view, the main problem was that MDT work time was not recognised as a health care activity (or \"real work\" to quote them). Half of those interviewed felt that hospital managers knew little about their tasks, goals, level of involvement and management problems. These professionals identified two clear priorities for hospital executive boards, namely: to protect multidisciplinary meetings and work time; and to promote new professional roles, such as nurse case-managers or administrative support. Those with management responsibilities stated that cancer teams were not reflected in the organisational chart but were very important in terms of quality of care, and more innovative and responsive insofar as health care organisation achievement was concerned.\n\"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. 
Outcome assessment

The main goal of any multidisciplinary cancer team is to enhance the effectiveness of diagnosis and treatment of a specific disease. Assessment of the MDTs that had been put in place revealed relevant differences among the views held by the professionals themselves. To most of them, the ultimate consequence of the efforts of some hospital units or specialised professionals that regularly collect clinical data was evaluation or a study aimed at assessing MDT outcomes. Others, in contrast, described process evaluation involving initial inter-departmental consensus on indicators, development of a specific data-collection methodology, and periodic analysis of results using a shared database. Above all, this situation defines different approaches to the possibility of taking clinical outcomes and process indicators, and linking these to actions aimed at improving cancer care. There were two recurring arguments associated with possible ways of achieving organisational change: the first centred on the key role to be played by the health care service in reaching a technical definition of and agreement on a minimum set of indicators for the entire hospital system and a proposed level of transparency vis-à-vis outcomes; the second addressed the pervasive "culture of efficiency" currently prevailing in hospital departments, insisting on the need to limit its influence and instead give increased relevance to clinical and process indicators. An experience that has had remarkable success in various health care regions, and has served as the basis for the evaluation of each MDT, is the implementation of a fast-track colorectal, breast and lung cancer diagnosis and treatment programme, a driving force in promoting integration among services and MDTs. Its implementation has shown the key role that health care policy could play in enhancing the organisation of cancer care.

"The problem is that each specialisation has developed its own indicators of toxicity, clinical results, etc. There should be at least one database in which the team's outcomes are reflected. This is something that the hospital ought to demand. We could then say in real time, 'this, or that, is what's happening in prostate cancer'." (Medical oncologist)

"Yesterday I saw 37 patients: I can't devote myself to recording that much information in the database without any support... It's difficult for everyone's survival to be ascertained under such conditions. We tend to move within a 'dead' database context, that's to say, we get together at the end of the year to see how things have gone..." (Colorectal surgeon)

Recording and documenting of clinical decisions in an MDT setting

The more formalised MDTs become, the more important easy access to and transparency of decisions and the rationale behind them are. The reason for this is that recording decisions reflects the outcome of consensus building and the value that professionals attribute to their work. Half of all clinicians interviewed stated that they noted their decisions on the electronic clinical record. Not only does such action clearly define the end-point of the decision-making process, it also renders it more transparent, something that, in turn, generates a positive perception of the entire hospital environment. In contrast, there are many cases where team decisions do not extend beyond the strict limits of the tumour board, as shown by the first comment in Excerpt 5. The major weaknesses in recording clinical decisions stem from the lack of standardisation achieved in tumour-board Minute-taking, due to absence of common forms, failure to identify clear recording responsibilities and, very often, lack of administrative support.
What this tends to mean is that only the decisions affecting off-protocol patients are recorded, thus hindering the possibility of establishing a reference database for a specific cancer. One last very important aspect for any team is the recording of decisions made in those cases where there is no consensus. Though infrequent, this situation is thought to play a relevant role in terms of medico-legal implications.

"In one of the hospital teams, there are professionals who find it difficult to accept consensus-based decisions. Accordingly, we consider it appropriate that, in addition to the decision being recorded in the digital clinical history, a file should be circulated to all team members so that decisions taken with respect to all patients are 'known' to them..." (Medical oncologist)

"We keep a number of formal records, I mean to say that there are several specialists who record details of patients in their files... but there is no single overall record." (Nurse case manager)

"There is an element of administration (which should be the task of a secretary) entailed in the drafting and signing of Minutes. This is generally performed by a physician, but if he's absent for any reason, then no-one does it. It's always the same old story: it's all a matter of personal involvement." (Pathologist)

This is a group of professionals, largely made up of specialists in different therapeutic fields, which meets regularly on an informal basis to discuss cases considered clinically complex. Since the patients may already have received some of the treatments (usually surgery), the multidisciplinary meeting is aimed at referring them to other professionals for further treatment. This approach implies rigorous respect for the autonomy of the clinician and for overlapping boundaries between the team and the multidisciplinary meeting: patient assessment is made without other health care performance considerations. Judging by our results, this model continues to enjoy an important presence in the system (40%).

Owing to the high degree of interaction (mutual adaptation) among the professionals involved, consensus plays a key role in this model. The team acts as the reference framework for professionals, who share their views on the diagnosis, treatment and monitoring of a specific cancer type. The meeting is open to all professionals involved in patient management, and it is here that the roles of tumour board co-ordinator and chair appear. The key factor for fostering such an approach is agreement on the need for joint decision-making to precede application of any treatment, and for all cases to be dealt with at the multidisciplinary meeting. Both aspects are hampered, however, by hospital service inertia when it comes to disease management. This formula accounts for half (50%) of all MDT meetings in the health care system.

The teams that work under this model share wide aims on patient management, including co-ordination of clinical research and economic evaluation of treatment. As this model provides teams with early access to patients, the latter's preferences, along with knowledge of their co-morbidities and psychosocial context, are incorporated into the multidisciplinary discussions. This occurs through systematic follow-up of patients throughout their journey, from suspicion of cancer, to diagnosis, therapeutic decision-making and follow-up. The presence of professional team roles has an impact on the entire care process and on progress towards achieving seamless care. The hospital executive board plays a pivotal role by consolidating meeting times and ensuring health professionals' attendance as well as their commitment to the MDT. This model is not frequent in the system (10%).

For many clinicians, development of multidisciplinary care has involved a cultural change, as can be seen from the pathway which resulted from recommendations that were proposed by cancer team members for clinical management of patients and have gradually become binding decisions. The key critical factors identified for this change are as follows:

Existence of different gateways for the same patient profile

Many clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis.
Although a shared clinical protocol on such patients for the whole hospital, plus an agreement to submit the patient to the MDT meeting, would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer or to gastroenterology for colorectal cancer in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspected cases has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the "property" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with high risk of cancer should be referred and who the specialists of reference are. Moreover, where the department taking on the gateway unification process of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.

"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies." (Breast surgeon)

"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable." (Medical oncologist)
The reference to Jean-Baptiste Lamarck (1744-1829) in the title of this article ("the function creates the organ") is a description that is both accurate and useful for understanding the evolution of multidisciplinary cancer care in the Spanish health system. In a manner similar to the apocryphal example of Lamarck's giraffe, which craned its neck to match the height of the trees, the development of cancer teams results from adaptation to the immediate hospital environment accompanied by a lack of policy orientation. While the law lays down that all hospitals are to have cancer boards for the most prevalent diseases, no specific aims, organisational requirements or performance assessment standards have been prescribed. There is a valuable lesson to be learnt in the path taken by the UK National Health Service. The publication of the Calman-Hine report [12] in 1995 highlighted the importance of a successful institutional framework for cancer services.
As Haward [13,14] pointed out, however, the effort to define their performance in detail, including multidisciplinary care [15], without addressing the factors that were to facilitate the transition, resulted in slow, uneven change. The Spanish experience failed to develop this type of learning process. Our study therefore sought to identify the cultural and organisational dimensions that influence the incorporation of planned actions. This approach is reinforced by the EUROCARE-4 study, which identifies the organisational elements in the care process by the latter's ability to improve the survival and quality of life of cancer patients, as evidenced by the differences among European countries [16,17].

Impact on decision making

Implementation of multidisciplinary care involves a redistribution of the responsibilities assumed by the respective professionals, with the aim of developing greater potential for enhancing their joint clinical effectiveness. It is a specific organisational answer to the complexity of cancer care, and enables new approaches to be taken and known problems, such as variability in clinical practice, to be tackled. In this connection, note should be taken of the overall strategy adopted by the National Breast and Ovarian Cancer Centre of Australia, which, along with several other authors [18-20], identifies multidisciplinary care with the standardisation of clinical practice in the health system. Most professionals interviewed by us regard the MDT as the main tool for ensuring that the expertise of each discipline is involved in the clinical decision-making process affecting any given patient. Furthermore, high levels of adherence to clinical protocols improve the efficiency of multidisciplinary meetings by better discerning the transition from simple to complex case discussions.

Our study confirms previous research in underscoring the high degree to which the effectiveness of multidisciplinary interventions is dependent on the organisational context in which cancer care is delivered. Some technical aspects stressed are the need for administrative support for team activity and organisation [8], and for all decisions taken to be entered into the electronic clinical record, since failure to keep a record hinders application of such decisions to the patient, as shown by a study that targeted breast cancer teams in 2006 [21]. Moreover, a treatment-planning register can be helpful when it comes to assessing similar cases or auditing an MDT's performance [22]. Nevertheless, the key factor is communication among team members as a sign of professional team trust. This is the most relevant dimension to be discerned in the above-described models of integration of clinical care. The fact that decisions are binding upon team members, that there is continued participation by specialists in the meetings, that the impact on the entire patient pathway is perceived as positive, that there is a role for clinical co-ordinators and nurse case managers, and that both residents and nurses participate in training, are indicators of the ability of the clinicians involved to abandon a sequential and relatively unco-ordinated model of cancer care and progress instead towards achieving a model of integrated care based on consensus decision-making.

Specialisation in a given area of cancer diagnosis and treatment facilitates communication among different specialisations and professionals, by using specialist knowledge and expertise as a departure point for addressing specific patients rather than for the performance of specific tasks. Experiences such as reaching agreement on a common gateway for patients with high-risk symptoms for cancer, or protecting the time for multidisciplinary teamwork, can also be key groundwork for promoting effective team communication. Other researchers emphasise the importance of professional integration within cancer networks [23], or the improvement in mental wellbeing and professional satisfaction that comes with MDT development, as a result of lower anxiety and better feelings about personal performance [24,25]. Another approach is the need to achieve consistent care from the standpoint of the cancer patient [26]. This was well illustrated by affording patients access to the MDT in the early stages of the diagnostic process, a way of preventing initial treatment from being administered without team discussion and of avoiding increased fragmentation in communication with the patient.

Strengths and limitations

This study has some strengths and limitations that must be taken into account when assessing its results. Insofar as its strengths are concerned, it should be noted that, rather than approaching MDTs from the standpoint of the specific structures which frame teamwork, we sought instead to understand MDTs from the standpoint of the capabilities of the professionals and teams themselves. This enabled us to obtain a better insight into the ways in which professionals interacted and the nature of the agreements and commitments reached within an MDT. The synthesis of our results in the form of three models of multidisciplinary cancer care to be found in Spain facilitates the transfer of such findings to SNHS hospitals. Indeed, as our study shows, multidisciplinary care displays significant variability in its methodology and degree of implementation among hospitals and regions, but not in the critical factors that have influenced its development.
A clear limitation of the study resides in the selection process, which was based on proposals put forward by the chairmen of the scientific societies of medical and radiotherapy oncology and by the heads of regional cancer plans, and which could have biased our selection towards professionals already sensitive to multidisciplinary care and organisational change per se. The selection criteria covering the different profiles, together with the involvement of major university teaching hospitals in the study, were intended to minimise this limitation. Moreover, our interpretation of the findings and the model proposed here were discussed with different specialists, hospitals and regions.

As with all qualitative studies, the number of participants was not large. Our research focused on the views of key informants, thereby implicitly ruling out the possibility of capturing all the experiences and best practices that might exist in the health system. Lastly, it should be noted that one third of all interviewees belonged to the field of breast cancer, a disease that frequently serves as a model for others.
Conclusions

This is the first qualitative study of multidisciplinary cancer care in southern Europe. The delay in MDT implementation means that health policy needs not only to acknowledge and promote this approach, but also to provide quality standards. In addition, there is a clear need to respect and promote the good practices that already exist in the health care system. In this regard, this study may help in understanding how professionals conceptualise this approach, which is relevant for developing more comprehensive care by placing multidisciplinary care at the core of cancer departments, as stated in Spain's official cancer strategy.
Moreover, metaphors play a key role in the way professionals imagine and explain teamwork in cancer care (Table 4: Research metaphors).

MDT development often entails a process of decentralisation inside hospitals, which may involve some redistribution of power. This is an adaptive challenge for hospital managers in terms of clinical governance, i.e., making structures more permeable to the organisation of expertise without losing efficiency in the management of shared resources. A culture of evaluation of clinical and process outcomes should emerge, aimed at directing and justifying organisational innovation so as to achieve the best possible performance in the care of cancer patients. Multidisciplinary care occurs simultaneously with rapid changes in treatment and in the use of clinical practice guidelines, all of which makes it more difficult to identify its specific advantages [27]. This is why health policy plays an important role in promoting an organisational approach that changes the way in which professionals develop their clinical practice, a key issue in a disease such as cancer, characterised by its clinical complexity, the involvement of different clinical specialists and the new challenge of managing patient preferences.

List of abbreviations

MDT: Multidisciplinary Team; SNHS: Spanish National Health System

Competing interests

The authors declare that they have no competing interests.

Ethical approval

Not required.

Authors' contributions

JMB had the initial idea for this study. JMB and JP designed the study and drafted the research proposal. JP conducted pilot interviews, while JMB provided guidance and critical review of this information and helped with the review of the literature. JP undertook the main fieldwork for the study, and interviewed, coded, charted and analysed the data for this paper, which was scrutinised and discussed by JMB. JP and JMB interpreted the results and wrote the first draft and final version of this article. Both authors read and approved the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2458/11/141/prepub
[ "The increasing complexity of cancer care makes organisation of clinical decision-making one of the key elements in high-quality cancer care [1]. This raises the question of how good professional co-operation is to be achieved in day-to-day clinical practice. Pre-eminent among the different answers to this question is multidisciplinary teamwork, an approach that has emerged in parallel with the accelerated process of specialisation in cancer among health professionals. Recent reviews of the literature have associated a multidisciplinary approach to cancer care with better adherence to clinical practice guidelines [2], increased patient access to clinical trials [3] and enhanced co-ordination of hospital services [4,5]. These outcomes, together with the role assigned to multidisciplinary care in various cancer plans, indicate its strategic role for health systems in general and for quality of care in particular in the organisation of cancer services, something that was highlighted at the Lisbon round-table held under the Portuguese EU Presidency (2007) [6].\nThe development of multidisciplinary cancer care involves a redistribution of health professionals' tasks when it comes to clinical decision-making and patient care processes. The development of specific organisational frameworks for dealing with different types of cancer care must be seen in the context of a services-based hospital structure. This process modifies a highly sensitive aspect, i.e., the way in which professionals interact and are co-ordinated. The following are the main aspects defining a multidisciplinary approach in the organisation of cancer care:\n- professional specialisation by disease (including diagnostic disciplines);\n- standardisation of the process and clinical criteria in guidelines and pathways;\n- redistribution of tasks at a multidisciplinary-team level; and,\n- trend towards identifying and allocating specific resources according to disease/organ\nThe Spanish National Health System (SNHS) recognised multidisciplinary care as being a health priority as far back as 2006, when a national strategy for promoting quality in cancer care was first published [7]. As one of its basic principles, this Cancer Strategy document stipulates that cancer patients should be diagnosed and treated in the context of a multidisciplinary team (MDT), and goes on to identify tumour boards as the main mechanism for deciding and planning therapy. This institutional effort is being implemented on a co-operative basis within the context of Spain's decentralised health care system. The priority assigned by the respective regional health services to multidisciplinary care since health service management became operationally decentralised (2002) and the specific mode of organisation introduced at each hospital, together determine the starting point of the implementation of an MDT approach. A high degree of variability is thus to be expected.\nThis study addresses the question of how multidisciplinary cancer care has been implemented and the critical factors linked to this process, with special stress laid on the knowledge of policy required to ensure effective team work. The study was undertaken against a common backdrop of a growing cancer care burden and a rapidly expanding range of potentially effective treatments, which involves \"therapeutic dilemmas about treatment options\" [8]. 
A qualitative approach was chosen in order to better understand the perceptions and beliefs of all the professional partners involved at the different public hospitals spread across the nation's various health care regions.", "[SUBTITLE] Study design and setting [SUBSECTION] A qualitative research method was used in order to describe health professionals' points of view and experiences of multidisciplinary cancer care, and explore the barriers to be considered in future policy development. As an MDT approach in cancer had not been previously studied in the context of the Spanish health system, a pilot test was deemed appropriate. The aim was to identify a set of analytical categories which, along with a review of the literature, would define a theoretical basis for the interviews. The pilot scheme was undertaken by three teams at different hospitals in Catalonia, focusing on different tumours (breast, lung and colorectal). In order to ensure the relevance and appropriateness of the categories yielded by the test, health professionals from different disciplines were then asked to give their considered opinion. The methodology of analysis consisted of semi-structured interviews conducted in situ from October 2008 to January 2009 with professionals involved in the diagnosis and treatment of cancer patients at public hospitals of differing levels of importance, situated in the most populated regions of Spain, namely, Andalusia, Catalonia, Madrid, Galicia and Valencia. Participants were interviewed by an experienced, qualitative researcher.\nA qualitative research method was used in order to describe health professionals' points of view and experiences of multidisciplinary cancer care, and explore the barriers to be considered in future policy development. As an MDT approach in cancer had not been previously studied in the context of the Spanish health system, a pilot test was deemed appropriate. The aim was to identify a set of analytical categories which, along with a review of the literature, would define a theoretical basis for the interviews. The pilot scheme was undertaken by three teams at different hospitals in Catalonia, focusing on different tumours (breast, lung and colorectal). In order to ensure the relevance and appropriateness of the categories yielded by the test, health professionals from different disciplines were then asked to give their considered opinion. The methodology of analysis consisted of semi-structured interviews conducted in situ from October 2008 to January 2009 with professionals involved in the diagnosis and treatment of cancer patients at public hospitals of differing levels of importance, situated in the most populated regions of Spain, namely, Andalusia, Catalonia, Madrid, Galicia and Valencia. Participants were interviewed by an experienced, qualitative researcher.\n[SUBTITLE] Recruitment [SUBSECTION] A total of thirty-nine health professionals were recruited. They were deemed eligible in any case where they performed their professional task, wholly or in part, in an MDT. For the selection of informants and composition of the theoretical sample, three inclusion criteria were established. To ensure that the views of different health professionals could be explored, the first criterion laid down that five medical specialisations had to be covered in each region: these were medical oncology, radiation oncology, surgery, pathology or radiology, and oncology nursing (Table 1). 
The second criterion reinforced a systematic approach to the phenomenon under study, by insisting on the presence of three opinions from each region (table 2). The third criterion took the form of a restriction on interviewing professionals belonging to the same team, with the aim of preventing biased versions on the subject of study and so contribute to the internal validity of our research.\nDetailed breakdown of the 39 professionals interviewed\nProfiles required for selection of key informants\nA total of thirty-nine health professionals were recruited. They were deemed eligible in any case where they performed their professional task, wholly or in part, in an MDT. For the selection of informants and composition of the theoretical sample, three inclusion criteria were established. To ensure that the views of different health professionals could be explored, the first criterion laid down that five medical specialisations had to be covered in each region: these were medical oncology, radiation oncology, surgery, pathology or radiology, and oncology nursing (Table 1). The second criterion reinforced a systematic approach to the phenomenon under study, by insisting on the presence of three opinions from each region (table 2). The third criterion took the form of a restriction on interviewing professionals belonging to the same team, with the aim of preventing biased versions on the subject of study and so contribute to the internal validity of our research.\nDetailed breakdown of the 39 professionals interviewed\nProfiles required for selection of key informants\n[SUBTITLE] Interviews [SUBSECTION] A semi-structured list of questions ensured that critical points were covered in every interview. To elicit beliefs and experiences, participants were given the necessary flexibility to enable them to volunteer information on topics that were relevant to them. The selected health professionals were interviewed on a one-to-one basis for 60-100 minutes at their hospital offices. Interviews started with a general question on the cancer team's approach and ended with a question on how multidisciplinary care could be promoted by health policy measures. No notes were taken by the researcher during the interview; instead, all interviews were audio-taped and transcribed in full by the researcher. These data were then compiled into a documentary record and rendered anonymous to protect confidentiality. Every transcription was checked against its corresponding audio record and accuracy was found to be good. A preliminary analysis was conducted after each interview.\nA semi-structured list of questions ensured that critical points were covered in every interview. To elicit beliefs and experiences, participants were given the necessary flexibility to enable them to volunteer information on topics that were relevant to them. The selected health professionals were interviewed on a one-to-one basis for 60-100 minutes at their hospital offices. Interviews started with a general question on the cancer team's approach and ended with a question on how multidisciplinary care could be promoted by health policy measures. No notes were taken by the researcher during the interview; instead, all interviews were audio-taped and transcribed in full by the researcher. These data were then compiled into a documentary record and rendered anonymous to protect confidentiality. Every transcription was checked against its corresponding audio record and accuracy was found to be good. 
A preliminary analysis was conducted after each interview.\n[SUBTITLE] Analysis [SUBSECTION] Interview data were examined inductively, using content analysis to generate categories and an explanatory framework. Grounded theory methodology was considered appropriate for describing the organisation and culture of health professionals belonging to multidisciplinary cancer teams. As our study was theoretical and aimed at incorporating the organisational context in which cancer teams practised, we used an axial coding, as described by Strauss and Corbin [9]. Data were electronically coded with ATLAS.ti [10]. Whereas the thematic analysis enabled language use to be understood and professionals' beliefs to be communicated, the method of constant comparison ensured that recurring views and experiences were obtained. The coding process and emerging themes were derived, on the one hand, from a priori issues drawn from the pilot test and previous research, and on the other, from issues raised by participants. Examples of codes were \"nature of agreements\", \"access of a patient to the team during his/her journey\" and \"impact management on care performance\". The consistency of coding/interpretation was checked during analysis by reviewing the transcripts at different moments in time. This process allowed for labelling and developing a reference of the data for subsequent exploration and identification. Accordingly, a thematic framework based on models of and barriers to effective multidisciplinary teamwork was identified. A specific effort was made to capture this stage of interpretation, i.e., by mapping, creating typologies [11] and finding associations among themes. Moreover, the preliminary findings were discussed at a workshop (held by the scientific societies), to which some of the health professionals interviewed and a number of social science researchers were invited. These discussions were useful for reinforcing team types for the Spanish health system and clarifying some barrier-related aspects.\nInterview data were examined inductively, using content analysis to generate categories and an explanatory framework. Grounded theory methodology was considered appropriate for describing the organisation and culture of health professionals belonging to multidisciplinary cancer teams. As our study was theoretical and aimed at incorporating the organisational context in which cancer teams practised, we used an axial coding, as described by Strauss and Corbin [9]. Data were electronically coded with ATLAS.ti [10]. Whereas the thematic analysis enabled language use to be understood and professionals' beliefs to be communicated, the method of constant comparison ensured that recurring views and experiences were obtained. The coding process and emerging themes were derived, on the one hand, from a priori issues drawn from the pilot test and previous research, and on the other, from issues raised by participants. Examples of codes were \"nature of agreements\", \"access of a patient to the team during his/her journey\" and \"impact management on care performance\". The consistency of coding/interpretation was checked during analysis by reviewing the transcripts at different moments in time. This process allowed for labelling and developing a reference of the data for subsequent exploration and identification. Accordingly, a thematic framework based on models of and barriers to effective multidisciplinary teamwork was identified. 
A specific effort was made to capture this stage of interpretation, i.e., by mapping, creating typologies [11] and finding associations among themes. Moreover, the preliminary findings were discussed at a workshop (held by the scientific societies), to which some of the health professionals interviewed and a number of social science researchers were invited. These discussions were useful for reinforcing team types for the Spanish health system and clarifying some barrier-related aspects.\n[SUBTITLE] Ethical considerations [SUBSECTION] The data for this study was based on professionally conducted interviews which, other than the consent of the professionals themselves, require no \"ethical approval\" from any research committee. However, as the aim of our study is sensitive to hospital organisation and relations between specialisations and health professionals, confidentiality is of the essence. Accordingly, the strategy pursued was to prioritise a selection of participants whose opinion on these issues was deemed crucial, and so, contact with regional cancer control policymakers was made. The implementation of the study began through contact with the heads of the regional cancer plans, who proposed a short list of health professionals selected in accordance with our criteria (see table 2). Likewise, the resulting list was endorsed by the scientific societies of medical and radiation oncology. The health professionals concerned were then sent a letter of invitation explaining the research goals and a confidentiality agreement. On being advised by telephone and receiving an assurance as to the confidentiality of any information provided, all the professionals selected for study consented to the interviews being recorded. The consent form was formally signed at the meeting, with any doubts as to the designated purpose and method of research being discussed with the professionals. No-one refused the invitation to participate.\nThe data for this study was based on professionally conducted interviews which, other than the consent of the professionals themselves, require no \"ethical approval\" from any research committee. However, as the aim of our study is sensitive to hospital organisation and relations between specialisations and health professionals, confidentiality is of the essence. Accordingly, the strategy pursued was to prioritise a selection of participants whose opinion on these issues was deemed crucial, and so, contact with regional cancer control policymakers was made. The implementation of the study began through contact with the heads of the regional cancer plans, who proposed a short list of health professionals selected in accordance with our criteria (see table 2). Likewise, the resulting list was endorsed by the scientific societies of medical and radiation oncology. The health professionals concerned were then sent a letter of invitation explaining the research goals and a confidentiality agreement. On being advised by telephone and receiving an assurance as to the confidentiality of any information provided, all the professionals selected for study consented to the interviews being recorded. The consent form was formally signed at the meeting, with any doubts as to the designated purpose and method of research being discussed with the professionals. 
No-one refused the invitation to participate.", "A qualitative research method was used in order to describe health professionals' points of view and experiences of multidisciplinary cancer care, and explore the barriers to be considered in future policy development. As an MDT approach in cancer had not been previously studied in the context of the Spanish health system, a pilot test was deemed appropriate. The aim was to identify a set of analytical categories which, along with a review of the literature, would define a theoretical basis for the interviews. The pilot scheme was undertaken by three teams at different hospitals in Catalonia, focusing on different tumours (breast, lung and colorectal). In order to ensure the relevance and appropriateness of the categories yielded by the test, health professionals from different disciplines were then asked to give their considered opinion. The methodology of analysis consisted of semi-structured interviews conducted in situ from October 2008 to January 2009 with professionals involved in the diagnosis and treatment of cancer patients at public hospitals of differing levels of importance, situated in the most populated regions of Spain, namely, Andalusia, Catalonia, Madrid, Galicia and Valencia. Participants were interviewed by an experienced, qualitative researcher.", "A total of thirty-nine health professionals were recruited. They were deemed eligible in any case where they performed their professional task, wholly or in part, in an MDT. For the selection of informants and composition of the theoretical sample, three inclusion criteria were established. To ensure that the views of different health professionals could be explored, the first criterion laid down that five medical specialisations had to be covered in each region: these were medical oncology, radiation oncology, surgery, pathology or radiology, and oncology nursing (Table 1). The second criterion reinforced a systematic approach to the phenomenon under study, by insisting on the presence of three opinions from each region (table 2). The third criterion took the form of a restriction on interviewing professionals belonging to the same team, with the aim of preventing biased versions on the subject of study and so contribute to the internal validity of our research.\nDetailed breakdown of the 39 professionals interviewed\nProfiles required for selection of key informants", "A semi-structured list of questions ensured that critical points were covered in every interview. To elicit beliefs and experiences, participants were given the necessary flexibility to enable them to volunteer information on topics that were relevant to them. The selected health professionals were interviewed on a one-to-one basis for 60-100 minutes at their hospital offices. Interviews started with a general question on the cancer team's approach and ended with a question on how multidisciplinary care could be promoted by health policy measures. No notes were taken by the researcher during the interview; instead, all interviews were audio-taped and transcribed in full by the researcher. These data were then compiled into a documentary record and rendered anonymous to protect confidentiality. Every transcription was checked against its corresponding audio record and accuracy was found to be good. A preliminary analysis was conducted after each interview.", "Interview data were examined inductively, using content analysis to generate categories and an explanatory framework. 
Grounded theory methodology was considered appropriate for describing the organisation and culture of health professionals belonging to multidisciplinary cancer teams. As our study was theoretical and aimed at incorporating the organisational context in which cancer teams practised, we used an axial coding, as described by Strauss and Corbin [9]. Data were electronically coded with ATLAS.ti [10]. Whereas the thematic analysis enabled language use to be understood and professionals' beliefs to be communicated, the method of constant comparison ensured that recurring views and experiences were obtained. The coding process and emerging themes were derived, on the one hand, from a priori issues drawn from the pilot test and previous research, and on the other, from issues raised by participants. Examples of codes were \"nature of agreements\", \"access of a patient to the team during his/her journey\" and \"impact management on care performance\". The consistency of coding/interpretation was checked during analysis by reviewing the transcripts at different moments in time. This process allowed for labelling and developing a reference of the data for subsequent exploration and identification. Accordingly, a thematic framework based on models of and barriers to effective multidisciplinary teamwork was identified. A specific effort was made to capture this stage of interpretation, i.e., by mapping, creating typologies [11] and finding associations among themes. Moreover, the preliminary findings were discussed at a workshop (held by the scientific societies), to which some of the health professionals interviewed and a number of social science researchers were invited. These discussions were useful for reinforcing team types for the Spanish health system and clarifying some barrier-related aspects.", "The data for this study was based on professionally conducted interviews which, other than the consent of the professionals themselves, require no \"ethical approval\" from any research committee. However, as the aim of our study is sensitive to hospital organisation and relations between specialisations and health professionals, confidentiality is of the essence. Accordingly, the strategy pursued was to prioritise a selection of participants whose opinion on these issues was deemed crucial, and so, contact with regional cancer control policymakers was made. The implementation of the study began through contact with the heads of the regional cancer plans, who proposed a short list of health professionals selected in accordance with our criteria (see table 2). Likewise, the resulting list was endorsed by the scientific societies of medical and radiation oncology. The health professionals concerned were then sent a letter of invitation explaining the research goals and a confidentiality agreement. On being advised by telephone and receiving an assurance as to the confidentiality of any information provided, all the professionals selected for study consented to the interviews being recorded. The consent form was formally signed at the meeting, with any doubts as to the designated purpose and method of research being discussed with the professionals. No-one refused the invitation to participate.", "We identified three models of development of multidisciplinary cancer care into which all teams could be classified (table 3). Their internal consistency means that they can be seen as models of co-operation, and despite being general categories, any given team model may well contain elements of the other two. 
Moreover, the qualitative classification described here may even occur within the same hospital for different teams, with each of these assuming responsibility for a different tumour.\nModels of co-operation in multidisciplinary cancer care\nRather than targeting specific forms of organisation (tumour board, cancer unit), this approach focuses instead on team capabilities, based on each team's work method and the overall scope (breadth and depth) of the tasks performed, elements of analysis that emerged during the study and indicate the nature of interaction among professionals. While all cancer teams fulfilled the role of assessing patients and complying with and updating clinical protocols, obvious differences in their respective abilities to achieve quality of care were nevertheless observed.\n[SUBTITLE] 1.- Advisory committee [SUBSECTION] This is a group of professionals, largely made up of specialists in different therapeutic fields, which meets regularly on a informal basis to discuss cases considered clinically complex. Since the patients may already have received some of the treatments (usually surgery), the multidisciplinary meeting is aimed at referring them to other professionals for further treatment. This approach implies rigorous respect for the autonomy of the clinician and for overlapping boundaries between the team and the multidisciplinary meeting: patient assessment is made without other health care performance considerations. Judging by our results, this model continues to enjoy an important presence in the system (40%).\nThis is a group of professionals, largely made up of specialists in different therapeutic fields, which meets regularly on a informal basis to discuss cases considered clinically complex. Since the patients may already have received some of the treatments (usually surgery), the multidisciplinary meeting is aimed at referring them to other professionals for further treatment. This approach implies rigorous respect for the autonomy of the clinician and for overlapping boundaries between the team and the multidisciplinary meeting: patient assessment is made without other health care performance considerations. Judging by our results, this model continues to enjoy an important presence in the system (40%).\n[SUBTITLE] 2.- Formal co-adaptation [SUBSECTION] Owing to the high degree of interaction (mutual adaptation) among the professionals involved, consensus plays a key role in this model. The team acts as the reference framework for professionals, who share their views on the diagnosis, treatment and monitoring of a specific cancer type. The meeting is open to all professionals involved in patient management and it is here that the roles of team tumour board co-ordinator and chair appear. The key factor for fostering such an approach is agreement on the need for: joint decision-making to precede application of any treatment; and all cases to be dealt with at the multidisciplinary meeting. Both aspects are hampered, however, by hospital service inertia when it comes to disease management. This formula accounts for half (50%) of all MDT meetings in the health care system.\nOwing to the high degree of interaction (mutual adaptation) among the professionals involved, consensus plays a key role in this model. The team acts as the reference framework for professionals, who share their views on the diagnosis, treatment and monitoring of a specific cancer type. 
The meeting is open to all professionals involved in patient management and it is here that the roles of team tumour board co-ordinator and chair appear. The key factor for fostering such an approach is agreement on the need for: joint decision-making to precede application of any treatment; and all cases to be dealt with at the multidisciplinary meeting. Both aspects are hampered, however, by hospital service inertia when it comes to disease management. This formula accounts for half (50%) of all MDT meetings in the health care system.\n[SUBTITLE] 3.- Integrated care process [SUBSECTION] The teams that work under this model share wide aims on patient management, including co-ordination of clinical research and economic evaluation of treatment. As this model provides teams with early access to patients, the latter's preferences along with knowledge of their co-morbidities and psychosocial context are incorporated into the multidisciplinary discussions. This occurs through systematic follow-up of patients throughout their journey, from suspicion of cancer, to diagnosis, therapeutic decision-making and follow-up. The presence of professional team roles has an impact on the entire care process and on progress towards achieving seamless care. The hospital executive board plays a pivotal role by consolidating meeting times, and ensuring health professionals' attendance as well as their commitment to the MDT. This model is not frequent in the system (10%).\nThe teams that work under this model share wide aims on patient management, including co-ordination of clinical research and economic evaluation of treatment. As this model provides teams with early access to patients, the latter's preferences along with knowledge of their co-morbidities and psychosocial context are incorporated into the multidisciplinary discussions. This occurs through systematic follow-up of patients throughout their journey, from suspicion of cancer, to diagnosis, therapeutic decision-making and follow-up. The presence of professional team roles has an impact on the entire care process and on progress towards achieving seamless care. The hospital executive board plays a pivotal role by consolidating meeting times, and ensuring health professionals' attendance as well as their commitment to the MDT. This model is not frequent in the system (10%).\n[SUBTITLE] Critical factors in multidisciplinary cancer team development [SUBSECTION] For many clinicians, development of multidisciplinary care has involved a cultural change, as can be seen from the pathway which resulted from recommendations that were proposed by cancer team members for clinical management of patients and have gradually become binding decisions. The key critical factors identified for this change are as follows:\n[SUBTITLE] Existence of different gateways for the same patient profile [SUBSECTION] Many clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis. 
Although a shared clinical protocol on such patients for the whole hospital plus an agreement to submit the patient to the MDT meeting would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer or to gastroenterology for colorectal cancer in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspects has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the \"property\" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with high risk of cancer should be referred and who the specialists of reference are. Moreover, where the department taking on the gateway unification process of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.\n\"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies.\" (Breast surgeon)\n\"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable.\" (Medical oncologist)\nMany clinicians acknowledged significant variability in clinical practice as a result of diagnosing and treating patients who, despite having similar symptoms and diagnoses, might receive different initial therapy because access to hospital took place through different departments. A typical example is provided by the different therapeutic approaches proposed to a prostate cancer patient depending on the initial hospital department responsible for his diagnosis. Although a shared clinical protocol on such patients for the whole hospital plus an agreement to submit the patient to the MDT meeting would limit this variability, it is the unification of hospital-access gateways into a single hospital department that would have the necessary transforming quality in terms of standardisation of clinical criteria and pathways. Indeed, a recurring example of this type of organisational change is afforded by unification of admission to radiology for breast cancer or to gastroenterology for colorectal cancer in the case of patients displaying symptoms with a high risk of cancer. The experience of agreeing upon a common gateway for suspects has three effects: it provides teams with early access to patients; it reduces the feeling of patients being the \"property\" of any given clinician or department; and it sets up a primary care reference catchment area, as it becomes easier and clearer to determine where and how subjects with high risk of cancer should be referred and who the specialists of reference are. 
Moreover, where the department taking on the gateway unification process of the clinical pathway is a diagnostic unit, this implies that it should have a more relevant role within the team.\n\"At the hospital, there is a breast cancer unit that is beset by a root problem, i.e., two treatment options depending on whether the patient has been admitted via gynaecology or surgery. These are internal battles waged by the respective competencies.\" (Breast surgeon)\n\"In this hospital there are two chairs of surgery: one comes to the meetings, the other doesn't. We know that they administer different forms of treatment. The percentage of cases in which this occurs is by no means inconsiderable.\" (Medical oncologist)\n[SUBTITLE] Variability in the development and use of clinical protocols and guidelines [SUBSECTION] Evidence-based decisions are a source of concern to professionals, and the updating of clinical protocols by the team reflects this concern. Many clinicians felt, however, that this goal was conditioned by the implementation and dissemination of cancer clinical guidelines in the Spanish Health System. They argued that the absence of common guidelines for the whole country and the lack of co-ordination strategies for implementing the few that did exist resulted in reduced use and a lack of systematic assessment of existing levels of adherence. Owing to this perceived situation, clinical protocols at a hospital level are very often based on foreign guidelines, and efforts made to produce Spanish ones are of little use. This in turn leads to three common situations which impact at a team level. Firstly, hospitals that refer cases (e.g., because of their clinical complexity) have protocols based on different guidelines that are not standardised across the health care system. Secondly, multidisciplinary cancer care displays different levels of development, so that patients in one hospital may be referred to a specific department, but not to the tumour board, in another, the point here being the absence of pre-specified criteria for referral among levels of care. Thus, some decisions are made without the scientific consensus of an MDT. Finally, a cancer team might change the original treatment plan for a patient referred from a lower level hospital. This was a concern voiced by several clinicians, since decisions sometimes tend to differ widely, causing confusion and lack of trust in the patient. This perception was not shared by health professionals who work for cancer networks.\n\"When a surgeon comes along saying that he has operated on a given patient without the consensus of the committee but -according to him- this course of action was 'in line with the evidence'... this is unacceptable. It's an issue to be addressed by the respective cancer plans. When guidelines are reviewed, this should be the starting point, i.e., the tumour committee report should be seen before the surgical report.\" (Radiation oncologist)\nEvidence-based decisions are a source of concern to professionals, and the updating of clinical protocols by the team reflects this concern. Many clinicians felt, however, that this goal was conditioned by the implementation and dissemination of cancer clinical guidelines in the Spanish Health System. They argued that the absence of common guidelines for the whole country and the lack of co-ordination strategies for implementing the few that did exist resulted in reduced use and a lack of systematic assessment of existing levels of adherence. 
Owing to this perceived situation, clinical protocols at a hospital level are very often based on foreign guidelines, and efforts made to produce Spanish ones are of little use. This in turn leads to three common situations which impact at a team level. Firstly, hospitals that refer cases (e.g., because of their clinical complexity) have protocols based on different guidelines that are not standardised across the health care system. Secondly, multidisciplinary cancer care displays different levels of development, so that patients in one hospital may be referred to a specific department, but not to the tumour board, in another, the point here being the absence of pre-specified criteria for referral among levels of care. Thus, some decisions are made without the scientific consensus of an MDT. Finally, a cancer team might change the original treatment plan for a patient referred from a lower level hospital. This was a concern voiced by several clinicians, since decisions sometimes tend to differ widely, causing confusion and lack of trust in the patient. This perception was not shared by health professionals who work for cancer networks.\n\"When a surgeon comes along saying that he has operated on a given patient without the consensus of the committee but -according to him- this course of action was 'in line with the evidence'... this is unacceptable. It's an issue to be addressed by the respective cancer plans. When guidelines are reviewed, this should be the starting point, i.e., the tumour committee report should be seen before the surgical report.\" (Radiation oncologist)\n[SUBTITLE] The role of the hospital executive board [SUBSECTION] Most health professionals believed that, while they had not been hampered by the hospital executive board, neither had they been specifically supported to better organise clinical pathways and MDT activity. In their view, the main problem was that MDT work time was not recognised as a health care activity (or \"real work\" to quote them). Half of those interviewed felt that hospital managers knew little about their tasks, goals, level of involvement and management problems. These professionals identified two clear priorities for hospital executive boards, namely: to protect multidisciplinary meetings and work time; and to promote new professional roles, such as nurse case-managers or administrative support. Those with management responsibilities stated that cancer teams were not reflected in the organisational chart but were very important in terms of quality of care, and more innovative and responsive insofar as health care organisation achievement was concerned.\n\"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. Yet such attendance should be accorded health care and scientific value, i.e., so many hours correspond to committee work, which is equal to time spent seeing patients in a medical practice.\" (Colorectal surgeon)\n\"Your personal efforts are not appreciated, regardless of whether you've taken part in drawing up a protocol or whether you've devoted one day or three weeks to the job... And, as no stress is laid on the importance of teamwork, there are pockets of resistance that don't change\". 
(Radiologist)\nMost health professionals believed that, while they had not been hampered by the hospital executive board, neither had they been specifically supported to better organise clinical pathways and MDT activity. In their view, the main problem was that MDT work time was not recognised as a health care activity (or \"real work\" to quote them). Half of those interviewed felt that hospital managers knew little about their tasks, goals, level of involvement and management problems. These professionals identified two clear priorities for hospital executive boards, namely: to protect multidisciplinary meetings and work time; and to promote new professional roles, such as nurse case-managers or administrative support. Those with management responsibilities stated that cancer teams were not reflected in the organisational chart but were very important in terms of quality of care, and more innovative and responsive insofar as health care organisation achievement was concerned.\n\"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. Yet such attendance should be accorded health care and scientific value, i.e., so many hours correspond to committee work, which is equal to time spent seeing patients in a medical practice.\" (Colorectal surgeon)\n\"Your personal efforts are not appreciated, regardless of whether you've taken part in drawing up a protocol or whether you've devoted one day or three weeks to the job... And, as no stress is laid on the importance of teamwork, there are pockets of resistance that don't change\". (Radiologist)\n[SUBTITLE] Outcome assessment [SUBSECTION] The main goal of any multidisciplinary cancer team is to enhance the effectiveness of diagnosis and treatment of a specific disease. Assessment of the MDTs that had been put in place revealed relevant differences among the views held by the professionals themselves. To most of them, the ultimate consequence of the efforts of some hospital units or specialised professionals that regularly collect clinical data was evaluation or a study aimed at assessing MDT outcomes. Others, in contrast, described process evaluation involving initial inter-departmental consensus on indicators, development of a specific data-collection methodology, and periodic analysis of results using a shared database. Above all, this situation defines different approaches to the possibility of taking clinical outcomes and process indicators, and linking these to actions aimed at improving cancer care. There were two recurring arguments associated with possible ways of achieving organisational change: the first centred on the key role to be played by the health care service in reaching a technical definition of and agreement on a minimum set of indicators for the entire hospital system and a proposed level of transparency vis-à-vis outcomes; the second addressed the pervasive \"culture of efficiency\" currently prevailing in hospital departments, insisting on the need to limit its influence and instead give increased relevance to clinical and process indicators. An experience that has had remarkable success in various health care regions and has served as the basis for the evaluation of each MDT, is the implementation of a fast-track, colorectal, breast and lung cancer diagnosis and treatment programme, a driving force in promoting integration among services and MDTs. 
Its implementation has shown the key role that health care policy could play in enhancing the organisation of cancer care.

"The problem is that each specialisation has developed its own indicators of toxicity, clinical results, etc. There should be at least one database in which the team's outcomes are reflected. This is something that the hospital ought to demand. We could then say in real time, 'this, or that, is what's happening in prostate cancer'." (Medical oncologist)

"Yesterday I saw 37 patients: I can't devote myself to recording that much information in the database without any support... It's difficult for everyone's survival to be ascertained under such conditions. We tend to move within a 'dead' database context, that's to say, we get together at the end of the year to see how things have gone..." (Colorectal surgeon)
[SUBTITLE] Recording and documenting of clinical decisions in an MDT setting [SUBSECTION]
The more formalised MDTs become, the more important it becomes to have easy access to decisions and transparency about them and the rationale behind them. The reason for this is that recording decisions reflects the outcome of consensus building and the value that professionals attribute to their work. Half of all clinicians interviewed stated that they noted their decisions in the electronic clinical record. Not only does such action clearly define the end-point of the decision-making process, it also renders it more transparent, something that, in turn, generates a positive perception of the entire hospital environment. In contrast, there are many cases where team decisions do not extend beyond the strict limits of the tumour board, as shown by the first comment in Excerpt 5. The major weaknesses in recording clinical decisions stem from the lack of standardisation in tumour-board minute-taking, due to the absence of common forms, the failure to assign clear recording responsibilities and, very often, the lack of administrative support. What this tends to mean is that only the decisions affecting off-protocol patients are recorded, thus hindering the possibility of establishing a reference database for a specific cancer. One last very important aspect for any team is the recording of decisions made in those cases where there is no consensus. Though infrequent, this situation is thought to play a relevant role in terms of medico-legal implications.

"In one of the hospital teams, there are professionals who find it difficult to accept consensus-based decisions. Accordingly, we consider it appropriate that, in addition to the decision being recorded in the digital clinical history, a file should be circulated to all team members so that decisions taken with respect to all patients are 'known' to them..." (Medical oncologist)

"We keep a number of formal records, I mean to say that there are several specialists who record details of patients in their files... but there is no single overall record." (Nurse case manager)

"There is an element of administration (which should be the task of a secretary) entailed in the drafting and signing of Minutes. This is generally performed by a physician, but if he's absent for any reason, then no-one does it. It's always the same old story: it's all a matter of personal involvement." (Pathologist)
This is a group of professionals, largely made up of specialists in different therapeutic fields, which meets regularly on an informal basis to discuss cases considered clinically complex. Since the patients may already have received some of the treatments (usually surgery), the multidisciplinary meeting is aimed at referring them to other professionals for further treatment. This approach implies rigorous respect for the autonomy of the clinician and for the overlapping boundaries between the team and the multidisciplinary meeting: patient assessment is made without other health care performance considerations. Judging by our results, this model continues to enjoy an important presence in the system (40%).

Owing to the high degree of interaction (mutual adaptation) among the professionals involved, consensus plays a key role in this model. The team acts as the reference framework for professionals, who share their views on the diagnosis, treatment and monitoring of a specific cancer type. The meeting is open to all professionals involved in patient management, and it is here that the roles of team tumour board co-ordinator and chair appear. The key factor for fostering such an approach is agreement on two requirements: that joint decision-making should precede the application of any treatment, and that all cases should be dealt with at the multidisciplinary meeting. Both aspects are hampered, however, by hospital service inertia when it comes to disease management. This formula accounts for half (50%) of all MDT meetings in the health care system.

The teams that work under this model share wide aims on patient management, including co-ordination of clinical research and economic evaluation of treatment. As this model provides teams with early access to patients, the latter's preferences, along with knowledge of their co-morbidities and psychosocial context, are incorporated into the multidisciplinary discussions. This occurs through systematic follow-up of patients throughout their journey, from suspicion of cancer, to diagnosis, therapeutic decision-making and follow-up. The presence of professional team roles has an impact on the entire care process and on progress towards achieving seamless care. The hospital executive board plays a pivotal role by consolidating meeting times and ensuring health professionals' attendance as well as their commitment to the MDT. This model is not frequent in the system (10%).
Those with management responsibilities stated that, although cancer teams were not reflected in the organisational chart, they were very important in terms of quality of care and were more innovative and responsive where the organisation of health care was concerned.\n\"If you tell management that you have to attend a committee meeting, they view it as something that is all well and good but nevertheless ancillary, and so not meriting consideration as part of the daily work load. Yet such attendance should be accorded health care and scientific value, i.e., so many hours correspond to committee work, which is equal to time spent seeing patients in a medical practice.\" (Colorectal surgeon)\n\"Your personal efforts are not appreciated, regardless of whether you've taken part in drawing up a protocol or whether you've devoted one day or three weeks to the job... And, as no stress is laid on the importance of teamwork, there are pockets of resistance that don't change.\" (Radiologist)
We tend to move within a 'dead' database context, that's to say, we get together at the end of the year to see how things have gone...\" (Colorectal surgeon)", "The more formalised MDTs become, the more important easy access to and transparency of decisions and the rationale behind them are. The reason for this is that recording decisions reflects the outcome of consensus building and the value that professionals attribute to their work. Half of all clinicians interviewed stated that they noted their decisions on the electronic clinical record. Not only does such action clearly define the end-point of the decision-making process, it also renders it more transparent, something that, in turn, generates a positive perception of the entire hospital environment. In contrast, there are many cases where team decisions do not extend beyond the strict limits of the tumour board, as shown by the first comment in Excerpt 5. The major weaknesses in recording clinical decisions stem from the lack of standardisation achieved in tumour-board Minute-taking, due to absence of common forms, failure to identify clear recording responsibilities and, very often, lack of administrative support. What this tends to mean is that only the decisions affecting of-protocol patients are recorded, thus hindering the possibility of establishing a reference database for a specific cancer. One last very important aspect for any team is the recording of decisions made in those cases where there is no consensus. Though infrequent, this situation is thought to play a relevant role in terms of medico-legal implications.\n\"In one of the hospital teams, there are professionals who find it difficult to accept consensus-based decisions. Accordingly, we consider it appropriate that, in addition to the decision being recorded in the digital clinical history, a file should be circulated to all team members so that decisions taken with respect to all patients are 'known' to them...\" (Medical oncologist)\n\"We keep a number of formal records, I mean to say that there are several specialists who record details of patients in their files... but there is no single overall record.\" (Nurse case manager)\n\"There is an element of administration (which should be the task of a secretary) entailed in the drafting and signing of Minutes. This is generally performed by a physician, but if he's absent for any reason, then no-one does it. It's always the same old story: it's all a matter of personal involvement.\" (Pathologist)", "The reference to Jean-Baptiste Lamarck (1744-1829) in the title of this article (\"the function creates the organ\") is a description that is both accurate and useful for understanding the evolution of multidisciplinary cancer care in the Spanish health system. In a manner similar to the apocryphal example of Lamarck's giraffe, which craned its neck to match the height of the trees, the development of cancer teams results from adaptation to the immediate hospital environment accompanied by a lack of policy orientation. While the law lays down that all hospitals are to have cancer boards for the most prevalent diseases, no specific aims, organisational requirements or performance assessment standards have been prescribed.\nThere is a valuable lesson to be learnt in the path taken by the UK National Health Service. The publication of the Calman-Hine [12] report in 1995 highlighted the importance of a successful institutional framework for cancer services. 
As Haward [13,14] pointed out, however, the effort to define their performance -- including multidisciplinary care-- in detail [15], without addressing the factors that were to facilitate the transition, resulted in slow, uneven change. The Spanish experience failed to develop this type of learning process. Our study therefore sought to identify the cultural and organisational dimensions that influence the incorporation of planned actions. This approach is reinforced by the EUROCARE-4 study, which identifies the organisational elements in the care process by the latter's ability to improve the survival and quality of life of cancer patients, as evidenced by the differences among European countries [16,17].\n[SUBTITLE] Impact on decision making [SUBSECTION] Implementation of multidisciplinary care involves a redistribution of the responsibilities assumed by the respective professionals, with the aim of developing greater potential for enhancing their joint clinical effectiveness. It is a specific organisational answer to the complexity of cancer care, and enables new approaches to be taken and known problems, such as variability in clinical practice, to be tackled. In this connection, note should be taken of the overall strategy adopted by the National Breast and Ovarian Cancer Centre of Australia, which, along with several other authors [18-20], identifies multidisciplinary care with the standardisation of clinical practice in the health system. Most professionals interviewed by us regard MDT as the main tool for ensuring that the expertise of each discipline is involved in the clinical decision making process affecting any given patient. Furthermore, high levels of adherence to clinical protocols improve the efficiency of MD meetings by better discerning the transition from simple to complex case discussions.\nOur study confirms previous research in underscoring the high degree to which the effectiveness of multidisciplinary interventions is dependent on the organisational context in which cancer care is delivered. Some technical aspects stressed are the need: for administrative support for team activity and organisation [8]; and for all decisions taken to be entered into the electronic clinical record, since failure to keep a record hinders application of such decisions to the patient, as shown by a study that targeted breast cancer teams in 2006 [21]. Moreover, a treatment-planning register can be helpful when it comes to assessing similar cases or auditing an MDT's performance [22]. Nevertheless, the key factor is communication among team members as a sign of professional team trust. This is the most relevant dimension to be discerned in the above-described models of integration of clinical care. 
The fact that decisions should be binding upon team members, that there is continued participation by specialists in the meetings, that the impact on the entire patient pathway is perceived as positive, that there is a role for clinical co-ordinators and nurse case managers, and that both residents and nurses participate in training, are indicators of the ability of the clinicians involved to abandon a sequential and relatively unco-ordinated model of cancer care and progress instead towards achieving a model of integrated care based on consensus decision-making.\nSpecialisation in a given area of cancer diagnosis and treatment facilitates communication among different specialisations and professionals, through using specialist knowledge and expertise as a departure point for addressing specific patients rather than for the performance of specific tasks. Experiences, such as reaching an agreement on a common gateway for patients with high-risk symptoms for cancer or protecting the time for multidisciplinary teamwork, can also be key groundwork for promoting effective team communication. Other researchers emphasise aspects of the importance of professional integration within cancer networks [23], or the improvement in mental wellbeing and professional satisfaction that comes with MDT development, as a result of lower anxiety and better feelings about personal performance [24,25]. Another approach is also the need to achieve consistent care from the standpoint of the cancer patient [26]. This was well illustrated by affording patients access to the MDT in the early stages of the diagnostic process, a way of preventing the possibility of initial treatment being administered without team discussion, and communicational fragmentation with the patient being increased.\n[SUBTITLE] Strengths and limitations [SUBSECTION] This study has some strengths and limitations that must be taken into account when assessing its results. Insofar as its strengths are concerned, it should be noted that, rather than approaching MDTs from the standpoint of the specific structures which frame teamwork, we sought instead to understand MDTs from the standpoint of the capabilities of the professionals and teams themselves. This enabled us to obtain a better insight into the ways in which professionals interacted and the nature of the agreements and commitments reached within an MDT. The synthesis of our results in the form of three models of multidisciplinary cancer care to be found in Spain facilitates the transfer of such findings to SNHS hospitals. Indeed, as our study shows, multidisciplinary care displays significant variability in its methodology and degree of implementation among hospitals and regions, but not in the critical factors that have influenced its development.
These models have been checked with the health professionals involved in the study.\nA clear limitation of the study resides in the selection process, based on proposals put forward by the chairmen of scientific societies of medical and radiotherapy oncology and the heads of regional cancer plans, which could have biased our selection of professionals towards those with sensitivity to multidisciplinary care and organisational change per se. The selection criteria vis-à-vis the different profiles, plus the fact that major university teaching hospitals were involved in the study, were intended to minimise this limitation. Moreover, our interpretation of the findings and the model proposed here were discussed with different specialists, hospitals and regions.\nAs with all qualitative studies, there was not a large number of participants. Our research focused on the views of key informants, thereby implicitly ruling out the possibility of capturing all the experiences and best practices that might exist in the health system. Lastly, mention should be made of the fact that one third of all interviewees belonged to the field of breast cancer, a disease that frequently becomes a model for others.", "Implementation of multidisciplinary care involves a redistribution of the responsibilities assumed by the respective professionals, with the aim of developing greater potential for enhancing their joint clinical effectiveness. It is a specific organisational answer to the complexity of cancer care, and enables new approaches to be taken and known problems, such as variability in clinical practice, to be tackled. In this connection, note should be taken of the overall strategy adopted by the National Breast and Ovarian Cancer Centre of Australia, which, along with several other authors [18-20], identifies multidisciplinary care with the standardisation of clinical practice in the health system. Most professionals interviewed by us regard MDT as the main tool for ensuring that the expertise of each discipline is involved in the clinical decision making process affecting any given patient. Furthermore, high levels of adherence to clinical protocols improve the efficiency of MD meetings by better discerning the transition from simple to complex case discussions.\nOur study confirms previous research in underscoring the high degree to which the effectiveness of multidisciplinary interventions is dependent on the organisational context in which cancer care is delivered. Some technical aspects stressed are the need: for administrative support for team activity and organisation [8]; and for all decisions taken to be entered into the electronic clinical record, since failure to keep a record hinders application of such decisions to the patient, as shown by a study that targeted breast cancer teams in 2006 [21]. Moreover, a treatment-planning register can be helpful when it comes to assessing similar cases or auditing an MDT's performance [22]. Nevertheless, the key factor is communication among team members as a sign of professional team trust. This is the most relevant dimension to be discerned in the above-described models of integration of clinical care. The fact that decisions should be binding upon team members, that there is continued participation by specialists in the meetings, that the impact on the entire patient pathway is perceived as positive, that there is a role for clinical co-ordinators and nurse case managers, and that both residents and nurses participate in training, are indicators of the ability of the clinicians involved to abandon a sequential and relatively unco-ordinated model of cancer care and progress instead towards achieving a model of integrated care based on consensus decision-making.\nSpecialisation in a given area of cancer diagnosis and treatment facilitates communication among different specialisations and professionals, through using specialist knowledge and expertise as a departure point for addressing specific patients rather than for the performance of specific tasks. Experiences, such as reaching an agreement on a common gateway for patients with high-risk symptoms for cancer or protecting the time for multidisciplinary teamwork, can also be key groundwork for promoting effective team communication.
Other researchers emphasise aspects of the importance of professional integration within cancer networks [23], or the improvement in mental wellbeing and professional satisfaction that comes with MDT development, as a result of lower anxiety and better feelings about personal performance [24,25]. Another approach is also the need to achieve consistent care from the standpoint of the cancer patient [26]. This was well illustrated by affording patients access to the MDT in the early stages of the diagnostic process, a way of preventing the possibility of initial treatment being administered without team discussion, and communicational fragmentation with the patient being increased.", "This study has some strengths and limitations that must be taken into account when assessing its results. Insofar as its strengths are concerned, it should be noted that, rather than approaching MDTs from the standpoint of the specific structures which frame teamwork, we sought instead to understand MDTs from the standpoint of the capabilities of the professionals and teams themselves. This enabled us to obtain a better insight into the ways in which professionals interacted and the nature of the agreements and commitments reached within an MDT. The synthesis of our results in the form of three models of multidisciplinary cancer care to be found in Spain facilitates the transfer of such findings to SNHS hospitals. Indeed, as our study shows, multidisciplinary care displays significant variability in its methodology and degree of implementation among hospitals and regions, but not in the critical factors that have influenced its development. These models have been checked with the health professionals involved in the study.\nA clear limitation of the study resides in the selection process, based on proposals put forward by the chairmen of scientific societies of medical and radiotherapy oncology and the heads of regional cancer plans, which could have biased our selection of professionals towards those with sensitivity to multidisciplinary care and organisational change per se. The selection criteria vis-à-vis the different profiles, plus the fact that major university teaching hospitals were involved in the study, were intended to minimise this limitation. Moreover, our interpretation of the findings and the model proposed here were discussed with different specialists, hospitals and regions.\nAs with all qualitative studies, there was not a large number of participants. Our research focused on the views of key informants, thereby implicitly ruling out the possibility of capturing all the experiences and best practices that might exist in the health system. Lastly, mention should be made of the fact that one third of all interviewees belonged to the field of breast cancer, a disease that frequently becomes a model for others.", "This is the first qualitative study of multidisciplinary cancer care in southern Europe. The delay in MDT implementation poses the need for health policy not only to acknowledge and promote it, but also to provide quality standards. In addition, there is a clear need to respect and promote good practices existing in the health care system. In this regard, this study may help understand how professionals conceptualise this approach, which is relevant when interest lies in developing more comprehensive care by placing multidisciplinary care at the core of cancer department, as stated in Spain's official cancer strategy. 
Moreover, metaphors play a key role in the way professionals imagine and explain teamwork in cancer care (table 4).\nResearch metaphors\nMDT development often entails a process of decentralisation inside hospitals, which may involve some redistribution of power. This is an adaptive challenge for hospital managers in terms of clinical governance, i.e., making structures more permeable to organisation of expertise without losing efficiency in the management of shared resources. A culture of evaluation of clinical and process outcomes should emerge, aimed at directing and justifying organisational innovation so as to achieve the best possible performance in terms of care of cancer patients. Multidisciplinary care occurs simultaneously with rapid changes in treatment and use of clinical practice guidelines, all of which makes it more difficult to identify its specific advantages [27]. This is why health policy plays an important role in promoting an organisational approach that changes the way in which professionals develop their clinical practice, a key issue in a disease such as cancer, characterised by its clinical complexity, involvement of different clinical specialists and need to face the new challenge of managing patient preferences.", "MDT: Multidisciplinary Team; SNHS: Spanish National Health System", "The authors declare that they have no competing interests.", "Not required.", "JMB had the initial idea for this study. JMB and JP designed the study and drafted the research proposal. JP conducted pilot interviews, while JMB provided guidance and critical review of this information and helped with the review of the literature. JP undertook the main fieldwork for the study, interviewed, coded, charted and analysed the data for this paper, which was scrutinised and discussed by JMB. JP and JMB interpreted the results and wrote the first draft and final version of this article. JP and JMB read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2458/11/141/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Stability-indicating HPLC-DAD/UV-ESI/MS impurity profiling of the anti-malarial drug lumefantrine.
21356068
Lumefantrine (benflumetol) is a fluorene derivative belonging to the aryl amino alcohol class of anti-malarial drugs and is commercially available in fixed combination products with β-artemether. Impurity characterization of such drugs, which are widely consumed in tropical countries for malaria control programmes, is of paramount importance. However, until now, no exhaustive impurity profile of lumefantrine has been established, encompassing process-related and degradation impurities in active pharmaceutical ingredients (APIs) and finished pharmaceutical products (FPPs).
BACKGROUND
Using HPLC-DAD/UV-ESI/ion trap/MS, a comprehensive impurity profile was established based upon analysis of market samples as well as stress, accelerated and long-term stability results. In-silico toxicological predictions for these lumefantrine related impurities were made using Toxtree® and Derek®.
METHODS
Several new impurities are identified, of which the desbenzylketo derivative (DBK) is proposed as a new specified degradant. DBK and the remaining unspecified lumefantrine related impurities are predicted, using Toxtree® and Derek®, to have a toxicity risk comparable to the toxicity risk of the API lumefantrine itself.
RESULTS
From unstressed, stressed and accelerated stability samples of lumefantrine API and FPPs, nine compounds were detected and characterized to be lumefantrine related impurities. One new lumefantrine related compound, DBK, was identified and characterized as a specified degradation impurity of lumefantrine in real market samples (FPPs). The in-silico toxicological investigation (Toxtree® and Derek®) indicated overall a toxicity risk for lumefantrine related impurities comparable to that of the API lumefantrine itself.
CONCLUSIONS
[ "Antimalarials", "Chemistry Techniques, Analytical", "Computer Simulation", "Drug Stability", "Ethanolamines", "Fluorenes", "Lumefantrine" ]
3059303
null
null
Methods
[SUBTITLE] Samples and chemicals [SUBSECTION] All drug substance batches (APIs), commercially available FPPs (Co-Artesiane®, Artefan®, Lumartem® and Coartem®) and the standard of desbenzylketo (DBK) lumefantrine derivative were supplied by Dafra Pharma International (Belgium). USP-Salmous standards of lumefantrine and impurity A were purchased from U.S. Pharmacopeia (Basel, Switzerland). Analytical solutions were prepared in HPLC grade tetrahydrofuran (Fisher Scientific, Leicestershire, UK) at a concentration of 0.96 mg/ml lumefantrine, which corresponds to 100% label claim (l.c.). A dilution equivalent to 0.5% l.c. is also prepared and used for the quantification of the related impurities. Hydrogen peroxide (H2O2), sodium hydroxide (NaOH) and ammonium acetate were purchased from Merck (Darmstadt, Germany), hydrochloric acid (HCl) from Sigma-Aldrich (St Louis, USA) and glacial acetic acid from Riedel-de Haën (Seelze, Germany). Sartorius (Göttingen, Germany) ultrapure 18.2 MΩ.cm quality water and HPLC grade acetonitril (Romil, Cambridge, UK) were used for HPLC-UV/MS analysis. [SUBTITLE] Liquid chromatography [SUBSECTION] HPLC-UV investigation of the impurity profiles was performed on a HPLC-PDA apparatus consisting of a Waters Alliance 2695 separation module and a Waters 2998 photodiode array detector with Empower 2 software for data acquisition (all Waters, Milford, MA, USA). For PDA detection, the UV spectrum was recorded at 190-400 nm. Quantification was performed at 266 nm. The positive ion ESI and the collision-induced dissociation (CID) mass spectra were obtained from the LC-UV/MS apparatus consisting of a Spectra System SN4000 interface, a Spectra System SCM1000 degasser, a Spectra System P1000XR pump, a Spectra System AS3000 autosampler, and a Finnigan LCQ Classic ion trap mass spectrometer in positive ion mode (all Thermo, San José, CA, USA), mass to charge range m/z 100 to m/z 2000 at unit resolution and with a peak width of 0.25 daltons/z, equipped with a Waters 2487 dual wavelength UV detector (Waters, Milford, MA, USA) and Xcalibur 2.0 software (Thermo) for data acquisition. ESI was conducted using a needle voltage of 4.5 kV. Nitrogen was used as sheath and auxiliary gas with the heated capillary set at 250°C. UV-detection was used for quantification (at 266 nm), while ESI-ion trap MS detection was used for identification.
LC determination of impurities in lumefantrine samples was performed using a Purospher STAR RP-18 endcapped (150 × 4.6 mm, 5 μm particle size) column (Merck, Darmstadt, Germany) with guard column at 30°C under isocratic conditions with a mobile phase consisting of ammonium acetate (pH 4.9; 0.1 M) and acetonitrile (10:90, v/v). The flow rate was set at 2.0 mL/min (minimal run time: 30 min.). The injection volume was 10 μl. Under these conditions, lumefantrine elutes at approximately 22 min. System suitability tests (SSTs) were established as the plate number on lumefantrine (N ≥ 8.2 × 103) and the oxidative stress degradation product (N ≥ 2.4 × 103), the signal-to-noise ratio of 0.5% l.c. lumefantrine solution (S/N ≥ 30), the peak area ratio of the 0.5% l.c. versus 100% l.c. (between 0.4 and 0.6) and the relative position of the in-situ prepared N-oxide by H2O2 treatment (RRT between 0.12 and 0.22). The LC method was validated for the determination of lumefantrine and its related impurities. The selectivity of the developed chromatographic method was established by the separation of lumefantrine and its impurities. A correlation coefficient (r2) of 0.9998 for lumefantrine (0.0006 to 0.01 mg/ml) and 1.0 for impurity A and DBK (0.001 to 0.1 mg/ml) demonstrated that the HPLC method is linear in the lower range. LOD/LOQ values for lumefantrine, DBK and impurity A were calculated (S/N = 3 resp. 10): 0.004 mg/ml and 0.026 mg/ml for lumefantrine (0.004% respectively 0.026% l.c.), 0.011 mg/ml and 0.040 mg/ml for DBK (0.012% respectively 0.042% l.c.) and 0.110 mg/ml and 0.393 mg/ml for impurity A (0.115% respectively 0.409% l.c.). The analytical stability of lumefantrine, impurity A and DBK was confirmed over a storage period of 24 hours at 5°C, i.e. the sample compartment temperature. Accuracy and precision were evaluated by repeated analysis (n = 6), with 102.6% l.c. recovery and 2.1%, respectively 2.86%, for repeatability, respectively intermediate precision. The relative retention time (RRT) is defined as the ratio of the retention time of the compound versus the retention time of lumefantrine. The relative response factor is defined as the ratio of the area of the compound versus the area of lumefantrine, both injected at the same concentration. [SUBTITLE] Forced degradation [SUBSECTION] Forced degradation of lumefantrine API and FPP was performed under heat, light, acidic, alkaline and oxidative stress conditions. In heat stress studies, the FPP powder (one gram) was incubated at 40, 50 and 60°C for respectively four, three and two days. The placebo powder (one gram) was incubated for two days at 60°C. In light stress studies, the FPP and placebo powder (one gram) were subjected to UV (three days incubation) and VIS (seven days incubation) light in a qualified Pharma 500 L stability cabinet (Weiss Technik) according to ICH. Finally, FPP and placebo were stressed by adding 10 ml of 1 M HCl (acidic), 1 M NaOH (alkaline) or 1% H2O2 (oxidative) to one gram of the powder to be examined. Samples were incubated, up to eight days, at 5, 25, 40, 50 and 60°C. After the incubation, samples were neutralized using NaOH (acidic), HCl (alkaline) or Na2S2O5 (oxidative), and the solvent evaporated using freeze-drying. The resulting powdered samples were dispersed in THF, centrifuged, HPLC-filtrated and analyzed using HPLC-DAD/UV-ESI/MS.
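The quantification conventions defined in the Liquid chromatography subsection above (RRT, relative response factor, quantification against the 0.5% l.c. dilution) and the S/N = 3 and S/N = 10 criteria used for LOD/LOQ reduce to simple ratios. A minimal sketch of that arithmetic follows; the peak areas, retention times and signal-to-noise readings in the example call are illustrative assumptions, not study data, and only the 0.5% l.c. reference dilution, the S/N criteria and the 0.10% reporting threshold come from the text above.

```python
# Minimal sketch of the quantification conventions described above.
# All numeric inputs in the example call are illustrative assumptions, not study data.

REPORTING_THRESHOLD_PCT = 0.10  # reporting threshold for normalized quantification (% l.c.)

def relative_retention_time(rt_compound: float, rt_lumefantrine: float) -> float:
    """RRT: retention time of the compound divided by that of lumefantrine."""
    return rt_compound / rt_lumefantrine

def relative_response_factor(area_compound: float, area_lumefantrine: float) -> float:
    """RRF: peak area of the compound divided by that of lumefantrine,
    both injected at the same concentration."""
    return area_compound / area_lumefantrine

def impurity_level_pct(area_impurity: float, area_05pct_dilution: float, rrf: float = 1.0) -> float:
    """Impurity level in % l.c., quantified against the 0.5% l.c. lumefantrine
    dilution and corrected by the relative response factor."""
    return 0.5 * (area_impurity / area_05pct_dilution) / rrf

def sn_based_limit(conc: float, sn_measured: float, target_sn: float) -> float:
    """Concentration expected to give the target S/N (3 for LOD, 10 for LOQ),
    assuming the signal scales linearly with concentration near the limit."""
    return conc * target_sn / sn_measured

# Illustrative use only
rrt = relative_retention_time(rt_compound=8.0, rt_lumefantrine=22.0)
level = impurity_level_pct(area_impurity=1.5e4, area_05pct_dilution=5.0e4, rrf=0.9)
print(f"RRT {rrt:.2f}; level {level:.2f}% l.c.; reportable: {level >= REPORTING_THRESHOLD_PCT}")
```

Under this convention, an impurity whose normalized level falls below 0.10% l.c. would simply not be reported.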
Additionally, different batches of FPP were included in long-term (up to 24 months, 30°C, 70-75% RH) and accelerated (up to 6 months, 40°C, 75% RH) stability studies according to ICH stability guidelines [24]. [SUBTITLE] In-silico toxicological predictions [SUBSECTION] To make in-silico toxicological predictions for lumefantrine and its identified related impurities, two sources of toxicological predictions were used: Derek® (Nexus v2.0) for Windows developed by Lhasa Limited (Leeds, UK) and Toxtree® (v1.60) developed by Ideaconsult Ltd. (Sofia, Bulgaria). Derek® (Nexus v2.0) for Windows is an expert knowledge base system, containing descriptions of molecular substructures which have been associated with toxic endpoints (structural alerts), that predicts whether a chemical is toxic in humans, other mammals and bacteria. The programme applies structure-activity relationships ((Q)SARs) and other expert knowledge rules to derive a reasoned conclusion about the potential toxicity of the query chemical [25,26].
Toxtree® is an open source application, which is able to estimate toxic hazards by applying a decision tree approach and making structure-based predictions for a number of toxicological endpoints using different modules. Hazard estimations were generated using three Toxtree® modules: Cramer rules with extensions, Benigni/Bossa rulebase and structure alerts for the in vivo micronucleus assay in rodents.
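Neither the Derek® nor the Toxtree® rulebase is reproduced here, but the notion of a structure alert used by both programmes (a substructure query matched against the molecule) can be illustrated generically. A minimal sketch follows, assuming RDKit is available; the two SMARTS alerts and the query structure are arbitrary stand-ins chosen for demonstration only, not the Cramer, Benigni/Bossa or Derek® rules, and not an actual lumefantrine impurity.

```python
from rdkit import Chem

# Illustrative structure alerts (assumed SMARTS patterns, not the Toxtree®/Derek® rulebases)
ALERTS = {
    "aromatic nitro group": "[$([NX3](=O)=O),$([NX3+](=O)[O-])]c",
    "alpha,beta-unsaturated carbonyl": "C=CC=O",
}

def flag_alerts(smiles: str):
    """Return the names of alert substructures found in the query molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparsable SMILES"]
    return [name for name, smarts in ALERTS.items()
            if mol.HasSubstructMatch(Chem.MolFromSmarts(smarts))]

# Stand-in query (nitrobenzene), unrelated to lumefantrine, used only to show a positive match
print(flag_alerts("O=[N+]([O-])c1ccccc1"))  # -> ['aromatic nitro group']
```

In the actual programmes the rulebases are far richer and the output is an endpoint-specific hazard call rather than a simple list of matched alerts.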
null
null
null
null
[ "Background", "Samples and chemicals", "Liquid chromatography", "Forced degradation", "In-silico toxicological predictions", "Results and discussion", "HPLC analysis of lumefantrine containing samples", "Identification of lumefantrine impurities with LC-MS/MS", "Specified lumefantrine impurity DBK", "In-silico toxicological predictions of lumefantrine and its related impurities", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Lumefantrine (benflumetol) is a 2,4,7,9-substituted fluorene (2,3-benzindene) derivative (Figure 1). It was synthesized in the 1970s by the Academy of Military Medical Sciences, in Beijing, and registered in China for anti-malarial use in 1987. It is now commercially available in fixed combination products, mostly with β-artemether (ACT, artemisinin-based combination therapy), which are proven to be highly efficacious for treatment of uncomplicated falciparum malaria. In addition to the compound itself, the compound proved to possess marked blood schizontocidal activity against a wide range of Plasmodium, among them chloroquine-resistant Plasmodium falciparum [1-5].\nStructure of Lumefantrine.\nBiochemical studies suggest that its anti-malarial effect involves lysosomal trapping of the drug in the food vacuole of the intra-erythrocytic parasite, followed by binding to haem that is produced in the course of haemoglobin digestion. This binding prevents the polymerization of haem into haemozoin, hence inhibiting the detoxification of haem. Investigations involving aryl-methanol compounds have suggested the coordination of the iron centre of haem (Fe(III)PPIX) and related porphyrins by the alcohol functionality, indicating the structural activity relationship of the anti-malarial drug lumefantrine [6]. Hence, structural analogues of lumefantrine also posses marked anti-malarial effects. Halofantrine, an aryl amino alcohol analogue of lumefantrine, is also an anti-malarial drug, but is known to be potentially cardiotoxic [7]. Monodesbutyl-benflumetol, a metabolite of lumefantrine, exerts higher blood schizontocidal activity in Plasmodium falciparum, as well as in Plasmodium vivax. It is about 10-times more effective then lumefantrine [8]. The secondary alcohol permits the formation of dextrorotatory and levorotatory lumefantrine enantiomers and routine synthesis yield the racemate of (+)-lumefantrine and (-)-lumefantrine, which have almost identical potency. Therefore, from the activity point of view, there is no reason to use only one of the enantiomers of lumefantrine instead of the racemate. Moreover, in view of the low animal and human toxicity of the lumefantrine racemate, no major toxicological differences between the two enantiomers are expected [9]. However, other impurities resulting from synthesis might be present.\nLumefantrine containing combinations are incorporated in the WHO essential drug list for the treatment of malaria in endemic areas of the tropical climate. Due to the logistic system [10], degradation products may be spontaneously generated during distribution and storage. Control of such impurities in drug substances and finished drug products is required as they might impart different efficacy and bioavailability to the drug and/or they might produce different adverse and toxic effects to the patients [11].\nThe safety of a drug product is dependent not only on the toxicological properties of the active drug substance, but also on the toxicological properties of its impurities [12]. Thus, there is an ever-increasing interest in impurities present in APIs and FPPs [13]. Impurity profiling (i.e. the identity as well as the quantity of impurities in the pharmaceutical drug) is now gaining critical attention from regulatory authorities. The different Pharmacopoeias, such as the European Pharmacopoeia (Ph.Eur.), United States Pharmacopeia (USP) and International Pharmacopoeia (Ph.Int.) 
are incorporating specification limits to acceptable levels of impurities present in the API's or FPPs formulations, based upon found levels in approved market samples [11,14,15]. Moreover, ICH guideline Q3A(R) stipulates different thresholds or action limits based upon the maximum daily dose (MDD). For lumefantrine formulations (FPP), with a MDD of 960 mg/day, these are defined as 0.10% reporting threshold, 0.20% identification threshold and 0.20% qualification threshold [16].\nUSP Salmous (Standards for Articles Legally Marketed Outside the US) and Ph. Int. have already established specification limits for three lumefantrine related impurities: lumefantrine related compound A ((RS,Z)-2-(Dibutylamino)-2-(2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)ethanol), lumefantrine related compound BA ((1S,3R,5R)-1,3-bis((EZ)-2,7-Dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-2,6-dioxabicyclo[3.1.0]hexane) and lumefantrine related compound BB ((2-((EZ)-2,6-Dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-3'-((EZ)-2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-2,2'-bioxirane). The USP Salmous specification limits of these impurities are 0.1% for both impurities A and BA and 0.3% for impurity BB [17]. The Ph.Int. lumefantrine monograph lists the same three compounds as identified potential impurities, with specification limits of 0.1% for impurity A and 0.3% for impurity BA and BB [18].\nAnalytical procedures have been reported for the assay of lumefantrine in different FPPs, using HPLC-UV [19-22]. Furthermore, an LC/MS/MS bio-analytical method for quantification of lumefantrine in human plasma has been developed [23]. However, no impurity profile has been established for this drug, while this is considered much more critical than the assay value. In this study, the potential impurities are described, including new degradants, as well as their relevance towards specification settings and in-silico toxicological evaluation. APIs and FPPs containing lumefantrine were evaluated by HPLC, with UV detection for quantification and with ESI-iontrap MS detection for identification.", "All drug substance batches (APIs), commercially available FPPs (Co-Artesiane®, Artefan®, Lumartem® and Coartem®) and the standard of desbenzylketo (DBK) lumefantrine derivative were supplied by Dafra Pharma International (Belgium). USP-Salmous standards of lumefantrine and impurity A were purchased from U.S. Pharmacopeia (Basel, Switzerland). Analytical solutions were prepared in HPLC grade tetrahydrofuran (Fisher Scientific, Leicestershire, UK) at a concentration of 0.96 mg/ml lumefantrine, which corresponds to 100% label claim (l.c.). A dilution equivalent to 0.5% l.c. is also prepared and used for the quantification of the related impurities. Hydrogen peroxide (H2O2), sodium hydroxide (NaOH) and ammonium acetate were purchased from Merck (Darmstadt, Germany), hydrochloric acid (HCl) from Sigma-Aldrich (St Louis, USA) and glacial acetic acid from Riedel-de Haën (Seelze, Germany). Sartorius (Göttingen, Germany) ultrapure 18.2 MΩ.cm quality water and HPLC grade acetonitril (Romil, Cambridge, UK) were used for HPLC-UV/MS analysis.", "HPLC-UV investigation of the impurity profiles was performed on a HPLC-PDA apparatus consisting of a Waters Alliance 2695 separation module and a Waters 2998 photodiode array detector with Empower 2 software for data acquisition (all Waters, Milford, MA, USA). For PDA detection, the UV spectrum was recorded at 190-400 nm. Quantification was performed at 266 nm. 
The positive ion ESI and the collision-induced dissociation (CID) mass spectra were obtained from the LC-UV/MS apparatus consisting of a Spectra System SN4000 interface, a Spectra System SCM1000 degasser, a Spectra System P1000XR pump, a Spectra System AS3000 autosampler, and a Finnigan LCQ Classic ion trap mass spectrometer in positive ion mode (all Thermo, San José, CA, USA), mass to charge range m/z 100 to m/z 2000 at unit resolution and with a peak width of 0.25 daltons/z, equipped with a Waters 2487 dual wavelength UV detector (Waters, Milford, MA, USA) and Xcalibur 2.0 software (Thermo) for data acquisition. ESI was conducted using a needle voltage of 4.5 kV. Nitrogen was used as sheath and auxiliary gas with the heated capillary set at 250°C. UV-detection was used for quantification (at 266 nm), while ESI-ion trap MS detection was used for identification.\nLC determination of impurities in lumefantrine samples was performed using a Purospher STAR RP-18 endcapped (150 × 4.6 mm, 5 μm particle size) column (Merck, Darmstadt, Germany) with guard column at 30°C under isocratic conditions with a mobile phase consisting of ammonium acetate (pH 4.9; 0.1 M) and acetonitrile (10:90, v/v). The flow rate was set at 2.0 mL/min (minimal run time: 30 min.). The injection volume was 10 μl. Under these conditions, lumefantrine elutes at approximately 22 min. System suitability tests (SSTs) were established as the plate number on lumefantrine (N ≥ 8.2 × 103) and the oxidative stress degradation product (N ≥ 2.4 × 103), the signal-to-noise ratio of 0.5% l.c. lumefantrine solution (S/N ≥ 30), the peak area ratio of the 0.5% l.c. versus 100% l.c. (between 0.4 and 0.6) and the relative position of the in-situ prepared N-oxide by H2O2 treatment (RRT between 0.12 and 0.22).\nThe LC method was validated for the determination of lumefantrine and its related impurities. The selectivity of the developed chromatographic method was established by the separation of lumefantrine and its impurities. A correlation coefficient (r2) of 0.9998 for lumefantrine (0.0006 to 0.01 mg/ml) and 1.0 for impurity A and DBK (0.001 to 0.1 mg/ml) demonstrated that the HPLC method is linear in the lower range. LOD/LOQ values for lumefantrine, DBK and impurity A were calculated (S/N = 3 resp. 10): 0.004 mg/ml and 0.026 mg/ml for lumefantrine (0.004% respectively 0.026% l.c.), 0.011 mg/ml and 0.040 mg/ml for DBK (0.012% respectively 0.042% l.c.) and 0.110 mg/ml and 0.393 mg/ml for impurity A (0.115% respectively 0.409% l.c.). The analytical stability of lumefantrine, impurity A and DBK was confirmed over a storage period of 24 hours at 5°C, i.e. the sample compartment temperature. Accuracy and precision were evaluated by repeated analysis (n = 6), with 102.6% l.c. recovery and 2.1%, respectively 2.86%, for repeatability, respectively intermediate precision.\nThe relative retention time (RRT) is defined as the ratio of the retention time of the compound versus the retention time of lumefantrine. The relative response factor is defined as the ratio of the area of the compound versus the area of lumefantrine, both injected at the same concentration.", "Forced degradation of lumefantrine API and FPP was performed under heat, light, acidic, alkaline and oxidative stress conditions. In heat stress studies, the FPP powder (one gram) was incubated at 40, 50 and 60°C for respectively four, three and two days. The placebo powder (one gram) was incubated for two days at 60°C. 
In the light stress studies, the FPP and placebo powders (one gram) were subjected to UV light (three days of incubation) and visible light (seven days of incubation) in a qualified Pharma 500 L stability cabinet (Weiss Technik) according to ICH. Finally, FPP and placebo were stressed by adding 10 ml of 1 M HCl (acidic), 1 M NaOH (alkaline) or 1% H2O2 (oxidative) to one gram of the powder to be examined. Samples were incubated for up to eight days at 5, 25, 40, 50 and 60°C. After incubation, samples were neutralized using NaOH (acidic), HCl (alkaline) or Na2S2O5 (oxidative), and the solvent was evaporated by freeze-drying. The resulting powdered samples were dispersed in THF, centrifuged, filtered for HPLC and analyzed using HPLC-DAD/UV-ESI/MS.

Additionally, different batches of FPP were included in long-term (up to 24 months, 30°C, 70-75% RH) and accelerated (up to 6 months, 40°C, 75% RH) stability studies according to the ICH stability guidelines [24].

In-silico toxicological predictions

To make in-silico toxicological predictions for lumefantrine and its identified related impurities, two sources of toxicological predictions were used: Derek® (Nexus v2.0) for Windows, developed by Lhasa Limited (Leeds, UK), and Toxtree® (v1.60), developed by Ideaconsult Ltd. (Sofia, Bulgaria). Derek® is an expert knowledge-base system containing descriptions of molecular substructures that have been associated with toxic endpoints (structural alerts); it predicts whether a chemical is toxic in humans, other mammals and bacteria. The programme applies structure-activity relationships ((Q)SARs) and other expert knowledge rules to derive a reasoned conclusion about the potential toxicity of the query chemical [25,26]. Toxtree® is an open-source application that estimates toxic hazards by applying a decision-tree approach and making structure-based predictions for a number of toxicological endpoints using different modules. Hazard estimations were generated using three Toxtree® modules: the Cramer rules with extensions, the Benigni/Bossa rulebase and the structural alerts for the in vivo micronucleus assay in rodents.

Results and discussion

HPLC analysis of lumefantrine containing samples

As the Ph.Int./USP Salmous HPLC methods [17,18] use a complex, step-wise gradient with an ion-pairing reagent, they are not compatible with MS detection. Moreover, the gradient is required for the detection of the synthesis impurities BA and BB, which are structurally very different from lumefantrine and its other impurities, especially the degradants. Therefore, an isocratic RP-HPLC method without an MS-incompatible ion-pairing reagent was used. The chromatographic characteristics of these two synthesis impurities on this system could not be evaluated, owing to the unavailability of reference standards for these impurities.

Lumefantrine API and FPPs were exposed to diverse stress conditions for different periods. Additionally, FPPs were placed on long-term and accelerated stability studies according to ICH. FPP samples in the long-term stability study were kept for up to twenty-four months at 30°C/75% RH; in the accelerated study, FPP samples were kept for up to six months at 40°C/75% RH. The unstressed and stressed API samples, as well as the unstressed (release), accelerated, long-term and stressed FPP samples, were analyzed with the validated HPLC method. Five synthesis-related and four stress-related lumefantrine impurities were observed in lumefantrine-containing samples (Table 1).
The relative retention time (RRT) of these impurities, relative to lumefantrine, was determined and normalized quantification was performed with a reporting threshold of 0.10%. The maximum levels of lumefantrine-related impurities actually observed in the different samples under the different conditions are given in Table 2. None of these lumefantrine-related impurities was observed above the reporting threshold (i.e. > 0.10%) in unstressed API and release (T0) FPP samples, except for the monodesbutyl derivative. However, these lumefantrine degradants were observed in stressed API and FPP samples and in the FPP stability studies. Compounds 1, 2 and 3 were observed in oxidatively stressed API samples. Three lumefantrine-related impurities were observed in stressed FPP stability samples: compound 1 (60°C, 1 M NaOH, T2d), compound 3 (60°C, 1% H2O2, T2d) and compound 4 (50°C, 1 M HCl, T3d). Compounds 3 and 4 were also detected in the accelerated (40°C/75% RH, T6m) and long-term stability studies. A typical UV chromatogram illustrating the separation of lumefantrine N-oxide, DBK, the desbenzyl lumefantrine derivative and lumefantrine is given in Figure 2.

Table 1. Structural information for the observed and/or reported lumefantrine-related impurities.

Table 2. Maximum observed levels (%) of lumefantrine-related impurities. (1) RT: reporting threshold = 0.10%.

Figure 2. UV chromatogram of a mixed sample illustrating lumefantrine N-oxide (3), DBK (4), the desbenzyl lumefantrine derivative (5) and lumefantrine (L).
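One plausible way to express the normalized quantification described above, referencing each impurity peak to the 0.5% l.c. lumefantrine standard and applying the 0.10% reporting threshold, is sketched below. The peak areas and helper names are invented for illustration; this is not presented as the validated calculation itself.

```python
# Hypothetical illustration (invented peak areas, assumed helper names) of normalized
# impurity quantification against the 0.5% l.c. lumefantrine external standard, with
# the 0.10% reporting threshold applied.

REPORTING_THRESHOLD = 0.10          # % of label claim
STANDARD_LEVEL = 0.5                # the 0.5% l.c. external standard

def impurity_percent(peak_area, standard_area, rrf=1.0):
    """Impurity level (% l.c.) from its peak area, optionally corrected for its RRF."""
    return (peak_area / standard_area) * STANDARD_LEVEL / rrf

def reportable(areas, standard_area):
    """Keep only impurities at or above the reporting threshold."""
    levels = {name: impurity_percent(a, standard_area) for name, a in areas.items()}
    return {name: round(p, 2) for name, p in levels.items() if p >= REPORTING_THRESHOLD}

# Invented example: the 0.5% l.c. standard gives 120 000 area counts.
print(reportable({"compound 3 (N-oxide)": 31_000, "compound 4 (DBK)": 18_000}, 120_000))
# -> {'compound 3 (N-oxide)': 0.13}; DBK at 0.075% falls below the 0.10% threshold
```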
Identification of lumefantrine impurities with LC-MS/MS

The lumefantrine impurity peaks observed in stressed and unstressed API and FPPs (related to synthesis as well as to degradation processes) were identified using LC-MS/MS; one of them is described here for the first time and proposed as a new specified lumefantrine-related impurity. The desbutyl, desbenzyl and isomeric compound A derivatives are already known lumefantrine impurities. The analytical characteristics of the remaining unidentified lumefantrine-related impurities were obtained from the MS data: m/z values (Table 3), isotopic distributions in the mass spectra (Figure 3) and MS/MS fragmentation patterns for structural identification.

Table 3. HPLC characteristics of lumefantrine-related impurities. (1) Retention time (min); (2) relative retention time.

The mono-isotopic mass of lumefantrine [(1RS)-2-(Dibutylamino)-1-[(Z)-2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl]ethanol] was calculated to be 527.15. The mass spectrum of the lumefantrine main peak showed the most abundant ion at m/z 528.10, with an isotopic distribution corresponding to the three chlorine atoms in its structure (35Cl at 75.77% and 37Cl at 24.23%). In the mass spectrum of compound 1 (RRT ~ 0.08), m/z 436.14 was the most abundant ion. Its isotopic distribution is suggestive of a compound possessing two chlorine atoms and is identical to that of compound 4. The most abundant m/z value for compound 4 is 420.13, corresponding to a molecular formula of C23H27NO2Cl2, i.e. the desbenzylketo derivative (DBK). This MS-derived structure was confirmed by chemical synthesis of a DBK reference and IR and NMR spectroscopic structure confirmation. The DBK reference standard gave the same chromatographic retention characteristics and DAD-UV spectrum as degradant 4 found in the samples. Based on the observed m/z values of compound 1 and DBK, compound 1 carries an additional oxygen and is thus assigned as the N-oxide of DBK. The most abundant ion found for compound 2 was m/z 474.00. Its isotopic distribution is characteristic of a compound possessing three chlorines and the molecular formula C26H24NOCl3, i.e. the monodesbutyl derivative. As this compound is more hydrophilic than lumefantrine, it elutes much earlier than lumefantrine. Compound 3 was found in the oxidatively stressed FPP samples.
Its most abundant m/z is 544.08, with an isotopic distribution corresponding to that of lumefantrine, giving the molecular formula C30H32NO2Cl3. Based on the observed m/z values of compound 3 and lumefantrine, compound 3 carries an additional oxygen. MS/MS fragmentation spectra of compound 3, obtained by collision-induced dissociation (CID, energy 100 eV), showed peaks at m/z 526.12 (loss of H2O), 470.10, 396.99, 380.95, 346.23, 305.58, 298.06 (loss of C14H7Cl2) and 152.30. This impurity was thus identified as lumefantrine N-oxide. Compounds 6, 7 and 9 gave identical mass spectra (Figure 3), with the most abundant ion found at m/z 544.12. The isotopic distributions are characteristic of a compound containing three chlorines and a molecular formula of C30H32NO2Cl3, i.e. oxides of lumefantrine. As these three impurities elute at different retention times, they are most probably isomeric compounds with an -OH function at different positions on the lumefantrine aromatic ring structure.

Figure 3. Isotopic-distribution mass spectra of lumefantrine-related impurities, with the most abundant m/z observed: (1) desbenzylketo N-oxide; (2) monodesbutyl derivative; (3) lumefantrine N-oxide; (4) desbenzylketo derivative; (5) desbenzyl derivative; (6, 7, 9) oxides of lumefantrine; (8) impurity A (USP/Ph.Int.).
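The assignment of the number of chlorine atoms from the isotopic envelopes follows directly from the 35Cl/37Cl abundances quoted above: the relative M, M+2, M+4... intensities are given by a binomial expansion. The small sketch below reproduces that reasoning for one, two and three chlorines; it is an illustration only and was not part of the experimental work.

```python
from math import comb

# Relative chlorine isotope abundances quoted in the text (35Cl / 37Cl).
P_CL35, P_CL37 = 0.7577, 0.2423

def chlorine_pattern(n_cl):
    """Relative intensities of the M, M+2, ..., M+2n isotopologues for a species with
    n chlorine atoms, normalised to the most intense peak (binomial 35Cl/37Cl mixing)."""
    raw = [comb(n_cl, k) * P_CL35 ** (n_cl - k) * P_CL37 ** k for k in range(n_cl + 1)]
    top = max(raw)
    return [round(x / top, 3) for x in raw]

for n in (1, 2, 3):
    print(n, chlorine_pattern(n))
# 2 Cl -> [1.0, 0.64, 0.102]         : pattern seen for DBK and its N-oxide
# 3 Cl -> [1.0, 0.959, 0.307, 0.033] : pattern seen for lumefantrine and its oxides
```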
Specified lumefantrine impurity DBK

The lumefantrine-related compound 4, DBK (RRT ~ 0.33), was not only formed under forced-degradation conditions but was also observed in accelerated and long-term stability samples of the FPP. Moreover, DBK was found to be present in market samples at concentrations ranging between 0.03% and 0.12%, determined by area normalization. Subsequently, DBK was synthesized for further analytical characterization, including confirmation of its relative retention time (RRT) and determination of its relative response factor (RRF) at the detection wavelength of 266 nm. The RRT and the RRF of DBK relative to lumefantrine were experimentally determined to be 0.33 and 2.87, respectively. The DAD-UV spectra recorded for lumefantrine and DBK (Figure 4) showed the wavelength of maximum absorption to be higher for DBK (approximately 266 nm) than for lumefantrine (approximately 234 nm), owing to the benzyl group being replaced by the keto function. This impurity was observed under oxidative and acidic stress degradation as well as in the accelerated and long-term ICH stability studies, justifying its classification as a specified degradant.

Figure 4. DAD-UV spectra of lumefantrine (left) and DBK (right), showing the observed wavelengths of maximum absorption.
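Since the market-sample levels of DBK (0.03-0.12%) were obtained by plain area normalization, it is worth noting how an RRF of 2.87 would typically enter such a calculation: an impurity that absorbs more strongly than the parent at the quantification wavelength is overestimated by area-%, and dividing by the RRF corrects for this. The sketch below is an illustration with invented areas, not a restatement of the reported figures.

```python
# Illustration only (invented areas): how a relative response factor is typically applied.
# With an RRF of 2.87 at 266 nm, DBK responds more strongly than lumefantrine, so a plain
# area-% figure overstates its content; dividing by the RRF corrects for this.

RRF_DBK = 2.87   # experimentally determined relative to lumefantrine at 266 nm

def area_percent(area_impurity, area_total):
    """Uncorrected area-% of a peak within the chromatogram."""
    return 100.0 * area_impurity / area_total

def rrf_corrected(area_pct, rrf):
    """Convert an area-% value into an RRF-corrected content estimate."""
    return area_pct / rrf

raw = area_percent(1_200, 1_000_000)                          # 0.12 area-% (invented numbers)
print(round(raw, 3), round(rrf_corrected(raw, RRF_DBK), 3))   # 0.12 -> 0.042
```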
In-silico toxicological predictions of lumefantrine and its related impurities

Using the knowledge-based expert systems Toxtree® and Derek®, general toxicological and carcinogenic alerts were investigated for lumefantrine as well as for its observed and already described related impurities. Since DBK is a specified lumefantrine-related compound, its toxicity profile is of paramount importance. Based on the Cramer rules with extensions, Toxtree® clearly predicted general toxicity risks (class III) and genotoxic alerts (polycyclic aromatic hydrocarbons, halogenated benzene and H-acceptor-path3-H-acceptor) for DBK, identical to those of the API lumefantrine itself. According to the threshold of toxicological concern (TTC), the daily intake of compounds classified in class III should remain below 90 μg/person (60 kg)/day to be regarded as non-toxic [27]. Hence, the TTC value of 90 μg against the MDD of 960 mg lumefantrine corresponds to a limit of 0.01% (90 μg/960 mg), which is far below the levels actually observed.

The Derek® toxicity profile of DBK comprises several general toxicity alerts shared with lumefantrine (hERG channel inhibition and α2u-globulin nephropathy [28]), plus additional phototoxicity and photoallergenicity alerts. However, Derek® did not trigger any genotoxicity or carcinogenicity alert for DBK.

The other lumefantrine-related impurities were likewise predicted by Toxtree® to have a high general toxicity similar to that of lumefantrine itself (class III), based on the Cramer rules with extensions, together with genotoxicity risks. Derek® again indicated a limited toxicity profile for the majority of the lumefantrine-related impurities, comparable to lumefantrine (hERG channel inhibition, α2u-globulin nephropathy). Only impurity BB triggered additional toxicity alerts (carcinogenicity/mutagenicity, chromosome damage, eye/skin irritation, developmental toxicity, skin sensitization), indicating a less favourable predicted profile than lumefantrine itself.
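The TTC-based limit quoted in this subsection is a one-line calculation; the sketch below simply reproduces it (the values come from the text; the function name is ours).

```python
# Reproducing the TTC arithmetic quoted above: a threshold of toxicological concern of
# 90 µg/day set against the maximum daily dose of 960 mg lumefantrine.

def ttc_limit_percent(ttc_ug_per_day, mdd_mg_per_day):
    """Impurity limit, as a percentage of the daily dose, implied by the TTC."""
    return 100.0 * (ttc_ug_per_day / 1000.0) / mdd_mg_per_day

print(round(ttc_limit_percent(90.0, 960.0), 4))   # 0.0094 -> roughly the 0.01% quoted
```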
Conclusions

An exhaustive impurity profiling of lumefantrine was performed using HPLC-UV/ESI-ion trap MS. From unstressed, stressed and accelerated stability samples of lumefantrine API and FPPs, nine compounds were detected and characterized as lumefantrine-related impurities. One new lumefantrine-related compound, DBK, was identified and characterized as a specified degradation impurity of lumefantrine in real market samples (FPPs). The in-silico toxicological investigation (Toxtree® and Derek®) indicated an overall lower toxicity for the specified impurity DBK compared with the API lumefantrine itself.

Competing interests

The authors MV, BB, EVG and BDS would like to acknowledge that Dafra sponsored the analytical development. However, these authors do not work for, or represent in any way, Dafra. FHJ works partly for Dafra.

Authors' contributions

MV and SS performed part of the analytical experiments, including validation, carried out the in-silico evaluation and wrote the article. BB, EVG and SVD performed part of the analytical experiments, including the stability and MS experiments, and carried out QC on the data. CB and LD critically reviewed and discussed this manuscript. FHJ designed the experiments, oversaw the synthesis of the DBK reference and critically reviewed the manuscript. BDS was the overall study director, responsible for the design of the experiments, interpretation of the data and writing of this manuscript. All authors read and approved the final manuscript.
[ "Lumefantrine (benflumetol) is a 2,4,7,9-substituted fluorene (2,3-benzindene) derivative (Figure 1). It was synthesized in the 1970s by the Academy of Military Medical Sciences, in Beijing, and registered in China for anti-malarial use in 1987. It is now commercially available in fixed combination products, mostly with β-artemether (ACT, artemisinin-based combination therapy), which are proven to be highly efficacious for treatment of uncomplicated falciparum malaria. In addition to the compound itself, the compound proved to possess marked blood schizontocidal activity against a wide range of Plasmodium, among them chloroquine-resistant Plasmodium falciparum [1-5].\nStructure of Lumefantrine.\nBiochemical studies suggest that its anti-malarial effect involves lysosomal trapping of the drug in the food vacuole of the intra-erythrocytic parasite, followed by binding to haem that is produced in the course of haemoglobin digestion. This binding prevents the polymerization of haem into haemozoin, hence inhibiting the detoxification of haem. Investigations involving aryl-methanol compounds have suggested the coordination of the iron centre of haem (Fe(III)PPIX) and related porphyrins by the alcohol functionality, indicating the structural activity relationship of the anti-malarial drug lumefantrine [6]. Hence, structural analogues of lumefantrine also posses marked anti-malarial effects. Halofantrine, an aryl amino alcohol analogue of lumefantrine, is also an anti-malarial drug, but is known to be potentially cardiotoxic [7]. Monodesbutyl-benflumetol, a metabolite of lumefantrine, exerts higher blood schizontocidal activity in Plasmodium falciparum, as well as in Plasmodium vivax. It is about 10-times more effective then lumefantrine [8]. The secondary alcohol permits the formation of dextrorotatory and levorotatory lumefantrine enantiomers and routine synthesis yield the racemate of (+)-lumefantrine and (-)-lumefantrine, which have almost identical potency. Therefore, from the activity point of view, there is no reason to use only one of the enantiomers of lumefantrine instead of the racemate. Moreover, in view of the low animal and human toxicity of the lumefantrine racemate, no major toxicological differences between the two enantiomers are expected [9]. However, other impurities resulting from synthesis might be present.\nLumefantrine containing combinations are incorporated in the WHO essential drug list for the treatment of malaria in endemic areas of the tropical climate. Due to the logistic system [10], degradation products may be spontaneously generated during distribution and storage. Control of such impurities in drug substances and finished drug products is required as they might impart different efficacy and bioavailability to the drug and/or they might produce different adverse and toxic effects to the patients [11].\nThe safety of a drug product is dependent not only on the toxicological properties of the active drug substance, but also on the toxicological properties of its impurities [12]. Thus, there is an ever-increasing interest in impurities present in APIs and FPPs [13]. Impurity profiling (i.e. the identity as well as the quantity of impurities in the pharmaceutical drug) is now gaining critical attention from regulatory authorities. The different Pharmacopoeias, such as the European Pharmacopoeia (Ph.Eur.), United States Pharmacopeia (USP) and International Pharmacopoeia (Ph.Int.) 
are incorporating specification limits to acceptable levels of impurities present in the API's or FPPs formulations, based upon found levels in approved market samples [11,14,15]. Moreover, ICH guideline Q3A(R) stipulates different thresholds or action limits based upon the maximum daily dose (MDD). For lumefantrine formulations (FPP), with a MDD of 960 mg/day, these are defined as 0.10% reporting threshold, 0.20% identification threshold and 0.20% qualification threshold [16].\nUSP Salmous (Standards for Articles Legally Marketed Outside the US) and Ph. Int. have already established specification limits for three lumefantrine related impurities: lumefantrine related compound A ((RS,Z)-2-(Dibutylamino)-2-(2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)ethanol), lumefantrine related compound BA ((1S,3R,5R)-1,3-bis((EZ)-2,7-Dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-2,6-dioxabicyclo[3.1.0]hexane) and lumefantrine related compound BB ((2-((EZ)-2,6-Dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-3'-((EZ)-2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl)-2,2'-bioxirane). The USP Salmous specification limits of these impurities are 0.1% for both impurities A and BA and 0.3% for impurity BB [17]. The Ph.Int. lumefantrine monograph lists the same three compounds as identified potential impurities, with specification limits of 0.1% for impurity A and 0.3% for impurity BA and BB [18].\nAnalytical procedures have been reported for the assay of lumefantrine in different FPPs, using HPLC-UV [19-22]. Furthermore, an LC/MS/MS bio-analytical method for quantification of lumefantrine in human plasma has been developed [23]. However, no impurity profile has been established for this drug, while this is considered much more critical than the assay value. In this study, the potential impurities are described, including new degradants, as well as their relevance towards specification settings and in-silico toxicological evaluation. APIs and FPPs containing lumefantrine were evaluated by HPLC, with UV detection for quantification and with ESI-iontrap MS detection for identification.", "[SUBTITLE] Samples and chemicals [SUBSECTION] All drug substance batches (APIs), commercially available FPPs (Co-Artesiane®, Artefan®, Lumartem® and Coartem®) and the standard of desbenzylketo (DBK) lumefantrine derivative were supplied by Dafra Pharma International (Belgium). USP-Salmous standards of lumefantrine and impurity A were purchased from U.S. Pharmacopeia (Basel, Switzerland). Analytical solutions were prepared in HPLC grade tetrahydrofuran (Fisher Scientific, Leicestershire, UK) at a concentration of 0.96 mg/ml lumefantrine, which corresponds to 100% label claim (l.c.). A dilution equivalent to 0.5% l.c. is also prepared and used for the quantification of the related impurities. Hydrogen peroxide (H2O2), sodium hydroxide (NaOH) and ammonium acetate were purchased from Merck (Darmstadt, Germany), hydrochloric acid (HCl) from Sigma-Aldrich (St Louis, USA) and glacial acetic acid from Riedel-de Haën (Seelze, Germany). Sartorius (Göttingen, Germany) ultrapure 18.2 MΩ.cm quality water and HPLC grade acetonitril (Romil, Cambridge, UK) were used for HPLC-UV/MS analysis.\nAll drug substance batches (APIs), commercially available FPPs (Co-Artesiane®, Artefan®, Lumartem® and Coartem®) and the standard of desbenzylketo (DBK) lumefantrine derivative were supplied by Dafra Pharma International (Belgium). USP-Salmous standards of lumefantrine and impurity A were purchased from U.S. 
Pharmacopeia (Basel, Switzerland). Analytical solutions were prepared in HPLC grade tetrahydrofuran (Fisher Scientific, Leicestershire, UK) at a concentration of 0.96 mg/ml lumefantrine, which corresponds to 100% label claim (l.c.). A dilution equivalent to 0.5% l.c. is also prepared and used for the quantification of the related impurities. Hydrogen peroxide (H2O2), sodium hydroxide (NaOH) and ammonium acetate were purchased from Merck (Darmstadt, Germany), hydrochloric acid (HCl) from Sigma-Aldrich (St Louis, USA) and glacial acetic acid from Riedel-de Haën (Seelze, Germany). Sartorius (Göttingen, Germany) ultrapure 18.2 MΩ.cm quality water and HPLC grade acetonitril (Romil, Cambridge, UK) were used for HPLC-UV/MS analysis.\n[SUBTITLE] Liquid chromatography [SUBSECTION] HPLC-UV investigation of the impurity profiles was performed on a HPLC-PDA apparatus consisting of a Waters Alliance 2695 separation module and a Waters 2998 photodiode array detector with Empower 2 software for data acquisition (all Waters, Milford, MA, USA). For PDA detection, the UV spectrum was recorded at 190-400 nm. Quantification was performed at 266 nm. The positive ion ESI and the collision-induced dissociation (CID) mass spectra were obtained from the LC-UV/MS apparatus consisting of a Spectra System SN4000 interface, a Spectra System SCM1000 degasser, a Spectra System P1000XR pump, a Spectra System AS3000 autosampler, and a Finnigan LCQ Classic ion trap mass spectrometer in positive ion mode (all Thermo, San José, CA, USA), mass to charge range m/z 100 to m/z 2000 at unit resolution and with a peak width of 0.25 daltons/z, equipped with a Waters 2487 dual wavelength UV detector (Waters, Milford, MA, USA) and Xcalibur 2.0 software (Thermo) for data acquisition. ESI was conducted using a needle voltage of 4.5 kV. Nitrogen was used as sheath and auxiliary gas with the heated capillary set at 250°C. UV-detection was used for quantification (at 266 nm), while ESI-ion trap MS detection was used for identification.\nLC determination of impurities in lumefantrine samples was performed using a Purospher STAR RP-18 endcapped (150 × 4.6 mm, 5 μm particle size) column (Merck, Darmstadt, Germany) with guard column at 30°C under isocratic conditions with a mobile phase consisting of ammonium acetate (pH 4.9; 0.1 M) and acetonitrile (10:90, v/v). The flow rate was set at 2.0 mL/min (minimal run time: 30 min.). The injection volume was 10 μl. Under these conditions, lumefantrine elutes at approximately 22 min. System suitability tests (SSTs) were established as the plate number on lumefantrine (N ≥ 8.2 × 103) and the oxidative stress degradation product (N ≥ 2.4 × 103), the signal-to-noise ratio of 0.5% l.c. lumefantrine solution (S/N ≥ 30), the peak area ratio of the 0.5% l.c. versus 100% l.c. (between 0.4 and 0.6) and the relative position of the in-situ prepared N-oxide by H2O2 treatment (RRT between 0.12 and 0.22).\nThe LC method was validated for the determination of lumefantrine and its related impurities. The selectivity of the developed chromatographic method was established by the separation of lumefantrine and its impurities. A correlation coefficient (r2) of 0.9998 for lumefantrine (0.0006 to 0.01 mg/ml) and 1.0 for impurity A and DBK (0.001 to 0.1 mg/ml) demonstrated that the HPLC method is linear in the lower range. LOD/LOQ values for lumefantrine, DBK and impurity A were calculated (S/N = 3 resp. 
10): 0.004 mg/ml and 0.026 mg/ml for lumefantrine (0.004% respectively 0.026% l.c.), 0.011 mg/ml and 0.040 mg/ml for DBK (0.012% respectively 0.042% l.c.) and 0.110 mg/ml and 0.393 mg/ml for impurity A (0.115% respectively 0.409% l.c.). The analytical stability of lumefantrine, impurity A and DBK was confirmed over a storage period of 24 hours at 5°C, i.e. the sample compartment temperature. Accuracy and precision were evaluated by repeated analysis (n = 6), with 102.6% l.c. recovery and 2.1%, respectively 2.86%, for repeatability, respectively intermediate precision.\nThe relative retention time (RRT) is defined as the ratio of the retention time of the compound versus the retention time of lumefantrine. The relative response factor is defined as the ratio of the area of the compound versus the area of lumefantrine, both injected at the same concentration.\nHPLC-UV investigation of the impurity profiles was performed on a HPLC-PDA apparatus consisting of a Waters Alliance 2695 separation module and a Waters 2998 photodiode array detector with Empower 2 software for data acquisition (all Waters, Milford, MA, USA). For PDA detection, the UV spectrum was recorded at 190-400 nm. Quantification was performed at 266 nm. The positive ion ESI and the collision-induced dissociation (CID) mass spectra were obtained from the LC-UV/MS apparatus consisting of a Spectra System SN4000 interface, a Spectra System SCM1000 degasser, a Spectra System P1000XR pump, a Spectra System AS3000 autosampler, and a Finnigan LCQ Classic ion trap mass spectrometer in positive ion mode (all Thermo, San José, CA, USA), mass to charge range m/z 100 to m/z 2000 at unit resolution and with a peak width of 0.25 daltons/z, equipped with a Waters 2487 dual wavelength UV detector (Waters, Milford, MA, USA) and Xcalibur 2.0 software (Thermo) for data acquisition. ESI was conducted using a needle voltage of 4.5 kV. Nitrogen was used as sheath and auxiliary gas with the heated capillary set at 250°C. UV-detection was used for quantification (at 266 nm), while ESI-ion trap MS detection was used for identification.\nLC determination of impurities in lumefantrine samples was performed using a Purospher STAR RP-18 endcapped (150 × 4.6 mm, 5 μm particle size) column (Merck, Darmstadt, Germany) with guard column at 30°C under isocratic conditions with a mobile phase consisting of ammonium acetate (pH 4.9; 0.1 M) and acetonitrile (10:90, v/v). The flow rate was set at 2.0 mL/min (minimal run time: 30 min.). The injection volume was 10 μl. Under these conditions, lumefantrine elutes at approximately 22 min. System suitability tests (SSTs) were established as the plate number on lumefantrine (N ≥ 8.2 × 103) and the oxidative stress degradation product (N ≥ 2.4 × 103), the signal-to-noise ratio of 0.5% l.c. lumefantrine solution (S/N ≥ 30), the peak area ratio of the 0.5% l.c. versus 100% l.c. (between 0.4 and 0.6) and the relative position of the in-situ prepared N-oxide by H2O2 treatment (RRT between 0.12 and 0.22).\nThe LC method was validated for the determination of lumefantrine and its related impurities. The selectivity of the developed chromatographic method was established by the separation of lumefantrine and its impurities. A correlation coefficient (r2) of 0.9998 for lumefantrine (0.0006 to 0.01 mg/ml) and 1.0 for impurity A and DBK (0.001 to 0.1 mg/ml) demonstrated that the HPLC method is linear in the lower range. LOD/LOQ values for lumefantrine, DBK and impurity A were calculated (S/N = 3 resp. 
10): 0.004 mg/ml and 0.026 mg/ml for lumefantrine (0.004% respectively 0.026% l.c.), 0.011 mg/ml and 0.040 mg/ml for DBK (0.012% respectively 0.042% l.c.) and 0.110 mg/ml and 0.393 mg/ml for impurity A (0.115% respectively 0.409% l.c.). The analytical stability of lumefantrine, impurity A and DBK was confirmed over a storage period of 24 hours at 5°C, i.e. the sample compartment temperature. Accuracy and precision were evaluated by repeated analysis (n = 6), with 102.6% l.c. recovery and 2.1%, respectively 2.86%, for repeatability, respectively intermediate precision.\nThe relative retention time (RRT) is defined as the ratio of the retention time of the compound versus the retention time of lumefantrine. The relative response factor is defined as the ratio of the area of the compound versus the area of lumefantrine, both injected at the same concentration.\n[SUBTITLE] Forced degradation [SUBSECTION] Forced degradation of lumefantrine API and FPP was performed under heat, light, acidic, alkaline and oxidative stress conditions. In heat stress studies, the FPP powder (one gram) was incubated at 40, 50 and 60°C for respectively four, three and two days. The placebo powder (one gram) was incubated for two days at 60°C. In light stress studies, the FPP and placebo powder (one gram) were subjected to UV (three days incubation) and VIS (seven days incubation) light in a qualified Pharma 500 L stability cabinet (Weiss Technik) according to ICH. Finally, FPP and placebo were stressed by adding 10 ml of 1 M HCl (acidic), 1 M NaOH (alkaline) or 1% H2O2 (oxidative) to one gram of the powder to be examined. Samples were incubated, up to eight days, at 5, 25, 40, 50 and 60°C. After the incubation, samples were neutralized using NaOH (acidic), HCl (alkaline) or Na2S2O5 (oxidative), and the solvent evaporated using freeze-drying. The resulting powdered samples were dispersed in THF, centrifuged, HPLC-filtrated and analyzed using HPLC-DAD/UV-ESI/MS.\nAdditionally, different batches of FPP were included in long-term (up to 24 months, 30°C, 70-75% RH) and accelerated (up to 6 months, 40°C, 75% RH) stability studies according to ICH stability guidelines [24].\nForced degradation of lumefantrine API and FPP was performed under heat, light, acidic, alkaline and oxidative stress conditions. In heat stress studies, the FPP powder (one gram) was incubated at 40, 50 and 60°C for respectively four, three and two days. The placebo powder (one gram) was incubated for two days at 60°C. In light stress studies, the FPP and placebo powder (one gram) were subjected to UV (three days incubation) and VIS (seven days incubation) light in a qualified Pharma 500 L stability cabinet (Weiss Technik) according to ICH. Finally, FPP and placebo were stressed by adding 10 ml of 1 M HCl (acidic), 1 M NaOH (alkaline) or 1% H2O2 (oxidative) to one gram of the powder to be examined. Samples were incubated, up to eight days, at 5, 25, 40, 50 and 60°C. After the incubation, samples were neutralized using NaOH (acidic), HCl (alkaline) or Na2S2O5 (oxidative), and the solvent evaporated using freeze-drying. 
The resulting powdered samples were dispersed in THF, centrifuged, HPLC-filtrated and analyzed using HPLC-DAD/UV-ESI/MS.\nAdditionally, different batches of FPP were included in long-term (up to 24 months, 30°C, 70-75% RH) and accelerated (up to 6 months, 40°C, 75% RH) stability studies according to ICH stability guidelines [24].\n[SUBTITLE] In-silico toxicological predictions [SUBSECTION] To make in-silico toxicological predictions for lumefantrine and its identified related impurities, two sources of toxicological predictions were used: Derek® (Nexus v2.0) for Windows developed by Lhasa Limited (Leeds, UK) and Toxtree® (v1.60) developed by Ideaconsult Ltd. (Sofia, Bulgaria). Derek® (Nexus v2.0) for Windows is an expert knowledge base system, containing descriptions of molecular substructures which have been associated with toxic endpoints (structural alerts), that predicts whether a chemical is toxic in humans, other mammals and bacteria. The programme applies structure-activity relationships ((Q)SARs) and other expert knowledge rules to derive a reasoned conclusion about the potential toxicity of the query chemical [25,26]. Toxtree® is an open source application, which is able to estimate toxic hazards by applying a decision tree approach and making structure-based predictions for a number of toxicological endpoints using different modules. Hazard estimations were generated using three Toxtree® modules: Cramer rules with extensions, Benigni/Bossa rulebase and structure alerts for the in vivo micronucleus assay in rodents.\nTo make in-silico toxicological predictions for lumefantrine and its identified related impurities, two sources of toxicological predictions were used: Derek® (Nexus v2.0) for Windows developed by Lhasa Limited (Leeds, UK) and Toxtree® (v1.60) developed by Ideaconsult Ltd. (Sofia, Bulgaria). Derek® (Nexus v2.0) for Windows is an expert knowledge base system, containing descriptions of molecular substructures which have been associated with toxic endpoints (structural alerts), that predicts whether a chemical is toxic in humans, other mammals and bacteria. The programme applies structure-activity relationships ((Q)SARs) and other expert knowledge rules to derive a reasoned conclusion about the potential toxicity of the query chemical [25,26]. Toxtree® is an open source application, which is able to estimate toxic hazards by applying a decision tree approach and making structure-based predictions for a number of toxicological endpoints using different modules. Hazard estimations were generated using three Toxtree® modules: Cramer rules with extensions, Benigni/Bossa rulebase and structure alerts for the in vivo micronucleus assay in rodents.", "All drug substance batches (APIs), commercially available FPPs (Co-Artesiane®, Artefan®, Lumartem® and Coartem®) and the standard of desbenzylketo (DBK) lumefantrine derivative were supplied by Dafra Pharma International (Belgium). USP-Salmous standards of lumefantrine and impurity A were purchased from U.S. Pharmacopeia (Basel, Switzerland). Analytical solutions were prepared in HPLC grade tetrahydrofuran (Fisher Scientific, Leicestershire, UK) at a concentration of 0.96 mg/ml lumefantrine, which corresponds to 100% label claim (l.c.). A dilution equivalent to 0.5% l.c. is also prepared and used for the quantification of the related impurities. 
Hydrogen peroxide (H2O2), sodium hydroxide (NaOH) and ammonium acetate were purchased from Merck (Darmstadt, Germany), hydrochloric acid (HCl) from Sigma-Aldrich (St Louis, USA) and glacial acetic acid from Riedel-de Haën (Seelze, Germany). Sartorius (Göttingen, Germany) ultrapure 18.2 MΩ.cm quality water and HPLC grade acetonitril (Romil, Cambridge, UK) were used for HPLC-UV/MS analysis.", "HPLC-UV investigation of the impurity profiles was performed on a HPLC-PDA apparatus consisting of a Waters Alliance 2695 separation module and a Waters 2998 photodiode array detector with Empower 2 software for data acquisition (all Waters, Milford, MA, USA). For PDA detection, the UV spectrum was recorded at 190-400 nm. Quantification was performed at 266 nm. The positive ion ESI and the collision-induced dissociation (CID) mass spectra were obtained from the LC-UV/MS apparatus consisting of a Spectra System SN4000 interface, a Spectra System SCM1000 degasser, a Spectra System P1000XR pump, a Spectra System AS3000 autosampler, and a Finnigan LCQ Classic ion trap mass spectrometer in positive ion mode (all Thermo, San José, CA, USA), mass to charge range m/z 100 to m/z 2000 at unit resolution and with a peak width of 0.25 daltons/z, equipped with a Waters 2487 dual wavelength UV detector (Waters, Milford, MA, USA) and Xcalibur 2.0 software (Thermo) for data acquisition. ESI was conducted using a needle voltage of 4.5 kV. Nitrogen was used as sheath and auxiliary gas with the heated capillary set at 250°C. UV-detection was used for quantification (at 266 nm), while ESI-ion trap MS detection was used for identification.\nLC determination of impurities in lumefantrine samples was performed using a Purospher STAR RP-18 endcapped (150 × 4.6 mm, 5 μm particle size) column (Merck, Darmstadt, Germany) with guard column at 30°C under isocratic conditions with a mobile phase consisting of ammonium acetate (pH 4.9; 0.1 M) and acetonitrile (10:90, v/v). The flow rate was set at 2.0 mL/min (minimal run time: 30 min.). The injection volume was 10 μl. Under these conditions, lumefantrine elutes at approximately 22 min. System suitability tests (SSTs) were established as the plate number on lumefantrine (N ≥ 8.2 × 103) and the oxidative stress degradation product (N ≥ 2.4 × 103), the signal-to-noise ratio of 0.5% l.c. lumefantrine solution (S/N ≥ 30), the peak area ratio of the 0.5% l.c. versus 100% l.c. (between 0.4 and 0.6) and the relative position of the in-situ prepared N-oxide by H2O2 treatment (RRT between 0.12 and 0.22).\nThe LC method was validated for the determination of lumefantrine and its related impurities. The selectivity of the developed chromatographic method was established by the separation of lumefantrine and its impurities. A correlation coefficient (r2) of 0.9998 for lumefantrine (0.0006 to 0.01 mg/ml) and 1.0 for impurity A and DBK (0.001 to 0.1 mg/ml) demonstrated that the HPLC method is linear in the lower range. LOD/LOQ values for lumefantrine, DBK and impurity A were calculated (S/N = 3 resp. 10): 0.004 mg/ml and 0.026 mg/ml for lumefantrine (0.004% respectively 0.026% l.c.), 0.011 mg/ml and 0.040 mg/ml for DBK (0.012% respectively 0.042% l.c.) and 0.110 mg/ml and 0.393 mg/ml for impurity A (0.115% respectively 0.409% l.c.). The analytical stability of lumefantrine, impurity A and DBK was confirmed over a storage period of 24 hours at 5°C, i.e. the sample compartment temperature. Accuracy and precision were evaluated by repeated analysis (n = 6), with 102.6% l.c. 
recovery, and 2.1% and 2.86% for repeatability and intermediate precision, respectively.\nThe relative retention time (RRT) is defined as the ratio of the retention time of the compound versus the retention time of lumefantrine. The relative response factor (RRF) is defined as the ratio of the area of the compound versus the area of lumefantrine, both injected at the same concentration.", "Forced degradation of lumefantrine API and FPP was performed under heat, light, acidic, alkaline and oxidative stress conditions. In heat stress studies, the FPP powder (one gram) was incubated at 40, 50 and 60°C for respectively four, three and two days. The placebo powder (one gram) was incubated for two days at 60°C. In light stress studies, the FPP and placebo powder (one gram) were subjected to UV (three days incubation) and VIS (seven days incubation) light in a qualified Pharma 500 L stability cabinet (Weiss Technik) according to ICH. Finally, FPP and placebo were stressed by adding 10 ml of 1 M HCl (acidic), 1 M NaOH (alkaline) or 1% H2O2 (oxidative) to one gram of the powder to be examined. Samples were incubated, up to eight days, at 5, 25, 40, 50 and 60°C. After the incubation, samples were neutralized using NaOH (acidic), HCl (alkaline) or Na2S2O5 (oxidative), and the solvent evaporated using freeze-drying. The resulting powdered samples were dispersed in THF, centrifuged, HPLC-filtered and analyzed using HPLC-DAD/UV-ESI/MS.\nAdditionally, different batches of FPP were included in long-term (up to 24 months, 30°C, 70-75% RH) and accelerated (up to 6 months, 40°C, 75% RH) stability studies according to ICH stability guidelines [24].", "[SUBTITLE] HPLC analysis of lumefantrine containing samples [SUBSECTION] As the Ph.Int./USP Salmous HPLC methods [17,18] use a complex, step-wise gradient with an ion-pairing reagent, they are not compatible with MS detection. Moreover, the gradient is required for the detection of synthesis impurities BA and BB, which are structurally very different from lumefantrine and its other impurities, especially degradants. Therefore, an isocratic RP-HPLC method was used without an MS-incompatible ion-pairing reagent.
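As an aside, the RRT and relative response factor defined in the methods above are plain ratios; a minimal sketch follows, with invented peak data chosen so that the DBK values reported later in this section (RRT ≈ 0.33, RRF ≈ 2.87) are reproduced:

    # Sketch of the RRT/RRF arithmetic defined above; all peak values below are invented.
    def relative_retention_time(rt_compound, rt_lumefantrine):
        return rt_compound / rt_lumefantrine

    def relative_response_factor(area_compound, area_lumefantrine):
        # Both peaks injected at the same concentration.
        return area_compound / area_lumefantrine

    rt_lume, rt_dbk = 22.0, 7.3              # hypothetical retention times (min)
    area_lume, area_dbk = 1.00e6, 2.87e6     # hypothetical equal-concentration peak areas
    print(round(relative_retention_time(rt_dbk, rt_lume), 2))       # ~0.33
    print(round(relative_response_factor(area_dbk, area_lume), 2))  # ~2.87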
The chromatographic characteristics of these two synthesis impurities on this system could not be evaluated, due to the unavailability of references for these impurities.\nLumefantrine API and FPPs were exposed to diverse stress conditions for different periods. Additionally, FPPs were put on long-term and accelerated stability studies as well according to ICH. FPP samples in the long-term stability study were kept for up to twenty-four months at 30°C/75% RH. In the accelerated study, the stability conditions were adjusted and FPP samples were kept for up to six months at 40°C/75% RH. The unstressed and stressed API samples, as well as the unstressed (release), accelerated, long-term and stressed FPP samples were analyzed with the validated HPLC method. Five synthesis and four stress related lumefantrine impurities have been observed in lumefantrine containing samples (Table 1). The relative retention time (RRT), relative to lumefantrine, of these impurities was defined and normalized quantification was performed with a reporting threshold of 0.10%. Maximal actually observed levels of lumefantrine related impurities in different samples under different conditions were obtained (Table 2). None of these lumefantrine related impurities were observed above the reporting threshold (i.e. > 0.10%) in unstressed API and release (T0) FPP samples, except for the monodesbutyl derivative. However, these lumefantrine degradants were observed in stressed API and FPP samples, and in FPP stability studies. Compounds 1, 2 and 3 were observed in oxidative stressed API samples. Three lumefantrine related impurities were observed in stressed FPP stability samples: compound 1 (60°C, 1 M NaOH, T2d), compound 3 (60°C, 1% H2O2, T2d) and compound 4 (50°C, 1 M HCl, T3d). Compounds 3 and 4 were also detected in the accelerated (40°C/75% RH, T6m) and long-term stability studies. A typical UV chromatogram illustrating the separation of lumefantrine N-oxide, DBK, desbenzyl lumefantrine derivative and lumefantrine is given in Figure 2.\nStructural information for the observed and/or reported lumefantrine related impurities (Table 1)\nPercentage maximum actual levels of lumefantrine related impurities observed (Table 2); (1) RT: reporting threshold = 0.10%\nUV chromatogram of a mixed sample (Figure 2) illustrating lumefantrine N-oxide (3), DBK (4), the desbenzyl lumefantrine derivative (5) and lumefantrine (L).\n[SUBTITLE] Identification of lumefantrine impurities with LC-MS/MS [SUBSECTION] The observed lumefantrine impurity peaks (related to synthesis as well as degradation processes) in stressed or unstressed API and FPPs were identified using LC-MS/MS, with one of them investigated for the first time and proposed as a new specified lumefantrine related impurity. The desbutyl, desbenzyl and isomeric compound A derivatives are already known lumefantrine impurities. The analytical characteristics of the remaining unidentified lumefantrine related impurities were obtained by analysis of MS data: m/z values (Table 3), isotopic distributions in mass spectra (Figure 2) and MS/MS (fragmentation pattern for structural identification).\nHPLC characteristics of lumefantrine related impurities; (1) Retention time (min.); (2) Relative retention time\nThe mono-isotopic mass of lumefantrine [(1RS)-2-(Dibutylamino)-1-[(Z)-2,7-dichloro-9-(4-chlorobenzylidene)-9H-fluoren-4-yl]ethanol] was calculated to be 527.15. The mass spectrum of the lumefantrine main peak indicated the most abundant ion at an m/z-ratio of 528.10, with an isotopic distribution corresponding to the three chlorine atoms in its structure (³⁵Cl at 75.77% and ³⁷Cl at 24.23%).
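These chlorine abundances fully determine the expected isotopologue pattern; a minimal sketch (pure Python, abundances as quoted above, not the authors' processing code) reproducing the characteristic three-chlorine cluster:

    # Sketch: expected isotopologue pattern (M, M+2, M+4, ...) from n chlorine atoms,
    # using the natural abundances quoted above (35Cl 75.77%, 37Cl 24.23%).
    from math import comb

    def chlorine_pattern(n_cl, p35=0.7577, p37=0.2423):
        # Probability of k heavy (37Cl) atoms among n_cl chlorines -> the M + 2k peak.
        return [comb(n_cl, k) * p35 ** (n_cl - k) * p37 ** k for k in range(n_cl + 1)]

    pattern = chlorine_pattern(3)                        # lumefantrine carries three chlorines
    base = max(pattern)
    print([round(100 * p / base, 1) for p in pattern])   # ~[100.0, 95.9, 30.7, 3.3] for M, M+2, M+4, M+6

The same call with n_cl=2 gives the two-chlorine pattern discussed for compounds 1 and 4 below.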
In the mass spectrum of compound 1 (RRT ~ 0.08), 436.14 is observed to be the most abundant m/z. The isotopic distribution is suggestive of a compound possessing two chlorine atoms, and is identical to the isotopic distribution of compound 4. The most abundant m/z value for compound 4 is 420.13, with a molecular formula of C23H27NO2Cl2, i.e. the desbenzylketo derivative (DBK). This MS-derived structure was confirmed by chemical synthesis of a DBK reference and its IR and NMR spectroscopic structure confirmation. This DBK reference standard gave similar chromatographic retention characteristics as well as DAD-UV spectrum as degradant 4 found in the samples. Based on the observed m/z values of compound 1 and DBK, compound 1 has an additional oxide in its structure, and is thus assigned as being the N-oxide of DBK. The most abundant ion found for compound 2 was m/z 474.00. Its isotopic distribution is characteristic of a compound possessing three chlorines and the molecular formula C26H24NOCl3, i.e. the monodesbutyl derivative. As this compound is more hydrophilic than lumefantrine, it elutes much earlier than lumefantrine. Compound 3 was found in the oxidative stressed FPP samples. Its most abundant m/z is 544.08, with an isotopic distribution corresponding to that of lumefantrine, giving the molecular formula C30H24NO2Cl3. Based on the observed m/z values of compound 3 and lumefantrine, compound 3 has an additional oxide in its structure. MS/MS fragmentation spectra, by collision induced dissociation (CID, energy 100 eV), of compound 3 showed peaks at m/z 526.12 (loss of H2O), 470.10, 396.99, 380.95, 346.23, 305.58, 298.06 (loss of C14H7Cl2) and 152.30. This impurity was thus identified as lumefantrine N-oxide. Compounds 6, 7 and 9 gave identical mass spectra (Figure 3) with the most abundant ion found at m/z 544.12. The isotopic distributions are characteristic of a compound containing three chlorines and a molecular formula C30H32NO2Cl3, i.e. oxides of lumefantrine. As these three impurities elute at different retention times, they are most probably isomeric compounds with an -OH function at different positions on the lumefantrine aromatic ring structure.\nIsotopic-distribution mass spectra of lumefantrine related impurities with the most abundant m/z observed (Figure 3): (1) Desbenzylketo N-oxide; (2) Monodesbutyl derivative; (3) Lumefantrine N-oxide; (4) Desbenzylketo derivative; (5) Desbenzyl derivative; (6,7,9) Oxide of lumefantrine and (8) Impurity A (USP/Ph.Int).\n[SUBTITLE] Specified lumefantrine impurity DBK [SUBSECTION] The lumefantrine-related compound 4, DBK (RRT ~ 0.33), was not only formed in stress stability samples, but was also observed in accelerated and stressed stability samples of FPP. Moreover, DBK was found to be present in market samples at a concentration ranging between 0.03% and 0.12%, determined by area normalization. Subsequently, DBK was synthesized for further analytical characterization, including confirmation of its relative retention time (RRT) and determination of its relative response factor (RRF) at the detection wavelength of 266 nm. The RRT and the RRF of DBK relative to lumefantrine were experimentally determined to be 0.33 and 2.87, respectively. The DAD-UV spectra recorded for lumefantrine and DBK (Figure 4) showed the wavelength of maximum absorption to be higher for DBK (approx. 266 nm) than for lumefantrine (approx. 234 nm), due to the benzyl group being replaced by the keto function.
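A minimal sketch of how a response-factor-corrected, area-normalized impurity level can be computed is given below; the peak areas are invented, and whether the market-sample values quoted above were RRF-corrected is not stated here:

    # Sketch: impurity level by area normalization, optionally corrected with the RRF
    # determined above (DBK RRF ~ 2.87 at 266 nm). All peak areas below are invented.
    def impurity_percent(area_impurity, area_main, rrf=1.0):
        corrected = area_impurity / rrf          # convert to main-component-equivalent response
        return 100.0 * corrected / (area_main + corrected)

    area_lume, area_dbk = 1.0e6, 2.0e3           # hypothetical areas from one chromatogram
    print(round(impurity_percent(area_dbk, area_lume), 3))            # uncorrected: ~0.2%
    print(round(impurity_percent(area_dbk, area_lume, rrf=2.87), 3))  # RRF-corrected: ~0.07%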
This impurity was observed in oxidative and acidic stress degradation, as well as in the accelerated and long-term ICH stability studies, justifying its classification as a specified degradant.\nDAD-UV spectra of lumefantrine (left) and DBK (right) showing the observed wavelengths of maximum absorption (Figure 4).\n[SUBTITLE] In-silico toxicological predictions of lumefantrine and its related impurities [SUBSECTION] Using the knowledge-based expert systems Toxtree® and Derek®, general toxicological and carcinogenic alerts for lumefantrine, as well as for its related observed and already described impurities, were investigated. Since DBK is a specified lumefantrine-related compound, the toxicity profile of DBK is of paramount importance. Based on the Cramer rules with extensions, Toxtree® clearly predicted general toxicity risks (Class III), and genotoxic alerts (polycyclic aromatic hydrocarbons, halogenated benzene and H-acceptor-path3-H-acceptor) for DBK, which are identical to those of the API lumefantrine itself. According to the threshold of toxicological concern (TTC), the daily dosage for compounds classified in Class III should be below 90 μg/person (60 kg)/day to be validated as non-toxic [27]. Hence, the TTC value of 90 μg on the MDD of 960 mg lumefantrine corresponds to a limit of 0.01% (90 μg/960 mg), which is far below the levels actually observed.\nThe toxicity profile by Derek® of DBK is defined by several general toxicity alerts which are similar to lumefantrine: hERG channel inhibition and α2μ-globulin nephropathy [28], plus additional photo-toxicity and photo-allergenicity. However, Derek® did not trigger any genotoxicity or carcinogenicity for DBK.\nThe other lumefantrine related impurities were also predicted in Toxtree® to have a high general toxicity similar to lumefantrine itself (Class III), based on the Cramer rules with extensions, and genotoxicity risks. Again, Derek® clearly indicated a limited toxicity profile for the majority of lumefantrine related impurities compared to lumefantrine (hERG channel inhibition, α2μ-globulin nephropathy). Only impurity BB triggered additional toxicity alerts (carcinogenicity/mutagenicity, chromosome damage, eye/skin irritation, developmental toxicity, skin sensitization), indicative of a non-toxic profile compared to lumefantrine itself."
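The TTC arithmetic quoted above can be reproduced directly; a minimal worked restatement (all figures taken from the text, nothing new is computed):

    # Sketch of the TTC arithmetic quoted above: a 90 ug/day threshold against the
    # 960 mg maximum daily dose (MDD) of lumefantrine, compared with the DBK levels
    # reported in market samples (0.03-0.12%).
    ttc_ug_per_day = 90.0
    mdd_mg = 960.0
    limit_percent = 100.0 * (ttc_ug_per_day / 1000.0) / mdd_mg
    print(f"TTC-derived limit: {limit_percent:.4f}%")           # ~0.0094%, i.e. ~0.01%

    for observed_pct in (0.03, 0.12):
        intake_ug = observed_pct / 100.0 * mdd_mg * 1000.0      # ug DBK per day at the MDD
        print(observed_pct, "% ->", round(intake_ug), "ug/day") # 288 and 1152 ug/day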
"An exhaustive impurity profiling of lumefantrine was performed using HPLC-UV/ESI-ion trap MS. From unstressed, stressed and accelerated stability samples of lumefantrine API and FPPs, nine compounds were detected and characterized as lumefantrine related impurities. One new lumefantrine related compound, DBK, was identified and characterized as a specified degradation impurity of lumefantrine in real market samples (FPPs). The in-silico toxicological investigation (Toxtree® and Derek®) indicated overall a lesser toxicity for the specified impurity DBK compared to the API lumefantrine itself.", "The authors, MV, BB, EVG and BDS would like to acknowledge that Dafra sponsored the analytical development. However, these authors do not work for, or represent in any way, Dafra. FHJ is partly working for Dafra.", "MV and SS did part of the analytical experiments incl. validation, performed the in-silico verification and wrote the article. BB, EVG and SVD did part of the analytical experiments, incl. stability and MS experiments and QC on data. CB and LD critically reviewed and discussed this manuscript. FHJ designed the experiments, overviewed the DBK reference synthesis, and critically reviewed the manuscript. BDS was the overall study director, responsible for design of experiments, interpretation of data and writing this manuscript. All authors read and approved the final manuscript."
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Modified versus standard intention-to-treat reporting: are there differences in methodological quality, sponsorship, and findings in randomized trials? A cross-sectional study.
21356072
Randomized controlled trials (RCTs) that use the modified intention-to-treat (mITT) approach are increasingly being published. Such trials have a preponderance of post-randomization exclusions, industry sponsorship, and favourable findings, but little is known about whether, in terms of these items, mITT trials differ from trials that report a standard intention-to-treat analysis.
BACKGROUND
To determine differences in the methodological quality, sponsorship, authors' conflicts of interest, and findings among trials with different "types" of intention-to-treat, we undertook a cross-sectional study of RCTs published in 2006 in three general medical journals (the Journal of the American Medical Association, the New England Journal of Medicine and the Lancet) and three specialty journals (Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology). Trials were categorized based on the "type" of intention-to-treat reporting as follows: ITT, trials reporting the use of standard ITT approach; mITT, trials reporting the use of a "modified intention-to-treat" approach; and "no ITT", trials not reporting the use of any intention-to-treat approach. Two pairs of reviewers independently extracted the data in duplicate. The strength of the associations between the "type" of intention-to-treat reporting and the quality of reporting (sample size calculation, flow-chart, lost to follow-up), the methodological quality of the trials (sequence generation, allocation concealment, and blinding), the funding source, and the findings was determined. Odds ratios (OR) were calculated with 95% confidence intervals (CI).
METHODS
Of the 367 RCTs included, 197 were classified as ITT, 56 as mITT, and 114 as "no ITT" trials. The quality of reporting and the methodological quality of the mITT trials were similar to those of the ITT trials; however, the mITT trials were more likely to report post-randomization exclusions (adjusted OR 3.43 [95%CI, 1.70 to 6.95]; P < 0.001). We found a strong association between trials classified as mITT and for-profit agency sponsorship (adjusted OR 7.41 [95%CI, 3.14 to 17.48]; P < 0.001) as well as the presence of authors' conflicts of interest (adjusted OR 5.14 [95%CI, 2.12 to 12.48]; P < 0.001). There was no association between mITT reporting and favourable results; in general, however, trials with for-profit agency sponsorship were significantly associated with favourable results (adjusted OR 2.30 [95%CI, 1.28 to 4.16]; P = 0.006).
RESULTS
We found that the mITT trials were significantly more likely to perform post-randomization exclusions and were strongly associated with industry funding and authors' conflicts of interest.
CONCLUSION
[ "Confidence Intervals", "Conflict of Interest", "Cross-Sectional Studies", "Data Interpretation, Statistical", "Drug Industry", "Evidence-Based Medicine", "Humans", "Intention to Treat Analysis", "Odds Ratio", "Randomized Controlled Trials as Topic", "Research Design", "Research Support as Topic", "Treatment Outcome" ]
3055831
null
null
Methods
[SUBTITLE] Selection of Studies [SUBSECTION] To compare methodological quality, industry sponsorship, the presence of authors' conflicts of interest, and findings among trials based on the type of intention-to-treat reporting, we sought to identify the top medical journals and specialty journals published in 2006 that were more likely to report the use of an mITT approach according to our previous survey [10]. The three high-ranking medical journals were the Journal of the American Medical Association, the New England Journal of Medicine and the Lancet; and the three specialty journals were Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology. We carried out a computerized search in Medline (via PubMed) to identify RCTs published in these six journals using the publication type "randomized controlled trials". The results were then cross-checked against a search of the Cochrane Central Register of Controlled Trials. Excluded reports included: reports of phase I trials; pharmacokinetic, pharmacodynamic or dose-comparing studies; cluster RCTs; post-hoc studies; research letters; cost-effectiveness studies; study protocols; and prognostic and diagnostic studies. Two authors screened all of the full-text journal articles to confirm their eligibility for inclusion and excluded 97 articles (Figure 1). Study Screening Process. AAC: Antimicrobial Agents and Chemotherapy; AHJ: American Heart Journal; JAMA: Journal of the American Medical Association; JCO: Journal of Clinical Oncology; NEJM: New England Journal of Medicine. Two other reviewers, independently and in duplicate, extracted the following information from the journal articles: characteristics of the trial, primary outcomes, number of allocation groups, number of patients, main outcome measure, P-values, and number of subjects excluded from the analysis. Moreover, reporting of flow-charts, sample size calculations, and information regarding missing data, withdrawals or patients lost to follow-up were recorded. The relevant RCTs were subsequently classified according to the type of intention-to-treat analyses used as follows: ITT, trials reporting the use of standard ITT analyses; mITT, trials reporting the use of "modified intention-to-treat" analyses; or "no ITT", trials not reporting the use of any intention-to-treat analyses. This classification was independent of the reporting of any post-randomisation exclusion. Trials reporting the use of ITT with descriptions or conditions different from the standard intention-to-treat definition were classified as mITT. Descriptions of mITT were retrieved to evaluate the type and deviation from a true intention-to-treat analysis according to our previous classification[5]. [SUBTITLE] Assessment of Methodological Quality [SUBSECTION] Sequence generation, allocation concealment and blinding were used as indicators of methodological quality and were classified as "adequate," "inadequate" or "unclear." In our analysis, the "inadequate" and "unclear" categories were collapsed into "not adequate." For sequence generation, the following approaches were considered adequate: random number table, computer random number generator, drawing of lots and envelopes. The use of methods such as case record number, date of birth or day, month, or year of admission was considered inadequate for sequence generation. For allocation concealment, central randomisation, coded drug packs prepared by an independent pharmacist and sequentially numbered, sealed and opaque envelopes or any method that hindered the researchers' ability to foresee the next assignment was considered adequate. Methods such as procedures based on inadequate generation of allocation sequences, open allocation schedules, alternation and unsealed or non-opaque envelopes were considered inadequate. Regarding blinding, a trial was considered adequately blinded if it was described as "double-blinded" and used adequate methods such as identical placebo tablets or adequate descriptions of who was blinded (e.g., the outcome assessor) in cases where blinding of participants and caregivers may not be feasible. Each eligible trial was independently reviewed for methodological quality.
Discrepancies were resolved by discussion, and a consensus was reached for all trials. [SUBTITLE] Funding Source [SUBSECTION] Information regarding the sources of funding for each trial was extracted from the articles by reviewing the article text, the conflict of interest section, information on funding (if present) and the acknowledgements section. RCT funding sources were abstracted and categorized as follows: (1) not-for-profit organization; (2) for-profit agency; (3) co-financed, indicating funding from both not-for-profit organization(s) and for-profit agency(ies); (4) no funding; and (5) not reported. [SUBTITLE] Authors' Potential Conflicts of Interest [SUBSECTION] The presence or absence of author conflicts of interest was determined by reviewing the authors' institutional affiliations, the conflicts of interest section, information on funding (if present) and the acknowledgements section. The institutional affiliation of the first author (e.g., university, not-for-profit organization or for-profit agency) was retrieved from each article. The affiliation of the other authors was also recorded when a financial tie with a for-profit agency was present.
Subsequently, the studies were classified for analysis as follows: (1) authors declaring no competing interests; (2) financial ties with the sponsor of the study disclosed, indicating that at least one author participated on behalf of a for-profit agency (e.g., as an employee or consultant); (3) other financial ties disclosed, indicating that an author did not participate on behalf of the study sponsor but did report a conflict of interest such as a past financial tie with the pharmaceutical industry; and (4) conflicts of interest not reported. [SUBTITLE] Study Findings [SUBSECTION] For each published trial, the results for each primary outcome were classified as one of the following: (1) favourable, if the result was statistically significant with P < 0.05 or if the confidence interval did not include the null; or (2) inconclusive, if the result did not reach statistical significance. For equivalence or noninferiority studies, the result was coded as favourable when the assumption of equivalence or noninferiority was satisfied. [SUBTITLE] Statistical Methods [SUBSECTION] The distribution of various characteristics of the examined trials, such as the type of ITT analysis and the source of funding, was reported using descriptive statistics. The Pearson chi-square test was used for contingency table analysis. Interobserver agreement was measured using the kappa (κ) statistic. We investigated the associations between the "type" of intention-to-treat analysis (as a dependent, multinomial 3-level variable: ITT, mITT, or "no ITT") and the methodological quality (sequence generation, allocation concealment and blinding; Model A), the quality of reporting (reporting of a flow-chart, sample size calculation, and lost to follow-up; Model A), the funding source (Model B), and the presence of authors' conflicts of interest (Model C) using multinomial logistic regression. Bivariate analyses were performed to identify independent variables with a P value less than or equal to 0.10.
These potential confounding variables were included in each model of the multivariate multinomial logistic regression analysis. In each model, the variables of journal type, use of placebo, the presence of post-randomisation exclusions, and all of the items used as indicators of methodological quality and quality of reporting were included. In Model A, in addition to all the mentioned variables, the use of funding was included (the inclusion of the presence of authors' conflicts of interest instead of funding did not produce any change in the results of the model). In each model, the ITT category was used as a reference for comparisons.
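A hypothetical sketch of how adjusted odds ratios of this kind can be obtained is shown below; for simplicity it fits a binary logistic model contrasting mITT with ITT trials on invented data, whereas the analysis described above is multinomial over all three categories and was run in STATA:

    # Hypothetical sketch (invented data, not the authors' analysis): adjusted odds ratios
    # from a binary logistic model contrasting mITT vs ITT trials.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 253                                              # e.g. the ITT + mITT trials only
    df = pd.DataFrame({
        "mitt": rng.integers(0, 2, size=n),              # 1 = mITT, 0 = standard ITT
        "for_profit": rng.integers(0, 2, size=n),        # 1 = for-profit sponsorship
        "post_rand_excl": rng.integers(0, 2, size=n),    # 1 = post-randomisation exclusions
        "adequate_blinding": rng.integers(0, 2, size=n),
    })

    X = sm.add_constant(df[["for_profit", "post_rand_excl", "adequate_blinding"]])
    res = sm.Logit(df["mitt"], X).fit(disp=False)

    odds_ratios = np.exp(res.params)                     # adjusted ORs
    ci = np.exp(res.conf_int())                          # Wald 95% confidence intervals
    table = pd.concat([odds_ratios.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1)
    print(table.round(2))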
Furthermore, we performed univariate logistic regression to investigate the association between findings (as a dependent variable) and the "type" of intention-to-treat (as an independent multinomial 3-level variable: ITT, mITT, "no ITT"). To account for potential confounding variables we performed a multivariate logistic regression and included in the model the type of the journal, the use of placebo, the presence of post-randomisation exclusions, the use of funding and all the items that were indicators of reporting and methodological quality. Finally, we assessed the overall association between favourable findings and no for-profit sponsorship using a logistic regression (univariate and multivariate models were used; all of the above mentioned variables including the "type" of intention-to-treat were used for adjustment). This was done to evaluate any consistency with studies in the biomedical literature that report a strong association between industry sponsorship and positive findings. The Pearson goodness-of-fit test was used to assess the overall fit of the models. Odds ratios (ORs) were then computed with confidence intervals (CIs). A two-tailed P-value less than 0.05 was considered statistically significant. The analyses were performed using STATA/SE, version 8.2 for Windows (StataCorp, College Station, Texas).
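The interobserver agreement measure mentioned above, Cohen's kappa, can be computed directly from the two reviewers' classifications; a minimal sketch on invented labels (not the study data):

    # Sketch: Cohen's kappa for interobserver agreement between two reviewers
    # classifying trials as ITT / mITT / no ITT. The labels below are invented.
    def cohens_kappa(rater1, rater2):
        n = len(rater1)
        labels = sorted(set(rater1) | set(rater2))
        p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
        p_expected = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
        return (p_observed - p_expected) / (1 - p_expected)

    r1 = ["ITT", "ITT", "mITT", "no ITT", "ITT", "mITT", "no ITT", "ITT"]
    r2 = ["ITT", "ITT", "mITT", "no ITT", "mITT", "mITT", "ITT", "ITT"]
    print(round(cohens_kappa(r1, r2), 2))   # chance-corrected agreement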
null
null
null
null
[ "Background", "Selection of Studies", "Assessment of Methodological Quality", "Funding Source", "Authors' Potential Conflicts of Interest", "Study Findings", "Statistical Methods", "Results", "Characteristics of the Included Studies", "Characteristics of the Included Trials and the Type of Intention-to-treat Reporting", "Sponsorship, Conflicts of Interest and the Type of Intention-to-treat Reporting", "Analysis of the Findings", "Discussion", "Limitations", "Conclusion", "Appendix 1", "Competing interests", "Authors' contributions" ]
[ "The intention-to-treat principle requires that all participants that are randomized must be included in the final analysis and analyzed according to the treatment group to which they were originally assigned, regardless of the treatment received, withdrawals, lost to follow-up or cross-overs. Despite this principle, in many instances in randomized trials the term intention-to-treat was inappropriately described and participants improperly excluded[1-4]. In addition, the use of a modified intention-to-treat (mITT) approach in randomized controlled trials (RCTs) is increasingly appearing in the medical literature according to a systematic review[5]. There is no clear definition of what is mITT. In fact, descriptions of mITT analyses vary greatly from trial to trial, and often contain more than one criterion that are difficult to interpret as it is not easy to discriminate between missing data cases and deviations from protocol. Moreover, while post-randomization exclusion appear to be the primary factor that characterizes the mITT analysis, the majority of the trials were industry sponsored and reported results that favoured the therapy under investigation[5]. The uncertainty or equipoise principle that should exist in the design of RCTs states that over time, the mean benefit of investigational therapies and comparison therapies should be equal [6]. The high prevalence of significant results reported in randomized trials might not reflect true differences[7]. Some argue that this principle can be violated in the presence of financial ties between investigators and the pharmaceutical industry [8]. The biomedical literature reports strong and consistent evidence that industry-sponsored research is likely to produce pro-industry conclusions [9]. The appearance of mITT reporting in modern RCTs could be a consequence of the widespread financial ties that exists among investigators and industry.\nIn this cross-sectional study, we examined whether mITT reporting trials were different in terms of methodological quality from trials with other types of intention-to-treat reporting. Our first hypothesis was that studies using mITT analyses would be associated with limited quality of reporting and study characteristics based on the assumption that mITT reporting is a limitation in principle. Moreover, we investigated whether the reporting of mITT in RCTs was associated with industry sponsorship, the presence of authors' conflicts of interest, and favourable findings.", "To compare methodological quality, industry sponsorship, the presence of authors' conflicts of interest, and findings among trials based on the type of intention-to-treat reporting, we sought to identify the top medical journals and specialty journals published in 2006 that were more likely to report the use of an mITT approach according to our previous survey [10]. The three high-ranking medical journals were the Journal of American Medical Association, the New England Journal of Medicine and the Lancet; and the three specialty journals were Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology. We carried out a computerized search in Medline (via PubMed) to identify RCTs published in these six journals using the publication type \"randomized controlled trials\". 
The results were then cross-checked against a search of the Cochrane Central Register of Controlled Trials.\nThe following reports were excluded: reports of phase I trials; pharmacokinetic, pharmacodynamic or dose-comparing studies; cluster RCTs; post-hoc studies; research letters; cost-effectiveness studies; study protocols; and prognostic and diagnostic studies. Two authors screened all of the full-text journal articles to confirm their eligibility for inclusion and excluded 97 articles (Figure 1).\nStudy Screening Process. AAC: Antimicrobial Agents and Chemotherapy; AHJ: American Heart Journal; JAMA: Journal of the American Medical Association; JCO: Journal of Clinical Oncology; NEJM: New England Journal of Medicine.\nTwo other reviewers, independently and in duplicate, extracted the following information from the journal articles: characteristics of the trial, primary outcomes, number of allocation groups, number of patients, main outcome measure, P-values, and number of subjects excluded from the analysis. Moreover, reporting of flow-charts, sample size calculations, and information regarding missing data, withdrawals or patients lost to follow-up were recorded.\nThe relevant RCTs were subsequently classified according to the type of intention-to-treat analysis used as follows: ITT, trials reporting the use of standard ITT analyses; mITT, trials reporting the use of \"modified intention-to-treat\" analyses; or \"no ITT\", trials not reporting the use of any intention-to-treat analysis. This classification was independent of the reporting of any post-randomisation exclusion. Trials reporting the use of ITT with descriptions or conditions different from the standard intention-to-treat definition were classified as mITT.\nDescriptions of mITT were retrieved to evaluate the type of deviation from a true intention-to-treat analysis according to our previous classification[5].", "Sequence generation, allocation concealment and blinding were used as indicators of methodological quality and were classified as \"adequate,\" \"inadequate\" or \"unclear.\" In our analysis, the \"inadequate\" and \"unclear\" categories were collapsed into \"not adequate.\" For sequence generation, the following approaches were considered adequate: random number table, computer random number generator, drawing of lots and envelopes. The use of methods such as case record number, date of birth or day, month, or year of admission was considered inadequate for sequence generation.\nFor allocation concealment, central randomisation, coded drug packs prepared by an independent pharmacist, sequentially numbered, sealed and opaque envelopes, or any method that hindered the researchers' ability to foresee the next assignment was considered adequate. Methods such as procedures based on inadequate generation of allocation sequences, open allocation schedules, alternation and unsealed or non-opaque envelopes were considered inadequate.\nRegarding blinding, a trial was considered adequately blinded if it was described as \"double-blinded\" and used adequate methods such as identical placebo tablets, or if it gave adequate descriptions of who was blinded (e.g., the outcome assessor) in cases where blinding of participants and caregivers may not be feasible. Each eligible trial was independently reviewed for methodological quality. 
Discrepancies were resolved by discussion, and a consensus was reached for all trials.", "Information regarding the sources of funding for each trial was extracted from the articles by reviewing the article text, the conflict of interest section, information on funding (if present) and the acknowledgements section. RCT funding sources were abstracted and categorized as follows: (1) not-for-profit organization; (2) for-profit agency; (3) co-financed, indicating funding from both not-for-profit organization(s) and for-profit agency(ies); (4) no funding; and (5) not reported.", "The presence or absence of author conflicts of interest was determined by reviewing the authors' institutional affiliations, the conflicts of interest section, information on funding (if present) and the acknowledgements section.\nThe institutional affiliation of the first author (e.g., university, not-for-profit organization or for-profit agency) was retrieved from each article. The affiliation of the other authors was also recorded when a financial tie with a for-profit agency was present. Subsequently, the studies were classified for analysis as follows: (1) authors declaring no competing interests; (2) financial ties with the sponsor of the study disclosed, indicating that at least one author participated on behalf of a for-profit agency (e.g., as an employee or consultant); (3) other financial ties disclosed, indicating that an author did not participate on behalf of the study sponsor but did report a conflict of interest such as a past financial tie with the pharmaceutical industry; and (4) conflicts of interest not reported.", "For each published trial, the results for each primary outcome were classified as one of the following: (1) favourable, if the result was statistically significant with P < 0.05 or if the confidence interval did not include the null; or (2) inconclusive, if the result did not reach statistical significance.\nFor equivalence or noninferiority studies, the result was coded as favourable when the assumption of equivalence or noninferiority was satisfied.", "The distribution of various characteristics of the examined trials, such as the type of ITT analysis and the source of funding, was reported using descriptive statistics. The Pearson chi-square test was used for contingency table analysis. Interobserver agreement was measured using the kappa (κ) statistic.\nWe investigated the associations between the \"type\" of intention-to-treat analysis (as a dependent, multinomial 3-level variable: ITT, mITT, or \"no ITT\") and the methodological quality (sequence generation, allocation concealment and blinding; Model A), the quality of reporting (reporting of a flow-chart, sample size calculation, and loss to follow-up; Model A), the funding source (Model B), and the presence of authors' conflicts of interest (Model C) using multinomial logistic regression.\nBivariate analyses were performed to identify independent variables with a P value less than or equal to 0.10. 
In model A, in addition to all the mentioned variables, the use of funding was included (the inclusion of the presence of authors' conflicts of interest instead of funding did not produce any change in the results of the model).\nIn each model, the ITT category was used as a reference for comparisons.\nFurthermore, we performed univariate logistic regression to investigate the association between findings (as a dependent variable) and the \"type\" of intention-to-treat (as an independent multinomial 3-level variable: ITT, mITT, \"no ITT\"). To account for potential confounding variables, we performed a multivariate logistic regression and included in the model the type of journal, the use of placebo, the presence of post-randomisation exclusions, the use of funding and all the items that were indicators of reporting and methodological quality.\nFinally, we assessed the overall association between favourable findings and no for-profit sponsorship using logistic regression (univariate and multivariate models were used; all of the above-mentioned variables, including the \"type\" of intention-to-treat, were used for adjustment). This was done to evaluate any consistency with studies in the biomedical literature that report a strong association between industry sponsorship and positive findings. The Pearson goodness-of-fit test was used to assess the overall fit of the models. Odds ratios (ORs) were then computed with confidence intervals (CIs).\nA two-tailed P-value less than 0.05 was considered statistically significant. The analyses were performed using STATA/SE, version 8.2 for Windows (StataCorp, College Station, Texas). Illustrative, hypothetical code sketches of the kappa computation and of these regression models are shown after the section texts below.", "[SUBTITLE] Characteristics of the Included Studies [SUBSECTION] Our final sample consisted of 367 published RCTs. Of these, 197 were classified as ITT trials, 114 as \"no ITT\" trials and 56 as mITT trials (the analysis of the number and types of mITT deviations is reported in Appendix 1).\nThe number of participants included in each RCT ranged from 10 to 160,921 (median, 368; interquartile range 140-991). Trials classified as mITT were significantly more likely to be published in general medical journals, to report post-randomisation exclusions and to use placebo as a comparator. A total of 258 (69%) trials received complete or partial financial support from a for-profit agency, and 216 of the trials (60%) reported results that favoured the treatment under investigation. The characteristics of the included trials are shown in Table 1.\nCharacteristics of Included Randomised Controlled Trials.\nIQR = Inter-quartile range;\nThe kappa value for sequence generation was 0.79 (95% CI 0.74 to 0.85); for allocation concealment, 0.82 (95% CI 0.77 to 0.88); for blinding, 0.78 (95% CI 0.72 to 0.84); and for intention-to-treat analysis, 0.90 (95% CI 0.89 to 0.92). The kappa value for funding source was 0.96 (95% CI 0.95 to 0.97); for authors' conflict of interest, 0.94 (95% CI 0.93 to 0.96); and for study findings, 0.93 (95% CI 0.93 to 0.95).\n[SUBTITLE] Characteristics of the Included Trials and the Type of Intention-to-treat Reporting [SUBSECTION] Multinomial logistic regression analyses showed that the mITT trials were more likely to have inadequate or unclear sequence generation and adequate blinding compared to the ITT trials. Trials classified as \"no ITT\" reported low standards of reporting and methodological quality.\nIn the multivariate analysis, the mITT trials appeared to have substantially similar methodological quality standards compared to the ITT trials; however, the mITT trials remained more likely to report post-randomisation exclusions (adjusted OR 3.43 [95% CI 1.70 to 6.95]; P = 0.001). Trials classified as \"no ITT\" were significantly more likely to omit reporting of a flow-chart, a sample size calculation and loss to follow-up. Table 2 shows the unadjusted and adjusted ORs of the strength of association between characteristics of the studies and the type of intention-to-treat reporting.\nAssociation Between Characteristics of the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting\nITT is the comparison group.\nmITT = modified intention-to-treat\nITT = intention-to-treat.\n[SUBTITLE] Sponsorship, Conflicts of Interest and the Type of Intention-to-treat Reporting [SUBSECTION] Compared to the ITT trials, RCTs classified as mITT were more likely to receive sponsorship from a for-profit agency (adjusted OR 7.41 [95% CI 3.14 to 17.48]; P < 0.001) and were more likely to have at least one investigator who authored on behalf of the pharmaceutical industry (adjusted OR 5.14 [95% CI 2.12 to 12.48]; P < 0.001). Interestingly, both the mITT and \"no ITT\" trials showed higher odds of authors not reporting any conflicts of interest. Table 3 shows the unadjusted and adjusted ORs of the strength of association between funding and authors' conflicts of interest and the type of intention-to-treat reporting.\nAssociation Between Funding and Authors' Conflicts of Interest for the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting.\nITT is the comparison group.\nmITT = modified intention-to-treat\nITT = intention-to-treat.\n[SUBTITLE] Analysis of the Findings [SUBSECTION] We did not find any association between mITT reporting and favourable results (adjusted OR 1.27 [95% CI 0.62 to 2.61]; P = 0.51). Overall, however, trials with for-profit agency sponsorship were more likely to report favourable results when compared to trials that did not receive any sponsorship from a for-profit agency (adjusted OR 2.30 [95% CI 1.28 to 4.16]; P = 0.006).", "Our final sample consisted of 367 published RCTs. Of these, 197 were classified as ITT trials, 114 as \"no ITT\" trials and 56 as mITT trials (the analysis of the number and types of mITT deviations is reported in Appendix 1).\nThe number of participants included in each RCT ranged from 10 to 160,921 (median, 368; interquartile range 140-991). Trials classified as mITT were significantly more likely to be published in general medical journals, to report post-randomisation exclusions and to use placebo as a comparator. A total of 258 (69%) trials received complete or partial financial support from a for-profit agency, and 216 of the trials (60%) reported results that favoured the treatment under investigation. The characteristics of the included trials are shown in Table 1.\nCharacteristics of Included Randomised Controlled Trials.\nIQR = Inter-quartile range;\nThe kappa value for sequence generation was 0.79 (95% CI 0.74 to 0.85); for allocation concealment, 0.82 (95% CI 0.77 to 0.88); for blinding, 0.78 (95% CI 0.72 to 0.84); and for intention-to-treat analysis, 0.90 (95% CI 0.89 to 0.92). 
The kappa value for funding source was 0.96 (95% CI 0.95 to 0.97); for authors' conflict of interest, 0.94 (95% CI 0.93 to 0.96); and for study findings, 0.93 (95% CI 0.93 to 0.95).", "Multinomial logistic regression analyses showed that the mITT trials were more likely to have inadequate or unclear sequence generation and adequate blinding compared to the ITT trials. Trials classified as \"no ITT\" reported low standards of reporting and methodological quality.\nIn the multivariate analysis, the mITT trials appeared to have substantially similar methodological quality standards compared to the ITT trials; however, the mITT trials remained more likely to report post-randomisation exclusions (adjusted OR 3.43 [95% CI 1.70 to 6.95]; P = 0.001). Trials classified as \"no ITT\" were significantly more likely to avoid reporting flow-chart, sample size calculation and lost to follow-up. Table 2 shows the unadjusted and adjusted ORs of the strength of association between characteristics of the studies and the type of intention-to-treat reporting.\nAssociation Between Characteristics of the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting\nITT is the comparison group.\nmITT = modified intention-to-treat\nITT = intention-to-treat.", "Compared to the ITT trials, RCTs classified as mITT were more likely to receive sponsorship from a for-profit agency (adjusted OR 7.41 [95% CI 3.14 to 17.48]; P < 0.001) and were more likely to have at least one investigator who was authoring on behalf of the pharmaceutical industry (adjusted OR 5.14 [95%CI, 2.12 to 12.48]; P < 0.001). Interestingly, both the mITT and \"no ITT\" trials reported a higher odds ratio of authors not reporting any conflicts of interest. Table 3 shows the unadjusted and adjusted ORs of the strength of association between funding and authors' conflicts of interest and the type of intention-to-treat reporting.\nAssociation Between Funding and Authors' Conflicts of Interest for the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting.\nITT is the comparison group.\nmITT = modified intention-to-treat\nITT = intention-to-treat.", "We did not find any association between mITT reporting and favourable results (adjusted OR 1.27 [95% CI 0.62 to 2.61]; P = 0.51). Overall, however, trials with for-profit agency sponsorship were more likely to report favourable results when compared to trials that did not receive any sponsorship from a for-profit agency (adjusted OR 2.30 [95% CI 1.28 to 4.16]; P = 0.006).", "In a sample of RCTs, we assessed the associations between the type of intention-to-treat analysis used and study design characteristics, including the source of funding and the favourability of the results. Our first hypothesis that studies using mITT analyses would be associated with limited quality of reporting and study characteristics was refuted. Instead, we found that the mITT trials were methodologically similar to the ITT trials.\nReports from the literature about the methodological quality of industry sponsored trials are controversial. While some authors report that the methodological quality of industry-sponsored RCTs is limited, others report that trials funded by for-profit agencies have similar or better methodological quality than unsupported trials [8,11].\nIn our investigation, the mITT trials were more likely to perform post-randomisation exclusions compared to both ITT trials and \"no ITT\" trials. 
A study published in 1996 reported a paradox: RCTs that reported ITT and had sound methodological quality were more likely to perform exclusions [12]. The authors' interpretation of this finding was that studies with low methodological standards may be less likely to report exclusions, even when exclusions actually occurred. In our analysis, although the mITT trials had the highest occurrence of exclusions, we found that the \"no ITT\" trials were more likely to perform exclusions compared to the ITT trials. It is possible that the standard of reporting changed over time, and we can speculate that allowing any modification of the standard intention-to-treat analysis may have encouraged authors to report exclusions after deciding which \"type\" of mITT analysis to perform.\nSeveral studies in the medical literature report that post-randomisation exclusions in randomised trials may lead to biased estimates of the treatment effect. Individual patient data analyses of systematic reviews found that the results favoured the treatment under investigation when exclusions were not taken into account rather than when a true intention-to-treat analysis was used[13]. A meta-epidemiological study that investigated 14 meta-analyses in osteoarthritis showed that the exclusion of patients from analysis resulted in a biased estimate of the treatment effect [14]. Another study of pharmaceutical industry sponsored trials investigating serotonin reuptake inhibitors found that results were more favourable when a per-protocol analysis was used instead of an intention-to-treat analysis [15].\nThe second aim of our study was to assess the associations between the type of intention-to-treat analysis used and the presence of industry sponsorship or author conflicts of interest among RCTs. In the RCTs included in this study, the use of mITT analysis was strongly associated with industry funding and author conflicts of interest.\nIn the medical literature, there is ample evidence that the pharmaceutical industry is directly or indirectly involved at different stages in the conduct, design and publication of biomedical research[9,11,16]. The selection of an inappropriate comparison group (e.g., a drug with non-equivalent dosage) [8,17,18], multiple reporting of studies [19,20] and suppression or delay of publications [18] are all circumstances in which the influence of industry sponsorship is well documented.\nWe did not find any association between favourable results and mITT reporting. The number of mITT trials was too low to detect any difference; however, even if the sample of trials had been adequate, we are aware that any potential difference could not be free of bias given the large heterogeneity of the included trials in terms of the interventions and outcomes investigated. We believe, nonetheless, that the appropriate place to evaluate mITT as a source of bias is in meta-epidemiological studies of its impact.\nIn our previous survey, we showed that the publication of mITT reporting trials is increasing significantly. For example, the overall incidence of such trials published in 2006 in the medical literature was around 5%[5]. This incidence was probably underestimated, since studies claiming the use of the intention-to-treat approach when in fact they used an mITT approach (owing to the type of deviation present in the description of the analysis) may have escaped the previous search[5]. 
Indeed, in the present study, we were able to assess all the randomized trials in 6 journals and capture those trials that reported intention-to-treat but with deviations. The number of these trials (n = 32) was greater than the number of trials that explicitly reported the analysis as \"modified\" (n = 24; Box). Consequently, we can conclude that the phenomenon of mITT is wider than previously estimated.", "Our main limitation was that we used data from RCTs that were published in six journals; therefore, we cannot be certain that our results can be generalized to trials published elsewhere. To test our aforementioned hypothesis, we needed to compare mITT trials with trials reporting (or not reporting) intention-to-treat. Although our previous research has documented an increasing incidence of mITT trials, the proportion of mITT trials being conducted remains too low to obtain an adequate sample for comparison. Consequently, we decided to target journals that were more likely to report mITT trials by selecting general medical and specialty journals.", "Data analysis is the crucial final stage of any study design. Well-designed RCTs require strict adherence to the intention-to-treat principle, in which subjects remain in the group to which they were originally allocated. Although further research is needed to document whether mITT is a source of bias, the misuse of \"intention-to-treat\" terminology should be abandoned in order to limit unjustified exclusions, and authors need to adhere to the standard intention-to-treat principle. The updated version of the CONSORT statement goes in this direction and suggests replacing any kind of intention-to-treat reporting with a clear description of exactly who was included in each analysis[21].", "Analyses of Types and Deviations from Intention-to-Treat of Trials Classified as mITT\nOf the 56 trials classified as mITT, 24 were reported explicitly as \"modified\" while 32 reported \"intention-to-treat\" but gave descriptions that deviated from true ITT and were thus considered mITT trials.\nOverall, 31 (55%) trials reported 1 type of mITT deviation, 17 (30%) reported 2 types of mITT deviation, 5 (9%) reported 3 types of mITT deviation, and 3 (5%) did not report any type of mITT deviation. In 35 (63%) trials, the main exclusion criterion for the mITT analysis was treatment-related; in 18 of these trials, the treatment-related mITT was accompanied by at least one additional type of mITT deviation.\nIn 17 (30%) trials, the exclusion criterion used to justify the mITT analysis was the absence of a post-baseline assessment; in 13 of these trials, it was accompanied by at least one other type of mITT deviation.\nBaseline assessment-related mITT was described in 10 (18%) trials; in 6 of these, the approach was accompanied by another type of deviation. Four (7%) trials required a target condition to describe the mITT deviation, and 1 (2%) trial fell into the follow-up-related mITT deviation category. Thirteen (23%) trials did not fall into any of the above categories, and 3 (5%) trials remained without any type of mITT description.", "Maria Laura Luchetta was employed by Laboratori Guidotti s.p.a. (Pisa) from November 2003 to October 2006 and by Menarini IFR (Firenze) from November 2006 to November 2007. Alessandro Montedori, Maria Isabella Bonacini, Giovanni Casazza, Francesco Cozzolino, Piergiorio Duca, and Iosief Abraha declare that they have no competing interests.", "AM and IA conceived and designed the study. AM, MIB, FC, MLL collected data. 
GC, PD, IA did the statistical analyses. All authors contributed to the interpretation of the results, the writing and critical review of the report. All authors have seen and approved the final version of the report." ]
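The Statistical Methods text above describes fitting multinomial logistic regression models with the type of intention-to-treat reporting (ITT, mITT, "no ITT") as a three-level outcome and ITT as the reference category. The sketch below is a minimal, hypothetical illustration of such a model in Python with statsmodels; it is not the authors' original analysis (which was run in Stata 8.2), and the data file and column names (itt_type, general_medical_journal, placebo_comparator, and so on) are assumptions made only for the example.

```python
# Minimal sketch of a multinomial logistic regression of ITT-reporting type
# on trial characteristics, loosely analogous to the "Model A" described above.
# The CSV file and all column names are hypothetical; this is not the
# original Stata analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("trials.csv")  # assumed extraction dataset, one row per RCT

# Outcome coded so that ITT (0) is the reference category, as in the paper
df["itt_code"] = df["itt_type"].map({"ITT": 0, "mITT": 1, "no ITT": 2})

# Illustrative binary (0/1) covariates assumed to be pre-coded
covariates = [
    "general_medical_journal", "placebo_comparator", "postrand_exclusions",
    "adequate_sequence", "adequate_concealment", "adequate_blinding",
    "flow_chart_reported", "sample_size_reported", "ltfu_reported",
    "for_profit_funding",
]
X = sm.add_constant(df[covariates])

result = sm.MNLogit(df["itt_code"], X).fit(disp=False)

# Exponentiated coefficients give odds ratios for the two comparisons
# (mITT vs ITT and "no ITT" vs ITT)
print(result.summary())
print(np.exp(result.params))
```

Exponentiating the coefficients yields odds ratios for the mITT-versus-ITT and "no ITT"-versus-ITT comparisons, which is how estimates such as the adjusted OR for post-randomisation exclusions would be read off in a model of this kind.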
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
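Interobserver agreement for data extraction and quality assessment is summarised above with kappa statistics and 95% confidence intervals. The snippet below is a small, self-contained illustration of computing Cohen's kappa with a percentile-bootstrap confidence interval; the rater vectors are invented example data, and the paper does not state how its confidence intervals were actually obtained.

```python
# Hypothetical illustration of interobserver agreement (Cohen's kappa)
# with a percentile-bootstrap 95% CI; the ratings below are invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)

# Two reviewers classifying 60 trials as "adequate" / "not adequate"
rater1 = rng.choice(["adequate", "not adequate"], size=60, p=[0.6, 0.4])
flip = rng.random(60) >= 0.85                     # roughly 15% disagreement by construction
rater2 = np.where(flip,
                  np.where(rater1 == "adequate", "not adequate", "adequate"),
                  rater1)

kappa = cohen_kappa_score(rater1, rater2)

# Percentile bootstrap over trials for a rough confidence interval
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(rater1), len(rater1))
    boot.append(cohen_kappa_score(rater1[idx], rater2[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"kappa = {kappa:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```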
[ "Background", "Methods", "Selection of Studies", "Assessment of Methodological Quality", "Funding Source", "Authors' Potential Conflicts of Interest", "Study Findings", "Statistical Methods", "Results", "Characteristics of the Included Studies", "Characteristics of the Included Trials and the Type of Intention-to-treat Reporting", "Sponsorship, Conflicts of Interest and the Type of Intention-to-treat Reporting", "Analysis of the Findings", "Discussion", "Limitations", "Conclusion", "Appendix 1", "Competing interests", "Authors' contributions" ]
[ "The intention-to-treat principle requires that all participants that are randomized must be included in the final analysis and analyzed according to the treatment group to which they were originally assigned, regardless of the treatment received, withdrawals, lost to follow-up or cross-overs. Despite this principle, in many instances in randomized trials the term intention-to-treat was inappropriately described and participants improperly excluded[1-4]. In addition, the use of a modified intention-to-treat (mITT) approach in randomized controlled trials (RCTs) is increasingly appearing in the medical literature according to a systematic review[5]. There is no clear definition of what is mITT. In fact, descriptions of mITT analyses vary greatly from trial to trial, and often contain more than one criterion that are difficult to interpret as it is not easy to discriminate between missing data cases and deviations from protocol. Moreover, while post-randomization exclusion appear to be the primary factor that characterizes the mITT analysis, the majority of the trials were industry sponsored and reported results that favoured the therapy under investigation[5]. The uncertainty or equipoise principle that should exist in the design of RCTs states that over time, the mean benefit of investigational therapies and comparison therapies should be equal [6]. The high prevalence of significant results reported in randomized trials might not reflect true differences[7]. Some argue that this principle can be violated in the presence of financial ties between investigators and the pharmaceutical industry [8]. The biomedical literature reports strong and consistent evidence that industry-sponsored research is likely to produce pro-industry conclusions [9]. The appearance of mITT reporting in modern RCTs could be a consequence of the widespread financial ties that exists among investigators and industry.\nIn this cross-sectional study, we examined whether mITT reporting trials were different in terms of methodological quality from trials with other types of intention-to-treat reporting. Our first hypothesis was that studies using mITT analyses would be associated with limited quality of reporting and study characteristics based on the assumption that mITT reporting is a limitation in principle. Moreover, we investigated whether the reporting of mITT in RCTs was associated with industry sponsorship, the presence of authors' conflicts of interest, and favourable findings.", "[SUBTITLE] Selection of Studies [SUBSECTION] To compare methodological quality, industry sponsorship, the presence of authors' conflicts of interest, and findings among trials based on the type of intention-to-treat reporting, we sought to identify the top medical journals and specialty journals published in 2006 that were more likely to report the use of an mITT approach according to our previous survey [10]. The three high-ranking medical journals were the Journal of American Medical Association, the New England Journal of Medicine and the Lancet; and the three specialty journals were Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology. We carried out a computerized search in Medline (via PubMed) to identify RCTs published in these six journals using the publication type \"randomized controlled trials\". 
The results were then cross-checked against a search of the Cochrane Central Register of Controlled Trials.\nExcluded reports included: reports of phase I trials; pharmacokinetic, pharmacodynamic or dose-comparing studies; cluster RCTs; post-hoc studies; research letters; cost-effectiveness studies; study protocols; and prognostic and diagnostic studies. Two authors screened all of the full-text journal articles to confirm their eligibility for inclusion and excluded 97 articles (Figure 1).\nStudy Screening Process. AAC: Antimicrobial Agents and Chemotherapy; AHJ: American Heart Journal; JAMA: Journal of American Medical Association; JCO: Journal of Clinical Oncology; NEJM: New England Journal of Medicine.\nTwo other reviewers, independently and in duplicate, extracted the following information from the journal articles: characteristics of the trial, primary outcomes, number of allocation groups, number of patients, main outcome measure, P-values, and number of subjects excluded from the analysis. Moreover, reporting of flow-charts, sample size calculations, and information regarding missing data, withdrawals or patients lost to follow-up were recorded.\nThe relevant RCTs were subsequently classified according to the type of intention-to-treat analyses used as follows: ITT, trials reporting the use of standard ITT analyses; mITT, trials reporting the use of \"modified intention-to-treat\" analyses; or \"no ITT\" trials not reporting the use of any intention-to-treat analyses. This classification was independent of the reporting of any post-randomisation exclusion. Trials reporting the use of ITT with descriptions or conditions different from the standard intention-to-treat definition were classified as mITT.\nDescriptions of mITT were retrieved to evaluate the type and deviation from a true intention-to-treat analysis according to our previous classification[5].\nTo compare methodological quality, industry sponsorship, the presence of authors' conflicts of interest, and findings among trials based on the type of intention-to-treat reporting, we sought to identify the top medical journals and specialty journals published in 2006 that were more likely to report the use of an mITT approach according to our previous survey [10]. The three high-ranking medical journals were the Journal of American Medical Association, the New England Journal of Medicine and the Lancet; and the three specialty journals were Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology. We carried out a computerized search in Medline (via PubMed) to identify RCTs published in these six journals using the publication type \"randomized controlled trials\". The results were then cross-checked against a search of the Cochrane Central Register of Controlled Trials.\nExcluded reports included: reports of phase I trials; pharmacokinetic, pharmacodynamic or dose-comparing studies; cluster RCTs; post-hoc studies; research letters; cost-effectiveness studies; study protocols; and prognostic and diagnostic studies. Two authors screened all of the full-text journal articles to confirm their eligibility for inclusion and excluded 97 articles (Figure 1).\nStudy Screening Process. 
AAC: Antimicrobial Agents and Chemotherapy; AHJ: American Heart Journal; JAMA: Journal of American Medical Association; JCO: Journal of Clinical Oncology; NEJM: New England Journal of Medicine.\nTwo other reviewers independently and in duplicate, extracted the following information from the journal articles: characteristics of the trial, primary outcomes, number of allocation groups, number of patients, main outcome measure, P-values, and number of subjects excluded from the analysis. Moreover, reporting of flow-charts, sample size calculations, and information regarding missing data, withdrawals or patients lost to follow-up were recorded.\nThe relevant RCTs were subsequently classified according to the type of intention-to-treat analyses used as follows: ITT, trials reporting the use of standard ITT analyses; mITT, trials reporting the use of \"modified intention-to-treat\" analyses; or \"no ITT\" trials not reporting the use of any intention-to-treat analyses. This classification was independent of the reporting of any post-randomisation exclusion. Trials reporting the use of ITT with descriptions or conditions different from the standard intention-to-treat definition were classified as mITT.\nDescriptions of mITT were retrieved to evaluate the type and deviation from a true intention-to-treat analysis according to our previous classification[5].\n[SUBTITLE] Assessment of Methodological Quality [SUBSECTION] Sequence generation, allocation concealment and blinding were used as indicators of methodological quality and were classified as \"adequate,\" \"inadequate\" or \"unclear.\" In our analysis, the \"inadequate\" and \"unclear\" categories were collapsed into \"not adequate.\" For sequence generation, the following approaches were considered adequate: random number table, computer random number generator, drawing of lots and envelopes. The use of methods such as case record number, date of birth or day, month, or year of admission were considered inadequate for sequence generation.\nFor allocation concealment, central randomisation, coded drug packs prepared by an independent pharmacist and sequentially numbered, sealed and opaque envelopes or any method that hindered the researchers' ability to foresee the next assignment was considered adequate. Methods such as procedures based on inadequate generation of allocation sequences, open allocation schedules, alternation and unsealed or non-opaque envelopes were considered inadequate.\nRegarding blinding, a trial was considered adequately blinded if it was described as \"double-blinded\" and used adequate methods such as identical placebo tablets or adequate descriptions of who was blinded (e.g., the outcome assessor) in cases where blinding of participants and caregivers may not be feasible. Each eligible trial was independently reviewed for methodological quality. Discrepancies were resolved by discussion, and a consensus was reached for all trials.\nSequence generation, allocation concealment and blinding were used as indicators of methodological quality and were classified as \"adequate,\" \"inadequate\" or \"unclear.\" In our analysis, the \"inadequate\" and \"unclear\" categories were collapsed into \"not adequate.\" For sequence generation, the following approaches were considered adequate: random number table, computer random number generator, drawing of lots and envelopes. 
The use of methods such as case record number, date of birth or day, month, or year of admission was considered inadequate for sequence generation.\nFor allocation concealment, central randomisation, coded drug packs prepared by an independent pharmacist and sequentially numbered, sealed and opaque envelopes or any method that hindered the researchers' ability to foresee the next assignment was considered adequate. Methods such as procedures based on inadequate generation of allocation sequences, open allocation schedules, alternation and unsealed or non-opaque envelopes were considered inadequate.\nRegarding blinding, a trial was considered adequately blinded if it was described as \"double-blinded\" and used adequate methods such as identical placebo tablets or adequate descriptions of who was blinded (e.g., the outcome assessor) in cases where blinding of participants and caregivers may not be feasible. Each eligible trial was independently reviewed for methodological quality. Discrepancies were resolved by discussion, and a consensus was reached for all trials.\n[SUBTITLE] Funding Source [SUBSECTION] Information regarding the sources of funding for each trial was extracted from the articles by reviewing the article text, the conflict of interest section, information on funding (if present) and the acknowledgements section. RCT funding sources were abstracted and categorized as follows: (1) not-for-profit organization; (2) for-profit agency; (3) co-financed, indicating funding from both not-for-profit organization(s) and for-profit agency(s); (4) no funding; and (5) not reported.\nInformation regarding the sources of funding for each trial was extracted from the articles by reviewing the article text, the conflict of interest section, information on funding (if present) and the acknowledgements section. RCT funding sources were abstracted and categorized as follows: (1) not-for-profit organization; (2) for-profit agency; (3) co-financed, indicating funding from both not-for-profit organization(s) and for-profit agency(s); (4) no funding; and (5) not reported.\n[SUBTITLE] Authors' Potential Conflicts of Interest [SUBSECTION] The presence or absence of author conflicts of interest was determined by reviewing the authors' institutional affiliations, the conflicts of interest section, information on funding (if present) and the acknowledgements section.\nThe institutional affiliation of the first author (e.g., university, not-for-profit organization or for-profit agency) was retrieved from each article. The affiliation of the other authors was also recorded when a financial tie with a for-profit agency was present. 
Subsequently, the studies were classified for analysis as follows: (1) authors declaring no competing interests; (2) financial ties with the sponsor of the study disclosed, indicating that at least one author participated on behalf of a for-profit agency (e.g., as an employee or consultant); (3) other financial ties disclosed, indicating that an author did not participate on behalf of the study sponsor but did report a conflict of interest such as a past financial tie with the pharmaceutical industry; and (4) conflicts of interest not reported.\nThe presence or absence of author conflicts of interest was determined by reviewing the authors' institutional affiliations, the conflicts of interest section, information on funding (if present) and the acknowledgements section.\nThe institutional affiliation of the first author (e.g., university, not-for-profit organization or for-profit agency) was retrieved from each article. The affiliation of the other authors was also recorded when a financial tie with a for-profit agency was present. Subsequently, the studies were classified for analysis as follows: (1) authors declaring no competing interests; (2) financial ties with the sponsor of the study disclosed, indicating that at least one author participated on behalf of a for-profit agency (e.g., as an employee or consultant); (3) other financial ties disclosed, indicating that an author did not participate on behalf of the study sponsor but did report a conflict of interest such as a past financial tie with the pharmaceutical industry; and (4) conflicts of interest not reported.\n[SUBTITLE] Study Findings [SUBSECTION] For each published trial, the results for each primary outcome were classified as one of the following: (1) favourable, if the result was statistically significant with P < 0.05 or if the confidence interval did not include the null; or (2) inconclusive, if the result did not reach statistical significance.\nFor equivalence or noninferiority studies, the result was coded as favourable when the assumption of equivalence or noninferiority was satisfied.\nFor each published trial, the results for each primary outcome were classified as one of the following: (1) favourable, if the result was statistically significant with P < 0.05 or if the confidence interval did not include the null; or (2) inconclusive, if the result did not reach statistical significance.\nFor equivalence or noninferiority studies, the result was coded as favourable when the assumption of equivalence or noninferiority was satisfied.\n[SUBTITLE] Statistical Methods [SUBSECTION] The distribution of various characteristics of the examined trials such as the type of ITT analysis and the source of funding were reported using descriptive statistics. The Pearson chi-square test was used for contingency table analysis. Interobserver agreement was measured using the kappa (κ) statistic.\nWe investigated the associations between the \"type\" of intention-to-treat analysis (as a dependent, multinomial 3-level variable: ITT, mITT, or \"no ITT\") and the methodological quality (sequence generation, allocation concealment and blinding; Model A), the quality of reporting (reporting of a flow-chart, sample size calculation, and lost to follow-up; Model A), the funding source (Model B), and the presence of authors' conflicts of interest (Model C) using multinomial logistic regression.\nBivariate analyses were performed to identify independent variables with a P value less than or equal to 0.10. 
These potential confounding variables were included in each model of multivariate multinomial logistic regression analysis. In each model, the variables of journal type, use of placebo, the presence of post-randomisation exclusions, and all of the items used as indicators of methodological quality and quality of reporting were included. In model A, in addition to all the mentioned variables, the use of funding was included (the inclusion of the presence of authors' conflict of interest instead of funding did not produce any change in the results of the model).\nIn each model, the ITT category was used as a reference for comparisons.\nFurthermore, we performed univariate logistic regression to investigate the association between findings (as a dependent variable) and the \"type\" of intention-to-treat (as an independent polynomial 3-level variable: ITT, mITT, \"no ITT\"). To account for potential confounding variables we performed a multivariate logistic regression and included in the model the type of the journal, the use of placebo, the presence of post-randomisation exclusions, the use of funding and all the items that were indicators of reporting and methodological quality.\nFinally, we assessed the overall association between favourable findings and no for-profit sponsorship using a logistic regression (univariate and multivariate model were used; all of the above mentioned variables including the \"type\" of intention-to-treat were used for adjustment). This was done to evaluate any consistency with studies in the biomedical literature that report a strong association between industry sponsorship and positive findings. The Pearson goodness-of-fit test was used to assess the overall fit of the models. Odds ratios (ORs) were then computed with confidence intervals (CIs).\nA two-tailed P-value less than 0.05 was considered statistically significant. The analyses were performed using STATA/SE, version 8.2 for Windows (StataCorp, College Station, Texas).\nThe distribution of various characteristics of the examined trials such as the type of ITT analysis and the source of funding were reported using descriptive statistics. The Pearson chi-square test was used for contingency table analysis. Interobserver agreement was measured using the kappa (κ) statistic.\nWe investigated the associations between the \"type\" of intention-to-treat analysis (as a dependent, multinomial 3-level variable: ITT, mITT, or \"no ITT\") and the methodological quality (sequence generation, allocation concealment and blinding; Model A), the quality of reporting (reporting of a flow-chart, sample size calculation, and lost to follow-up; Model A), the funding source (Model B), and the presence of authors' conflicts of interest (Model C) using multinomial logistic regression.\nBivariate analyses were performed to identify independent variables with a P value less than or equal to 0.10. These potential confounding variables were included in each model of multivariate multinomial logistic regression analysis. In each model, the variables of journal type, use of placebo, the presence of post-randomisation exclusions, and all of the items used as indicators of methodological quality and quality of reporting were included. 
In model A, in addition to all the mentioned variables, the use of funding was included (the inclusion of the presence of authors' conflict of interest instead of funding did not produce any change in the results of the model).\nIn each model, the ITT category was used as a reference for comparisons.\nFurthermore, we performed univariate logistic regression to investigate the association between findings (as a dependent variable) and the \"type\" of intention-to-treat (as an independent polynomial 3-level variable: ITT, mITT, \"no ITT\"). To account for potential confounding variables we performed a multivariate logistic regression and included in the model the type of the journal, the use of placebo, the presence of post-randomisation exclusions, the use of funding and all the items that were indicators of reporting and methodological quality.\nFinally, we assessed the overall association between favourable findings and no for-profit sponsorship using a logistic regression (univariate and multivariate model were used; all of the above mentioned variables including the \"type\" of intention-to-treat were used for adjustment). This was done to evaluate any consistency with studies in the biomedical literature that report a strong association between industry sponsorship and positive findings. The Pearson goodness-of-fit test was used to assess the overall fit of the models. Odds ratios (ORs) were then computed with confidence intervals (CIs).\nA two-tailed P-value less than 0.05 was considered statistically significant. The analyses were performed using STATA/SE, version 8.2 for Windows (StataCorp, College Station, Texas).", "To compare methodological quality, industry sponsorship, the presence of authors' conflicts of interest, and findings among trials based on the type of intention-to-treat reporting, we sought to identify the top medical journals and specialty journals published in 2006 that were more likely to report the use of an mITT approach according to our previous survey [10]. The three high-ranking medical journals were the Journal of American Medical Association, the New England Journal of Medicine and the Lancet; and the three specialty journals were Antimicrobial Agents and Chemotherapy, the American Heart Journal and the Journal of Clinical Oncology. We carried out a computerized search in Medline (via PubMed) to identify RCTs published in these six journals using the publication type \"randomized controlled trials\". The results were then cross-checked against a search of the Cochrane Central Register of Controlled Trials.\nExcluded reports included: reports of phase I trials; pharmacokinetic, pharmacodymanic or dose-comparing studies; cluster RCTs; post-hoc studies; research letters; cost-effectiveness studies; study protocols; and prognostic and diagnostic studies. Two authors screened all of the full-text journal articles to confirm their eligibility for inclusion and excluded 97 articles (Figure 1).\nStudy Screening Process. AAC: Antimicrobial Agents and Chemotherapy; AHJ: American Heart Journal; JAMA: Journal of American Medical Association; JCO: Journal of Clinical Oncology; NEJM: New England Journal of Medicine.\nTwo other reviewers independently and in duplicate, extracted the following information from the journal articles: characteristics of the trial, primary outcomes, number of allocation groups, number of patients, main outcome measure, P-values, and number of subjects excluded from the analysis. 
Moreover, reporting of flow-charts, sample size calculations, and information regarding missing data, withdrawals or patients lost to follow-up were recorded.\nThe relevant RCTs were subsequently classified according to the type of intention-to-treat analyses used as follows: ITT, trials reporting the use of standard ITT analyses; mITT, trials reporting the use of \"modified intention-to-treat\" analyses; or \"no ITT\" trials not reporting the use of any intention-to-treat analyses. This classification was independent of the reporting of any post-randomisation exclusion. Trials reporting the use of ITT with descriptions or conditions different from the standard intention-to-treat definition were classified as mITT.\nDescriptions of mITT were retrieved to evaluate the type and deviation from a true intention-to-treat analysis according to our previous classification[5].", "Sequence generation, allocation concealment and blinding were used as indicators of methodological quality and were classified as \"adequate,\" \"inadequate\" or \"unclear.\" In our analysis, the \"inadequate\" and \"unclear\" categories were collapsed into \"not adequate.\" For sequence generation, the following approaches were considered adequate: random number table, computer random number generator, drawing of lots and envelopes. The use of methods such as case record number, date of birth or day, month, or year of admission were considered inadequate for sequence generation.\nFor allocation concealment, central randomisation, coded drug packs prepared by an independent pharmacist and sequentially numbered, sealed and opaque envelopes or any method that hindered the researchers' ability to foresee the next assignment was considered adequate. Methods such as procedures based on inadequate generation of allocation sequences, open allocation schedules, alternation and unsealed or non-opaque envelopes were considered inadequate.\nRegarding blinding, a trial was considered adequately blinded if it was described as \"double-blinded\" and used adequate methods such as identical placebo tablets or adequate descriptions of who was blinded (e.g., the outcome assessor) in cases where blinding of participants and caregivers may not be feasible. Each eligible trial was independently reviewed for methodological quality. Discrepancies were resolved by discussion, and a consensus was reached for all trials.", "Information regarding the sources of funding for each trial was extracted from the articles by reviewing the article text, the conflict of interest section, information on funding (if present) and the acknowledgements section. RTC funding sources were abstracted and categorized as follows: (1) not-for-profit organization; (2) for-profit agency; (3) co-financed, indicating funding from both not-for-profit organization(s) and for-profit agency(s); (4) no funding; and (5) not reported.", "The presence or absence of author conflicts of interest was determined by reviewing the authors' institutional affiliations, the conflicts of interest section, information on funding (if present) and the acknowledgements section.\nThe institutional affiliation of the first author (e.g., university, not-for-profit organization or for-profit agency) was retrieved from each article. The affiliation of the other authors was also recorded when a financial tie with a for-profit agency was present. 
Subsequently, the studies were classified for analysis as follows: (1) authors declaring no competing interests; (2) financial ties with the sponsor of the study disclosed, indicating that at least one author participated on behalf of a for-profit agency (e.g., as an employee or consultant); (3) other financial ties disclosed, indicating that an author did not participate on behalf of the study sponsor but did report a conflict of interest such as a past financial tie with the pharmaceutical industry; and (4) conflicts of interest not reported.", "For each published trial, the results for each primary outcome were classified as one of the following: (1) favourable, if the result was statistically significant with P < 0.05 or if the confidence interval did not include the null; or (2) inconclusive, if the result did not reach statistical significance.\nFor equivalence or noninferiority studies, the result was coded as favourable when the assumption of equivalence or noninferiority was satisfied.", "The distribution of various characteristics of the examined trials such as the type of ITT analysis and the source of funding were reported using descriptive statistics. The Pearson chi-square test was used for contingency table analysis. Interobserver agreement was measured using the kappa (κ) statistic.\nWe investigated the associations between the \"type\" of intention-to-treat analysis (as a dependent, multinomial 3-level variable: ITT, mITT, or \"no ITT\") and the methodological quality (sequence generation, allocation concealment and blinding; Model A), the quality of reporting (reporting of a flow-chart, sample size calculation, and lost to follow-up; Model A), the funding source (Model B), and the presence of authors' conflicts of interest (Model C) using multinomial logistic regression.\nBivariate analyses were performed to identify independent variables with a P value less than or equal to 0.10. These potential confounding variables were included in each model of multivariate multinomial logistic regression analysis. In each model, the variables of journal type, use of placebo, the presence of post-randomisation exclusions, and all of the items used as indicators of methodological quality and quality of reporting were included. In model A, in addition to all the mentioned variables, the use of funding was included (the inclusion of the presence of authors' conflict of interest instead of funding did not produce any change in the results of the model).\nIn each model, the ITT category was used as a reference for comparisons.\nFurthermore, we performed univariate logistic regression to investigate the association between findings (as a dependent variable) and the \"type\" of intention-to-treat (as an independent polynomial 3-level variable: ITT, mITT, \"no ITT\"). To account for potential confounding variables we performed a multivariate logistic regression and included in the model the type of the journal, the use of placebo, the presence of post-randomisation exclusions, the use of funding and all the items that were indicators of reporting and methodological quality.\nFinally, we assessed the overall association between favourable findings and no for-profit sponsorship using a logistic regression (univariate and multivariate model were used; all of the above mentioned variables including the \"type\" of intention-to-treat were used for adjustment). 
This was done to evaluate consistency with studies in the biomedical literature that report a strong association between industry sponsorship and positive findings. The Pearson goodness-of-fit test was used to assess the overall fit of the models. Odds ratios (ORs) were then computed with confidence intervals (CIs).
A two-tailed P value less than 0.05 was considered statistically significant. The analyses were performed using STATA/SE, version 8.2 for Windows (StataCorp, College Station, Texas).
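To make the modelling step described above concrete, the sketch below fits a multinomial logistic regression with the three-level intention-to-treat classification as the dependent variable and a few trial-level predictors, then exponentiates the coefficients to obtain odds ratios against the ITT reference category. It is a minimal illustration in Python with statsmodels rather than STATA; the variable names and the toy data are hypothetical and are not drawn from the study dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical trial-level dataset: one row per RCT.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    # 0 = ITT (reference category), 1 = mITT, 2 = "no ITT"
    "itt_type": rng.choice([0, 1, 2], size=n, p=[0.55, 0.15, 0.30]),
    "for_profit_funding": rng.integers(0, 2, size=n),
    "adequate_blinding": rng.integers(0, 2, size=n),
    "postrand_exclusions": rng.integers(0, 2, size=n),
})

# Multinomial logistic regression; category 0 (ITT) is the baseline, so each
# column of coefficients contrasts mITT or "no ITT" against ITT.
model = smf.mnlogit(
    "itt_type ~ for_profit_funding + adequate_blinding + postrand_exclusions",
    data=df,
)
res = model.fit(disp=False)

# Exponentiate coefficients to obtain odds ratios and 95% CIs.
print(np.exp(res.params))
print(np.exp(res.conf_int()))
```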
[SUBTITLE] Characteristics of the Included Studies [SUBSECTION] Our final sample consisted of 367 published RCTs. Of these, 197 were classified as ITT trials, 114 as "no ITT" trials and 56 as mITT trials (the number and types of mITT deviations are reported in Appendix 1).
The number of participants included in each RCT ranged from 10 to 160,921 (median, 368; interquartile range 140-991). Trials classified as mITT were significantly more likely to be published in general medical journals, to report post-randomisation exclusions and to use placebo as a comparator. A total of 258 (69%) trials received complete or partial financial support from a for-profit agency, and 216 of the trials (60%) reported results that favoured the treatment under investigation. The characteristics of the included trials are shown in Table 1 (Characteristics of Included Randomised Controlled Trials; IQR = interquartile range).
The kappa value for sequence generation was 0.79 (95% CI 0.74 to 0.85); for allocation concealment, 0.82 (95% CI 0.77 to 0.88); for blinding, 0.78 (95% CI 0.72 to 0.84); and for intention-to-treat analysis, 0.90 (95% CI 0.89 to 0.92). The kappa value for funding source was 0.96 (95% CI 0.95 to 0.97); for authors' conflict of interest, 0.94 (95% CI 0.93 to 0.96); and for study findings, 0.93 (95% CI 0.93 to 0.95).

[SUBTITLE] Characteristics of the Included Trials and the Type of Intention-to-treat Reporting [SUBSECTION] Multinomial logistic regression analyses showed that the mITT trials were more likely to have inadequate or unclear sequence generation and adequate blinding compared to the ITT trials. Trials classified as "no ITT" showed low standards of reporting and methodological quality.
In the multivariate analysis, the mITT trials appeared to have substantially similar methodological quality standards compared to the ITT trials; however, the mITT trials remained more likely to report post-randomisation exclusions (adjusted OR 3.43 [95% CI 1.70 to 6.95]; P = 0.001). Trials classified as "no ITT" were significantly more likely to omit a flow-chart, a sample size calculation and losses to follow-up. Table 2 (Association Between Characteristics of the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting) shows the unadjusted and adjusted ORs for the strength of these associations; ITT is the comparison group (mITT = modified intention-to-treat; ITT = intention-to-treat).

[SUBTITLE] Sponsorship, Conflicts of Interest and the Type of Intention-to-treat Reporting [SUBSECTION] Compared to the ITT trials, RCTs classified as mITT were more likely to receive sponsorship from a for-profit agency (adjusted OR 7.41 [95% CI 3.14 to 17.48]; P < 0.001) and were more likely to have at least one investigator who was authoring on behalf of the pharmaceutical industry (adjusted OR 5.14 [95% CI 2.12 to 12.48]; P < 0.001). Interestingly, both the mITT and "no ITT" trials showed higher odds of authors not reporting any conflicts of interest. Table 3 (Association Between Funding and Authors' Conflicts of Interest for the Included Randomised Controlled Trials and the Type of Intention-to-treat Reporting) shows the unadjusted and adjusted ORs; ITT is the comparison group (mITT = modified intention-to-treat; ITT = intention-to-treat).

[SUBTITLE] Analysis of the Findings [SUBSECTION] We did not find any association between mITT reporting and favourable results (adjusted OR 1.27 [95% CI 0.62 to 2.61]; P = 0.51). Overall, however, trials with for-profit agency sponsorship were more likely to report favourable results than trials that did not receive any sponsorship from a for-profit agency (adjusted OR 2.30 [95% CI 1.28 to 4.16]; P = 0.006).
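The interobserver agreement values quoted above are Cohen's kappa statistics for the two reviewers' item-level codings. The sketch below shows how such a kappa and a 95% confidence interval might be computed; the paired ratings are invented for illustration, and the bootstrap interval is one common approach, not necessarily the method used by the authors.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)

# Hypothetical paired codings of 367 trials by two reviewers
# (e.g., "adequate" vs "not adequate" for allocation concealment).
reviewer_a = rng.choice(["adequate", "not adequate"], size=367, p=[0.6, 0.4])
# Make reviewer B disagree with reviewer A about 10% of the time.
flip = rng.random(367) < 0.1
reviewer_b = np.where(
    flip,
    np.where(reviewer_a == "adequate", "not adequate", "adequate"),
    reviewer_a,
)

kappa = cohen_kappa_score(reviewer_a, reviewer_b)

# Simple non-parametric bootstrap for a 95% confidence interval.
idx = np.arange(367)
boot = []
for _ in range(2000):
    sample = rng.choice(idx, size=367, replace=True)
    boot.append(cohen_kappa_score(reviewer_a[sample], reviewer_b[sample]))
low, high = np.percentile(boot, [2.5, 97.5])

print(f"kappa = {kappa:.2f} (95% CI {low:.2f} to {high:.2f})")
```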
In a sample of RCTs, we assessed the associations between the type of intention-to-treat analysis used and study design characteristics, including the source of funding and the favourability of the results. Our first hypothesis, that studies using mITT analyses would be associated with limited quality of reporting and study characteristics, was refuted. Instead, we found that the mITT trials were methodologically similar to the ITT trials.
Reports from the literature about the methodological quality of industry-sponsored trials are controversial. While some authors report that the methodological quality of industry-sponsored RCTs is limited, others report that trials funded by for-profit agencies have similar or better methodological quality than unsupported trials [8,11].
In our investigation, the mITT trials were more likely to perform post-randomisation exclusions compared to both ITT trials and "no ITT" trials. A study published in 1996 reported a paradox that RCTs reporting ITT with sound methodological quality were more likely to perform exclusions [12]. The authors' interpretation of this finding was that studies with low methodological standards may be less likely to report exclusions, even when exclusions actually occurred. In our analysis, although the mITT trials had the highest occurrence of exclusions, we found that the "no ITT" trials were more likely to perform exclusions compared to the ITT trials. It is possible that the standard of reporting has changed over time, and we can speculate that allowing any modification of the standard intention-to-treat analysis may have encouraged authors to report exclusions after deciding which "type" of mITT analysis to perform.
Several studies in the medical literature report that post-randomisation exclusions in randomised trials may lead to biased estimates of the treatment effect. Individual patient data analyses of systematic reviews found that results favoured the treatment under investigation when exclusions were not taken into account, rather than when a true intention-to-treat analysis was used [13]. A meta-epidemiological study that investigated 14 meta-analyses in osteoarthritis showed that the exclusion of patients from analysis resulted in a biased estimate of the treatment effect [14]. Another study of pharmaceutical industry sponsored trials investigating serotonin reuptake inhibitors found that results were more favourable when a per-protocol analysis was used instead of an intention-to-treat analysis [15].
The second aim of our study was to assess the associations between the type of intention-to-treat analysis used and the presence of industry sponsorship or author conflicts of interest among RCTs. In the RCTs included in this study, the use of mITT analysis was strongly associated with industry funding and author conflicts of interest.
In the medical literature, there is plenty of evidence that the pharmaceutical industry is directly or indirectly involved at different stages in the conduct, design and publication of biomedical research [9,11,16]. The selection of an inappropriate comparison group (e.g., a drug with a non-equivalent dosage) [8,17,18], multiple reporting of studies [19,20] and suppression or delay of publications [18] are all circumstances in which the influence of industry sponsorship is well documented.
We did not find any association between favourable results and mITT reporting. The number of mITT trials was too low to detect any difference; however, even if the sample of trials had been adequate, we are aware that any potential difference cannot be free of bias given the large heterogeneity of the included trials in terms of the interventions and outcomes investigated. We believe, nonetheless, that the appropriate place to evaluate mITT as a source of bias is to explore its impact in meta-epidemiological studies.
In our previous survey, we showed that publication of mITT-reporting trials is increasing significantly. For example, the overall incidence of such trials published in 2006 in the medical literature was around 5% [5]. This incidence of mITT-reporting trials was underestimated, since studies claiming the use of the intention-to-treat approach when in fact they used the mITT approach (owing to the type of deviation present in the description of the analysis) may have escaped the previous search [5]. Indeed, in the present study, we were able to assess all the randomised trials in 6 journals and to capture those trials that reported intention-to-treat but with deviations. The number of these trials (n = 32) was greater than the number of trials actually reported as "modified" (n = 24; see Appendix 1). Consequently, we can conclude that the phenomenon of mITT is wider than previously estimated.

Our main limitation was that we used data from RCTs that were published in six journals; therefore, we are unsure whether our results can be generalised to trials published elsewhere. To test our hypothesis, we needed to compare mITT trials with trials reporting (or not reporting) intention-to-treat. Although our previous research has documented an increasing incidence of mITT trials, the proportion of mITT trials being conducted remains too low to obtain an adequate sample for comparison. Consequently, we decided to target journals that were more likely to report mITT trials by selecting general medical and specialty journals.

Data analysis is the crucial final stage of any study design. Well-designed RCTs require strict adherence to the intention-to-treat principle, in which subjects should remain in the group to which they were originally allocated. Although further research is needed to document that mITT is a potential source of bias, to limit unjustified exclusions the misuse of "intention-to-treat" terminology should be abandoned, and authors need to adhere to the standard intention-to-treat principle. The updated version of the CONSORT statement goes in this direction and suggests replacing any kind of intention-to-treat reporting with a clear description of exactly who was included in each analysis [21].

Appendix 1. Analyses of Types and Deviations from Intention-to-Treat of Trials Classified as mITT
Of the 56 trials classified as mITT, 24 were reported explicitly as "modified", while 32 reported "intention-to-treat" but with descriptions that deviate from true ITT and were thus considered mITT trials.
Overall, 31 (55%) trials reported 1 type of mITT deviation, 17 (30%) reported 2 types, 5 (9%) reported 3 types, and 3 (5%) did not report any type of mITT deviation. In 35 (63%) trials, the main exclusion criterion for the mITT analysis was treatment-related; in 18 of these trials, the treatment-related mITT was accompanied by at least one additional type of mITT deviation.
In 17 (30%) trials, the exclusion criterion used to justify the mITT analysis was the absence of a post-baseline assessment; in 13 of these trials, it was accompanied by at least one other type of mITT deviation.
A baseline-assessment-related mITT was described in 10 (18%) trials; in 6 of these, the approach was accompanied by another type of deviation. Four (7%) trials required a target condition to describe the mITT deviation, and 1 (2%) trial fell into the follow-up-related mITT deviation. Thirteen (23%) trials did not fall into any of the above categories, and 3 (5%) trials remained without any type of mITT description.

Maria Laura Luchetta was employed by Laboratori Guidotti s.p.a. (Pisa) from November 2003 to October 2006 and by Menarini IFR (Firenze) from November 2006 to November 2007. Alessandro Montedori, Maria Isabella Bonacini, Giovanni Casazza, Francesco Cozzolino, Piergiorio Duca, and Iosief Abraha declare that they have no competing interests.

AM and IA conceived and designed the study. AM, MIB, FC and MLL collected data. GC, PD and IA did the statistical analyses. All authors contributed to the interpretation of the results and to the writing and critical review of the report. All authors have seen and approved the final version of the report.
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Atorvastatin prevents Plasmodium falciparum cytoadherence and endothelial damage.
21356073
The adhesion of Plasmodium falciparum-parasitized red blood cells (PRBC) to human endothelial cells (EC) induces inflammatory processes, coagulation cascades, oxidative stress and apoptosis. These pathological processes are suspected to be responsible for the blood-brain barrier and other organ endothelial dysfunctions observed in fatal cases of malaria. Atorvastatin, a drug belonging to the cholesterol-lowering statin family, has been shown to improve endothelial function and is widely used in patients with cardiovascular disorders.
BACKGROUND
The effect of this compound on PRBC-induced endothelial impairments was assessed using endothelial co-culture models.
METHODS
Atorvastatin pre-treatment of EC was found to reduce the expression of adhesion molecules and P. falciparum cytoadherence, to protect cells against PRBC-induced apoptosis and to enhance endothelial monolayer integrity during co-incubation with parasites.
RESULTS
These results suggest that atorvastatin might be of potential interest as a protective treatment to interfere with the pathophysiological cascades leading to severe malaria.
CONCLUSIONS
[ "Antimalarials", "Atorvastatin", "Cell Adhesion", "Cells, Cultured", "Coculture Techniques", "Endothelial Cells", "Heptanoic Acids", "Humans", "Plasmodium falciparum", "Pyrroles" ]
3056843
null
null
Methods
[SUBTITLE] Culture of human endothelial cell [SUBSECTION] Primary endothelial cells were isolated from human lung (HLEC) after enzymatic digestion and selected using a continuous gradient and an immunomagnetic purification technique, as described elsewhere [24]. Endothelial cells of the ninth to twelfth passages derived from one batch were used for the experiments. Before use, cells were verified for their expression of ICAM-1, CD36, von Willebrand factor, VCAM-1, CD31, E/P-selectin and CSA. HLEC were raised in M199 medium (Gibco), supplemented with 10 μg/mL of endothelial cell growth supplement (Upstate, NY) and 10% fetal calf serum (Biowest), at 37°C with 5% CO2, using 8-chamber Labtek slides (Nunc International, Naperville), 35 mm glass-bottom dishes (MatTek Corp.), 96-well plates (Costar) or Transwell insert supports (Corning Life Sciences).

[SUBTITLE] Plasmodium falciparum culture [SUBSECTION] The P. falciparum 3D7 clone was used for these experiments. Infected erythrocytes were maintained in culture according to Trager and Jensen's technique in a suspension of erythrocytes in RPMI (Gibco) supplemented with 8.3 g/L of HEPES, 2.1 g/L of NaHCO3, 0.1 mg/mL of gentamycin, 2 g/L of dextrose and 0.4% Albumax II (Gibco, Invitrogen Corporation) [25]. The 3D7 clone was characterized for its adhesion phenotype as previously described [25] and adheres to ICAM-1 and CD36. For each experiment, parasite cultures were enriched in mature forms by Plasmagel flotation [26]. Briefly, erythrocytes were harvested from 5 to 10% parasitized cultures and centrifuged for 5 minutes at 2000 rpm. Cells were resuspended in Plasmion® and incubated for 20 minutes at 37°C. The upper fraction, containing mature trophozoites and schizonts, was collected and washed three times in RPMI before adequate adjustment of the hematocrit and parasitemia.
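The final adjustment of the enriched fraction to a working hematocrit and parasitemia is a simple mixing calculation. The sketch below illustrates it in Python; the function, volumes and percentages are hypothetical examples rather than values taken from the paper.

```python
def suspension_volumes(total_ml, target_hematocrit, target_parasitemia,
                       enriched_parasitemia):
    """Volumes (mL) of packed enriched PRBC, packed uninfected RBC and medium
    needed to prepare a suspension at the target hematocrit and parasitemia.

    Fractions are expressed as proportions (e.g. 0.05 for 5%).
    """
    packed_rbc_ml = total_ml * target_hematocrit            # total packed cells
    infected_rbc_ml = packed_rbc_ml * target_parasitemia    # infected cells needed
    enriched_ml = infected_rbc_ml / enriched_parasitemia    # packed enriched fraction
    uninfected_ml = packed_rbc_ml - enriched_ml             # packed uninfected RBC
    medium_ml = total_ml - packed_rbc_ml                    # culture medium
    return enriched_ml, uninfected_ml, medium_ml

# Example: 10 mL at 5% hematocrit and 50% parasitemia,
# starting from an enriched fraction at 80% parasitemia.
print(suspension_volumes(10.0, 0.05, 0.50, 0.80))
```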
[SUBTITLE] Real time RT-PCR [SUBSECTION] HLEC were treated with atorvastatin (Pfizer) for 24 hours, and total RNA was prepared using the RNeasy mini kit (Qiagen). RT and quantitative PCR were performed as previously described [27]. The primers used were as follows: ICAM-1, forward 5'-GCAATGTGCAAGAAGATAGCCA-3', reverse 5'-GGGCAAGACCTCAGGTCATGT-3'; VCAM-1, forward 5'-GAGTACGCAAACACTTTATGTCAATGT-3', reverse 5'-CTCGTCCTTTCGGGACCG-3'; P-selectin, forward 5'-AGACTCCCCACCAATGTGTGA-3', reverse 5'-CCACGAGTGTCAGAACAATCCA-3'; CD36, forward 5'-TAATGGCACAGATGCAGCCT-3', reverse 5'-ACAGCATAGATGGACCTGCAA-3'; HPRT1, forward 5'-AAAGGACCCCACGAAGTGTT-3', reverse 5'-TCAAGGGCATATCCTACAACAA-3'.
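Adhesion-molecule expression is later reported as fold induction relative to untreated control cells with HPRT1 as the internal reference, but the quantification model is not spelled out in the text; the sketch below assumes the common 2^-ΔΔCt approach. The Ct values are invented for illustration.

```python
def fold_induction(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    """Relative expression by the 2^-delta-delta-Ct method.

    ct_target / ct_reference: Ct values of the gene of interest and the
    housekeeping gene (here HPRT1) in the treated or co-cultured sample;
    *_ctrl: the same values in untreated control cells.
    """
    delta_ct_sample = ct_target - ct_reference
    delta_ct_control = ct_target_ctrl - ct_reference_ctrl
    delta_delta_ct = delta_ct_sample - delta_ct_control
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Ct values for ICAM-1 (target) and HPRT1 (reference).
print(fold_induction(ct_target=22.1, ct_reference=24.0,
                     ct_target_ctrl=24.3, ct_reference_ctrl=24.1))
# ~4.3-fold induction relative to the untreated control
```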
[SUBTITLE] Plasmodium falciparum adhesion assay [SUBSECTION] HLEC were raised in Labtek chambers until confluence. Suspensions of mature PRBC were deposited onto the cells and incubated for one hour at 37°C with gentle shaking every 10 minutes. After the incubation, unbound PRBC were removed and the preparation was fixed for 30 minutes at room temperature with 2% glutaraldehyde before staining with Giemsa. The number of parasites adhering to 700 HLEC was counted by direct observation with a light microscope.

[SUBTITLE] Nucleosome release assay [SUBSECTION] The proportion of apoptotic cells was assessed by measuring the intracytoplasmic release of mono- and oligonucleosomes using the Cell Death Detection ELISA® (Roche), as previously described [11]. Briefly, HLEC were raised in 96-well plates (Costar) until confluence and exposed for 24 hours to PRBC (hematocrit 5%, parasitemia 50%) or RBC (hematocrit 5%). After 5 PBS washing steps, endothelial cells were lysed and the cytoplasms were analysed for their nucleosome content.

[SUBTITLE] Endothelial barrier integrity assay [SUBSECTION] Confluent endothelial monolayers were obtained by seeding 30,000 HLEC on Transwell permeable supports (polyester, 3 μm pores, 6.5 mm diameter) and raised in M199 medium supplemented as above for 36 hours. Endothelial monolayers were then exposed for 12 hours to PRBC suspensions at 1% parasitemia and 0.5% hematocrit. Transwell compartments were then washed three times and the cell monolayers on Transwell inserts were transferred to a new plate containing PBS. Evans Blue (0.5 mg/mL) (ICN Biomedicals) was then added to the upper compartments. After 5 min of incubation at 37°C and 5% CO2, the lower compartments were collected for optical density analysis of diffused Evans Blue (630 nm) with a standard microplate reader (Bio-Tek™ EL311SX) [12].

[SUBTITLE] Western blot [SUBSECTION] Cells (1 × 10^6) were lysed in Laemmli sample buffer and protein extracts were separated by SDS-PAGE, transferred to nitrocellulose, blocked (5% non-fat dry milk) in Tris-buffered saline (TBS: 20 mM Tris-HCl pH 7.5, 150 mM NaCl) plus 0.05% Tween-20 and incubated with the primary antibody overnight at 4°C in TBS-0.5% non-fat dry milk. The membrane was washed and incubated with a PO-conjugated secondary antibody for two hours at room temperature. Secondary antibodies on western blot membranes were revealed using the ECL system.

[SUBTITLE] Immunofluorescence and confocal microscopy [SUBSECTION] HLEC were raised in 35 mm glass-bottom dishes before fixation with 1% paraformaldehyde for 5 min, permeabilized and then incubated with a polyclonal anti-Akt antibody (Cell Signaling) for 2 h in PBS 3% BSA at room temperature. FITC-conjugated secondary antibody was added and incubated for 1 h at room temperature. After several washing steps, samples were incubated with methanol at -20°C for 10 min, mounted with Vectashield medium and analysed by confocal microscopy.
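Both the cytoadherence counts and the Evans Blue readings are reported in the results as values relative to untreated control cells (mean ± SD). The sketch below shows one way such a percent-of-control summary might be computed; the replicate counts are invented for illustration.

```python
import numpy as np

def percent_of_control(treated, control):
    """Express replicate measurements as a percentage of the mean of the
    untreated control condition and return mean and SD."""
    treated = np.asarray(treated, dtype=float)
    normalized = 100.0 * treated / np.mean(control)
    return normalized.mean(), normalized.std(ddof=1)

# Hypothetical example: adhered PRBC per 700 HLEC in four replicate wells.
control_counts = [410, 385, 402, 398]          # untreated endothelial cells
atorvastatin_counts = [180, 165, 190, 172]     # 1 microM atorvastatin pre-treatment

mean_pct, sd_pct = percent_of_control(atorvastatin_counts, control_counts)
print(f"cytoadherence: {mean_pct:.0f}% ± {sd_pct:.0f}% of control")
```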
[SUBTITLE] Statistical analysis [SUBSECTION] Differences between groups were analysed for statistical significance using the Games-Howell post-hoc test (SPSS software). A p value of less than 0.05 was considered significant.
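The Games-Howell procedure compares all pairs of groups without assuming equal variances or equal group sizes. A minimal sketch is shown below using the pingouin package's implementation in Python rather than SPSS; the data frame, condition labels and readings are hypothetical.

```python
import pandas as pd
import pingouin as pg

# Hypothetical per-well apoptosis readings for three co-culture conditions.
df = pd.DataFrame({
    "condition": ["control"] * 4 + ["RBC"] * 4 + ["PRBC"] * 4,
    "nucleosome_od": [0.21, 0.19, 0.22, 0.20,
                      0.24, 0.22, 0.25, 0.23,
                      0.55, 0.61, 0.58, 0.63],
})

# Pairwise Games-Howell comparisons (unequal variances allowed).
results = pg.pairwise_gameshowell(data=df, dv="nucleosome_od", between="condition")
print(results[["A", "B", "diff", "pval"]])
```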
null
null
null
null
[ "Background", "Culture of human endothelial cell", "Plasmodium falciparum culture", "Real time RT-PCR", "Plasmodium falciparum adhesion assay", "Nucleosome release assay", "Endothelial barrier integrity assay", "Western blot", "Immunofluorescence and confocal microscopy", "Statistical analysis", "Results and discussion", "Atorvastatin prevents P. falciparum-induced endothelial adhesion molecules upregulation", "Atorvastatin decreases P. falciparum cytoadherence and P. falciparum-induced endothelial apoptosis", "Atorvastatin protects endothelial barrier integrity from P. falciparum-induced impairments", "Atorvastatin increases Akt expression in endothelial cells exposed to P. falciparum", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
Background

Malaria remains a major threat to public health, with 40% of the world population currently at risk. Every year, an estimated 500 million cases of clinical malaria and at least one million deaths are reported [1,2]. Fatal cases of malaria occur mainly in young children in Africa and consist of an acute neurological syndrome (cerebral malaria), either in isolation or concomitantly with multi-organ failure (pulmonary distress, acute renal failure) [3]. Anti-malarial drugs, such as quinine and artemisinin derivatives, are administered intravenously as emergency treatment. However, although these drugs effectively and rapidly clear parasites from the blood, 15%-20% of patients still die, probably as the result of impaired host responses [4,5]. This situation, together with the difficulty in developing vaccines, highlights the urgent need for novel complementary therapeutic strategies.
The severity of Plasmodium falciparum infection depends largely on the ability of parasitized red blood cells (PRBC) to adhere to endothelial cells (EC) and sequester in the capillary network of vital organs (e.g. brain, lungs, kidneys, liver). Moreover, activation of endothelial cells resulting from the adhesion of infected erythrocytes leads to an overexpression of different mediators, such as adhesion molecules (the "hyperadhesion" phenomenon), pro-inflammatory cytokines and coagulation factors, and contributes by itself to the pathology [6]. Among endothelial receptors, some are known to interact with PRBC, mainly via PfEMP1 (P. falciparum erythrocyte membrane protein 1), a highly polymorphic parasite ligand exported to the infected erythrocyte surface. A number of cytoadherence receptors have been identified, such as ICAM-1, CD36, VCAM-1 and P-selectin [7]. It was previously shown that P. falciparum adhesion to human endothelial cells can specifically trigger proinflammatory gene expression [8-10], oxidative stress [11] and caspase activation [11], further leading to perturbation of endothelial barrier integrity [12,13]. In addition, PRBC adhesion to EC induces redox- and rho-kinase-dependent EC activation and apoptosis, which can be reversed by the addition of anti-oxidants or the rho-kinase inhibitor fasudil, respectively [12,14].
Atorvastatin is an oral drug that lowers the level of cholesterol in the blood. All statins, including atorvastatin, prevent the production of cholesterol by blocking the enzyme HMG-CoA reductase. However, there is increasing evidence that statins may also exert effects beyond cholesterol lowering. Many of these cholesterol-independent or "pleiotropic" vascular effects of statins appear to involve restoring or improving endothelial function through increasing the bioavailability of nitric oxide, promoting re-endothelialization, reducing oxidative stress, and inhibiting inflammatory responses [15]. Thus, the endothelium-dependent effects of statins are thought to contribute to many of the beneficial effects of statin therapy in cardiovascular disease [16]. In addition to its lipid-lowering effects, atorvastatin has been shown to promote nitric oxide (NO) production by decreasing caveolin-1 expression in EC, regardless of the level of extracellular LDL-cholesterol [17]. Statins retard the initiation of atherosclerosis through the improvement of NO bioavailability, by both up-regulation of endothelial nitric oxide synthase (eNOS) mRNA and a decrease of superoxide anion (O2-) production in EC [18].
Statins modulate the adhesion cascade at multiple points by targeting both the endothelium and leukocytes, and they affect cell adhesion by inhibiting chemokine expression (MCP-1) in activated leukocytes and endothelial cells. There is also evidence that statins decrease ICAM-1 expression in stimulated EC and monocytes [19]. The effects of atorvastatin on mature EC are correlated with activation of the anti-apoptotic Akt pathway, as determined by the phosphorylation of Akt and eNOS [20]. Therefore, activation of Akt represents a mechanism that can account for some of the beneficial side effects of statins.
Given the pleiotropic effects of statins in general, and atorvastatin in particular, on the endothelium, atorvastatin was hypothesized to be useful as a protective drug against P. falciparum-induced endothelial damage. Primary human lung endothelial cells (HLEC) were used here in co-culture with P. falciparum-infected erythrocytes. Previous studies have already shown that PRBC adhesion triggers inflammation, oxidative stress and apoptosis within lung endothelial cells [9,11,12,21,22]. Moreover, acute respiratory distress as a complication of malaria infection is rare but carries a very high rate of mortality [3,23]. The data presented in this manuscript show that, concomitantly with increased expression of Akt within HLEC, atorvastatin reduces adhesion of P. falciparum-infected erythrocytes and, as a consequence, prevents the endothelial damage induced by PRBC.

Results and discussion

[SUBTITLE] Atorvastatin prevents P. falciparum-induced endothelial adhesion molecules upregulation [SUBSECTION] In addition to their anti-cholesterol function, statins are also involved in the regulation of expression of some adhesion molecules in lymphocytes, monocytes and endothelial cells. The role of the statin atorvastatin in the control of expression of CD36, ICAM-1, VCAM-1 and P-selectin, four adhesion molecules involved in tethering, rolling and adhesion processes, was analysed in a co-culture of endothelial cells and P. falciparum-parasitized red blood cells (PRBC). Atorvastatin doses ranging from 0.01 to 1 microM showed no toxic effect on human lung endothelial cells (HLEC) in vitro, doses ranging from 1.5 to 5 microM showed low toxicity, whereas atorvastatin doses above 6 microM showed mitochondrial toxicity in HLEC (MTT mitochondrial-based cell viability assay). HLEC were pre-treated for 24 hours with 1 microM of atorvastatin before being exposed to PRBC or control RBC for four hours. Total RNA of EC was extracted and adhesion molecule expression was assessed by qPCR. On the one hand, our data show that PRBC increase the expression of the endothelial cell (EC) adhesion molecules ICAM-1 and P-selectin. Indeed, PRBC cytoadherence was previously shown to increase cytoadherence itself through increased EC adhesion molecule expression ('hyperadhesion') [28], contributing to microcirculation perturbation in severe malaria pathology. More importantly, the data clearly showed that the P. falciparum-induced increase in ICAM-1 and P-selectin expression is suppressed by atorvastatin pre-treatment (Figure 1).
Effect of atorvastatin on endothelial expression of adhesion molecules: HLEC were cultured alone or co-cultured with RBC (hematocrit 5%) or PRBC (parasitemia 50%, hematocrit 5%) for 4 h after 24 hours of pre-treatment or not with 1 microM of atorvastatin (A, 1 microM). Cells were then washed and total RNA was isolated. qPCR was done using primers for amplification of human ICAM-1, VCAM-1, P-selectin and CD36. HPRT was used as internal control. Expression is shown as fold induction compared to control non-treated cells (n = 6), **P < 0.01; ***P < 0.005.

[SUBTITLE] Atorvastatin decreases P. falciparum cytoadherence and P. falciparum-induced endothelial apoptosis [SUBSECTION] Given the effect of atorvastatin treatment on the endothelial expression of adhesion molecules, its effect on the ability of PRBC to adhere to endothelial cells was analysed (Figure 2). HLEC were pre-incubated for 24 hours with various doses of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5, 1 microM) before being exposed to PRBC in a cytoadherence assay. The data showed that PRBC cytoadherence is significantly decreased by doses of atorvastatin higher than 0.5 microM. Cytoadherence decreased to 44% ± 9 for EC pre-incubated with 1 microM of atorvastatin compared to untreated controls. Atorvastatin is thus capable of interfering efficiently with the adhesion of P. falciparum-infected erythrocytes to endothelial cells.
Decreasing effect of atorvastatin on P. falciparum endothelial cytoadherence: HLEC were pre-treated with various doses of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5, 1 microM) for 24 hours before addition of RBC or PRBC at a haematocrit of 5% and a parasitemia of 50% for 1 hour. After removal of unbound erythrocytes, cells were stained with Giemsa and the number of PRBC adhering to 700 HLEC was counted. Data are the mean ± SD of PRBC cytoadherence (n = 4), **P < 0.01.
PRBC adhesion was previously shown to specifically trigger endothelial apoptosis [9]. The effect of atorvastatin (0.01-1 microM) on endothelial apoptosis was tested in the co-culture model. HLEC were pre-treated for 24 hours with atorvastatin before being exposed for four hours to PRBC or control uninfected RBC. Endothelial apoptosis was then quantitatively assessed by determining the HLEC intracytoplasmic content of nucleosomes, a well-known late apoptosis marker (Figure 3). The data showed decreasing endothelial apoptosis with increasing doses of atorvastatin pre-treatment. PRBC-induced apoptosis was completely abolished with doses higher than 0.05 microM.
Effect of atorvastatin on P. falciparum-induced endothelial apoptosis: HLEC were pre-treated with or without different concentrations of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5, 1 microM) for 24 h and then co-cultured for 4 hours with PRBC (hematocrit 5%, parasitemia 50%) or RBC (hematocrit 5%). Intracytoplasmic content of nucleosomes was assessed (n = 4), **P < 0.01; ***P < 0.005.

[SUBTITLE] Atorvastatin protects endothelial barrier integrity from P. falciparum-induced impairments [SUBSECTION] PRBC were previously observed to trigger a specific signaling pathway that leads to endothelial barrier permeabilization [12]. Given the effects of atorvastatin pre-treatment on endothelial apoptosis, atorvastatin was tested for whether it could also have beneficial effects on endothelial barrier integrity. HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then with PRBC for 4 hours. After washing of the endothelial monolayer, cell barrier integrity was assessed by the standard Evans Blue extrusion method. Figure 4 shows that co-culture of PRBC with endothelial cells induces endothelial barrier permeability, compared to unexposed control cells or RBC-exposed cells. Pre-treatment of endothelial cells with 1 microM atorvastatin strongly decreased PRBC-induced barrier permeabilization. This result suggests that atorvastatin is also very efficient in protecting endothelial barrier integrity from P. falciparum-induced impairments. However, it should be noted that the relevance of these quantitative data is restricted to the detection method of standard Evans Blue extrusion.
Protective role of atorvastatin against P. falciparum-induced endothelial barrier impairment: HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then co-cultured with PRBC (hematocrit 0.5%, parasitemia 1%) or RBC (hematocrit 0.5%) for 4 h. Endothelial barrier permeability was estimated by Evans Blue diffusion through the endothelial monolayer. Results are presented relative to the OD of control HLEC. A, atorvastatin. (n = 6), ***P < 0.005, NS: non-significant.

[SUBTITLE] Atorvastatin increases Akt expression in endothelial cells exposed to P. falciparum [SUBSECTION] Among the pleiotropic effects of statins, their role in the activation of the cell survival protein kinase B (Akt) signaling pathway is well established [29,30]. Given the protective effect of atorvastatin against cell death in our endothelial-PRBC co-culture model, the effect of atorvastatin treatment on endothelial Akt expression was analysed. HLEC were treated for 24 hours with 0.05, 1 or 5 microM of atorvastatin. Figure 5A shows Western blot data and the respective quantitative analysis by densitometry imaging. It shows that pre-treatment with increasing doses of atorvastatin up-regulates Akt expression in HLEC. Similarly, Akt expression was up-regulated in co-cultures of HLEC exposed to RBC or PRBC. The effect of atorvastatin on the endothelial up-regulation of Akt expression was confirmed by confocal microscopy imaging analyses (Figure 5B). These data suggest that atorvastatin increases Akt expression within endothelial cells.
Atorvastatin promotes endothelial expression of Akt: A. Western blot.
HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then cultured or co-cultured with PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Cells were harvested, washed, protein extracts separated by SDS-PAGE, transferred to nitrocellulose and immunoblotted with anti-Akt and anti-tubulin antibodies, the latter as internal control. Molecular mass of the corresponding proteins is shown. The Western blot data were then quantitatively analysed by densitometry imaging (ratio Akt/Tubulin). Similar results were obtained in three independent experiments. B. Confocal microscopy. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then exposed to PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Fixed cells were stained with anti-Akt antibody and analysed by confocal microscopy. Similar results were obtained in 3 independent experiments. (Bar: 20 μm).\nThe data presented here demonstrate that atorvastatin can be used to reduce the cytoadherence of P. falciparum on endothelial cells, which is one of key events, along with the inflammatory burst, involved in the pathogenesis of human severe malaria cases. Atorvastatin can decrease the expression of adhesion molecules and also prevents the PRBC-induced overexpression of adhesion molecules. Moreover, atorvastatin shows an ability to strongly protect endothelial cell against P. falciparum-induced collateral damages, cell apoptosis and endothelial barrier permeabilization.\nThe observed cytoprotection effect on endothelial cells may mainly be due to the decrease of PRBC adhesion through the down-regulation of adhesion molecules, as PRBC contact was previously shown to specifically trigger pro-inflammatory and pro-apoptotic signaling cascades [9]. Moreover, atorvastatin stimulates Akt expression within HLEC, which is the major actor of anti-apoptotic and endothelial cell survival pathways [29]. Endothelial protection of atorvastatin appears then to be 'pleiotropic' via anti-adhesive, anti-inflammatory and anti-apoptotic synergistic effects. Indeed, PRBC cytoadherence is known to activate many deleterious cascades such as oxidative stress, via mitochondrial ROS (radical oxygen species) production, rho-kinase and NF-kappa B-dependent signaling to induce 'hyperadhesion' phenomenon, endothelial activation and apoptosis [8,11,12,14,28,31]. When phosphorylated, Akt inactivates pro-apoptotic elements such as Bad, preventing mitochondrial ROS release, and caspase 9 [32]. Statins can also increase the bioavailability and the physiological production of endothelial nitric oxide (NO) via Akt phosphorylation and downstream effector endothelial NO-synthase (eNOS), and via the stabilization of eNOS mRNA [30,33]. NO has many crucial functions in vascular endothelium particularly in maintaining permanently anti-inflammatory and anti-adhesive endothelium homeostasis. NO-donors have currently been under investigation in cerebral malaria adjunctive emergency treatments, the efficiency of which remains controversial [34]. Indeed, the deleterious scavenging of NO by ROS has not been considered with antioxidant co-treatment yet. 
The data presented here demonstrate that atorvastatin can be used to reduce the cytoadherence of P. falciparum on endothelial cells, which is one of the key events, along with the inflammatory burst, involved in the pathogenesis of severe human malaria. Atorvastatin decreases the expression of adhesion molecules and also prevents the PRBC-induced overexpression of adhesion molecules. Moreover, atorvastatin strongly protects endothelial cells against P. falciparum-induced collateral damage, cell apoptosis and endothelial barrier permeabilization.

The observed cytoprotective effect on endothelial cells may be due mainly to the decrease of PRBC adhesion through the down-regulation of adhesion molecules, as PRBC contact was previously shown to specifically trigger pro-inflammatory and pro-apoptotic signaling cascades [9]. Moreover, atorvastatin stimulates Akt expression within HLEC, a major actor in anti-apoptotic and endothelial cell survival pathways [29]. The endothelial protection conferred by atorvastatin therefore appears to be 'pleiotropic', combining anti-adhesive, anti-inflammatory and anti-apoptotic effects. Indeed, PRBC cytoadherence is known to activate many deleterious cascades, such as oxidative stress via mitochondrial ROS (reactive oxygen species) production, and Rho-kinase- and NF-kappa B-dependent signaling, to induce the 'hyperadhesion' phenomenon, endothelial activation and apoptosis [8,11,12,14,28,31]. When phosphorylated, Akt inactivates pro-apoptotic elements such as Bad, preventing mitochondrial ROS release, and caspase 9 [32]. Statins can also increase the bioavailability and the physiological production of endothelial nitric oxide (NO) via Akt phosphorylation and its downstream effector endothelial NO-synthase (eNOS), and via stabilization of eNOS mRNA [30,33]. NO has many crucial functions in the vascular endothelium, particularly in maintaining an anti-inflammatory and anti-adhesive endothelial homeostasis. NO donors are currently under investigation as adjunctive emergency treatments for cerebral malaria, although their efficacy remains controversial [34]. Indeed, the deleterious scavenging of NO by ROS has not yet been addressed by antioxidant co-treatment. Given their 'pleiotropic' molecular effects, particularly their antioxidant action and their promotion of NO bioavailability, statins may be highly relevant to 'shield' the endothelium against both PRBC-induced impairments and uncontrolled host responses in severe malaria.

There is increasing interest in using statins as complementary therapeutics to anti-plasmodial molecules. To date, such studies have been performed on infected erythrocytes only and have interestingly shown a direct inhibitory effect on P. falciparum growth in vitro, at high doses ranging from 120 to 240 microM [35], 5 to 40 microM [36] or 2.5 to 10.8 microM [37]. Atorvastatin was shown to be 10 times more active against P. falciparum than mevastatin, simvastatin, lovastatin, fluvastatin or pravastatin [36], suggesting that atorvastatin is the best candidate among statins for therapeutic prospects in malaria treatment. Simvastatin was also used in a mouse model of cerebral malaria (C57BL/6 mice infected with Plasmodium berghei), but failed to significantly decrease parasitemia or prevent death [38,39]. Atorvastatin may have a beneficial effect on mouse survival only when administered in combination with artesunate [40]. Indeed, atorvastatin was recently shown to reduce the IC50 of the anti-plasmodial activity of quinine, mefloquine and dihydroartemisinin [41-43]. However, the pathogenesis of cerebral malaria in the murine model relies solely on the inflammatory response, unlike the pathogenesis of human cerebral malaria; indeed, the cytoadherence phenomenon does not exist in P. berghei infections of mice [44,45]. In the present study, atorvastatin was used as a pre-treatment to specifically evaluate its potential effects on endothelial cells. Endothelial damage is almost universal in severe malaria pathogenesis, and statins have proven their efficacy in clinical use against cardiovascular diseases for 20 years [29]. Interestingly, significant endothelial protection was obtained with doses as low as 100 nanoM, 500 nanoM or 1 microM.

Although the 'pleiotropic' effects of statins are well documented, the precise molecular mechanism by which the PI3K/Akt/eNOS pathway is activated remains unknown [29,46]. Because the cholesterol-lowering property of statins by itself improves endothelial function, their beneficial effects were long attributed to the inhibition of HMG-CoA reductase and the cholesterol synthesis pathway. However, the improvement of endothelial function was shown to occur earlier than cholesterol lowering and independently of serum cholesterol levels [47,48]. Statins inhibit the synthesis of mevalonate, the second step in the cholesterol synthesis pathway. Mevalonate is also an important precursor of isoprenoid intermediates such as farnesyl pyrophosphate (FPP) and geranylgeranyl pyrophosphate (GGPP). After the post-translational modifications of geranylgeranylation and farnesylation, isoprenoids serve as inner-membrane-anchoring molecules that functionalize small GTPases such as Rho (GGPP) and Ras (FPP) [49,50], which are key signaling molecules. Rho-GTPases act as molecular switches (active when bound to GTP and inactive when bound to GDP) and are the first intermediates of intracellular signaling towards the downstream effector Rho-kinase. We have previously demonstrated that P. falciparum-induced endothelial collateral damage is precisely dependent on Rho-kinase signaling [12]. The Rho-kinase pathway is in many respects antagonistic to the PI3K/Akt/eNOS endothelial cell survival pathway [46]. The strong endothelial protective effects obtained with atorvastatin against P. falciparum may therefore be related to both Rho-kinase pathway inhibition and promotion of the Akt cell survival pathway. However, it was previously reported that Rho-kinase inhibition by fasudil did not significantly decrease parasite numbers or P. falciparum cytoadherence [12,51].
[SUBTITLE] Conclusions [SUBSECTION]
In conclusion, the data from this report demonstrate that the HMG-CoA reductase inhibitor atorvastatin prevents P. falciparum cytoadherence and confers on the endothelium the ability to resist the consequent cellular damage. This study therefore suggests that atorvastatin may be a good candidate for further studies or clinical trials on the use of statins in malaria treatment. Moreover, the safety of statins in malaria treatment needs to be addressed, since these molecules were not originally designed for this therapeutic approach.

[SUBTITLE] List of abbreviations [SUBSECTION]
EC: endothelial cell; RBC: red blood cells; PRBC: parasitized red blood cells; HLEC: human lung endothelial cells.

[SUBTITLE] Competing interests [SUBSECTION]
The authors declare that they have no competing interests.

[SUBTITLE] Authors' contributions [SUBSECTION]
ZT contributed to writing the manuscript and to designing and conducting the experiments. PP contributed to designing the experiments. NN, IA and SA contributed to performing some experiments. FS provided substantial constructive advice on the initial design of the project. AR and DM contributed to writing the manuscript and designing the experiments. All authors have read and approved the final version of the manuscript.
[ "Background", "Methods", "Culture of human endothelial cell", "Plasmodium falciparum culture", "Real time RT-PCR", "Plasmodium falciparum adhesion assay", "Nucleosome release assay", "Endothelial barrier integrity assay", "Western blot", "Immunofluorescence and confocal microscopy", "Statistical analysis", "Results and discussion", "Atorvastatin prevents P. falciparum-induced endothelial adhesion molecules upregulation", "Atorvastatin decreases P. falciparum cytoadherence and P. falciparum-induced endothelial apoptosis", "Atorvastatin protects endothelial barrier integrity from P. falciparum-induced impairments", "Atorvastatin increases Akt expression in endothelial cells exposed to P. falciparum", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "Malaria remains a major threat to public health with 40% of the world population currently at risk. Every year, an estimated 500 million cases of clinical malaria and at least one million deaths are reported [1,2]. Fatal cases of malaria occur mainly in young children in Africa and consist of an acute neurological syndrome (cerebral malaria), either in isolation or concomitantly with multi-organ failure (pulmonary distress, acute renal failure) [3]. Anti-malarial drugs, such as quinine and artemisinin derivatives are administered intravenously as an emergency treatment. However, although these drugs effectively and rapidly clear parasites from the blood, 15%-20% of patients still die, probably as the result of impaired host responses [4,5]. Such a situation together with the difficulty in developing vaccines highlights the urgent need of novel strategies for complement therapeutics.\nThe severity of Plasmodium falciparum infection depends largely on the ability of parasitized red blood cells (PRBC) to adhere on endothelial cells (EC) and sequester in the capillary network of vital organs (e.g. brain, lungs, kidneys, liver.) Moreover, activation of endothelial cells resulting from the adhesion of infected erythrocytes leads to an overexpression of different mediators, such as adhesion molecules (\"hyperadhesion\" phenomenon), pro-inflammatory cytokines, coagulation factors and contributes by itself to the pathology [6]. Among endothelial receptors, some are known to interact with PRBC, mainly via PfEMP1 (P. falciparum erythrocyte membrane protein 1), a highly polymorphic parasite ligand exported on the infected erythrocyte surface. A number of the cytoadherence receptors have been identified, such as ICAM-1, CD36, VCAM-1 or P-selectin [7]. It was previously shown that P. falciparum adhesion to human endothelial cells can specifically trigger proinflammatory gene expression[8-10], oxidative stress [11] and caspases activation [11], further leading to perturbation of the endothelial barrier integrity [12,13]. In addition PRBC adhesion to EC induces redox and rho-kinase dependent EC activation and apoptosis, which can be reversed respectively by the addition of anti-oxidants or fasudil rho-kinase inhibitor [12,14].\nAtorvastatin is an oral drug that lowers the level of cholesterol in the blood. All statins, including atorvastatin, prevent the production of cholesterol by blocking the enzyme HMGCoA reductase. However, there is increasing evidence that statins may also exert effects beyond cholesterol lowering. Many of these cholesterol-independent or \"pleiotropic\" vascular effects of statins appear to involve restoring or improving the endothelial function through increasing the bioavailability of nitric oxide, promoting re-endothelialization, reducing oxidative stress, and inhibiting inflammatory responses [15]. Thus, the endothelium-dependent effects of statins are thought to contribute to many of the beneficial effects of statin therapy in cardiovascular disease [16]. In addition to its lipids lowering effects, atorvastatin has been shown to promote nitric oxide (NO) production by decreasing caveolin-1 expression in EC, regardless of the level of extracellular LDL-cholesterol [17]. Statins retard the initiation of atherosclerosis formation through the improvement of NO bioavailability by both up-regulation of endothelial-nitric oxide synthase (eNOS) mRNA and decrease of superoxide anion O2- production in EC [18]. 
Statins modulate the adhesion cascade at multiple points by targeting both the endothelium and leukocytes, and affect cell adhesion by inhibiting chemokine (MCP-1) expression in activated leukocytes and endothelial cells. There is also evidence that statins decrease ICAM-1 expression in stimulated EC and monocytes [19]. The effects of atorvastatin on mature EC are correlated with activation of the anti-apoptotic Akt pathway, as determined by the phosphorylation of Akt and eNOS [20]. Therefore, activation of Akt represents a mechanism that can account for some of the beneficial side effects of statins.

Given the pleiotropic effects of statins in general, and atorvastatin in particular, on the endothelium, atorvastatin was hypothesized to be useful as a protective drug against P. falciparum-induced endothelial damage. Primary human lung endothelial cells (HLEC) were used here in co-culture with P. falciparum-infected erythrocytes. Previous studies have already shown that PRBC adhesion triggers inflammation, oxidative stress and apoptosis within lung endothelial cells [9,11,12,21,22]. Moreover, acute respiratory distress as a complication of malaria infection is rare but carries a very high mortality rate [3,23]. The data presented in this manuscript show that, concomitantly with increased expression of Akt within HLEC, atorvastatin reduces the adhesion of P. falciparum-infected erythrocytes and, as a consequence, prevents the endothelial damage induced by PRBC.

[SUBTITLE] Methods [SUBSECTION]

[SUBTITLE] Culture of human endothelial cell [SUBSECTION]
Primary endothelial cells were isolated from human lung (HLEC) after enzymatic digestion and selected using a continuous gradient and an immunomagnetic purification technique, as described elsewhere [24]. Endothelial cells of the ninth to twelfth passages derived from one batch were used for the experiments. Before use, cells were verified for their expression of ICAM-1, CD36, Von Willebrand factor, VCAM-1, CD31, E/P-selectin and CSA. HLEC were raised in M199 medium (Gibco) supplemented with 10 μg/ml of endothelial cell growth supplement (Upstate, NY) and 10% fetal calf serum (Biowest), at 37°C with 5% CO2, using 8-chamber Labtek slides (Nunc International, Naperville), 35 mm glass-bottom dishes (MatTek Corp.), 96-well plates (Costar), or Transwell insert supports (Corning LifeSciences).
[SUBTITLE] Plasmodium falciparum culture [SUBSECTION]
The P. falciparum 3D7 clone was used for these experiments. Infected erythrocytes were maintained in culture according to Trager and Jensen's technique, in a suspension of erythrocytes in RPMI (Gibco) supplemented with 8.3 g/l of Hepes, 2.1 g/l of NaHCO3, 0.1 mg/ml of gentamicin, 2 g/l of dextrose and 0.4% of Albumax II (Gibco, Invitrogen Corporation) [25]. The 3D7 clone was characterized for adhesion phenotype as previously described [25] and adheres to ICAM-1 and CD36. For each experiment, parasite cultures were enriched in mature forms by Plasmagel floating [26]. Briefly, erythrocytes were harvested from 5 to 10% parasitized cultures and centrifuged for 5 minutes at 2000 rpm. Cells were resuspended in Plasmion® and incubated for 20 minutes at 37°C. The upper fraction, containing mature trophozoites and schizonts, was collected and washed three times in RPMI before adequate adjustment of the hematocrit and parasitemia.

[SUBTITLE] Real time RT-PCR [SUBSECTION]
HLECs were treated with atorvastatin (Pfizer) for 24 hours, and total RNA was prepared using the RNeasy mini kit (Qiagen). RT and quantitative PCR were performed as previously described [27]. The primers used were as follows:
ICAM-1: forward 5'-GCAATGTGCAAGAAGATAGCCA-3', reverse 5'-GGGCAAGACCTCAGGTCATGT-3';
VCAM-1: forward 5'-GAGTACGCAAACACTTTATGTCAATGT-3', reverse 5'-CTCGTCCTTTCGGGACCG-3';
P-selectin: forward 5'-AGACTCCCCACCAATGTGTGA-3', reverse 5'-CCACGAGTGTCAGAACAATCCA-3';
CD36: forward 5'-TAATGGCACAGATGCAGCCT-3', reverse 5'-ACAGCATAGATGGACCTGCAA-3';
HPRT1: forward 5'-AAAGGACCCCACGAAGTGTT-3', reverse 5'-TCAAGGGCATATCCTACAACAA-3'.

[SUBTITLE] Plasmodium falciparum adhesion assay [SUBSECTION]
HLEC were raised in Labtek slides until confluence. Suspensions of mature PRBC were deposited onto the cells and incubated for one hour at 37°C with gentle shaking every 10 minutes. After the incubation, unbound PRBC were removed and the preparation was fixed for 30 minutes at room temperature with 2% glutaraldehyde before staining with Giemsa. The number of parasites adhering to 700 HLEC was counted by direct observation with a light microscope.
[SUBTITLE] Nucleosome release assay [SUBSECTION]
The proportion of apoptotic cells was assessed by measuring the intracytoplasmic release of mono- and oligonucleosomes using the Cell Death Detection ELISA® (Roche), as previously described [11]. Briefly, HLEC were raised in 96-well plates (Costar) until confluence and exposed for 24 hours to PRBC (hematocrit 5%, parasitemia 50%) or RBC (hematocrit 5%). After 5 PBS washing steps, endothelial cells were lysed and the cytoplasms were analysed for their nucleosome content.

[SUBTITLE] Endothelial barrier integrity assay [SUBSECTION]
Confluent endothelial monolayers were obtained by seeding 30,000 HLEC on Transwell permeable supports (polyester, 3 μm pores, 6.5 mm diameter) and raising them in M199 medium supplemented as above for 36 hours. Endothelial monolayers were then exposed for 12 hours to PRBC suspensions at 1% parasitemia and 0.5% hematocrit. Transwell compartments were then washed three times and the cell monolayers on the Transwell inserts were transferred to a new plate containing PBS. Evans blue (0.5 mg/mL) (ICN Biomedicals) was then added to the 'upper compartments'. After 5 min of incubation at 37°C and 5% CO2, the 'lower compartments' were collected for optical density analysis of diffused Evans blue (630 nm) with a standard microplate reader (Bio-Tek™ EL311SX) [12].
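To make the barrier readout concrete, the short Python sketch below turns lower-compartment OD630 readings into a permeability index relative to unexposed control monolayers, which is how the barrier results are expressed in Figure 4. The OD values and group names are hypothetical assumptions, not data from the study.

```python
import numpy as np

# Hypothetical OD630 readings of diffused Evans blue in the lower compartment
# (replicate wells per condition). Values are illustrative only.
od = {
    "control":       np.array([0.110, 0.105, 0.118]),
    "RBC":           np.array([0.112, 0.108, 0.115]),
    "PRBC":          np.array([0.310, 0.285, 0.330]),
    "A 1 uM + PRBC": np.array([0.150, 0.142, 0.160]),
}

control_mean = od["control"].mean()

# Permeability index: mean OD of each condition relative to the control mean.
for group, values in od.items():
    index = values.mean() / control_mean
    print(f"{group}: permeability index = {index:.2f} "
          f"(mean OD = {values.mean():.3f} ± {values.std(ddof=1):.3f})")
```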
[SUBTITLE] Western blot [SUBSECTION]
Cells (1 × 10^6) were lysed in Laemmli sample buffer, and protein extracts were separated by SDS-PAGE, transferred to nitrocellulose, blocked (5% non-fat dry milk) in Tris-buffered saline (TBS: 20 mM Tris-HCl pH 7.5, 150 mM NaCl) plus 0.05% Tween-20, and incubated with the primary antibody overnight at 4°C in TBS with 0.5% non-fat dry milk. The membrane was washed and incubated with a peroxidase-conjugated secondary antibody for two hours at room temperature. Secondary antibodies on the Western blot membranes were revealed using the ECL system.

[SUBTITLE] Immunofluorescence and confocal microscopy [SUBSECTION]
HLEC were raised in 35 mm glass-bottom dishes before fixation with 1% paraformaldehyde for 5 min, permeabilized, and then incubated with a polyclonal anti-Akt antibody (Cell Signaling) for 2 h in PBS with 3% BSA at room temperature. A FITC-conjugated secondary antibody was added and incubated for 1 h at room temperature. After several washing steps, samples were incubated with methanol at -20°C for 10 min, mounted with Vectashield medium and analysed by confocal microscopy.

[SUBTITLE] Statistical analysis [SUBSECTION]
Differences between groups were analysed for statistical significance using the Games-Howell post-hoc test (SPSS software). A p value of less than 0.05 was considered significant.
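The study reports Games-Howell post-hoc comparisons computed in SPSS. As a rough illustration of what that test does, the Python sketch below implements the standard Games-Howell pairwise comparison (Welch-type standard errors and degrees of freedom, with p values from the studentized range distribution) on hypothetical replicate measurements. The group labels and numbers are invented for illustration, and the exact SPSS output may differ slightly.

```python
import itertools
import numpy as np
from scipy.stats import studentized_range

# Hypothetical replicate measurements (e.g. normalized OD values) per group.
# These numbers are illustrative only; they are not data from the study.
groups = {
    "control":      [1.00, 0.95, 1.05, 0.98, 1.02],
    "PRBC":         [2.10, 1.95, 2.30, 2.05, 2.20],
    "A 1uM + PRBC": [1.20, 1.15, 1.30, 1.10, 1.25],
}

k = len(groups)  # number of groups, used by the studentized range distribution

def games_howell(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    va, vb = a.var(ddof=1) / na, b.var(ddof=1) / nb
    # Welch-Satterthwaite degrees of freedom for this pair of groups
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    # Studentized-range statistic q and p value from the studentized range
    q = abs(a.mean() - b.mean()) / np.sqrt((va + vb) / 2.0)
    p = studentized_range.sf(q, k, df)
    return q, df, p

for (name1, x1), (name2, x2) in itertools.combinations(groups.items(), 2):
    q, df, p = games_howell(x1, x2)
    print(f"{name1} vs {name2}: q = {q:.2f}, df = {df:.1f}, p = {p:.4f}")
```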
[SUBTITLE] Results and discussion [SUBSECTION]

[SUBTITLE] Atorvastatin prevents P. falciparum-induced endothelial adhesion molecules upregulation [SUBSECTION]
In addition to their cholesterol-lowering function, statins are also involved in regulating the expression of some adhesion molecules in lymphocytes, monocytes and endothelial cells. The role of the statin atorvastatin in controlling the expression of CD36, ICAM-1, VCAM-1 and P-selectin, four adhesion molecules involved in the tethering, rolling and adhesion processes, was analysed in a co-culture of endothelial cells and P. falciparum-parasitized red blood cells (PRBC). Atorvastatin doses ranging from 0.01 to 1 microM showed no toxic effect on human lung endothelial cells (HLEC) in vitro, doses ranging from 1.5 to 5 microM showed low toxicity, and doses above 6 microM showed mitochondrial toxicity on HLEC (MTT mitochondrial-based cell viability assay). HLEC were pre-treated for 24 hours with 1 microM of atorvastatin before being exposed to PRBC or control RBC for four hours. Total RNA of EC was extracted and adhesion molecule expression was assessed by qPCR. The data show that PRBC increase the expression of the endothelial cell (EC) adhesion molecules ICAM-1 and P-selectin. Indeed, PRBC cytoadherence was previously shown to amplify cytoadherence itself through increased EC adhesion molecule expression ('hyperadhesion') [28], contributing to microcirculatory perturbation in severe malaria pathology. More importantly, the data clearly showed that the P. falciparum-induced increase of ICAM-1 and P-selectin expression is suppressed by atorvastatin pre-treatment (Figure 1).

Effect of atorvastatin on endothelial expression of adhesion molecules: HLEC were cultured alone or co-cultured with RBC (hematocrit 5%) or PRBC (parasitemia 50%, hematocrit 5%) for 4 h, after 24 hours of pre-treatment or not with 1 microM of atorvastatin (A, 1 microM). Cells were then washed and total RNA was isolated. qPCR was performed using primers for the amplification of human ICAM-1, VCAM-1, P-selectin and CD36, with HPRT as internal control. Expression is shown as fold induction compared to control non-treated cells (n = 6), **P < 0.01; *** P < 0.005.
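The fold-induction values in Figure 1 come from qPCR normalized to the HPRT internal control and to untreated cells. The study does not state the exact quantification model, but a common way to obtain such values is the 2^-ΔΔCt method; the Python sketch below works through that calculation on hypothetical Ct values (all numbers are invented for illustration).

```python
# Minimal 2^-(delta delta Ct) sketch with hypothetical Ct values.
# Ct of the target gene and of the reference (HPRT) for an untreated control
# sample and a PRBC-exposed sample; none of these numbers come from the study.
ct = {
    "control": {"ICAM1": 26.0, "HPRT": 22.0},
    "PRBC":    {"ICAM1": 24.2, "HPRT": 22.1},
}

def fold_induction(sample, target, reference="HPRT", calibrator="control"):
    # delta Ct = Ct(target) - Ct(reference), within each sample
    d_ct_sample = ct[sample][target] - ct[sample][reference]
    d_ct_calibrator = ct[calibrator][target] - ct[calibrator][reference]
    # delta delta Ct = delta Ct(sample) - delta Ct(calibrator)
    dd_ct = d_ct_sample - d_ct_calibrator
    # assuming ~100% amplification efficiency, fold change = 2^-(ddCt)
    return 2.0 ** (-dd_ct)

print(f"ICAM-1 fold induction in PRBC-exposed cells: "
      f"{fold_induction('PRBC', 'ICAM1'):.2f}")
```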
[SUBTITLE] Atorvastatin decreases P. falciparum cytoadherence and P. falciparum-induced endothelial apoptosis [SUBSECTION]
Given the effect of atorvastatin treatment of the endothelium on the expression of adhesion molecules, its effect on the ability of PRBC to adhere to endothelial cells was analysed (Figure 2). HLEC were pre-incubated for 24 hours with various doses of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5 or 1 microM) before being exposed to PRBC in a cytoadherence assay. The data showed that PRBC cytoadherence is significantly decreased by atorvastatin doses of 0.5 microM and above. Cytoadherence decreased to 44 ± 9% of that of untreated controls for EC pre-incubated with 1 microM of atorvastatin. Atorvastatin is thus capable of efficiently interfering with the adhesion of P. falciparum-infected erythrocytes to endothelial cells.

Decreasing effect of atorvastatin on P. falciparum endothelial cytoadherence: HLEC were pre-treated with various doses of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5 or 1 microM) for 24 hours before addition of RBC or PRBC at a hematocrit of 5% and a parasitemia of 50% for 1 hour. After removal of unbound erythrocytes, cells were stained with Giemsa and the number of PRBC adhering to 700 HLEC was counted. Data are the mean ± SD of PRBC cytoadherence (n = 4), **P < 0.01.
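As a worked example of how such a percent-of-control figure can be derived from raw adhesion counts (PRBC adhering per 700 HLEC), here is a small Python sketch; the counts are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical counts of adherent PRBC per 700 HLEC in replicate experiments.
# Values are illustrative only; they are not data from the study.
untreated  = np.array([410, 380, 430, 405])
atorva_1um = np.array([180, 160, 190, 175])

baseline = untreated.mean()

# Express each treated replicate as a percentage of the untreated mean,
# then report mean ± SD of those percentages.
percent_of_control = 100.0 * atorva_1um / baseline
print(f"Cytoadherence with 1 uM atorvastatin: "
      f"{percent_of_control.mean():.0f} ± {percent_of_control.std(ddof=1):.0f} % of control")
```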
PRBC adhesion was previously shown to specifically trigger endothelial apoptosis [9]. The effect of atorvastatin (0.01-1 microM) on endothelial apoptosis was therefore tested in the co-culture model. HLEC were pre-treated for 24 hours with atorvastatin before being exposed for four hours to PRBC or control uninfected RBC. Endothelial apoptosis was then quantitatively assessed by determining the HLEC intracytoplasmic content of nucleosomes, a well-known late apoptosis marker (Figure 3). The data showed that endothelial apoptosis decreased with increasing doses of atorvastatin pre-treatment, and PRBC-induced apoptosis was completely abolished at doses above 0.05 microM.

Effect of atorvastatin on P. falciparum-induced endothelial apoptosis: HLEC were pre-treated with or without different concentrations of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5 or 1 microM) for 24 h and then co-cultured for 4 hours with PRBC (hematocrit 5%, parasitemia 50%) or RBC (hematocrit 5%). The intracytoplasmic content of nucleosomes was assessed (n = 4), **P < 0.01; *** P < 0.005.

[SUBTITLE] Atorvastatin protects endothelial barrier integrity from P. falciparum-induced impairments [SUBSECTION]
PRBC were previously observed to trigger a specific signaling pathway that leads to endothelial barrier permeabilization [12]. Given the effects of atorvastatin pre-treatment on endothelial apoptosis, atorvastatin was tested for whether it also has beneficial effects on endothelial barrier integrity. HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then exposed to PRBC for 4 hours. After washing of the endothelial monolayer, cell barrier integrity was assessed by the standard Evans blue extrusion method. Figure 4 shows that co-culture of PRBC with endothelial cells induces endothelial barrier permeability compared to unexposed control cells or RBC-exposed cells. Pre-treatment of endothelial cells with 1 microM atorvastatin strongly decreased PRBC-induced barrier permeabilization. This result suggests that atorvastatin is also very effective in protecting endothelial barrier integrity from P. falciparum-induced impairments. However, it should be noted that the relevance of these quantitative data is limited to the standard Evans blue extrusion detection method.

Protective role of atorvastatin against P. falciparum-induced endothelial barrier impairment: HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then co-cultured with PRBC (hematocrit 0.5%/parasitemia 1%) or RBC (hematocrit 0.5%) for 4 h. The endothelial barrier permeability was estimated by Evans blue diffusion through the endothelial monolayer. Results are expressed relative to the OD of control HLEC. A, atorvastatin. (n = 6), *** P < 0.005, NS: not significant.
However, it should be noted that the relevance of these quantitative data is restricted by the detection method, i.e. standard Evans blue extrusion.\nProtective role of atorvastatin against P. falciparum-induced endothelial barrier impairment: HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then co-cultured with PRBC (hematocrit 0.5%/parasitemia 1%) or RBC (hematocrit 0.5%) for 4 h. The endothelial barrier permeability was estimated by Evans blue diffusion through an endothelial monolayer. Results are represented relative to the OD of the control HLEC cells. A, atorvastatin. (n = 6), *** P < 0.005, NS: non significant.\n[SUBTITLE] Atorvastatin increases Akt expression in endothelial cells exposed to P. falciparum [SUBSECTION] Among the pleiotropic effects of statins, their role in the activation of the cell survival protein kinase B (Akt) signaling pathway is well established [29,30]. Given the protective effect of atorvastatin against cell death in our endothelial-PRBC co-culture model, the effect of atorvastatin treatment on endothelial Akt expression was analysed. HLEC were treated for 24 hours with 0.05, 1 or 5 microM of atorvastatin. Figure 5A shows the Western blot data and the corresponding quantitative analysis by densitometry imaging. It shows that pre-treatment with increasing doses of atorvastatin up-regulates Akt expression in HLEC. Similarly, Akt expression was up-regulated in co-cultures of HLEC exposed to RBC or PRBC. The effect of atorvastatin on the endothelial up-regulation of Akt expression was confirmed by confocal microscopy imaging analyses (Figure 5B). These data suggest that atorvastatin increases Akt expression within endothelial cells.\nAtorvastatin promotes endothelial expression of Akt: A. Western blot.
HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then cultured or co-cultured with PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Cells were harvested, washed, protein extracts separated by SDS-PAGE, transferred to nitrocellulose and immunoblotted with anti-Akt and anti-tubulin antibodies, the latter as internal control. Molecular mass of the corresponding proteins is shown. The Western blot data were then quantitatively analysed by densitometry imaging (ratio Akt/Tubulin). Similar results were obtained in three independent experiments. B. Confocal microscopy. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then exposed to PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Fixed cells were stained with anti-Akt antibody and analysed by confocal microscopy. Similar results were obtained in 3 independent experiments. (Bar: 20 μm).\nThe data presented here demonstrate that atorvastatin can be used to reduce the cytoadherence of P. falciparum on endothelial cells, which is one of key events, along with the inflammatory burst, involved in the pathogenesis of human severe malaria cases. Atorvastatin can decrease the expression of adhesion molecules and also prevents the PRBC-induced overexpression of adhesion molecules. Moreover, atorvastatin shows an ability to strongly protect endothelial cell against P. falciparum-induced collateral damages, cell apoptosis and endothelial barrier permeabilization.\nThe observed cytoprotection effect on endothelial cells may mainly be due to the decrease of PRBC adhesion through the down-regulation of adhesion molecules, as PRBC contact was previously shown to specifically trigger pro-inflammatory and pro-apoptotic signaling cascades [9]. Moreover, atorvastatin stimulates Akt expression within HLEC, which is the major actor of anti-apoptotic and endothelial cell survival pathways [29]. Endothelial protection of atorvastatin appears then to be 'pleiotropic' via anti-adhesive, anti-inflammatory and anti-apoptotic synergistic effects. Indeed, PRBC cytoadherence is known to activate many deleterious cascades such as oxidative stress, via mitochondrial ROS (radical oxygen species) production, rho-kinase and NF-kappa B-dependent signaling to induce 'hyperadhesion' phenomenon, endothelial activation and apoptosis [8,11,12,14,28,31]. When phosphorylated, Akt inactivates pro-apoptotic elements such as Bad, preventing mitochondrial ROS release, and caspase 9 [32]. Statins can also increase the bioavailability and the physiological production of endothelial nitric oxide (NO) via Akt phosphorylation and downstream effector endothelial NO-synthase (eNOS), and via the stabilization of eNOS mRNA [30,33]. NO has many crucial functions in vascular endothelium particularly in maintaining permanently anti-inflammatory and anti-adhesive endothelium homeostasis. NO-donors have currently been under investigation in cerebral malaria adjunctive emergency treatments, the efficiency of which remains controversial [34]. Indeed, the deleterious scavenging of NO by ROS has not been considered with antioxidant co-treatment yet. 
As regard their molecular 'pleiotropic' effects, particularly with the antioxidant and the NO bioavailability promotion, statins may then be highly relevant to 'shield' the endothelium against both PRBC induced impairments and uncontrolled host responses in severe malaria cases.\nThere is increasing interest of using statins as complementary therapeutic to anti-plasmodial molecules. However, these studies with statins were performed on infected erythrocytes only and have shown interestingly to have a direct inhibitory effect on P. falciparum growth in vitro, using high doses ranging from 120 to 240 microM [35], from 5 to 40 microM [36] or from 2.5 to 10.8 microM [37]. Atorvastatin was shown to be 10 times more active against P. falciparum compared to mevastatin, simvastatin, lovastatin, fluvastatin or pravastatin [36], suggesting that atorvastatin is the best candidate among statins for therapy prospects in malaria treatment. Simvastatin was also used in mouse model of cerebral malaria (C57BL/6 mice infected with Plasmodium berghei), but failed to decrease significantly parasitemia or to prevent death [38,39]. Atorvastatin may have a beneficial effect on mice survival, only when administered in combination with artesunate [40]. Indeed, atorvastatin was recently shown to reduce the IC50 of the anti-plasmodial activity of quinine, mefloquine and dihydroartemisinin [41-43]. However, the pathogenesis of cerebral malaria in murine model relies solely on the inflammatory response unlike the pathogenesis of human cerebral malaria. Indeed, the cytoadherence phenomenon does not exist in P. berghei mice infections [44,45]. Atorvastatin was used in pre-treatment to specifically evaluate the potential effects on endothelial cells. Endothelial damage is almost universal in severe malaria pathogenesis and statins have proved their efficiency in clinical use for 20 years against cardiovascular diseases [29]. Interestingly, significant endothelial protection effects were obtained with doses as low as 100 nanoM, 500 nanoM or 1 microM.\nAlthough the 'pleiotropic' effects of statins are well documented, the precise molecular mechanism by which the PI3K/Akt/eNOS pathway is activated remains unknown [29,46]. Since the cholesterol lowering property of statins by itself ameliorates the endothelial function, statins beneficial effects were for long attributed to the inhibition of HMG-CoA reductase and the cholesterol synthesis pathway. However, the endothelial function improvement was shown to occur earlier than cholesterol lowering and independently from the cholesterol rate in blood serum [47,48]. Statins are inhibiting the synthesis of mevalonate, the second biochemical reaction in the cholesterol synthesis pathway. Mevalonate is also an important precursor of isoprenoid intermediates such as farnesyl pyrophosphate (FPP) or geranylgeranyl pyrophosphate (GGPP). Isoprenoids serve as cell surface inner membrane-anchoring molecules to functionalize Rho-GTPases, such as Rho (GGPP) and Ras (FPP) [49,50], key signaling molecules, after geranylation and farnesylation post-translational modifications. Rho-GTPases act as molecular switches (active and inactive when bound to GTP and GDP, respectively), and are the first intermediates of the intracellular signaling engagement of these receptors with the downstream effector Rho-kinase. We have previously demonstrated that P. falciparum-induced endothelial collateral damages are precisely dependent on Rho-kinase signaling [12]. 
Rho-kinase pathway by many aspects is antagonist to PI3K/Akt/eNOS endothelial cell survival pathway [46]. In fact, strong endothelial protective effects, obtained with atorvastatin against P. falciparum, may likely be related to both rho-kinase pathway inhibition and Akt cell survival pathway promotion. However, it was previously reported that the rho-kinase inhibition by fasudil could not decrease significantly the number of parasites and P. falciparum cytoadherence [12,51].\nAmong the pleiotropic effects of statins, their role in the activation of cell survival protein kinase B or Akt signaling pathway is well established [29,30]. Given the protective effect of atorvastatin against cell death in our endothelial-PRBC co-culture model, the effect of atorvastatin treatment on endothelial Akt expression was analysed. HLEC were treated 24 hours with 0.05, 1 or 5 microM of atorvastatin. Figure 5A shows Western blot data and its respective quantitative analysis by densitometry imaging. It shows that pre-treatment of increasing doses of atorvastatin up-regulates Akt expression in HLEC. Similarly, Akt expression was up-regulated in co-cultures of HLEC exposed to RBC or PRBC. The effect of atorvastatin on the endothelial up-regulation of Akt expression was confirmed by confocal microscopy imaging analyses (Figure 5B). These data suggest that atorvastatin increases the Akt expression within endothelial cells.\nAtorvastatin promotes endothelial expression of Akt: A. Western blot. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then cultured or co-cultured with PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Cells were harvested, washed, protein extracts separated by SDS-PAGE, transferred to nitrocellulose and immunoblotted with anti-Akt and anti-tubulin antibodies, the latter as internal control. Molecular mass of the corresponding proteins is shown. The Western blot data were then quantitatively analysed by densitometry imaging (ratio Akt/Tubulin). Similar results were obtained in three independent experiments. B. Confocal microscopy. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then exposed to PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Fixed cells were stained with anti-Akt antibody and analysed by confocal microscopy. Similar results were obtained in 3 independent experiments. (Bar: 20 μm).\nThe data presented here demonstrate that atorvastatin can be used to reduce the cytoadherence of P. falciparum on endothelial cells, which is one of key events, along with the inflammatory burst, involved in the pathogenesis of human severe malaria cases. Atorvastatin can decrease the expression of adhesion molecules and also prevents the PRBC-induced overexpression of adhesion molecules. Moreover, atorvastatin shows an ability to strongly protect endothelial cell against P. falciparum-induced collateral damages, cell apoptosis and endothelial barrier permeabilization.\nThe observed cytoprotection effect on endothelial cells may mainly be due to the decrease of PRBC adhesion through the down-regulation of adhesion molecules, as PRBC contact was previously shown to specifically trigger pro-inflammatory and pro-apoptotic signaling cascades [9]. Moreover, atorvastatin stimulates Akt expression within HLEC, which is the major actor of anti-apoptotic and endothelial cell survival pathways [29]. 
Endothelial protection of atorvastatin appears then to be 'pleiotropic' via anti-adhesive, anti-inflammatory and anti-apoptotic synergistic effects. Indeed, PRBC cytoadherence is known to activate many deleterious cascades such as oxidative stress, via mitochondrial ROS (radical oxygen species) production, rho-kinase and NF-kappa B-dependent signaling to induce 'hyperadhesion' phenomenon, endothelial activation and apoptosis [8,11,12,14,28,31]. When phosphorylated, Akt inactivates pro-apoptotic elements such as Bad, preventing mitochondrial ROS release, and caspase 9 [32]. Statins can also increase the bioavailability and the physiological production of endothelial nitric oxide (NO) via Akt phosphorylation and downstream effector endothelial NO-synthase (eNOS), and via the stabilization of eNOS mRNA [30,33]. NO has many crucial functions in vascular endothelium particularly in maintaining permanently anti-inflammatory and anti-adhesive endothelium homeostasis. NO-donors have currently been under investigation in cerebral malaria adjunctive emergency treatments, the efficiency of which remains controversial [34]. Indeed, the deleterious scavenging of NO by ROS has not been considered with antioxidant co-treatment yet. As regard their molecular 'pleiotropic' effects, particularly with the antioxidant and the NO bioavailability promotion, statins may then be highly relevant to 'shield' the endothelium against both PRBC induced impairments and uncontrolled host responses in severe malaria cases.\nThere is increasing interest of using statins as complementary therapeutic to anti-plasmodial molecules. However, these studies with statins were performed on infected erythrocytes only and have shown interestingly to have a direct inhibitory effect on P. falciparum growth in vitro, using high doses ranging from 120 to 240 microM [35], from 5 to 40 microM [36] or from 2.5 to 10.8 microM [37]. Atorvastatin was shown to be 10 times more active against P. falciparum compared to mevastatin, simvastatin, lovastatin, fluvastatin or pravastatin [36], suggesting that atorvastatin is the best candidate among statins for therapy prospects in malaria treatment. Simvastatin was also used in mouse model of cerebral malaria (C57BL/6 mice infected with Plasmodium berghei), but failed to decrease significantly parasitemia or to prevent death [38,39]. Atorvastatin may have a beneficial effect on mice survival, only when administered in combination with artesunate [40]. Indeed, atorvastatin was recently shown to reduce the IC50 of the anti-plasmodial activity of quinine, mefloquine and dihydroartemisinin [41-43]. However, the pathogenesis of cerebral malaria in murine model relies solely on the inflammatory response unlike the pathogenesis of human cerebral malaria. Indeed, the cytoadherence phenomenon does not exist in P. berghei mice infections [44,45]. Atorvastatin was used in pre-treatment to specifically evaluate the potential effects on endothelial cells. Endothelial damage is almost universal in severe malaria pathogenesis and statins have proved their efficiency in clinical use for 20 years against cardiovascular diseases [29]. Interestingly, significant endothelial protection effects were obtained with doses as low as 100 nanoM, 500 nanoM or 1 microM.\nAlthough the 'pleiotropic' effects of statins are well documented, the precise molecular mechanism by which the PI3K/Akt/eNOS pathway is activated remains unknown [29,46]. 
Since the cholesterol lowering property of statins by itself ameliorates the endothelial function, statins beneficial effects were for long attributed to the inhibition of HMG-CoA reductase and the cholesterol synthesis pathway. However, the endothelial function improvement was shown to occur earlier than cholesterol lowering and independently from the cholesterol rate in blood serum [47,48]. Statins are inhibiting the synthesis of mevalonate, the second biochemical reaction in the cholesterol synthesis pathway. Mevalonate is also an important precursor of isoprenoid intermediates such as farnesyl pyrophosphate (FPP) or geranylgeranyl pyrophosphate (GGPP). Isoprenoids serve as cell surface inner membrane-anchoring molecules to functionalize Rho-GTPases, such as Rho (GGPP) and Ras (FPP) [49,50], key signaling molecules, after geranylation and farnesylation post-translational modifications. Rho-GTPases act as molecular switches (active and inactive when bound to GTP and GDP, respectively), and are the first intermediates of the intracellular signaling engagement of these receptors with the downstream effector Rho-kinase. We have previously demonstrated that P. falciparum-induced endothelial collateral damages are precisely dependent on Rho-kinase signaling [12]. Rho-kinase pathway by many aspects is antagonist to PI3K/Akt/eNOS endothelial cell survival pathway [46]. In fact, strong endothelial protective effects, obtained with atorvastatin against P. falciparum, may likely be related to both rho-kinase pathway inhibition and Akt cell survival pathway promotion. However, it was previously reported that the rho-kinase inhibition by fasudil could not decrease significantly the number of parasites and P. falciparum cytoadherence [12,51].", "In addition to their anti-cholesterol function, statins are also involved in expression regulation of some adhesion molecules in lymphocytes, monocytes and endothelial cells. The role of statin atorvastatin in the control of expression of CD36, ICAM-1, VCAM-1 and P-Selectin, four adhesion molecules that are involved in tethering, rolling and adhesion processes, was analysed in a co-culture of endothelial cells and P. falciparum parasitized red blood cells (PRBC). Atorvastatin doses, ranging from 0.01 to 1 microM, showed no toxic effect on human lung endothelial cells (HLEC) in vitro, doses ranging from 1.5 microM to 5 microM showed low toxic effects, whereas atorvastatin doses above 6 microM showed mitochondrial toxic effects on HLEC (MTT mitochondrial-based cell viability assay). HLEC were pre-treated 24 hours with 1 microM of atorvastatin before being exposed to PRBC or control RBC during four hours. Total RNA of EC was extracted and adhesion molecules expression was assessed by qPCR. On the one hand, our data show that PRBC increase the expression of endothelial cell (EC) adhesion molecule ICAM-1 and P-selectin. Indeed, PRBC cytoadherence was previously shown to increase cytoadherence itself through increase of EC adhesion molecules expression ('hyperadhesion') [28], contributing to the microcirculation perturbation in severe malaria pathology. More importantly, the data clearly showed that the P. 
falciparum-induced increase of ICAM-1 and P-Selectin expression is suppressed by atorvastatin pre-treatment (Figure 1).\nEffect of atorvastatin on endothelial expression of adhesion molecules: HLEC were cultured alone or co-cultured with RBC (hematocrit 5%), or PRBC for 4 h (parasitemia 50%, hematocrit 5%) after 24 hours of pretreatment or not with 1 microM of atorvastatin (A, 1 microM). Then, cells were washed and total RNA was isolated. qPCR was done using primers for amplification of human ICAM-1, VCAM-1, P-selectin and CD36. As internal control, we used HPRT. The expression is shown as fold induction compared to control non-treated cells (n = 6), **P < 0.01; *** P < 0.005.", "Given the effect of atorvastatin endothelial treatment on the expression of adhesion molecules, its effect on PRBC ability to adhere on endothelial cells was analysed (Figure 2). HLEC were pre-incubated 24 hours with various doses of atorvastatin (0.01, 0.025, 0.05, 0.1, 0.5, 1 microM), before being exposed to PRBC ad cytoadherence assay. The data showed that PRBC cytoadherence is significantly decreased by doses that are higher than 0.5 microM of atorvastatin. The cytoadherence decreased to 44% ± 9, for EC pre-incubated with 1 microM of atorvastatin compared to untreated controls. Atorvastatin is thus capable of interfering efficiently with the adhesion of P. falciparum infected erythrocytes on endothelial cells.\nAtorvastatin decreasing effect on P. falciparum endothelial cytoadherence: HLECs were pre-treated with various doses of atorvastatin ((0.01, 0.025, 0.05, 0.1, 0.5, 1 microM)) for 24 hours before addition of RBCs or PRBCs at a haematocrit of 5% and a parasitemia of 50% for 1 hour. After removal of unbound erythrocytes, cells were stained with Giemsa and the number of adhered pRBC to 700 HLEC was counted. Data are the mean ±SD of pRBC cytodherence (n = 4), **P < 0.01.\nPRBC adhesion was previously shown to specifically trigger endothelial apoptosis [9]. The effect of atorvastatin (0.01-1 microM) on endothelial apoptosis was tested in the co-culture model. HLEC were pre-treated 24 hours with atorvastatin before being exposed for four hours to PRBC or control uninfected RBC. Endothelial apoptosis was then quantitatively assessed using a method to determine HLEC intracytoplasmic content of nucleosomes, a well known late apoptosis marker (Figure 3). The data showed decreasing endothelial apoptosis correlated with increasing doses of atorvastatin pre-treatment. PRBC-induced apoptosis was found completely abolished with doses higher than 0.05 microM.\nEffect of atorvastatin on P. falciparum-induced endothelial apoptosis: HLEC cells were pre-treated with or without different concentrations of atorvastatin ((0.01, 0.025, 0.05, 0.1, 0.5, 1 microM)) for 24 h and then co-cultured 4 hours with PRBC (hematocrit 5% parasitemia 50%) or RBC (hematocrit 5%). Intracytoplasmic content of nucleosomes was assessed (n = 4), **P < 0.01; *** P < 0.005.", "PRBC were previously observed to trigger specific signaling pathway that leads to endothelial barrier permeabilization [12]. Given the effects of atorvastatin pre-treatments on endothelial apoptosis, atorvastatin was tested whether it could have also beneficial properties on endothelial barrier integrity. HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then with PRBC for 4 hours. After washing of endothelial monolayer, the cell barrier integrity was assessed by the standard Evans blue extrusion method. 
Figure 4 shows that co-culture of PRBC with endothelial cells induces endothelial barrier permeability, compared to unexposed control cells or RBC exposed cells. Pre-treatment of endothelial cells with atorvastatin 1 microM could strongly decrease PRBC-inducced barrier permeabilization. This result suggests that atorvastatin is also very efficient to protect endothelial barrier integrity from P. falciparum-induced impairments. However, it should be noted that the relevance of these quantitative data must be restricted to the detection method of standard Evans blue extrusion.\nProtective role of atorvastatin against P. falciparum-induced endothelial barrier impairment: HLEC were pre-treated with 1 microM of atorvastatin for 24 h and then co-cultured with PRBC (hematocrit 0.5%/parasitemia 1%) or RBC (hematocrit 0.5%) for 4 h. The endothelial barrier permeability was estimated by Evan Blue diffusion through an endothelial monolayer. Results are represented related to the OD of the control HLEC cells. A, atorvastatin. (n = 6), *** P < 0.005, NS: non significant.", "Among the pleiotropic effects of statins, their role in the activation of cell survival protein kinase B or Akt signaling pathway is well established [29,30]. Given the protective effect of atorvastatin against cell death in our endothelial-PRBC co-culture model, the effect of atorvastatin treatment on endothelial Akt expression was analysed. HLEC were treated 24 hours with 0.05, 1 or 5 microM of atorvastatin. Figure 5A shows Western blot data and its respective quantitative analysis by densitometry imaging. It shows that pre-treatment of increasing doses of atorvastatin up-regulates Akt expression in HLEC. Similarly, Akt expression was up-regulated in co-cultures of HLEC exposed to RBC or PRBC. The effect of atorvastatin on the endothelial up-regulation of Akt expression was confirmed by confocal microscopy imaging analyses (Figure 5B). These data suggest that atorvastatin increases the Akt expression within endothelial cells.\nAtorvastatin promotes endothelial expression of Akt: A. Western blot. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then cultured or co-cultured with PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Cells were harvested, washed, protein extracts separated by SDS-PAGE, transferred to nitrocellulose and immunoblotted with anti-Akt and anti-tubulin antibodies, the latter as internal control. Molecular mass of the corresponding proteins is shown. The Western blot data were then quantitatively analysed by densitometry imaging (ratio Akt/Tubulin). Similar results were obtained in three independent experiments. B. Confocal microscopy. HLEC were pre-treated with different concentrations of atorvastatin (0.05, 1 or 5 microM) for 24 h and then exposed to PRBC (at 5% of hematocrit and 50% of parasitemia) or RBC (at 5% of hematocrit) for 4 h. Fixed cells were stained with anti-Akt antibody and analysed by confocal microscopy. Similar results were obtained in 3 independent experiments. (Bar: 20 μm).\nThe data presented here demonstrate that atorvastatin can be used to reduce the cytoadherence of P. falciparum on endothelial cells, which is one of key events, along with the inflammatory burst, involved in the pathogenesis of human severe malaria cases. Atorvastatin can decrease the expression of adhesion molecules and also prevents the PRBC-induced overexpression of adhesion molecules. 
Moreover, atorvastatin shows an ability to strongly protect endothelial cell against P. falciparum-induced collateral damages, cell apoptosis and endothelial barrier permeabilization.\nThe observed cytoprotection effect on endothelial cells may mainly be due to the decrease of PRBC adhesion through the down-regulation of adhesion molecules, as PRBC contact was previously shown to specifically trigger pro-inflammatory and pro-apoptotic signaling cascades [9]. Moreover, atorvastatin stimulates Akt expression within HLEC, which is the major actor of anti-apoptotic and endothelial cell survival pathways [29]. Endothelial protection of atorvastatin appears then to be 'pleiotropic' via anti-adhesive, anti-inflammatory and anti-apoptotic synergistic effects. Indeed, PRBC cytoadherence is known to activate many deleterious cascades such as oxidative stress, via mitochondrial ROS (radical oxygen species) production, rho-kinase and NF-kappa B-dependent signaling to induce 'hyperadhesion' phenomenon, endothelial activation and apoptosis [8,11,12,14,28,31]. When phosphorylated, Akt inactivates pro-apoptotic elements such as Bad, preventing mitochondrial ROS release, and caspase 9 [32]. Statins can also increase the bioavailability and the physiological production of endothelial nitric oxide (NO) via Akt phosphorylation and downstream effector endothelial NO-synthase (eNOS), and via the stabilization of eNOS mRNA [30,33]. NO has many crucial functions in vascular endothelium particularly in maintaining permanently anti-inflammatory and anti-adhesive endothelium homeostasis. NO-donors have currently been under investigation in cerebral malaria adjunctive emergency treatments, the efficiency of which remains controversial [34]. Indeed, the deleterious scavenging of NO by ROS has not been considered with antioxidant co-treatment yet. As regard their molecular 'pleiotropic' effects, particularly with the antioxidant and the NO bioavailability promotion, statins may then be highly relevant to 'shield' the endothelium against both PRBC induced impairments and uncontrolled host responses in severe malaria cases.\nThere is increasing interest of using statins as complementary therapeutic to anti-plasmodial molecules. However, these studies with statins were performed on infected erythrocytes only and have shown interestingly to have a direct inhibitory effect on P. falciparum growth in vitro, using high doses ranging from 120 to 240 microM [35], from 5 to 40 microM [36] or from 2.5 to 10.8 microM [37]. Atorvastatin was shown to be 10 times more active against P. falciparum compared to mevastatin, simvastatin, lovastatin, fluvastatin or pravastatin [36], suggesting that atorvastatin is the best candidate among statins for therapy prospects in malaria treatment. Simvastatin was also used in mouse model of cerebral malaria (C57BL/6 mice infected with Plasmodium berghei), but failed to decrease significantly parasitemia or to prevent death [38,39]. Atorvastatin may have a beneficial effect on mice survival, only when administered in combination with artesunate [40]. Indeed, atorvastatin was recently shown to reduce the IC50 of the anti-plasmodial activity of quinine, mefloquine and dihydroartemisinin [41-43]. However, the pathogenesis of cerebral malaria in murine model relies solely on the inflammatory response unlike the pathogenesis of human cerebral malaria. Indeed, the cytoadherence phenomenon does not exist in P. berghei mice infections [44,45]. 
Atorvastatin was used in pre-treatment to specifically evaluate the potential effects on endothelial cells. Endothelial damage is almost universal in severe malaria pathogenesis and statins have proved their efficiency in clinical use for 20 years against cardiovascular diseases [29]. Interestingly, significant endothelial protection effects were obtained with doses as low as 100 nanoM, 500 nanoM or 1 microM.\nAlthough the 'pleiotropic' effects of statins are well documented, the precise molecular mechanism by which the PI3K/Akt/eNOS pathway is activated remains unknown [29,46]. Since the cholesterol lowering property of statins by itself ameliorates the endothelial function, statins beneficial effects were for long attributed to the inhibition of HMG-CoA reductase and the cholesterol synthesis pathway. However, the endothelial function improvement was shown to occur earlier than cholesterol lowering and independently from the cholesterol rate in blood serum [47,48]. Statins are inhibiting the synthesis of mevalonate, the second biochemical reaction in the cholesterol synthesis pathway. Mevalonate is also an important precursor of isoprenoid intermediates such as farnesyl pyrophosphate (FPP) or geranylgeranyl pyrophosphate (GGPP). Isoprenoids serve as cell surface inner membrane-anchoring molecules to functionalize Rho-GTPases, such as Rho (GGPP) and Ras (FPP) [49,50], key signaling molecules, after geranylation and farnesylation post-translational modifications. Rho-GTPases act as molecular switches (active and inactive when bound to GTP and GDP, respectively), and are the first intermediates of the intracellular signaling engagement of these receptors with the downstream effector Rho-kinase. We have previously demonstrated that P. falciparum-induced endothelial collateral damages are precisely dependent on Rho-kinase signaling [12]. Rho-kinase pathway by many aspects is antagonist to PI3K/Akt/eNOS endothelial cell survival pathway [46]. In fact, strong endothelial protective effects, obtained with atorvastatin against P. falciparum, may likely be related to both rho-kinase pathway inhibition and Akt cell survival pathway promotion. However, it was previously reported that the rho-kinase inhibition by fasudil could not decrease significantly the number of parasites and P. falciparum cytoadherence [12,51].", "In conclusion, data from this report demonstrate that HMG-CoA reductase inhibitor atorvastatin prevents P. faciparum cytoadherence and confers the ability to the endothelium to resist against consequent cellular damages. In conclusion, this study suggests that atorvastatin may be a good candidate for further studies or clinical trials on the use of statins in malaria treatment. Moreover, the safety of statins on malaria treatment needs to be addressed, since these molecules were not originally designed for these therapeutic approaches.", "EC: endothelial cell; RBC: red blood cells; PRBC: parasitized red blood cells; HLEC: human lung endothelial cells.", "The authors declare that they have no competing interests.", "ZT contributed to write the manuscript, to design and to conduct the experiments. PP contributed to design the experiments. NN, IA and SA contributed to perform some experiments. FS made substantial constructive advice in the initial design of the project. AR and DM contributed to write the manuscript and to design the experiments. All authors have read and approved the final version of the manuscript." ]
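The fold-induction values reported above for the adhesion-molecule qPCR (ICAM-1, VCAM-1, P-selectin and CD36 normalized to HPRT, relative to untreated control cells) are consistent with the standard 2^-ddCt calculation, although the exact computation is not spelled out in the text; a minimal sketch under that assumption, with placeholder Ct values rather than study data:

```python
# Minimal sketch of a 2^-ddCt fold-induction calculation for the qPCR readout
# described above (target gene normalized to HPRT, relative to untreated cells).
# The Ct values below are placeholders, not data from the study.

def fold_induction(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression of a target gene versus untreated control cells."""
    d_ct_sample = ct_target - ct_ref              # normalize sample to HPRT
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalize control to HPRT
    dd_ct = d_ct_sample - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical example: ICAM-1 in PRBC-exposed HLEC vs untreated HLEC.
print(fold_induction(ct_target=22.1, ct_ref=20.0,
                     ct_target_ctrl=24.3, ct_ref_ctrl=20.1))  # ~4.3-fold
```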
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Beneficial effect of Mentha suaveolens essential oil in the treatment of vaginal candidiasis assessed by real-time monitoring of infection.
21356078
Vaginal candidiasis is a frequent and common distressing disease affecting up to 75% of the women of fertile age; most of these women have recurrent episodes. Essential oils from aromatic plants have been shown to have antimicrobial and antifungal activities. This study was aimed at assessing the anti-fungal activity of essential oil from Mentha suaveolens (EOMS) in an experimental infection of vaginal candidiasis.
BACKGROUND
The in vitro and in vivo activity of EOMS was assessed. The in vitro activity was evaluated under standard CLSI methods, and the in vivo analysis was carried out by exploiting a novel, non-invasive model of vaginal candidiasis in mice based on an in vivo imaging technique. Differences between essential oil treated and saline treated mice were evaluated by the non-parametric Mann-Whitney U-test. Viable count data from a time kill assay and yeast and hyphae survival test were compared using the Student's t-test (two-tailed).
METHODS
Our main findings were: i) EOMS shows potent candidastatic and candidacidal activity in an in vitro experimental system; ii) EOMS gives a degree of protection against vaginal candidiasis in an in vivo experimental system.
RESULTS
This study shows for the first time that the essential oil of a Moroccan plant Mentha suaveolens is candidastatic and candidacidal in vitro, and has a degree of anticandidal activity in a model of vaginal infection, as demonstrated in an in vivo monitoring imaging system. We conclude that our findings lay the ground for further, more extensive investigations to identify the active EOMS component(s), promising in the therapeutically problematic setting of chronic vaginal candidiasis in humans.
CONCLUSIONS
[ "Animals", "Antifungal Agents", "Candida albicans", "Candidiasis, Vulvovaginal", "Disease Models, Animal", "Female", "Mentha", "Mice", "Mice, Inbred Strains", "Microbial Sensitivity Tests", "Oils, Volatile", "Phytotherapy", "Plant Extracts", "Statistics, Nonparametric" ]
3056850
null
null
Methods
[SUBTITLE] Essential oils [SUBSECTION] Mentha suaveolens essential oil was kindly provided by the Department of Chemistry and Drug Technologies, University of Rome "La Sapienza", Italy. It was obtained from wild-type plants grown in Tarquinia forests located around 60 miles from Rome. The oil was extracted by four-hour hydrodistillation of the leaves using a Clevenger-type apparatus as previously described [15], then analyzed for chemical composition by gas chromatography and mass spectrometry (DMePe BETA PS086, 0.25 mm film on a 25 m column, diameter of 0.25 mm, operating at 220°C and eluting with helium). Compounds were identified by the application of the NIST 08 Mass Spectral Library. Analysis revealed that piperitenone oxide constitutes 90% of EOMS. Limonene and 1,8-cineole were also present, among other minor constituents. Essential oils of tea tree (Melaleuca alternifolia) (TTO) and jasmine oil (Jasminum grandiflorum) (JO) also used in this research were commercial oils purchased from Named (Lesmo, Italy) and Erboristeria Magentina (Torino, Italy), respectively. They were obtained by steam distillation from leaves and young branches of tea tree, and from flowers of jasmine. TTO is pure, extracted without additives, and was used as a positive control because of its documented antifungal activity [16,17], while jasmine oil, which was shown to be inactive against fungal growth, was used as a negative control [18]. Fluconazole was obtained from Sigma-Aldrich (Germany).
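The composition figures above (around 90% piperitenone oxide, with limonene and 1,8-cineole among the minor constituents) come from GC-MS; the text does not state how the percentages were derived, but simple peak-area normalization, sketched here with invented areas, yields numbers of this kind:

```python
# Hypothetical illustration of percent composition by peak-area normalization.
# The areas below are invented; the study reports only the resulting percentages.
peak_areas = {
    "piperitenone oxide": 9.0e6,
    "limonene": 4.0e5,
    "1,8-cineole": 3.0e5,
    "other minor constituents": 3.0e5,
}
total = sum(peak_areas.values())
for compound, area in peak_areas.items():
    print(f"{compound}: {100 * area / total:.1f}%")   # piperitenone oxide: 90.0%
```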
[SUBTITLE] Microorganisms [SUBSECTION] Different strains of Candida albicans were used in the study: four clinical isolates from AIDS patients (AIDS68, AIDS6, AIDS37 and AIDS126); CO23, isolated from a subject with vulvo-vaginal candidiasis and susceptible to micafungin and fluconazole, together with the drug-resistant strains CO23RFK (micafungin-resistant) and CO23RFLU (fluconazole-resistant) [19]; CA2, an echinocandin-resistant, non-germinative strain that grows as a pure yeast form at 28-37°C in conventional mycologic media [20]; GR5, isolated from a woman with recurrent vaginal candidiasis; 3153, intrinsically resistant to fluconazole; ATCC10231 and ATCC24433. C. albicans CA1398 carrying the ACT1p-gLUC59 fusion (C. albicans gLUC59) or C. albicans CA1398 that did not express gLUC59 (control strain) were used in the models of vaginal Candida infections [14]. For experimental infections, cells from stock cultures in YPD agar (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar, all w/v) with 50 μg/ml chloramphenicol were grown in YPD broth (1% yeast extract, 2% peptone, 2% glucose, all w/v) at room temperature for 24 h, then harvested by centrifugation, washed, counted in a haemocytometer, and resuspended to the desired concentration in sterile physiological saline. In order to examine the effect of the oil on the mycelial form of Candida, yeasts were grown for 4 h in RPMI 1640 plus 10% FBS at 37°C, then hyphae were washed and incubated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h at 37°C. Yeasts for infection were harvested from overnight cultures on YPD agar plates and adjusted to a concentration of 10⁹ cells/ml in sterile physiological saline.
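Adjusting a counted suspension to a working concentration (for example the 10⁹ cells/ml infection inoculum above, or the much more dilute MIC inoculum below) is plain dilution arithmetic; a small sketch with hypothetical numbers:

```python
# Dilution arithmetic for adjusting a counted yeast suspension to a target
# concentration (all numbers hypothetical, not taken from the study).

def dilution(stock_cells_per_ml, target_cells_per_ml, final_volume_ml):
    """Return (ml of stock suspension, ml of sterile saline) for the target."""
    v_stock = final_volume_ml * target_cells_per_ml / stock_cells_per_ml
    return v_stock, final_volume_ml - v_stock

# A washed suspension counted at 2.0e8 cells/ml, adjusted towards a MIC-type
# inoculum of ~2.5e3 cells/ml in 10 ml; a factor this large would be reached
# through serial dilutions in practice.
v_stock, v_saline = dilution(2.0e8, 2.5e3, 10.0)
print(f"stock: {v_stock * 1000:.3f} µl, saline: {v_saline:.3f} ml")
```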
[SUBTITLE] Minimal Inhibitory Concentration (MIC) assay [SUBSECTION] The Minimal Inhibitory Concentration (MIC) was determined by the micro-broth dilution method according to the Clinical and Laboratory Standards Institute/National Committee for Clinical Laboratory Standards (CLSI/NCCLS) Approved Standard M27-A3, 2008 [21]. A fluconazole 0.5 g/L solution was prepared by dissolving the agent in endotoxin-free water. Solutions of essential oils (100 g/L) were prepared in RPMI 1640. Briefly, to determine the MIC of EOMS, TTO, JO or fluconazole, RPMI-1640 supplemented with MOPS at pH 7 was used. EOMS, TTO and JO were diluted in RPMI-1640 supplemented with Tween 80 (final concentration of 0.001% v/v). The dilutions, ranging from 0.01219 to 12.48 g/L of the essential oils, were prepared in 96-well plates. The inoculum size was about 2.5 × 10³ cells/ml. The plates were incubated at 30°C for 24-48 h. To determine hyphae survival, C. albicans cells were first grown for 4 h in RPMI supplemented with 10% FBS and then treated with the different essential oils. [SUBTITLE] Minimal Fungicidal Concentration (MFC) assay [SUBSECTION] The Minimal Fungicidal Concentration (MFC) was determined as the lowest concentration of fluconazole or essential oils at which no microbial growth was observed. For the MFC determination, Sabouraud dextrose agar plates were seeded with 10 μl of cell suspensions taken from the wells of the MIC assay plates where no cell growth was observed. These plates were incubated at 30°C for 24-48 h and colony forming unit (CFU) growth was evaluated. [SUBTITLE] Time killing [SUBSECTION] To confirm the fungicidal activity of EOMS, time-kill procedures were performed as described by Klepser [22]. Cells sub-cultured in YPD at 28°C for 24 h were centrifuged, washed and resuspended at a concentration of 2.5 × 10⁵ cells/ml in RPMI supplemented with EOMS or TTO and incubated at 28°C. Essential oil concentrations used in the test were equivalent to 1, 2, 4, and 8 times the MIC.
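The MIC titration above is a two-fold series (12.48 g/L halved down to 0.01219 g/L), and the time-kill concentrations are simple multiples of the measured MIC; a short sketch of both, using a hypothetical MIC value:

```python
# Sketch of the two-fold dilution series used for the MIC titration and of the
# 1x/2x/4x/8x MIC concentrations used in the time-kill assay.
# The example MIC below is a placeholder, not a result from the study.

top_g_per_l = 12.48
series = [top_g_per_l / 2 ** i for i in range(11)]    # 12.48 ... ~0.01219 g/L
print([round(c, 5) for c in series])

example_mic_g_per_l = 0.39
timekill_concentrations = [m * example_mic_g_per_l for m in (1, 2, 4, 8)]
print(timekill_concentrations)                        # [0.39, 0.78, 1.56, 3.12] g/L
```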
At predetermined time points (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours) of incubation, 100 μl aliquots were removed from the test solution and tenfold serial dilutions were performed. A 100 μl aliquot from each dilution was spread on the surface of Sabouraud dextrose agar plates and incubated at 37°C for 48 h for determination of CFU/ml. [SUBTITLE] Cell lines [SUBSECTION] Monomac-6, a human tumour cell line initially obtained from the peripheral blood of a 60-year-old man with acute monocytic leukaemia, and L929, a fibroblast-like cell line cloned from strain L (the parent strain was derived from normal subcutaneous areolar and adipose tissue of a male C3H/An mouse), were grown in a humidified atmosphere containing 5% CO2 at 37°C. The culture medium consisted of RPMI 1640 with glutamine, 10% FBS (foetal bovine serum) and antibiotics. Every three or four days the cultures were split. [SUBTITLE] Cytotoxicity assay [SUBSECTION] Cytotoxicity was tested by determination of the cellular ATP level with the ViaLight® Plus Kit (Lonza). The method is based upon the bioluminescent measurement of ATP, which is present in all metabolically active cells. The bioluminescent method utilizes an enzyme, luciferase, which catalyses the formation of light from ATP and luciferin. The emitted light intensity is linearly related to the ATP concentration and is measured using a luminometer. To perform cytotoxicity tests, cells were recovered, counted and adjusted to a concentration of 10⁶/ml. The examinations were carried out for the essential oils (EOMS, TTO and JO) and the control (untreated cells). Various 1:2 dilutions of the above-mentioned oils were prepared in the medium (RPMI 1640, 10% FBS, antibiotics) in order to achieve final concentrations in the wells of 1000-500-250-125-62.5-31-16-8-4-2-1-0 mg/L. Each concentration was tested in triplicate. After adding oils into the appropriate wells, cells were added to each well to obtain a concentration of 10⁵ cells/well and incubated for 2 h at 37°C. Plates were left at room temperature to cool for 10 minutes and then the Cell Lysis Reagent was added to each well to extract ATP from the cells.
Next, after 10 minutes the AMR Plus (ATP Monitoring Reagent Plus) was added and after 2 more minutes the luminescence was read using a microplate luminometer (TECAN). [SUBTITLE] Mice [SUBSECTION] Female CD1 mice obtained from Harlan Italy Laboratories (Udine, Italy) were used at 4 to 6 weeks of age. Mice were allowed to rest for 1 week before the experiment; by that time the animals were roughly 5 to 7 weeks old. Animals were used under specific-pathogen-free conditions that included testing sentinels for unwanted infections; according to the Federation of European Laboratory Animal Science Association standards, no infections were detected. The experimental research was approved on 25 January 2008 by the Ethics Committee of the University of Perugia. [SUBTITLE] Infection and treatment [SUBSECTION] Mouse infection was performed as previously described with minor adaptations [23]. Mice were maintained under pseudoestrus conditions by subcutaneous injection of 0.2 mg of estradiol valerate in 100 μl of sesame oil (Sigma-Aldrich) 6 days prior to infection and weekly until the completion of the study. Mice anaesthetized with 2.5-3.5% (v/v) isoflurane gas were infected twice at a 24 h interval with 10 μl of a 10⁹ cells/ml suspension of C. albicans gLUC59 or the control strain. Cell suspensions were administered from a mechanical pipette into the vaginal lumen, close to the cervix. To favour vaginal contact and adsorption of fungal cells, mice were held head down for 1 min following inoculation. Mice were then allowed to recover for 24-48 h, during which the Candida infection was established.
The intravaginal treatment with TTO, EOMS and JO (500 μg/10 μl/mouse) was begun 2 h before the first challenge and was then repeated every two days until day +21. [SUBTITLE] Monitoring of mouse vaginal infection [SUBSECTION] To monitor the infection during the treatment with essential oil, every day post-infection (starting 48 h after challenge) 10 μl of coelenterazine (1 mg/ml in 1:4 methanol:H2O) was added to the vaginal lumen. Afterwards, mice were imaged in the IVIS-200™ imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from the vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with the Living Image® software package. In selected experiments mice were anaesthetized with 2.5% isoflurane and held head down while the vaginal lumen was thoroughly washed with 150 μl of saline. To determine the fungal load in the vagina, 50 μl of the lavage fluid from each mouse was plated on YPD agar plus chloramphenicol (50 μg/ml), and CFUs were evaluated. [SUBTITLE] Statistical analysis [SUBSECTION] Differences between essential oil-treated and saline-treated mice were evaluated by the non-parametric Mann-Whitney U-test. Viable count data from the time-kill assay and the yeast and hyphae survival tests were compared using the Student's t-test (two-tailed). P-values of < 0.05 were considered significant.
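The two tests named in the statistical analysis are readily run with scipy; a minimal sketch with invented numbers (not data from the study):

```python
# Minimal sketch of the two tests named in the statistical analysis section,
# using invented numbers rather than data from the study.
from scipy import stats

# Mann-Whitney U-test: e.g. photon emission (ROI counts) in oil-treated vs
# saline-treated mice.
treated = [1.2e5, 0.8e5, 1.5e5, 0.9e5, 1.1e5]
saline = [3.4e5, 2.9e5, 4.1e5, 3.8e5, 3.0e5]
u, p_u = stats.mannwhitneyu(treated, saline, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p_u:.4f}")

# Two-tailed Student's t-test: e.g. log10 CFU/ml from the time-kill assay.
oil = [3.1, 2.8, 3.0, 2.7]
control = [5.2, 5.0, 5.3, 5.1]
t, p_t = stats.ttest_ind(oil, control)
print(f"t = {t:.2f}, p = {p_t:.4f} (significant if p < 0.05)")
```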
null
null
null
null
[ "Background", "Essential oils", "Microorganisms", "Minimal Inhibitory Concentration (MIC) assay", "Minimal Fungicidal Concentration (MFC) assay", "Time killing", "Cell lines", "Cytotoxicity assay", "Mice", "Infection and treatment", "Monitoring of mouse vaginal infection", "Statistical analysis", "Results", "MIC, MFC and Killing Kinetics", "Yeasts and the mycelial form have different susceptibilities to EOMS", "Effect of EOMS on vaginal candidiasis", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Candida albicans is a major fungal pathogen of humans [1,2] and a commensal organism of the gastrointestinal tract. In severely immunocompromised patients this fungus causes high morbidity and mortality. C. albicans is also the etiological agent of vulvovaginal candidiasis, a common pathological condition, afflicting normal women of fertile age, which frequently develops into a chronic, substantially incurable, disease [3].\nDifferent classes of antimycotic drugs are available to treat fungal infections. The azoles, particularly fluconazole, remain among the most common antifungal drugs, but their intensive clinical use for both therapy and prophylaxis has favoured the emergence of resistant strains [4]. The phenomenon of drug resistance has raised interest in substances of natural origin as a therapeutic alternative. Essential oils (EO) of aromatic plants are used by companies for the production of soaps, perfumes and toiletries. Many of them are also used in traditional medicine for various purposes [5-7]. In the last years various EO have been found to show antimicrobial, antioxidant anticancer and other pharmacological activities [8-10]. Particularly, a number of EO have been tested for in vivo and in vitro antimycotic activity and some have been shown to be potential antifungal agents.\nThe EO have a complex composition based on a number of constituents with low molecular weight, and their biological activities are due either to a main component of the mixture, usually a monoterpene, or to the synergic action of multiple compounds [11].\nMentha suaveolens has been used in the traditional medicine of Mediterranean areas and has a wide range of effects: tonic, stimulating, stomachic, carminative, analgesic, choleretic, antispasmodic, sedative, hypotensive and insecticidal. It shows depressor activity, analgesic and antiinflammatory action [12].\nMentha suaveolens plants collected in various regions of Morocco contains a high percentage of oxides such as piperitenone oxide (PEO) and piperitone oxide (PO), terpenic alcohol (fenohol, p-cymen-8-ol, geraniol, terpineol and borneol) and terpenic ketones (pulegone and piperitenone) all of which account for 65% to 90% of the total essential oil. The antimicrobial activity of PO, even if generally comparable to that of PEO, seems to be two-fold lower than that of PEO against yeast [13]. No studies have however addressed the in vivo activity of Mentha suaveolens EO in a suitable experimental model of vaginal candidiasis under controlled conditions. Thus, in this study we have tested the in vitro and in vivo activity of M. suaveolens EO against C. albicans. Particularly, for in vivo activity, we used a recently developed, non-invasive in vivo imaging technique, which exploits a novel cell surface luciferase as reporter gene [14].\nFor both in vitro and in vivo studies, we used Jasmine Oil as a negative control and Tea Tree Oil as a positive control.", "Mentha suaveolens essential oil was kindly provided by the Department of Chemistry and Drug Technologies, University of Rome \"La Sapienza\", Italy. It was obtained from wild-type plants grown in Tarquinia forests located around 60 miles from Rome. The oil was extracted by four-hour hydro distillation of the leaves using a Clevenger-type apparatus as previously described [15], then analyzed for chemical composition by gas chromatography and mass spectroscopy (DMePe BETA PS086, 0.25 mm film on a 25 m column, diameter of 0.25 mm, operating at 220°C and eluting with helium). 
Compounds were identified by the application of the NIST 08 Mass Spectral Library. The analysis revealed that piperitenone oxide constitutes 90% of EOMS. Limonene and 1,8-cineole were also present, among other minor constituents.\nThe essential oils of tea tree (Melaleuca alternifolia) (TTO) and jasmine (Jasminum grandiflorum) (JO) also used in this research were commercial oils purchased from Named (Lesmo, Italy) and Erboristeria Magentina (Torino, Italy), respectively. They were obtained by steam distillation from the leaves and young branches of tea tree and from the flowers of jasmine. TTO is pure, extracted without additives, and was used as a positive control because of its documented antifungal activity [16,17], while jasmine oil, which has been shown to be inactive against fungal growth, was used as a negative control [18].\nFluconazole was obtained from Sigma-Aldrich (Germany).", "Different strains of Candida albicans were used in the study: four clinical isolates from AIDS patients (AIDS68, AIDS6, AIDS37 and AIDS126); CO23, isolated from a subject with vulvo-vaginal candidiasis and susceptible to micafungin and fluconazole, together with the drug-resistant strains CO23RFK (micafungin-resistant) and CO23RFLU (fluconazole-resistant) [19]; CA2, an echinocandin-resistant, non-germinative strain that grows as a pure yeast form at 28-37°C in conventional mycologic media [20]; GR5, isolated from a woman with recurrent vaginal candidiasis; 3153, intrinsically resistant to fluconazole; ATCC10231 and ATCC24433. C. albicans CA1398 carrying the ACT1p-gLUC59 fusion (C. albicans gLUC59) or C. albicans CA1398 not expressing gLUC59 (control strain) were used in the models of vaginal Candida infection [14]. For experimental infections, cells from stock cultures on YPD agar (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar, all w/v) with 50 μg/ml chloramphenicol were grown in YPD broth (1% yeast extract, 2% peptone, 2% glucose, all w/v) at room temperature for 24 h, then harvested by centrifugation, washed, counted in a haemocytometer and resuspended to the desired concentration in sterile physiological saline. In order to examine the effect of the oil on the mycelial form of Candida, yeasts were grown for 4 h in RPMI 1640 plus 10% FBS at 37°C, then the hyphae were washed and incubated with different concentrations of the essential oils (EOMS, TTO and JO) for 24 h at 37°C. Yeasts for infection were harvested from overnight cultures on YPD agar plates and adjusted to a concentration of 10⁹/ml in sterile physiological saline.", "The Minimal Inhibitory Concentration (MIC) was determined by the micro-broth dilution method according to the Clinical and Laboratory Standards Institute/National Committee for Clinical Laboratory Standards (CLSI/NCCLS) Approved Standard M27-A3, 2008 [21]. A 0.5 g/L fluconazole solution was prepared by dissolving the agent in endotoxin-free water. Solutions of the essential oils (100 g/L) were prepared in RPMI 1640. Briefly, to determine the MIC of EOMS, TTO, JO or fluconazole, RPMI-1640 supplemented with MOPS at pH 7 was used. EOMS, TTO and JO were diluted in RPMI-1640 supplemented with Tween 80 (final concentration of 0.001% v/v). The dilutions, ranging from 0.01219 to 12.48 g/L of the essential oils, were prepared in 96-well plates. The inoculum size was about 2.5 × 10³ cells/ml. The plates were incubated at 30°C for 24-48 h. To determine hyphal survival, C. albicans cells were first grown for 4 h in RPMI supplemented with 10% FBS and then treated with the different essential oils.", "The Minimal Fungicidal Concentration (MFC) was determined as the lowest concentration of fluconazole or of the essential oils at which no microbial growth was observed. For the MFC determination, Sabouraud dextrose agar plates were seeded with 10 μl of the cell suspensions taken from the wells of the MIC assay plates in which no cell growth was observed. These plates were incubated at 30°C for 24-48 h and colony forming unit (CFU) growth was evaluated (an illustrative read-out of such a dilution series is sketched further below).", "To confirm the fungicidal activity of EOMS, time-kill procedures were performed as described by Klepser [22]. Cells sub-cultured in YPD at 28°C for 24 h were centrifuged, washed and resuspended at a concentration of 2.5 × 10⁵ cells/ml in RPMI supplemented with EOMS or TTO and incubated at 28°C. The essential oil concentrations used in the test were equivalent to 1, 2, 4, and 8 times the MIC. At predetermined time points (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours) of incubation, 100 μl aliquots were removed from the test solution and tenfold serial dilutions were performed. A 100 μl aliquot from each dilution was spread on the surface of Sabouraud dextrose agar plates and incubated at 37°C for 48 h for determination of CFU/ml.", "Monomac-6, a human tumour cell line initially obtained from the peripheral blood of a 60-year-old man with acute monocytic leukaemia, and L929, a fibroblast-like cell line cloned from strain L (the parent strain was derived from normal subcutaneous areolar and adipose tissue of a male C3H/An mouse), were grown in a humidified atmosphere containing 5% CO2 at 37°C. The culture medium consisted of RPMI 1640 with glutamine, 10% FBS (foetal bovine serum) and antibiotics. The cultures were split every three or four days.", "Cytotoxicity was tested by determining the cellular ATP level with the ViaLight® Plus Kit (Lonza). The method is based upon the bioluminescent measurement of the ATP that is present in all metabolically active cells. The bioluminescent method utilizes an enzyme, luciferase, which catalyses the formation of light from ATP and luciferin. The emitted light intensity is linearly related to the ATP concentration and is measured using a luminometer. To perform the cytotoxicity tests, cells were recovered, counted and adjusted to a concentration of 10⁶/ml. The assays were carried out for the essential oils (EOMS, TTO and JO) and for the control (untreated cells). Various 1:2 dilutions of the above-mentioned oils were prepared in the medium (RPMI 1640, 10% FBS, antibiotics) in order to achieve the following final concentrations in the wells: 1000-500-250-125-62.5-31-16-8-4-2-1-0 mg/L. Each concentration was tested in triplicate. After adding the oils to the appropriate wells, cells were added to each well to obtain a concentration of 10⁵ cells/well and incubated for 2 h at 37°C. Plates were left at room temperature to cool for 10 minutes and then the Cell Lysis Reagent was added to each well to extract ATP from the cells. Next, after 10 minutes the AMR Plus (ATP Monitoring Reagent Plus) was added and after 2 more minutes the luminescence was read using a microplate luminometer (TECAN) (an illustrative viability normalisation is sketched further below).", "Female CD1 mice obtained from Harlan Italy Laboratories (Udine, Italy) were used at 4 to 6 weeks of age. Mice were allowed to rest for 1 week before the experiment; by that time the animals were roughly 5 to 7 weeks old. 
Animals were used under specific-pathogen-free conditions that included testing sentinels for unwanted infections; according to the Federation of European Laboratory Animal Science Association standards, no infections were detected.\nThe experimental research was approved on 25 January 2008 by the Ethics Committee of the University of Perugia.", "Mouse infection was performed as previously described with minor adaptations [23]. Mice were maintained under pseudoestrus condition by subcutaneous injection of 0.2 mg of estradiol valerate in 100 μl of sesame oil (Sigma-Aldrich) 6 days prior to infection and weekly until the completion of the study. Mice anaesthetized with 2.5-3.5% (v/v) isoflurane gas were infected twice at a 24 h interval with 10 μl of 10⁹ cells/ml of C. albicans gLUC59 or the control strain. Cell suspensions were administered with a mechanical pipette into the vaginal lumen, close to the cervix. To favour vaginal contact and adsorption of fungal cells, mice were held head down for 1 min following inoculation. Mice were then allowed to recover for 24-48 h, during which the Candida infection was established.\nThe intravaginal treatment with TTO, EOMS and JO (500 μg/10 μl/mouse) was begun 2 h before the first challenge and was then repeated every two days until day +21.", "To monitor the infection during the treatment with the essential oils, every day post-infection (starting 48 h after challenge) 10 μl of coelenterazine (1 mg/ml in 1:4 methanol:H2O) was added to the vaginal lumen. Afterwards, mice were imaged in the IVIS-200™ imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from the vaginal area within the images (Region Of Interest, ROI) of each mouse was quantified with the Living Image® software package. In selected experiments, mice were anaesthetized with 2.5% isoflurane and held head down, and the vaginal lumen was thoroughly washed with 150 μl of saline. To determine the fungal load in the vagina, 50 μl of the lavage fluid from each mouse was plated on YPD agar plus chloramphenicol (50 μg/ml) and CFUs were evaluated.", "Differences between essential oil-treated and saline-treated mice were evaluated with the non-parametric Mann-Whitney U-test. Viable-count data from the time-kill assay and from the yeast and hyphae survival tests were compared using the two-tailed Student's t-test. P-values of < 0.05 were considered significant.", "[SUBTITLE] MIC, MFC and Killing Kinetics [SUBSECTION] The initial determination of the antifungal activity of the essential oils (EOMS, TTO and JO) was performed in vitro by standardized CLSI/NCCLS methods [21], and this was done against all strains of C. albicans used throughout this study. MIC values fell in a range of 0.39-0.78 g/L for EOMS and 0.78-3.12 g/L for TTO. The MFC values ranged from 0.39-1.56 g/L for EOMS and 1.56-6.24 g/L for TTO, thus very close to the MIC values. In our experimental system, TTO was less efficient than EOMS, especially when the oils were tested against fluconazole-resistant strains. In addition, MFC values for TTO were higher than those for EOMS for all strains tested (Table 1). JO, used as a negative control, did not affect the growth of any strain tested.\nAntifungal activity of EOMS, TTO, JO and fluconazole on different Candida albicans strains.\nTo obtain more insight into the anticandidal activity of EOMS on gLUC59, i.e. the strain used in the experimental vaginal infection (see below), a time-kill test at concentrations equivalent to 1, 2, 4, and 8 times the MIC value (0.39 g/L) was performed. As reported in Figure 1, at a concentration of 2 × MIC (0.78 g/L) the number of colonies was significantly reduced after 24 h of incubation (P < 0.05) and a total fungicidal effect was observed within 48 h of contact. The results demonstrated that C. albicans gLUC59 was highly susceptible to EOMS. In parallel, we analyzed the time-kill test for TTO, confirming that this oil exerted a fungicidal effect within 48 h at a higher concentration (3.12 g/L; P < 0.05) than that observed with EOMS (0.78 g/L).\nTime kill curve of EOMS and TTO against Candida albicans gLUC59. Cells were untreated (control) or treated with 0.39, 0.78, 1.56, or 3.12 g/L of EOMS, or treated with 3.12, 6.25, 12.5 or 25 g/L of TTO, for different time periods (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours). After incubation, surviving cells were determined by cultivation on Sabouraud dextrose agar plates at 37°C for 48 h. Results are expressed as CFU/ml and indicated as mean ± SEM of triplicate samples. Data are representative of one of three independent experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.\n[SUBTITLE] Yeasts and the mycelial form have different susceptibilities to EOMS [SUBSECTION] Given that EOMS affects the C. albicans yeast form of growth, we extended our investigations to the virulent mycelial form of C. albicans. To this end, C. albicans was cultured at 37°C for 4 h in the presence of 10% FBS. Microscopic examination demonstrated >90% mycelial conversion under our conditions. As shown in Figure 2, hyphal forms were less susceptible to EOMS than yeast forms, since inhibition of hyphal growth was obtained at a significantly higher concentration (0.098 g/L) than that required to inhibit the yeast form of growth. In fact, a concentration of 0.5 g/L was able to completely inhibit the yeast cells but not the hyphal form (Figure 2A-B). Notably, EOMS showed a greater inhibitory effect than TTO on both yeast cells (0.05 vs 0.098 g/L, respectively) and the hyphal form (0.098 vs 0.39 g/L, respectively). Jasmine Oil did not affect the viability of yeast or mycelial cells. The lack of effect of EOMS 0.5 g/L on hyphal survival was documented by microscopic examination of the hyphae after addition of EOMS (Figure 2C).\nEffect of essential oil on yeast and hyphae survival. gLUC59 Candida albicans yeast cells (A) and preformed hyphae (B) were treated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h. After incubation, 10 μl of coelenterazine substrate (1 mg/ml) was added to each well and samples were read using a luminometer. Results are expressed as percentage of yeast or hyphae survival and indicated as mean ± SD of triplicate samples, and are from one of three experiments with similar results. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant. The effect on preformed hyphae was microscopically examined after 24 h of treatment with essential oil (C). Original magnification of 20x or 40x is indicated in the micrographs. The results are representative of one of three independent experiments.\nExperiments were also performed to examine whether EOMS exerted cytotoxicity towards mammalian cells. To this end, Monomac 6 and L929 cell lines were treated with various concentrations of EOMS for 2 and 24 h. The results reported in Figure 3 show that high concentrations of EOMS (0.5 and 1 g/L), i.e. higher than the MIC value, were necessary to exert cytotoxicity on the Monomac 6 and L929 cell lines. Using the same concentrations, a similar trend was observed for TTO, while no cytotoxicity resulted from JO treatment.\nCytotoxicity of essential oils on mammalian cells. Monomac 6 and L929 cells were treated with different concentrations of essential oils for 2 h. The cytotoxicity was tested by the determination of the cell ATP level by a bioluminescent method. Results, expressed as Relative luciferase activity (RLUC), represent the mean ± SD of three different experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.\n[SUBTITLE] Effect of EOMS on vaginal candidiasis [SUBSECTION] Given the encouraging results observed in vitro, we wondered whether these beneficial effects against C. albicans could be reproduced in an in vivo system. To this purpose, we exploited the new in vivo imaging technique recently developed in our laboratory [14,24] to assess therapeutic efficacy in an estrogen-dependent mouse model of vaginal candidiasis.\nEstradiol-treated mice were infected with C. albicans expressing the gLUC luciferase gene, and EOMS, TTO or JO were administered intravaginally 2 h before the first challenge and then every two days. The course of the infection was monitored at various days post challenge by in vivo imaging (Figure 4). The fungal load in the vagina was quantified as photon emission as well as CFU from vaginal fluids. The results reported in Figure 5 show that there was a significant reduction of the Candida load in mice treated with EOMS with respect to diluent (saline)-treated mice starting from 15 days post infection, and this beneficial effect was maintained until 21 days post infection. In this model, and under the conditions tested, TTO was only minimally effective in causing a significant reduction of the vaginal fungal load, measured as photon emission, at 9 and 15 days. No effect was recorded after 21 days of infection.\nIn vivo imaging of mice vaginally infected with Candida albicans and treated with essential oils. The vaginal lumen of mice under pseudoestrus condition was infected for 2 consecutive days with 10 μl of a 10⁹ cell/ml suspension of Candida albicans gLUC59 (first three mice of each group) or the control strain (fourth mouse of each group) and treated with the different essential oils 2 h before the first challenge and then every two days. At 5, 7, 9, 15 and 21 days post-infection, mice were treated intravaginally with 10 μg of coelenterazine and imaged in the IVIS-200™ imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from the vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with the Living Image® software package. Data are from one of three experiments with similar results.\nMeasurement of Candida albicans load in mice treated with different essential oils. The vaginal lumen of mice under pseudoestrus condition was infected with Candida albicans gLUC59 and then treated with the essential oils 2 h before the first challenge and then every two days. At 5, 7, 9, 15 and 21 days post-infection, 6 mice per group were anaesthetized, imaged in the IVIS-200™ imaging system, and the vaginal lumen was thoroughly washed with 150 μl of saline using a mechanical pipette. The fungal burden of the vaginal lavage fluids was determined by a colony forming unit (CFU) assay.\nFor the CFU assay, 50 μl of the lavage fluids were diluted and seeded on YPD agar plus chloramphenicol. The statistical significance was evaluated with the non-parametric Mann-Whitney U-test. *P < 0.05 (essential oil-treated mice versus saline-treated mice).", "The initial determination of the antifungal activity of the essential oils (EOMS, TTO and JO) was performed in vitro by standardized CLSI/NCCLS methods [21], and this was done against all strains of C. albicans used throughout this study. MIC values fell in a range of 0.39-0.78 g/L for EOMS and 0.78-3.12 g/L for TTO. The MFC values ranged from 0.39-1.56 g/L for EOMS and 1.56-6.24 g/L for TTO, thus very close to the MIC values. In our experimental system, TTO was less efficient than EOMS, especially when the oils were tested against fluconazole-resistant strains. In addition, MFC values for TTO were higher than those for EOMS for all strains tested (Table 1). JO, used as a negative control, did not affect the growth of any strain tested.\nAntifungal activity of EOMS, TTO, JO and fluconazole on different Candida albicans strains.\nTo obtain more insight into the anticandidal activity of EOMS on gLUC59, i.e. the strain used in the experimental vaginal infection (see below), a time-kill test at concentrations equivalent to 1, 2, 4, and 8 times the MIC value (0.39 g/L) was performed. As reported in Figure 1, at a concentration of 2 × MIC (0.78 g/L) the number of colonies was significantly reduced after 24 h of incubation (P < 0.05) and a total fungicidal effect was observed within 48 h of contact. The results demonstrated that C. albicans gLUC59 was highly susceptible to EOMS. In parallel, we analyzed the time-kill test for TTO, confirming that this oil exerted a fungicidal effect within 48 h at a higher concentration (3.12 g/L; P < 0.05) than that observed with EOMS (0.78 g/L).\nTime kill curve of EOMS and TTO against Candida albicans gLUC59. 
Cells were untreated (control) or treated with 0.39, 0.78, 1.56, or 3.12 g/L of EOMS, or treated with 3.12, 6.25, 12.5 or 25 g/L of TTO, for different time periods (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours). After incubation, surviving cells were determined by cultivation on Sabouraud dextrose agar plates at 37°C for 48 h. Results are expressed as CFU/ml and indicated as mean ± SEM of triplicate samples. Data are representative of one of three independent experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.", "Given that EOMS affects the C. albicans yeast form of growth, we extended our investigations to the virulent mycelial form of C. albicans. To this end, C. albicans was cultured at 37°C for 4 h in the presence of 10% FBS. Microscopic examination demonstrated >90% mycelial conversion under our conditions. As shown in Figure 2, hyphal forms were less susceptible to EOMS than yeast forms, since inhibition of hyphal growth was obtained at a significantly higher concentration (0.098 g/L) than that required to inhibit the yeast form of growth. In fact, a concentration of 0.5 g/L was able to completely inhibit the yeast cells but not the hyphal form (Figure 2A-B). Notably, EOMS showed a greater inhibitory effect than TTO on both yeast cells (0.05 vs 0.098 g/L, respectively) and the hyphal form (0.098 vs 0.39 g/L, respectively). Jasmine Oil did not affect the viability of yeast or mycelial cells. The lack of effect of EOMS 0.5 g/L on hyphal survival was documented by microscopic examination of the hyphae after addition of EOMS (Figure 2C).\nEffect of essential oil on yeast and hyphae survival. gLUC59 Candida albicans yeast cells (A) and preformed hyphae (B) were treated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h. After incubation, 10 μl of coelenterazine substrate (1 mg/ml) was added to each well and samples were read using a luminometer. Results are expressed as percentage of yeast or hyphae survival and indicated as mean ± SD of triplicate samples, and are from one of three experiments with similar results. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant. The effect on preformed hyphae was microscopically examined after 24 h of treatment with essential oil (C). Original magnification of 20x or 40x is indicated in the micrographs. The results are representative of one of three independent experiments.\nExperiments were also performed to examine whether EOMS exerted cytotoxicity towards mammalian cells. To this end, Monomac 6 and L929 cell lines were treated with various concentrations of EOMS for 2 and 24 h. The results reported in Figure 3 show that high concentrations of EOMS (0.5 and 1 g/L), i.e. higher than the MIC value, were necessary to exert cytotoxicity on the Monomac 6 and L929 cell lines. Using the same concentrations, a similar trend was observed for TTO, while no cytotoxicity resulted from JO treatment.\nCytotoxicity of essential oils on mammalian cells. Monomac 6 and L929 cells were treated with different concentrations of essential oils for 2 h. The cytotoxicity was tested by the determination of the cell ATP level by a bioluminescent method. Results, expressed as Relative luciferase activity (RLUC), represent the mean ± SD of three different experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.", "Given the encouraging results observed in vitro, we wondered whether these beneficial effects against C. albicans could be reproduced in an in vivo system. To this purpose, we exploited the new in vivo imaging technique recently developed in our laboratory [14,24] to assess therapeutic efficacy in an estrogen-dependent mouse model of vaginal candidiasis.\nEstradiol-treated mice were infected with C. albicans expressing the gLUC luciferase gene, and EOMS, TTO or JO were administered intravaginally 2 h before the first challenge and then every two days. The course of the infection was monitored at various days post challenge by in vivo imaging (Figure 4). The fungal load in the vagina was quantified as photon emission as well as CFU from vaginal fluids. The results reported in Figure 5 show that there was a significant reduction of the Candida load in mice treated with EOMS with respect to diluent (saline)-treated mice starting from 15 days post infection, and this beneficial effect was maintained until 21 days post infection. In this model, and under the conditions tested, TTO was only minimally effective in causing a significant reduction of the vaginal fungal load, measured as photon emission, at 9 and 15 days. No effect was recorded after 21 days of infection.\nIn vivo imaging of mice vaginally infected with Candida albicans and treated with essential oils. The vaginal lumen of mice under pseudoestrus condition was infected for 2 consecutive days with 10 μl of a 10⁹ cell/ml suspension of Candida albicans gLUC59 (first three mice of each group) or the control strain (fourth mouse of each group) and treated with the different essential oils 2 h before the first challenge and then every two days. At 5, 7, 9, 15 and 21 days post-infection, mice were treated intravaginally with 10 μg of coelenterazine and imaged in the IVIS-200™ imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from the vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with the Living Image® software package. Data are from one of three experiments with similar results.\nMeasurement of Candida albicans load in mice treated with different essential oils. The vaginal lumen of mice under pseudoestrus condition was infected with Candida albicans gLUC59 and then treated with the essential oils 2 h before the first challenge and then every two days. At 5, 7, 9, 15 and 21 days post-infection, 6 mice per group were anaesthetized, imaged in the IVIS-200™ imaging system, and the vaginal lumen was thoroughly washed with 150 μl of saline using a mechanical pipette. The fungal burden of the vaginal lavage fluids was determined by a colony forming unit (CFU) assay.\nFor the CFU assay, 50 μl of the lavage fluids were diluted and seeded on YPD agar plus chloramphenicol. The statistical significance was evaluated with the non-parametric Mann-Whitney U-test. *P < 0.05 (essential oil-treated mice versus saline-treated mice).", "Human pathogenic fungi represent a significant proportion of the infectious agents affecting the immunocompromised host. 
The therapeutic options for these patients are hampered by i) the relative scarcity of active and safe antifungal drugs, most of which are essentially fungistatic rather than fungicidal, ii) antifungal drug resistance to the most active and widely used azole compounds, and iii) the difficulties of devising and/or constantly maintaining effective infection control measures in health care institutions. Overall, fungal infections in immunodepressed subjects are a very challenging problem for the health system.\nThus there is a clear demand for new therapeutic approaches in this era of increasing spread of antimicrobial drug resistance and re-emergence of infectious diseases [25,26].\nRecently, the use of TTO as a new approach in antifungal therapy has been proposed. This natural compound appears to be effective in vitro against multidrug-resistant Candida and in vivo against mucosal candidiasis [27]. Moreover, it has also been documented that terpinen-4-ol, rather than 1,8-cineole, is the most likely mediator of TTO activity or, at least, a main contributor to its anti-Candida activity [16]. In this study we used TTO as a positive control in our in vitro and in vivo experimental system.\nRegarding the antimicrobial properties of EOMS, recent evidence attributes larvicidal activity to this essential oil and its active compound [28]. Other important activities of EOMS include protective effects against hydrogen peroxide-induced cytotoxicity. Anti-Candida activity has been described for Mentha piperita [29]. Furthermore, EOMS was effective against Gram-positive and Gram-negative microorganisms and fungi [13]. The main microbicidal components of EOMS were pulegone and piperitone oxide.\nIn this study we demonstrated for the first time that EOMS is endowed with potent anticandidal activity in vitro, against both azole-susceptible and azole-resistant Candida strains. In addition, EOMS was shown to be not only an inhibitor of Candida growth but also able to actually kill the yeasts. We determined the time-kill curves, and so discovered that EOMS was apparently more effective than the more extensively investigated TTO. All experiments were performed against a control, jasmine oil, which proved totally ineffective.\nThe antifungal activity is manifested against both the yeast and the mycelial form, although higher EOMS concentrations were required to kill the latter form of growth. Finally, we provide evidence that intravaginal administration of EOMS in vivo is also efficacious to some degree.\nFor the in vivo assay, a stringent and controlled model of vaginal infection of mice was used. This exploits a novel cell surface luciferase as reporter gene, constructed by fusing a synthetic, codon-optimized version of the Gaussia princeps luciferase gene to Candida albicans PGA59, which encodes a glycosylphosphatidyl inositol-linked cell wall protein [14]. This technique allows continuous, non-invasive monitoring of the spatial and temporal progression of vaginal infection in a small number of live mice. The model proved useful in assaying for anticandidal protection in actively or passively immunized animals [24]. The method was paralleled by a more traditional determination of the fungal load in the vagina by CFU. The in vivo imaging technique proved much more sensitive than the classic CFU method, for at least two different reasons: 1) the vaginal wash does not completely clear the vaginal lumen, because the Candida hyphae are well attached to the tissue; 2) several hyphae often grew as a single colony, causing an underestimation of the fungal load.\nOverall, we show here that EOMS accelerates the clearance of the fungus during vaginal candidiasis, and this accelerated clearance of Candida is demonstrated by both photon emission and CFU measurements. The EOMS activity in our model seems superior, at least after 21 days of infection, to that of TTO, which has previously been found particularly efficacious in a rat model of vaginal candidiasis [16].\nOur data are potentially relevant to the treatment of Candida vulvovaginal infection (VVC). This is a frequent and commonly distressing disease affecting 70-75% of women of childbearing age worldwide at least once during their lives. Predisposing factors for developing an acute form of vaginal candidiasis include antibiotic and oral contraceptive usage, hormone replacement therapy, pregnancy, uncontrolled diabetes mellitus and African American ethnicity [30,31]. Approximately 5%, and possibly up to 10%, of women with a primary episode subsequently experience frustrating recurrent VVC (RVVC), which is defined as at least three to four specific episodes within one year [3,32].", "This study shows for the first time that: i) EOMS has considerable candidastatic and candidacidal activity in vitro, and ii) EOMS administration in vivo accelerates the clearance of C. albicans during vaginal infection.\nThe high impact of this infection and the difficulty of finding an effective therapy reinforce the need to search for alternative therapeutic approaches to integrate or even replace the current treatment. The present results could provide the basis for further investigations, particularly aimed at identifying the therapeutically active anticandidal EOMS component(s).", "A patent related to piperitenone oxide, the main component of Mentha suaveolens essential oil, and its possible industrial application has been filed by LA and RR.", "DP and AR carried out the in vivo experiments and part of the MIC evaluation. LA, EV, FM and RR carried out the essential oil extraction and the MIC and time-kill curve experiments. FB participated in the design and coordination of the study. AV conceived of the study and was primarily involved in the conceptual planning of the paper. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6882/11/18/prepub\n" ]
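To make the MIC/MFC read-out described in the Methods above concrete, the sketch below picks the lowest concentration with no growth from a two-fold dilution series; the concentrations and growth flags are hypothetical illustrations, not data or code from this study.

```python
# Illustrative sketch of reading MIC/MFC from a two-fold dilution series (g/L).
# growth=True means visible growth in the MIC well / colonies on the MFC subculture plate.
def lowest_no_growth(series):
    """Return the lowest concentration with no growth, or None if growth occurred at all concentrations."""
    negatives = [conc for conc, growth in series if not growth]
    return min(negatives) if negatives else None

# Hypothetical EOMS dilution series against one isolate (concentration, growth in well).
mic_series = [(12.48, False), (6.24, False), (3.12, False), (1.56, False),
              (0.78, False), (0.39, False), (0.195, True), (0.0975, True)]
# Hypothetical subculture results from the no-growth wells (concentration, growth on plate).
mfc_series = [(12.48, False), (6.24, False), (3.12, False), (1.56, False),
              (0.78, False), (0.39, True)]

print("MIC =", lowest_no_growth(mic_series), "g/L")   # 0.39 g/L
print("MFC =", lowest_no_growth(mfc_series), "g/L")   # 0.78 g/L
```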
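In the same illustrative spirit, ATP-bioluminescence cytotoxicity readings are commonly normalised to the untreated control; the snippet below shows one hypothetical way to express percent viability and is not the authors' analysis.

```python
# Illustrative normalisation of ATP-luminescence readings to the untreated control (percent viability).
# All values are hypothetical relative light units (RLU) from triplicate wells.
from statistics import mean

untreated_rlu = [95000, 102000, 99000]
treated_rlu = [21000, 18500, 23000]   # e.g. a hypothetical EOMS concentration of 0.5 g/L

viability_pct = 100.0 * mean(treated_rlu) / mean(untreated_rlu)
print(f"Viability: {viability_pct:.1f}% of untreated control")
```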
[ "Background", "Methods", "Essential oils", "Microorganisms", "Minimal Inhibitory Concentration (MIC) assay", "Minimal Fungicidal Concentration (MFC) assay", "Time killing", "Cell lines", "Cytotoxicity assay", "Mice", "Infection and treatment", "Monitoring of mouse vaginal infection", "Statistical analysis", "Results", "MIC, MFC and Killing Kinetics", "Yeasts and the mycelial form have different susceptibilities to EOMS", "Effect of EOMS on vaginal candidiasis", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Candida albicans is a major fungal pathogen of humans [1,2] and a commensal organism of the gastrointestinal tract. In severely immunocompromised patients this fungus causes high morbidity and mortality. C. albicans is also the etiological agent of vulvovaginal candidiasis, a common pathological condition, afflicting normal women of fertile age, which frequently develops into a chronic, substantially incurable, disease [3].\nDifferent classes of antimycotic drugs are available to treat fungal infections. The azoles, particularly fluconazole, remain among the most common antifungal drugs, but their intensive clinical use for both therapy and prophylaxis has favoured the emergence of resistant strains [4]. The phenomenon of drug resistance has raised interest in substances of natural origin as a therapeutic alternative. Essential oils (EO) of aromatic plants are used by companies for the production of soaps, perfumes and toiletries. Many of them are also used in traditional medicine for various purposes [5-7]. In the last years various EO have been found to show antimicrobial, antioxidant anticancer and other pharmacological activities [8-10]. Particularly, a number of EO have been tested for in vivo and in vitro antimycotic activity and some have been shown to be potential antifungal agents.\nThe EO have a complex composition based on a number of constituents with low molecular weight, and their biological activities are due either to a main component of the mixture, usually a monoterpene, or to the synergic action of multiple compounds [11].\nMentha suaveolens has been used in the traditional medicine of Mediterranean areas and has a wide range of effects: tonic, stimulating, stomachic, carminative, analgesic, choleretic, antispasmodic, sedative, hypotensive and insecticidal. It shows depressor activity, analgesic and antiinflammatory action [12].\nMentha suaveolens plants collected in various regions of Morocco contains a high percentage of oxides such as piperitenone oxide (PEO) and piperitone oxide (PO), terpenic alcohol (fenohol, p-cymen-8-ol, geraniol, terpineol and borneol) and terpenic ketones (pulegone and piperitenone) all of which account for 65% to 90% of the total essential oil. The antimicrobial activity of PO, even if generally comparable to that of PEO, seems to be two-fold lower than that of PEO against yeast [13]. No studies have however addressed the in vivo activity of Mentha suaveolens EO in a suitable experimental model of vaginal candidiasis under controlled conditions. Thus, in this study we have tested the in vitro and in vivo activity of M. suaveolens EO against C. albicans. Particularly, for in vivo activity, we used a recently developed, non-invasive in vivo imaging technique, which exploits a novel cell surface luciferase as reporter gene [14].\nFor both in vitro and in vivo studies, we used Jasmine Oil as a negative control and Tea Tree Oil as a positive control.", "[SUBTITLE] Essential oils [SUBSECTION] Mentha suaveolens essential oil was kindly provided by the Department of Chemistry and Drug Technologies, University of Rome \"La Sapienza\", Italy. It was obtained from wild-type plants grown in Tarquinia forests located around 60 miles from Rome. 
The oil was extracted by four-hour hydro distillation of the leaves using a Clevenger-type apparatus as previously described [15], then analyzed for chemical composition by gas chromatography and mass spectroscopy (DMePe BETA PS086, 0.25 mm film on a 25 m column, diameter of 0.25 mm, operating at 220°C and eluting with helium). Compounds were identified by the application of the NIST 08 Mass Spectral Library. Analysis revealed that piperitenone oxide constitutes 90% of EOMS. Limonene and 1,8-cyneole were also present, among other minor constituents.\nEssential oils of tea tree (Melaleuca alternifolia) (TTO) and jasmine oil (Jasminum grandiflorum) (JO) also used in this research were commercial oils purchased form Named (Lesmo, Italy) and Erboristeria Magentina (Torino, Italy), respectively. They were obtained by steam distillation from leaves and young branches of tea tree, and from flowers of jasmine. TTO is pure, extracted without additives and was used as a positive control, because of documented antifungal activity [16,17] while jasmine oil, which was shown to be inactive against fungal growth, was used as a negative control [18].\nFluconazole was obtained from Sigma-Aldrich (Germany).\nMentha suaveolens essential oil was kindly provided by the Department of Chemistry and Drug Technologies, University of Rome \"La Sapienza\", Italy. It was obtained from wild-type plants grown in Tarquinia forests located around 60 miles from Rome. The oil was extracted by four-hour hydro distillation of the leaves using a Clevenger-type apparatus as previously described [15], then analyzed for chemical composition by gas chromatography and mass spectroscopy (DMePe BETA PS086, 0.25 mm film on a 25 m column, diameter of 0.25 mm, operating at 220°C and eluting with helium). Compounds were identified by the application of the NIST 08 Mass Spectral Library. Analysis revealed that piperitenone oxide constitutes 90% of EOMS. Limonene and 1,8-cyneole were also present, among other minor constituents.\nEssential oils of tea tree (Melaleuca alternifolia) (TTO) and jasmine oil (Jasminum grandiflorum) (JO) also used in this research were commercial oils purchased form Named (Lesmo, Italy) and Erboristeria Magentina (Torino, Italy), respectively. They were obtained by steam distillation from leaves and young branches of tea tree, and from flowers of jasmine. TTO is pure, extracted without additives and was used as a positive control, because of documented antifungal activity [16,17] while jasmine oil, which was shown to be inactive against fungal growth, was used as a negative control [18].\nFluconazole was obtained from Sigma-Aldrich (Germany).\n[SUBTITLE] Microorganisms [SUBSECTION] Different strains of Candida albicans were used in the study: four clinical isolates from AIDS patients AIDS68, AIDS6, AIDS37 and AIDS126, CO23 isolated from a subject with vulvo-vaginal candidiasis susceptible to micafungin and fluconazole and the drug-resistant strains CO23RFK (micafungin-resistant) and CO23RFLU (fluconazole-resistant) [19], CA2, an echinocandin-resistant, non-germinative strain that grows as a pure yeast form at 28-37°C in conventional mycologic media [20], GR5 isolated from a woman with recurrent vaginal candidiasis, 3153 intrinsically resistant to fluconazole, ATCC10231 and ATCC24433. C. albicans CA1398 carrying the ACT1p-gLUC59 fusion (C. albicans gLUC59) or C. albicans CA1398 that did not express gLUC59 (control strain) were used in the models of vaginal Candida infections [14]. 
For experimental infections, cells from stock cultures in YPD agar (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar, all w/v) with 50 μg/ml chloramphenicol were grown in YPD broth (1% yeast extract, 2% peptone,2% glucose, all w/v) at room temperature for 24 h, then harvested by centrifugation, washed, counted in an haemocytometer, and resuspended to the desired concentration in sterile physiological saline. In order to examine the effect of the oil on the mycelia form of Candida, yeasts were grown for 4 h in RPMI 1640 plus 10% FBS at 37°C, then hyphae were washed and incubated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h at 37°C. Yeasts for infection were harvested from overnight cultures in YPD agar plates and adjusted to the concentration 109/ml in sterile physiological saline.\nDifferent strains of Candida albicans were used in the study: four clinical isolates from AIDS patients AIDS68, AIDS6, AIDS37 and AIDS126, CO23 isolated from a subject with vulvo-vaginal candidiasis susceptible to micafungin and fluconazole and the drug-resistant strains CO23RFK (micafungin-resistant) and CO23RFLU (fluconazole-resistant) [19], CA2, an echinocandin-resistant, non-germinative strain that grows as a pure yeast form at 28-37°C in conventional mycologic media [20], GR5 isolated from a woman with recurrent vaginal candidiasis, 3153 intrinsically resistant to fluconazole, ATCC10231 and ATCC24433. C. albicans CA1398 carrying the ACT1p-gLUC59 fusion (C. albicans gLUC59) or C. albicans CA1398 that did not express gLUC59 (control strain) were used in the models of vaginal Candida infections [14]. For experimental infections, cells from stock cultures in YPD agar (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar, all w/v) with 50 μg/ml chloramphenicol were grown in YPD broth (1% yeast extract, 2% peptone,2% glucose, all w/v) at room temperature for 24 h, then harvested by centrifugation, washed, counted in an haemocytometer, and resuspended to the desired concentration in sterile physiological saline. In order to examine the effect of the oil on the mycelia form of Candida, yeasts were grown for 4 h in RPMI 1640 plus 10% FBS at 37°C, then hyphae were washed and incubated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h at 37°C. Yeasts for infection were harvested from overnight cultures in YPD agar plates and adjusted to the concentration 109/ml in sterile physiological saline.\n[SUBTITLE] Minimal Inhibitory Concentration (MIC) assay [SUBSECTION] The Minimal Inhibitory Concentration (MIC) was determined by micro-broth dilution method according to the Clinical and Laboratory Standards Institute/National Commitee for Clinical Laboratory Standards (CLSI/NCCLS) Approved Standard M27-A3, 2008 [21]. Fluconazole 0.5 g/L solution was prepared by dissolving the agent in endotoxin free water. Solutions of essential oils (100 g/L) were prepared in RPMI1640. Briefly, to determine the MIC of EOMS, TTO, JO or Fluconazole, RPMI-1640 supplemented with MOPS at pH 7 was used. EOMS, TTO and JO were diluted in RPMI-1640 supplemented with Tween 80 (final concentration of 0.001% v/v). The dilutions, ranging from 0.01219 to 12.48 g/L of the essential oils, were prepared in 96 well plates. The inoculum size was about 2.5 × 103cells/ml. The plates were incubated at 30°C for 24-48 h. To determine the hyphae survival, C. 
albicans cells were first grown for 4 h in RPMI supplemented with 10% of FBS serum and then treated with different essential oils.\nThe Minimal Inhibitory Concentration (MIC) was determined by micro-broth dilution method according to the Clinical and Laboratory Standards Institute/National Commitee for Clinical Laboratory Standards (CLSI/NCCLS) Approved Standard M27-A3, 2008 [21]. Fluconazole 0.5 g/L solution was prepared by dissolving the agent in endotoxin free water. Solutions of essential oils (100 g/L) were prepared in RPMI1640. Briefly, to determine the MIC of EOMS, TTO, JO or Fluconazole, RPMI-1640 supplemented with MOPS at pH 7 was used. EOMS, TTO and JO were diluted in RPMI-1640 supplemented with Tween 80 (final concentration of 0.001% v/v). The dilutions, ranging from 0.01219 to 12.48 g/L of the essential oils, were prepared in 96 well plates. The inoculum size was about 2.5 × 103cells/ml. The plates were incubated at 30°C for 24-48 h. To determine the hyphae survival, C. albicans cells were first grown for 4 h in RPMI supplemented with 10% of FBS serum and then treated with different essential oils.\n[SUBTITLE] Minimal Fungicidal Concentration (MFC) assay [SUBSECTION] The Minimal Fungicidal Concentration (MFC) was determined as the lowest concentration of Fluconazole or essential oils at which no microbial growth was observed. For the MFC determination, Sabouraud dextrose agar plates were seeded with 10 μl of cell suspensions taken from the wells of the plates of MIC assay where cell growth was not observed. These plates were incubated at 30°C for 24-48 h and colony forming units (CFU) growth was evaluated.\nThe Minimal Fungicidal Concentration (MFC) was determined as the lowest concentration of Fluconazole or essential oils at which no microbial growth was observed. For the MFC determination, Sabouraud dextrose agar plates were seeded with 10 μl of cell suspensions taken from the wells of the plates of MIC assay where cell growth was not observed. These plates were incubated at 30°C for 24-48 h and colony forming units (CFU) growth was evaluated.\n[SUBTITLE] Time killing [SUBSECTION] To confirm the fungicidal activity of EOMS, time-kill procedures were performed as described by Klepser [22]. Cells sub-cultured in YPD at 28°C for 24 h were centrifuged, washed and resuspended at a concentration of 2.5 × 105cell/ml in RPMI supplemented with EOMS or TTO and incubated at 28°C. Essential oil concentrations used in the test were equivalent to 1, 2, 4, and 8 times the MIC. At predetermined time points (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours) of incubation, 100 μl aliquots were removed from the test solution and tenfold serial dilutions were performed. 100 μl aliquot from each dilution was spread on the surface of Sabouraud dextrose agar plates and incubated at 37°C for 48 h for determination of CFU/ml.\nTo confirm the fungicidal activity of EOMS, time-kill procedures were performed as described by Klepser [22]. Cells sub-cultured in YPD at 28°C for 24 h were centrifuged, washed and resuspended at a concentration of 2.5 × 105cell/ml in RPMI supplemented with EOMS or TTO and incubated at 28°C. Essential oil concentrations used in the test were equivalent to 1, 2, 4, and 8 times the MIC. At predetermined time points (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours) of incubation, 100 μl aliquots were removed from the test solution and tenfold serial dilutions were performed. 
100 μl aliquot from each dilution was spread on the surface of Sabouraud dextrose agar plates and incubated at 37°C for 48 h for determination of CFU/ml.\n[SUBTITLE] Cell lines [SUBSECTION] Monomac-6, a human tumour cell line which was initially obtained from peripheral blood of a 60-year-old man with acute monocytic leukaemia, and L929, a fibroblast-like cell line cloned from strain L (the parent strain was derived from normal subartaneous areolar and adipose tissue of a male C3H/An mouse) were grown in a humidified atmosphere containing 5% of CO2 at 37°C. The culture medium consisted of RPMI 1640 with glutamine, 10% FBS (foetal bovine serum) and antibiotics. Every three or four days the cultures were split.\nMonomac-6, a human tumour cell line which was initially obtained from peripheral blood of a 60-year-old man with acute monocytic leukaemia, and L929, a fibroblast-like cell line cloned from strain L (the parent strain was derived from normal subartaneous areolar and adipose tissue of a male C3H/An mouse) were grown in a humidified atmosphere containing 5% of CO2 at 37°C. The culture medium consisted of RPMI 1640 with glutamine, 10% FBS (foetal bovine serum) and antibiotics. Every three or four days the cultures were split.\n[SUBTITLE] Cytotoxicity assay [SUBSECTION] The cytotoxicity was tested by the determination of the cell ATP level by ViaLight® Plus Kit (Lonza). The method is based upon the bioluminescent measurement of ATP that is present in all metabolically active cells. The bioluminescent method utilizes an enzyme, luciferase, which catalyses the formation of light from ATP and luciferin. The emitted light intensity is linearly related to the ATP concentration and is measured using a luminometer. To perform cytotoxicity tests, cells were recovered and counted and adjusted to the concentration 106/ml. The examinations were carried out for essential oils (EOMS, TTO and JO) and the control (cells not treated). Various 1:2 dilutions of the above mentioned oils were prepared in the medium (RMPI 1640, 10% FBS, antibiotics) in order to achieve final concentrations in the wells: 1000-500-250-125-62.5-31-16-8-4-2-1-0 mg/L. Each concentration was tested in triplicate. After adding oils into appropriate wells, cells were added to each well to obtain the concentration of 105cells/well and incubated for 2 h at 37°C. Plates were left in a room temperature to cool for 10 minutes and then the Cell Lysis Reagent was added to each well to extract ATP form the cells. Next, after 10 minutes the AMR Plus (ATP Monitoring Reagent Plus) was added and after 2 more minutes the luminescence was read using a microplate luminometer (TECAN).\nThe cytotoxicity was tested by the determination of the cell ATP level by ViaLight® Plus Kit (Lonza). The method is based upon the bioluminescent measurement of ATP that is present in all metabolically active cells. The bioluminescent method utilizes an enzyme, luciferase, which catalyses the formation of light from ATP and luciferin. The emitted light intensity is linearly related to the ATP concentration and is measured using a luminometer. To perform cytotoxicity tests, cells were recovered and counted and adjusted to the concentration 106/ml. The examinations were carried out for essential oils (EOMS, TTO and JO) and the control (cells not treated). Various 1:2 dilutions of the above mentioned oils were prepared in the medium (RMPI 1640, 10% FBS, antibiotics) in order to achieve final concentrations in the wells: 1000-500-250-125-62.5-31-16-8-4-2-1-0 mg/L. 
Each concentration was tested in triplicate. After adding oils into appropriate wells, cells were added to each well to obtain the concentration of 105cells/well and incubated for 2 h at 37°C. Plates were left in a room temperature to cool for 10 minutes and then the Cell Lysis Reagent was added to each well to extract ATP form the cells. Next, after 10 minutes the AMR Plus (ATP Monitoring Reagent Plus) was added and after 2 more minutes the luminescence was read using a microplate luminometer (TECAN).\n[SUBTITLE] Mice [SUBSECTION] Female CD1 mice obtained from Harlan Italy Laboratories (Udine, Italy) were used at 4 to 6 weeks of age. Mice were allowed to rest for 1 week before the experiment; by that time the animals were roughly 5 to 7 weeks old. Animals were used under specific-pathogen-free conditions that included testing sentinels for unwanted infections; according to the Federation of European Laboratory Animal Science Association standards, no infections were detected.\nThe experimental research was approved on 25 January 2008 by the Ethics Committee of the University of Perugia.\nFemale CD1 mice obtained from Harlan Italy Laboratories (Udine, Italy) were used at 4 to 6 weeks of age. Mice were allowed to rest for 1 week before the experiment; by that time the animals were roughly 5 to 7 weeks old. Animals were used under specific-pathogen-free conditions that included testing sentinels for unwanted infections; according to the Federation of European Laboratory Animal Science Association standards, no infections were detected.\nThe experimental research was approved on 25 January 2008 by the Ethics Committee of the University of Perugia.\n[SUBTITLE] Infection and treatment [SUBSECTION] Mice infection was performed as previously described with minor adaptations [23]. Mice were maintained under pseudoestrus condition by subcutaneous injection of 0.2 mg of estradiol valerate in 100 μl of sesame oil (Sigma-Aldrich) 6 days prior to infection and weekly until the completion of the study. Mice anaesthetized with 2.5-3.5 (v/v) isofluorane gas were infected twice at a 24 h interval with 10 μl of 109 cell/ml of C. albicans gLUC59 or the control strain. Cell suspensions were administered from a mechanical pipette into the vaginal lumen, close to the cervix. To favour vaginal contact and adsorption of fungal cells, mice were held head down for 1 min following inoculation. Mice were then allowed to recover for 24-48 h, during which the Candida infection was established.\nThe intravaginal treatment with TTO, EOMS and JO (500 μg/10 μl/mouse) was begun 2 h before the first challenge and then it was repeated every two days until day +21.\nMice infection was performed as previously described with minor adaptations [23]. Mice were maintained under pseudoestrus condition by subcutaneous injection of 0.2 mg of estradiol valerate in 100 μl of sesame oil (Sigma-Aldrich) 6 days prior to infection and weekly until the completion of the study. Mice anaesthetized with 2.5-3.5 (v/v) isofluorane gas were infected twice at a 24 h interval with 10 μl of 109 cell/ml of C. albicans gLUC59 or the control strain. Cell suspensions were administered from a mechanical pipette into the vaginal lumen, close to the cervix. To favour vaginal contact and adsorption of fungal cells, mice were held head down for 1 min following inoculation. 
Mice were then allowed to recover for 24-48 h, during which the Candida infection was established.\nThe intravaginal treatment with TTO, EOMS and JO (500 μg/10 μl/mouse) was begun 2 h before the first challenge and then it was repeated every two days until day +21.\n[SUBTITLE] Monitoring of mouse vaginal infection [SUBSECTION] To monitor the infection during the treatment with essential oil, every day post-infection (starting 48 h after challenge) 10 μl (1 mg/ml in 1:4 methanol:H2O) of coelenterazine was added to the vaginal lumen. Afterwards, mice were imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. In selected experiments mice were anaesthetized with 2.5% isoflurane and then held head down, the vaginal lumen was thoroughly washed with 150 μl of saline. To determine the fungal load in the vagina, 50 μl of the lavage fluids from each mouse were plated on YPD agar plus chloramphenicol (50 μg/ml), then CFUs were evaluated.\nTo monitor the infection during the treatment with essential oil, every day post-infection (starting 48 h after challenge) 10 μl (1 mg/ml in 1:4 methanol:H2O) of coelenterazine was added to the vaginal lumen. Afterwards, mice were imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. In selected experiments mice were anaesthetized with 2.5% isoflurane and then held head down, the vaginal lumen was thoroughly washed with 150 μl of saline. To determine the fungal load in the vagina, 50 μl of the lavage fluids from each mouse were plated on YPD agar plus chloramphenicol (50 μg/ml), then CFUs were evaluated.\n[SUBTITLE] Statistical analysis [SUBSECTION] Differences between essential oil treated and saline treated mice were evaluated by the non-parametric Mann-Whitney U-test. Viable count data from time kill assay and yeast and hyphae survival test were compared using the Student's t-test (two-tailed). P-values of < 0.05 were considered significant.\nDifferences between essential oil treated and saline treated mice were evaluated by the non-parametric Mann-Whitney U-test. Viable count data from time kill assay and yeast and hyphae survival test were compared using the Student's t-test (two-tailed). P-values of < 0.05 were considered significant.", "Mentha suaveolens essential oil was kindly provided by the Department of Chemistry and Drug Technologies, University of Rome \"La Sapienza\", Italy. It was obtained from wild-type plants grown in Tarquinia forests located around 60 miles from Rome. The oil was extracted by four-hour hydro distillation of the leaves using a Clevenger-type apparatus as previously described [15], then analyzed for chemical composition by gas chromatography and mass spectroscopy (DMePe BETA PS086, 0.25 mm film on a 25 m column, diameter of 0.25 mm, operating at 220°C and eluting with helium). Compounds were identified by the application of the NIST 08 Mass Spectral Library. Analysis revealed that piperitenone oxide constitutes 90% of EOMS. 
Limonene and 1,8-cineole were also present, among other minor constituents.\nEssential oils of tea tree (Melaleuca alternifolia) (TTO) and jasmine oil (Jasminum grandiflorum) (JO), also used in this research, were commercial oils purchased from Named (Lesmo, Italy) and Erboristeria Magentina (Torino, Italy), respectively. They were obtained by steam distillation from leaves and young branches of tea tree, and from flowers of jasmine. TTO is pure, extracted without additives, and was used as a positive control because of its documented antifungal activity [16,17], while jasmine oil, which was shown to be inactive against fungal growth, was used as a negative control [18].\nFluconazole was obtained from Sigma-Aldrich (Germany).", "Different strains of Candida albicans were used in the study: four clinical isolates from AIDS patients (AIDS68, AIDS6, AIDS37 and AIDS126); CO23, isolated from a subject with vulvo-vaginal candidiasis and susceptible to micafungin and fluconazole, together with the drug-resistant strains CO23RFK (micafungin-resistant) and CO23RFLU (fluconazole-resistant) [19]; CA2, an echinocandin-resistant, non-germinative strain that grows as a pure yeast form at 28-37°C in conventional mycologic media [20]; GR5, isolated from a woman with recurrent vaginal candidiasis; 3153, intrinsically resistant to fluconazole; and ATCC10231 and ATCC24433. C. albicans CA1398 carrying the ACT1p-gLUC59 fusion (C. albicans gLUC59) or C. albicans CA1398 that did not express gLUC59 (control strain) were used in the models of vaginal Candida infections [14]. For experimental infections, cells from stock cultures in YPD agar (1% yeast extract, 2% peptone, 2% glucose, 1.5% agar, all w/v) with 50 μg/ml chloramphenicol were grown in YPD broth (1% yeast extract, 2% peptone, 2% glucose, all w/v) at room temperature for 24 h, then harvested by centrifugation, washed, counted in a haemocytometer, and resuspended to the desired concentration in sterile physiological saline. In order to examine the effect of the oils on the mycelial form of Candida, yeasts were grown for 4 h in RPMI 1640 plus 10% FBS at 37°C, then hyphae were washed and incubated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h at 37°C. Yeasts for infection were harvested from overnight cultures in YPD agar plates and adjusted to a concentration of 10⁹/ml in sterile physiological saline.", "The Minimal Inhibitory Concentration (MIC) was determined by the micro-broth dilution method according to the Clinical and Laboratory Standards Institute/National Committee for Clinical Laboratory Standards (CLSI/NCCLS) Approved Standard M27-A3, 2008 [21]. A 0.5 g/L fluconazole solution was prepared by dissolving the agent in endotoxin-free water. Solutions of the essential oils (100 g/L) were prepared in RPMI 1640. Briefly, to determine the MIC of EOMS, TTO, JO or fluconazole, RPMI-1640 supplemented with MOPS at pH 7 was used. EOMS, TTO and JO were diluted in RPMI-1640 supplemented with Tween 80 (final concentration 0.001% v/v). The dilutions, ranging from 0.01219 to 12.48 g/L of the essential oils, were prepared in 96-well plates. The inoculum size was about 2.5 × 10³ cells/ml. The plates were incubated at 30°C for 24-48 h. To determine hyphal survival, C. albicans cells were first grown for 4 h in RPMI supplemented with 10% FBS and then treated with the different essential oils.",
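As an illustration of the two-fold dilution scheme just described (not part of the original methods), the short Python sketch below regenerates the quoted concentration range: ten successive halvings of 12.48 g/L end at about 0.0122 g/L, matching the stated 0.01219-12.48 g/L range, and the reported MIC/MFC values (0.39, 0.78, 1.56, 3.12 g/L) all lie on this series. The function name is an assumption made for the example.

```python
# Two-fold (serial) dilution series for the broth microdilution MIC assay.
# Each well holds half the concentration of the previous one, starting from
# the highest essential-oil concentration tested (12.48 g/L).

def twofold_series(top_g_per_l: float, n_wells: int) -> list:
    """Concentrations (g/L), highest well first."""
    return [top_g_per_l / 2 ** i for i in range(n_wells)]


for conc in twofold_series(12.48, 11):
    print(f"{conc:.5g} g/L")
# 12.48, 6.24, 3.12, 1.56, 0.78, 0.39, 0.195, 0.0975, 0.04875, 0.024375, 0.0121875
```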
"The Minimal Fungicidal Concentration (MFC) was determined as the lowest concentration of fluconazole or essential oils at which no microbial growth was observed. For the MFC determination, Sabouraud dextrose agar plates were seeded with 10 μl of cell suspensions taken from the wells of the MIC assay plates in which no cell growth was observed. These plates were incubated at 30°C for 24-48 h and colony-forming unit (CFU) growth was evaluated.", "To confirm the fungicidal activity of EOMS, time-kill procedures were performed as described by Klepser [22]. Cells sub-cultured in YPD at 28°C for 24 h were centrifuged, washed and resuspended at a concentration of 2.5 × 10⁵ cells/ml in RPMI supplemented with EOMS or TTO and incubated at 28°C. Essential oil concentrations used in the test were equivalent to 1, 2, 4, and 8 times the MIC. At predetermined time points (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours) of incubation, 100 μl aliquots were removed from the test solution and tenfold serial dilutions were performed. A 100 μl aliquot from each dilution was spread on the surface of Sabouraud dextrose agar plates and incubated at 37°C for 48 h for determination of CFU/ml.", "Monomac-6, a human tumour cell line initially obtained from the peripheral blood of a 60-year-old man with acute monocytic leukaemia, and L929, a fibroblast-like cell line cloned from strain L (the parent strain was derived from normal subcutaneous areolar and adipose tissue of a male C3H/An mouse), were grown in a humidified atmosphere containing 5% CO2 at 37°C. The culture medium consisted of RPMI 1640 with glutamine, 10% FBS (foetal bovine serum) and antibiotics. Every three or four days the cultures were split.", "Cytotoxicity was tested by determining the cellular ATP level with the ViaLight® Plus Kit (Lonza). The method is based upon the bioluminescent measurement of ATP, which is present in all metabolically active cells. The bioluminescent method utilizes an enzyme, luciferase, which catalyses the formation of light from ATP and luciferin. The emitted light intensity is linearly related to the ATP concentration and is measured using a luminometer. To perform the cytotoxicity tests, cells were recovered, counted and adjusted to a concentration of 10⁶/ml. The examinations were carried out for the essential oils (EOMS, TTO and JO) and the control (untreated cells). Various 1:2 dilutions of the above-mentioned oils were prepared in the medium (RPMI 1640, 10% FBS, antibiotics) in order to achieve final concentrations in the wells of 1000-500-250-125-62.5-31-16-8-4-2-1-0 mg/L. Each concentration was tested in triplicate. After adding the oils to the appropriate wells, cells were added to each well to obtain a concentration of 10⁵ cells/well and incubated for 2 h at 37°C. Plates were left at room temperature to cool for 10 minutes and then Cell Lysis Reagent was added to each well to extract ATP from the cells. Next, after 10 minutes the AMR Plus (ATP Monitoring Reagent Plus) was added and after 2 more minutes the luminescence was read using a microplate luminometer (TECAN).", "Female CD1 mice obtained from Harlan Italy Laboratories (Udine, Italy) were used at 4 to 6 weeks of age. Mice were allowed to rest for 1 week before the experiment; by that time the animals were roughly 5 to 7 weeks old. 
Animals were used under specific-pathogen-free conditions that included testing sentinels for unwanted infections; according to the Federation of European Laboratory Animal Science Association standards, no infections were detected.\nThe experimental research was approved on 25 January 2008 by the Ethics Committee of the University of Perugia.", "Mice infection was performed as previously described with minor adaptations [23]. Mice were maintained under pseudoestrus condition by subcutaneous injection of 0.2 mg of estradiol valerate in 100 μl of sesame oil (Sigma-Aldrich) 6 days prior to infection and weekly until the completion of the study. Mice anaesthetized with 2.5-3.5 (v/v) isofluorane gas were infected twice at a 24 h interval with 10 μl of 109 cell/ml of C. albicans gLUC59 or the control strain. Cell suspensions were administered from a mechanical pipette into the vaginal lumen, close to the cervix. To favour vaginal contact and adsorption of fungal cells, mice were held head down for 1 min following inoculation. Mice were then allowed to recover for 24-48 h, during which the Candida infection was established.\nThe intravaginal treatment with TTO, EOMS and JO (500 μg/10 μl/mouse) was begun 2 h before the first challenge and then it was repeated every two days until day +21.", "To monitor the infection during the treatment with essential oil, every day post-infection (starting 48 h after challenge) 10 μl (1 mg/ml in 1:4 methanol:H2O) of coelenterazine was added to the vaginal lumen. Afterwards, mice were imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. In selected experiments mice were anaesthetized with 2.5% isoflurane and then held head down, the vaginal lumen was thoroughly washed with 150 μl of saline. To determine the fungal load in the vagina, 50 μl of the lavage fluids from each mouse were plated on YPD agar plus chloramphenicol (50 μg/ml), then CFUs were evaluated.", "Differences between essential oil treated and saline treated mice were evaluated by the non-parametric Mann-Whitney U-test. Viable count data from time kill assay and yeast and hyphae survival test were compared using the Student's t-test (two-tailed). P-values of < 0.05 were considered significant.", "[SUBTITLE] MIC, MFC and Killing Kinetics [SUBSECTION] The initial determination of the antifungal activity of essential oils (EOMS, TTO and JO) was performed in vitro by standardized CLSI/NCCLS methods[21] and this was done against all strains of C. albicans used throughout this study. MIC values fell in a range of 0.39-0.78 g/L for EOMS and 0.78-3.12 g/L for TTO. The MFC values ranged from 0.39-1.56 g/L for EOMS and 1.56-6.24 for TTO, thus very close to MIC values. In our experimental system, TTO was less efficient than EOMS especially when the oils were tested against fluconazole resistant strains. In addition MFC values for TTO were higher than EOMS for all strains tested (table 1). JO, used as a negative control, did not affect the growth of any strain tested.\nAntifungal activity of EOMS, TTO, JO and fluconazole on different Candida albicans strains.\nTo obtain more insight into the anticandidal activity of EOMS on gLUC59, i.e the strain used in the experimental vaginal infection (see below), a time-kill test at concentrations equivalent to 1, 2, 4, and 8 times of the MIC value (0.39 g/L) was performed. 
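The following minimal Python sketch (not part of the original analysis) illustrates the two significance tests named in the Statistical analysis section above, as they would be applied to in vivo fungal loads and to time-kill viable counts; all numerical values are invented purely to show the calls and are not data from the study.

```python
# Illustration of the two tests used in the study: the non-parametric
# Mann-Whitney U-test for essential oil-treated vs saline-treated mice, and
# the two-tailed Student's t-test for viable counts from the time-kill and
# survival assays. All values below are hypothetical.
from scipy.stats import mannwhitneyu, ttest_ind

# Hypothetical vaginal CFU counts per mouse (6 mice per group)
oil_treated = [1200, 800, 950, 400, 1500, 700]
saline = [5200, 6100, 4800, 7300, 5600, 6900]
u_stat, p_u = mannwhitneyu(oil_treated, saline, alternative="two-sided")

# Hypothetical triplicate CFU/ml values at one time point of the time-kill assay
treated_cfu = [2.1e5, 1.8e5, 2.4e5]
control_cfu = [9.6e5, 1.1e6, 1.0e6]
t_stat, p_t = ttest_ind(treated_cfu, control_cfu)  # two-tailed by default

print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.4f}")
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```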
As reported in Figure 1, at a concentration of 2 × MIC (0.78 g/L), the number of colonies was significantly reduced after 24 hrs of incubation (P < 0.05) and the total fungicidal effect was observed within 48 hrs of contact. The results demonstrated that C. albicans gLUC59 was highly susceptible to EOMS. In parallel we analyzed the time kill test for TTO, confirming that this oil exerted a fungicidal effect within 48 hrs at a concentration higher (3,12 g/L; P < 0.05) than that observed with the EOMS (0,78 g/L).\nTime kill curve of EOMS and TTO against Candida albicans gLUC59. Cells were untreated (control) or treated with 0.39, 0.78, 1.56, or 3.12 g/L of EOMS or treated with 3.12, 6.25, 12.5 or 25 g/L of TTO for different time periods (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours). After incubation survival cells were determined by cultivation on Sabouraud dextrose agar plates at 37°C for 48 h. Results are expressed as CFU/ml and indicated as mean ± SEM of triplicate samples. Data are representative of one of three independent experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.\nThe initial determination of the antifungal activity of essential oils (EOMS, TTO and JO) was performed in vitro by standardized CLSI/NCCLS methods[21] and this was done against all strains of C. albicans used throughout this study. MIC values fell in a range of 0.39-0.78 g/L for EOMS and 0.78-3.12 g/L for TTO. The MFC values ranged from 0.39-1.56 g/L for EOMS and 1.56-6.24 for TTO, thus very close to MIC values. In our experimental system, TTO was less efficient than EOMS especially when the oils were tested against fluconazole resistant strains. In addition MFC values for TTO were higher than EOMS for all strains tested (table 1). JO, used as a negative control, did not affect the growth of any strain tested.\nAntifungal activity of EOMS, TTO, JO and fluconazole on different Candida albicans strains.\nTo obtain more insight into the anticandidal activity of EOMS on gLUC59, i.e the strain used in the experimental vaginal infection (see below), a time-kill test at concentrations equivalent to 1, 2, 4, and 8 times of the MIC value (0.39 g/L) was performed. As reported in Figure 1, at a concentration of 2 × MIC (0.78 g/L), the number of colonies was significantly reduced after 24 hrs of incubation (P < 0.05) and the total fungicidal effect was observed within 48 hrs of contact. The results demonstrated that C. albicans gLUC59 was highly susceptible to EOMS. In parallel we analyzed the time kill test for TTO, confirming that this oil exerted a fungicidal effect within 48 hrs at a concentration higher (3,12 g/L; P < 0.05) than that observed with the EOMS (0,78 g/L).\nTime kill curve of EOMS and TTO against Candida albicans gLUC59. Cells were untreated (control) or treated with 0.39, 0.78, 1.56, or 3.12 g/L of EOMS or treated with 3.12, 6.25, 12.5 or 25 g/L of TTO for different time periods (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours). After incubation survival cells were determined by cultivation on Sabouraud dextrose agar plates at 37°C for 48 h. Results are expressed as CFU/ml and indicated as mean ± SEM of triplicate samples. Data are representative of one of three independent experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). 
*P-values of < 0.05 were considered significant.\n[SUBTITLE] Yeasts and the mycelial form have different susceptibilities to EOMS [SUBSECTION] Given that EOMS affects C. albicans yeast forms of growth, we extended our investigations to the virulent mycelial form of C. albicans. To this end C. albicans was cultured at 37°C for 4 h in the presence of 10% of FBS serum. Microscopic examination demonstrated >90% mycelial conversion under our conditions. As shown in Figure 2, hyphal forms were less susceptible to EOMS than yeast forms, since inhibition of hyphal growth is obtained at a significantly higher concentration (0.098 g/L) than the concentration required to inhibit yeast forms of growth. In fact, a concentration of 0.5 g/L was able to completely inhibit the yeast cells but not the hyphal form (Figure 2A-B) Notably, the EOMS showed a greater inhibitory effect than TTO, in both yeast cells (0.05 vs 0.098 g/L respectively) and the hyphal form (0.098 vs 0.39 g/L respectively). Jasmine Oil did not affect the viability of yeast or mycelial cells. The lack of effect of EOMS 0.5 g/L on hyphal survival was documented by microscopic examination of hyphal damage observed after addition of EOMS (Figure 2C).\nEffect of essential oil on yeast and hyphae survival. gLUC59 Candida albicans yeast cells (A) and preformed hyphae (B) were treated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h. After incubation 10 μl of coelenterazine substrate (1 mg/ml) was added to each well and samples were read using a luminometer. Results are expressed as percentage of yeast or hyphae survival and indicated as mean ± SD of triplicate samples are from one of three experiments with similar results. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant. The effect on preformed hyphae was microscopically examined after 24 h of treatment with essential oil (C). Original magnification of 20x or 40x is indicated in the micrographs. The results are representative of one of three independent experiments.\nExperiments were also performed to examine whether the EOMS expressed cytotoxicity towards immune cells. To this end, Monomac 6 and L929 cell lines were treated with various concentrations of EOMS for 2 and 24 h. The results reported in Figure 3 show that high concentrations of EOMS (0.5 and 1 g/L), i.e. higher than the MIC value, were necessary to exert cytotoxicity on Monomac 6 and L929 cell lines. Using the same concentrations a similar trend was observed for TTO, while no cytotoxicity resulted from JO treatment.\nCytotoxicity of essential oils on mammalian cells. Monomac 6 and L929 cells were treated with different concentrations of essential oils for 2 h. The cytotoxicity was tested by the determination of the cell ATP level by a bioluminescent method. Results, expressed as Relative luciferase activity (RLUC), represent the mean ± SD of three different experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.\nGiven that EOMS affects C. albicans yeast forms of growth, we extended our investigations to the virulent mycelial form of C. albicans. To this end C. albicans was cultured at 37°C for 4 h in the presence of 10% of FBS serum. Microscopic examination demonstrated >90% mycelial conversion under our conditions. 
As shown in Figure 2, hyphal forms were less susceptible to EOMS than yeast forms, since inhibition of hyphal growth is obtained at a significantly higher concentration (0.098 g/L) than the concentration required to inhibit yeast forms of growth. In fact, a concentration of 0.5 g/L was able to completely inhibit the yeast cells but not the hyphal form (Figure 2A-B) Notably, the EOMS showed a greater inhibitory effect than TTO, in both yeast cells (0.05 vs 0.098 g/L respectively) and the hyphal form (0.098 vs 0.39 g/L respectively). Jasmine Oil did not affect the viability of yeast or mycelial cells. The lack of effect of EOMS 0.5 g/L on hyphal survival was documented by microscopic examination of hyphal damage observed after addition of EOMS (Figure 2C).\nEffect of essential oil on yeast and hyphae survival. gLUC59 Candida albicans yeast cells (A) and preformed hyphae (B) were treated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h. After incubation 10 μl of coelenterazine substrate (1 mg/ml) was added to each well and samples were read using a luminometer. Results are expressed as percentage of yeast or hyphae survival and indicated as mean ± SD of triplicate samples are from one of three experiments with similar results. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant. The effect on preformed hyphae was microscopically examined after 24 h of treatment with essential oil (C). Original magnification of 20x or 40x is indicated in the micrographs. The results are representative of one of three independent experiments.\nExperiments were also performed to examine whether the EOMS expressed cytotoxicity towards immune cells. To this end, Monomac 6 and L929 cell lines were treated with various concentrations of EOMS for 2 and 24 h. The results reported in Figure 3 show that high concentrations of EOMS (0.5 and 1 g/L), i.e. higher than the MIC value, were necessary to exert cytotoxicity on Monomac 6 and L929 cell lines. Using the same concentrations a similar trend was observed for TTO, while no cytotoxicity resulted from JO treatment.\nCytotoxicity of essential oils on mammalian cells. Monomac 6 and L929 cells were treated with different concentrations of essential oils for 2 h. The cytotoxicity was tested by the determination of the cell ATP level by a bioluminescent method. Results, expressed as Relative luciferase activity (RLUC), represent the mean ± SD of three different experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.\n[SUBTITLE] Effect of EOMS on vaginal candidiasis [SUBSECTION] Given the encouraging results observed in vitro, we wondered whether these beneficial effects against C. albicans could be reproduced in an in vivo system. To this purpose we exploited the new in vivo imaging technique which we have recently developed in our laboratory [14,24] to assess therapeutic efficacy in an estrogen-dependent mouse model of vaginal candidiasis.\nEstradiol treated mice were infected with C. albicans expressing luciferase gene gLUC and EOMS, TTO or JO were administered intravaginally 2 h before the first challenge and then every two days. The course of the infection was monitored at various days post challenge by in vivo imaging (Figure 4). The fungal load in the vagina was quantified as photon emission as well as CFU from vaginal fluids. 
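As a minimal illustration (not part of the original methods) of how such lavage plate counts translate into a per-mouse fungal load, given that 50 μl of the 150 μl lavage fluid is plated, one can scale the colony count by the fraction of the lavage plated and by any extra dilution; the counts used here are hypothetical.

```python
# Converting a lavage plate count into a per-mouse fungal load: 50 μl of the
# 150 μl vaginal lavage is plated, so colonies are scaled by the fraction of
# the lavage actually plated and by any extra dilution of the fluid.

LAVAGE_UL = 150.0
PLATED_UL = 50.0


def cfu_per_lavage(colonies: int, dilution_factor: int = 1) -> float:
    """Total CFU recovered from one mouse's vaginal lavage."""
    return colonies * dilution_factor * (LAVAGE_UL / PLATED_UL)


print(cfu_per_lavage(180))       # undiluted plate        -> 540.0 CFU
print(cfu_per_lavage(35, 100))   # plate from 1:100 fluid -> 10500.0 CFU
```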
The results reported in Figure 5 show that there is a significant reduction of Candida load in mice treated with EOMS with respect to diluent (saline) treated mice starting from 15 days post infection, and this beneficial effect was maintained until 21 days post infection. In this model, and under the conditions tested, TTO was only minimally effective in causing a significant reduction of vaginal fungus load, measured as photon emission at 9 and 15 days. No effect was recorded after 21 days of infection.\nIn vivo imaging of mice vaginally infected with Candida albicans and treated with essential oils. The vaginal lumen of mice under pseudoestrus condition were infected for 2 consecutive days with 10 μl of a 109 cell/ml suspension of Candida albicans gLUC59 (first three mice of each group) or control strain (fourth mouse of each group) and treated with the different essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection mice were treated intravaginally with 10 μg of coelenterazine and imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. Data are from one of three experiments with similar results.\nMeasurement of Candida albicans load in mice treated with different essential oils. The vaginal lumen of mice under pseudoestrus condition were infected with Candida albicans gLUC59 and then treated with the essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection 6 mice per group were anaesthetized, imaged in the IVIS-200TM imaging system, and the vaginal lumen was thoroughly washed with 150 μl of saline using a mechanical pipette. The fungal burden of vaginal lavage fluids was determined by evaluating the colony forming units (CFU) assay.\nFor CFU assay 50 μl of the lavage fluids were diluted and seeded in YPD agar plus chloramphenicol. Results were reported and the statistical significance was evaluated with the non-parametric Mann-Whitney U-test. *P < 0.05 (essential oil treated mice versus saline treated mice).\nGiven the encouraging results observed in vitro, we wondered whether these beneficial effects against C. albicans could be reproduced in an in vivo system. To this purpose we exploited the new in vivo imaging technique which we have recently developed in our laboratory [14,24] to assess therapeutic efficacy in an estrogen-dependent mouse model of vaginal candidiasis.\nEstradiol treated mice were infected with C. albicans expressing luciferase gene gLUC and EOMS, TTO or JO were administered intravaginally 2 h before the first challenge and then every two days. The course of the infection was monitored at various days post challenge by in vivo imaging (Figure 4). The fungal load in the vagina was quantified as photon emission as well as CFU from vaginal fluids. The results reported in Figure 5 show that there is a significant reduction of Candida load in mice treated with EOMS with respect to diluent (saline) treated mice starting from 15 days post infection, and this beneficial effect was maintained until 21 days post infection. In this model, and under the conditions tested, TTO was only minimally effective in causing a significant reduction of vaginal fungus load, measured as photon emission at 9 and 15 days. 
No effect was recorded after 21 days of infection.\nIn vivo imaging of mice vaginally infected with Candida albicans and treated with essential oils. The vaginal lumen of mice under pseudoestrus condition were infected for 2 consecutive days with 10 μl of a 109 cell/ml suspension of Candida albicans gLUC59 (first three mice of each group) or control strain (fourth mouse of each group) and treated with the different essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection mice were treated intravaginally with 10 μg of coelenterazine and imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. Data are from one of three experiments with similar results.\nMeasurement of Candida albicans load in mice treated with different essential oils. The vaginal lumen of mice under pseudoestrus condition were infected with Candida albicans gLUC59 and then treated with the essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection 6 mice per group were anaesthetized, imaged in the IVIS-200TM imaging system, and the vaginal lumen was thoroughly washed with 150 μl of saline using a mechanical pipette. The fungal burden of vaginal lavage fluids was determined by evaluating the colony forming units (CFU) assay.\nFor CFU assay 50 μl of the lavage fluids were diluted and seeded in YPD agar plus chloramphenicol. Results were reported and the statistical significance was evaluated with the non-parametric Mann-Whitney U-test. *P < 0.05 (essential oil treated mice versus saline treated mice).", "The initial determination of the antifungal activity of essential oils (EOMS, TTO and JO) was performed in vitro by standardized CLSI/NCCLS methods[21] and this was done against all strains of C. albicans used throughout this study. MIC values fell in a range of 0.39-0.78 g/L for EOMS and 0.78-3.12 g/L for TTO. The MFC values ranged from 0.39-1.56 g/L for EOMS and 1.56-6.24 for TTO, thus very close to MIC values. In our experimental system, TTO was less efficient than EOMS especially when the oils were tested against fluconazole resistant strains. In addition MFC values for TTO were higher than EOMS for all strains tested (table 1). JO, used as a negative control, did not affect the growth of any strain tested.\nAntifungal activity of EOMS, TTO, JO and fluconazole on different Candida albicans strains.\nTo obtain more insight into the anticandidal activity of EOMS on gLUC59, i.e the strain used in the experimental vaginal infection (see below), a time-kill test at concentrations equivalent to 1, 2, 4, and 8 times of the MIC value (0.39 g/L) was performed. As reported in Figure 1, at a concentration of 2 × MIC (0.78 g/L), the number of colonies was significantly reduced after 24 hrs of incubation (P < 0.05) and the total fungicidal effect was observed within 48 hrs of contact. The results demonstrated that C. albicans gLUC59 was highly susceptible to EOMS. In parallel we analyzed the time kill test for TTO, confirming that this oil exerted a fungicidal effect within 48 hrs at a concentration higher (3,12 g/L; P < 0.05) than that observed with the EOMS (0,78 g/L).\nTime kill curve of EOMS and TTO against Candida albicans gLUC59. 
Cells were untreated (control) or treated with 0.39, 0.78, 1.56, or 3.12 g/L of EOMS or treated with 3.12, 6.25, 12.5 or 25 g/L of TTO for different time periods (0, 0.5, 1, 2, 4, 6, 8, 24 and 48 hours). After incubation survival cells were determined by cultivation on Sabouraud dextrose agar plates at 37°C for 48 h. Results are expressed as CFU/ml and indicated as mean ± SEM of triplicate samples. Data are representative of one of three independent experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant.", "Given that EOMS affects C. albicans yeast forms of growth, we extended our investigations to the virulent mycelial form of C. albicans. To this end C. albicans was cultured at 37°C for 4 h in the presence of 10% of FBS serum. Microscopic examination demonstrated >90% mycelial conversion under our conditions. As shown in Figure 2, hyphal forms were less susceptible to EOMS than yeast forms, since inhibition of hyphal growth is obtained at a significantly higher concentration (0.098 g/L) than the concentration required to inhibit yeast forms of growth. In fact, a concentration of 0.5 g/L was able to completely inhibit the yeast cells but not the hyphal form (Figure 2A-B) Notably, the EOMS showed a greater inhibitory effect than TTO, in both yeast cells (0.05 vs 0.098 g/L respectively) and the hyphal form (0.098 vs 0.39 g/L respectively). Jasmine Oil did not affect the viability of yeast or mycelial cells. The lack of effect of EOMS 0.5 g/L on hyphal survival was documented by microscopic examination of hyphal damage observed after addition of EOMS (Figure 2C).\nEffect of essential oil on yeast and hyphae survival. gLUC59 Candida albicans yeast cells (A) and preformed hyphae (B) were treated with different concentrations of essential oils (EOMS, TTO and JO) for 24 h. After incubation 10 μl of coelenterazine substrate (1 mg/ml) was added to each well and samples were read using a luminometer. Results are expressed as percentage of yeast or hyphae survival and indicated as mean ± SD of triplicate samples are from one of three experiments with similar results. The statistical significance was evaluated with the Student's t-test (two-tailed). *P-values of < 0.05 were considered significant. The effect on preformed hyphae was microscopically examined after 24 h of treatment with essential oil (C). Original magnification of 20x or 40x is indicated in the micrographs. The results are representative of one of three independent experiments.\nExperiments were also performed to examine whether the EOMS expressed cytotoxicity towards immune cells. To this end, Monomac 6 and L929 cell lines were treated with various concentrations of EOMS for 2 and 24 h. The results reported in Figure 3 show that high concentrations of EOMS (0.5 and 1 g/L), i.e. higher than the MIC value, were necessary to exert cytotoxicity on Monomac 6 and L929 cell lines. Using the same concentrations a similar trend was observed for TTO, while no cytotoxicity resulted from JO treatment.\nCytotoxicity of essential oils on mammalian cells. Monomac 6 and L929 cells were treated with different concentrations of essential oils for 2 h. The cytotoxicity was tested by the determination of the cell ATP level by a bioluminescent method. Results, expressed as Relative luciferase activity (RLUC), represent the mean ± SD of three different experiments. The statistical significance was evaluated with the Student's t-test (two-tailed). 
*P-values of < 0.05 were considered significant.", "Given the encouraging results observed in vitro, we wondered whether these beneficial effects against C. albicans could be reproduced in an in vivo system. To this purpose we exploited the new in vivo imaging technique which we have recently developed in our laboratory [14,24] to assess therapeutic efficacy in an estrogen-dependent mouse model of vaginal candidiasis.\nEstradiol treated mice were infected with C. albicans expressing luciferase gene gLUC and EOMS, TTO or JO were administered intravaginally 2 h before the first challenge and then every two days. The course of the infection was monitored at various days post challenge by in vivo imaging (Figure 4). The fungal load in the vagina was quantified as photon emission as well as CFU from vaginal fluids. The results reported in Figure 5 show that there is a significant reduction of Candida load in mice treated with EOMS with respect to diluent (saline) treated mice starting from 15 days post infection, and this beneficial effect was maintained until 21 days post infection. In this model, and under the conditions tested, TTO was only minimally effective in causing a significant reduction of vaginal fungus load, measured as photon emission at 9 and 15 days. No effect was recorded after 21 days of infection.\nIn vivo imaging of mice vaginally infected with Candida albicans and treated with essential oils. The vaginal lumen of mice under pseudoestrus condition were infected for 2 consecutive days with 10 μl of a 109 cell/ml suspension of Candida albicans gLUC59 (first three mice of each group) or control strain (fourth mouse of each group) and treated with the different essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection mice were treated intravaginally with 10 μg of coelenterazine and imaged in the IVIS-200TM imaging system under anaesthesia with 2.5% isoflurane. Total photon emission from vaginal areas within the images (Region Of Interest, ROI) of each mouse was quantified with Living ImageR software package. Data are from one of three experiments with similar results.\nMeasurement of Candida albicans load in mice treated with different essential oils. The vaginal lumen of mice under pseudoestrus condition were infected with Candida albicans gLUC59 and then treated with the essential oils 2 h before the first challenge and then every two days. After 5, 7, 9, 15 and 21 days post-infection 6 mice per group were anaesthetized, imaged in the IVIS-200TM imaging system, and the vaginal lumen was thoroughly washed with 150 μl of saline using a mechanical pipette. The fungal burden of vaginal lavage fluids was determined by evaluating the colony forming units (CFU) assay.\nFor CFU assay 50 μl of the lavage fluids were diluted and seeded in YPD agar plus chloramphenicol. Results were reported and the statistical significance was evaluated with the non-parametric Mann-Whitney U-test. *P < 0.05 (essential oil treated mice versus saline treated mice).", "Human pathogenic fungi represent a significant proportion of the infectious agents affecting the immunocompromised host. 
The therapeutic options for these patients are hampered by i) the relative scarcity of active and safe antifungal drugs, most of which are essentially fungistatic rather than fungicidal, ii) antifungal drug resistance to the most active and widely used azole compounds, iii) the difficulties of devising and/or constantly maintaining effective infection control measures in the health care institutions. Overall, fungal infections in immunodepressed subjects are a very challenging problem for the health system.\nThus there is a clear demand for finding a new therapeutic approach in this era of increasing spreading of antimicrobial drug resistance and re-emergence of infectious diseases [25,26].\nRecently the use of TTO as a new approach in antifungal therapy has been proposed. This natural compound appears to be effective in vitro against multidrug resistant Candida and in vivo against mucosal candidiasis [27]. Moreover it has also been documented that terpinen-4-ol rather than 1,8-cineole is the most likely mediator of TTO activity or, at least, a main contributor to anti-Candida activity [16]. In this study we used TTO as a positive control in our in vitro and in vivo experimental system.\nRegarding the antimicrobial properties of EOMS recent evidence attributes larvicidal activity to this essential oil and its active compound [28]. Other important activities of EOMS include protective effects against hydrogen-peroxide-induced-cytotoxicity. Anti-Candida activity has been described for Mentha piperita [29]. Furthermore EOMS was effective against Gram positive and Gram negative microorganisms and fungi [13]. The main microbicidal components of EOMS were pulegone and piperitone oxide.\nIn this study we demonstrated for the first time that EOMS is endowed with potent anticandidal activity in vitro, both against azole-susceptible and azole-resistant Candida strains. In addition, EOMS was shown to be not only an inhibitor of Candida growth, but also able to actually kill the yeasts. We determined the time killing curves, and so discovered that EOMS was apparently more effective than the more extensively investigated TTO. All experiments were performed against a control, the jasmine oil, which proved totally ineffective.\nThe antifungal activity is manifested against both yeast and the mycelial form, although higher EOMS concentrations were required to kill these latter forms of growth. Finally, we provide evidence that intravaginal administration of EOMS in vivo is also efficacious to some degree.\nFor the in vivo assay, a stringent and controlled model of vaginal infection of mice was used. This exploits a novel cell surface luciferase as reporter gene, constructed by fusing a synthetic, codon-optimized version of the Gaussia princeps luciferase gene to Candida albicans PGA59, which encodes a glycosylphosphatidyl inositol-linked cell wall protein [14]. This technique allows a continuous, non invasive monitoring of the spatial and temporal progression of vaginal infection in a small number of live mice. The model proved useful in assaying for anticandidal protection in actively or passively immunized animals [24]. The method was paralleled by a more traditional determination of vaginal fungus load in the vagina by CFU. 
The in vivo imaging technique proved to be much more sensitive than the classic CFU method for at least two different reasons: 1) the vaginal wash does not completely clear the vaginal lumen, because the Candida hyphae are well attached to the tissue; 2) several hyphae often grew as a single colony, causing an underestimation of the fungal load.\nOverall, we show here that EOMS accelerates the clearance of the fungus during vaginal candidiasis, and this accelerated clearance of Candida is demonstrated by both photon emission and CFU measurements. The EOMS activity in our model seems superior, at least after 21 days of infection, to that of TTO, which has previously been found particularly efficacious in a rat model of vaginal candidiasis [16].\nOur data are potentially relevant to the treatment of Candida vulvovaginal infection (VVC). This is a frequent and commonly distressing disease affecting 70-75% of women of childbearing age worldwide at least once during their lives. Predisposing factors for developing an acute form of vaginal candidiasis include antibiotic and oral contraceptive usage, hormone replacement therapy, pregnancy, uncontrolled diabetes mellitus and African American ethnicity [30,31]. Five percent, and possibly up to 10%, of women with a primary episode subsequently experience frustrating recurrent VVC (RVVC), which is defined as at least three to four specific episodes within one year [3,32].", "This study shows for the first time that: i) EOMS has considerable candidastatic and candidacidal activity in vitro; and ii) EOMS administration in vivo accelerates the clearance of C. albicans during vaginal infection.\nThe high impact of this infection and the difficulty of finding an effective therapy reinforce the need to search for an alternative therapeutic approach to integrate or even replace the current treatment. The present results could provide the groundwork for further investigations, particularly aimed at identifying the therapeutically active anticandidal EOMS component(s).", "A patent related to piperitenone oxide, the main component of Mentha suaveolens essential oil, and its possible industrial application has been filed by LA and RR.", "DP and AR carried out the in vivo experiments and part of the MIC evaluation. LA, EV, FM and RR carried out the essential oil extraction, the MIC and time-kill curve experiments. FB participated in the design and coordination of the study. AV conceived of the study and was primarily involved in the conceptual planning of the paper. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6882/11/18/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Clinicians' perceptions of organizational readiness for change in the context of clinical information system projects: insights from two cross-sectional surveys.
21356080
The adoption and diffusion of clinical information systems has become one of the critical benchmarks for achieving several healthcare organizational reform priorities, including home care, primary care, and integrated care networks. However, these systems are often strongly resisted by the same community that is expected to benefit from their use. Prior research has found that early perceptions and beliefs play a central role in shaping future attitudes and behaviors such as negative rumors, lack of involvement, and resistance to change. In this line of research, this paper builds on the change management and information systems literature and identifies variables associated with clinicians' early perceptions of organizational readiness for change in the specific context of clinical information system projects.
BACKGROUND
Two cross-sectional surveys were conducted to test our research model. First, a questionnaire was pretested and then distributed to the future users of a mobile computing technology in 11 home care organizations. The second study took place in a large teaching hospital that had approved a budget for the acquisition of an electronic medical records system. Data analysis was performed using partial least squares.
METHODS
Scale items used in this study showed adequate psychometric properties. In Study 1, four of the hypothesized links in the research model were supported, with change appropriateness, organizational flexibility, vision clarity, and change efficacy explaining 75% of the variance in organizational readiness. In Study 2, four hypotheses were also supported, two of which differed from those supported in Study 1: the presence of an effective project champion and collective self-efficacy. In addition to these variables, vision clarity and change appropriateness also helped explain 75% of the variance in the dependent variable. Explanations for the similarities and differences observed in the two surveys are provided.
RESULTS
Organizational readiness is arguably a key factor involved in clinicians' initial support for clinical information system initiatives. As healthcare organizations continue to invest in information technologies to improve quality and continuity of care and reduce costs, understanding the factors that influence organizational readiness for change represents an important avenue for future research.
CONCLUSIONS
[ "Adult", "Attitude to Computers", "Computers, Handheld", "Cross-Sectional Studies", "Diffusion of Innovation", "Female", "Home Care Services", "Hospitals, Teaching", "Humans", "Information Systems", "Leadership", "Least-Squares Analysis", "Male", "Medical Records Systems, Computerized", "Nurses", "Organizational Innovation", "Perception", "Physicians", "Psychometrics", "Surveys and Questionnaires" ]
3056827
null
null
Methods
To test the above hypotheses and increase the generalizability of our findings, two cross-sectional surveys were conducted. The first study investigated the pre-deployment of a mobile computing software solution in 11 ambulatory home care units. The second study took place in a large teaching hospital that had approved a budget for the implementation of an EMR system. Given the exploratory nature of this study, we favored a literal replication strategy where similar, not constrasting, results were predicted for each of the two CIS projects. The following paragraphs describe the pre-test and the two empirical studies. [SUBTITLE] Pre-test and research settings [SUBSECTION] [SUBTITLE] Pre-test [SUBSECTION] The study questionnaire was first pre-tested with five graduate students who are familiar with CIS as well as with four clinicians who had been involved in several CIS projects. Each respondent completed a first version of the questionnaire and provided feedback about the process and the measures, including the questionnaire administration time, and the clarity of the instructions and questions. The pretest indicated that the measurement instrument was relatively clear and easy to fill out. Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire. [SUBTITLE] Study one [SUBSECTION] A mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units. The mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November in 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software. 
The data were collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%. [SUBTITLE] Study two [SUBSECTION] As mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses.
The staff is composed of approximately 400 clinicians (mostly nursing staff, together with a balanced group of occupational therapists, social workers and psychologists), as well as 55 physicians. The deployment of the EMR system represented a major organizational project that would affect work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, etc.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs. Data collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Survey questionnaires were then handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, and a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54% (235 of the 434 questionnaires distributed).
[SUBTITLE] Operationalization of variables and data analyses [SUBSECTION] Consistent with our research model, the survey's questions covered 10 variables. All except one were measured with four items, and all items were assessed on 7-point Likert scales ranging from strongly disagree to strongly agree. The items used to measure the dependent variable, namely organizational readiness, were adapted from Eby et al. [58] and Rafferty and Simons [59]. As for the independent variables, vision clarity (VC) was measured using a scale adapted from Armenakis et al. [28]. Top-management support and change appropriateness were measured using scales adapted from Holt et al. [34]. Organizational flexibility was adapted from Rush et al. [60] and Eby et al. [58]. Group self-efficacy was measured using a scale adapted from Compeau and Higgins [61]. Finally, the scales associated with change efficacy, organizational history of change, the presence of an effective project champion, and organizational conflicts were developed by the authors during a brainstorming session. Scale items used to measure all study variables are presented in the Appendix. Data analysis was performed using partial least squares (PLS), a structural equation modeling approach [62].
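To make the measurement and analysis strategy concrete, the short sketch below illustrates how the reliability of a four-item, 7-point Likert scale and a rough readiness model could be computed. It is not the authors' analysis pipeline: the input file (survey_responses.csv), the item labels (OR1-OR4, VC1-VC4, and so on) and the use of scikit-learn's PLS regression are assumptions made for illustration only, whereas the study itself relied on PLS path modeling (PLS-SEM) [62] with the instrument items listed in the Appendix.

# Illustrative sketch only: the file name and item labels below are
# hypothetical, and the published analysis used PLS path modeling (PLS-SEM)
# rather than this scikit-learn approximation.
import pandas as pd
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import r2_score

def cronbach_alpha(items: pd.DataFrame) -> float:
    # Internal consistency of a multi-item Likert scale (one item per column).
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# One row per respondent, each item coded 1-7 (strongly disagree .. strongly agree).
df = pd.read_csv("survey_responses.csv")

# Reliability of the four-item organizational readiness scale.
print("alpha (readiness):", round(cronbach_alpha(df[["OR1", "OR2", "OR3", "OR4"]]), 2))

# Average each construct's items into a composite score (a simplification;
# PLS-SEM would weight the items rather than average them).
constructs = {
    "VC": ["VC1", "VC2", "VC3", "VC4"],        # vision clarity
    "CA": ["CA1", "CA2", "CA3", "CA4"],        # change appropriateness
    "TMS": ["TMS1", "TMS2", "TMS3", "TMS4"],   # top-management support
    "OR": ["OR1", "OR2", "OR3", "OR4"],        # organizational readiness
}
scores = pd.DataFrame({c: df[cols].mean(axis=1) for c, cols in constructs.items()})

# Rough estimate of how the antecedents relate to perceived readiness.
X, y = scores[["VC", "CA", "TMS"]], scores["OR"]
pls = PLSRegression(n_components=2).fit(X, y)
print("R^2 (readiness):", round(r2_score(y, pls.predict(X).ravel()), 2))

A full PLS-SEM analysis would estimate the measurement and structural models jointly and report indicator loadings and composite reliabilities alongside the path coefficients; averaging items into composite scores, as above, is only a convenient shortcut for exploration.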
[SUBTITLE] Ethics approval [SUBSECTION] The present study was approved by the appropriate institutional ethics review boards.
null
null
null
null
[ "Background", "Research model", "Attributes of the change", "Vision clarity", "Change appropriateness", "Change efficacy", "Leadership support", "Top-management support", "Presence of a project champion", "Organizational context", "Organizational history of change", "Organizational conflicts", "Organizational flexibility", "Change targets' attributes", "Collective self-efficacy", "Pre-test and research settings", "Pre-test", "Study one", "Study two", "Operationalization of variables and data analyses", "Ethics approval", "Results", "Sample profiles", "Psychometric properties of the measures", "Hypothesis testing", "Discussion", "Limitations", "Conclusions", "Competing interests", "Authors' contributions", "Appendix", "Questionnaire items (as framed in study one)", "Vision Clarity (VC)", "Change Appropriateness (CA)", "Change Efficacy (CE)", "Top-Management Support (TMS)", "Champion (C)", "Organizational History of Change (OHC)", "Organizational Conflicts and Politics (OCP)", "Organizational Flexibility (OF)", "Group Self-Efficacy (GSE)", "Organizational Readiness (OR)" ]
[ "The adoption and diffusion of clinical information systems (CIS) such as electronic medical record (EMR) systems, decision support systems, picture archiving and communication systems, and computerized provider order entry systems has become one of the critical benchmarks for achieving several healthcare organizational reform priorities, including home care, primary care, and integrated care networks [1-3]. Outcomes associated with the adoption of CIS in healthcare organizations include higher productivity levels among clinicians [4,5], better integrated care processes [6], and improved patient safety and quality of care [7,8], to name but a few.\nHowever, these systems are often strongly resisted by the same community that is expected to benefit from their adoption and use. In some cases resistance has manifested itself in boycotts of installed computer-based systems [9,10] or threats of strikes by the medical staff to oppose the implementation of EMR systems [11,12]. In extreme cases, technological resistance induced the hospital management to remove state of the art CIS. For instance, Freudenheim [13] reports that physicians at the Cedars-Sinai Medical Center at Los Angeles rebelled against their newly installed computerized physician order entry (CPOE) system, complaining that the system was too great a distraction from their medical duties and forcing its withdrawal after it was already online in two-thirds of the 870-bed hospital. Nurses are also seen to be reluctant to use computers in areas closely related to patient care [14-16] for several reasons, such as the fear of being distracted or taken away from the patient and the lack of perceived alignment with nursing workflow/documentation processes [17]. Gillespie [18] reported that nursing resistance alone had caused the 'death' of several IT initiatives.\nPrior research has found that favorable user attitudes are often associated with a high level of information technology (IT) adoption and acceptance [17,19-21]. In this regard, we argue that the early stages of the CIS project lifecycle deserve additional attention because early perceptions and beliefs play a central role in shaping future attitudes and behaviors such as negative rumors, involvement in the planning and design phases, and resistance to system usage. Furthermore, some authors concur that change management is most efficient when it is introduced at the earliest possible opportunity in the project lifecycle [22,23]. For these reasons, we decided to focus our attention on the pre-implementation stage, which is usually when change targets are introduced into the detailed project planning, the new system is seen or discussed for the first time, and initial impressions are formed about how work is likely to change [24,25].\nChange targets' perceptions of the organization's readiness for change have been identified by change management theorists as one important factor in understanding potential sources of resistance [26-28]. An individual's perception of an organization's readiness for change is viewed as a concept similar to unfreezing, which is described as a process in which an individual's beliefs about pending change are influenced such that the imminent change comes to be seen as possible [29]. Readiness collectively reflects the extent to which individuals are cognitively and emotionally inclined to accept, embrace, and adopt a particular plan to purposefully alter the status quo [30]. 
These perceptions are conceptualized as existing on a continuum, from viewing the organization as capable of withstanding change and successfully adapting to it (high readiness for change) to believing the organization is not ready to undergo such a change (low readiness for change) [30].\nWhile organizational readiness for change is an intuitively appealing construct, very few empirical studies in the health informatics field have focused on this phenomenon. The work of Snyder-Halpern [31-33] was all that could be found on the subject in the extant literature. In her studies, she defines organizational readiness very broadly as 'the level of fit between the IT innovation and the organization' and tests the hypothesis that a higher level of readiness leads to a lower level of innovation risk and a more successful CIS outcome [33]. The definition adopted by Snyder-Halpern is therefore more macro than the one used in this paper and applies to all phases of the CIS project life cycle. While the measure proposed by Snyder-Halpern serves as a proxy for the level of risk in a technological project, our measure is focused entirely on the ability to succeed at technological change as perceived by the users identified in the pre-implementation phase. We therefore see Snyder-Halpern's contribution as complementary to our own work.\nThe paper is structured as follows: we first review relevant work in the change management and information systems fields that supports the hypothesized relationships between organizational readiness for change and its antecedents. Next, the paper describes the research design and the data that were collected in order to test our research model. This is followed by the presentation of the study results, their discussion, and concluding remarks.\n[SUBTITLE] Research model [SUBSECTION] The primary intent of this study was to investigate the variables associated with clinicians' perceptions of organizational readiness for change in the specific context of CIS projects. Based on Holt et al.'s [34] research model, four classes of variables (see Figure 1) were identified as possibly related to a clinician's interpretation of organizational readiness for change during the pre-implementation phase of CIS projects: the attributes of the change that is being introduced; the extent of leadership support for the proposed change; the organizational context where the change takes place; and the characteristics of the change targets. Each of these variables will be discussed.\nResearch Model\n[SUBTITLE] Attributes of the change [SUBSECTION] The attributes of the change refer to the 'what' factor of the change [34]. That is, one should first consider what is being changed.
In most CIS projects, the change is associated not only with the new system but also with local processes, organizational structure, roles and responsibilities, and compensation schemes [11]. As explained below, three attributes of the change are likely to have a significant influence on change recipients' perceptions of organizational readiness for change.\n[SUBTITLE] Vision clarity [SUBSECTION] Change management theorists posit that one of the key sentiments for creating change readiness is the sense that change is needed [26-28,35-37]. A clear vision provides much of the justification for such a sentiment. A discrepancy between current and desired performance helps legitimize the need for change. Otherwise, the motive for a change may be perceived as arbitrary [26]. The notion of vision clarity is also consistent with social accounts theory, which stipulates that information should be provided by change agents to explain why an organizational change is needed [38,39].\n[SUBTITLE] Change appropriateness [SUBSECTION] A second key sentiment emphasized by Armenakis et al. [26-28] is the sense that the change is appropriate. In addition to believing that a change is needed, if employees are to support change, they must also believe that the specific change being proposed will effectively address the discrepancy. This sentiment is also consistent with social accounts theory [38] and describes whether the proposed change is the correct one for the situation at hand. If the proposed change is viewed by employees as the incorrect approach for pursuing the vision, change targets may not be willing to buy in to the change or attempt to make it work [40]. Clearly, the appropriateness of a change is important, because individuals may feel that some form of change is needed but may disagree with the specific change being proposed.\n[SUBTITLE] Change efficacy [SUBSECTION] A sense of efficacy, in the form of expectancy (efforts will lead to successful accomplishment), is a central tenet of most motivation theories [41].
To be motivated to support a change, individuals must not only feel that the change is appropriate but also that success is possible. In this sense, we believe that information from the environment may have a significant impact on individuals' perceptions of organizational readiness. If the proposed change has already been implemented successfully in similar organizations and this information has reached the appropriate individuals, one could conclude that they will see their organization as ready for a successful implementation. In contrast, if the press has reported prior failures in similar changes, one could expect some reticence on the part of the individuals affected by the change.\nBased on this research, the following hypotheses are proposed:\nHypothesis one: Vision clarity will be positively related to perceived organizational readiness for change.\nHypothesis two: Change appropriateness will be positively related to perceived organizational readiness for change.\nHypothesis three: Change efficacy will be positively related to perceived organizational readiness for change.
The notion of vision clarity is also consistent with social accounts theory, which stipulates that information should be provided by change agents to explain why an organizational change is needed [38,39].\nChange management theorists posit that one of the key sentiments to creating change readiness is the sense that change is needed [26-28,35-37]. A clear vision provides much of the justification for such a sentiment. A discrepency beween current and desired performance helps legitimize the need for change. Otherwise, the motive for a change may be perceived as arbitrary [26]. The notion of vision clarity is also consistent with social accounts theory, which stipulates that information should be provided by change agents to explain why an organizational change is needed [38,39].\n[SUBTITLE] Change appropriateness [SUBSECTION] A second key sentiment emphasized by Armenakis et al. [26-28] is the sense that the change is appropriate. Indeed, in addition to believing that a change is needed, if employees are to support change, they must also believe that the specific change being proposed will effectively address the discrepancy. This sentiment is also consistent with social accounts theory [38] and is used to describe whether the proposed change is the correct one for the situation at hand. If the proposed change is viewed by employees as the incorrect approach for pursuing the vision, change targets may not be willing to 'buy-in' to the change or attempt to make it work [40]. Clearly, appropriateness of a change is important, because individuals may feel that some form of change is needed but may disagree with the specific change being proposed.\nA second key sentiment emphasized by Armenakis et al. [26-28] is the sense that the change is appropriate. Indeed, in addition to believing that a change is needed, if employees are to support change, they must also believe that the specific change being proposed will effectively address the discrepancy. This sentiment is also consistent with social accounts theory [38] and is used to describe whether the proposed change is the correct one for the situation at hand. If the proposed change is viewed by employees as the incorrect approach for pursuing the vision, change targets may not be willing to 'buy-in' to the change or attempt to make it work [40]. Clearly, appropriateness of a change is important, because individuals may feel that some form of change is needed but may disagree with the specific change being proposed.\n[SUBTITLE] Change efficacy [SUBSECTION] A sense of efficacy, in the form of expectancy (efforts will lead to successful accomplishment), is a central tenet of most motivation theories [41]. To be motivated to support a change, individuals must not only feel that the change is appropriate but also that success is possible. In this sense, we believe that information from the environment may have a significant impact on individuals' perceptions of organizational readiness. If the proposed change has already been implemented successfully in similar organizations and this information has reached the appropriate individuals, one could conclude that they will see their organization as ready for a successful implementation. 
In contrast, if the press has reported prior failures in similar changes, one could expect some reticence on the part of the individuals affected by the change.\nBased on this research, the following hypotheses are proposed:\nHypothesis one: Vision clarity will be positively related to perceived organizational readiness for change.\nHypothesis two: Change appropriateness will be positively related to perceived organizational readiness for change.\nHypothesis three: Change efficacy will be positively related to perceived organizational readiness for change.\nA sense of efficacy, in the form of expectancy (efforts will lead to successful accomplishment), is a central tenet of most motivation theories [41]. To be motivated to support a change, individuals must not only feel that the change is appropriate but also that success is possible. In this sense, we believe that information from the environment may have a significant impact on individuals' perceptions of organizational readiness. If the proposed change has already been implemented successfully in similar organizations and this information has reached the appropriate individuals, one could conclude that they will see their organization as ready for a successful implementation. In contrast, if the press has reported prior failures in similar changes, one could expect some reticence on the part of the individuals affected by the change.\nBased on this research, the following hypotheses are proposed:\nHypothesis one: Vision clarity will be positively related to perceived organizational readiness for change.\nHypothesis two: Change appropriateness will be positively related to perceived organizational readiness for change.\nHypothesis three: Change efficacy will be positively related to perceived organizational readiness for change.\n[SUBTITLE] Leadership support [SUBSECTION] Social learning theory [42] posits that individuals sense through their interpersonal networks the support that exists throughout the organization. In this study, principal support describes the support from upper management as well as local change agents [28].\n[SUBTITLE] Top-management support [SUBSECTION] Many researchers have argued that senior managers play a crucial role in determining whether an information system project succeeds or fails [43-45]. Today, the need for strong leadership seems to be the generally accepted wisdom among information systems academics and managerial practitioners. When upper management is highly supportive of an IT project, greater resources are likely to be allocated to develop and support the new system [46], enhancing facilitating conditions [47] and, ultimately, increasing perceptions of organizational readiness.\nMany researchers have argued that senior managers play a crucial role in determining whether an information system project succeeds or fails [43-45]. Today, the need for strong leadership seems to be the generally accepted wisdom among information systems academics and managerial practitioners. When upper management is highly supportive of an IT project, greater resources are likely to be allocated to develop and support the new system [46], enhancing facilitating conditions [47] and, ultimately, increasing perceptions of organizational readiness.\n[SUBTITLE] Presence of a project champion [SUBSECTION] It has long been recognized by both practitioners and academics that it is highly risky to attempt complex change without a project champion [48,49]. 
In the IT context, champions are individuals who actively promote their personal vision for using IT, pushing the project over or around approval and implementation hurdles [50]. They may have initiated the process or been convinced of its necessity by other organizational members. Dong et al. [51] recently observed that perceived leadership behaviors of IT project champions exercise a direct and positive influence on users' attitudes toward the object of change. Their finding confirms the claim that project champions are effective leaders in tems of conveying visions and transcending users' self-interest for collective goals [50].\nExtending this research, it is proposed that:\nHypothesis four: Top-management support will be positively related to perceived organizational readiness for change.\nHypothesis five: The presence of a project champion will be positively related to perceived organizational readiness for change.\nIt has long been recognized by both practitioners and academics that it is highly risky to attempt complex change without a project champion [48,49]. In the IT context, champions are individuals who actively promote their personal vision for using IT, pushing the project over or around approval and implementation hurdles [50]. They may have initiated the process or been convinced of its necessity by other organizational members. Dong et al. [51] recently observed that perceived leadership behaviors of IT project champions exercise a direct and positive influence on users' attitudes toward the object of change. Their finding confirms the claim that project champions are effective leaders in tems of conveying visions and transcending users' self-interest for collective goals [50].\nExtending this research, it is proposed that:\nHypothesis four: Top-management support will be positively related to perceived organizational readiness for change.\nHypothesis five: The presence of a project champion will be positively related to perceived organizational readiness for change.\nSocial learning theory [42] posits that individuals sense through their interpersonal networks the support that exists throughout the organization. In this study, principal support describes the support from upper management as well as local change agents [28].\n[SUBTITLE] Top-management support [SUBSECTION] Many researchers have argued that senior managers play a crucial role in determining whether an information system project succeeds or fails [43-45]. Today, the need for strong leadership seems to be the generally accepted wisdom among information systems academics and managerial practitioners. When upper management is highly supportive of an IT project, greater resources are likely to be allocated to develop and support the new system [46], enhancing facilitating conditions [47] and, ultimately, increasing perceptions of organizational readiness.\nMany researchers have argued that senior managers play a crucial role in determining whether an information system project succeeds or fails [43-45]. Today, the need for strong leadership seems to be the generally accepted wisdom among information systems academics and managerial practitioners. 
When upper management is highly supportive of an IT project, greater resources are likely to be allocated to develop and support the new system [46], enhancing facilitating conditions [47] and, ultimately, increasing perceptions of organizational readiness.\n[SUBTITLE] Presence of a project champion [SUBSECTION] It has long been recognized by both practitioners and academics that it is highly risky to attempt complex change without a project champion [48,49]. In the IT context, champions are individuals who actively promote their personal vision for using IT, pushing the project over or around approval and implementation hurdles [50]. They may have initiated the process or been convinced of its necessity by other organizational members. Dong et al. [51] recently observed that perceived leadership behaviors of IT project champions exercise a direct and positive influence on users' attitudes toward the object of change. Their finding confirms the claim that project champions are effective leaders in tems of conveying visions and transcending users' self-interest for collective goals [50].\nExtending this research, it is proposed that:\nHypothesis four: Top-management support will be positively related to perceived organizational readiness for change.\nHypothesis five: The presence of a project champion will be positively related to perceived organizational readiness for change.\nIt has long been recognized by both practitioners and academics that it is highly risky to attempt complex change without a project champion [48,49]. In the IT context, champions are individuals who actively promote their personal vision for using IT, pushing the project over or around approval and implementation hurdles [50]. They may have initiated the process or been convinced of its necessity by other organizational members. Dong et al. [51] recently observed that perceived leadership behaviors of IT project champions exercise a direct and positive influence on users' attitudes toward the object of change. Their finding confirms the claim that project champions are effective leaders in tems of conveying visions and transcending users' self-interest for collective goals [50].\nExtending this research, it is proposed that:\nHypothesis four: Top-management support will be positively related to perceived organizational readiness for change.\nHypothesis five: The presence of a project champion will be positively related to perceived organizational readiness for change.\n[SUBTITLE] Organizational context [SUBSECTION] According to Holt et al. [34], internal context refers to the circumstances that describe the organization as it embarks on change. Mowday and Sutton [52] described internal context as the conditions external to change recipients that influence their beliefs, attitudes, intentions, and behavior. Prior research has led us to hypothesize that three organizational variables have a significant influence on change targets' perceptions of readiness.\n[SUBTITLE] Organizational history of change [SUBSECTION] To some degree, all organizations are idiosyncratic; that is, previous experiences have been stored in each organization in a pattern that makes the organization different from others that may on the surface appear very similar [53]. Organizations are dynamically evolving systems, and each has a history of resources, commitments, successes, and failures that shape the environment in which computer-based systems are developed and implemented [54]. 
Therefore, organizational history or memory might affect the way a change is framed in terms of previous initiatives undertaken by organization and hence have a great influence on the extent of IT implementation success.\nTo some degree, all organizations are idiosyncratic; that is, previous experiences have been stored in each organization in a pattern that makes the organization different from others that may on the surface appear very similar [53]. Organizations are dynamically evolving systems, and each has a history of resources, commitments, successes, and failures that shape the environment in which computer-based systems are developed and implemented [54]. Therefore, organizational history or memory might affect the way a change is framed in terms of previous initiatives undertaken by organization and hence have a great influence on the extent of IT implementation success.\n[SUBTITLE] Organizational conflicts [SUBSECTION] CIS implementation in healthcare organizations is characterized by social interactions. Among the many individuals and groups involved in the implementation process, there are usually managers, a project leader, a project champion, project team members, system developers, and a group of user representatives (clinicians). These actors have different interests and objectives for the adoption of a new CIS [55]. Hence, system implementation might be influenced by organizational politics and power relations [56,57]. Conflicting interests of different key actors and groups might lead to perceptions among targeted users that the organization is not ready for change.\nCIS implementation in healthcare organizations is characterized by social interactions. Among the many individuals and groups involved in the implementation process, there are usually managers, a project leader, a project champion, project team members, system developers, and a group of user representatives (clinicians). These actors have different interests and objectives for the adoption of a new CIS [55]. Hence, system implementation might be influenced by organizational politics and power relations [56,57]. Conflicting interests of different key actors and groups might lead to perceptions among targeted users that the organization is not ready for change.\n[SUBTITLE] Organizational flexibility [SUBSECTION] Some organizations are more agile and easily adaptable than others. For this reason, the degree to which organizational policies and practices are supportive of change may also be important to understanding how an employee perceives the organization's readiness for change [26]. Eby et al. [58] examined this issue in a study of two divisions of a national sales organization that was transitioning to work teams. Their results reveal that vendors' perceptions of their organization's ability to accommodate change by altering policies and procedures were strongly and positively related to perceived organizational readiness for change. 
Hence, we posit that clinicians are likely to hold unfavorable views about readiness for change when they perceive their healthcare organization's structure and policies as rigid and inflexible.\nBased on prior research, we propose the following research hypotheses:\nHypothesis six: History of successful change experiences will be positively related to perceived organizational readiness for change.\nHypothesis seven: Organizational conflicts will be negatively related to perceived organizational readiness for change.\nHypothesis eight: Organizational flexibility will be positively related to perceived organizational readiness for change.\nSome organizations are more agile and easily adaptable than others. For this reason, the degree to which organizational policies and practices are supportive of change may also be important to understanding how an employee perceives the organization's readiness for change [26]. Eby et al. [58] examined this issue in a study of two divisions of a national sales organization that was transitioning to work teams. Their results reveal that vendors' perceptions of their organization's ability to accommodate change by altering policies and procedures were strongly and positively related to perceived organizational readiness for change. Hence, we posit that clinicians are likely to hold unfavorable views about readiness for change when they perceive their healthcare organization's structure and policies as rigid and inflexible.\nBased on prior research, we propose the following research hypotheses:\nHypothesis six: History of successful change experiences will be positively related to perceived organizational readiness for change.\nHypothesis seven: Organizational conflicts will be negatively related to perceived organizational readiness for change.\nHypothesis eight: Organizational flexibility will be positively related to perceived organizational readiness for change.\nAccording to Holt et al. [34], internal context refers to the circumstances that describe the organization as it embarks on change. Mowday and Sutton [52] described internal context as the conditions external to change recipients that influence their beliefs, attitudes, intentions, and behavior. Prior research has led us to hypothesize that three organizational variables have a significant influence on change targets' perceptions of readiness.\n[SUBTITLE] Organizational history of change [SUBSECTION] To some degree, all organizations are idiosyncratic; that is, previous experiences have been stored in each organization in a pattern that makes the organization different from others that may on the surface appear very similar [53]. Organizations are dynamically evolving systems, and each has a history of resources, commitments, successes, and failures that shape the environment in which computer-based systems are developed and implemented [54]. Therefore, organizational history or memory might affect the way a change is framed in terms of previous initiatives undertaken by organization and hence have a great influence on the extent of IT implementation success.\nTo some degree, all organizations are idiosyncratic; that is, previous experiences have been stored in each organization in a pattern that makes the organization different from others that may on the surface appear very similar [53]. 
Organizations are dynamically evolving systems, and each has a history of resources, commitments, successes, and failures that shape the environment in which computer-based systems are developed and implemented [54]. Therefore, organizational history or memory might affect the way a change is framed in terms of previous initiatives undertaken by organization and hence have a great influence on the extent of IT implementation success.\n[SUBTITLE] Organizational conflicts [SUBSECTION] CIS implementation in healthcare organizations is characterized by social interactions. Among the many individuals and groups involved in the implementation process, there are usually managers, a project leader, a project champion, project team members, system developers, and a group of user representatives (clinicians). These actors have different interests and objectives for the adoption of a new CIS [55]. Hence, system implementation might be influenced by organizational politics and power relations [56,57]. Conflicting interests of different key actors and groups might lead to perceptions among targeted users that the organization is not ready for change.\nCIS implementation in healthcare organizations is characterized by social interactions. Among the many individuals and groups involved in the implementation process, there are usually managers, a project leader, a project champion, project team members, system developers, and a group of user representatives (clinicians). These actors have different interests and objectives for the adoption of a new CIS [55]. Hence, system implementation might be influenced by organizational politics and power relations [56,57]. Conflicting interests of different key actors and groups might lead to perceptions among targeted users that the organization is not ready for change.\n[SUBTITLE] Organizational flexibility [SUBSECTION] Some organizations are more agile and easily adaptable than others. For this reason, the degree to which organizational policies and practices are supportive of change may also be important to understanding how an employee perceives the organization's readiness for change [26]. Eby et al. [58] examined this issue in a study of two divisions of a national sales organization that was transitioning to work teams. Their results reveal that vendors' perceptions of their organization's ability to accommodate change by altering policies and procedures were strongly and positively related to perceived organizational readiness for change. Hence, we posit that clinicians are likely to hold unfavorable views about readiness for change when they perceive their healthcare organization's structure and policies as rigid and inflexible.\nBased on prior research, we propose the following research hypotheses:\nHypothesis six: History of successful change experiences will be positively related to perceived organizational readiness for change.\nHypothesis seven: Organizational conflicts will be negatively related to perceived organizational readiness for change.\nHypothesis eight: Organizational flexibility will be positively related to perceived organizational readiness for change.\nSome organizations are more agile and easily adaptable than others. For this reason, the degree to which organizational policies and practices are supportive of change may also be important to understanding how an employee perceives the organization's readiness for change [26]. Eby et al. 
[58] examined this issue in a study of two divisions of a national sales organization that was transitioning to work teams. Their results reveal that vendors' perceptions of their organization's ability to accommodate change by altering policies and procedures were strongly and positively related to perceived organizational readiness for change. Hence, we posit that clinicians are likely to hold unfavorable views about readiness for change when they perceive their healthcare organization's structure and policies as rigid and inflexible.\nBased on prior research, we propose the following research hypotheses:\nHypothesis six: History of successful change experiences will be positively related to perceived organizational readiness for change.\nHypothesis seven: Organizational conflicts will be negatively related to perceived organizational readiness for change.\nHypothesis eight: Organizational flexibility will be positively related to perceived organizational readiness for change.\n[SUBTITLE] Change targets' attributes [SUBSECTION] The fourth and final class of variables refers to the 'who,' or the organizational members who are required for change [34]. The variables are the attributes representing conditions internal to individuals that influence their beliefs, attitudes, and intention when confronted with change. In the present study, we focused on one of the most common individual factors that might influence perceptions of readiness, namely, individuals' skills or abilities.\n[SUBTITLE] Collective self-efficacy [SUBSECTION] Self-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nSelf-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nThe fourth and final class of variables refers to the 'who,' or the organizational members who are required for change [34]. 
The variables are the attributes representing conditions internal to individuals that influence their beliefs, attitudes, and intention when confronted with change. In the present study, we focused on one of the most common individual factors that might influence perceptions of readiness, namely, individuals' skills or abilities.\n[SUBTITLE] Collective self-efficacy [SUBSECTION] Self-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nSelf-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.", "The primary intent of this study was to investigate the variables associated with clinicians' perceptions of organizational readiness for change in the specific context of CIS projects. Based on Holt et al.'s [34] research model, four classes of variables (see Figure 1) were identified as possibly related to a clinician's interpretation of organizational readiness for change during the pre-implementation phase of CIS projects: the attributes of the change that is being introduced; the extent of leadership support for the proposed change; the organizational context where the change takes place; and the characteristics of the change targets. Each of these variables will be discussed.\nResearch Model", "The attributes of the change refer to the 'what' factor of the change [34]. That is, one should first consider what is being changed. In most CIS projects, the change is not only associated with the new system, but also with local processes, organizational structure, roles and responsibilities, and compensation schemes [11]. 
More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.", "Self-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.", "[SUBTITLE] Pre-test [SUBSECTION] The study questionnaire was first pre-tested with five graduate students who are familiar with CIS as well as with four clinicians who had been involved in several CIS projects. Each respondent completed a first version of the questionnaire and provided feedback about the process and the measures, including the questionnaire administration time, and the clarity of the instructions and questions. The pretest indicated that the measurement instrument was relatively clear and easy to fill out. Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire.\n[SUBTITLE] Study one [SUBSECTION] A mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. 
Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.

The mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.

The data were collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.

Study two

As mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program), each specializing in the care and treatment of a particular set of illnesses. The staff is composed of approximately 400 clinicians (mostly nursing staff, along with a balanced group of occupational therapists, social workers, and psychologists), as well as 55 physicians.

The deployment of the EMR system represented a major organizational project that would affect work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, etc.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.

Data collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff, and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Survey questionnaires were then handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, and a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54%.
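As a quick arithmetic check, the study-two response rate reported above can be reproduced directly from the counts given in the text; the short Python snippet below simply restates those figures and is not part of the authors' analysis.

```python
# Counts taken from the text: 379 questionnaires to clinicians + 55 mailed to physicians,
# with 207 and 28 returned, respectively.
distributed = 379 + 55
returned = 207 + 28
print(f"{returned}/{distributed} = {returned / distributed:.0%}")  # -> 235/434 = 54%
```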
Consistent with our research model, the survey's questions covered 10 variables. All except one were measured with four items. All the items were assessed on 7-point Likert scales ranging from strongly disagree to strongly agree. The items used to measure the dependent variable, namely organizational readiness, were adapted from Eby et al. [58] and Rafferty and Simons [59]. As for the independent variables, vision clarity (VC) was measured using a scale adapted from Armenakis et al. [28]. Top-management support and change appropriateness were measured using scales adapted from Holt et al. [34]. Organizational flexibility was adapted from Rush et al. [60] and Eby et al. [58]. Group self-efficacy was measured using a scale adapted from Compeau and Higgins [61]. Finally, the scales associated with change efficacy, organizational history of change, the presence of an effective project champion, and organizational conflicts were developed by the authors during a brainstorming session. Scale items used to measure all study variables are presented in the Appendix. Data analysis was performed using partial least squares (PLS), a structural equation modeling approach [62].

The present study was approved by the appropriate institutional ethics review boards.
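The Appendix flags two reverse-worded items (OHC2 and OCP1), which must be recoded before the reliability and PLS analyses reported below. The following is a minimal, illustrative Python sketch of how such 7-point Likert responses might be prepared and aggregated into composite scores; the data frame, column names, and construct groupings are hypothetical, and this is not the authors' actual analysis script (the study's analyses were run with PLS software).

```python
import pandas as pd

def prepare_items(df: pd.DataFrame, reversed_items=("OHC2", "OCP1")) -> pd.DataFrame:
    """Reverse-code the reversed items: on a 1-7 Likert scale, recoded value = 8 - raw value."""
    out = df.copy()
    for item in reversed_items:
        if item in out.columns:
            out[item] = 8 - out[item]
    return out

def construct_scores(df: pd.DataFrame, constructs: dict) -> pd.DataFrame:
    """Average each construct's items into a single composite score per respondent."""
    return pd.DataFrame({name: df[items].mean(axis=1) for name, items in constructs.items()})

# Hypothetical example with made-up responses for two constructs:
raw = pd.DataFrame({"VC1": [6, 5], "VC2": [7, 4], "VC3": [6, 5], "VC4": [5, 6],
                    "OHC1": [5, 3], "OHC2": [2, 6], "OHC3": [6, 2]})
clean = prepare_items(raw)
scores = construct_scores(clean, {"VC": ["VC1", "VC2", "VC3", "VC4"],
                                  "OHC": ["OHC1", "OHC2", "OHC3"]})
print(scores)
```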
Sample profiles

As shown in Table 1, most participants in study one were women and had full-time positions. They were established registered nurses with an average of over 18 years of experience in the nursing profession and 10 years of seniority within their healthcare organization. The respondents' average experience with personal computers was 4.6 on a 7-point Likert scale where 1 is 'very unfamiliar with computers' and 7 is 'very familiar with computers.' In study two, one third of the respondents were men. More than half of the respondents (57%) were registered nurses and 12% were physicians. Respondents had over 15 years of experience in their profession and had spent, on average, 14 years in their current organization. Their level of experience with computers was similar to that of respondents in study one, with an average score of 4.8.

Table 1. Profile of respondents

Psychometric properties of the measures

Exploratory factor analyses of each reflective construct's items and their Cronbach alpha reliabilities were first examined as a check of unidimensionality. The results from these analyses revealed that all scale items associated with a given construct loaded highly (>0.60) on a single factor. Next, based on the results of the reliability analysis (Cronbach alpha), three items out of 39 were removed from their respective measurement instruments: OF4 (organizational flexibility), OC2 (organizational conflicts), and OHC4 (organizational history of change). The remaining 36 items were then analyzed in PLS confirmatory factor analyses (CFA). Examination of the revised construct reliabilities (Table 2), the variance shared between constructs (Table 3), and the cross-loadings (Tables 4 and 5) indicated that the psychometric properties of the 10 reflective constructs were acceptable: all Cronbach alphas were 0.71 or better and all item loadings were greater than 0.68.

Table 2. Reliability assessment of research model variables

Table 3. Variance shared between research model constructs. ** p < 0.001; * p < 0.05; ns = non-significant. The bold numbers on the leading diagonal show the square root of the variance shared by the constructs and their measures; off-diagonal elements are the correlations among constructs. For discriminant validity, diagonal elements should be larger than off-diagonal elements.

Table 4. PLS construct cross-loadings of the research model (study one)

Table 5. PLS construct cross-loadings of the research model (study two)

Two criteria recommended for assessing discriminant validity are a square root of average variance extracted (AVE) that is higher than the inter-construct correlations, and indicators that load more highly on their corresponding factor than on other factors [63,64]. The results shown in Table 3 indicate that the diagonal elements (square root of AVE) were higher than the off-diagonal elements (inter-construct correlations). For their part, the cross-loadings in Tables 4 and 5 show that all indicators loaded more highly on their own factor than on other factors. Overall, these findings indicate that the measurement model satisfied the recommended convergent and discriminant validity criteria.
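The reliability and discriminant-validity checks summarized above follow standard formulas: Cronbach's alpha for internal consistency, and the Fornell-Larcker criterion of comparing the square root of AVE with the inter-construct correlations. The sketch below restates those checks in Python with simulated data purely for illustration; the construct names and standardized loadings are assumed, and it does not reproduce the authors' PLS analysis or results.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct (rows = respondents, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def fornell_larcker(loadings: dict, construct_scores: pd.DataFrame) -> pd.DataFrame:
    """Fornell-Larcker matrix: sqrt(AVE) on the diagonal, inter-construct correlations elsewhere.
    `loadings` maps each construct to the standardized loadings of its indicators."""
    out = construct_scores.corr().copy()
    for name, lds in loadings.items():
        ave = np.mean(np.square(lds))        # AVE = mean squared standardized loading
        out.loc[name, name] = np.sqrt(ave)   # diagonal should exceed off-diagonal values
    return out

# Simulated example: four items driven by one latent factor (high expected alpha).
rng = np.random.default_rng(0)
latent = rng.normal(size=(50, 1))
vc = pd.DataFrame(latent + rng.normal(scale=0.5, size=(50, 4)),
                  columns=["VC1", "VC2", "VC3", "VC4"])
print(round(cronbach_alpha(vc), 2))

# Fornell-Larcker check with assumed loadings and two composite scores.
scores = pd.DataFrame({"VC": vc.mean(axis=1),
                       "OR": 0.6 * vc.mean(axis=1) + rng.normal(size=50)})
fl = fornell_larcker({"VC": [0.85, 0.80, 0.78, 0.82], "OR": [0.88, 0.84, 0.80, 0.86]}, scores)
print(fl.round(2))
```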
Hypothesis testing

Table 6 presents the PLS path coefficients along with the proportion of explained variance in the dependent variable. In study one, four of the hypothesized links in the research model were supported, with change appropriateness (H2), organizational flexibility (H8), vision clarity (H1), and change efficacy (H3) explaining 75% of the variance in organizational readiness. On the other hand, five hypotheses were not supported: top-management support (H4), presence of an effective champion (H5), organizational history of change (H6), organizational conflicts (H7), and collective self-efficacy (H9) were not found to be associated with the dependent variable. In study two, four hypotheses were also supported, two of which differed from those supported in study one: the presence of an effective champion (H5) and collective self-efficacy (H9). In addition to these variables, vision clarity (H1) and change appropriateness (H2) also helped explain 75% of the variance in organizational readiness. Five hypotheses were not supported in study two: change efficacy (H3), top-management support (H4), organizational history of change (H6), organizational conflicts (H7), and organizational flexibility (H8).

Table 6. PLS results. *** p < 0.001; ** p < 0.05; * p < 0.01
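For readers unfamiliar with this kind of analysis, the sketch below shows the general shape of estimating and bootstrapping path coefficients from construct composite scores. It is a simplified, ordinary-least-squares stand-in offered only as an illustration: real PLS-SEM iteratively re-estimates indicator weights, which this sketch does not do, and the function names, inputs, and any numbers it would produce are assumptions rather than the study's results.

```python
import numpy as np
import pandas as pd

def bootstrap_paths(scores: pd.DataFrame, outcome: str, predictors: list,
                    n_boot: int = 1000, seed: int = 1):
    """Rough stand-in for the structural part of a PLS model: OLS on construct
    composite scores, with bootstrapped 95% intervals for the path coefficients."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(len(scores)), scores[predictors].to_numpy()])
    y = scores[outcome].to_numpy()

    def ols(Xm, ym):
        return np.linalg.lstsq(Xm, ym, rcond=None)[0][1:]  # drop the intercept

    est = ols(X, y)
    boots = np.empty((n_boot, len(predictors)))
    for b in range(n_boot):
        idx = rng.integers(0, len(y), len(y))               # resample respondents
        boots[b] = ols(X[idx], y[idx])
    ci = np.percentile(boots, [2.5, 97.5], axis=0)
    resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = 1 - np.var(resid) / np.var(y)
    table = pd.DataFrame({"path": est, "ci_low": ci[0], "ci_high": ci[1]}, index=predictors)
    return table, r2

# Hypothetical usage on composite scores such as those computed earlier (names assumed):
# paths, r2 = bootstrap_paths(scores, outcome="OR", predictors=["VC", "CA", "CE"])
```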
The purpose of this study was to identify variables associated with clinicians' perceptions of organizational readiness for change in the particular context of CIS projects. Change management theorists argue that four classes of antecedents have a direct effect on perceived organizational readiness for change: the attributes of the change being introduced, the extent of leadership support for the proposed change, the organizational context in which the change is being implemented, and the characteristics of the change targets. As explained in the preceding section, our analyses suggest adequate reliability as well as convergent and discriminant validity of the measurement instruments used in this study.

Our findings imply that CIS project managers and leaders would benefit from explicitly addressing change content perceptions (change attributes) during the pre-implementation phase of CIS projects in healthcare organizations. More specifically, the results of this study indicate that two types of change sentiments -- vision clarity and change appropriateness -- have a significant and positive influence on clinicians' perceptions of organizational readiness for CIS-based change. In other words, our results support the idea that CIS projects have greater chances of success when there is a compelling reason for them, i.e., a reason that makes change targets recognize and accept that a change is needed (vision clarity). In addition to believing that change is needed, change targets must also believe that the specific change being proposed is the correct one in the present context (change appropriateness) if they are to support the CIS project. Change theorists also argue that in order to be motivated to support a change, individuals must feel not only that the change is appropriate but also that success is possible. In this regard, sources of information outside the organization can be used to bolster messages sent by the change agents. This is effectively what happened in the CIS project reported in study one.
Indeed, the success of the pilot project carried out in the oncology and palliative care unit in 2007 was highly publicized through newspaper and magazine stories, as well as on television at the start of 2008. In the spring of 2008, the project was nominated for an award in the annual 3M innovation contest organized by Quebec's professional order of nurses. The publicity surrounding the project had a significant, positive effect on the perceptions held by nurses in the 11 units of their organization's capacity to successfully implement the proposed change. The effect of this variable (change efficacy) was not supported in study two. One possible explanation is that a system had not yet been selected at the time the data were collected, and the hospital concerned was one of the first health facilities in Quebec to deploy an EMR system. This meant that little information was available in the media about this type of project when readiness in this facility was measured.

Second, we hypothesized that leadership support would be positively associated with organizational readiness for change. With respect to the first leadership variable, it is important to ask why top-management support was not associated with organizational readiness. One explanation may be the speed with which the CIS projects were launched. In both studies, the project announcement came suddenly, only a few weeks before the survey. It was only as the project was being officially presented -- at the same time as data collection -- that most of the targeted clinicians were informed of management's support for the project. The second leadership variable, the presence of an effective project champion, was supported only in study two. One possible explanation may be tied to whether or not someone had been identified to assume this role at the time we measured organizational readiness. Even though this role had been filled in each of the 11 facilities in study one, no champion had yet been identified in the 10 hospital clinics participating in the project in study two. Not knowing who would assume this role may have exacerbated the uncertainty experienced by respondents, such that they perceived this variable as very important to the project's success.

Third, our findings provided minimal support for the hypotheses related to the organizational context within which change is implemented. More specifically, an organizational history of change and the political climate in the organization were not supported as indicators of readiness for change. In study one, only clinicians' perceptions of their organization's ability to accommodate changing conditions by altering policies and procedures were strongly related to perceived readiness for change. As mentioned above, some organizations are more adaptable and flexible than others. As such, regardless of change targets' comfort level with the nature of a CIS project, if the organization's structure is perceived to be inflexible and rigid, targeted clinicians are likely to hold less favorable attitudes about the organization's readiness for change. This finding was not, however, supported in study two. One possible reason is that study two was conducted in a single health facility, as compared to the 11 facilities in study one, which presented varying levels of flexibility.

Finally, collective self-efficacy was found to be positively related to organizational readiness for change only in study two.
This finding might also depend on the timing of the organizational readiness assessment. A software provider (and package) had already been selected in study one, while in study two the technology represented a relatively abstract concept to the respondents because organizational readiness was measured before the call-for-tender process. The nurses in each of the 11 facilities in study one had already attended a demonstration of the software when they completed the questionnaire, and this may have reassured them about their collective ability to learn and use their future work tool. This was not the case for study two respondents, who had only a vague idea of what the functionalities of the EMR system would be.

This study is not without methodological limitations that should be considered when interpreting the results. First, the data were collected using a single, self-reported questionnaire. When self-reports are used, concerns often arise as to whether common method bias is responsible for the observed relationships. Second, our analyses were based on a single type of technology (CIS) and a single group of change recipients (healthcare professionals), which limits the generalizability of our findings. However, our study was conducted in 11 ambulatory care organizations (study one) and 10 clinical units at a large teaching hospital (study two) to ensure a certain variety of contexts. Third, the research design also presents limitations inasmuch as it did not allow us to assess clinicians' changing perceptions of their organization's readiness for change over time. For instance, while we believe that the presence of an effective project champion influences change targets' perceptions, the champion's actions and commitment might be more influential during the subsequent implementation phase, when he or she drives consensus and manages resistance to change. Similarly, as the project progresses toward the implementation phase, leadership behaviors exercised by upper management (e.g., clarifying the vision, allocating the required financial and human resources to the project) are likely to play a greater role in the change process. Even though the argument may be difficult to support in the case of the organizational history of change, we believe that organizational conflicts and politics as well as group self-efficacy will prove to play major roles in the implementation phase; hence the importance of carrying out longitudinal studies and making a clear distinction between the pre-implementation and implementation phases.

Some authors have argued that the management of IT-based organizational change needs to begin as early as possible. The present study represents an initial attempt at understanding the variables that affect clinicians' perceived organizational readiness for change by suggesting that vision clarity and change appropriateness, as well as change efficacy, organizational flexibility, the presence of an effective champion, and collective self-efficacy, are all important antecedents.

Our findings have several implications for both practice and research. In practical terms, conducting a pre-implementation readiness assessment will help CIS project managers and decision makers choose whether they should initiate such a project or first implement less costly, preliminary steps that will prepare the organization for the anticipated change.
In this light, it is interesting to note that two of the 11 sites that participated in study one have not deployed the software package because of low readiness scores. As for future research, we believe that our results raise three important issues. First, more studies are needed in order to confirm which determinants are most significant in terms of perceived organizational readiness for CIS-based change. It would also be interesting to distinguish the antecedents that emerge from the particular context of a project from those that affect the perceptions of change targets independently of the context. Second, future research should investigate the extent to which organizational readiness is predictive of successful CIS adoption. Prior studies have also revealed that perceived organizational readiness significantly influences an individual's readiness for change [58,59,65], which, in turn, is a precursor of individual adoption or resistance behaviors (see Figure 1). It would therefore be important to analyze the link between the level of perceived organizational readiness and clinicians' individual readiness for CIS-based change. Third, other key predictors could be included in the research model to further increase its explanatory power. For instance, clinicians' early perceptions of the usability of the technology per se [66] may also play a significant role in predicting clinicians' early perceptions of organizational readiness. In short, as healthcare organizations continue to invest in CIS to enhance quality and continuity of care, understanding the factors that contribute to an effective change process represents an important avenue for continued research.", "The authors declare that they have no competing interests.", "GP and CS participated in the design of the study and the development of the measurement instrument, carried out data collection in study one, performed the statistical analyses, and were responsible for writing the manuscript. GB participated in the literature review and the development of the measurement instrument and was responsible for collecting data in study two. PB contributed to the literature search and was involved in drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript.", "[SUBTITLE] Questionnaire items (as framed in study one) [SUBSECTION] [SUBTITLE] Vision Clarity (VC) [SUBSECTION] VC1. I believe there are legitimate reasons for us to introduce a new computer-based system in our unit.\nVC2. We definitely need new tools to improve the way we work around here.\nVC3. There are a number of rational reasons for the deployment of a new information system in our unit.\nVC4. A new computer-based system is needed to improve our clinical processes.
\n[SUBTITLE] Change Appropriateness (CA) [SUBSECTION] CA1. I think that nurses in our unit will benefit from the use of SyMO.\nCA2. The deployment of SyMO will contribute to our unit's overall performance.\nCA3. The deployment of SyMO matches the priorities of our unit.\nCA4. The implementation of SyMO will prove to be best for our unit.
\n[SUBTITLE] Change Efficacy (CE) [SUBSECTION] CE1. I know nurses outside our unit who had successful experiences with SyMO.\nCE2. SyMO has been successfully deployed in clinical units similar to ours.\nCE3. SyMO has received positive reviews in the press (e.g., newspapers, magazines, newsletters, et al.).\nCE4. I believe the provincial movement toward the electronic medical record represents a driving force for the deployment of SyMO in our unit.
\n[SUBTITLE] Top-Management Support (TMS) [SUBSECTION] TMS1. Managers in our unit are committed to the deployment of SyMO.\nTMS2. Managers in our unit have stressed the importance of this change.\nTMS3. Managers have sent a clear message that the deployment of SyMO will occur in our unit.\nTMS4. Nurses have been encouraged to embrace the upcoming deployment of SyMO.
\n[SUBTITLE] Champion (C) [SUBSECTION] C1. There is a champion who actively promotes the deployment of SyMO in our unit.\nC2. The SyMO project has a credible and trustworthy champion.\nC3. There is a champion who will be able to push the SyMO project over or around implementation hurdles.
\n[SUBTITLE] Organizational History of Change (OHC) [SUBSECTION] OHC1. Our unit has successfully implemented other technological changes in recent years.\nOHC2. Nursing staff in our unit have had negative experiences with technological projects in the past (reversed item).\nOHC3. Our unit is usually successful when it undertakes all types of changes.\nOHC4. Information technology initiatives have been encouraged and are common practices in our unit (removed item).
\n[SUBTITLE] Organizational Conflicts and Politics (OCP) [SUBSECTION] OCP1. Mutual trust and cooperation among nursing staff in our unit is strong (reversed item).\nOCP2. Recent attempts to change the way we work in our unit have been hindered by political forces or conditions (removed item).\nOCP3. The climate in our unit is mainly characterized by conflicts and disputes.\nOCP4. Staff frustration is common in our unit.
\n[SUBTITLE] Organizational Flexibility (OF) [SUBSECTION] OF1. Our unit is structured to allow superiors to make changes quickly.\nOF2. It is easy to change procedures in our unit to meet new conditions.\nOF3. Getting anything changed in our unit is a long, time-consuming process.\nOF4. Policies and procedures in our unit allow us to take on new challenges effectively (removed item).
\n[SUBTITLE] Group Self-Efficacy (GSE) [SUBSECTION] SE1. All nurses in our unit are highly computer literate.\nSE2. It won't take a long time before nurses in our unit feel comfortable using SyMO.\nSE3. Using a computer effectively is no problem for the nursing staff in our unit.\nSE4. In general, nursing staff in our unit have low computer skills (reversed item).
\n[SUBTITLE] Organizational Readiness (OR) [SUBSECTION] OR1. I believe SyMO can be successfully implemented in our unit.\nOR2. Managers should delay the deployment of SyMO in our unit (reversed item).\nOR3. The deployment of SyMO in our unit is timely.\nOR4. Our unit is ready to take on this technological change." ]
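A note on scoring, illustrative only: several items above are flagged as reversed or removed, which implies that negatively worded items are reverse-coded and dropped items are excluded before a construct score is computed. The paper's actual scoring procedure is not reproduced here; the minimal sketch below simply shows one common way of doing this, assuming a 7-point Likert response scale and unweighted averaging of the retained items (both are assumptions, as are all names and values in the snippet).

# Illustrative sketch only: computing a construct score from Likert-type
# responses when some items are reverse-worded and some were removed.
SCALE_MIN, SCALE_MAX = 1, 7  # assumed 7-point scale; not stated in the appendix

# Hypothetical metadata for the Organizational History of Change (OHC) items,
# mirroring the "(reversed item)" and "(removed item)" flags shown above.
OHC_ITEMS = {
    "OHC1": {"reversed": False, "removed": False},
    "OHC2": {"reversed": True,  "removed": False},  # flagged "(reversed item)"
    "OHC3": {"reversed": False, "removed": False},
    "OHC4": {"reversed": False, "removed": True},   # flagged "(removed item)"
}

def construct_score(responses, items=OHC_ITEMS):
    """Average the retained items, reverse-coding negatively worded ones."""
    values = []
    for name, meta in items.items():
        if meta["removed"] or name not in responses:
            continue  # removed items do not enter the scale score
        value = responses[name]
        if meta["reversed"]:
            value = SCALE_MIN + SCALE_MAX - value  # e.g., 2 becomes 6 on a 1-7 scale
        values.append(value)
    return sum(values) / len(values)

# Example: a respondent who disagrees (2) with the negatively worded OHC2
# ends up with a high OHC score, as intended; prints 5.666..., i.e., about 5.67.
print(construct_score({"OHC1": 6, "OHC2": 2, "OHC3": 5, "OHC4": 4}))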
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Research model", "Attributes of the change", "Vision clarity", "Change appropriateness", "Change efficacy", "Leadership support", "Top-management support", "Presence of a project champion", "Organizational context", "Organizational history of change", "Organizational conflicts", "Organizational flexibility", "Change targets' attributes", "Collective self-efficacy", "Methods", "Pre-test and research settings", "Pre-test", "Study one", "Study two", "Operationalization of variables and data analyses", "Ethics approval", "Results", "Sample profiles", "Psychometric properties of the measures", "Hypothesis testing", "Discussion", "Limitations", "Conclusions", "Competing interests", "Authors' contributions", "Appendix", "Questionnaire items (as framed in study one)", "Vision Clarity (VC)", "Change Appropriateness (CA)", "Change Efficacy (CE)", "Top-Management Support (TMS)", "Champion (C)", "Organizational History of Change (OHC)", "Organizational Conflicts and Politics (OCP)", "Organizational Flexibility (OF)", "Group Self-Efficacy (GSE)", "Organizational Readiness (OR)" ]
[ "The adoption and diffusion of clinical information systems (CIS) such as electronic medical record (EMR) systems, decision support systems, picture archiving and communication systems, and computerized provider order entry systems has become one of the critical benchmarks for achieving several healthcare organizational reform priorities, including home care, primary care, and integrated care networks [1-3]. Outcomes associated with the adoption of CIS in healthcare organizations include higher productivity levels among clinicians [4,5], better integrated care processes [6], and improved patient safety and quality of care [7,8], to name but a few.\nHowever, these systems are often strongly resisted by the same community that is expected to benefit from their adoption and use. In some cases resistance has manifested itself in boycotts of installed computer-based systems [9,10] or threats of strikes by the medical staff to oppose the implementation of EMR systems [11,12]. In extreme cases, technological resistance induced the hospital management to remove state of the art CIS. For instance, Freudenheim [13] reports that physicians at the Cedars-Sinai Medical Center at Los Angeles rebelled against their newly installed computerized physician order entry (CPOE) system, complaining that the system was too great a distraction from their medical duties and forcing its withdrawal after it was already online in two-thirds of the 870-bed hospital. Nurses are also seen to be reluctant to use computers in areas closely related to patient care [14-16] for several reasons, such as the fear of being distracted or taken away from the patient and the lack of perceived alignment with nursing workflow/documentation processes [17]. Gillespie [18] reported that nursing resistance alone had caused the 'death' of several IT initiatives.\nPrior research has found that favorable user attitudes are often associated with a high level of information technology (IT) adoption and acceptance [17,19-21]. In this regard, we argue that the early stages of the CIS project lifecycle deserve additional attention because early perceptions and beliefs play a central role in shaping future attitudes and behaviors such as negative rumors, involvement in the planning and design phases, and resistance to system usage. Furthermore, some authors concur that change management is most efficient when it is introduced at the earliest possible opportunity in the project lifecycle [22,23]. For these reasons, we decided to focus our attention on the pre-implementation stage, which is usually when change targets are introduced into the detailed project planning, the new system is seen or discussed for the first time, and initial impressions are formed about how work is likely to change [24,25].\nChange targets' perceptions of the organization's readiness for change have been identified by change management theorists as one important factor in understanding potential sources of resistance [26-28]. An individual's perception of an organization's readiness for change is viewed as a concept similar to unfreezing, which is described as a process in which an individual's beliefs about pending change are influenced such that the imminent change comes to be seen as possible [29]. Readiness collectively reflects the extent to which individuals are cognitively and emotionally inclined to accept, embrace, and adopt a particular plan to purposefully alter the status quo [30]. 
These perceptions are conceptualized as existing on a continuum, from viewing the organization as capable of withstanding change and successfully adapting to it (high readiness for change) to believing the organization is not ready to undergo such a change (low readiness for change) [30].\nWhile organizational readiness for change is an intuitively appealing construct, very few empirical studies in the health informatics field have focused on this phenomenon. The work of Snyder-Halpern [31-33] was all that could be found on the subject in the extant literature. In her studies, she defines organizational readiness very broadly as 'the level of fit between the IT innovation and the organization' and tests the hypothesis that a higher level of readiness leads to a lower level of innovation risk and a more successful CIS outcome [33]. The definition adopted by Snyder-Halpern is therefore more macro than the one used in this paper and applies to all phases of the CIS project life cycle. While the measure proposed by Snyder-Halpern serves as a proxy for the level of risk in a technological project, our measure is focused entirely on the notion of the ability to succeed at technological change as it is perceived by the users identified in the pre-implementation phase. We therefore see Snyder-Halpern's contribution as complementary to our own work.\nThe paper is structured as follows: First, we begin by reviewing relevant work in the change management and information systems fields that supports the hypothesized relationships between organizational readiness for change and its antecedents. Next, the paper describes the research design and the data that were collected in order to test our research model. This is followed by the presentation of the study results, their discussion, and concluding remarks.
\n[SUBTITLE] Research model [SUBSECTION] The primary intent of this study was to investigate the variables associated with clinicians' perceptions of organizational readiness for change in the specific context of CIS projects. Based on Holt et al.'s [34] research model, four classes of variables (see Figure 1) were identified as possibly related to a clinician's interpretation of organizational readiness for change during the pre-implementation phase of CIS projects: the attributes of the change that is being introduced; the extent of leadership support for the proposed change; the organizational context where the change takes place; and the characteristics of the change targets. Each of these variables will be discussed.\nResearch Model
\n[SUBTITLE] Attributes of the change [SUBSECTION] The attributes of the change refer to the 'what' factor of the change [34]. That is, one should first consider what is being changed. In most CIS projects, the change is not only associated with the new system, but also with local processes, organizational structure, roles and responsibilities, and compensation schemes [11]. As explained below, three attributes of the change are likely to have a significant influence on change recipients' perceptions of organizational readiness for change.
\n[SUBTITLE] Vision clarity [SUBSECTION] Change management theorists posit that one of the key sentiments in creating change readiness is the sense that change is needed [26-28,35-37]. A clear vision provides much of the justification for such a sentiment. A discrepancy between current and desired performance helps legitimize the need for change. Otherwise, the motive for a change may be perceived as arbitrary [26]. The notion of vision clarity is also consistent with social accounts theory, which stipulates that information should be provided by change agents to explain why an organizational change is needed [38,39].
\n[SUBTITLE] Change appropriateness [SUBSECTION] A second key sentiment emphasized by Armenakis et al. [26-28] is the sense that the change is appropriate. Indeed, in addition to believing that a change is needed, if employees are to support change, they must also believe that the specific change being proposed will effectively address the discrepancy. This sentiment is also consistent with social accounts theory [38] and is used to describe whether the proposed change is the correct one for the situation at hand. If the proposed change is viewed by employees as the incorrect approach for pursuing the vision, change targets may not be willing to 'buy in' to the change or attempt to make it work [40]. Clearly, appropriateness of a change is important, because individuals may feel that some form of change is needed but may disagree with the specific change being proposed.
\n[SUBTITLE] Change efficacy [SUBSECTION] A sense of efficacy, in the form of expectancy (efforts will lead to successful accomplishment), is a central tenet of most motivation theories [41]. To be motivated to support a change, individuals must not only feel that the change is appropriate but also that success is possible. In this sense, we believe that information from the environment may have a significant impact on individuals' perceptions of organizational readiness. If the proposed change has already been implemented successfully in similar organizations and this information has reached the appropriate individuals, one could conclude that they will see their organization as ready for a successful implementation. In contrast, if the press has reported prior failures in similar changes, one could expect some reticence on the part of the individuals affected by the change.\nBased on this research, the following hypotheses are proposed:\nHypothesis one: Vision clarity will be positively related to perceived organizational readiness for change.\nHypothesis two: Change appropriateness will be positively related to perceived organizational readiness for change.\nHypothesis three: Change efficacy will be positively related to perceived organizational readiness for change.
\n[SUBTITLE] Leadership support [SUBSECTION] Social learning theory [42] posits that individuals sense through their interpersonal networks the support that exists throughout the organization. In this study, principal support describes the support from upper management as well as local change agents [28].
\n[SUBTITLE] Top-management support [SUBSECTION] Many researchers have argued that senior managers play a crucial role in determining whether an information system project succeeds or fails [43-45]. Today, the need for strong leadership seems to be the generally accepted wisdom among information systems academics and managerial practitioners. When upper management is highly supportive of an IT project, greater resources are likely to be allocated to develop and support the new system [46], enhancing facilitating conditions [47] and, ultimately, increasing perceptions of organizational readiness.
\n[SUBTITLE] Presence of a project champion [SUBSECTION] It has long been recognized by both practitioners and academics that it is highly risky to attempt complex change without a project champion [48,49]. In the IT context, champions are individuals who actively promote their personal vision for using IT, pushing the project over or around approval and implementation hurdles [50]. They may have initiated the process or been convinced of its necessity by other organizational members. Dong et al. [51] recently observed that perceived leadership behaviors of IT project champions exercise a direct and positive influence on users' attitudes toward the object of change. Their finding confirms the claim that project champions are effective leaders in terms of conveying visions and transcending users' self-interest for collective goals [50].\nExtending this research, it is proposed that:\nHypothesis four: Top-management support will be positively related to perceived organizational readiness for change.\nHypothesis five: The presence of a project champion will be positively related to perceived organizational readiness for change.
\n[SUBTITLE] Organizational context [SUBSECTION] According to Holt et al. [34], internal context refers to the circumstances that describe the organization as it embarks on change. Mowday and Sutton [52] described internal context as the conditions external to change recipients that influence their beliefs, attitudes, intentions, and behavior. Prior research has led us to hypothesize that three organizational variables have a significant influence on change targets' perceptions of readiness.
\n[SUBTITLE] Organizational history of change [SUBSECTION] To some degree, all organizations are idiosyncratic; that is, previous experiences have been stored in each organization in a pattern that makes the organization different from others that may on the surface appear very similar [53]. Organizations are dynamically evolving systems, and each has a history of resources, commitments, successes, and failures that shape the environment in which computer-based systems are developed and implemented [54]. Therefore, organizational history or memory might affect the way a change is framed in terms of previous initiatives undertaken by the organization and hence have a great influence on the extent of IT implementation success.
\n[SUBTITLE] Organizational conflicts [SUBSECTION] CIS implementation in healthcare organizations is characterized by social interactions. Among the many individuals and groups involved in the implementation process, there are usually managers, a project leader, a project champion, project team members, system developers, and a group of user representatives (clinicians). These actors have different interests and objectives for the adoption of a new CIS [55]. Hence, system implementation might be influenced by organizational politics and power relations [56,57]. Conflicting interests of different key actors and groups might lead to perceptions among targeted users that the organization is not ready for change.
\n[SUBTITLE] Organizational flexibility [SUBSECTION] Some organizations are more agile and easily adaptable than others. For this reason, the degree to which organizational policies and practices are supportive of change may also be important to understanding how an employee perceives the organization's readiness for change [26]. Eby et al. 
[58] examined this issue in a study of two divisions of a national sales organization that was transitioning to work teams. Their results reveal that vendors' perceptions of their organization's ability to accommodate change by altering policies and procedures were strongly and positively related to perceived organizational readiness for change. Hence, we posit that clinicians are likely to hold unfavorable views about readiness for change when they perceive their healthcare organization's structure and policies as rigid and inflexible.\nBased on prior research, we propose the following research hypotheses:\nHypothesis six: History of successful change experiences will be positively related to perceived organizational readiness for change.\nHypothesis seven: Organizational conflicts will be negatively related to perceived organizational readiness for change.\nHypothesis eight: Organizational flexibility will be positively related to perceived organizational readiness for change.\n[SUBTITLE] Change targets' attributes [SUBSECTION] The fourth and final class of variables refers to the 'who,' or the organizational members who are required for change [34]. The variables are the attributes representing conditions internal to individuals that influence their beliefs, attitudes, and intention when confronted with change. In the present study, we focused on one of the most common individual factors that might influence perceptions of readiness, namely, individuals' skills or abilities.\n[SUBTITLE] Collective self-efficacy [SUBSECTION] Self-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nSelf-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nThe fourth and final class of variables refers to the 'who,' or the organizational members who are required for change [34]. 
The variables are the attributes representing conditions internal to individuals that influence their beliefs, attitudes, and intention when confronted with change. In the present study, we focused on one of the most common individual factors that might influence perceptions of readiness, namely, individuals' skills or abilities.\n[SUBTITLE] Collective self-efficacy [SUBSECTION] Self-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.\nSelf-efficacy refers to sentiments of confidence in one's ability to succeed. This concept is included in Bandura's social learning theory [42], which suggests that employees who feel comfortable with their present skill set will believe that a different skill required to successfully execute the new job requirements can be mastered, such that they will be able to regain the comfort felt prior to the change. In this study we measure collective rather than individual efficacy, because the goal is to explain a construct at the organizational level. More specifically, we posit that individuals who perceive change targets, as a group, as capable of learning new work methods and tools will be more likely to look favorably on the organization's readiness for change.\nHypothesis nine: Collective self-efficacy will be positively related to perceived organizational readiness for change.", "The primary intent of this study was to investigate the variables associated with clinicians' perceptions of organizational readiness for change in the specific context of CIS projects. Based on Holt et al.'s [34] research model, four classes of variables (see Figure 1) were identified as possibly related to a clinician's interpretation of organizational readiness for change during the pre-implementation phase of CIS projects: the attributes of the change that is being introduced; the extent of leadership support for the proposed change; the organizational context where the change takes place; and the characteristics of the change targets. Each of these variables will be discussed.\nResearch Model", "The attributes of the change refer to the 'what' factor of the change [34]. That is, one should first consider what is being changed. In most CIS projects, the change is not only associated with the new system, but also with local processes, organizational structure, roles and responsibilities, and compensation schemes [11]. 
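Taken together, the nine hypotheses imply one explanatory model in which perceived organizational readiness for change is related to the proposed antecedents. The sketch below is illustrative only and is not the analysis reported in this paper; it assumes Likert-type scale scores, uses invented variable names, and simply shows how the signs predicted by Hypotheses one through nine could be checked in a linear model on simulated data.

```python
# Illustrative sketch only: not the authors' analysis. Variable names, sample
# size, and the use of ordinary least squares are assumptions for demonstration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # hypothetical number of respondents

predictors = [
    "vision_clarity",       # H1 (+)
    "appropriateness",      # H2 (+)
    "change_efficacy",      # H3 (+)
    "top_mgmt_support",     # H4 (+)
    "project_champion",     # H5 (+)
    "change_history",       # H6 (+)
    "org_conflicts",        # H7 (-)
    "org_flexibility",      # H8 (+)
    "collective_efficacy",  # H9 (+)
]

# Simulate 7-point scale scores for each antecedent.
data = pd.DataFrame({p: rng.integers(1, 8, n) for p in predictors})

# Simulate the outcome so that organizational conflicts load negatively,
# mirroring the sign predicted by Hypothesis seven.
signs = {p: 1.0 for p in predictors}
signs["org_conflicts"] = -1.0
data["readiness"] = sum(signs[p] * data[p] for p in predictors) + rng.normal(0, 2, n)

# Regress perceived readiness on the nine antecedents; the sign and
# significance of each coefficient map onto Hypotheses one through nine.
model = smf.ols("readiness ~ " + " + ".join(predictors), data=data).fit()
print(model.summary())
```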
To test the above hypotheses and increase the generalizability of our findings, two cross-sectional surveys were conducted. The first study investigated the pre-deployment of a mobile computing software solution in 11 ambulatory home care units. The second study took place in a large teaching hospital that had approved a budget for the implementation of an EMR system. Given the exploratory nature of this study, we favored a literal replication strategy in which similar, not contrasting, results were predicted for each of the two CIS projects. The following paragraphs describe the pre-test and the two empirical studies.

Pre-test and research settings

Pre-test

The study questionnaire was first pre-tested with five graduate students who are familiar with CIS as well as with four clinicians who had been involved in several CIS projects. Each respondent completed a first version of the questionnaire and provided feedback about the process and the measures, including the questionnaire administration time and the clarity of the instructions and questions. The pre-test indicated that the measurement instrument was relatively clear and easy to fill out. Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire.

Study one

A mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients had increased by 6%, the average number of home visits by nurses had increased by 0.7 visits per day, and the time allocated for direct patient care had increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.

The mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.

The data were collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.

Study two

As mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses. The staff is composed of approximately 400 clinicians (mostly nursing staff, along with a balanced group of occupational therapists, social workers, and psychologists), as well as 55 physicians.

The deployment of the EMR system represented a major organizational project that would affect work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, etc.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.

Data collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Survey questionnaires were then handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, and a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54%.
Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire.\n[SUBTITLE] Study one [SUBSECTION] A mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.\nThe mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November in 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.\nThe data was collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.\nA mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. 
Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.\nThe mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November in 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.\nThe data was collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.\n[SUBTITLE] Study two [SUBSECTION] As mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses. The staff is composed of approximately 400 clinicians (including mostly nursing staff and a balanced group of occupational therapists, social workers and psychologists), as well as 55 physicians.\nThe deployment of the EMR system represented a major organizational project that would affect the work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, et al.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.\nData collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. 
At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Then survey questionnaires were handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, as well as a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54%.\nAs mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses. The staff is composed of approximately 400 clinicians (including mostly nursing staff and a balanced group of occupational therapists, social workers and psychologists), as well as 55 physicians.\nThe deployment of the EMR system represented a major organizational project that would affect the work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, et al.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.\nData collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Then survey questionnaires were handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, as well as a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. 
Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54%.\nThe study questionnaire was first pre-tested with five graduate students who are familiar with CIS as well as with four clinicians who had been involved in several CIS projects. Each respondent completed a first version of the questionnaire and provided feedback about the process and the measures, including the questionnaire administration time, and the clarity of the instructions and questions. The pretest indicated that the measurement instrument was relatively clear and easy to fill out. Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire.\n[SUBTITLE] Study one [SUBSECTION] A mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.\nThe mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November in 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.\nThe data was collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.\nA mobile computing project was carried out in an oncology and palliative care unit in Quebec, Canada in 2007. 
The core of the project was the implementation of a CIS that optimizes the process used to organize nursing activities taking place in patients' homes. The new SyMO package (Médisolution™) consists of a nursing care plan dictionary that covers all the procedures nurses need to perform in response to patient health problems, and an intervention plan module that allows nurses to create specific care plans for patients. Once she has arrived at the patient's home, the nurse uses the software to take notes on each procedure in the patient's care plan. This pilot project was the subject of an evaluative study that yielded encouraging results. Indeed, eight months after the implementation of the software application, the number of treated cancer patients increased by 6%, the average number of home visits by nurses increased by 0.7 visit per day, and the time allocated for direct patient care increased by 14% [5]. Given these positive results, senior administrators of Quebec's department of health and social services decided to invest additional funds in the project to verify whether the results were generalizable in the context of traditional home care services. The department asked 11 home care units in three different geographical regions to participate in the project. Study one investigated the pre-implementation phase of the mobile computing project in these ambulatory home care units.\nThe mobile computing project and the software package were officially presented to the nursing staff of each of the participating organizations at kick-off meetings held from May to November in 2009. The meetings were jointly organized by the managers of the health facilities and the supplier in order to present the scope of the project and its objectives, the roles and responsibilities of the key stakeholders, the planned deployment approach (including project phases), and the projected schedule and budget. The meetings also included a 60-minute demonstration of the software.\nThe data was collected at each of these meetings. Only nurses who would be affected by the change, i.e., change recipients, were asked to stay in the room while the data were collected. Once the objectives of the study had been explained, a questionnaire was distributed to all the nurses in attendance. A total of 138 nurses completed the survey instrument, for a response rate of 90%.\n[SUBTITLE] Study two [SUBSECTION] As mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses. The staff is composed of approximately 400 clinicians (including mostly nursing staff and a balanced group of occupational therapists, social workers and psychologists), as well as 55 physicians.\nThe deployment of the EMR system represented a major organizational project that would affect the work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, et al.) in a central patient database. 
At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.\nData collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Then survey questionnaires were handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. For physicians, direct email contact was initiated by the EMR project manager in a memorandum that presented the pending EMR implementation and the ongoing academic research project. A package was then mailed to each physician, containing a presentation letter, a copy of the questionnaire, as well as a pre-addressed postage-paid return envelope. The 55 physicians were asked to complete the survey within a week. Overall, a total of 235 questionnaires (207 from clinicians and 28 from physicians) were returned to the research team, for a response rate of 54%.\nAs mentioned earlier, the second study took place in a large teaching hospital that had approved a budget for the acquisition of an EMR system. The institution, which specializes in the diagnosis and treatment of mental health illnesses, is divided into 10 clinical programs (e.g., a mental health program for adults, a child psychiatry program, a geriatric psychiatry program, an eating disorders program). Each clinical program specializes in the care and treatment of various mental health illnesses. The staff is composed of approximately 400 clinicians (including mostly nursing staff and a balanced group of occupational therapists, social workers and psychologists), as well as 55 physicians.\nThe deployment of the EMR system represented a major organizational project that would affect the work processes and procedures across the entire hospital. The main objective was to maintain all patient information (admission, diagnosis, notes, prescriptions, test results, et al.) in a central patient database. At the time of the study, in early 2008, the healthcare organization was in the process of selecting a software vendor and an integrator. The project was presented at two staff meetings: the first meeting was for the entire nursing population, and the second was a general assembly that included all managers of the targeted clinical programs.\nData collection began shortly after the two official presentations. Over a period of three weeks, one of the researchers and the project manager visited the clinical staff at a weekly meeting in each of the 10 programs. At each visit, the local project manager summarized the key elements of the EMR project to staff and the researchers explained the objectives of the research project, addressing staff concerns and questions for approximately fifteen minutes. Then survey questionnaires were handed out, along with a pre-addressed return envelope. A total of 379 questionnaires were distributed to the clinicians. 
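As a quick, purely illustrative check of the figures reported for the two data collections, the short Python sketch below recomputes the study-two response rate from the distribution and return counts given above; the variable names are ours and are not part of the study's analysis. The number of questionnaires distributed in study one was not reported, so its published figures are simply echoed rather than recomputed.

    # Illustrative arithmetic check of the reported response rates.
    # All counts are taken from the text above; this is not part of the study's analysis.
    distributed_clinicians = 379   # questionnaires handed out at the program meetings
    distributed_physicians = 55    # questionnaires mailed to physicians
    returned_clinicians = 207
    returned_physicians = 28

    distributed_total = distributed_clinicians + distributed_physicians   # 434
    returned_total = returned_clinicians + returned_physicians            # 235
    print(f"Study two: {returned_total}/{distributed_total} = {returned_total / distributed_total:.0%}")  # ~54%

    # Study one: 138 completed questionnaires at a reported 90% response rate;
    # the number distributed is not stated, so we only echo the published figures.
    print("Study one: 138 completed questionnaires, reported response rate 90%")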
[SUBTITLE] Pre-test [SUBSECTION] The study questionnaire was first pre-tested with five graduate students familiar with CIS, as well as four clinicians who had been involved in several CIS projects. Each respondent completed a first version of the questionnaire and provided feedback about the process and the measures, including the questionnaire administration time and the clarity of the instructions and questions. The pre-test indicated that the measurement instrument was relatively clear and easy to fill out. Following the pre-test, minor modifications were made to improve the wording of some items and the overall structure and presentation quality of the questionnaire.

[SUBTITLE] Operationalization of variables and data analyses [SUBSECTION] Consistent with our research model, the survey's questions covered 10 variables. All except one were measured with four items, and all items were assessed on 7-point Likert scales ranging from "strongly disagree" to "strongly agree". The items used to measure the dependent variable, namely organizational readiness, were adapted from Eby et al. [58] and Rafferty and Simons [59]. As for the independent variables, vision clarity (VC) was measured using a scale adapted from Armenakis et al. [28]. Top-management support and change appropriateness were measured using scales adapted from Holt et al. [34]. Organizational flexibility was adapted from Rush et al. [60] and Eby et al. [58]. Group self-efficacy was measured using a scale adapted from Compeau and Higgins [61].
Finally, scales associated with change efficacy, organizational history of change, the presence of an effective project champion, and organizational conflicts were developed by the authors during a brainstorming session.\nScale items used to measure all study variables are presented in Appendix. Data analysis was performed using partial least squares (PLS), a structural equation modeling approach [62].", "The present study was approved by the appropriate institutional ethics review boards.", "[SUBTITLE] Sample profiles [SUBSECTION] As shown in Table 1, most participants in study one were women and had full-time positions. They were established registered nurses with an average of over 18 years of experience in the nursing profession and 10 years of seniority within their healthcare organization. The respondents' average experience with personal computers was 4.6 on a 7-point Likert scale where 1 is 'very unfamiliar with computers' and 7 is 'very familiar with computers.' In study two, one third of the respondents were men. More than half of the respondents (57%) were registered nurses and 12% were physicians. Respondents had over 15 years of experience in their profession and had spent, on average, 14 years in their current organization. Their level of experience with computers was similar to that of respondents in study one, with an average score of 4.8.\nProfile of respondents\nAs shown in Table 1, most participants in study one were women and had full-time positions. They were established registered nurses with an average of over 18 years of experience in the nursing profession and 10 years of seniority within their healthcare organization. The respondents' average experience with personal computers was 4.6 on a 7-point Likert scale where 1 is 'very unfamiliar with computers' and 7 is 'very familiar with computers.' In study two, one third of the respondents were men. More than half of the respondents (57%) were registered nurses and 12% were physicians. Respondents had over 15 years of experience in their profession and had spent, on average, 14 years in their current organization. Their level of experience with computers was similar to that of respondents in study one, with an average score of 4.8.\nProfile of respondents\n[SUBTITLE] Psychometric properties of the measures [SUBSECTION] Exploratory factor analyses of each reflective construct's items and their Cronbach alpha reliabilities were first examined as a check of unidimensionality. The results from these analyses revealed that all scale items associated with a given construct loaded highly (>0.60) on a single factor. Next, based on the results of the reliability analysis (Cronbach alpha), three items out of 39 were removed from their respective measurement instruments: OF4 (organizational flexibility), OC2 (organizational conflicts) and OHC4 (organizational history of changes). As a result, the remaining 36 items were then analyzed in PLS confirmatory factor analyses (CFA). Examination of revised construct reliabilities (Table 2), the variance shared between constructs (Table 3) and the cross-loadings (Tables 4 and 5) indicated that the psychometric properties of the 10 reflective constructs were acceptable. 
As can be seen, all Cronbach alphas were 0.71 or better and all item loadings were greater than 0.68.\nReliability assessment of research model variables\nVariance shared between research model constructs\n** p < 0.001; * p < 0.05; ns = non significant\nThe bold numbers on the leading diagonal show the square root of the variance shared by the constructs and their measures. Off-diagonal elements are the correlations among constructs. For discriminant validity, diagonal elements should be larger than off-diagonal elements.\nPLS Construct cross-loadings of the research model (study one)\nPLS construct cross-loadings of the research model (study two)\nTwo criteria that are recommended for assessing discriminant validity are a square root of average variance extracted (AVE) that is higher than inter-construct correlations and indicators loading more highly on their corresponding factor than on other factors [63,64]. The results shown in Table 3 indicate that diagonal elements (AVE) were higher than off-diagonal elements (inter-construct correlations). For their part, the cross-loadings in Table 4 and Table 5 show that all indicators loaded more highly on their own factor than on other factors. Overall, these findings indicate that the measurement model has satisfied the recommended convergent and discriminant validity criteria.\nExploratory factor analyses of each reflective construct's items and their Cronbach alpha reliabilities were first examined as a check of unidimensionality. The results from these analyses revealed that all scale items associated with a given construct loaded highly (>0.60) on a single factor. Next, based on the results of the reliability analysis (Cronbach alpha), three items out of 39 were removed from their respective measurement instruments: OF4 (organizational flexibility), OC2 (organizational conflicts) and OHC4 (organizational history of changes). As a result, the remaining 36 items were then analyzed in PLS confirmatory factor analyses (CFA). Examination of revised construct reliabilities (Table 2), the variance shared between constructs (Table 3) and the cross-loadings (Tables 4 and 5) indicated that the psychometric properties of the 10 reflective constructs were acceptable. As can be seen, all Cronbach alphas were 0.71 or better and all item loadings were greater than 0.68.\nReliability assessment of research model variables\nVariance shared between research model constructs\n** p < 0.001; * p < 0.05; ns = non significant\nThe bold numbers on the leading diagonal show the square root of the variance shared by the constructs and their measures. Off-diagonal elements are the correlations among constructs. For discriminant validity, diagonal elements should be larger than off-diagonal elements.\nPLS Construct cross-loadings of the research model (study one)\nPLS construct cross-loadings of the research model (study two)\nTwo criteria that are recommended for assessing discriminant validity are a square root of average variance extracted (AVE) that is higher than inter-construct correlations and indicators loading more highly on their corresponding factor than on other factors [63,64]. The results shown in Table 3 indicate that diagonal elements (AVE) were higher than off-diagonal elements (inter-construct correlations). For their part, the cross-loadings in Table 4 and Table 5 show that all indicators loaded more highly on their own factor than on other factors. 
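For readers who want to run the same reliability and discriminant-validity checks on their own survey data, the sketch below shows both computations in Python. It is a minimal illustration rather than the authors' analysis code: the VC item names come from the Appendix, but the responses are simulated, and the Fornell-Larcker helper assumes you already have item loadings and a construct correlation matrix (invented here for two constructs).

```python
import numpy as np
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the columns of `items`, assumed to be the
    indicators of a single reflective construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)


def fornell_larcker(loadings: dict, construct_corr: pd.DataFrame) -> pd.DataFrame:
    """Replace the diagonal of the construct correlation matrix with sqrt(AVE);
    discriminant validity requires each diagonal entry to exceed the
    off-diagonal correlations in its row and column."""
    table = construct_corr.copy().astype(float)
    for name, lam in loadings.items():
        ave = float(np.mean(np.square(lam)))  # average variance extracted
        table.loc[name, name] = np.sqrt(ave)
    return table


# Simulated 7-point responses for one hypothetical construct (VC1-VC4).
rng = np.random.default_rng(42)
core = rng.integers(3, 8, size=120)
vc = pd.DataFrame(
    {f"VC{i}": np.clip(core + rng.integers(-1, 2, size=120), 1, 7) for i in range(1, 5)}
)
print(f"Cronbach's alpha (VC): {cronbach_alpha(vc):.2f}")

# Invented loadings and construct correlations for two constructs.
corr = pd.DataFrame([[1.0, 0.45], [0.45, 1.0]], index=["VC", "CA"], columns=["VC", "CA"])
loadings = {"VC": np.array([0.82, 0.79, 0.88, 0.75]), "CA": np.array([0.80, 0.84, 0.77, 0.81])}
print(fornell_larcker(loadings, corr))
```

In a full PLS analysis the loadings and construct correlations would come from the estimated measurement model, and the comparison of sqrt(AVE) against inter-construct correlations would be read off a table such as Table 3.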
Hypothesis testing

Table 6 presents the PLS path coefficients along with the proportion of explained variance in the dependent variable. In study one, four of the hypothesized links in the research model were supported, with change appropriateness (H2), organizational flexibility (H8), vision clarity (H1), and change efficacy (H3) explaining 75% of the variance in organizational readiness. On the other hand, five hypotheses were not supported. More specifically, top-management support (H4), presence of an effective champion (H5), organizational history of change (H6), organizational conflicts (H7), and collective self-efficacy (H9) were not found to be associated with the dependent variable. In study two, four hypotheses were also supported, two of which differed from those supported in study one: the presence of an effective champion (H5) and collective self-efficacy (H9). In addition to these variables, vision clarity (H1) and change appropriateness (H2) also helped explain 75% of the variance in organizational readiness. Five hypotheses were not supported in study two: change efficacy (H3), top-management support (H4), organizational history of change (H6), organizational conflicts (H7), and organizational flexibility (H8).

Table 6. PLS results (*** p < 0.001; ** p < 0.05; * p < 0.01)
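The structural part of the PLS model relates the antecedent construct scores to organizational readiness; the path coefficients and the 75% explained variance summarize that inner regression. The snippet below is only a rough stand-in for this step, an ordinary least-squares regression on standardized, simulated construct scores. It is not the PLS algorithm used in the paper (which estimates latent variable scores iteratively) and the numbers bear no relation to the study data; dedicated PLS-SEM software would also bootstrap the coefficients to obtain significance levels like those reported in Table 6.

```python
import numpy as np


def standardized_paths(X: np.ndarray, y: np.ndarray):
    """Regress the standardized dependent construct on standardized predictors.
    Returns the coefficient vector (an analogue of inner-model path weights)
    and the proportion of explained variance (R^2)."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    residual = yz - Xz @ beta
    r2 = 1.0 - (residual @ residual) / (yz @ yz)
    return beta, r2


# Simulated scores: 138 respondents, 9 antecedent constructs, 1 readiness score.
rng = np.random.default_rng(7)
antecedents = rng.normal(size=(138, 9))          # e.g., VC, CA, CE, TMS, C, OHC, OCP, OF, GSE
readiness = antecedents @ rng.normal(size=9) + rng.normal(size=138)
paths, r2 = standardized_paths(antecedents, readiness)
print(np.round(paths, 2), round(float(r2), 2))
```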
The purpose of this study was to identify variables associated with clinicians' perceptions of organizational readiness for change in the particular context of CIS projects. Change management theorists argue that there are four classes of antecedents that have a direct effect on perceived organizational readiness for change: the attributes of the change that is being introduced, the extent of leadership support for the proposed change, the organizational context where the change is being implemented, and the characteristics of the change targets. As explained in the preceding section, our analyses suggest adequate reliability as well as convergent and discriminant validity of the measurement instruments used in this study.

Our findings imply that CIS project managers and leaders would benefit from explicitly addressing change content perceptions (change attributes) when pre-implementing CIS in healthcare organizations. More specifically, the results of this study indicate that two types of change sentiments -- vision clarity and change appropriateness -- have a significant and positive influence on clinicians' perceptions of organizational readiness for CIS-based change. In other words, our results support the idea that CIS projects have greater chances of success with a compelling reason, i.e., a reason that makes change targets recognize and accept that a change is needed (vision clarity). In addition to believing that change is needed, if change targets are to support the CIS project, they must also believe that the specific change being proposed is the correct one in the present context (change appropriateness). Change theorists also argue that in order to be motivated to support a change, individuals must not only feel that the change is appropriate but also that success is possible. In this regard, sources of information outside the organization can be used to bolster messages sent by the change agents. This is effectively what happened in the CIS project reported in study one. Indeed, the success of the pilot project carried out in the oncology and palliative care unit in 2007 was highly publicized through newspaper and magazine stories, as well as on television at the start of 2008. In the spring of 2008, the project was nominated for an award in the annual 3M innovation contest organized by Quebec's professional order of nurses. The publicity surrounding the project had a significant, positive effect on the perceptions of nurses in the 11 units of their organization's capacity to successfully implement the proposed change. The effect of this variable (change efficacy) was not supported by study two. One possible explanation may be that a system had not yet been selected at the time the data was collected, and the hospital concerned was one of the first health facilities in Quebec to deploy an EMR system. This meant that little information was available in the media about this type of project when the readiness in this facility was measured.

Second, we hypothesized that leadership support would be positively associated with organizational readiness for change. For one thing, it is important to ask why top-management support was not associated with organizational readiness. One explanation may be the speed with which CIS projects were launched. In both studies, the project announcement came suddenly, only a few weeks before the survey. It was only as the project was being officially presented -- at the same time as data collection -- that most of the targeted clinicians were informed of management's support for the project. The second variable, the presence of an effective project champion, was supported only by study two. One possible explanation may be tied to whether or not someone had been identified to assume this role at the time that we measured organizational readiness. Even though this role had been filled in each of the 11 facilities in study one, no champion had yet been identified in the 10 hospital clinics participating in the project in study two. Not knowing who would assume this role in the project may have exacerbated the uncertainty experienced by respondents, such that they perceived this variable as very important to the project's success.

Third, our findings provided minimal support for hypotheses related to the organizational context within which change is implemented. More specifically, we observed that an organizational history of change and the political climate in the organization were not supported as indicators of readiness for change. In study one, only clinicians' perceptions of their organization's ability to accommodate changing conditions by altering policies and procedures were strongly related to perceived readiness for change. As mentioned above, some organizations are more adaptable and flexible than others. As such, regardless of change targets' comfort level with the nature of a CIS project, if the organization's structure is perceived to be inflexible and rigid, it appears that targeted clinicians are likely to hold less favorable attitudes about the organization's readiness for change. This finding was not, however, supported in study two. One possible reason is that study two was conducted in a single health facility, as compared to the 11 facilities in study one, which presented varying levels of flexibility.

Finally, collective self-efficacy was found to be positively related to organizational readiness for change only in study two. This finding might also depend on the timing of the organizational readiness assessment. A software provider (and package) had already been selected in study one, while in study two the technology represented a relatively abstract concept to the respondents because organizational readiness measurement took place prior to the call-for-tender process. The nurses in each of the 11 facilities in study one had already attended a demonstration of the software when they completed the questionnaire, and this may have reassured them about their collective ability to learn and use their future work tool. This was not the case for study two respondents who only had a vague idea of what the functionalities of the EMR system would be.

This study is not without certain methodological limitations that should be considered when interpreting the results. First, the data were collected using a single, self-reported questionnaire. When self-reports are used, concerns often arise as to whether common method bias is responsible for the observed relationships. Second, our analyses were based on a single type of technology (CIS) and a single group of change recipients (healthcare professionals), which limits the generalizability of our findings. However, our study was conducted in 11 ambulatory care organizations (study one) and 10 clinical units at a large teaching hospital (study two) to ensure a certain variety in terms of context. Third, the research design used in this study also presents limitations inasmuch as it did not allow us to assess clinicians' changing perceptions of their organization's readiness for change over time. For instance, while we believe that the presence of an effective project champion influences change targets' perceptions, the champion's actions and commitment might be more influential during the subsequent implementation phase when he or she drives consensus and manages resistance to change. In a similar way, as the project progresses toward the implementation phase, leadership behaviors exercised by upper management (e.g., a clarifying vision, allocating the required financial and human resources to the project) are likely to play a greater role in the change process. Even though the argument may be difficult to support in the case of the organizational history of change, we believe that organizational conflicts and politics as well as group self-efficacy will prove to play major roles in the implementation phase; hence the importance of carrying out longitudinal studies and making a clear distinction between the pre-implementation and the implementation phases.

Some authors have argued that the management of IT-based organizational change needs to begin as early as possible. The present study represents an initial attempt at understanding the variables that affect clinicians' perceived organizational readiness for change by suggesting that vision clarity and change appropriateness, as well as change efficacy, organizational flexibility, the presence of an effective champion, and collective self-efficacy, are all important antecedents.

Our findings have several implications for both practice and research. In practical terms, conducting a pre-implementation readiness assessment will help CIS project managers and decision makers choose whether they should initiate such a project or implement less costly, preliminary steps that will prepare the organization for the anticipated change. In this light, it is interesting to note that two of the 11 sites that participated in study one have not deployed the software package because of low readiness scores. As for future research, we believe that our results raise three important issues. First, more studies are needed in order to confirm which determinants are most significant in terms of perceived organizational readiness for CIS-based change. It would also be interesting to verify which antecedents are likely to emerge, based on the particular context of the project, and those that have an impact on the perceptions of change targets, independent of the context. Second, future research should investigate the extent to which organizational readiness is predictive of successful CIS adoption. Prior studies have also revealed that perceived organizational readiness significantly influences an individual's readiness for change [58,59,65] which, in turn, is a precursor of individual adoption or resistance behaviors (see Figure 1). It would therefore be important to have an analysis of the link between the level of perceived organizational readiness and clinicians' individual readiness for CIS-based change. Third, other key predictors could be included in the research model to further increase its explanatory power.
For instance, clinicians' early perceptions of the usability of the technology per se [66] may also play a significant role in predicting clinicians' early perceptions of organizational readiness. In short, as healthcare organizations continue to invest in CIS to enhance quality and continuity of care, understanding the factors that contribute to an effective change process represents an important avenue for continued research.

The authors declare that they have no competing interests.

GP and CS participated in the design of the study and the development of the measurement instrument, carried out data collection in study one, performed the statistical analyses, and they were responsible for the writing of the manuscript. GB participated in the literature review and the development of the measurement instrument and he was responsible for collecting data in study two. PB contributed to the literature search and was involved in drafting the manuscript and revising it critically for important intellectual content. All authors read and approved the final manuscript.

Appendix: Questionnaire items (as framed in study one)

Vision Clarity (VC)
VC1. I believe there are legitimate reasons for us to introduce a new computer-based system in our unit.
VC2. We definitely need new tools to improve the way we work around here.
VC3. There are a number of rational reasons for the deployment of a new information system in our unit.
VC4. A new computer-based system is needed to improve our clinical processes.

Change Appropriateness (CA)
CA1. I think that nurses in our unit will benefit from the use of SyMO.
CA2. The deployment of SyMO will contribute to our unit's overall performance.
CA3. The deployment of SyMO matches the priorities of our unit.
CA4. The implementation of SyMO will prove to be best for our unit.

Change Efficacy (CE)
CE1. I know nurses outside our unit who had successful experiences with SyMO.
CE2. SyMO has been successfully deployed in clinical units similar to ours.
CE3. SyMO has received positive reviews in the press (e.g., newspapers, magazines, newsletters, etc.).
CE4. I believe the provincial movement toward the electronic medical record represents a driving force for the deployment of SyMO in our unit.

Top-Management Support (TMS)
TMS1. Managers in our unit are committed to the deployment of SyMO.
TMS2. Managers in our unit have stressed the importance of this change.
TMS3. Managers have sent a clear message that the deployment of SyMO will occur in our unit.
TMS4. Nurses have been encouraged to embrace the upcoming deployment of SyMO.

Champion (C)
C1. There is a champion who actively promotes the deployment of SyMO in our unit.
C2. The SyMO project has a credible and trustworthy champion.
C3. There is a champion who will be able to push the SyMO project over or around implementation hurdles.

Organizational History of Change (OHC)
OHC1. Our unit has successfully implemented other technological changes in recent years.
OHC2. Nursing staff in our unit have had negative experiences with technological projects in the past (reversed item).
OHC3. Our unit is usually successful when it undertakes all types of changes.
OHC4. Information technology initiatives have been encouraged and are common practices in our unit (removed item).

Organizational Conflicts and Politics (OCP)
OCP1. Mutual trust and cooperation among nursing staff in our unit is strong (reversed item).
OCP2. Recent attempts to change the way we work in our unit have been hindered by political forces or conditions (removed item).
OCP3. The climate in our unit is mainly characterized by conflicts and disputes.
OCP4. Staff frustration is common in our unit.

Organizational Flexibility (OF)
OF1. Our unit is structured to allow superiors to make changes quickly.
OF2. It is easy to change procedures in our unit to meet new conditions.
OF3. Getting anything changed in our unit is a long, time-consuming process.
OF4. Policies and procedures in our unit allow us to take on new challenges effectively (removed item).

Group Self-Efficacy (GSE)
SE1. All nurses in our unit are highly computer literate.
SE2. It won't take a long time before nurses in our unit feel comfortable using SyMO.
SE3. Using a computer effectively is no problem for the nursing staff in our unit.
SE4. In general, nursing staff in our unit have low computer skills (reversed item).

Organizational Readiness (OR)
OR1. I believe SyMO can be successfully implemented in our unit.
OR2. Managers should delay the deployment of SyMO in our unit (reversed item).
OR3. The deployment of SyMO in our unit is timely.
OR4. Our unit is ready to take on this technological change.
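Several of the items above are flagged as reversed. The paper does not describe its scoring procedure, so the following is a purely illustrative sketch: it recodes the reversed items on the 7-point scale (assuming the usual 8 - x reversal) and averages each construct's items into a respondent-level score, using the construct groupings and item labels from the Appendix.

```python
import pandas as pd

# Items flagged "(reversed item)" in the Appendix; the 8 - x recoding on a
# 1-7 Likert scale is an assumption, not something stated by the authors.
REVERSED = {"OHC2", "OCP1", "SE4", "OR2"}

# Construct-to-item mapping taken from the Appendix labels (subset shown).
CONSTRUCTS = {
    "VC": ["VC1", "VC2", "VC3", "VC4"],
    "GSE": ["SE1", "SE2", "SE3", "SE4"],
    "OR": ["OR1", "OR2", "OR3", "OR4"],
}


def construct_scores(responses: pd.DataFrame) -> pd.DataFrame:
    """Recode reversed items, then average each construct's items per respondent."""
    recoded = responses.copy()
    for item in REVERSED & set(recoded.columns):
        recoded[item] = 8 - recoded[item]
    return pd.DataFrame(
        {
            name: recoded[items].mean(axis=1)
            for name, items in CONSTRUCTS.items()
            if set(items) <= set(recoded.columns)
        }
    )


# Two hypothetical respondents answering only the readiness (OR) items.
answers = pd.DataFrame({"OR1": [6, 3], "OR2": [2, 5], "OR3": [6, 3], "OR4": [7, 2]})
print(construct_scores(answers))
```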
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
Histopathological cutaneous alterations in systemic sclerosis: a clinicopathological study.
21356083
The aims of the present study were to identify histopathological parameters which are linked to local clinical skin disease at two distinct anatomical sites in systemic sclerosis (SSc) patients with skin involvement (limited cutaneous systemic sclerosis (lcSSc) or diffuse cutaneous systemic sclerosis (dcSSc)) and to determine the sensitivity of SSc-specific histological alterations, focusing on SSc patients without clinical skin involvement (limited SSc (lSSc)).
INTRODUCTION
Histopathological alterations were systematically scored in skin biopsies of 53 consecutive SSc patients (dorsal forearm and upper inner arm) and 18 controls (upper inner arm). Clinical skin involvement was evaluated using the modified Rodnan skin score. In patients with lcSSc or dcSSc, associations of histopathological parameters with local clinical skin involvement were determined by generalised estimation equation modelling.
METHODS
The hyalinised collagen score, the myofibroblast score, the mean epidermal thickness, the mononuclear cellular infiltration and the frequency of focal exocytosis differed significantly between biopsies with and without local clinical skin involvement. Except for mononuclear cellular infiltration, all of the continuous parameters correlated with the local clinical skin score at the dorsal forearm. Parakeratosis, myofibroblasts and intima proliferation were present in a minority of the SSc biopsies, but not in controls. No differences were found between lSSc and controls.
RESULTS
Several histopathological parameters are linked to local clinical skin disease. SSc-specific histological alterations have a low diagnostic sensitivity.
CONCLUSIONS
[ "Adult", "Biopsy", "Female", "Humans", "Immunohistochemistry", "Male", "Middle Aged", "Scleroderma, Limited", "Scleroderma, Systemic", "Skin" ]
3241379
Results
Clinical characteristics of the patients

Skin biopsies from the dorsal forearm and the upper inner arm were obtained from 53 consecutive SSc patients (17 males and 36 females; mean age ± SD, 52 ± 12 years). Seven SSc patients (13%) had no clinical skin involvement (lSSc), and 46 SSc patients (87%) had skin involvement (29 lcSSc and 17 dcSSc patients). Twenty-five patients used methotrexate, and 11 patients used low-dose corticosteroids (< 15 mg prednisolone/day). Table 2 summarizes the features of the different SSc patient subsets. Normal skin samples taken from the upper inner arm were included as a reference set (n = 18 comprising 7 males and 11 females; mean age ± SD, 44 ± 17 years).

Table 2. Clinical data of patients with lSSc, lcSSc and dcSSc. ACR, American College of Rheumatology; ANA, antinuclear antibodies; dcSSc, diffuse cutaneous systemic sclerosis; lSSc, limited systemic sclerosis; lcSSc, limited cutaneous systemic sclerosis; mRSS, modified Rodnan skin score; U1RNP, U1-ribonucleic protein. Disease duration is counted from the first non-Raynaud's phenomenon symptom; one patient had both anticentromere and anti-topoisomerase I antibodies; range denotes the full interval between the smallest and largest values.

Associations of local clinical skin involvement and histological alterations

To examine associations of local skin disease with histological parameters, we analysed the biopsies from patients with lcSSc or dcSSc (n = 46). We found that, independently of the anatomical site of the biopsy, the hyalinised collagen score, the myofibroblast score, the mean epidermal thickness, the mononuclear cellular infiltration and the frequency of focal exocytosis differed significantly between biopsies with and without local skin involvement (α = 0.05; GEE) (Table 3). For the continuous parameters, only the epidermal thickness (r = 0.553; P < 0.001), the myofibroblast score (r = 0.507; P < 0.001) and the hyalinised collagen score (r = 0.572; P < 0.001) correlated with the local clinical skin score. No association was found between the local clinical score and the frequency of focal exocytosis (P = 0.06).

Table 3. Histological characteristics of patients with lcSSc or dcSSc. lcSSc, limited cutaneous systemic sclerosis; dcSSc, diffuse cutaneous systemic sclerosis; SD, standard deviation; NR, not reported. Generalised estimation equation (GEE) modelling was used to determine the effect of local skin involvement for each variable, correcting for subject level and biopsy site. For the parameter whose effect differed significantly between the skin sites in the GEE model, Student's t-test was used to evaluate each site separately (upper inner arm, P < 0.001; dorsal forearm, P = 0.580). For some parameters statistical testing could not be performed.

Sensitivity and specificity of histological alterations

To determine the specificity of the studied histological parameters, we analysed 18 upper inner arm biopsies from patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. Myofibroblasts, intima proliferation of deep arterioles and parakeratosis were not seen in these biopsies (Table 4). Comparison with the biopsies of the patients with lSSc (n = 7) revealed no statistically significant differences (Table 4). Analysis of all SSc biopsies showed that myofibroblasts, intima proliferation of deep arterioles and parakeratosis were present in, respectively, 22%, 19% and 5.8% of the dorsal forearm biopsies and in 14%, 14% and 0% of the upper inner arm biopsies.

Table 4. Histological characteristics of patients with lSSc and controls. lSSc, limited systemic sclerosis; NR, not reported. For some parameters statistical testing could not be performed.
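As a rough illustration of how the diagnostic sensitivity and specificity figures above translate into code, the sketch below computes both measures from biopsy counts. The counts used in the example are hypothetical placeholders; only the percentages reported above come from the study.

```python
# Minimal sketch: sensitivity/specificity of a binary histological marker.
# The example counts are hypothetical; the paper reports only percentages
# (e.g., myofibroblasts in 22% of dorsal forearm SSc biopsies and in none
# of the control biopsies).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of SSc biopsies in which the marker is present."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of control biopsies in which the marker is absent."""
    return true_neg / (true_neg + false_pos)

if __name__ == "__main__":
    # Hypothetical example: marker seen in 11 of 51 SSc biopsies, 0 of 18 controls.
    sens = sensitivity(true_pos=11, false_neg=40)
    spec = specificity(true_neg=18, false_pos=0)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```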
Conclusions
In conclusion, the systematic analysis of skin biopsies from 53 consecutive SSc patients and 18 normal controls revealed that the mean epidermal thickness, the hyalinised collagen score and the myofibroblast score are linked to local clinical skin involvement and are correlated with the local skin score. Concerning histological alterations in lSSc, we found no significant differences with control skin at the upper inner arm. Finally, myofibroblasts, intima proliferation of the deep arterioles or parakeratosis in a skin biopsy are useful diagnostic markers for SSc, although they have a low sensitivity.
[ "Introduction", "Patients", "Immunohistochemistry", "Selection of scoring parameters for SSc histological skin alterations", "Statistical analysis", "Clinical characteristics of the patients", "Associations of local clinical skin involvement and histological alterations", "Sensitivity and specificity of histological alterations", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Systemic sclerosis (SSc) is a chronic autoimmune disease which can affect the skin and various internal organs [1]. One of the hallmark clinical features of SSc is skin thickening caused by oedema and excessive accumulation of collagen-rich extracellular matrix. Apart from sclerosis, the pathogenesis of SSc is characterized by vasculopathy, which is evidenced by nailfold capillary abnormalities and Raynaud's phenomenon, as well as by the presence of antinuclear antibodies such as anticentromere, anti-topoisomerase I and anti-RNA polymerase III antibodies. The most extensively validated technique to quantify skin involvement is the modified Rodnan skin score (mRSS) [2]. In this scoring system, 17 body areas are examined by clinical palpation and scored on the basis of judgement of skin thickness on a 4-point ordinal scale. Because the extent of skin disease correlates with the disease course, patients are grouped into disease subsets on the basis of skin involvement. The currently most widely applied classification differentiates three subsets, namely, limited SSc (lSSc), limited cutaneous SSc (lcSSc) and diffuse cutaneous SSc (dcSSc) [3]. Patients with lcSSc have skin involvement confined to the fingers, hands, forearms, lower legs or the face, whereas patients with dcSSc also have more proximal skin thickening. The group classified as lSSc has Raynaud's phenomenon with nailfold capillaroscopic abnormalities and/or SSc-associated autoantibodies, but no clinical skin involvement. This subset is considered to have 'early' SSc, as a longitudinal follow-up study has demonstrated that these patients are at risk for progression to definite SSc [4].\nThe histopathology of SSc skin has recently regained interest because it may be integrated as an outcome measure in clinical trials on skin disease, whereby skin biopsies are obtained before and after administration of a therapeutic drug [5,6]. In this way, the hyalinised collagen score and the myofibroblast score have previously been correlated with local skin score and durometry score in patients with dcSSc [7]. Consequently, these parameters could potentially be used to study the effect of a drug on skin disease. Other alterations in skin histology, however, have not been linked to clinical assessment. In addition, studies on the skin histopathology of SSc patients have mainly focused on alterations in patients with established disease, leaving open the question whether patients with 'early' disease have SSc specific histological alterations.\nThe aims of this study were (1) to identify histopathological parameters which are linked with local clinical skin disease at two different anatomical sites in SSc patients with skin involvement (lcSSc or dcSSc) and (2) to determine the sensitivity of SSc-specific histological alterations, with a focus on SSc patients with lSSc.", "Skin biopsies from the dorsal forearm (at the transition of the distal one-third and the proximal two-thirds) and the mid-upper inner arm were obtained from 53 consecutive SSc patients visiting the Scleroderma Clinic of the Ghent University Hospital. From two patients, only one biopsy was available for analysis. All patients fulfilled the criteria for early SSc set forth by LeRoy and Medsger [3]. Video nailfold capillaroscopy and antinuclear antibody identification were performed as described previously [8]. Patients were assigned to the lSSc, lcSSc or dcSSc group according to the criteria published by LeRoy et al. [9]. 
Skin involvement was clinically assessed according to the 17-site mRSS, whereby local skin involvement is determined for each site, including the dorsal forearm, on a semiquantitative scale (0 = normal thickness, 1 = mild thickening, 2 = moderate thickening and 3 = severe thickening) [10]. A normal reference set was included in the analysis. This set contained the skin biopsies from the inner upper arm of patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. This study was conducted after approval of the Ghent University Hospital Ethical Committee was obtained and all patients had signed informed consent.\nFull-thickness skin biopsies (approximately 1.5 cm long and 0.5 cm wide) were surgically obtained while the patients were under local anaesthesia. Biopsy samples were stored in formaldehyde and embedded in paraffin. Sections of 5 μm were cut and stained with haematoxylin and eosin, Masson trichrome and colloidal iron according to standard techniques. Paraffin-embedded sections were also used for immunohistochemistry.", "Paraffin-embedded sections were dewaxed and heated in antigen retrieval buffer at 98°C for 20 minutes using citrate buffer (pH 6) or Tris buffer (pH 9) (Pascal: Dako, Glostrup, Denmark). After rinsing and blocking endogenous peroxidase, sections were incubated for 60 minutes with the following mouse monoclonal antibodies (mAbs): CD3 (T cells; Novocastra, Newcastle, UK) and α-smooth muscle actin (α-SMA) (myofibroblasts, clone 1A4; Dako). Parallel sections were incubated with irrelevant isotype-matched mAbs as negative controls. The sections were subsequently incubated for 15 minutes with a biotinylated antimouse secondary antibody, followed by 15-minute incubation with a streptavidin-peroxidase complex (LSAB+ Kit; Dako). The colour reaction was developed using 3-amino-9-ethylcarbazole substrate (Dako) as chromogen. Finally, the sections were counterstained with haematoxylin. All incubations were carried out at room temperature, and the sections were washed with phosphate-buffered saline between all steps.", "Studies of SSc skin histopathology have described alterations in different skin compartments, including atrophy and increased pigmentation of the epidermis [11], loss of the epidermal papillae [11], increase of melanophages ('pigment incontinence') [11], the presence of a mononuclear perivascular infiltrate and myofibroblasts [7,12-14], sclerosis [7], narrowing of arteriolar lumina in the deep vascular plexus (reticular dermis) [15] and disappearance and entrapment of dermal adnexae and calcification [11,12]. To this set of alterations, we added two key histopathological features of other scleroderma-like disorders, namely, mucin deposition and fibroplasia [16,17]. Because analysis of routine biopsies revealed the presence of telangiectasia, focal exocytosis (that is, the presence of lymphocytes in the epidermis) and parakeratosis in some SSc biopsies, these items were also added to the set of scoring parameters [18]. Scoring systems for different parameters were obtained from previous publications as much as possible. 
An overview of the skin parameters that were scored is shown in Table 1.\nOverview of the skin histology parameters and scoring systema\naα-SMA, α-smooth muscle actin; CI, colloidal iron; H&E, haematoxylin and eosin; IHC, immunohistochemistry; MT, Masson trichrome; VAS, Visual Analogue Scale.\nThe hyalinised collagen score and the myofibroblast score were assessed on a 10-cm Visual Analogue Scale [5]. The epidermal thickness was scored using a semiquantitative scale on the basis of the mean number of keratinocyte layers (stratum basale and stratum spinosum) in a zone between two rete ridges (that is, undulations of the dermoepidermal junction) in five microscopic fields chosen at random (0 = mean less than three layers, 1 = mean of three or four layers, 2 = mean of five or six layers, 3 = mean of more than six layers), a scoring system which has also been used for the synovial lining layer [14,19]. Mononuclear cellular infiltration was scored on a semiquantitative scale as 0 (few scattered cells), 1 (maximum number of cells per collection at least 10), 2 (maximum number of cells per collection between 10 and 50) or 3 (maximum number of cells per collection at least 50) [14]. Mucin deposition in the reticular dermis was evaluated by scoring the degree of acid mucopolysaccharide staining on a semiquantitative scale as negative, very slight, slight, fair or abundant [16]. Entrapment of an eccrine sweat gland was defined as the absence of any surrounding fat tissue (Figure 1A). Focal exocytosis was defined as the presence of T-lymphocytes in the epidermis at least at two distinct sites (Figure 1B). Intima proliferation in deep arterioles was considered pathologic if the vessel wall thickness exceeded the diameter of the vessel lumen (Figure 1C). Parakeratosis was defined as the presence of nuclei in the stratum corneum (Figure 1D). Pigment incontinence was defined as the presence of melanin in papillary macrophages at least at two distinct sites (Figure 1E). Telangiectasia was defined as enlarged papillary capillaries (Figure 1F).\nHistopathological alterations in the skin of SSc patients. All photographs were taken at ×200 magnification. (A) Entrapment of an eccrine sweat gland. (B) Focal exocytosis (arrow). (C) Intima proliferation in a deep arteriole. (D) Parakeratosis (arrow). (E) Pigment incontinence (arrow). (F) Telangiectasia. (A and C-F) Haematoxylin and eosin staining. (B) Anti-CD3 staining.\nStained slides were coded so that a blinded analysis could be performed. Slides from the dorsal forearm were analysed by two independent observers (MH and JTVP) who were uninformed of any clinical data. All scoring parameters were judged to be reliable, as the interobserver agreement was substantial (κ > 0.6 for all categorical parameters and intraclass correlation coefficient >0.7 for all continuous parameters). For continuous parameters, the mean of the two observers was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by the two observers. Slides from the upper inner arm were scored twice by one observer (JTVP). For continuous parameters, the mean of the two scores was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by a third evaluation. 
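The semiquantitative epidermal thickness scale described above maps a mean keratinocyte layer count onto a 0-3 score, and continuous parameters were analysed as the mean of the two observers' readings. The short sketch below is an illustrative reading of those rules as stated here, not software used in the study; the exact handling of values falling between the stated bins (e.g., a mean of 4.5 layers) is an assumption.

```python
# Illustrative sketch of the epidermal thickness scoring rule described above
# (0 = mean < 3 layers, 1 = 3-4 layers, 2 = 5-6 layers, 3 = > 6 layers) and of
# averaging two observers' continuous scores. Bin boundaries for in-between
# values are an assumption; this is not the study's actual scoring software.

def epidermal_thickness_score(mean_layers: float) -> int:
    """Map the mean number of keratinocyte layers to the 0-3 semiquantitative score."""
    if mean_layers < 3:
        return 0
    if mean_layers <= 4:
        return 1
    if mean_layers <= 6:
        return 2
    return 3

def consensus_continuous(score_observer_1: float, score_observer_2: float) -> float:
    """Continuous parameters (e.g., VAS scores) were analysed as the mean of two readings."""
    return (score_observer_1 + score_observer_2) / 2.0

if __name__ == "__main__":
    print(epidermal_thickness_score(3.6))   # mean of three or four layers -> 1
    print(epidermal_thickness_score(6.8))   # mean of more than six layers -> 3
    print(consensus_continuous(3.5, 4.0))   # hypothetical VAS readings -> 3.75
```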
Statistical analysis

In patients with lcSSc or dcSSc, associations of histopathological parameters with local clinical skin involvement were determined by generalised estimation equation (GEE) modelling with a correction for subject level and biopsy site. Local skin involvement at the dorsal forearm was defined as a local clinical score of at least 1. Since the upper inner arm local skin score is not included in the mRSS, all patients with dcSSc were considered to have local skin involvement at this site. In case a significant interaction between biopsy site and local clinical skin involvement was found in the GEE model, statistical testing for the effect of skin involvement was separately performed for the dorsal forearm and the upper inner arm. For normally distributed parameters, the Pearson correlation coefficient was used to determine the correlation with the dorsal forearm score. Otherwise, the Spearman correlation coefficient was used. For categorical parameters, Fisher's exact test was used to analyse the association with the dorsal forearm score. Mann-Whitney U test (continuous data) and Fisher's exact test (categorical data) were used to compare lSSc biopsies with normal controls. P ≤ 0.05 was considered statistically significant. All analyses were performed using PASW 18.0 software (SPSS, Inc., Chicago, IL, USA).
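For readers who want to reproduce this kind of analysis outside SPSS/PASW, the sketch below shows how a comparable GEE model and the accompanying univariate tests could be set up with statsmodels and SciPy. The column names ("score", "involved", "site", "subject") and the data frame are hypothetical; this illustrates the analysis strategy described above and is not the authors' code.

```python
# Illustrative re-implementation of the analysis strategy described above,
# using hypothetical column names. Not the authors' code; the study used
# PASW/SPSS 18.0.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

def fit_gee(df: pd.DataFrame):
    """GEE for one histological parameter: effect of local skin involvement,
    correcting for biopsy site, with repeated biopsies clustered per subject."""
    model = smf.gee(
        "score ~ involved + site",
        groups="subject",
        data=df,
        family=sm.families.Gaussian(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()

def correlation_with_local_score(values, local_scores, normal: bool):
    """Pearson for normally distributed parameters, Spearman otherwise."""
    if normal:
        return stats.pearsonr(values, local_scores)
    return stats.spearmanr(values, local_scores)

def compare_groups(continuous_a, continuous_b, table_2x2):
    """Mann-Whitney U for continuous data, Fisher's exact test for categorical data."""
    u = stats.mannwhitneyu(continuous_a, continuous_b, alternative="two-sided")
    fisher = stats.fisher_exact(table_2x2)
    return u, fisher
```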
Abbreviations

dcSSc: diffuse cutaneous systemic sclerosis; GEE: generalised estimation equation; lcSSc: limited cutaneous systemic sclerosis; lSSc: limited systemic sclerosis; mRSS: modified Rodnan Skin Score; SSc: systemic sclerosis.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JTVP, VS, MH and FDK designed the study. VS acquired the capillaroscopic and clinical data. JTVP and FDK acquired the serological data. JTVP, MH and ND acquired the histological data. JTVP, VS, MH, DE and FDK participated in the manuscript preparation and finalisation. All authors read and approved the final manuscript.
[ "Introduction", "Materials and methods", "Patients", "Immunohistochemistry", "Selection of scoring parameters for SSc histological skin alterations", "Statistical analysis", "Results", "Clinical characteristics of the patients", "Associations of local clinical skin involvement and histological alterations", "Sensitivity and specificity of histological alterations", "Discussion", "Conclusions", "Abbreviations", "Competing interests", "Authors' contributions" ]
[ "Systemic sclerosis (SSc) is a chronic autoimmune disease which can affect the skin and various internal organs [1]. One of the hallmark clinical features of SSc is skin thickening caused by oedema and excessive accumulation of collagen-rich extracellular matrix. Apart from sclerosis, the pathogenesis of SSc is characterized by vasculopathy, which is evidenced by nailfold capillary abnormalities and Raynaud's phenomenon, as well as by the presence of antinuclear antibodies such as anticentromere, anti-topoisomerase I and anti-RNA polymerase III antibodies. The most extensively validated technique to quantify skin involvement is the modified Rodnan skin score (mRSS) [2]. In this scoring system, 17 body areas are examined by clinical palpation and scored on the basis of judgement of skin thickness on a 4-point ordinal scale. Because the extent of skin disease correlates with the disease course, patients are grouped into disease subsets on the basis of skin involvement. The currently most widely applied classification differentiates three subsets, namely, limited SSc (lSSc), limited cutaneous SSc (lcSSc) and diffuse cutaneous SSc (dcSSc) [3]. Patients with lcSSc have skin involvement confined to the fingers, hands, forearms, lower legs or the face, whereas patients with dcSSc also have more proximal skin thickening. The group classified as lSSc has Raynaud's phenomenon with nailfold capillaroscopic abnormalities and/or SSc-associated autoantibodies, but no clinical skin involvement. This subset is considered to have 'early' SSc, as a longitudinal follow-up study has demonstrated that these patients are at risk for progression to definite SSc [4].\nThe histopathology of SSc skin has recently regained interest because it may be integrated as an outcome measure in clinical trials on skin disease, whereby skin biopsies are obtained before and after administration of a therapeutic drug [5,6]. In this way, the hyalinised collagen score and the myofibroblast score have previously been correlated with local skin score and durometry score in patients with dcSSc [7]. Consequently, these parameters could potentially be used to study the effect of a drug on skin disease. Other alterations in skin histology, however, have not been linked to clinical assessment. In addition, studies on the skin histopathology of SSc patients have mainly focused on alterations in patients with established disease, leaving open the question whether patients with 'early' disease have SSc specific histological alterations.\nThe aims of this study were (1) to identify histopathological parameters which are linked with local clinical skin disease at two different anatomical sites in SSc patients with skin involvement (lcSSc or dcSSc) and (2) to determine the sensitivity of SSc-specific histological alterations, with a focus on SSc patients with lSSc.", "[SUBTITLE] Patients [SUBSECTION] Skin biopsies from the dorsal forearm (at the transition of the distal one-third and the proximal two-thirds) and the mid-upper inner arm were obtained from 53 consecutive SSc patients visiting the Scleroderma Clinic of the Ghent University Hospital. From two patients, only one biopsy was available for analysis. All patients fulfilled the criteria for early SSc set forth by LeRoy and Medsger [3]. Video nailfold capillaroscopy and antinuclear antibody identification were performed as described previously [8]. Patients were assigned to the lSSc, lcSSc or dcSSc group according to the criteria published by LeRoy et al. [9]. 
Skin involvement was clinically assessed according to the 17-site mRSS, whereby local skin involvement is determined for each site, including the dorsal forearm, on a semiquantitative scale (0 = normal thickness, 1 = mild thickening, 2 = moderate thickening and 3 = severe thickening) [10]. A normal reference set was included in the analysis. This set contained the skin biopsies from the inner upper arm of patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. This study was conducted after approval of the Ghent University Hospital Ethical Committee was obtained and all patients had signed informed consent.\nFull-thickness skin biopsies (approximately 1.5 cm long and 0.5 cm wide) were surgically obtained while the patients were under local anaesthesia. Biopsy samples were stored in formaldehyde and embedded in paraffin. Sections of 5 μm were cut and stained with haematoxylin and eosin, Masson trichrome and colloidal iron according to standard techniques. Paraffin-embedded sections were also used for immunohistochemistry.\nSkin biopsies from the dorsal forearm (at the transition of the distal one-third and the proximal two-thirds) and the mid-upper inner arm were obtained from 53 consecutive SSc patients visiting the Scleroderma Clinic of the Ghent University Hospital. From two patients, only one biopsy was available for analysis. All patients fulfilled the criteria for early SSc set forth by LeRoy and Medsger [3]. Video nailfold capillaroscopy and antinuclear antibody identification were performed as described previously [8]. Patients were assigned to the lSSc, lcSSc or dcSSc group according to the criteria published by LeRoy et al. [9]. Skin involvement was clinically assessed according to the 17-site mRSS, whereby local skin involvement is determined for each site, including the dorsal forearm, on a semiquantitative scale (0 = normal thickness, 1 = mild thickening, 2 = moderate thickening and 3 = severe thickening) [10]. A normal reference set was included in the analysis. This set contained the skin biopsies from the inner upper arm of patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. This study was conducted after approval of the Ghent University Hospital Ethical Committee was obtained and all patients had signed informed consent.\nFull-thickness skin biopsies (approximately 1.5 cm long and 0.5 cm wide) were surgically obtained while the patients were under local anaesthesia. Biopsy samples were stored in formaldehyde and embedded in paraffin. Sections of 5 μm were cut and stained with haematoxylin and eosin, Masson trichrome and colloidal iron according to standard techniques. Paraffin-embedded sections were also used for immunohistochemistry.\n[SUBTITLE] Immunohistochemistry [SUBSECTION] Paraffin-embedded sections were dewaxed and heated in antigen retrieval buffer at 98°C for 20 minutes using citrate buffer (pH 6) or Tris buffer (pH 9) (Pascal: Dako, Glostrup, Denmark). After rinsing and blocking endogenous peroxidase, sections were incubated for 60 minutes with the following mouse monoclonal antibodies (mAbs): CD3 (T cells; Novocastra, Newcastle, UK) and α-smooth muscle actin (α-SMA) (myofibroblasts, clone 1A4; Dako). Parallel sections were incubated with irrelevant isotype-matched mAbs as negative controls. 
The sections were subsequently incubated for 15 minutes with a biotinylated antimouse secondary antibody, followed by 15-minute incubation with a streptavidin-peroxidase complex (LSAB+ Kit; Dako). The colour reaction was developed using 3-amino-9-ethylcarbazole substrate (Dako) as chromogen. Finally, the sections were counterstained with haematoxylin. All incubations were carried out at room temperature, and the sections were washed with phosphate-buffered saline between all steps.\nParaffin-embedded sections were dewaxed and heated in antigen retrieval buffer at 98°C for 20 minutes using citrate buffer (pH 6) or Tris buffer (pH 9) (Pascal: Dako, Glostrup, Denmark). After rinsing and blocking endogenous peroxidase, sections were incubated for 60 minutes with the following mouse monoclonal antibodies (mAbs): CD3 (T cells; Novocastra, Newcastle, UK) and α-smooth muscle actin (α-SMA) (myofibroblasts, clone 1A4; Dako). Parallel sections were incubated with irrelevant isotype-matched mAbs as negative controls. The sections were subsequently incubated for 15 minutes with a biotinylated antimouse secondary antibody, followed by 15-minute incubation with a streptavidin-peroxidase complex (LSAB+ Kit; Dako). The colour reaction was developed using 3-amino-9-ethylcarbazole substrate (Dako) as chromogen. Finally, the sections were counterstained with haematoxylin. All incubations were carried out at room temperature, and the sections were washed with phosphate-buffered saline between all steps.\n[SUBTITLE] Selection of scoring parameters for SSc histological skin alterations [SUBSECTION] Studies of SSc skin histopathology have described alterations in different skin compartments, including atrophy and increased pigmentation of the epidermis [11], loss of the epidermal papillae [11], increase of melanophages ('pigment incontinence') [11], the presence of a mononuclear perivascular infiltrate and myofibroblasts [7,12-14], sclerosis [7], narrowing of arteriolar lumina in the deep vascular plexus (reticular dermis) [15] and disappearance and entrapment of dermal adnexae and calcification [11,12]. To this set of alterations, we added two key histopathological features of other scleroderma-like disorders, namely, mucin deposition and fibroplasia [16,17]. Because analysis of routine biopsies revealed the presence of telangiectasia, focal exocytosis (that is, the presence of lymphocytes in the epidermis) and parakeratosis in some SSc biopsies, these items were also added to the set of scoring parameters [18]. Scoring systems for different parameters were obtained from previous publications as much as possible. An overview of the skin parameters that were scored is shown in Table 1.\nOverview of the skin histology parameters and scoring systema\naα-SMA, α-smooth muscle actin; CI, colloidal iron; H&E, haematoxylin and eosin; IHC, immunohistochemistry; MT, Masson trichrome; VAS, Visual Analogue Scale.\nThe hyalinised collagen score and the myofibroblast score were assessed on a 10-cm Visual Analogue Scale [5]. The epidermal thickness was scored using a semiquantitative scale on the basis of the mean number of keratinocyte layers (stratum basale and stratum spinosum) in a zone between two rete ridges (that is, undulations of the dermoepidermal junction) in five microscopic fields chosen at random (0 = mean less than three layers, 1 = mean of three or four layers, 2 = mean of five or six layers, 3 = mean of more than six layers), a scoring system which has also been used for the synovial lining layer [14,19]. 
Mononuclear cellular infiltration was scored on a semiquantitative scale as 0 (few scattered cells), 1 (maximum number of cells per collection at least 10), 2 (maximum number of cells per collection between 10 and 50) or 3 (maximum number of cells per collection at least 50) [14]. Mucin deposition in the reticular dermis was evaluated by scoring the degree of acid mucopolysaccharide staining on a semiquantitative scale as negative, very slight, slight, fair or abundant [16]. Entrapment of an eccrine sweat gland was defined as the absence of any surrounding fat tissue (Figure 1A). Focal exocytosis was defined as the presence of T-lymphocytes in the epidermis at least at two distinct sites (Figure 1B). Intima proliferation in deep arterioles was considered pathologic if the vessel wall thickness exceeded the diameter of the vessel lumen (Figure 1C). Parakeratosis was defined as the presence of nuclei in the stratum corneum (Figure 1D). Pigment incontinence was defined as the presence of melanin in papillary macrophages at least at two distinct sites (Figure 1E). Telangiectasia was defined as enlarged papillary capillaries (Figure 1F).\nHistopathological alterations in the skin of SSc patients. All photographs were taken at ×200 magnification. (A) Entrapment of an eccrine sweat gland. (B) Focal exocytosis (arrow). (C) Intima proliferation in a deep arteriole. (D) Parakeratosis (arrow). (E) Pigment incontinence (arrow). (F) Telangiectasia. (A and C-F) Haematoxylin and eosin staining. (B) Anti-CD3 staining.\nStained slides were coded so that a blinded analysis could be performed. Slides from the dorsal forearm were analysed by two independent observers (MH and JTVP) who were uninformed of any clinical data. All scoring parameters were judged to be reliable, as the interobserver agreement was substantial (κ > 0.6 for all categorical parameters and intraclass correlation coefficient >0.7 for all continuous parameters). For continuous parameters, the mean of the two observers was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by the two observers. Slides from the upper inner arm were scored twice by one observer (JTVP). For continuous parameters, the mean of the two scores was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by a third evaluation. Because fibroplasia and calcification were not observed in a single biopsy, these items were left out of all analyses.\nStudies of SSc skin histopathology have described alterations in different skin compartments, including atrophy and increased pigmentation of the epidermis [11], loss of the epidermal papillae [11], increase of melanophages ('pigment incontinence') [11], the presence of a mononuclear perivascular infiltrate and myofibroblasts [7,12-14], sclerosis [7], narrowing of arteriolar lumina in the deep vascular plexus (reticular dermis) [15] and disappearance and entrapment of dermal adnexae and calcification [11,12]. To this set of alterations, we added two key histopathological features of other scleroderma-like disorders, namely, mucin deposition and fibroplasia [16,17]. Because analysis of routine biopsies revealed the presence of telangiectasia, focal exocytosis (that is, the presence of lymphocytes in the epidermis) and parakeratosis in some SSc biopsies, these items were also added to the set of scoring parameters [18]. Scoring systems for different parameters were obtained from previous publications as much as possible. 
An overview of the skin parameters that were scored is shown in Table 1.\nOverview of the skin histology parameters and scoring systema\naα-SMA, α-smooth muscle actin; CI, colloidal iron; H&E, haematoxylin and eosin; IHC, immunohistochemistry; MT, Masson trichrome; VAS, Visual Analogue Scale.\nThe hyalinised collagen score and the myofibroblast score were assessed on a 10-cm Visual Analogue Scale [5]. The epidermal thickness was scored using a semiquantitative scale on the basis of the mean number of keratinocyte layers (stratum basale and stratum spinosum) in a zone between two rete ridges (that is, undulations of the dermoepidermal junction) in five microscopic fields chosen at random (0 = mean less than three layers, 1 = mean of three or four layers, 2 = mean of five or six layers, 3 = mean of more than six layers), a scoring system which has also been used for the synovial lining layer [14,19]. Mononuclear cellular infiltration was scored on a semiquantitative scale as 0 (few scattered cells), 1 (maximum number of cells per collection at least 10), 2 (maximum number of cells per collection between 10 and 50) or 3 (maximum number of cells per collection at least 50) [14]. Mucin deposition in the reticular dermis was evaluated by scoring the degree of acid mucopolysaccharide staining on a semiquantitative scale as negative, very slight, slight, fair or abundant [16]. Entrapment of an eccrine sweat gland was defined as the absence of any surrounding fat tissue (Figure 1A). Focal exocytosis was defined as the presence of T-lymphocytes in the epidermis at least at two distinct sites (Figure 1B). Intima proliferation in deep arterioles was considered pathologic if the vessel wall thickness exceeded the diameter of the vessel lumen (Figure 1C). Parakeratosis was defined as the presence of nuclei in the stratum corneum (Figure 1D). Pigment incontinence was defined as the presence of melanin in papillary macrophages at least at two distinct sites (Figure 1E). Telangiectasia was defined as enlarged papillary capillaries (Figure 1F).\nHistopathological alterations in the skin of SSc patients. All photographs were taken at ×200 magnification. (A) Entrapment of an eccrine sweat gland. (B) Focal exocytosis (arrow). (C) Intima proliferation in a deep arteriole. (D) Parakeratosis (arrow). (E) Pigment incontinence (arrow). (F) Telangiectasia. (A and C-F) Haematoxylin and eosin staining. (B) Anti-CD3 staining.\nStained slides were coded so that a blinded analysis could be performed. Slides from the dorsal forearm were analysed by two independent observers (MH and JTVP) who were uninformed of any clinical data. All scoring parameters were judged to be reliable, as the interobserver agreement was substantial (κ > 0.6 for all categorical parameters and intraclass correlation coefficient >0.7 for all continuous parameters). For continuous parameters, the mean of the two observers was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by the two observers. Slides from the upper inner arm were scored twice by one observer (JTVP). For continuous parameters, the mean of the two scores was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by a third evaluation. 
Because fibroplasia and calcification were not observed in a single biopsy, these items were left out of all analyses.\n[SUBTITLE] Statistical analysis [SUBSECTION] In patients with lcSSc or dcSSc, associations of histopathological parameters with local clinical skin involvement were determined by generalised estimation equation (GEE) modelling with a correction for subject level and biopsy site. Local skin involvement at the dorsal forearm was defined as a local clinical score of at least 1. Since the upper inner arm local skin score is not included in the mRSS, all patients with dcSSc were considered to have local skin involvement at this site. In case a significant interaction between biopsy site and local clinical skin involvement was found in the GEE model, statistical testing for the effect of skin involvement was separately performed for the dorsal forearm and the upper inner arm. For normally distributed parameters, the Pearson correlation coefficient was used to determine the correlation with the dorsal forearm score. Otherwise, the Spearman correlation coefficient was used. For categorical parameters, Fisher's exact test was used to analyse the association with the dorsal forearm score. Mann-Whitney U test (continuous data) and Fisher's exact test (categorical data) were used to compare lSSc biopsies with normal controls. P ≤ 0.05 was considered statistically significant. All analyses were performed using PASW 18.0 software (SPSS, Inc., Chicago, IL, USA).\nIn patients with lcSSc or dcSSc, associations of histopathological parameters with local clinical skin involvement were determined by generalised estimation equation (GEE) modelling with a correction for subject level and biopsy site. Local skin involvement at the dorsal forearm was defined as a local clinical score of at least 1. Since the upper inner arm local skin score is not included in the mRSS, all patients with dcSSc were considered to have local skin involvement at this site. In case a significant interaction between biopsy site and local clinical skin involvement was found in the GEE model, statistical testing for the effect of skin involvement was separately performed for the dorsal forearm and the upper inner arm. For normally distributed parameters, the Pearson correlation coefficient was used to determine the correlation with the dorsal forearm score. Otherwise, the Spearman correlation coefficient was used. For categorical parameters, Fisher's exact test was used to analyse the association with the dorsal forearm score. Mann-Whitney U test (continuous data) and Fisher's exact test (categorical data) were used to compare lSSc biopsies with normal controls. P ≤ 0.05 was considered statistically significant. All analyses were performed using PASW 18.0 software (SPSS, Inc., Chicago, IL, USA).", "Skin biopsies from the dorsal forearm (at the transition of the distal one-third and the proximal two-thirds) and the mid-upper inner arm were obtained from 53 consecutive SSc patients visiting the Scleroderma Clinic of the Ghent University Hospital. From two patients, only one biopsy was available for analysis. All patients fulfilled the criteria for early SSc set forth by LeRoy and Medsger [3]. Video nailfold capillaroscopy and antinuclear antibody identification were performed as described previously [8]. Patients were assigned to the lSSc, lcSSc or dcSSc group according to the criteria published by LeRoy et al. [9]. 
Skin involvement was clinically assessed according to the 17-site mRSS, whereby local skin involvement is determined for each site, including the dorsal forearm, on a semiquantitative scale (0 = normal thickness, 1 = mild thickening, 2 = moderate thickening and 3 = severe thickening) [10]. A normal reference set was included in the analysis. This set contained the skin biopsies from the inner upper arm of patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. This study was conducted after approval of the Ghent University Hospital Ethical Committee was obtained and all patients had signed informed consent.\nFull-thickness skin biopsies (approximately 1.5 cm long and 0.5 cm wide) were surgically obtained while the patients were under local anaesthesia. Biopsy samples were stored in formaldehyde and embedded in paraffin. Sections of 5 μm were cut and stained with haematoxylin and eosin, Masson trichrome and colloidal iron according to standard techniques. Paraffin-embedded sections were also used for immunohistochemistry.", "Paraffin-embedded sections were dewaxed and heated in antigen retrieval buffer at 98°C for 20 minutes using citrate buffer (pH 6) or Tris buffer (pH 9) (Pascal: Dako, Glostrup, Denmark). After rinsing and blocking endogenous peroxidase, sections were incubated for 60 minutes with the following mouse monoclonal antibodies (mAbs): CD3 (T cells; Novocastra, Newcastle, UK) and α-smooth muscle actin (α-SMA) (myofibroblasts, clone 1A4; Dako). Parallel sections were incubated with irrelevant isotype-matched mAbs as negative controls. The sections were subsequently incubated for 15 minutes with a biotinylated antimouse secondary antibody, followed by 15-minute incubation with a streptavidin-peroxidase complex (LSAB+ Kit; Dako). The colour reaction was developed using 3-amino-9-ethylcarbazole substrate (Dako) as chromogen. Finally, the sections were counterstained with haematoxylin. All incubations were carried out at room temperature, and the sections were washed with phosphate-buffered saline between all steps.", "Studies of SSc skin histopathology have described alterations in different skin compartments, including atrophy and increased pigmentation of the epidermis [11], loss of the epidermal papillae [11], increase of melanophages ('pigment incontinence') [11], the presence of a mononuclear perivascular infiltrate and myofibroblasts [7,12-14], sclerosis [7], narrowing of arteriolar lumina in the deep vascular plexus (reticular dermis) [15] and disappearance and entrapment of dermal adnexae and calcification [11,12]. To this set of alterations, we added two key histopathological features of other scleroderma-like disorders, namely, mucin deposition and fibroplasia [16,17]. Because analysis of routine biopsies revealed the presence of telangiectasia, focal exocytosis (that is, the presence of lymphocytes in the epidermis) and parakeratosis in some SSc biopsies, these items were also added to the set of scoring parameters [18]. Scoring systems for different parameters were obtained from previous publications as much as possible. 
An overview of the skin parameters that were scored is shown in Table 1.\nOverview of the skin histology parameters and scoring systema\naα-SMA, α-smooth muscle actin; CI, colloidal iron; H&E, haematoxylin and eosin; IHC, immunohistochemistry; MT, Masson trichrome; VAS, Visual Analogue Scale.\nThe hyalinised collagen score and the myofibroblast score were assessed on a 10-cm Visual Analogue Scale [5]. The epidermal thickness was scored using a semiquantitative scale on the basis of the mean number of keratinocyte layers (stratum basale and stratum spinosum) in a zone between two rete ridges (that is, undulations of the dermoepidermal junction) in five microscopic fields chosen at random (0 = mean less than three layers, 1 = mean of three or four layers, 2 = mean of five or six layers, 3 = mean of more than six layers), a scoring system which has also been used for the synovial lining layer [14,19]. Mononuclear cellular infiltration was scored on a semiquantitative scale as 0 (few scattered cells), 1 (maximum number of cells per collection at least 10), 2 (maximum number of cells per collection between 10 and 50) or 3 (maximum number of cells per collection at least 50) [14]. Mucin deposition in the reticular dermis was evaluated by scoring the degree of acid mucopolysaccharide staining on a semiquantitative scale as negative, very slight, slight, fair or abundant [16]. Entrapment of an eccrine sweat gland was defined as the absence of any surrounding fat tissue (Figure 1A). Focal exocytosis was defined as the presence of T-lymphocytes in the epidermis at least at two distinct sites (Figure 1B). Intima proliferation in deep arterioles was considered pathologic if the vessel wall thickness exceeded the diameter of the vessel lumen (Figure 1C). Parakeratosis was defined as the presence of nuclei in the stratum corneum (Figure 1D). Pigment incontinence was defined as the presence of melanin in papillary macrophages at least at two distinct sites (Figure 1E). Telangiectasia was defined as enlarged papillary capillaries (Figure 1F).\nHistopathological alterations in the skin of SSc patients. All photographs were taken at ×200 magnification. (A) Entrapment of an eccrine sweat gland. (B) Focal exocytosis (arrow). (C) Intima proliferation in a deep arteriole. (D) Parakeratosis (arrow). (E) Pigment incontinence (arrow). (F) Telangiectasia. (A and C-F) Haematoxylin and eosin staining. (B) Anti-CD3 staining.\nStained slides were coded so that a blinded analysis could be performed. Slides from the dorsal forearm were analysed by two independent observers (MH and JTVP) who were uninformed of any clinical data. All scoring parameters were judged to be reliable, as the interobserver agreement was substantial (κ > 0.6 for all categorical parameters and intraclass correlation coefficient >0.7 for all continuous parameters). For continuous parameters, the mean of the two observers was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by the two observers. Slides from the upper inner arm were scored twice by one observer (JTVP). For continuous parameters, the mean of the two scores was used for analysis. In case of a discrepant score for a categorical parameter, a consensus was determined by a third evaluation. 
Because fibroplasia and calcification were not observed in a single biopsy, these items were left out of all analyses.", "In patients with lcSSc or dcSSc, associations of histopathological parameters with local clinical skin involvement were determined by generalised estimation equation (GEE) modelling with a correction for subject level and biopsy site. Local skin involvement at the dorsal forearm was defined as a local clinical score of at least 1. Since the upper inner arm local skin score is not included in the mRSS, all patients with dcSSc were considered to have local skin involvement at this site. In case a significant interaction between biopsy site and local clinical skin involvement was found in the GEE model, statistical testing for the effect of skin involvement was separately performed for the dorsal forearm and the upper inner arm. For normally distributed parameters, the Pearson correlation coefficient was used to determine the correlation with the dorsal forearm score. Otherwise, the Spearman correlation coefficient was used. For categorical parameters, Fisher's exact test was used to analyse the association with the dorsal forearm score. Mann-Whitney U test (continuous data) and Fisher's exact test (categorical data) were used to compare lSSc biopsies with normal controls. P ≤ 0.05 was considered statistically significant. All analyses were performed using PASW 18.0 software (SPSS, Inc., Chicago, IL, USA).", "[SUBTITLE] Clinical characteristics of the patients [SUBSECTION] Skin biopsies from the dorsal forearm and the upper inner arm were obtained from 53 consecutive SSc patients (17 males and 36 females; mean age ± SD, 52 ± 12 years). Seven SSc patients (13%) had no clinical skin involvement (lSSc), and 46 SSc patients (87%) had skin involvement (29 lcSSc and 17 dcSSc patients). Twenty-five patients used methotrexate, and 11 patients used low-dose corticosteroids (< 15 mg prednisolone/day). Table 2 summarizes the features of the different SSc patient subsets. Normal skin samples taken from the upper inner arm were included as a reference set (n = 18 comprising 7 males and 11 females; mean age ± SD, 44 ± 17 years).\nClinical data of patients with lSSc, lcSSc and dcSSca\naACR, American College of Rheumatology; ANA, antinuclear antibodies; dcSSc, diffuse cutaneous systemic sclerosis; lSSc, limited systemic sclerosis; lcSSc, limited cutaneous systemic sclerosis; mRSS, modified Rodnan skin score; U1RNP, U1-ribonucleic protein; bdisease duration from first non-Raynaud's phenomenon symptom; cone patient had both anticentromere and anti-topoisomerase I antibodies. crange denotes the full interval between the smallest and largest values\nSkin biopsies from the dorsal forearm and the upper inner arm were obtained from 53 consecutive SSc patients (17 males and 36 females; mean age ± SD, 52 ± 12 years). Seven SSc patients (13%) had no clinical skin involvement (lSSc), and 46 SSc patients (87%) had skin involvement (29 lcSSc and 17 dcSSc patients). Twenty-five patients used methotrexate, and 11 patients used low-dose corticosteroids (< 15 mg prednisolone/day). Table 2 summarizes the features of the different SSc patient subsets. 
Normal skin samples taken from the upper inner arm were included as a reference set (n = 18 comprising 7 males and 11 females; mean age ± SD, 44 ± 17 years).\nClinical data of patients with lSSc, lcSSc and dcSSca\naACR, American College of Rheumatology; ANA, antinuclear antibodies; dcSSc, diffuse cutaneous systemic sclerosis; lSSc, limited systemic sclerosis; lcSSc, limited cutaneous systemic sclerosis; mRSS, modified Rodnan skin score; U1RNP, U1-ribonucleic protein; bdisease duration from first non-Raynaud's phenomenon symptom; cone patient had both anticentromere and anti-topoisomerase I antibodies. crange denotes the full interval between the smallest and largest values\n[SUBTITLE] Associations of local clinical skin involvement and histological alterations [SUBSECTION] To examine associations of local skin disease with histological parameters, we analysed the biopsies from patients with lcSSc or dcSSc (n = 46). We found that, independently of the anatomical site of the biopsy, the hyalinised collagen score, the myofibroblast score, the mean epidermal thickness, the mononuclear cellular infiltration and the frequency of focal exocytosis differed significantly between biopsies with and without local skin involvement (α = 0.05; GEE) (Table 3). For the continuous parameters, only the epidermal thickness (r = 0.553; P < 0.001), the myofibroblast score (r = 0.507; P < 0.001) and the hyalinised collagen score (r = 0.572; P < 0.001) correlated with the local clinical skin score. No association was found between the local clinical score and the frequency of focal exocytosis (P = 0.06).\nHistological characteristics of patients with lcSSc or dcSSca\nalcSSc, limited cutaneous systemic sclerosis; dcSSc, diffuse cutaneous systemic sclerosis; SD, standard deviation; NR, not reported; b generalised estimation equation (GEE) modelling was used to determine the effect of local skin involvement for each variable, correcting for subject level and biopsy site; cin the GEE model, the effect of this parameter differed significantly between the skin sites (Student's t-test was used to evaluate the upper inner arm, P < 0.001, and the dorsal forearm, P = 0.580); dstatistical testing could not be performed.\nTo examine associations of local skin disease with histological parameters, we analysed the biopsies from patients with lcSSc or dcSSc (n = 46). We found that, independently of the anatomical site of the biopsy, the hyalinised collagen score, the myofibroblast score, the mean epidermal thickness, the mononuclear cellular infiltration and the frequency of focal exocytosis differed significantly between biopsies with and without local skin involvement (α = 0.05; GEE) (Table 3). For the continuous parameters, only the epidermal thickness (r = 0.553; P < 0.001), the myofibroblast score (r = 0.507; P < 0.001) and the hyalinised collagen score (r = 0.572; P < 0.001) correlated with the local clinical skin score. 
No association was found between the local clinical score and the frequency of focal exocytosis (P = 0.06).\nHistological characteristics of patients with lcSSc or dcSSca\nalcSSc, limited cutaneous systemic sclerosis; dcSSc, diffuse cutaneous systemic sclerosis; SD, standard deviation; NR, not reported; b generalised estimation equation (GEE) modelling was used to determine the effect of local skin involvement for each variable, correcting for subject level and biopsy site; cin the GEE model, the effect of this parameter differed significantly between the skin sites (Student's t-test was used to evaluate the upper inner arm, P < 0.001, and the dorsal forearm, P = 0.580); dstatistical testing could not be performed.\n[SUBTITLE] Sensitivity and specificity of histological alterations [SUBSECTION] To determine the specificity of the studied histological parameters, we analysed 18 upper inner arm biopsies from patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. Myofibroblasts, intima proliferation of deep arterioles and parakeratosis were not seen in these biopsies (Table 4). Comparison with the biopsies of the patients with lSSc (n = 7) revealed no statistically significant differences (Table 4). Analysis of all SSc biopsies showed that myofibroblasts, intima proliferation of deep arterioles and parakeratosis were present in, respectively, 22%, 19% and 5.8% of the dorsal forearm biopsies and in 14%, 14% and 0% of the upper inner arm biopsies.\nHistological characteristics of patients with lSSc and controlsa\nalSSc, limited systemic sclerosis; NR, not reported; bstatistical testing could not be performed.\nTo determine the specificity of the studied histological parameters, we analysed 18 upper inner arm biopsies from patients who were referred for a lupus band test and in whom further evaluation excluded any specified autoimmune disease. Myofibroblasts, intima proliferation of deep arterioles and parakeratosis were not seen in these biopsies (Table 4). Comparison with the biopsies of the patients with lSSc (n = 7) revealed no statistically significant differences (Table 4). Analysis of all SSc biopsies showed that myofibroblasts, intima proliferation of deep arterioles and parakeratosis were present in, respectively, 22%, 19% and 5.8% of the dorsal forearm biopsies and in 14%, 14% and 0% of the upper inner arm biopsies.\nHistological characteristics of patients with lSSc and controlsa\nalSSc, limited systemic sclerosis; NR, not reported; bstatistical testing could not be performed.", "Skin biopsies from the dorsal forearm and the upper inner arm were obtained from 53 consecutive SSc patients (17 males and 36 females; mean age ± SD, 52 ± 12 years). Seven SSc patients (13%) had no clinical skin involvement (lSSc), and 46 SSc patients (87%) had skin involvement (29 lcSSc and 17 dcSSc patients). Twenty-five patients used methotrexate, and 11 patients used low-dose corticosteroids (< 15 mg prednisolone/day). Table 2 summarizes the features of the different SSc patient subsets. 
Normal skin samples taken from the upper inner arm were included as a reference set (n = 18 comprising 7 males and 11 females; mean age ± SD, 44 ± 17 years).\nClinical data of patients with lSSc, lcSSc and dcSSca\na ACR, American College of Rheumatology; ANA, antinuclear antibodies; dcSSc, diffuse cutaneous systemic sclerosis; lSSc, limited systemic sclerosis; lcSSc, limited cutaneous systemic sclerosis; mRSS, modified Rodnan skin score; U1RNP, U1 ribonucleoprotein; b disease duration from first non-Raynaud's phenomenon symptom; c one patient had both anticentromere and anti-topoisomerase I antibodies; d range denotes the full interval between the smallest and largest values", "Because of the rarity of the disease, few studies have systematically addressed the skin histopathology of SSc. Previous studies may have been hampered by bias due to the inclusion of only dcSSc patients [7], by failure to perform biopsies at the same anatomical site in all patients [15] or by failure to include a normal control group or to link alterations to clinical scoring [11,13]. 
The present study was designed to overcome these issues and to identify histopathological alterations in the skin of SSc patients which are linked to clinical skin scoring or might have diagnostic relevance.\nThe results of this study show that independent of the biopsy site (dorsal forearm or inner upper arm), the hyalinised collagen score, the myofibroblast score and the mean epidermal thickness are associated with the presence of local clinical skin involvement. Furthermore, these three parameters correlated well with the local clinical skin score at the dorsal forearm. In agreement with our data, Kissin et al. [7] reported a good correlation of the hyalinised collagen score and the myofibroblast score with clinical scoring in patients with dcSSc. The link between histopathological alterations and clinical assessment at two independent skin sites in a large set of SSc biopsies, including patients from different disease subsets and with early and late disease, suggests these parameters might be potential candidates as outcome measures in clinical trials on skin disease. However, longitudinal studies should address their sensitivity to change before they can be considered validated measures [20]. Given that the link with clinical scoring was independent of the skin site, our data indicate that in patients with dcSSc, biopsies from the upper inner arm may be used to study histological parameters.\nOne interesting finding of our study is the link between clinical skin scoring and epidermal changes. Apart from the increased mean epidermal thickness in biopsies with local clinical skin involvement, we also found parakeratosis in a minority of clinically involved skin biopsies from the dorsal forearm, which points to a disturbance of epidermal differentiation [21]. Consistent with these results, Aden et al. [22] demonstrated that the epidermis in involved SSc skin shows thickening and altered differentiation, mimicking an active wound-healing phenotype. In contrast, older literature reported atrophy of the epidermis in SSc skin biopsies [11]. Also, we found that local clinical skin involvement was associated with a higher epidermal pigmentation at the upper inner arm but not at the dorsal forearm, which is probably related to the different sun exposure of the two biopsy sites.\nA second research aim of this study was to determine the sensitivity of SSc-specific histological alterations, focusing on SSc patients without clinical skin involvement (lSSc). These patients have Raynaud's phenomenon and SSc-associated antinuclear antibodies and/or nailfold capillaroscopic alterations without skin involvement. At the upper inner arm, we found no differences between lSSc and control biopsies. Concerning the specificity of histological parameters for SSc, we could not detect parakeratosis, myofibroblasts or intima proliferation of the deep arterioles in controls. However, these alterations were present in only a minority of the SSc biopsies. Thus, SSc-specific histological alterations have a low diagnostic sensitivity.", "In conclusion, the systematic analysis of skin biopsies from 53 consecutive SSc patients and 18 normal controls revealed that the mean epidermal thickness, the hyalinised collagen score and the myofibroblast score are linked to local clinical skin involvement and are correlated with the local skin score. Concerning histological alterations in lSSc, we found no significant differences with control skin at the upper inner arm. 
Finally, myofibroblasts, intima proliferation of the deep arterioles or parakeratosis in a skin biopsy are useful diagnostic markers for SSc, although they have a low sensitivity.", "dcSSc: diffuse cutaneous systemic sclerosis; GEE: generalised estimation equation; lcSSc: limited cutaneous systemic sclerosis; lSSc: limited systemic sclerosis; mRSS: modified Rodnan Skin Score; SSc: systemic sclerosis.", "The authors declare that they have no competing interests.", "JTVP, VS, MH and FDK designed the study. VS acquired the capillaroscopic and clinical data. JTVP and FDK acquired the serological data. JTVP, MH and ND acquired the histological data. JTVP, VS, MH, DE and FDK participated in the manuscript preparation and finalisation. All authors read and approved the final manuscript." ]
[ null, "materials|methods", null, null, null, null, "results", null, null, null, "discussion", "conclusions", null, null, null ]
[]
Assessing quality and completeness of human transcriptional regulatory pathways on a genome-wide scale.
21356087
Pathway databases are becoming increasingly important and almost omnipresent in most types of biological and translational research. However, little is known about the quality and completeness of pathways stored in these databases. The present study conducts a comprehensive assessment of transcriptional regulatory pathways in humans for seven well-studied transcription factors: MYC, NOTCH1, BCL6, TP53, AR, STAT1, and RELA. The employed benchmarking methodology first involves integrating genome-wide binding with functional gene expression data to derive direct targets of transcription factors. Then the lists of experimentally obtained direct targets are compared with relevant lists of transcriptional targets from 10 commonly used pathway databases.
BACKGROUND
This article was reviewed by Prof. Wing Hung Wong, Dr. Thiago Motta Venancio (nominated by Dr. L Aravind), and Prof. Geoff J McLachlan.
REVIEWERS
The results of this study show that for the majority of pathway databases, the overlap between experimentally obtained target genes and targets reported in transcriptional regulatory pathway databases is surprisingly small and often is not statistically significant. The only exception is the MetaCore pathway database, which yields a statistically significant intersection with experimental results in 84% of cases. Additionally, we suggest that the lists of experimentally derived direct targets obtained in this study can be used to reveal new biological insight into transcriptional regulation and to suggest novel putative therapeutic targets in cancer.
RESULTS
Our study opens a debate on the validity of using many popular pathway databases to obtain transcriptional regulatory targets. We conclude that the choice of pathway databases should be informed by solid scientific evidence and rigorous empirical evaluation.
CONCLUSIONS
[ "Databases, Genetic", "Gene Regulatory Networks", "Genome, Human", "Humans", "Reference Standards", "Transcription Factors", "Transcription, Genetic" ]
3055855
null
null
Methods
[SUBTITLE] Functional expression data [SUBSECTION] The functional expression data was obtained from eleven previously published gene expression microarray and RNA-seq studies [9-19], where the transcription factor in question was knocked-down or over-expressed (see Table S1 in Additional File 1). In order to achieve statistical significance of our results, we selected only datasets with at least eight samples in total and three samples per condition. The functional expression data was obtained from eleven previously published gene expression microarray and RNA-seq studies [9-19], where the transcription factor in question was knocked-down or over-expressed (see Table S1 in Additional File 1). In order to achieve statistical significance of our results, we selected only datasets with at least eight samples in total and three samples per condition. [SUBTITLE] Protein-DNA binding data [SUBSECTION] The transcription factor-DNA genome-wide binding data was derived from seven previously published ChIP-chip, ChIP-seq and ChIP-PET studies summarized in Table S2 in Additional File 1[9,13,14,20-23]. The transcription factor-DNA genome-wide binding data was derived from seven previously published ChIP-chip, ChIP-seq and ChIP-PET studies summarized in Table S2 in Additional File 1[9,13,14,20-23]. [SUBTITLE] Generation of the gold-standards [SUBSECTION] A gold-standard is a list of genes that are directly downstream of a particular transcription factor, and are functionally regulated by it. Generation of gold-standards involved steps that are outlined below. Functional gene expression data was first used to identify genes that are downstream (but not necessarily directly) of a particular transcription factor by application of the Student's t-test with α = 0.05 to 'experiment' (e. g., siRNA) and 'control' samples. Wherever applicable, we used a paired t-test that has larger statistical power to find differentially expressed genes than an unpaired version. Also, if some transcription factor had a well-known role as either activator or repressor for the majority of target genes, we created two gold-standards: one with a one-sided t-test and another one with a two-sided t-test. For example, since it is known that MYC is an activator for most target genes [24], we expect that in siRNA experiments genes downstream of MYC are down-regulated; this can be detected by a one-sided t-test. However, since there are studies that reported role of MYC as a repressor [25,26], we can also expect that genes downstream of MYC can be either up-or down-regulated; this can be detected by a two-sided t-test. Genome-wide binding data was then employed to identify direct binding targets of each transcription factor. Specifically, for each studied transcription factor we obtained the set(s) of genes with detected promoter region-transcription factor binding according to the primary study that generated binding data. We emphasize that using genome-wide binding data by itself is insufficient to find downstream functional targets of a transcription factor, because many binding sites can be non-functional [27]. Therefore, the final step in gold-standard creation required overlapping of the list of direct binding targets (from binding data) with the list of downstream functional targets (from expression data). Knowledge gained by integration of data from these two sources is believed to provide high confidence that a given transcription factor directly regulates a particular gene [28]. 
Also, integration of data from two different sources contributes to the reduction of false positives in the resulting gold-standards. A gold-standard is a list of genes that are directly downstream of a particular transcription factor, and are functionally regulated by it. Generation of gold-standards involved steps that are outlined below. Functional gene expression data was first used to identify genes that are downstream (but not necessarily directly) of a particular transcription factor by application of the Student's t-test with α = 0.05 to 'experiment' (e. g., siRNA) and 'control' samples. Wherever applicable, we used a paired t-test that has larger statistical power to find differentially expressed genes than an unpaired version. Also, if some transcription factor had a well-known role as either activator or repressor for the majority of target genes, we created two gold-standards: one with a one-sided t-test and another one with a two-sided t-test. For example, since it is known that MYC is an activator for most target genes [24], we expect that in siRNA experiments genes downstream of MYC are down-regulated; this can be detected by a one-sided t-test. However, since there are studies that reported role of MYC as a repressor [25,26], we can also expect that genes downstream of MYC can be either up-or down-regulated; this can be detected by a two-sided t-test. Genome-wide binding data was then employed to identify direct binding targets of each transcription factor. Specifically, for each studied transcription factor we obtained the set(s) of genes with detected promoter region-transcription factor binding according to the primary study that generated binding data. We emphasize that using genome-wide binding data by itself is insufficient to find downstream functional targets of a transcription factor, because many binding sites can be non-functional [27]. Therefore, the final step in gold-standard creation required overlapping of the list of direct binding targets (from binding data) with the list of downstream functional targets (from expression data). Knowledge gained by integration of data from these two sources is believed to provide high confidence that a given transcription factor directly regulates a particular gene [28]. Also, integration of data from two different sources contributes to the reduction of false positives in the resulting gold-standards. [SUBTITLE] Pathway databases [SUBSECTION] In the current study we analyzed twelve pathway-derived sets of direct transcriptional targets for each transcription factor of interest. These gene sets were extracted from the ten pathway databases listed in Table 1 according to the following protocol. Pathway databases. From each relevant pathway present in BioCarta, KEGG, WikiPathways and Cell Signaling Technology, we manually extracted all stated direct transcriptional targets of each of the seven transcription factors from our list. From the Ingenuity Pathway Analysis database, we extracted two sets of target genes regulated by each transcription factor of interest. One of them (a more conservative) contained all genes with the 'transcription' relation type to a given transcription factor, while another one (a more liberal) incorporated union of the genes with relation types 'transcription', 'expression' and 'protein-DNA interaction'. 
From each of the BKL TRANSPATH and TRANSFAC databases, we extracted a set of genes that are stated to be regulated by each transcription factor in question (i.e., "Binding Sites/Regulated Genes" and "Regulates expression of (direct or indirect)"). From the GenSpring database, we created two gene sets. The first set (a more conservative) contained intersection of the following two groups of genes: genes regulated by the transcription factor on the expression level and genes that are bound to a given transcription factor. The second set (a more liberal) incorporated a union of above two groups of genes. Finally, from the Pathway Studio and MetaCore pathway databases, we extracted a set of targets that are transcriptionally regulated by each of the transcription factors from our list (in MetaCore we considered a union of genes with 'transcription regulation' and 'co-regulation of transcription' types of relations with given transcription factor). In the current study we analyzed twelve pathway-derived sets of direct transcriptional targets for each transcription factor of interest. These gene sets were extracted from the ten pathway databases listed in Table 1 according to the following protocol. Pathway databases. From each relevant pathway present in BioCarta, KEGG, WikiPathways and Cell Signaling Technology, we manually extracted all stated direct transcriptional targets of each of the seven transcription factors from our list. From the Ingenuity Pathway Analysis database, we extracted two sets of target genes regulated by each transcription factor of interest. One of them (a more conservative) contained all genes with the 'transcription' relation type to a given transcription factor, while another one (a more liberal) incorporated union of the genes with relation types 'transcription', 'expression' and 'protein-DNA interaction'. From each of the BKL TRANSPATH and TRANSFAC databases, we extracted a set of genes that are stated to be regulated by each transcription factor in question (i.e., "Binding Sites/Regulated Genes" and "Regulates expression of (direct or indirect)"). From the GenSpring database, we created two gene sets. The first set (a more conservative) contained intersection of the following two groups of genes: genes regulated by the transcription factor on the expression level and genes that are bound to a given transcription factor. The second set (a more liberal) incorporated a union of above two groups of genes. Finally, from the Pathway Studio and MetaCore pathway databases, we extracted a set of targets that are transcriptionally regulated by each of the transcription factors from our list (in MetaCore we considered a union of genes with 'transcription regulation' and 'co-regulation of transcription' types of relations with given transcription factor). [SUBTITLE] Statistical comparison of gene sets [SUBSECTION] Since we are seeking to compare gene sets from different studies/databases, it is essential to transform genes to standard identifiers. That is why we transformed all gene sets to the HUGO Gene Nomenclature Committee approved gene symbols and names [29]. In order to assess statistical significance of the overlap between the resulting gene sets, we used the hypergeometric test at 5% α-level with false discovery rate correction for multiple comparisons by the method of Benjamini and Yekutieli [30]. 
The alternative hypothesis of this test is that two sets of genes (set A from pathway database and set B from experiments) have greater number of genes in common than two randomly selected gene sets with the same number of genes as in sets A and B. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 700 in the experimentally derived list (gold-standard), and their intersection is 16 genes (Figure 2a). If we select on random from a total of 20,000 genes two sets with 300 and 700 genes each, their overlap would be greater or equal to 16 genes in 6.34% times. Thus, this overlap will not be statistically significant at 5% α-level (p = 0.0634). On the other hand, consider that for the pathway database #2, there are 30 direct targets of that transcription factor, and their intersection with the 700-gene gold-standard is only 6 genes. Even though the size of this intersection is rather small, it is unlikely to randomly select 30 genes (out of 20,000) with an overlap greater or equal to 6 genes with a 700-gene gold-standard (p = 0.0005, see Figure 2a). This overlap is statistically significant at 5% α-level. Illustration of statistical methodology for comparison between a gold-standard and a pathway database. Even though the above statistical methodology is based on odds ratios, databases with a very small number of targets may not reach statistical significance regardless of the quality of their data. To address this issue and provide another view on the data of our study, we calculate an enrichment fold change ratio (EFC) for every intersection between a gold-standard and a pathway database. For a given pair of a gold-standard and a pathway database, EFC is equal to the observed number of genes in their intersection, divided by the expected size of intersection under the null hypothesis (plus machine epsilon, to avoid division by zero). Notice however that larger values of EFC may correspond to databases that are highly incomplete and contain only a few relations. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 50 in the experimentally derived list (gold-standard), and their intersection is 30 genes (Figure 2b). If we select on random from a total of 20,000 genes two sets with 300 and 50 genes each, their expected overlap under the null hypothesis will be equal to 0.75. Thus, the EFC ratio will be equal to 40 (= 30/0.75). On the other hand, consider that for the pathway database #2, there are 2 direct targets of that transcription factor, and their intersection with the 50-gene gold-standard is only 1 gene. Even though the expected overlap under the null hypothesis will be equal to 0.005 and EFC equal to 200 (5 times bigger than for the database #1), the size of this intersection with the gold-standard is 30 times less than for database #1 (Figure 2b). Since we are seeking to compare gene sets from different studies/databases, it is essential to transform genes to standard identifiers. That is why we transformed all gene sets to the HUGO Gene Nomenclature Committee approved gene symbols and names [29]. In order to assess statistical significance of the overlap between the resulting gene sets, we used the hypergeometric test at 5% α-level with false discovery rate correction for multiple comparisons by the method of Benjamini and Yekutieli [30]. 
The alternative hypothesis of this test is that two sets of genes (set A from pathway database and set B from experiments) have greater number of genes in common than two randomly selected gene sets with the same number of genes as in sets A and B. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 700 in the experimentally derived list (gold-standard), and their intersection is 16 genes (Figure 2a). If we select on random from a total of 20,000 genes two sets with 300 and 700 genes each, their overlap would be greater or equal to 16 genes in 6.34% times. Thus, this overlap will not be statistically significant at 5% α-level (p = 0.0634). On the other hand, consider that for the pathway database #2, there are 30 direct targets of that transcription factor, and their intersection with the 700-gene gold-standard is only 6 genes. Even though the size of this intersection is rather small, it is unlikely to randomly select 30 genes (out of 20,000) with an overlap greater or equal to 6 genes with a 700-gene gold-standard (p = 0.0005, see Figure 2a). This overlap is statistically significant at 5% α-level. Illustration of statistical methodology for comparison between a gold-standard and a pathway database. Even though the above statistical methodology is based on odds ratios, databases with a very small number of targets may not reach statistical significance regardless of the quality of their data. To address this issue and provide another view on the data of our study, we calculate an enrichment fold change ratio (EFC) for every intersection between a gold-standard and a pathway database. For a given pair of a gold-standard and a pathway database, EFC is equal to the observed number of genes in their intersection, divided by the expected size of intersection under the null hypothesis (plus machine epsilon, to avoid division by zero). Notice however that larger values of EFC may correspond to databases that are highly incomplete and contain only a few relations. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 50 in the experimentally derived list (gold-standard), and their intersection is 30 genes (Figure 2b). If we select on random from a total of 20,000 genes two sets with 300 and 50 genes each, their expected overlap under the null hypothesis will be equal to 0.75. Thus, the EFC ratio will be equal to 40 (= 30/0.75). On the other hand, consider that for the pathway database #2, there are 2 direct targets of that transcription factor, and their intersection with the 50-gene gold-standard is only 1 gene. Even though the expected overlap under the null hypothesis will be equal to 0.005 and EFC equal to 200 (5 times bigger than for the database #1), the size of this intersection with the gold-standard is 30 times less than for database #1 (Figure 2b).
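To make the gold-standard construction described in this Methods text more concrete, here is a minimal Python sketch under stated assumptions: the expression matrix, the paired control/knock-down layout and the list of promoter-bound genes are simulated placeholders rather than data from the cited studies, and the one-sided paired t-test mirrors the activator case (e.g., MYC) discussed above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical expression matrix: rows = genes, columns = four paired control/siRNA samples.
genes = np.array([f"GENE{i}" for i in range(2000)])
control = rng.normal(8.0, 1.0, size=(2000, 4))
knockdown = control + rng.normal(0.0, 0.3, size=(2000, 4))
knockdown[:150] -= 1.0  # simulate down-regulation of true targets after siRNA

# Paired t-test per gene; convert to a one-sided p-value for the
# "down-regulated after knock-down" alternative expected for an activator.
t, p_two_sided = stats.ttest_rel(knockdown, control, axis=1)
p_one_sided = np.where(t < 0, p_two_sided / 2, 1 - p_two_sided / 2)
downstream = set(genes[p_one_sided < 0.05])

# Hypothetical direct binding targets (promoter-bound genes as reported by a
# ChIP-chip/ChIP-seq/ChIP-PET study); here simply the first 300 simulated genes.
bound = set(genes[:300])

# Gold-standard = functionally responsive genes that are also directly bound.
gold_standard = downstream & bound
print(len(downstream), len(bound), len(gold_standard))
```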
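The overlap significance test and the enrichment fold change (EFC) ratio can likewise be sketched in a few lines. The 20,000-gene universe and the set sizes are taken from the worked examples above (Figure 2); the hypergeometric survival function should approximately reproduce the quoted p-values (≈0.063 and ≈0.0005), and the EFC calls reproduce the quoted ratios of 40 and 200.

```python
import numpy as np
from scipy.stats import hypergeom

GENOME_SIZE = 20000  # gene universe assumed in the worked example above

def overlap_pvalue(n_pathway, n_gold, n_overlap, genome=GENOME_SIZE):
    # P(overlap >= n_overlap) when two sets of these sizes are drawn at random.
    return hypergeom.sf(n_overlap - 1, genome, n_pathway, n_gold)

def enrichment_fold_change(n_pathway, n_gold, n_overlap, genome=GENOME_SIZE):
    # Observed overlap divided by its expected size under the null hypothesis,
    # with machine epsilon added to avoid division by zero (as in the text).
    expected = n_pathway * n_gold / genome
    return n_overlap / (expected + np.finfo(float).eps)

# Figure 2a example: 300 vs 700 targets sharing 16 genes (not significant),
# and 30 vs 700 targets sharing 6 genes (significant).
print(overlap_pvalue(300, 700, 16), overlap_pvalue(30, 700, 6))

# Figure 2b example: expected overlaps of 0.75 and 0.005, hence EFC 40 and 200.
print(enrichment_fold_change(300, 50, 30), enrichment_fold_change(2, 50, 1))

# Across all gold-standard/database comparisons, the resulting p-values would then
# be adjusted with the Benjamini-Yekutieli procedure, e.g. via
# statsmodels.stats.multitest.multipletests(pvals, alpha=0.05, method="fdr_by").
```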
null
null
null
null
[ "Background", "Functional expression data", "Protein-DNA binding data", "Generation of the gold-standards", "Pathway databases", "Statistical comparison of gene sets", "Results", "Comparison between pathway databases", "Generation of the gold-standards", "Comparison between gold-standards and transcriptional regulatory pathways", "Cross-talk of MYC, NOTCH1, RELA transcriptional regulatory pathways", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Reviewers' comments", "Reviewer's comments", "Authors' response", "Reviewer's comments", "Authors' response", "Reviewer's comments", "Authors' response" ]
[ "Recently the biological pathways have become a common and probably the most popular form of representing biochemical information for hypothesis generation and validation. These maps store wide knowledge of complex molecular interactions and regulations occurring in the living organism in a simple and obvious way, often using intuitive graphical notation. Two major types of biological pathways could be distinguished. Metabolic pathways incorporate complex networks of protein-based interactions and modifications, while signal transduction and transcriptional regulatory pathways are usually considered to provide information on mechanisms of transcription [1].\nFor the last decade a variety of different public and commercial online pathway databases have been developed [2] and are currently routinely utilized by biomedical researchers and even by the U.S. government regulatory agencies such as FDA [3]. Each of these databases has its own structure, way of data storage and representation, and method for extracting and verifying biological knowledge. Information in the most popular publicly available pathway databases, such as BioCarta, KEGG [4], WikiPathways [5], Cell Signaling Technology pathways is usually either curated by efforts of a particular academic group (e.g., KEGG) or by direct participation of the broader scientific community (e.g., WikiPathways). Popular commercial products, such as MetaCore [6], Ingenuity Pathway Analysis, Pathway Studio and Biobase Knowledge Library (TRANSPATH and TRANSFAC) are often based on literature curation (e.g., Biobase Knowledge Library) and/or complex algorithms for mining biomedical literature (e.g., Pathway Studio).\nWhile there are a lot of data collected on human metabolic processes, the content of signal transduction and transcriptional regulatory pathways varies greatly in quality and completeness [7]. An indicative comparison of MYC transcriptional targets reported in ten different pathway databases reveals that these databases differ greatly from each other (Figure 1). Given that MYC is involved in the transcriptional regulation of approximately 15% of all genes [8], one cannot argue that the majority of pathway databases that contain less than thirty putative transcriptional targets of MYC are even close to complete. More importantly, to date there have been no prior genome-wide evaluation studies (that are based on genome-wide binding and gene expression assays) assessing pathway databases. Thus, biomedical scientists have to make their choice of the database based on interface, prior experience, or marketing presentations. However, it is critically important that this choice is informed by a rigorous evaluation that utilizes genome-wide experimental data. In the current study we perform such an evaluation of ten commonly used pathway databases. Particularly, we assessed the transcriptional regulatory pathways, considered in the current study as the interactions of the type 'transcription factor-transcriptional targets'.\nNumber of genes in common between MYC transcriptional targets derived from ten different pathway databases. Cells are colored according to their values from white (low values) to red (high values).\nOur study first involves integration of human genome-wide functional microarray or RNA-seq gene expression data with protein-DNA binding data from ChIP-chip, ChIP-seq, or ChIP-PET platforms to find direct transcriptional targets of the seven well known transcription factors: MYC, NOTCH1, BCL6, TP53, AR, STAT1, and RELA. 
The choice of transcription factors is based on their important role in oncogenesis and availability of binding and expression data in the public domain. Then, the lists of experimentally derived direct targets are used to assess the quality and completeness of 84 transcriptional regulatory pathways from four publicly available (BioCarta, KEGG, WikiPathways and Cell Signaling Technology) and six commercial (MetaCore, Ingenuity Pathway Analysis, BKL TRANSPATH, BKL TRANSFAC, Pathway Studio and GeneSpring Pathways) pathway databases. We measure the overlap between pathways and experimentally obtained target genes and assess statistical significance of this overlap. Additionally, we demonstrate that experimentally derived lists of direct transcriptional targets can be used to reveal new biological insight on transcriptional regulation. We show this by analyzing common direct transcriptional targets of MYC, NOTCH1 and RELA that act in interconnected molecular pathways. Detection of such genes is important as it could reveal novel targets of cancer therapy.", "The functional expression data was obtained from eleven previously published gene expression microarray and RNA-seq studies [9-19], where the transcription factor in question was knocked-down or over-expressed (see Table S1 in Additional File 1). In order to achieve statistical significance of our results, we selected only datasets with at least eight samples in total and three samples per condition.", "The transcription factor-DNA genome-wide binding data was derived from seven previously published ChIP-chip, ChIP-seq and ChIP-PET studies summarized in Table S2 in Additional File 1[9,13,14,20-23].", "A gold-standard is a list of genes that are directly downstream of a particular transcription factor, and are functionally regulated by it. Generation of gold-standards involved steps that are outlined below.\nFunctional gene expression data was first used to identify genes that are downstream (but not necessarily directly) of a particular transcription factor by application of the Student's t-test with α = 0.05 to 'experiment' (e. g., siRNA) and 'control' samples. Wherever applicable, we used a paired t-test that has larger statistical power to find differentially expressed genes than an unpaired version. Also, if some transcription factor had a well-known role as either activator or repressor for the majority of target genes, we created two gold-standards: one with a one-sided t-test and another one with a two-sided t-test. For example, since it is known that MYC is an activator for most target genes [24], we expect that in siRNA experiments genes downstream of MYC are down-regulated; this can be detected by a one-sided t-test. However, since there are studies that reported role of MYC as a repressor [25,26], we can also expect that genes downstream of MYC can be either up-or down-regulated; this can be detected by a two-sided t-test.\nGenome-wide binding data was then employed to identify direct binding targets of each transcription factor. Specifically, for each studied transcription factor we obtained the set(s) of genes with detected promoter region-transcription factor binding according to the primary study that generated binding data.\nWe emphasize that using genome-wide binding data by itself is insufficient to find downstream functional targets of a transcription factor, because many binding sites can be non-functional [27]. 
Therefore, the final step in gold-standard creation required overlapping of the list of direct binding targets (from binding data) with the list of downstream functional targets (from expression data). Knowledge gained by integration of data from these two sources is believed to provide high confidence that a given transcription factor directly regulates a particular gene [28]. Also, integration of data from two different sources contributes to the reduction of false positives in the resulting gold-standards.", "In the current study we analyzed twelve pathway-derived sets of direct transcriptional targets for each transcription factor of interest. These gene sets were extracted from the ten pathway databases listed in Table 1 according to the following protocol.\nPathway databases.\nFrom each relevant pathway present in BioCarta, KEGG, WikiPathways and Cell Signaling Technology, we manually extracted all stated direct transcriptional targets of each of the seven transcription factors from our list. From the Ingenuity Pathway Analysis database, we extracted two sets of target genes regulated by each transcription factor of interest. One of them (a more conservative) contained all genes with the 'transcription' relation type to a given transcription factor, while another one (a more liberal) incorporated union of the genes with relation types 'transcription', 'expression' and 'protein-DNA interaction'. From each of the BKL TRANSPATH and TRANSFAC databases, we extracted a set of genes that are stated to be regulated by each transcription factor in question (i.e., \"Binding Sites/Regulated Genes\" and \"Regulates expression of (direct or indirect)\"). From the GenSpring database, we created two gene sets. The first set (a more conservative) contained intersection of the following two groups of genes: genes regulated by the transcription factor on the expression level and genes that are bound to a given transcription factor. The second set (a more liberal) incorporated a union of above two groups of genes. Finally, from the Pathway Studio and MetaCore pathway databases, we extracted a set of targets that are transcriptionally regulated by each of the transcription factors from our list (in MetaCore we considered a union of genes with 'transcription regulation' and 'co-regulation of transcription' types of relations with given transcription factor).", "Since we are seeking to compare gene sets from different studies/databases, it is essential to transform genes to standard identifiers. That is why we transformed all gene sets to the HUGO Gene Nomenclature Committee approved gene symbols and names [29].\nIn order to assess statistical significance of the overlap between the resulting gene sets, we used the hypergeometric test at 5% α-level with false discovery rate correction for multiple comparisons by the method of Benjamini and Yekutieli [30]. The alternative hypothesis of this test is that two sets of genes (set A from pathway database and set B from experiments) have greater number of genes in common than two randomly selected gene sets with the same number of genes as in sets A and B. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 700 in the experimentally derived list (gold-standard), and their intersection is 16 genes (Figure 2a). If we select on random from a total of 20,000 genes two sets with 300 and 700 genes each, their overlap would be greater or equal to 16 genes in 6.34% times. 
Thus, this overlap will not be statistically significant at 5% α-level (p = 0.0634). On the other hand, consider that for the pathway database #2, there are 30 direct targets of that transcription factor, and their intersection with the 700-gene gold-standard is only 6 genes. Even though the size of this intersection is rather small, it is unlikely to randomly select 30 genes (out of 20,000) with an overlap greater or equal to 6 genes with a 700-gene gold-standard (p = 0.0005, see Figure 2a). This overlap is statistically significant at 5% α-level.\nIllustration of statistical methodology for comparison between a gold-standard and a pathway database.\nEven though the above statistical methodology is based on odds ratios, databases with a very small number of targets may not reach statistical significance regardless of the quality of their data. To address this issue and provide another view on the data of our study, we calculate an enrichment fold change ratio (EFC) for every intersection between a gold-standard and a pathway database. For a given pair of a gold-standard and a pathway database, EFC is equal to the observed number of genes in their intersection, divided by the expected size of intersection under the null hypothesis (plus machine epsilon, to avoid division by zero). Notice however that larger values of EFC may correspond to databases that are highly incomplete and contain only a few relations. For example, consider that for some transcription factor there are 300 direct targets in the pathway database #1 and 50 in the experimentally derived list (gold-standard), and their intersection is 30 genes (Figure 2b). If we select on random from a total of 20,000 genes two sets with 300 and 50 genes each, their expected overlap under the null hypothesis will be equal to 0.75. Thus, the EFC ratio will be equal to 40 (= 30/0.75). On the other hand, consider that for the pathway database #2, there are 2 direct targets of that transcription factor, and their intersection with the 50-gene gold-standard is only 1 gene. Even though the expected overlap under the null hypothesis will be equal to 0.005 and EFC equal to 200 (5 times bigger than for the database #1), the size of this intersection with the gold-standard is 30 times less than for database #1 (Figure 2b).", "[SUBTITLE] Comparison between pathway databases [SUBSECTION] First, we assessed all pathway databases listed in Table 1 by a comparison of the extracted transcriptional targets for each of the seven transcription factors. The number of overlapping MYC targets between pathway databases is shown in Figure 1. We also calculated Jaccard index, the normalized measure of similarity between two gene lists (which is the size of intersection between two gene lists divided by the size of their union), for all pairwise comparisons of pathway databases (Figure S1 in Additional File 1). Since the information in the majority of these databases is curated, we would expect to see almost full intersection for every pair of databases, i.e. Jaccard index close to 1. However, our analysis revealed that the transcriptional data differs significantly from one database to another. Indeed, the average Jaccard index over all pairwise comparisons of pathway databases ranges between 0.0180 (AR) and 0.0742 (RELA), depending on a transcription factor. The grand average Jaccard index over all pathway comparisons and transcription factors is 0.0533. 
This means that only 5.33% of gene targets that belong to either of the two transcriptional regulatory pathways, also belong to both pathways. Furthermore, the transcriptional regulatory information on particular transcription factors was totally absent from some pathway databases.\nFirst, we assessed all pathway databases listed in Table 1 by a comparison of the extracted transcriptional targets for each of the seven transcription factors. The number of overlapping MYC targets between pathway databases is shown in Figure 1. We also calculated Jaccard index, the normalized measure of similarity between two gene lists (which is the size of intersection between two gene lists divided by the size of their union), for all pairwise comparisons of pathway databases (Figure S1 in Additional File 1). Since the information in the majority of these databases is curated, we would expect to see almost full intersection for every pair of databases, i.e. Jaccard index close to 1. However, our analysis revealed that the transcriptional data differs significantly from one database to another. Indeed, the average Jaccard index over all pairwise comparisons of pathway databases ranges between 0.0180 (AR) and 0.0742 (RELA), depending on a transcription factor. The grand average Jaccard index over all pathway comparisons and transcription factors is 0.0533. This means that only 5.33% of gene targets that belong to either of the two transcriptional regulatory pathways, also belong to both pathways. Furthermore, the transcriptional regulatory information on particular transcription factors was totally absent from some pathway databases.\n[SUBTITLE] Generation of the gold-standards [SUBSECTION] By integrating functional gene expression and genome-wide binding datasets, we obtained 25 gold-standards for the seven transcription factors considered in our study (Table 2). Each of these gold-standards was generated as described in the Methods section and contains genes that are directly downstream of a particular transcription factor and are functionally regulated by it. A detailed list of all genes in each gold-standard is available from http://www.nyuinformatics.org/downloads/supplements/PathwayAssessment Additionally, as for MYC, NOTCH1, RELA, and BCL6 there was more than one dataset (functional gene expression or genome-wide binding) available, we also generated a list of most confident direct downstream targets of each of these transcription factors by overlapping gold-standards that were obtained with different datasets (Table S3 in Additional File 1). While these lists are expected to contain only the most confident genes which are directly regulated by a transcription factor in question, these lists are likely to be incomplete due to condition-specific transcriptional targets that may not appear in all datasets and also due to statistical considerations, namely the fact that probability of obtaining significant results in several studies declines with the number of studies [31].\nGold standards for each transcription factor (TF).\nBy integrating functional gene expression and genome-wide binding datasets, we obtained 25 gold-standards for the seven transcription factors considered in our study (Table 2). Each of these gold-standards was generated as described in the Methods section and contains genes that are directly downstream of a particular transcription factor and are functionally regulated by it. 
A detailed list of all genes in each gold-standard is available from http://www.nyuinformatics.org/downloads/supplements/PathwayAssessment Additionally, as for MYC, NOTCH1, RELA, and BCL6 there was more than one dataset (functional gene expression or genome-wide binding) available, we also generated a list of most confident direct downstream targets of each of these transcription factors by overlapping gold-standards that were obtained with different datasets (Table S3 in Additional File 1). While these lists are expected to contain only the most confident genes which are directly regulated by a transcription factor in question, these lists are likely to be incomplete due to condition-specific transcriptional targets that may not appear in all datasets and also due to statistical considerations, namely the fact that probability of obtaining significant results in several studies declines with the number of studies [31].\nGold standards for each transcription factor (TF).\n[SUBTITLE] Comparison between gold-standards and transcriptional regulatory pathways [SUBSECTION] For each transcription factor, we analyzed an overlap between experimentally derived gold-standards of direct transcriptional targets and 12 gene sets from pathway databases as described in the Methods section. The detailed results are shown in Figure 3 and are summarized in Figure 4. Over all gold-standards and transcription factors, the largest overlap with experimentally derived gene targets was obtained by MetaCore (statistically significant overlap with 84% gold-standards). Other pathway databases to a large extent do not have statistically significant overlaps with experimentally obtained target genes: the best of the remaining pathway databases, Ingenuity Pathway Analysis, results in statistically significant overlap with only 36% of gold-standards. Notably, in 82 comparisons (over all 25 gold-standards and 12 gene sets for each transcription factor) the resulting overlap of pathways with gold-standards is empty. In brief, for the majority of pathway databases used in this study, randomly selected gene sets yield the same number of overlapping genes with gold-standards as the pathway databases. On the other hand, the biggest average EFC ratio was obtained by WikiPathways and was equal to 19. Notably, the intersection of WikiPathways data with the gold-standard #I for TP53 was 203 times bigger than the expected one for that overlap (the largest EFC over all assessed data). Furthermore, the EFC of this intersection was approximately 18.5 times bigger than the corresponding EFC of MetaCore. However, the overlap of WikiPathways data with this gold-standard (3 genes) was 6 times smaller than the overlap of MetaCore data with the gold-standard (18 genes).\nComparison between different pathway databases and experimentally derived gold-standards for all considered transcription factors. Value in a given cell is a number of overlapping genes between a gold-standard and a pathway-derived gene set. Cells are colored according to their values from white (low values) to red (high values). Underlined values in red represent statistically significant intersections.\nSummary of the pathway databases assessment. Green cells represent statistically significant intersections between experimentally derived gold-standards and transcriptional regulatory pathways. White cells denote results that are not statistically significant. 
Numbers are the enrichment fold change ratios (EFC) calculated for each intersection.\nFor each transcription factor, we analyzed an overlap between experimentally derived gold-standards of direct transcriptional targets and 12 gene sets from pathway databases as described in the Methods section. The detailed results are shown in Figure 3 and are summarized in Figure 4. Over all gold-standards and transcription factors, the largest overlap with experimentally derived gene targets was obtained by MetaCore (statistically significant overlap with 84% gold-standards). Other pathway databases to a large extent do not have statistically significant overlaps with experimentally obtained target genes: the best of the remaining pathway databases, Ingenuity Pathway Analysis, results in statistically significant overlap with only 36% of gold-standards. Notably, in 82 comparisons (over all 25 gold-standards and 12 gene sets for each transcription factor) the resulting overlap of pathways with gold-standards is empty. In brief, for the majority of pathway databases used in this study, randomly selected gene sets yield the same number of overlapping genes with gold-standards as the pathway databases. On the other hand, the biggest average EFC ratio was obtained by WikiPathways and was equal to 19. Notably, the intersection of WikiPathways data with the gold-standard #I for TP53 was 203 times bigger than the expected one for that overlap (the largest EFC over all assessed data). Furthermore, the EFC of this intersection was approximately 18.5 times bigger than the corresponding EFC of MetaCore. However, the overlap of WikiPathways data with this gold-standard (3 genes) was 6 times smaller than the overlap of MetaCore data with the gold-standard (18 genes).\nComparison between different pathway databases and experimentally derived gold-standards for all considered transcription factors. Value in a given cell is a number of overlapping genes between a gold-standard and a pathway-derived gene set. Cells are colored according to their values from white (low values) to red (high values). Underlined values in red represent statistically significant intersections.\nSummary of the pathway databases assessment. Green cells represent statistically significant intersections between experimentally derived gold-standards and transcriptional regulatory pathways. White cells denote results that are not statistically significant. Numbers are the enrichment fold change ratios (EFC) calculated for each intersection.\n[SUBTITLE] Cross-talk of MYC, NOTCH1, RELA transcriptional regulatory pathways [SUBSECTION] The gold-standards generated in our study contain direct targets of each individual transcription factor. However, these factors only rarely act individually. Indeed, we and others have previously suggested that induction and maintenance of T-cell acute lymphoblastic leukemia (T-ALL), a devastating pediatric blood cancer, depends on the cross talk of three transcription factors, NOTCH1, MYC, and RELA (NF-κB) [15,19,32]. We have thus hypothesized that these factors should share target-genes that could be important for the progression of the disease. Our analysis supported this hypothesis as it identified a large number of genes targeted by two or more factors (Table S4 in Additional File 1). As expected, NOTCH1 and MYC, two transcription factors that have been closely connected in T-ALL share a large number of common targets (> 400). Some of these genes are very intriguing from a biological point of view. 
For example, two activators of cell cycle entry, CDK4 and CDK6 appear to be induced by both factors. We have previously shown that silencing of CDK4/6 activity is able to suppress T-ALL suggesting that NOTCH1 and MYC activities could converge on these CDK genes to initiate expansion of transformed cells [33]. Interestingly, MYC itself and its interacting partners MYCB and MYCB2 appear to also be targeted by both factors, suggesting an interesting signal amplification mechanism.\nRELA and MYC also share a large number (> 550) of common gene targets. This is a novel biological finding with importance for the biology of T-cell leukemia. For example, several essential T-cell regulators, including RUNX1, BCL2L1 (BCL-xL), ID3, ITCH, JAK3 and NOTCH1, appear to be controlled by both transcription factors. Interestingly, NOTCH1 is downstream of both RELA and MYC but at the same time these two factors are targets of oncogenic NOTCH1 [15,32], suggesting once more an intricate auto-amplification loop that could sustain transformation.\nThe gold-standards generated in our study contain direct targets of each individual transcription factor. However, these factors only rarely act individually. Indeed, we and others have previously suggested that induction and maintenance of T-cell acute lymphoblastic leukemia (T-ALL), a devastating pediatric blood cancer, depends on the cross talk of three transcription factors, NOTCH1, MYC, and RELA (NF-κB) [15,19,32]. We have thus hypothesized that these factors should share target-genes that could be important for the progression of the disease. Our analysis supported this hypothesis as it identified a large number of genes targeted by two or more factors (Table S4 in Additional File 1). As expected, NOTCH1 and MYC, two transcription factors that have been closely connected in T-ALL share a large number of common targets (> 400). Some of these genes are very intriguing from a biological point of view. For example, two activators of cell cycle entry, CDK4 and CDK6 appear to be induced by both factors. We have previously shown that silencing of CDK4/6 activity is able to suppress T-ALL suggesting that NOTCH1 and MYC activities could converge on these CDK genes to initiate expansion of transformed cells [33]. Interestingly, MYC itself and its interacting partners MYCB and MYCB2 appear to also be targeted by both factors, suggesting an interesting signal amplification mechanism.\nRELA and MYC also share a large number (> 550) of common gene targets. This is a novel biological finding with importance for the biology of T-cell leukemia. For example, several essential T-cell regulators, including RUNX1, BCL2L1 (BCL-xL), ID3, ITCH, JAK3 and NOTCH1, appear to be controlled by both transcription factors. Interestingly, NOTCH1 is downstream of both RELA and MYC but at the same time these two factors are targets of oncogenic NOTCH1 [15,32], suggesting once more an intricate auto-amplification loop that could sustain transformation.", "First, we assessed all pathway databases listed in Table 1 by a comparison of the extracted transcriptional targets for each of the seven transcription factors. The number of overlapping MYC targets between pathway databases is shown in Figure 1. We also calculated Jaccard index, the normalized measure of similarity between two gene lists (which is the size of intersection between two gene lists divided by the size of their union), for all pairwise comparisons of pathway databases (Figure S1 in Additional File 1). 
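As an aside on the Jaccard index used here for the pairwise database comparison, a minimal Python sketch follows; the database names and MYC target sets are hypothetical placeholders, not the actual contents of the databases listed in Table 1.

```python
from itertools import combinations

def jaccard(a, b):
    # Size of the intersection divided by the size of the union of two gene sets.
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical MYC target lists from three pathway databases.
targets = {
    "database_1": {"CDK4", "CDK6", "CCND2", "TERT", "LDHA"},
    "database_2": {"CDK4", "TP53", "BAX"},
    "database_3": {"ODC1", "CAD", "CDK4", "CCND2"},
}

pairwise = {(x, y): jaccard(targets[x], targets[y]) for x, y in combinations(targets, 2)}
average = sum(pairwise.values()) / len(pairwise)
print(pairwise)
print(f"average Jaccard index: {average:.4f}")
```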
Comparison between pathway databases

First, we assessed all pathway databases listed in Table 1 by comparing the extracted transcriptional targets for each of the seven transcription factors. The number of overlapping MYC targets between pathway databases is shown in Figure 1. We also calculated the Jaccard index, a normalized measure of similarity between two gene lists (the size of the intersection of the two lists divided by the size of their union), for all pairwise comparisons of pathway databases (Figure S1 in Additional File 1). Since the information in the majority of these databases is curated, we would expect to see an almost full intersection for every pair of databases, i.e. a Jaccard index close to 1. However, our analysis revealed that the transcriptional data differ significantly from one database to another. Indeed, the average Jaccard index over all pairwise comparisons of pathway databases ranges between 0.0180 (AR) and 0.0742 (RELA), depending on the transcription factor. The grand average Jaccard index over all pathway comparisons and transcription factors is 0.0533. This means that only 5.33% of the gene targets that belong to either of two transcriptional regulatory pathways also belong to both. Furthermore, transcriptional regulatory information on particular transcription factors was entirely absent from some pathway databases.
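A minimal sketch of this calculation, under the assumption that each database's targets are available as a set of gene symbols, is shown below; the example lists are invented and do not come from any of the assessed databases.

```python
from itertools import combinations

def jaccard(genes_a, genes_b):
    """Jaccard index of two gene lists: |A intersect B| / |A union B|."""
    a, b = set(genes_a), set(genes_b)
    return len(a & b) / len(a | b) if (a or b) else 0.0

# Hypothetical MYC target lists from three databases
myc_targets = {
    "DB1": {"CDKN1A", "CCND2", "TERT", "ODC1"},
    "DB2": {"CDKN1A", "NPM1", "TERT"},
    "DB3": {"TERT", "ODC1", "E2F1"},
}

pairwise = {(x, y): jaccard(myc_targets[x], myc_targets[y])
            for x, y in combinations(myc_targets, 2)}
print(pairwise)
print("average Jaccard:", sum(pairwise.values()) / len(pairwise))
```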
Generation of the gold-standards

By integrating functional gene expression and genome-wide binding datasets, we obtained 25 gold-standards for the seven transcription factors considered in our study (Table 2). Each of these gold-standards was generated as described in the Methods section and contains genes that are directly downstream of a particular transcription factor and are functionally regulated by it. A detailed list of all genes in each gold-standard is available from http://www.nyuinformatics.org/downloads/supplements/PathwayAssessment. Additionally, since more than one dataset (functional gene expression or genome-wide binding) was available for MYC, NOTCH1, RELA, and BCL6, we also generated a list of the most confident direct downstream targets of each of these transcription factors by overlapping the gold-standards obtained with the different datasets (Table S3 in Additional File 1). While these lists are expected to contain only the most confident genes directly regulated by the transcription factor in question, they are likely to be incomplete, both because condition-specific transcriptional targets may not appear in all datasets and for statistical reasons, namely that the probability of obtaining significant results in several studies declines with the number of studies [31].

Table 2: Gold standards for each transcription factor (TF).
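The sketch below illustrates, under simplified assumptions, how such a gold-standard could be assembled: genes whose regulatory regions are bound by the factor (from ChIP data) are intersected with genes that change expression when the factor is perturbed (two-sample t-test at the 5% alpha level, as described in the authors' response to Reviewer #2 below). The gene names, expression values and helper function are illustrative and do not reproduce the authors' actual pipeline.

```python
from scipy.stats import ttest_ind

def build_gold_standard(bound_genes, expr_perturbed, expr_control, alpha=0.05):
    """Direct functional targets: bound by the factor AND differentially
    expressed between perturbation (knock-down/over-expression) and control."""
    responsive = set()
    for gene, values in expr_perturbed.items():
        if gene in expr_control:
            _, p = ttest_ind(values, expr_control[gene])
            if p < alpha:
                responsive.add(gene)
    return set(bound_genes) & responsive

# Toy example: three replicates per condition, made-up log-expression values
bound = {"CDKN1A", "MDM2", "GAPDH"}
perturbed = {"CDKN1A": [8.1, 8.3, 8.0], "MDM2": [5.1, 5.0, 5.2], "GAPDH": [10.0, 10.1, 9.9]}
control = {"CDKN1A": [6.0, 6.2, 6.1], "MDM2": [5.0, 5.1, 5.2], "GAPDH": [10.0, 10.0, 10.1]}
print(build_gold_standard(bound, perturbed, control))  # expected: {'CDKN1A'}
```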
Discussion

At the core of this study was the creation of gold-standards of transcriptional regulation in humans that can be compared with the target genes reported in transcriptional regulatory pathways. We focused on seven well-known transcription factors and obtained gold-standards by integrating genome-wide transcription factor-DNA binding data (from ChIP-chip, ChIP-seq, or ChIP-PET platforms) with functional gene expression microarray and RNA-seq data. The latter data make it possible to survey changes in the transcriptome on a genome-wide scale after inhibition or over-expression of the transcription factor in question. However, a change in the expression of a particular gene could be caused either by the direct effect of the removal or introduction of a given transcription factor or by an indirect effect, through a change in the expression level of some other gene(s). As mentioned in the Methods section, transcription factor-DNA binding data by itself is similarly insufficient to determine the downstream functional targets of a transcription factor because of nonfunctional and/or non-specific protein binding activity [27]. Thus, it is essential to integrate data from these two sources to obtain an accurate list of gene targets that are directly regulated by a transcription factor [28].

It is worth noting that the tested pathway databases typically do not distinguish between cell lines, experimental conditions, and other details relevant to the experimental systems in which the data were obtained. These databases in a sense propose a 'universal' list of transcriptional targets. However, it is known that transcriptional regulation in a cell is dynamic and works differently for different systems and stimuli. This accentuates a major limitation of pathway databases and emphasizes the importance of deriving a specific list of transcriptional targets for the experimental system at hand. In this study we followed the latter approach by developing gold-standards for specific, well-characterized biological systems and experimental conditions.

However, our approach for building gold-standards of direct mechanistic knowledge has several limitations. First of all, there are limitations inherited from the assaying technologies. Microarrays cannot reliably detect small changes in gene expression and/or genes expressed at very low levels [34]. Similarly, ChIP-chip transcription factor-DNA binding data is known to have imperfect reproducibility [35]. Second, the functional gene expression and binding data used in our work often originated from different studies. Even though we verified that the biological systems were comparable, even minor differences in experimental conditions and phenotypes between studies can challenge the integration of binding and functional expression data. Third, siRNA knock-down is not an ideal experiment for proving direct causation for identified binding relations, because it could introduce some false positives into our gold-standards. For example, a transcription factor could bind to the promoter region of a gene but regulate the expression of that gene indirectly, via transregulation of another transcription factor. Fourth, some false negatives could arise from compensatory mechanisms in the cell. Finally, our approach can yield only gold-standards of direct downstream targets of the transcription factor, and therefore information about upstream regulation cannot be obtained by this method. Likewise, it does not capture interactions between genes that are not transcription factors. Notwithstanding these challenges, this method is currently state-of-the-art and is believed to provide, with high confidence, direct regulatory interactions of transcription factors on a genome-wide scale [28].
In addition to the assessment of pathway databases, these gold-standards can be used to gain new biological insights in the field of transcriptional regulation. Specifically, our results suggest that multiple transcription factors can co-operate to control both physiological differentiation and malignant transformation, as demonstrated using combinatorial gene profiling of NOTCH1, MYC and RELA targets. These studies might lead to multi-pathway gene expression "signatures" essential for the prediction of genes that could be targeted in cancer treatments. In agreement with this hypothesis, several of the genes identified in our analysis have been suggested as putative therapeutic targets in leukemia, with either preclinical or clinical trials underway (CDK4, CDK6, GSK3b, MYC, LCK, NFkB2, BCL2L1, NOTCH1) [36].

Conclusions

The comparison of the pathway databases, the main goal of this study, first of all revealed that human transcriptional regulatory pathways often do not agree with each other and contain different target genes. More importantly, with the exception of MetaCore, the majority of the sets of target genes specified in transcriptional regulatory pathways were found to be incomplete and/or inaccurate when compared with experimentally derived gold-standards. Despite the fact that in the present study we assessed the transcriptional pathways of only seven transcription factors (due to the limited availability of high-quality genome-wide binding and expression data in the public domain), we anticipate that our results will generalize to other transcription factors and pathways.

Given the widespread use of pathways for hypothesis generation, the conclusions of our study have significant implications for biomedical research in general and for the discovery of new drugs and treatments. In order to obtain more accurate research hypotheses, the choice of pathway databases has to be informed by solid scientific evidence and rigorous empirical evaluations such as ours. We thus aim to continue comprehensive benchmarking of biological pathways to facilitate evidence-based pathway selection by biomedical researchers. At the same time, we propose that developers of pathway databases take advantage of recently available genome-wide binding and functional expression data to refine transcriptional regulatory pathways.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

Conceived and designed the experiments: ES, AS. Performed the experiments: ES, AS, ZT. Analyzed the results of experiments: ES, AS, ZT, IA. Wrote the paper: ES, AS, ZT, IA. All authors read and approved the final manuscript.

Reviewers' comments

Reviewer #1: Prof. Wing Hung Wong
Department of Statistics, Stanford University, Stanford, CA, USA

Reviewer's comments

The purpose of this paper is to assess the quality and completeness of information in several popular pathway databases on the targets of transcription factors. To do this, the authors extracted this information for seven transcription factors from 10 pathway databases, and compared it to "gold standard" target lists identified from expression profiling and ChIP-seq experiments.
The \"gold standard\" criterion is a relatively stringent one that requires not only ChIP-seq support for interaction between the regulator and the regulatory regions of the target gene, but also significant changes in target gene expression upon knockdown or overexpression of the regulator.\nThe main finding is that there is a very low degree of agreement among the target lists extracted from the different databases, and only one of the database (MetaCore) can provide a statistically significant overlap of targets when compared to the gold standard lists.\nThis is a very timely research. Many investigators are now incorporating information from the pathway databases in their data analysis. The results of this paper serve to warn us that the pathway databases are far from complete and may yield misleading results.\nOne unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance of test of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratio or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision.\n[SUBTITLE] Authors' response [SUBSECTION] We agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.\nWe agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.\nThe purpose of this paper is to assess the quality and completeness of information in several popular pathway databases on the targets of transcription factors. To do this, the authors extracted this information for seven transcription factors from 10 pathway databases, and compared it to \"gold standard\" target lists identified from expression profiling and ChIP-seq experiments. The \"gold standard\" criterion is a relatively stringent one that requires not only ChIP-seq support for interaction between the regulator and the regulatory regions of the target gene, but also significant changes in target gene expression upon knockdown or overexpression of the regulator.\nThe main finding is that there is a very low degree of agreement among the target lists extracted from the different databases, and only one of the database (MetaCore) can provide a statistically significant overlap of targets when compared to the gold standard lists.\nThis is a very timely research. 
Reviewer's comments

The authors have not addressed my previous comment:

"One unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance on tests of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratios or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision."

In their response they stated that the odds ratio is used in their statistics. However, it is used only to compute the hypergeometric p-value. As pointed out in my original review, such p-values are very sensitive to the sample size, so databases with a smaller number of annotations will not be able to reach the threshold of statistical significance. Thus the comparison between the databases, such as the conclusion that MetaCore is more reliable than the other ones, will be biased by this effect.
At the very least, the authors should provide in their various tables the enrichment fold change, which is the observed number in the intersection divided by the expected number (given fixed marginal counts for each factor) under the null hypothesis.

Authors' response

We agree with the reviewer that databases with a very small number of reported targets might not reach the threshold of statistical significance, even though their data could be highly reliable. We improved the manuscript accordingly and provided an alternative enrichment fold change metric suggested by the reviewer. Please see the last paragraph of the Methods section, Figure 2b, and Figure 4.

Reviewer #2: Dr. Thiago Motta Venancio (nominated by Dr. L Aravind)
Computational Biology Branch, NCBI, NLM, NIH, Bethesda, MD, USA
Reviewer's comments

In this paper Shmelkov et al. analyze the targets of 7 important human transcription factors. They found that the data obtained from different databases and from direct experimental work are not well correlated, which could raise important concerns given the widespread use of such data repositories by the scientific community. I have some critical points that should be addressed in order to improve the clarity and coherence of the manuscript.

1) The authors extracted data from different databases and datasets. How did they handle the problem of the different identifiers used to name genes (e.g. Entrez GeneID, Ensembl)? In addition, what is the impact of different genome versions between databases/datasets in this analysis? These problems could easily give the low overlap found in the manuscript.

2) Many (if not all) of the databases used in the present study import data from the literature. Are the datasets used present in any of the databases? If yes, this could be an artificial explanation for the large MetaCore overlap. If not, why are they not there, and why is the method used by the authors better than the ones used by the database teams to identify the real transcription factor targets?

3) I think the t-test and the significance level used to define the gold standards are inappropriate for genome-scale analysis, due to multiple testing. Did the authors do any statistical analysis to circumvent such problems (e.g. p-value correction)?

4) Some databases have very low numbers of genes. Wouldn't this result in non-significant results regardless of the size of the overlap?

5) The human genome has more than 1500 transcription factors. Due to data availability, the authors assessed 7 of them in this study. In my opinion this is far from being "genome wide" in the transcription factor landscape.

Minor point: In the abstract the authors say: "and targets reported in transcriptional regulatory pathways is surprisingly small". Are you referring to transcriptional regulatory pathway databases?

Authors' response

The point-by-point response follows:

1) As described in the revised manuscript, we have transformed all gene identifiers from the different studies/databases to the HUGO Gene Nomenclature Committee approved gene symbols and names (see the first sentence in the subsection "Statistical comparison of gene sets"). We also ensured that we used the same version of the reference genome as used in the original studies when we generated gene lists (e.g., from ChIP-seq studies).

2) We agree with the reviewer that the above questions can facilitate understanding the results of our study. Indeed, if some database used all recent available data in the literature and analyzed it using appropriate statistical and bioinformatics methodologies, it would have a significant overlap with our gold-standards.
As we have found, this was not the case for most of the tested databases. Knowing how the databases obtained their knowledge bases would potentially allow us to delve deeper and address the questions posed by the reviewer. However, these questions are outside the scope of our manuscript; its main purpose was to evaluate pathway databases and identify the ones that correlate best with the experimental data. Finally, it is also worth mentioning that most of the databases used rely on proprietary algorithms and do not disclose how they extract the underlying knowledge.

3) In this study we built the gold-standards by intersecting the list of binding targets (from ChIP-seq/ChIP-chip/ChIP-PET data) with the list of differentially expressed downstream targets (from gene expression data). Indeed, we used the t-test for the analysis of gene expression data with a 5% alpha level. In general this would entail 5% false positives, which can be a fairly large number. Notice, however, that our criterion for a gene to participate in the gold-standard is not only that it has a p-value <0.05 according to the t-test in the gene expression data, but also that it is bound by the transcription factor of interest, as determined from an independent analysis of the ChIP-seq/ChIP-chip/ChIP-PET data. Thus, we reduce the number of false positives in the resulting gold-standards. In order to further clarify this in the revised manuscript, we added a sentence in the last paragraph of the subsection "Generation of the gold-standards" of "Methods".

4) We have clarified this issue in the revised manuscript in the subsection "Statistical comparison of gene sets". In summary, the statistical test is based on odds ratios and thus accounts for the different numbers of genes in pathway databases.

5) We used the term "genome-wide" to denote that the gold-standard for each transcription factor was obtained on a genome-wide scale, i.e. it was based on chromatin immunoprecipitation coupled with sequencing of the entire genome or with microarray gene expression analysis, as opposed to focusing on a selected subset of genes. We have improved the third paragraph of the Background section in the revised manuscript to clarify this issue.

Reviewer's Minor point: We agree with the reviewer and have modified the abstract accordingly.

Reviewer #3: Prof. Geoff J McLachlan
Department of Mathematics, The University of Queensland, Brisbane, Australia

This reviewer provided no comments for publication.
[ "Background", "Methods", "Functional expression data", "Protein-DNA binding data", "Generation of the gold-standards", "Pathway databases", "Statistical comparison of gene sets", "Results", "Comparison between pathway databases", "Generation of the gold-standards", "Comparison between gold-standards and transcriptional regulatory pathways", "Cross-talk of MYC, NOTCH1, RELA transcriptional regulatory pathways", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Reviewers' comments", "Reviewer's comments", "Authors' response", "Reviewer's comments", "Authors' response", "Reviewer's comments", "Authors' response", "Supplementary Material" ]
[ "Recently the biological pathways have become a common and probably the most popular form of representing biochemical information for hypothesis generation and validation. These maps store wide knowledge of complex molecular interactions and regulations occurring in the living organism in a simple and obvious way, often using intuitive graphical notation. Two major types of biological pathways could be distinguished. Metabolic pathways incorporate complex networks of protein-based interactions and modifications, while signal transduction and transcriptional regulatory pathways are usually considered to provide information on mechanisms of transcription [1].\nFor the last decade a variety of different public and commercial online pathway databases have been developed [2] and are currently routinely utilized by biomedical researchers and even by the U.S. government regulatory agencies such as FDA [3]. Each of these databases has its own structure, way of data storage and representation, and method for extracting and verifying biological knowledge. Information in the most popular publicly available pathway databases, such as BioCarta, KEGG [4], WikiPathways [5], Cell Signaling Technology pathways is usually either curated by efforts of a particular academic group (e.g., KEGG) or by direct participation of the broader scientific community (e.g., WikiPathways). Popular commercial products, such as MetaCore [6], Ingenuity Pathway Analysis, Pathway Studio and Biobase Knowledge Library (TRANSPATH and TRANSFAC) are often based on literature curation (e.g., Biobase Knowledge Library) and/or complex algorithms for mining biomedical literature (e.g., Pathway Studio).\nWhile there are a lot of data collected on human metabolic processes, the content of signal transduction and transcriptional regulatory pathways varies greatly in quality and completeness [7]. An indicative comparison of MYC transcriptional targets reported in ten different pathway databases reveals that these databases differ greatly from each other (Figure 1). Given that MYC is involved in the transcriptional regulation of approximately 15% of all genes [8], one cannot argue that the majority of pathway databases that contain less than thirty putative transcriptional targets of MYC are even close to complete. More importantly, to date there have been no prior genome-wide evaluation studies (that are based on genome-wide binding and gene expression assays) assessing pathway databases. Thus, biomedical scientists have to make their choice of the database based on interface, prior experience, or marketing presentations. However, it is critically important that this choice is informed by a rigorous evaluation that utilizes genome-wide experimental data. In the current study we perform such an evaluation of ten commonly used pathway databases. Particularly, we assessed the transcriptional regulatory pathways, considered in the current study as the interactions of the type 'transcription factor-transcriptional targets'.\nNumber of genes in common between MYC transcriptional targets derived from ten different pathway databases. Cells are colored according to their values from white (low values) to red (high values).\nOur study first involves integration of human genome-wide functional microarray or RNA-seq gene expression data with protein-DNA binding data from ChIP-chip, ChIP-seq, or ChIP-PET platforms to find direct transcriptional targets of the seven well known transcription factors: MYC, NOTCH1, BCL6, TP53, AR, STAT1, and RELA. 
The choice of transcription factors is based on their important role in oncogenesis and availability of binding and expression data in the public domain. Then, the lists of experimentally derived direct targets are used to assess the quality and completeness of 84 transcriptional regulatory pathways from four publicly available (BioCarta, KEGG, WikiPathways and Cell Signaling Technology) and six commercial (MetaCore, Ingenuity Pathway Analysis, BKL TRANSPATH, BKL TRANSFAC, Pathway Studio and GeneSpring Pathways) pathway databases. We measure the overlap between pathways and experimentally obtained target genes and assess statistical significance of this overlap. Additionally, we demonstrate that experimentally derived lists of direct transcriptional targets can be used to reveal new biological insight on transcriptional regulation. We show this by analyzing common direct transcriptional targets of MYC, NOTCH1 and RELA that act in interconnected molecular pathways. Detection of such genes is important as it could reveal novel targets of cancer therapy.", "[SUBTITLE] Functional expression data [SUBSECTION] The functional expression data was obtained from eleven previously published gene expression microarray and RNA-seq studies [9-19], where the transcription factor in question was knocked-down or over-expressed (see Table S1 in Additional File 1). In order to achieve statistical significance of our results, we selected only datasets with at least eight samples in total and three samples per condition.\nThe functional expression data was obtained from eleven previously published gene expression microarray and RNA-seq studies [9-19], where the transcription factor in question was knocked-down or over-expressed (see Table S1 in Additional File 1). In order to achieve statistical significance of our results, we selected only datasets with at least eight samples in total and three samples per condition.\n[SUBTITLE] Protein-DNA binding data [SUBSECTION] The transcription factor-DNA genome-wide binding data was derived from seven previously published ChIP-chip, ChIP-seq and ChIP-PET studies summarized in Table S2 in Additional File 1[9,13,14,20-23].\nThe transcription factor-DNA genome-wide binding data was derived from seven previously published ChIP-chip, ChIP-seq and ChIP-PET studies summarized in Table S2 in Additional File 1[9,13,14,20-23].\n[SUBTITLE] Generation of the gold-standards [SUBSECTION] A gold-standard is a list of genes that are directly downstream of a particular transcription factor, and are functionally regulated by it. Generation of gold-standards involved steps that are outlined below.\nFunctional gene expression data was first used to identify genes that are downstream (but not necessarily directly) of a particular transcription factor by application of the Student's t-test with α = 0.05 to 'experiment' (e. g., siRNA) and 'control' samples. Wherever applicable, we used a paired t-test that has larger statistical power to find differentially expressed genes than an unpaired version. Also, if some transcription factor had a well-known role as either activator or repressor for the majority of target genes, we created two gold-standards: one with a one-sided t-test and another one with a two-sided t-test. For example, since it is known that MYC is an activator for most target genes [24], we expect that in siRNA experiments genes downstream of MYC are down-regulated; this can be detected by a one-sided t-test. 
However, since there are studies that reported role of MYC as a repressor [25,26], we can also expect that genes downstream of MYC can be either up-or down-regulated; this can be detected by a two-sided t-test.\nGenome-wide binding data was then employed to identify direct binding targets of each transcription factor. Specifically, for each studied transcription factor we obtained the set(s) of genes with detected promoter region-transcription factor binding according to the primary study that generated binding data.\nWe emphasize that using genome-wide binding data by itself is insufficient to find downstream functional targets of a transcription factor, because many binding sites can be non-functional [27]. Therefore, the final step in gold-standard creation required overlapping of the list of direct binding targets (from binding data) with the list of downstream functional targets (from expression data). Knowledge gained by integration of data from these two sources is believed to provide high confidence that a given transcription factor directly regulates a particular gene [28]. Also, integration of data from two different sources contributes to the reduction of false positives in the resulting gold-standards.\nA gold-standard is a list of genes that are directly downstream of a particular transcription factor, and are functionally regulated by it. Generation of gold-standards involved steps that are outlined below.\nFunctional gene expression data was first used to identify genes that are downstream (but not necessarily directly) of a particular transcription factor by application of the Student's t-test with α = 0.05 to 'experiment' (e. g., siRNA) and 'control' samples. Wherever applicable, we used a paired t-test that has larger statistical power to find differentially expressed genes than an unpaired version. Also, if some transcription factor had a well-known role as either activator or repressor for the majority of target genes, we created two gold-standards: one with a one-sided t-test and another one with a two-sided t-test. For example, since it is known that MYC is an activator for most target genes [24], we expect that in siRNA experiments genes downstream of MYC are down-regulated; this can be detected by a one-sided t-test. However, since there are studies that reported role of MYC as a repressor [25,26], we can also expect that genes downstream of MYC can be either up-or down-regulated; this can be detected by a two-sided t-test.\nGenome-wide binding data was then employed to identify direct binding targets of each transcription factor. Specifically, for each studied transcription factor we obtained the set(s) of genes with detected promoter region-transcription factor binding according to the primary study that generated binding data.\nWe emphasize that using genome-wide binding data by itself is insufficient to find downstream functional targets of a transcription factor, because many binding sites can be non-functional [27]. Therefore, the final step in gold-standard creation required overlapping of the list of direct binding targets (from binding data) with the list of downstream functional targets (from expression data). Knowledge gained by integration of data from these two sources is believed to provide high confidence that a given transcription factor directly regulates a particular gene [28]. 
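To make the two-step construction concrete, the following minimal Python sketch intersects the genes that respond to perturbation of a transcription factor (per-gene t-test on expression data at α = 0.05) with the genes reported as bound in the ChIP data. The data layout, function name and toy values are hypothetical; only the procedure itself follows the description above.

```python
# Hypothetical sketch of gold-standard construction: intersect genes that respond
# to transcription-factor perturbation (t-test on expression data) with genes
# bound by that factor in ChIP-chip/ChIP-seq/ChIP-PET data.
import numpy as np
from scipy import stats

def gold_standard(expr_control, expr_knockdown, genes, bound_genes,
                  alpha=0.05, one_sided=True):
    """expr_control / expr_knockdown: arrays of shape (samples, genes);
    genes: HGNC gene symbols; bound_genes: genes bound in the ChIP data."""
    de_genes = set()
    for j, gene in enumerate(genes):
        # Unpaired t-test per gene; a paired test would be used where samples match.
        t, p = stats.ttest_ind(expr_knockdown[:, j], expr_control[:, j])
        if one_sided:
            # One-sided alternative: targets of an activator drop after knock-down.
            p = p / 2 if t < 0 else 1 - p / 2
        if p < alpha:
            de_genes.add(gene)
    # Direct functional targets = differentially expressed AND bound by the factor.
    return de_genes & set(bound_genes)

# Toy usage: 3 control and 3 knock-down samples, 5 hypothetical genes.
rng = np.random.default_rng(0)
genes = ["G1", "G2", "G3", "G4", "G5"]
ctrl = rng.normal(10.0, 1.0, size=(3, 5))
kd = ctrl - np.array([5.0, 5.0, 0.0, 5.0, 0.0])   # G1, G2, G4 respond to knock-down
print(gold_standard(ctrl, kd, genes, bound_genes={"G1", "G4", "G5"}))
```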
Pathway databases
In the current study we analyzed twelve pathway-derived sets of direct transcriptional targets for each transcription factor of interest. These gene sets were extracted from the ten pathway databases listed in Table 1 according to the following protocol.
Table 1. Pathway databases.
From each relevant pathway present in BioCarta, KEGG, WikiPathways and Cell Signaling Technology, we manually extracted all stated direct transcriptional targets of each of the seven transcription factors on our list. From the Ingenuity Pathway Analysis database, we extracted two sets of target genes regulated by each transcription factor of interest. One of them (more conservative) contained all genes with the 'transcription' relation type to a given transcription factor, while the other (more liberal) incorporated the union of genes with the relation types 'transcription', 'expression' and 'protein-DNA interaction'. From each of the BKL TRANSPATH and TRANSFAC databases, we extracted the set of genes stated to be regulated by each transcription factor in question (i.e., "Binding Sites/Regulated Genes" and "Regulates expression of (direct or indirect)"). From the GeneSpring database, we created two gene sets. The first (more conservative) set contained the intersection of the following two groups of genes: genes regulated by the transcription factor at the expression level and genes bound by the given transcription factor. The second (more liberal) set incorporated the union of the above two groups of genes. Finally, from the Pathway Studio and MetaCore pathway databases, we extracted the set of targets transcriptionally regulated by each of the transcription factors on our list (in MetaCore we considered the union of genes with the 'transcription regulation' and 'co-regulation of transcription' types of relations with the given transcription factor).
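To illustrate the conservative versus liberal extraction rules, the short sketch below builds both variants from a hypothetical export of (relation type, target gene) pairs for one transcription factor; the relation-type labels follow the wording above, but the export format and the gene names are assumptions made for illustration only.

```python
# Hypothetical sketch: building "conservative" and "liberal" target sets from
# exported (relation_type, target_gene) pairs for one transcription factor.
relations = [
    ("transcription", "CDK4"),
    ("expression", "CCND2"),
    ("protein-DNA interaction", "NPM1"),
    ("transcription", "TERT"),
]

conservative = {g for r, g in relations if r == "transcription"}
liberal = {g for r, g in relations
           if r in {"transcription", "expression", "protein-DNA interaction"}}

# GeneSpring-style variants: intersection (conservative) vs union (liberal)
# of expression-regulated and bound gene sets.
expression_regulated = {"CDK4", "CCND2", "TERT"}
bound = {"CDK4", "NPM1", "TERT"}
genespring_conservative = expression_regulated & bound   # {"CDK4", "TERT"}
genespring_liberal = expression_regulated | bound

print(conservative, liberal, genespring_conservative, genespring_liberal)
```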
Statistical comparison of gene sets
Since we are seeking to compare gene sets from different studies and databases, it is essential to map genes to standard identifiers. We therefore transformed all gene sets to the HUGO Gene Nomenclature Committee approved gene symbols and names [29].
In order to assess the statistical significance of the overlap between the resulting gene sets, we used the hypergeometric test at the 5% α-level with false discovery rate correction for multiple comparisons by the method of Benjamini and Yekutieli [30]. The alternative hypothesis of this test is that two sets of genes (set A from a pathway database and set B from experiments) have more genes in common than two randomly selected gene sets of the same sizes as A and B. For example, consider that for some transcription factor there are 300 direct targets in pathway database #1 and 700 in the experimentally derived list (gold-standard), and their intersection is 16 genes (Figure 2a). If we select at random from a total of 20,000 genes two sets with 300 and 700 genes each, their overlap would be greater than or equal to 16 genes 6.34% of the time. Thus, this overlap is not statistically significant at the 5% α-level (p = 0.0634). On the other hand, consider that pathway database #2 lists 30 direct targets of that transcription factor, and their intersection with the 700-gene gold-standard is only 6 genes. Even though this intersection is rather small, it is unlikely that 30 genes selected at random (out of 20,000) would overlap a 700-gene gold-standard in 6 or more genes (p = 0.0005, see Figure 2a). This overlap is statistically significant at the 5% α-level.
Figure 2. Illustration of the statistical methodology for comparison between a gold-standard and a pathway database.
Even though the above statistical methodology is based on odds ratios, databases with a very small number of targets may not reach statistical significance regardless of the quality of their data. To address this issue and provide another view of the data in our study, we calculate an enrichment fold change ratio (EFC) for every intersection between a gold-standard and a pathway database. For a given pair of a gold-standard and a pathway database, EFC is equal to the observed number of genes in their intersection divided by the expected size of the intersection under the null hypothesis (plus machine epsilon, to avoid division by zero). Notice, however, that larger values of EFC may correspond to databases that are highly incomplete and contain only a few relations. For example, consider that for some transcription factor there are 300 direct targets in pathway database #1 and 50 in the experimentally derived list (gold-standard), and their intersection is 30 genes (Figure 2b). If we select at random from a total of 20,000 genes two sets with 300 and 50 genes each, their expected overlap under the null hypothesis is 0.75 genes. Thus, the EFC ratio is 40 (= 30/0.75). On the other hand, consider that pathway database #2 lists only 2 direct targets of that transcription factor, and their intersection with the 50-gene gold-standard is 1 gene. Even though the expected overlap under the null hypothesis is 0.005 and the EFC is 200 (5 times bigger than for database #1), this intersection with the gold-standard is 30 times smaller than that of database #1 (Figure 2b).
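The worked examples above can be reproduced with a short script. The sketch below assumes the 20,000-gene background used in the text; the hypergeometric tail probability, the Benjamini-Yekutieli correction and the EFC definition follow the description above, while the function names and the choice of scipy/statsmodels are our own.

```python
# Sketch of the overlap test and enrichment fold change (EFC) described above.
from scipy.stats import hypergeom
from statsmodels.stats.multitest import multipletests

GENOME = 20000  # total number of genes assumed in the worked example

def overlap_pvalue(pathway_size, gold_size, overlap, genome=GENOME):
    """P(intersection >= overlap) for two random sets of the given sizes."""
    return hypergeom.sf(overlap - 1, genome, pathway_size, gold_size)

def efc(pathway_size, gold_size, overlap, genome=GENOME):
    """Observed overlap divided by its expected size under the null hypothesis."""
    expected = pathway_size * gold_size / genome
    return overlap / (expected + 2.2e-16)   # add machine epsilon

# Worked examples from the text: (pathway targets, gold-standard size, overlap)
cases = [(300, 700, 16),   # expected p ~ 0.063 per the text, not significant
         (30, 700, 6),     # expected p ~ 0.0005 per the text, significant
         (300, 50, 30),    # EFC = 30 / 0.75 = 40
         (2, 50, 1)]       # EFC = 200, but only a 1-gene overlap
pvals = [overlap_pvalue(*c) for c in cases]

# Benjamini-Yekutieli FDR correction across all comparisons, 5% alpha.
reject, pvals_by, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")

for c, p, r in zip(cases, pvals, reject):
    print(c, f"p={p:.4f}", "significant" if r else "n.s.", f"EFC={efc(*c):.1f}")
```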
Results
Comparison between pathway databases
First, we assessed all pathway databases listed in Table 1 by comparing the extracted transcriptional targets for each of the seven transcription factors. The number of overlapping MYC targets between pathway databases is shown in Figure 1. We also calculated the Jaccard index, a normalized measure of similarity between two gene lists (the size of the intersection of the two lists divided by the size of their union), for all pairwise comparisons of pathway databases (Figure S1 in Additional File 1). Since the information in the majority of these databases is curated, we would expect to see almost full intersection for every pair of databases, i.e. a Jaccard index close to 1. However, our analysis revealed that the transcriptional data differ significantly from one database to another. Indeed, the average Jaccard index over all pairwise comparisons of pathway databases ranges between 0.0180 (AR) and 0.0742 (RELA), depending on the transcription factor. The grand average Jaccard index over all pathway comparisons and transcription factors is 0.0533. This means that only 5.33% of the gene targets that belong to either of two transcriptional regulatory pathways also belong to both. Furthermore, transcriptional regulatory information on particular transcription factors was entirely absent from some pathway databases.
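For completeness, a minimal sketch of the pairwise Jaccard computation is shown below; the example target lists are invented for illustration, and only the Jaccard index formula itself is taken from the text.

```python
# Sketch: pairwise Jaccard index between per-database target sets for one factor.
from itertools import combinations

def jaccard(a, b):
    """Jaccard index of two gene sets: size of intersection / size of union."""
    union = set(a) | set(b)
    return len(set(a) & set(b)) / len(union) if union else 0.0

# Hypothetical MYC target lists from three databases (illustration only).
targets = {
    "database_A": {"CDK4", "CCND2", "TERT", "NPM1"},
    "database_B": {"CDK4", "ODC1"},
    "database_C": {"LDHA", "TERT"},
}

pairwise = {(x, y): jaccard(targets[x], targets[y])
            for x, y in combinations(sorted(targets), 2)}
average = sum(pairwise.values()) / len(pairwise)
print(pairwise, f"average={average:.3f}")
```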
Generation of the gold-standards
By integrating functional gene expression and genome-wide binding datasets, we obtained 25 gold-standards for the seven transcription factors considered in our study (Table 2). Each of these gold-standards was generated as described in the Methods section and contains genes that are directly downstream of a particular transcription factor and are functionally regulated by it. A detailed list of all genes in each gold-standard is available from http://www.nyuinformatics.org/downloads/supplements/PathwayAssessment. Additionally, since for MYC, NOTCH1, RELA, and BCL6 more than one dataset (functional gene expression or genome-wide binding) was available, we also generated a list of the most confident direct downstream targets of each of these transcription factors by overlapping the gold-standards obtained from the different datasets (Table S3 in Additional File 1). While these lists are expected to contain only the most confident genes directly regulated by the transcription factor in question, they are likely to be incomplete because of condition-specific transcriptional targets that may not appear in all datasets, and also for statistical reasons, namely that the probability of obtaining significant results in several studies declines with the number of studies [31].
Table 2. Gold standards for each transcription factor (TF).
Comparison between gold-standards and transcriptional regulatory pathways
For each transcription factor, we analyzed the overlap between the experimentally derived gold-standards of direct transcriptional targets and the 12 gene sets from pathway databases, as described in the Methods section. The detailed results are shown in Figure 3 and summarized in Figure 4. Over all gold-standards and transcription factors, the largest overlap with experimentally derived gene targets was obtained by MetaCore (statistically significant overlap with 84% of the gold-standards). The other pathway databases largely do not have statistically significant overlaps with the experimentally obtained target genes: the best of the remaining databases, Ingenuity Pathway Analysis, yields a statistically significant overlap with only 36% of the gold-standards. Notably, in 82 comparisons (over all 25 gold-standards and 12 gene sets per transcription factor) the overlap of pathways with gold-standards is empty. In brief, for the majority of the pathway databases used in this study, randomly selected gene sets yield the same number of overlapping genes with the gold-standards as the pathway databases do. On the other hand, the biggest average EFC ratio, equal to 19, was obtained by WikiPathways. Notably, the intersection of the WikiPathways data with gold-standard #I for TP53 was 203 times bigger than expected for that overlap (the largest EFC over all assessed data). Furthermore, the EFC of this intersection was approximately 18.5 times bigger than the corresponding EFC of MetaCore. However, the overlap of the WikiPathways data with this gold-standard (3 genes) was 6 times smaller than the overlap of the MetaCore data with the same gold-standard (18 genes).
Figure 3. Comparison between different pathway databases and experimentally derived gold-standards for all considered transcription factors. The value in a given cell is the number of overlapping genes between a gold-standard and a pathway-derived gene set. Cells are colored according to their values from white (low values) to red (high values). Underlined values in red represent statistically significant intersections.
Figure 4. Summary of the pathway databases assessment. Green cells represent statistically significant intersections between experimentally derived gold-standards and transcriptional regulatory pathways. White cells denote results that are not statistically significant. Numbers are the enrichment fold change ratios (EFC) calculated for each intersection.
Cross-talk of MYC, NOTCH1, RELA transcriptional regulatory pathways
The gold-standards generated in our study contain direct targets of each individual transcription factor. However, these factors only rarely act individually. Indeed, we and others have previously suggested that induction and maintenance of T-cell acute lymphoblastic leukemia (T-ALL), a devastating pediatric blood cancer, depend on the cross-talk of three transcription factors, NOTCH1, MYC, and RELA (NF-κB) [15,19,32]. We therefore hypothesized that these factors should share target genes that could be important for the progression of the disease. Our analysis supported this hypothesis, as it identified a large number of genes targeted by two or more factors (Table S4 in Additional File 1). As expected, NOTCH1 and MYC, two transcription factors that have been closely connected in T-ALL, share a large number of common targets (> 400). Some of these genes are very intriguing from a biological point of view. For example, two activators of cell cycle entry, CDK4 and CDK6, appear to be induced by both factors. We have previously shown that silencing of CDK4/6 activity is able to suppress T-ALL, suggesting that NOTCH1 and MYC activities could converge on these CDK genes to initiate expansion of transformed cells [33]. Interestingly, MYC itself and its interacting partners MYCB and MYCB2 appear to also be targeted by both factors, suggesting an interesting signal amplification mechanism.
RELA and MYC also share a large number (> 550) of common gene targets. This is a novel biological finding with importance for the biology of T-cell leukemia. For example, several essential T-cell regulators, including RUNX1, BCL2L1 (BCL-xL), ID3, ITCH, JAK3 and NOTCH1, appear to be controlled by both transcription factors. Interestingly, NOTCH1 is downstream of both RELA and MYC, while at the same time these two factors are targets of oncogenic NOTCH1 [15,32], suggesting once more an intricate auto-amplification loop that could sustain transformation.
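The shared-target analysis reduces to simple set operations on the gold-standards. The sketch below uses short, hypothetical target lists (a few of the genes mentioned above) purely for illustration; the full gold-standards are available from the supplementary web site.

```python
# Sketch: genes shared by two or more of the MYC, NOTCH1 and RELA gold-standards.
from collections import Counter
from itertools import combinations

# Hypothetical, heavily truncated gold-standards (illustration only).
gold = {
    "MYC":    {"CDK4", "CDK6", "MYCB", "RUNX1", "BCL2L1", "NOTCH1"},
    "NOTCH1": {"CDK4", "CDK6", "MYCB", "MYC"},
    "RELA":   {"RUNX1", "BCL2L1", "ID3", "JAK3", "NOTCH1"},
}

# Pairwise intersections (e.g., NOTCH1 and MYC shared targets).
for a, b in combinations(gold, 2):
    print(a, b, sorted(gold[a] & gold[b]))

# Genes targeted by at least two of the three factors.
counts = Counter(g for targets in gold.values() for g in targets)
shared = sorted(g for g, c in counts.items() if c >= 2)
print("targeted by >= 2 factors:", shared)
```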
Discussion
At the core of this study was the creation of gold-standards of transcriptional regulation in humans that can be compared with the target genes reported in transcriptional regulatory pathways. We focused on seven well-known transcription factors and obtained gold-standards by integrating genome-wide transcription factor-DNA binding data (from ChIP-chip, ChIP-seq, or ChIP-PET platforms) with functional gene expression microarray and RNA-seq data. The latter data allow us to survey changes in the transcriptome on a genome-wide scale after inhibition or over-expression of the transcription factor in question. However, a change in the expression of a particular gene could be caused either by the direct effect of the removal or introduction of a given transcription factor, or by an indirect effect, through a change in the expression level of some other gene(s). As mentioned in the Methods section, transcription factor-DNA binding data by themselves are similarly insufficient to determine downstream functional targets of a transcription factor, because of non-functional and/or non-specific protein binding activity [27]. Thus, it is essential to integrate data from these two sources to obtain an accurate list of gene targets that are directly regulated by a transcription factor [28].
It is worth noting that the tested pathway databases typically do not distinguish between cell lines, experimental conditions, and other details relevant to the experimental systems in which the data were obtained. These databases in a sense propose a 'universal' list of transcriptional targets. However, it is known that transcriptional regulation in a cell is dynamic and works differently for different systems and stimuli. This highlights a major limitation of pathway databases and emphasizes the importance of deriving a specific list of transcriptional targets for the experimental system at hand. In this study we followed the latter approach by developing gold-standards for specific, well-characterized biological systems and experimental conditions.
However, our approach for building gold-standards of direct mechanistic knowledge has several limitations. First of all, there are limitations inherited from the assaying technologies. Microarrays cannot reliably detect small changes in gene expression and/or genes expressed at very low levels [34]. Similarly, ChIP-chip transcription factor-DNA binding data are known to have imperfect reproducibility [35]. Second, the functional gene expression and binding data used in our work often originated from different studies. Even though we verified that the biological systems were comparable, even minor differences in experimental conditions and phenotypes between studies can challenge the integration of binding and functional expression data. Third, siRNA knock-down is not an ideal experiment for proving direct causation for identified binding relations, because it could introduce some false positives into our gold-standards. For example, a transcription factor could bind to the promoter region of a gene but regulate the expression of that gene indirectly, via trans-regulation of another transcription factor. Fourth, some false negatives could arise from compensatory mechanisms in the cell. Finally, our approach yields only gold-standards of direct downstream targets of a transcription factor, so information about upstream regulation cannot be obtained by this method. Likewise, it does not allow capturing interactions between genes that are not transcription factors. Notwithstanding the above challenges, this method is currently state-of-the-art and is believed to provide, with high confidence, direct regulatory interactions of transcription factors on a genome-wide scale [28].
In addition to the assessment of pathway databases, these gold-standards can be used to gain new biological insights into transcriptional regulation. Specifically, our results suggest that multiple transcription factors can co-operate to control both physiological differentiation and malignant transformation, as demonstrated using combinatorial gene-profiling of NOTCH1, MYC and RELA targets. These studies might lead us to multi-pathway gene expression "signatures" essential for the prediction of genes that could be targeted in cancer treatments.
In agreement with this hypothesis, several of the genes identified in our analysis have been suggested to be putative therapeutic targets in leukemia, with either preclinical or clinical trials underway (CDK4, CDK6, GSK3b, MYC, LCK, NFkB2, BCL2L1, NOTCH1) [36].", "The comparison of the pathway databases, the main goal of this study, first of all revealed that human transcriptional regulatory pathways often do not agree with each other and contain different target genes. More importantly, with the exception of MetaCore, the majority of the sets of target genes specified in the transcriptional regulatory pathways were found to be incomplete and/or inaccurate when compared with experimentally derived gold-standards. Despite the fact that in the present study we assessed the transcriptional pathways of only seven transcription factors (due to the limited availability of high-quality genome-wide binding and expression data in the public domain), we anticipate that our results would generalize to other transcription factors and pathways.\nGiven the widespread use of pathways for hypothesis generation, the conclusions of our study have significant implications for biomedical research in general and for the discovery of new drugs and treatments. In order to obtain a more accurate research hypothesis, the choice of pathway databases has to be informed by solid scientific evidence and rigorous empirical evaluations such as ours. We thus aim to continue comprehensive benchmarking of biological pathways to facilitate evidence-based pathway selection by biomedical researchers. At the same time, we propose that developers of pathway databases take advantage of recently available genome-wide binding and functional expression data to refine transcriptional regulatory pathways.", "The authors declare that they have no competing interests.", "Conceived and designed the experiments: ES, AS. Performed the experiments: ES, AS, ZT. Analyzed the results of experiments: ES, AS, ZT, IA. Wrote the paper: ES, AS, ZT, IA. All authors read and approved the final manuscript.", "Reviewer #1: Prof. Wing Hung Wong\nDepartment of Statistics, Stanford University, Stanford, CA, USA\n[SUBTITLE] Reviewer's comments [SUBSECTION] The purpose of this paper is to assess the quality and completeness of information in several popular pathway databases on the targets of transcription factors. To do this, the authors extracted this information for seven transcription factors from 10 pathway databases, and compared it to \"gold standard\" target lists identified from expression profiling and ChIP-seq experiments. The \"gold standard\" criterion is a relatively stringent one that requires not only ChIP-seq support for interaction between the regulator and the regulatory regions of the target gene, but also significant changes in target gene expression upon knockdown or overexpression of the regulator.\nThe main finding is that there is a very low degree of agreement among the target lists extracted from the different databases, and only one of the database (MetaCore) can provide a statistically significant overlap of targets when compared to the gold standard lists.\nThis is a very timely research. Many investigators are now incorporating information from the pathway databases in their data analysis. 
The results of this paper serve to warn us that the pathway databases are far from complete and may yield misleading results.\nOne unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance of test of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratio or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision.\n[SUBTITLE] Authors' response [SUBSECTION] We agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.\n[SUBTITLE] Reviewer's comments [SUBSECTION] The authors have not addressed my previous comment:\n\"One unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance of test of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratio or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision.\"\nIn their response they stated that odds ratio is used in their statistics. However this is used only to compute the hypergeometric p-value. As pointed out in my original review, such p-values are very sensitive to the sample size so the databases with a smaller number of annotations will not be able to reach the threshold of statistical significance. Thus the comparison between the databases, such as the conclusion that MetaCore is more reliable than the other ones, will be biased by this effect. At the very least, the authors should provide in their various tables, the enrichment fold change which is the observed number in the intersection, divided by the expected number (given fixed marginal counts for each factor) under the null hypothesis.\n[SUBTITLE] Authors' response [SUBSECTION] We agree with the reviewer that databases with a very small number of reported targets might not reach the threshold of statistical significance, even though their data could be highly reliable. We improved the manuscript accordingly and provided an alternative enrichment fold change metric suggested by the reviewer. Please see the last paragraph of the Methods section, Figure 2b, and Figure 4.\nReviewer #2: Dr. Thiago Motta Venancio (nominated by Dr. L Aravind)\nComputational Biology Branch, NCBI, NLM, NIH, Bethesda, MD, USA\n[SUBTITLE] Reviewer's comments [SUBSECTION] In this paper Shmelkov et al. analyze the targets of 7 important human transcription factors. They found that the data obtained from different databases and direct experimental works are not well correlated, which could raise important concerns given the widespread use of such data repositories by the scientific community. 
I have some critical points that should be addressed in order to improve clarity and coherence of the manuscript.\n1) The authors extracted data from different databases and datasets. How did they handle the problem of different identifier to name genes (e.g. Entrez GeneID, Ensembl)? In addition, what is the impact of different genome versions between databases/datasets in this analysis? These problems could easily give the low overlap found in the manuscript.\n2) Many (if not all) of the databases used in the present study import data from the literature. Are the datasets used present in any of the datasets? If yes, this could be an artificial explanation for the MetaCore large overlap would be expected. If not, why are they not there and why the method used by the authors is better than the ones used by the database teams to identify the real transcription factor targets?\n3) I think the T-test and the significance level used to define gold standards are inappropriate for genome-scale analysis, due to multiple testing. Did the authors do any statistical analysis to circumvent such problems (e.g. p-value correction)?\n4) Some databases have very low numbers of genes. Wouldn't this result in non significant results regardless of the size of the overlap?\n5) The human genome has more than 1500 transcription factors. Due to data availability, the authors assessed 7 of them in this study. In my opinion this far from being \"genome wide\" in the transcription factor landscape.\nMinor point: In the abstract the authors say: \"and targets reported in transcriptional regulatory pathways is surprisingly small\". Are you referring to transcriptional regulatory pathways databases?\n[SUBTITLE] Authors' response [SUBSECTION] The point-by-point response follows:\n1) As described in the revised manuscript, we have transformed all gene identifiers from different studies/databases to the HUGO Gene Nomenclature Committee approved gene symbols and names (see the first sentence in the subsection \"Statistical comparison of gene sets\"). We also ensured that we used the same version of the reference genome as used in the original studies when we generated gene lists (e.g., from ChIP-Seq studies).\n2) We agree with the reviewer that the above questions can facilitate understanding the results of our study. Indeed, if some database used all recent available data in the literature and analyzed it using appropriate statistical and bioinformatics methodologies, it would have a significant overlap with our gold-standards. As we have found, this was not the case for most tested databases. Knowing how databases obtained their knowledge bases would potentially allow to delve deeper and address the questions posed by the reviewer. However, these questions are outside the scope of our manuscript; its main purpose was to evaluate pathway databases and identify ones that mostly correlate with the experimental data. Finally, it is also worthwhile to mention that most of used databases use proprietary algorithms and do not disclose how they extract underlying knowledge.\n3) In this study we build gold-standards by intersecting the list of binding targets (from ChIP-seq/ChIP-chip/Chip-PET data) with the list of differentially expressed downstream targets (from gene expression data). Indeed, we used t-test for analysis of gene expression data with the 5% alpha level. In general this would entail 5% of false positives which can be a fairly large number. 
Notice however that our criterion for a gene to participate in the gold-standard is not only that it has p-value <0.05 according to the t-test in the gene expression data, but also that it is bound by the transcription factor of interest, as determined from an independent analysis of the ChIP-seq/ChIP-chip/Chip-PET data. Thus, we reduce the number of false positives in the resulting gold-standards. In order to further clarify this in the revised manuscript, we added a sentence in the last paragraph of the subsection \"Generation of the gold-standards\" of \"Methods\".\n4) We have clarified this issue in the revised manuscript in the subsection \"Statistical comparison of gene sets\". In summary, the statistical test is based on odds ratios and thus accounts for different number of genes in pathway databases.\n5) We used the term \"genome-wide\" to denote that the gold-standard for each transcription factor was obtained on a genome-wide scale, i.e. it was based on chromatin immunoprecipitation coupled with the entire genome sequencing or microarray gene expression analysis versus focusing on a selected subset of genes. We have improved the third paragraph of the Background section in the revised manuscript to clarify this issue.\nReviewer's Minor point: We agree with the reviewer and have modified the abstract accordingly.\nReviewer #3: Prof. 
Geoff J McLachlan\nDepartment of Mathematics, The University of Queensland, Brisbane, Australia\nThis reviewer provided no comments for publication.", "The purpose of this paper is to assess the quality and completeness of information in several popular pathway databases on the targets of transcription factors. To do this, the authors extracted this information for seven transcription factors from 10 pathway databases, and compared it to \"gold standard\" target lists identified from expression profiling and ChIP-seq experiments. The \"gold standard\" criterion is a relatively stringent one that requires not only ChIP-seq support for interaction between the regulator and the regulatory regions of the target gene, but also significant changes in target gene expression upon knockdown or overexpression of the regulator.\nThe main finding is that there is a very low degree of agreement among the target lists extracted from the different databases, and only one of the database (MetaCore) can provide a statistically significant overlap of targets when compared to the gold standard lists.\nThis is a very timely research. Many investigators are now incorporating information from the pathway databases in their data analysis. The results of this paper serve to warn us that the pathway databases are far from complete and may yield misleading results.\nOne unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance of test of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratio or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision.\n[SUBTITLE] Authors' response [SUBSECTION] We agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.\nWe agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.", "We agree with the reviewer that the statistical methodology has to incorporate odds ratios in order to account for differences in sizes of pathway databases. As a matter of fact, our statistical methodology is based on odds ratios and thus is appropriate for the study. We admit that this was not clearly explained in the original paper, and we have clarified this issue in the revised manuscript. 
Specifically, we made changes to the subsection \"Statistical comparison of gene sets\" and added Figure 2.", "The authors have not addressed my previous comment:\n\"One unsatisfactory aspect of the statistical analysis presented in this paper is the over-reliance of test of significance as opposed to quantification of the degree of overlap by appropriate indexes such as odds ratio or enrichment fold-changes. For example, it may be that TransFac and KEGG annotations are highly reliable but they cannot reach statistical significance because they contain very few relations. From this consideration it is not surprising that MetaCore, which contains a large number of relations, appears to do so much better than the other databases. I hope the authors can address this issue carefully in their revision.\"\nIn their response they stated that odds ratio is used in their statistics. However this is used only to compute the hypergeometric p-value. As pointed out in my original review, such p-values are very sensitive to the sample size so the databases with a small smaller number of annotations will not be able to reach the threshold of statistical significance. Thus the comparison between the databases, such as the conclusion that MetaCore is more reliable than the other ones, will be biased by this effect. At the very least, the authors should provide in their various tables, the enrichment fold change which is the observed number in the intersection, divided by the expected number (given fixed marginal counts for each factor) under the null hypothesis.\n[SUBTITLE] Authors' response [SUBSECTION] We agree with the reviewer that databases with a very small number of reported targets might not reach the threshold of statistical significance, even though their data could be highly reliable. We improved the manuscript accordingly and provided an alternative enrichment fold change metric suggested by the reviewer. Please see the last paragraph of the Methods section, Figure 2b, and Figure 4.\nReviewer #2: Dr. Thiago Motta Venancio (nominated by Dr. L Aravind)\nComputational Biology Branch, NCBI, NLM, NIH, Bethesda, MD, USA\nWe agree with the reviewer that databases with a very small number of reported targets might not reach the threshold of statistical significance, even though their data could be highly reliable. We improved the manuscript accordingly and provided an alternative enrichment fold change metric suggested by the reviewer. Please see the last paragraph of the Methods section, Figure 2b, and Figure 4.\nReviewer #2: Dr. Thiago Motta Venancio (nominated by Dr. L Aravind)\nComputational Biology Branch, NCBI, NLM, NIH, Bethesda, MD, USA", "We agree with the reviewer that databases with a very small number of reported targets might not reach the threshold of statistical significance, even though their data could be highly reliable. We improved the manuscript accordingly and provided an alternative enrichment fold change metric suggested by the reviewer. Please see the last paragraph of the Methods section, Figure 2b, and Figure 4.\nReviewer #2: Dr. Thiago Motta Venancio (nominated by Dr. L Aravind)\nComputational Biology Branch, NCBI, NLM, NIH, Bethesda, MD, USA", "In this paper Shmelkov et al. analyze the targets of 7 important human transcription factors. They found that the data obtained from different databases and direct experimental works are not well correlated, which could raise important concerns given the widespread use of such data repositories by the scientific community. 
I have some critical points that should be addressed in order to improve clarity and coherence of the manuscript.\n1) The authors extracted data from different databases and datasets. How did they handle the problem of different identifier to name genes (e.g. Entrez GeneID, Ensembl)? In addition, what is the impact of different genome versions between databases/datasets in this analysis? These problems could easily give the low overlap found in the manuscript.\n2) Many (if not all) of the databases used in the present study import data from the literature. Are the datasets used present in any of the datasets? If yes, this could be an artificial explanation for the MetaCore large overlap would be expected. If not, why are they not there and why the method used by the authors is better than the ones used by the database teams to identify the real transcription factor targets?\n3) I think the T-test and the significance level used to define gold standards are inappropriate for genome-scale analysis, due to multiple testing. Did the authors do any statistical analysis to circumvent such problems (e.g. p-value correction)?\n4) Some databases have very low numbers of genes. Wouldn't this result in non significant results regardless of the size of the overlap?\n5) The human genome has more than 1500 transcription factors. Due to data availability, the authors assessed 7 of them in this study. In my opinion this far from being \"genome wide\" in the transcription factor landscape.\nMinor point: In the abstract the authors say: \"and targets reported in transcriptional regulatory pathways is surprisingly small\". Are you referring to transcriptional regulatory pathways databases?\n[SUBTITLE] Authors' response [SUBSECTION] The point-by-point response follows:\n1) As described in the revised manuscript, we have transformed all gene identifiers from different studies/databases to the HUGO Gene Nomenclature Committee approved gene symbols and names (see the first sentence in the subsection \"Statistical comparison of gene sets\"). We also ensured that we used the same version of the reference genome as used in the original studies when we generated gene lists (e.g., from ChIP-Seq studies).\n2) We agree with the reviewer that the above questions can facilitate understanding the results of our study. Indeed, if some database used all recent available data in the literature and analyzed it using appropriate statistical and bioinformatics methodologies, it would have a significant overlap with our gold-standards. As we have found, this was not the case for most tested databases. Knowing how databases obtained their knowledge bases would potentially allow to delve deeper and address the questions posed by the reviewer. However, these questions are outside the scope of our manuscript; its main purpose was to evaluate pathway databases and identify ones that mostly correlate with the experimental data. Finally, it is also worthwhile to mention that most of used databases use proprietary algorithms and do not disclose how they extract underlying knowledge.\n3) In this study we build gold-standards by intersecting the list of binding targets (from ChIP-seq/ChIP-chip/Chip-PET data) with the list of differentially expressed downstream targets (from gene expression data). Indeed, we used t-test for analysis of gene expression data with the 5% alpha level. In general this would entail 5% of false positives which can be a fairly large number. 
Notice however that our criterion for a gene to participate in the gold-standard is not only that it has p-value <0.05 according to the t-test in the gene expression data, but also that it is bound by the transcription factor of interest, as determined from an independent analysis of the ChIP-seq/ChIP-chip/Chip-PET data. Thus, we reduce the number of false positives in the resulting gold-standards. In order to further clarify this in the revised manuscript, we added a sentence in the last paragraph of the subsection \"Generation of the gold-standards\" of \"Methods\".\n4) We have clarified this issue in the revised manuscript in the subsection \"Statistical comparison of gene sets\". In summary, the statistical test is based on odds ratios and thus accounts for different number of genes in pathway databases.\n5) We used the term \"genome-wide\" to denote that the gold-standard for each transcription factor was obtained on a genome-wide scale, i.e. it was based on chromatin immunoprecipitation coupled with the entire genome sequencing or microarray gene expression analysis versus focusing on a selected subset of genes. We have improved the third paragraph of the Background section in the revised manuscript to clarify this issue.\nReviewer's Minor point: We agree with the reviewer and have modified the abstract accordingly.\nReviewer #3: Prof. Geoff J McLachlan\nDepartment of Mathematics, The University of Queensland, Brisbane, Australia\nThis reviewer provided no comments for publication.\nThe point-by-point response follows:\n1) As described in the revised manuscript, we have transformed all gene identifiers from different studies/databases to the HUGO Gene Nomenclature Committee approved gene symbols and names (see the first sentence in the subsection \"Statistical comparison of gene sets\"). We also ensured that we used the same version of the reference genome as used in the original studies when we generated gene lists (e.g., from ChIP-Seq studies).\n2) We agree with the reviewer that the above questions can facilitate understanding the results of our study. Indeed, if some database used all recent available data in the literature and analyzed it using appropriate statistical and bioinformatics methodologies, it would have a significant overlap with our gold-standards. As we have found, this was not the case for most tested databases. Knowing how databases obtained their knowledge bases would potentially allow to delve deeper and address the questions posed by the reviewer. However, these questions are outside the scope of our manuscript; its main purpose was to evaluate pathway databases and identify ones that mostly correlate with the experimental data. Finally, it is also worthwhile to mention that most of used databases use proprietary algorithms and do not disclose how they extract underlying knowledge.\n3) In this study we build gold-standards by intersecting the list of binding targets (from ChIP-seq/ChIP-chip/Chip-PET data) with the list of differentially expressed downstream targets (from gene expression data). Indeed, we used t-test for analysis of gene expression data with the 5% alpha level. In general this would entail 5% of false positives which can be a fairly large number. 
Notice however that our criterion for a gene to participate in the gold-standard is not only that it has p-value <0.05 according to the t-test in the gene expression data, but also that it is bound by the transcription factor of interest, as determined from an independent analysis of the ChIP-seq/ChIP-chip/Chip-PET data. Thus, we reduce the number of false positives in the resulting gold-standards. In order to further clarify this in the revised manuscript, we added a sentence in the last paragraph of the subsection \"Generation of the gold-standards\" of \"Methods\".\n4) We have clarified this issue in the revised manuscript in the subsection \"Statistical comparison of gene sets\". In summary, the statistical test is based on odds ratios and thus accounts for different number of genes in pathway databases.\n5) We used the term \"genome-wide\" to denote that the gold-standard for each transcription factor was obtained on a genome-wide scale, i.e. it was based on chromatin immunoprecipitation coupled with the entire genome sequencing or microarray gene expression analysis versus focusing on a selected subset of genes. We have improved the third paragraph of the Background section in the revised manuscript to clarify this issue.\nReviewer's Minor point: We agree with the reviewer and have modified the abstract accordingly.\nReviewer #3: Prof. Geoff J McLachlan\nDepartment of Mathematics, The University of Queensland, Brisbane, Australia\nThis reviewer provided no comments for publication.", "The point-by-point response follows:\n1) As described in the revised manuscript, we have transformed all gene identifiers from different studies/databases to the HUGO Gene Nomenclature Committee approved gene symbols and names (see the first sentence in the subsection \"Statistical comparison of gene sets\"). We also ensured that we used the same version of the reference genome as used in the original studies when we generated gene lists (e.g., from ChIP-Seq studies).\n2) We agree with the reviewer that the above questions can facilitate understanding the results of our study. Indeed, if some database used all recent available data in the literature and analyzed it using appropriate statistical and bioinformatics methodologies, it would have a significant overlap with our gold-standards. As we have found, this was not the case for most tested databases. Knowing how databases obtained their knowledge bases would potentially allow to delve deeper and address the questions posed by the reviewer. However, these questions are outside the scope of our manuscript; its main purpose was to evaluate pathway databases and identify ones that mostly correlate with the experimental data. Finally, it is also worthwhile to mention that most of used databases use proprietary algorithms and do not disclose how they extract underlying knowledge.\n3) In this study we build gold-standards by intersecting the list of binding targets (from ChIP-seq/ChIP-chip/Chip-PET data) with the list of differentially expressed downstream targets (from gene expression data). Indeed, we used t-test for analysis of gene expression data with the 5% alpha level. In general this would entail 5% of false positives which can be a fairly large number. 
Notice however that our criterion for a gene to participate in the gold-standard is not only that it has p-value <0.05 according to the t-test in the gene expression data, but also that it is bound by the transcription factor of interest, as determined from an independent analysis of the ChIP-seq/ChIP-chip/Chip-PET data. Thus, we reduce the number of false positives in the resulting gold-standards. In order to further clarify this in the revised manuscript, we added a sentence in the last paragraph of the subsection \"Generation of the gold-standards\" of \"Methods\".\n4) We have clarified this issue in the revised manuscript in the subsection \"Statistical comparison of gene sets\". In summary, the statistical test is based on odds ratios and thus accounts for different number of genes in pathway databases.\n5) We used the term \"genome-wide\" to denote that the gold-standard for each transcription factor was obtained on a genome-wide scale, i.e. it was based on chromatin immunoprecipitation coupled with the entire genome sequencing or microarray gene expression analysis versus focusing on a selected subset of genes. We have improved the third paragraph of the Background section in the revised manuscript to clarify this issue.\nReviewer's Minor point: We agree with the reviewer and have modified the abstract accordingly.\nReviewer #3: Prof. Geoff J McLachlan\nDepartment of Mathematics, The University of Queensland, Brisbane, Australia\nThis reviewer provided no comments for publication.", "Supplementary Information. Table S1: Functional gene expression data. Table S2: Transcription factor-DNA binding data. Table S3: Most confident direct transcriptional targets of each of the four transcription factors. These targets were obtained by overlapping several gold-standards obtained with different datasets for the same transcription factor. Table S4: Genes directly regulated by two or more of the three transcription factors: MYC, NOTCH1, and RELA. Figure S1: Comparison of gene sets of transcriptional targets derived from ten different pathway databases by Jaccard index. In case, where Jaccard index of an overlap could not be determined due to comparison of two empty gene lists, we assigned value 0. Cells are colored according to the Jaccard index, from white (Jaccard index equal to 0) to dark-orange (Jaccard index equal to 1). Each sub-figure gives results for a different transcription factor: (a) AR, (b) BCL6, (c) MYC, (d) NOTCH1, (e) RELA, (f) STAT1, (g) TP53.\nClick here for file" ]
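The overlap statistics debated in the reviewer exchange above, a hypergeometric p-value for the intersection between a pathway-derived gene set and a gold-standard, together with the enrichment fold change (observed intersection divided by the intersection expected under fixed marginal counts), can be written down compactly. The sketch below is a hypothetical illustration, not the authors' code; the function name, the inputs and the choice of gene universe are assumptions.

```python
# Hypothetical sketch of the two metrics discussed above: the enrichment
# fold change (EFC = observed / expected overlap) and a hypergeometric
# p-value for observing at least that overlap by chance.
from scipy.stats import hypergeom

def overlap_statistics(pathway_genes, gold_standard, universe_size):
    pathway_genes = set(pathway_genes)
    gold_standard = set(gold_standard)
    observed = len(pathway_genes & gold_standard)
    # Expected overlap given fixed sizes of both sets and of the gene universe.
    expected = len(pathway_genes) * len(gold_standard) / universe_size
    efc = observed / expected if expected > 0 else float("nan")
    # P(overlap >= observed) when len(pathway_genes) genes are drawn at random
    # from a universe containing len(gold_standard) "success" genes.
    p_value = hypergeom.sf(observed - 1, universe_size,
                           len(gold_standard), len(pathway_genes))
    return observed, expected, efc, p_value
```

Unlike the p-value, the fold change does not shrink towards non-significance when a database reports only a handful of targets, which is the reviewer's reason for requesting it.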
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Maternal and perinatal factors associated with hospitalised infectious mononucleosis in children, adolescents and young adults: record linkage study.
21356092
There is current interest in the role of perinatal factors in the aetiology of diseases that occur later in life. Infectious mononucleosis (IM) can follow late primary infection with Epstein-Barr virus (EBV), and has been shown to increase the risk of multiple sclerosis and Hodgkin's disease. Little is known about maternal or perinatal factors associated with IM or its sequelae.
BACKGROUND
We investigated perinatal risk factors for hospitalised IM using a prospective record-linkage study in a population in the south of England. The dataset used, the Oxford record linkage study (ORLS), includes abstracts of birth registrations, maternities and in-patient hospital records, including day case care, for all subjects in a defined geographical area. From these sources, we identified cases of hospitalised IM up to the age of 30 years in people for whom the ORLS had a maternity record; and we compared perinatal factors in their pregnancy with those in the pregnancy of children who had no hospital record of IM.
METHODS
Our data showed a significant association between hospitalised IM and lower social class (p = 0.02), a higher risk of hospitalised IM in children of married rather than single mothers (p < 0.001), and, of marginal statistical significance, an association with singleton birth (p = 0.06). The ratio of observed to expected cases of hospitalised IM in each season was 0.95 in winter, 1.02 in spring, 1.02 in summer and 1.00 in autumn. The chi-square test for seasonality, with a value of 0.8, was not significant. Other factors studied, including low birth weight, short gestational age, maternal smoking and late age at motherhood, did not increase the risk of subsequent hospitalised IM.
RESULTS
Because of the increasing tendency of women to postpone childbearing, it is useful to know that older age at motherhood is not associated with an increased risk of hospitalised IM in their children. We have no explanation for the finding that children of married women had a higher risk of IM than those of single mothers. Though highly significant, it may nonetheless be a chance finding. We found no evidence that such perinatal factors as birth weight and gestational age, or season of birth, were associated with the risk of hospitalised IM.
CONCLUSIONS
[ "Adolescent", "Child", "Child, Preschool", "England", "Female", "Hospitalization", "Humans", "Infant", "Infant, Newborn", "Infectious Mononucleosis", "Male", "Perinatal Care", "Pregnancy", "Prospective Studies", "Risk Factors", "Young Adult" ]
3056792
null
null
Methods
The Oxford record linkage study (ORLS) includes abstracts of birth registrations, maternities and in-patient hospital admission records, including day case care (ie admission to hospital for care without overnight stay), for all subjects in a defined geographical area of South East England. The maternity data covered all National Health Service (NHS) hospitals in two health districts from 1970 to 1989 (in 1989 detailed data collection on maternity in the ORLS stopped after reforms by the government to increase the uniformity of NHS data collection systems). Cases of hospitalised IM were identified using in-patient and day case admission data in the ORLS for all clinical specialties and from all districts covered by the ORLS including those that did not collect maternity data. These data covered the two health districts from 1970 to 1999 (population 0.9 million in 1999); a further four adjacent districts from 1970-1991 (total population 1.9 million); and all eight districts of the former Oxford region from 1991-1999. The maternity data were extracted from maternity records by clerical staff, trained at the ORLS by senior medical staff. In the 30-year period covered by this study, the abstracts relating to the same individual were linked as part of the Oxford region's NHS health information system. Similarly, the records of each mother and her offspring were routinely linked. From these sources, we identified cases of hospitalised IM up to the age of 30 years in people for whom the ORLS had a maternity record. IM in the mothers was identified by record linkage of each mother's maternity record to hospital admissions for the mother before and after the pregnancy from 1963 to 1999. Exclusions from the analysis of the maternity dataset included 985 abortions, 1560 stillbirths and 1567 neonatal deaths within 30 days of birth. In 289 maternities, the birth weight was recorded as less than 1000 g - these were also excluded because most of these records had implausibly low values and/or missing data for many of the risk factors investigated. None of these 289 excluded babies had a subsequent record of IM. After exclusions, records of 248 659 children remained. Some data items such as social class, mother's smoking and breast feeding at the time of discharge from hospital were not collected until 1975. We accepted, as a case of IM, each person in the ORLS who had a hospital discharge record that included the International Classification of Diseases (ICD) code for IM. The codes used were 075 in the eighth and ninth revision of the ICD and B27 in the tenth. The occupation of the head of the mother's household was recorded based on husband's occupation, or the mother's occupation if single and working, or the mother's father's occupation if not. It was recorded contemporaneously on the mothers' hospital records, obtained by trained interviewers at hospital admission; and was subsequently coded by trained coders as occupational social class in the five standard groups then used in English national statistics (social class one is the most advantaged socio-economic group, and social class five the most deprived). The duration of follow-up for the offspring ranged from 30 years for those born in 1970 to 10 years for those born in 1989, with a mean follow-up duration of 18 years. We analysed all cases of IM together and then split the analysis into those aged 10 and under and those aged 11 years and over. 
We did this, first, because we have 10 years of follow-up of all infants, but a variable length of follow-up thereafter; and, second, because we were particularly interested in cases of late onset IM. Statistical methods used include chi-squared tests to assess the significance of associations between each individual perinatal risk factor and IM in the offspring, and logistic regression modelling to investigate risk factors that had significant independent associations with IM. Statistical significance was measured at the standard 5% level. When using logistic regression, all variables that were significant (P < 0.05) in the univariate analysis were included in the initial model and the variables that were not significant were removed before running the initial model. Thereafter, each of the variables that were not significant in the univariate analysis was re-introduced into the model, one at a time. The purpose of this was to test whether any variable, if not significant in univariate analysis, became significant when modelled with other significant variables. Missing data were excluded only for those terms that were included in the logistic regression model. In order to test the hypothesis that the season-of-birth distribution for people with IM may differ from that in the general population, we also used a much larger ORLS dataset. This dataset covers all records in the ORLS from 1963-1999, not just those linked to maternity records in the smaller area from 1970-1989. We analysed season of birth in patients who were born in the UK (to avoid confounding with the place of birth of people born overseas, e.g. on the Indian subcontinent). We calculated the 'expected' number of births of IM patients in each month by applying the monthly distribution of all births in the general UK-born population in the ORLS to the number of people with IM. We did this with adjustment for year of birth, sex, and for differences in the number of days in different months. We compared the expected number of births in each month with the observed number, and expressed the result as a ratio of monthly observed to expected. We used a chi square test for heterogeneity to test for differences between individual months and between four seasons of winter (December, January, February), spring (March, April, May), summer (June, July, August) and autumn (September, October, November). To provide contextual information on the incidence of hospitalised IM in the region covered by the study, we analysed trends over time in population-based admission rates using the whole ORLS dataset. The English NHS Central Office for Research Ethics Committees approved the current work programme of analysis using the linked dataset (reference number 04/Q2006/176).
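As a rough illustration of the season-of-birth analysis described in this Methods text, the sketch below derives expected seasonal birth counts for IM cases from the seasonal distribution of births in the reference population, forms observed-to-expected ratios, and applies a chi-square test for heterogeneity across the four seasons. It is a simplified, hypothetical example: the counts shown in the usage comment are invented, and the study's additional adjustments for year of birth, sex and the number of days in each month are omitted.

```python
# Hypothetical sketch of the observed/expected season-of-birth comparison.
from scipy.stats import chisquare

def season_of_birth_test(observed_by_season, population_by_season):
    """observed_by_season: season -> number of IM cases born in that season.
    population_by_season: season -> all births in the reference population."""
    total_cases = sum(observed_by_season.values())
    total_births = sum(population_by_season.values())
    # Expected cases per season if IM births followed the population distribution.
    expected = {s: total_cases * population_by_season[s] / total_births
                for s in observed_by_season}
    ratios = {s: observed_by_season[s] / expected[s] for s in observed_by_season}
    seasons = sorted(observed_by_season)
    chi2, p_value = chisquare([observed_by_season[s] for s in seasons],
                              [expected[s] for s in seasons])
    return ratios, chi2, p_value

# Invented counts, for illustration only:
# season_of_birth_test(
#     {"winter": 230, "spring": 260, "summer": 255, "autumn": 250},
#     {"winter": 60500, "spring": 63000, "summer": 62000, "autumn": 61000})
```

A non-significant chi-square here, as reported in the abstract, would indicate that the seasonal distribution of births among IM cases does not differ detectably from that of the general population.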
[ "Infectious mononucleosis (IM) can follow a pathologically strong immune response to primary infection with Epstein-Barr Virus (EBV), mainly during adolescence or young adulthood[1]. The majority of individuals have a primary infection with EBV during infancy and childhood, and are asymptomatic or only experience a mild clinical course[2,3]. Some individuals who are not infected in childhood are subsequently infected in adolescence or adulthood, leading to more severe disease. For example, a recent study of university students in Edinburgh found that three-quarters were EBV seropositive at entry to university; that, of the quarter who were seronegative, almost half experienced EBV sero-conversion over the following three years; and that, of these, 25% developed IM[4]. Two-thirds of the IM cases, but only one tenth of the asymptomatic primary EBV infections, were statistically attributable to sexual intercourse[4]. At least in student populations, sexual intercourse and intimate kissing are important factors in the transmission of EBV infections that lead to IM.\nEpidemiological and laboratory data show that EBV infection is also associated with several other diseases. EBV was first identified in Burkitt's lymphoma cells[5]. EBV infection has been known for many years to be associated with nasopharyngeal carcinoma in some parts of the world, [6] and with Hodgkin's disease (HD),[7-9] associations that have been amply confirmed[3]. More recently, it has been demonstrated that IM is a risk factor for multiple sclerosis (MS)[10-13]. Factors that are relevant to the epidemiology of IM may also have some relevance to the epidemiology of Hodgkin's disease and MS, and vice versa.\nThere has been considerable interest in recent years in the influence of maternal and perinatal factors on the subsequent development of disease in later life[14]. Much of the interest has focused on subsequent chronic non-infectious diseases, such as hypertension, coronary heart disease and diabetes,[15-17] rather than acute infectious disease. Specifically, there is little or no information on whether perinatal factors might have any influence on the development of IM.\nThere are reasons to consider the possibility that perinatal and/or other early life factors might influence the risk of IM. First, there is the fact that many individuals are infected with EBV very early in life, while others are not and have an increased risk of IM later. Second, Purtilo and Sakamoto reported that reactivation of EBV commonly occurs in normal pregnant women and commented that \"the impact of pregnancy on outcomes of EBV infections has not been thoroughly evaluated\" in respect of either the mother or child[18]. There is still a paucity of research in this area. Third, migration patterns for MS, between high and low risk countries, show that the risk of MS is substantially determined by place of residence in early life rather than later[19,20]. Fourth, there are reasons to think that pregnancy-related or other early life factors may influence the development of MS in some people: in particular, there is increasingly strong evidence that the distribution of season of birth in people with MS differs from that in the general population[21,22]. There is an excess of spring births, albeit a numerically modest excess, among people with MS with the implication that pregnancy-associated factors may be relevant to the risk of MS. 
There is also some evidence of season-of-birth effects in HD, with a slight excess of spring births in young people with HD[23].

For these reasons, we decided to use the Oxford record linkage study (ORLS) to study perinatal factors in people who developed IM, as part of a wider programme of work studying the influence of perinatal factors on the subsequent development of disease in the offspring[24-26]. The ORLS dataset has already been used, in previous studies, to demonstrate that there is an increased risk of MS and of HD in individuals following admission to hospital with IM in the Oxford area[27,28].

Results

There were 225 people with a maternity record in the ORLS and with a subsequent admission for IM. Of these 225 people with hospitalised IM, 69 (31%) were aged 10 years or less at the time of admission for IM (39 males, 30 females) and 156 (69%) were aged 11 and over (74 males, 82 females). Although there were more male than female cases in the age group under ten, and more females than males in the age group aged 10-19, these differences did not reach statistical significance. Numbers of people admitted for IM were highest in the 15-19 age group, and a larger number of over-20s were admitted than children under 5 (Table 1). This age profile of patients contrasts with that of EBV infection generally, which frequently occurs in infancy.

[Table 1: Number of people admitted to hospital for infectious mononucleosis (IM), by age at admission and sex. Chi square, comparing males and females in each age group: 3.39, 4 df, p = 0.50.]

There was no significant association between IM in the child and maternal IM, smoking, parity, ABO blood group and rhesus status (Table 2). In most analyses, we grouped parity as either first-born or subsequent-born. However, we show parity in greater detail in Table 3 to demonstrate that there were no important differences, in detail, between those with and without IM. Overall, IM was more common in children of younger than of older mothers; the same was observed in those with IM aged 11 and over, though the differences were marginally significant (p = 0.04) and fairly small (Table 2). There was an increased risk of IM amongst lower social classes though, again, the differences were fairly small (Table 2). IM was significantly more common in children of mothers who were married than in children of mothers who were single (p < 0.001): 97% of children admitted with IM had a married mother, compared with 90% of children who were not admitted with IM (Table 2).

[Table 2: Associations between mothers' characteristics and IM in the child. * Significant difference, with exact p value given. † Chi square for trend. df = degrees of freedom.]

[Table 3: Distribution of mothers' parity at the time of birth of people without and with eventual IM.]

Children with IM were more likely to have been singletons than others, but this finding did not quite reach statistical significance (p = 0.2 overall; 0.06 among those aged 11 years and over; Table 4). We found no significant association between IM and birth weight, gestational age, breastfeeding, caesarean birth, presentation at delivery or Apgar scores at 1 and 5 minutes after delivery. Children with IM were significantly more likely to have had a forceps delivery than children without IM, both in the all-ages analysis (p = 0.008) and in that for children with IM aged 11 years and over (p = 0.02). There was a borderline significant association between pre-eclampsia and IM (p = 0.07) (Table 3).

[Table 4: Associations between characteristics of the births and IM in the child. * Significant difference, with exact p value given. df = degrees of freedom.]

An association with marital status persisted after multivariate adjustment: IM was less common in children of single mothers than in children of married mothers (odds ratio, single to married, 0.36; 95% confidence interval 0.16-0.80) after adjustment for maternal age, parity and social class. The association between marital status and IM seems to be independent of either parity or social class, and is illustrated in Table 5. As Table 5 shows, the percentages of children with IM are systematically higher for those whose mothers were married, regardless of parity (summarised as first-born or subsequent child) and regardless of social class (summarised as 1 and 2, the most favoured social classes, to 4 and 5, the most deprived). However, the numbers of cases of IM in the unmarried category are very small and, though the differences were systematic, they were not generally statistically significant within subgroups.

[Table 5: Percentage in each group with IM, by marital status, and numbers on which percentages are based (n/N). * Data only available for part of the study period. Chi-square values (with Yates' correction), comparing hospitalised IM in offspring of mothers who were married or not married: (1) 7.95, p = 0.005; (2) 2.88, p = 0.09; (3) 0.61, p = 0.44; (4) 2.05, p = 0.15; (5) 0.95, p = 0.33.]
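The subgroup comparisons in the footnote to Table 5 are 2 × 2 chi-squared tests with Yates' continuity correction. A minimal sketch of one such comparison is shown below; the counts are invented for illustration and are not the study data.

```python
# Married vs unmarried mothers by IM in the offspring, with Yates' correction.
# The counts below are invented for illustration; they are NOT the study data.
from scipy.stats import chi2_contingency

#              IM     no IM
table = [[  210, 230_000],   # mother married (hypothetical counts)
         [    8,  18_000]]   # mother not married (hypothetical counts)

chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi-square (Yates corrected) = {chi2:.2f}, df = {dof}, p = {p:.3f}")
```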
Multivariate analysis showed that pre-eclampsia and use of forceps during delivery were not independently associated with an increased risk of IM, after controlling for year of birth and social class.

In the analyses of season of birth, there was no significant association in the dataset of the 225 patients on whom we had a maternity record. In the full ORLS dataset (see Methods), there were 1695 people with a record of hospitalised IM. The ratio of observed to expected cases of IM in each season was 0.95 in winter, 1.02 in spring, 1.02 in summer and 1.00 in autumn. The chi-squared test for seasonality, with a value of 0.8, was not significant.

The analyses of trends show that there were no major changes over time: for example, admission rates in the ORLS area were 4.4 per 100,000 population in 1975, 4.4 in 1985 and 4.7 in 1995.
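The seasonal ratios above come from the observed-versus-expected comparison described in the Methods. The sketch below shows one way such a calculation could be laid out; it omits the adjustment for year of birth, sex and month length used in the actual analysis, and all names and inputs are illustrative.

```python
# Illustrative season-of-birth analysis: expected monthly births for IM cases are
# derived from the monthly birth distribution of the comparison population.
import numpy as np
from scipy.stats import chisquare

def seasonality(im_births_by_month: np.ndarray, all_births_by_month: np.ndarray):
    """Both inputs are length-12 arrays of counts, January to December."""
    n_cases = im_births_by_month.sum()
    # Expected IM births per month, proportional to the general-population distribution.
    expected = n_cases * all_births_by_month / all_births_by_month.sum()
    monthly_ratio = im_births_by_month / expected

    # Chi-squared test for heterogeneity across the 12 months (11 df).
    chi2, p = chisquare(im_births_by_month, f_exp=expected)

    # Collapse months into the four seasons used above (winter = Dec, Jan, Feb, etc.).
    seasons = {"winter": [11, 0, 1], "spring": [2, 3, 4],
               "summer": [5, 6, 7], "autumn": [8, 9, 10]}
    season_ratio = {name: im_births_by_month[idx].sum() / expected[idx].sum()
                    for name, idx in seasons.items()}
    return monthly_ratio, season_ratio, chi2, p
```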
Discussion

Strengths and limitations

Strengths of this study are that data collection was prospective, undertaken in a large and well-defined population, over a period of 30 years, including around 250,000 births, and recall biases are impossible. Data concerning perinatal risk factors, and subsequent IM, were collected independently. They were brought together by record linkage, and therefore data about risk factors could not have been influenced by knowledge of the study outcome (IM) or by the kinds of interviewer, recall or attribution bias that can handicap case-control studies based on interviewing patients.

Despite the large study population, the total number of cases of IM identified was a modest 225. This limits the power of the study. To our knowledge, there are no other reports in the published literature concerning perinatal factors and subsequent IM. Cases of IM not requiring hospitalisation will have been missed by this study. IM is diagnosed primarily based upon a clinical picture of symptoms, peripheral blood smear, and heterophile (Monospot) antibody test. It seems likely that hospitalised cases are more likely than those that do not warrant admission to have had confirmatory tests done to establish the diagnosis with certainty. However, we do not have data on the diagnostic criteria used in, or the clinical features of, the study population. We had to accept a coded diagnosis on the hospital discharge abstract. Current privacy regulations preclude checking the actual medical records of the patients for further detail.

There are some gaps in the data collection: smoking behaviour and social class were not routinely collected for a few years of the study. We could not identify records of children who were diagnosed with IM after moving away from the ORLS region, lowering our observed incidence of IM. It is certain that our observed IM incidence is lower than the true incidence of IM. However, the influence of perinatal risk factors, when comparing children with and without IM, should not be biased unless migration itself is associated with both the risk of subsequent IM and putative perinatal risk factors. We found very few significant associations. It is theoretically possible, though we think unlikely, that associations have been missed as a result of unmeasured confounding, i.e. that a true association has been masked by confounding factors that act in equal and opposite directions to a true cause-and-effect association.

Although cases of IM requiring hospital admission are infrequent, they are likely to represent people at the severe end of the clinical spectrum. If perinatal and maternal factors affect the risk of IM, they are more likely to affect those with severe disease. Those with symptoms severe enough to warrant hospital admission may also have the strongest reactions to primary EBV infection, which, in turn, may represent individuals who are more susceptible to diseases where EBV is thought to play an aetiological role, notably HD and MS[29]. We hope that others will be stimulated to publish results from similar databases and, if future individual studies are limited in size, we hope that our data and others could be pooled to produce meta-analyses.
Delayed EBV infection, IM, HD and MS

In the majority of individuals, primary EBV infection occurs during early childhood and is often asymptomatic, but delayed EBV infection may result in IM in adolescents and adults. The symptoms of IM, most notably fever, sore throat, swollen glands and fatigue, are thought to be the clinical manifestation of an exaggerated T cell response to EBV infection and the release of inflammatory cytokines[30]. It has been suggested that the size of the initial viral dose of EBV may be a contributing factor in the development of IM and that adolescents may be more likely to encounter a larger viral dose through deep kissing during penetrative sexual intercourse[4]. A relationship between the level of the T cell response and the severity of IM has also been noted[31]. The difference in severity of symptoms between those infected with EBV at a young age and those infected during adolescence and early adulthood may be the difference in magnitude of the viral dose, with a smaller dose acquired by salivary contact in children than that acquired through sexual contact in adolescents and young adults[4,32]. In addition, recent genetic markers in the HLA class I locus have also been implicated in the immune response to EBV infection in both IM [33] and HD,[7] suggesting that genetic factors may also play a role. Immunopathological mechanisms involved in IM, contrasted with those in asymptomatic primary EBV infection, have been reported[32,34].
As our findings may only be representative of cases severe enough to require hospital admission, further studies in people with IM who have not been hospitalised may be beneficial.

EBV-positive Hodgkin's disease has been found to be more common in people with a previous diagnosis of IM,[29] and an almost 100% prevalence of EBV seroconversion has been found in MS patients, as compared to a 90% seroconversion rate in the general population[34]. There is growing evidence of associations between IM and both HD [7-9] and MS[10-13]. The 'hygiene hypothesis' has been put forward as a possible explanation for a causal pathway between EBV and HD and MS. It proposes that a lack of early life infections or exposure to viral pathogens in childhood may prevent the normal processes of immune maturation, leading to increases in rates of both allergic and immune-mediated conditions, such as MS[35]. Perinatal and early life factors that may affect late exposure to infection may play a role in the relationship between these conditions.
Principal findings

The lack of association between increasing maternal age and hospitalised IM found in the current study is important, given the trend in Western countries towards postponement of childbearing. It is now common for women to give birth well into their late 30s or early 40s, and it is reassuring that older motherhood does not seem to carry an increased risk for IM. However, although maternal age has increased over recent years, in the years covered by the study (1970-1989), most mothers were under 35 (94% in our data).

There was no association between season of birth and hospitalised IM.

Our data suggested that pre-eclampsia, and forceps delivery, had a borderline significant association with subsequent IM. There was no residual association after controlling for other factors, suggesting that confounding was responsible for these apparent associations.

Children born as one of a pair of twins had a borderline significant lower risk of developing IM (p = 0.06) than that of singletons. If this is not a chance finding, it would support studies proposing that increased sibship sizes can protect against IM (and its long term sequelae, including MS), by exposing children to viral infections early in life[36]. The reasoning, part of the hygiene hypothesis, is that children born as one of twins are more likely to be exposed to EBV infection early in life, through physical and salivary contact with their sibling, thus reducing their risk of delayed EBV infection, and therefore IM, later in life.

There is no information in the literature about marital status and IM, or delayed childhood EBV infection. Our results show that children born to single mothers had a significantly lower risk of hospitalised IM than those born to married mothers. We have no explanation for this, although one possibility is that (for a given level of severity of illness) single mothers may have had greater difficulty than married mothers in accessing hospital care. Though possible, we think that this is unlikely in that, with free access to National Health Service care, children deemed to be in need of hospital care are likely to have received it. It is possible that, though the finding was highly statistically significant, it may nonetheless have arisen from the play of chance. It is worth noting that, in the era of the pregnancies covered by this study, single motherhood was much less common in England than it is now. Previous studies have found clustering of infectious diseases within households in which an older child is present[36]. Although parity is an incomplete measure of contact with older children within the household, it was the only measure available to us. It did not come close to significance in this study. It is unlikely that the association with single mothers is confounded by parity: it persisted after adjustment for parity and, in any case, there was no association between parity and risk of IM (Tables 2, 3).

We found a modest association between IM and lower social class. It is generally held that, if anything, IM is a little more common in higher social classes[36]. However, the patients in our study are those admitted to hospital and it is possible, even likely, that typical clinical thresholds for admission of patients with IM can be influenced by patients' socio-economic circumstances.
Thus, for a given level of clinical severity, it is possible that children in less favoured socio-economic circumstances may be more likely than others to be admitted to hospital. The literature is conflicting over the relationship between social class and possible sequelae of late infection with EBV and HD. Several studies have reported minimal or no effect of social class on MS [37,38] or HD[39,40]. It has also been reported that EBV-infection-associated HD is in fact more common in lower social classes,[41] although this association only reached statistical significance in females[41].
Conclusion

In summary, the association with single motherhood deserves further study, as does the possibility that reduced contact between young children may increase the risk of IM and possibly, for a few, eventually the risk of MS or HD. Other perinatal factors studied by us, including season of birth, were not associated with an increased risk of hospitalised IM. Of some importance, late age at motherhood was not a risk factor.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

MJG designed the study, with input from IM and OA-M. CJW undertook the analyses. All authors contributed to the interpretation and discussion of findings. IM wrote the first draft and all authors contributed to the final manuscript.

Pre-publication history

The pre-publication history for this paper can be accessed here:

http://www.biomedcentral.com/1471-2334/11/51/prepub
However, although maternal age has increased over recent years, in the years covered by the study (1970-1989), most mothers were under 35 (94% in our data).\nThere was no association between season of birth and hospitalised IM.\nOur data suggested that pre-eclampsia, and forceps delivery, had a borderline significant association with subsequent IM. There was no residual association after controlling for other factors, suggesting that confounding was responsible for these apparent associations.\nChildren born as one of a pair of twins had a borderline significant lower risk of developing IM (p = 0.06) than that of singletons. If this is not a chance finding, it would support studies proposing that increased sibship sizes can protect against IM (and its long term sequelae, including MS), by exposing children to viral infections early in life[36]. The reasoning, part of the hygiene hypothesis, is that children born as one of twins are more likely to be exposed to EBV infection early in life, through physical and salivary contact with their sibling, thus reducing their risk of delayed EBV infection, and therefore IM, later in life.\nThere is no information in the literature about marital status and IM, or delayed childhood EBV infection. Our results show that children born to single mothers had a significantly lower risk of hospitalised IM than those born to married mothers. We have no explanation for this, although one possibility is that (for a given level of severity of illness) single mothers may have had greater difficulty than married mothers in accessing hospital care. Though possible, we think that this is unlikely in that, with free access to National Health Service care, children deemed to be in need of hospital care are likely to have received it. It is possible that, though the finding was highly statistically significant, it may nonetheless have arisen from the play of chance. It is worth noting that, in the era of the pregnancies covered by this study, single motherhood was much less common in England than it is now. Previous studies have found clustering of infectious diseases within households in which an older child is present[36]. Although parity is an incomplete measure of contact with older children within the household, it was the only measure available to us. It did not come close to significance in this study. It is unlikely that the association with single mothers is confounded by parity: it persisted after adjustment for parity and, in any case, there was no association between parity and risk of IM (Tables 2, 3).\nWe found a modest association between IM and lower social class. It is generally held that, if anything, IM is a little more common in higher social classes[36]. However, the patients in our study are those admitted to hospital and it is possible, even likely, that typical clinical thresholds for admission of patients with IM can be influenced by patients' socio-economic circumstances. Thus, for a given level of clinical severity, it is possible that children in less favoured socio-economic circumstances may be more likely than others to be admitted to hospital. The literature is conflicting over the relationship between social class and possible sequelae of late infection with EBV and HD. Several studies have reported minimal or no effect of social class on MS [37,38] or HD[39,40]. 
It has also been reported that EBV-infection-associated HD is in fact more common in lower social classes,[41] although this association only reached statistical significance in females[41].\nThe lack of association between increasing maternal age and hospitalised IM found in the current study is important, given the trend in Western countries towards postponement of childbearing. It is now common for women to give birth well into their late 30s or early 40s, and it is reassuring that older motherhood does not seem to carry an increased risk for IM. However, although maternal age has increased over recent years, in the years covered by the study (1970-1989), most mothers were under 35 (94% in our data).\nThere was no association between season of birth and hospitalised IM.\nOur data suggested that pre-eclampsia, and forceps delivery, had a borderline significant association with subsequent IM. There was no residual association after controlling for other factors, suggesting that confounding was responsible for these apparent associations.\nChildren born as one of a pair of twins had a borderline significant lower risk of developing IM (p = 0.06) than that of singletons. If this is not a chance finding, it would support studies proposing that increased sibship sizes can protect against IM (and its long term sequelae, including MS), by exposing children to viral infections early in life[36]. The reasoning, part of the hygiene hypothesis, is that children born as one of twins are more likely to be exposed to EBV infection early in life, through physical and salivary contact with their sibling, thus reducing their risk of delayed EBV infection, and therefore IM, later in life.\nThere is no information in the literature about marital status and IM, or delayed childhood EBV infection. Our results show that children born to single mothers had a significantly lower risk of hospitalised IM than those born to married mothers. We have no explanation for this, although one possibility is that (for a given level of severity of illness) single mothers may have had greater difficulty than married mothers in accessing hospital care. Though possible, we think that this is unlikely in that, with free access to National Health Service care, children deemed to be in need of hospital care are likely to have received it. It is possible that, though the finding was highly statistically significant, it may nonetheless have arisen from the play of chance. It is worth noting that, in the era of the pregnancies covered by this study, single motherhood was much less common in England than it is now. Previous studies have found clustering of infectious diseases within households in which an older child is present[36]. Although parity is an incomplete measure of contact with older children within the household, it was the only measure available to us. It did not come close to significance in this study. It is unlikely that the association with single mothers is confounded by parity: it persisted after adjustment for parity and, in any case, there was no association between parity and risk of IM (Tables 2, 3).\nWe found a modest association between IM and lower social class. It is generally held that, if anything, IM is a little more common in higher social classes[36]. However, the patients in our study are those admitted to hospital and it is possible, even likely, that typical clinical thresholds for admission of patients with IM can be influenced by patients' socio-economic circumstances. 
Thus, for a given level of clinical severity, it is possible that children in less favoured socio-economic circumstances may be more likely than others to be admitted to hospital. The literature is conflicting over the relationship between social class and possible sequelae of late infection with EBV and HD. Several studies have reported minimal or no effect of social class on MS [37,38] or HD[39,40]. It has also been reported that EBV-infection-associated HD is in fact more common in lower social classes,[41] although this association only reached statistical significance in females[41].", "Strengths of this study are that data collection was prospective, undertaken in a large and well-defined population, over a period of 30 years, including around 250,000 births, and recall biases are impossible. Data concerning perinatal risk factors, and subsequent IM, were collected independently. They were brought together by record-linkage, and therefore data about risk factors could not have been influenced by knowledge of the study outcome (IM) or by the kinds of interviewer, recall or attribution bias that can handicap case-control studies based on interviewing patients.\nDespite the large study population, the total number of cases of IM identified was a modest 225. This limits the power of the study. To our knowledge, there are no other reports in the published literature concerning perinatal factors and subsequent IM. Cases of IM not requiring hospitalisation will have been missed by this study. IM is diagnosed primarily based upon a clinical picture of symptoms, peripheral blood smear, and heterophile (Monospot) antibody test. It seems likely that hospitalised cases are more likely than those that do not warrant admission to have had confirmatory tests done to establish the diagnosis with certainty. However, we do not have data on the diagnostic criteria used in, or the clinical features of, the study population. We had to accept a coded diagnosis on the hospital discharge abstract. Current privacy regulations preclude checking the actual medical records of the patients for further detail.\nThere are some gaps in the data collection: smoking behaviour and social class were not routinely collected for a few years of the study. We could not identify records of children who were diagnosed with IM after moving away from the ORLS region, lowering our observed incidence of IM. It is certain that our observed IM incidence is lower than the true incidence of IM. However, the influence of perinatal risk factors, when comparing children with and without IM, should not be biased unless migration itself is associated with both the risk of subsequent IM and putative perinatal risk factors. We found very few significant associations. It is theoretically possible, though we think unlikely, that associations have been missed as a result of unmeasured confounding, i.e. that a true association has been masked by confounding factors that act in equal and opposite directions to a true cause-and-effect association.\nAlthough cases of IM requiring hospital admission are infrequent, they are likely to represent people at the severe end of the clinical spectrum. If perinatal and maternal factors affect the risk of IM, they are more likely to affect those with severe disease. 
Those with symptoms severe enough to warrant hospital admission may also have the strongest reactions to primary EBV infection, which in turn, may represent individuals who are more susceptible to diseases where EBV is thought to play an aetiological role, notably HD and MS[29]. We hope that others will be stimulated to publish results from similar databases and, if future individual studies are limited in size, we hope that our data and others could be pooled to produce meta-analyses.", "In the majority of individuals, primary EBV infection occurs during early childhood and is often asymptomatic, but delayed EBV infection may result in IM in adolescents and adults. The symptoms of IM, most notably fever, sore throat, swollen glands and fatigue, are thought to be the clinical manifestation of an exaggerated T cell response to EBV infection and the release of inflammatory cytokines[30]. It has been suggested that the size of the initial viral dose of EBV may be a contributing factor in the development of IM and that adolescents may be more likely to encounter a larger viral dose through deep kissing during penetrative sexual intercourse[4]. A relationship between the level of the T cell response and the severity of IM has also been noted[31]. The difference in severity of symptoms between those infected with EBV at a young age and those infected during adolescence and early adulthood may be the difference in magnitude of the viral dose, with a smaller dose acquired by salivary contact in children than that acquired through sexual contact in adolescents and young adults[4,32]. In addition, recent genetic markers in the HLA class I locus have also been implicated in the immune-response to EBV infection in both IM [33] and HD,[7] suggesting that genetic factors may also play a role. Immunopathological mechanisms involved in IM, contrasted with those in asymptomatic primary EBV infection, have been reported[32,34]. As our findings may only be representative of cases severe enough to require hospital admission, further studies in people with IM who have not been hospitalised may be beneficial.\nEBV-positive Hodgkin's disease has been found to be more common in people with a previous diagnosis of IM,[29] and an almost 100% prevalence of EBV seroconversion has been found in MS patients, as compared to a 90% seroconversion rate in the general population[34]. There is growing evidence of associations between IM and both HD [7-9] and MS[10-13]. The 'hygiene hypothesis' has been put forward as a possible explanation for a causal pathway between EBV and HD and MS. It proposes that a lack of early life infections or exposure to viral pathogens in childhood may prevent the normal processes of immune maturation, leading to increases in rates of both allergic and immune-mediated conditions, such as MS[35]. Perinatal and early life factors that may affect late exposure to infection may play a role in the relationship between these conditions.", "The lack of association between increasing maternal age and hospitalised IM found in the current study is important, given the trend in Western countries towards postponement of childbearing. It is now common for women to give birth well into their late 30s or early 40s, and it is reassuring that older motherhood does not seem to carry an increased risk for IM. 
However, although maternal age has increased over recent years, in the years covered by the study (1970-1989), most mothers were under 35 (94% in our data).\nThere was no association between season of birth and hospitalised IM.\nOur data suggested that pre-eclampsia, and forceps delivery, had a borderline significant association with subsequent IM. There was no residual association after controlling for other factors, suggesting that confounding was responsible for these apparent associations.\nChildren born as one of a pair of twins had a borderline significant lower risk of developing IM (p = 0.06) than that of singletons. If this is not a chance finding, it would support studies proposing that increased sibship sizes can protect against IM (and its long term sequelae, including MS), by exposing children to viral infections early in life[36]. The reasoning, part of the hygiene hypothesis, is that children born as one of twins are more likely to be exposed to EBV infection early in life, through physical and salivary contact with their sibling, thus reducing their risk of delayed EBV infection, and therefore IM, later in life.\nThere is no information in the literature about marital status and IM, or delayed childhood EBV infection. Our results show that children born to single mothers had a significantly lower risk of hospitalised IM than those born to married mothers. We have no explanation for this, although one possibility is that (for a given level of severity of illness) single mothers may have had greater difficulty than married mothers in accessing hospital care. Though possible, we think that this is unlikely in that, with free access to National Health Service care, children deemed to be in need of hospital care are likely to have received it. It is possible that, though the finding was highly statistically significant, it may nonetheless have arisen from the play of chance. It is worth noting that, in the era of the pregnancies covered by this study, single motherhood was much less common in England than it is now. Previous studies have found clustering of infectious diseases within households in which an older child is present[36]. Although parity is an incomplete measure of contact with older children within the household, it was the only measure available to us. It did not come close to significance in this study. It is unlikely that the association with single mothers is confounded by parity: it persisted after adjustment for parity and, in any case, there was no association between parity and risk of IM (Tables 2, 3).\nWe found a modest association between IM and lower social class. It is generally held that, if anything, IM is a little more common in higher social classes[36]. However, the patients in our study are those admitted to hospital and it is possible, even likely, that typical clinical thresholds for admission of patients with IM can be influenced by patients' socio-economic circumstances. Thus, for a given level of clinical severity, it is possible that children in less favoured socio-economic circumstances may be more likely than others to be admitted to hospital. The literature is conflicting over the relationship between social class and possible sequelae of late infection with EBV and HD. Several studies have reported minimal or no effect of social class on MS [37,38] or HD[39,40]. 
It has also been reported that EBV-infection-associated HD is in fact more common in lower social classes,[41] although this association only reached statistical significance in females[41].", "In summary, the association with single motherhood deserves further study, as does the possibility that reduced contact between young children may increase the risk of IM and possibly, for a few, eventually the risk of MS or HD. Other perinatal factors studied by us, including season of birth, were not associated with an increased risk of hospitalised IM. Of some importance, late age at motherhood was not a risk factor.", "The authors declare that they have no competing interests.", "MJG designed the study, with input from IM and OA-M. CJW undertook the analyses. All authors contributed to the interpretation and discussion of findings. IM wrote the first draft and all authors contributed to the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/51/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
The role of the user within the medical device design and development process: medical device manufacturers' perspectives.
21356097
Academic literature and international standards bodies suggest that user involvement, via the incorporation of human factors engineering methods within the medical device design and development (MDDD) process, offers many benefits that enable the development of safer and more usable medical devices that are better suited to users' needs. However, little research has been carried out to explore medical device manufacturers' beliefs and attitudes towards user involvement within this process, or indeed what value they believe can be added by doing so.
BACKGROUND
In-depth interviews with representatives from 11 medical device manufacturers are carried out. We ask them to specify who they believe the intended users of the device to be, who they consult to inform the MDDD process, what role they believe the user plays within this process, and what value (if any) they believe users add. Thematic analysis is used to analyse the fully transcribed interview data, to gain insight into medical device manufacturers' beliefs and attitudes towards user involvement within the MDDD process.
METHODS
A number of high-level themes emerged, relating to who the user is perceived to be, the methods used, the perceived value and barriers to user involvement, and the nature of user contributions. The findings reveal that despite standards agencies and academic literature offering strong support for the employment of formal methods, manufacturers are still hesitant due to a range of factors including: perceived barriers to obtaining ethical approval; the speed at which such activity may be carried out; the belief that there is no need given the 'all-knowing' nature of senior health care staff and clinical champions; and a belief that effective results are achievable by consulting a minimal number of champions. Furthermore, less senior health care practitioners and patients were rarely seen as being able to provide valuable input into the process.
RESULTS
Medical device manufacturers often do not see the benefit of employing formal human factors engineering methods within the MDDD process. Research is required to better understand the day-to-day requirements of manufacturers within this sector. The development of new or adapted methods may be required if user involvement is to be fully realised.
CONCLUSIONS
[ "Community Participation", "Equipment Design", "Humans", "Safety", "Technology Assessment, Biomedical" ]
3058010
null
null
Methods
Eleven interviews were carried out with senior members of staff in each company. Recruitment was by convenience sampling: participants were identified as a result of their employment by companies that were industrial partners of the MATCH collaboration. To provide case examples for discussion, company representatives were asked to choose one medical device to discuss during the interview, which provided an example of the MDDD process they engaged in. The majority of the devices chosen for discussion were intended for use by surgeons within a clinical setting. Interviews lasted approximately one hour and were carried out by members of the MATCH research team. One of the topics of discussion was device users. Interviewees were asked about the role the user is perceived to play within the design and development of the example medical device, what they saw as the barriers and benefits to involving users within the MDDD process, and what human factors engineering methods they used (if any) within the MDDD process. All interviews were recorded and researchers also took notes during the interviews. Before beginning, the interviewer explained the purpose and format of the interview to the participant, and informed consent to participate, and for the audio recording of the interview, was obtained. Table 1 summarises the position held within the company for each interviewee, the treatment area or clinical use for the device, the intended users of the devices, and the actual individuals consulted in the MDDD process by the respective manufacturers. Company details and intended device users A thematic analysis of the textual dataset was then carried out. Detailed descriptions of what the thematic analysis process involves are available in [29-32]. In brief, thematic analysis facilitates the effective and rigorous abstraction of salient themes and sub-themes from a complex and detailed textual dataset [33], and hence is particularly suitable in this context. The following steps were taken to analyse the data collected during the interview process; Figure 2 provides an overview. Process of thematic analysis. Initially, all recordings of the interview sessions were transcribed into text format. After transcription, the textual dataset was initially perused to conceptualise the overarching themes that existed within the transcripts at a high level. These were noted in a coding frame, with each concept assigned a code name, a description, and examples of text that fitted the concept. The dataset was then examined iteratively, enabling themes and sub-themes to be developed further. These were spliced and linked together, and text relating to each category and sub-category was appropriately labelled. The first and second authors coded the data and discussed inconsistencies where these arose until a clear consensus on the main themes was reached. When no further refinement of the categorisation could be derived, a final group of categories and sub-categories, representative of the transcripts, was produced. The main themes are those drawn from multiple contributions and that represent issues clearly central to the participants themselves. Within these themes, we have explored areas of consensus and diversity as they were presented by interviewees.
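To make the coding step above concrete, the sketch below shows one possible way of representing a coding frame and coded transcript segments. It is an illustration only, assuming hypothetical code names, interview identifiers and quotes; none of these are taken from the study data or the authors' analysis software.

```python
# Illustrative sketch only: one way to represent the coding frame and coded
# transcript segments described above. All code names, interview identifiers
# and quotes are hypothetical.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Code:
    name: str                                      # short code name used in the coding frame
    description: str                               # what the concept covers
    examples: list = field(default_factory=list)   # example text fitting the concept

# A minimal coding frame with two hypothetical high-level themes
coding_frame = {
    "barriers_to_involvement": Code("barriers_to_involvement",
                                    "Perceived obstacles to consulting users"),
    "who_is_the_user": Code("who_is_the_user",
                            "Who the manufacturer regards as the device user"),
}

# Coded segments: (interview id, transcript excerpt, assigned code names)
segments = [
    ("company_03", "Getting ethical approval just takes too long.", ["barriers_to_involvement"]),
    ("company_07", "For us the user is really the surgeon.", ["who_is_the_user"]),
]

# Tally how often each code is applied and collect example text, mimicking the
# iterative build-up of the coding frame.
counts = defaultdict(int)
for interview_id, excerpt, codes in segments:
    for code_name in codes:
        counts[code_name] += 1
        coding_frame[code_name].examples.append((interview_id, excerpt))

for name, code in coding_frame.items():
    print(f"{name}: {counts[name]} segment(s); e.g. {code.examples[:1]}")
```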
null
null
null
null
[ "Background", "Human Factors Engineering Methods", "The challenge for industry", "Section summary", "Results", "Who is the User?", "Methods used", "Perceived value and barriers to user involvement", "The nature of user contributions", "Discussion and Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "From a human factors engineering perspective, ensuring the development of high quality and well designed medical devices that are in tune with patient and user needs, require formal human factors engineering methods (also known as user-centred usability engineering methods) to be used at every stage of the MDDD process [1]. Employing such formal methods ensures that the device design process considers appropriately the environment in which the device is to be used, the work patterns of users and the specific individual needs of the user, which could be any individual involved in the use of the device including health professionals, patients, and lay care givers [2]. In particular, human factors engineering methods highlight the importance of considering the needs of the user when designing and developing devices at the earliest stage of defining the device concept and then at every subsequent stage of the device development process [3]. The importance and value of focusing on user needs has been recognised as having a number of health related benefits including; improved patient safety [4,5], improved compliance and health outcomes [6], and higher levels of patient and user satisfaction [7]. Furthermore, employing human factors engineering methods throughout the MDDD process has been said to substantially reduce device development time because usability issues are identified and attended to prior launch, and hence avoid costly design changes and product recalls [8,9].\nGiven the proposed benefits of developing medical devices according to a human factors engineering perspective, medical device standards bodies are increasingly recognising the importance of these methods and the important role they play in developing safe and usable medical devices. Currently the two most important regulations for medical device developers are the European Commission (EC) Medical Device Directive 93/42/EEC [10] and the United States (US) Food and Drug Administration (FDA) regulations since medical devices must comply with these regulations in order to be sold throughout Europe and the US [11]. These regulations promote the use of human factors engineering methods within the MDDD process, for example, the US FDA regulations specify that medical device developers must demonstrate that human factors principles have been used in the design of the device so as to ensure that use-related hazards have been identified, understood and addressed. Some guidance is also given suggesting how human factors principles may be integrated into the MDDD process [12]. A further standard that medical device developers have been obliged to adhere in recent years is the International Electrotechnical Commission (IEC) standard 60601-1-6 [13], which have equivalent European Standards (ES) and British Standards (BS), requiring medical device developers to incorporate human factors engineering processes to ensure patient safety, stating that \"The manufacturer should conduct iterative design and development. Usability engineering should begin early and continue through the equipment design and development lifecycle\". More recently, the usability standard 'IEC 62366: Medical devices - Application of usability engineering to medical devices' [14] has superseded IEC 60601-1-6, and extends the requirement for medical device developers to incorporate human factors engineering methods in the development of all medical devices, not just electrical devices. 
In early 2010 IEC 62366 was harmonised by the EU Medical Device Directive, meaning that it is now a legal requirement for medical device developers to formally address the usability of a device before placing it on the market anywhere in Europe. In order to comply with this standard, medical device developers must document the process in detail within a Usability Engineering File.\n[SUBTITLE] Human Factors Engineering Methods [SUBSECTION] The MDDD process may be considered to be made up of four key stages. At stage one, user needs are established and scoping exercises are carried out with users. Stage two aims to validate and refine the concept. Stage three involves designing the device, and Stage four involves evaluation of the device.\nFigure 1 presents the medical device development lifecycle, and the associated user-centred design methods that may be used at each respective stage.\nMedical device development lifecycle adapted from [5].\nIn some of our earlier work, as part of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) research activity, we carried out a rigorous review of the methods that have been employed in the MDDD process as presented in the academic literature [5,11,15]. MATCH is a UK research initiative between five universities, funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Department of Trade and Industry (DTI). The aim of MATCH is to provide support to the healthcare sector by developing methods to assess the value of medical devices at every stage of the MDDD process, with a focus on working with industrial partners to solve real world problems.\nTo provide some insight into what a selection of these methods involve, a brief description of the most frequently occurring methods within the device development lifecycle is now provided. A detailed description of all methods, and of how these may be applied within the MDDD process, is provided in [5].\nFocus groups involve group discussion, typically between 8-10 users, with a moderator guiding and facilitating the group through relevant topics of discussion. They may be appropriate for use at stage one, two or four of the device development lifecycle. Focus groups are used in a wide variety of industry settings and may be conducted at comparatively low cost compared with other methods such as usability testing.\nInterviews are one of the most common approaches employed in user-centred research, and may be of value at stages one, two and four of the development pathway. They can be rapidly deployed, and are relatively inexpensive to carry out. Interviews enable the researcher to access a broad range of opinion regarding a device, whilst also allowing rich and detailed opinions to be collected.\nUsability testing, typically performed at stage four, assesses usability as a function of three criteria: effectiveness, efficiency and satisfaction. Usability testing protocols involve users carrying out specific tasks with a specific device, whilst usability measures are taken. Effectiveness is concerned with whether the user is able to successfully accomplish a given task. Efficiency may include a count of the number of user actions or the amount of time required to carry out a task. Satisfaction may typically be measured subjectively by means of a satisfaction questionnaire, completed after using a particular device.\nHeuristic evaluation is a more rapid form of usability test which may be deployed by developers, perhaps prior to carrying out usability tests. 
Developers step through the device features and functionality and check the extent to which it complies with pre-determined list of characteristics (heuristics/rules of thumb) that may have been defined by earlier UCD design activities. This type of evaluation would normally be applied at stage four of the development pathway, once a tangible device has been developed.\nThe MDDD process may be considered to be made up of four key stages. At stage one, user needs are established and scoping exercises are carried out with users. Stage two aims to validate and refine the concept. Stage three involves designing the device, and Stage four involved evaluation of the device.\nFigure 1 presents the medical device development lifecycle, and the associated user-centred design methods that may be used at each respective stage.\nMedical device development lifecycle adapted from [5].\nIn some of our earlier work, as part of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) research activity, we carried out a rigorous review of the methods that have been employed in the MDDD process as presented in the academic literature [5,11,15]. MATCH is a UK research initiative between five universities which is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Department of Trade and Industry (DTI). The aim of MATCH is to provide support to the healthcare sector by developing methods to assess the value of medical devices at every stage of the MDDD process, with a focus on working with industrial partners to solve real world problems.\nTo provide some insight into what a selection of these methods involve, a brief description of the most frequently occurring methods within the device development lifecycle are now provided. For a detailed description of all methods and how these may be applied within the MDDD is provided in [5].\nFocus groups involve group discussion typically between 8-10 users, with a moderator guiding and facilitating the group through relevant topics of discussion. They may be appropriate for use at stage one, two or four of the device development lifecycle. Focus groups are used in a wide variety of industry settings and may be conducted at a comparatively low cost compared with other methods such as usability testing.\nInterviews are one of the most common approaches employed in user centred research, and may be of value at stages one, two and four of the development pathway. They can be rapidly deployed, and are relatively inexpensive to carry out. Interviews enable the researcher to access a broad range of opinion regarding a device, whilst also allowing rich and detailed opinions to be collected.\nUsability testing, typically performed at stage four, advocates is a function of three criteria: effectiveness, efficiency and satisfaction. Usability testing protocols involve users carrying out specific tasks with a specific device, whilst usability measures are taken. Effectiveness is concerned with whether the user is able to successfully accomplish a given task. Efficiency may include a count of the number of user actions or the amount of time required to carry out a task. Satisfaction may typically be measured subjectively by means of a satisfaction questionnaire, completed after using a particular device.\nHeuristic evaluation is a more rapid form of usability test which may be deployed by developers, perhaps prior to carrying out usability tests. 
Developers step through the device features and functionality and check the extent to which it complies with pre-determined list of characteristics (heuristics/rules of thumb) that may have been defined by earlier UCD design activities. This type of evaluation would normally be applied at stage four of the development pathway, once a tangible device has been developed.\n[SUBTITLE] The challenge for industry [SUBSECTION] Although a large number of human factors engineering methods are available that may be employed within the MDDD process, previous research indicates that medical device manufacturers often avoid employing such methods due to lack of resources and the perception that such methods are often too resource intensive [16]. The literature in the area of user involvement suggests that there are a number of risks for manufacturers associate with a lack of engagement with users within the product development process. For example, Cooper [17] suggests that failure to build in the voice of the customer is one of the key reasons for the failure of developing effective and innovative products. Certainly in the realm of ICT products Kujala [18] suggests that attending to user perspectives increase the likelihood of a successful product. Alongside this however, other work has highlighted some of the challenges of, and barriers to, involving users in the product development process. Brown [19] and Butler [20] both suggest that the volume of data generated by field studies with users may be costly, difficult to analyse with no obvious route to informing development. When considering the factors leading to successful innovation, van der Panne et al. [21] note that although it is incontrovertible that good market research is associated with a successful product, the role of customer involvement in innovation remains contentious. They suggest that customer involvement at an early stage may tend to gravitate towards imitative products, being less able to envision or express novelty and thus, \"bias innovation efforts towards incremental innovation\" [[21] p.326]. User preferences may change over time and engaging with a limited range of users may result in over specification of the product [22]. From the developer's perspective, their criteria for the success of user involvement may be different than those (often academics or researchers) who actually do user engagement work [23,24]. Furthermore, user information that is based on formal methods of elicitation may be at variance with the representations of the user held by developers themselves [22] and may thus not readily be appropriated within the organisational culture and structure of the lead organisation [25]. Many of the above examples are drawn from the more general area of user involvement within the product development process, since there is limited research in this area specifically addressing such issues within the medical device development domain. There is a lack of existing primary research that explores the challenges and benefits of involving users specifically within the medical device development process, particularly from a medical device manufacturer's perspective. Examples of some of the work that does exist within this domain includes, Grocott et al. [26] who carry out pioneering work in the field, proposing a valuable model of user engagement in medical device development, and focus on the practical issues around how user needs may be captured throughout the MDDD process [26]. 
Shah and Robinson [16] carry out secondary research in the form of a literature review of research that has involved users in some way in the design and development of medical devices, exploring the barriers and benefits to involving users as may have been reported in the reviewed literature. The findings of this study are in line with the perspective offered above that whilst user involvement in MDDD process is likely to benefit the user in terms of developing devices better suited to the users' needs, MDDD methods may be perceived as highly resource intensive and hence often not a feasible option for some manufacturers.\nReflecting on the body of literature above, and in particular on the perceived benefits and barriers to user involvement within the wider domain of technology product development, it seems that there are a number of key factors that impact on effective engagement with users. Perhaps most notable is the notion that product developers consider user needs research to be disproportionately costly, in light of the perceived benefits and pay-off for engaging in such activities. Furthermore, user needs research is perceived by developers to generate unmanageable volumes of data and there does not appear to be any clear route through to informing the development of the product once the user needs data has been collected and analysed. Whilst all of the perceived drawbacks, if well-founded, may be notable reasons for avoiding engagement with user needs research, it is not clear what the underlying reasons are for these factors. For example, a commonly held view by some developers is that the key function of human factors engineering methods is to serve as a means of facilitating a 'cake-frosting' exercise [27], by which 'superficial' design features may be 'painted' onto the device at the end of the development process. Such views of human factors engineering methods do not lend themselves to positive engagement or indeed a realisation of the full complexity and pay-off that such methods may potentially deliver if they were to be deployed with methodological rigour at appropriate points within the design process [28]. Certainly from an academic and/or human factors engineer's point of view, it may be argued that many of these factors may be overcome by increased awareness and better educating industrial developers in human factors engineering methods and their application. However, is it the case that with more training in the area of user research methods research, product developers may overcome these reservations, recognise the potential opportunities these methods promise, and develop the necessary skills to deploy methods and analyse data in a timely fashion, or are these methods indeed incompatible within the industrial context? Should user needs research be outsourced and delivered by the human factors engineering experts, in order to overcome the overhead in acquiring appropriate skills and level of understanding to actually effectively deploy these methods? Should these methods be adapted to make them more lightweight and easier to implement, hence making them more fit for purpose? These are all questions that need to be answered if the goal of incorporating the user into the product development process is to be realised. 
More specifically within the MDDD process, there are likely to be domain specific factors and conditions that influence the uptake of such methods.\nTherefore, from a theoretical perspective, human factors engineering methods are presented as being of value at every stage of the MDDD process, manufacturers may not actually be employing these methods in practice. If their full benefits are to be realised, more primary research is needed to better understand manufacturers' perspectives and motivations regarding such methods. Therefore, the aim of this study is to gain first hand and detailed insights into what medical device manufacturers' attitudes are towards engaging with users, and what the perceived value and barriers are of doing so. Furthermore, we aim to explore which methods are used, and what device manufacturers' attitudes are towards employing such methods.\nAlthough a large number of human factors engineering methods are available that may be employed within the MDDD process, previous research indicates that medical device manufacturers often avoid employing such methods due to lack of resources and the perception that such methods are often too resource intensive [16]. The literature in the area of user involvement suggests that there are a number of risks for manufacturers associate with a lack of engagement with users within the product development process. For example, Cooper [17] suggests that failure to build in the voice of the customer is one of the key reasons for the failure of developing effective and innovative products. Certainly in the realm of ICT products Kujala [18] suggests that attending to user perspectives increase the likelihood of a successful product. Alongside this however, other work has highlighted some of the challenges of, and barriers to, involving users in the product development process. Brown [19] and Butler [20] both suggest that the volume of data generated by field studies with users may be costly, difficult to analyse with no obvious route to informing development. When considering the factors leading to successful innovation, van der Panne et al. [21] note that although it is incontrovertible that good market research is associated with a successful product, the role of customer involvement in innovation remains contentious. They suggest that customer involvement at an early stage may tend to gravitate towards imitative products, being less able to envision or express novelty and thus, \"bias innovation efforts towards incremental innovation\" [[21] p.326]. User preferences may change over time and engaging with a limited range of users may result in over specification of the product [22]. From the developer's perspective, their criteria for the success of user involvement may be different than those (often academics or researchers) who actually do user engagement work [23,24]. Furthermore, user information that is based on formal methods of elicitation may be at variance with the representations of the user held by developers themselves [22] and may thus not readily be appropriated within the organisational culture and structure of the lead organisation [25]. Many of the above examples are drawn from the more general area of user involvement within the product development process, since there is limited research in this area specifically addressing such issues within the medical device development domain. 
There is a lack of existing primary research that explores the challenges and benefits of involving users specifically within the medical device development process, particularly from a medical device manufacturer's perspective. Examples of some of the work that does exist within this domain includes, Grocott et al. [26] who carry out pioneering work in the field, proposing a valuable model of user engagement in medical device development, and focus on the practical issues around how user needs may be captured throughout the MDDD process [26]. Shah and Robinson [16] carry out secondary research in the form of a literature review of research that has involved users in some way in the design and development of medical devices, exploring the barriers and benefits to involving users as may have been reported in the reviewed literature. The findings of this study are in line with the perspective offered above that whilst user involvement in MDDD process is likely to benefit the user in terms of developing devices better suited to the users' needs, MDDD methods may be perceived as highly resource intensive and hence often not a feasible option for some manufacturers.\nReflecting on the body of literature above, and in particular on the perceived benefits and barriers to user involvement within the wider domain of technology product development, it seems that there are a number of key factors that impact on effective engagement with users. Perhaps most notable is the notion that product developers consider user needs research to be disproportionately costly, in light of the perceived benefits and pay-off for engaging in such activities. Furthermore, user needs research is perceived by developers to generate unmanageable volumes of data and there does not appear to be any clear route through to informing the development of the product once the user needs data has been collected and analysed. Whilst all of the perceived drawbacks, if well-founded, may be notable reasons for avoiding engagement with user needs research, it is not clear what the underlying reasons are for these factors. For example, a commonly held view by some developers is that the key function of human factors engineering methods is to serve as a means of facilitating a 'cake-frosting' exercise [27], by which 'superficial' design features may be 'painted' onto the device at the end of the development process. Such views of human factors engineering methods do not lend themselves to positive engagement or indeed a realisation of the full complexity and pay-off that such methods may potentially deliver if they were to be deployed with methodological rigour at appropriate points within the design process [28]. Certainly from an academic and/or human factors engineer's point of view, it may be argued that many of these factors may be overcome by increased awareness and better educating industrial developers in human factors engineering methods and their application. However, is it the case that with more training in the area of user research methods research, product developers may overcome these reservations, recognise the potential opportunities these methods promise, and develop the necessary skills to deploy methods and analyse data in a timely fashion, or are these methods indeed incompatible within the industrial context? 
Should user needs research be outsourced and delivered by the human factors engineering experts, in order to overcome the overhead in acquiring appropriate skills and level of understanding to actually effectively deploy these methods? Should these methods be adapted to make them more lightweight and easier to implement, hence making them more fit for purpose? These are all questions that need to be answered if the goal of incorporating the user into the product development process is to be realised. More specifically within the MDDD process, there are likely to be domain specific factors and conditions that influence the uptake of such methods.\nTherefore, from a theoretical perspective, human factors engineering methods are presented as being of value at every stage of the MDDD process, manufacturers may not actually be employing these methods in practice. If their full benefits are to be realised, more primary research is needed to better understand manufacturers' perspectives and motivations regarding such methods. Therefore, the aim of this study is to gain first hand and detailed insights into what medical device manufacturers' attitudes are towards engaging with users, and what the perceived value and barriers are of doing so. Furthermore, we aim to explore which methods are used, and what device manufacturers' attitudes are towards employing such methods.\n[SUBTITLE] Section summary [SUBSECTION] Thus far, we have detailed a range of formal methods that may be used to elicit user perspectives as part of the process of medical device development and noted the range of regulatory requirements to involve users. We have suggested that the 'in principle' value of systematically seeking user input may be somewhat at variance with the day to day experiences of manufacturers where user involvement may be seen as a barrier to speedy and innovative product development. There is currently no evidence about this in respect of medical device development. The remainder of this paper addresses this issue and is structured as follows. In the next section, we describe a study which involved carrying out in-depth interviews with 11 medical device manufacturers which explored the perceived value of users in the MDDD process. In the following section, the results of the analysis of these interviews are then presented. The paper concludes with suggested recommendations for future research and practice in this area.\nThus far, we have detailed a range of formal methods that may be used to elicit user perspectives as part of the process of medical device development and noted the range of regulatory requirements to involve users. We have suggested that the 'in principle' value of systematically seeking user input may be somewhat at variance with the day to day experiences of manufacturers where user involvement may be seen as a barrier to speedy and innovative product development. There is currently no evidence about this in respect of medical device development. The remainder of this paper addresses this issue and is structured as follows. In the next section, we describe a study which involved carrying out in-depth interviews with 11 medical device manufacturers which explored the perceived value of users in the MDDD process. In the following section, the results of the analysis of these interviews are then presented. The paper concludes with suggested recommendations for future research and practice in this area.", "The MDDD process may be considered to be made up of four key stages. 
At stage one, user needs are established and scoping exercises are carried out with users. Stage two aims to validate and refine the concept. Stage three involves designing the device, and stage four involves evaluation of the device.
Figure 1 presents the medical device development lifecycle, and the associated user-centred design methods that may be used at each respective stage.
Figure 1. Medical device development lifecycle (adapted from [5]).
In some of our earlier work, as part of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) research activity, we carried out a rigorous review of the methods that have been employed in the MDDD process as presented in the academic literature [5,11,15]. MATCH is a UK research initiative between five universities which is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Department of Trade and Industry (DTI). The aim of MATCH is to provide support to the healthcare sector by developing methods to assess the value of medical devices at every stage of the MDDD process, with a focus on working with industrial partners to solve real world problems.
To provide some insight into what these methods involve, a brief description of the most frequently occurring methods within the device development lifecycle is now provided. A detailed description of all methods, and of how they may be applied within the MDDD process, is provided in [5].
Focus groups involve group discussion, typically between 8-10 users, with a moderator guiding and facilitating the group through relevant topics of discussion. They may be appropriate for use at stage one, two or four of the device development lifecycle. Focus groups are used in a wide variety of industry settings and may be conducted at a comparatively low cost compared with other methods such as usability testing.
Interviews are one of the most common approaches employed in user-centred research, and may be of value at stages one, two and four of the development pathway. They can be rapidly deployed, and are relatively inexpensive to carry out. Interviews enable the researcher to access a broad range of opinion regarding a device, whilst also allowing rich and detailed opinions to be collected.
Usability testing, typically performed at stage four, treats usability as a function of three criteria: effectiveness, efficiency and satisfaction. Usability testing protocols involve users carrying out specific tasks with a specific device, whilst usability measures are taken. Effectiveness is concerned with whether the user is able to successfully accomplish a given task. Efficiency may include a count of the number of user actions or the amount of time required to carry out a task. Satisfaction may typically be measured subjectively by means of a satisfaction questionnaire, completed after using a particular device.
Heuristic evaluation is a more rapid form of usability test which may be deployed by developers, perhaps prior to carrying out usability tests. Developers step through the device features and functionality and check the extent to which it complies with a pre-determined list of characteristics (heuristics, or rules of thumb) that may have been defined by earlier UCD design activities. This type of evaluation would normally be applied at stage four of the development pathway, once a tangible device has been developed.
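To make the three usability criteria above more concrete, the short sketch below (in Python) shows one way measures of this kind might be summarised from test-session records. It is a minimal illustrative sketch only, using hypothetical field names and invented example data; it is not drawn from the MATCH review or from any specific manufacturer's usability protocol.

    # Illustrative only: hypothetical records from a usability test of a device.
    # Each record notes whether the task was completed (effectiveness), how long
    # it took (efficiency), and a post-task satisfaction rating on a 1-5 scale.
    sessions = [
        {"task_completed": True,  "seconds_on_task": 74,  "satisfaction_1_to_5": 4},
        {"task_completed": True,  "seconds_on_task": 102, "satisfaction_1_to_5": 3},
        {"task_completed": False, "seconds_on_task": 210, "satisfaction_1_to_5": 2},
    ]

    def summarise_usability(records):
        """Return the three classic usability measures for a set of test sessions."""
        n = len(records)
        return {
            "effectiveness": sum(r["task_completed"] for r in records) / n,        # task success rate
            "efficiency_seconds": sum(r["seconds_on_task"] for r in records) / n,  # mean time on task
            "satisfaction": sum(r["satisfaction_1_to_5"] for r in records) / n,    # mean questionnaire rating
        }

    print(summarise_usability(sessions))

In practice a manufacturer would define the tasks, the sample of users and the satisfaction instrument to suit the device in question; the point of the sketch is simply that each of the three criteria can be reduced to a small number of comparable figures per design iteration.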
Although a large number of human factors engineering methods are available that may be employed within the MDDD process, previous research indicates that medical device manufacturers often avoid employing such methods due to lack of resources and the perception that such methods are often too resource intensive [16]. The literature in the area of user involvement suggests that there are a number of risks for manufacturers associated with a lack of engagement with users within the product development process. For example, Cooper [17] suggests that failure to build in the voice of the customer is one of the key reasons for the failure of developing effective and innovative products. Certainly in the realm of ICT products, Kujala [18] suggests that attending to user perspectives increases the likelihood of a successful product. Alongside this, however, other work has highlighted some of the challenges of, and barriers to, involving users in the product development process. Brown [19] and Butler [20] both suggest that the volume of data generated by field studies with users may be costly and difficult to analyse, with no obvious route to informing development. When considering the factors leading to successful innovation, van der Panne et al. [21] note that although it is incontrovertible that good market research is associated with a successful product, the role of customer involvement in innovation remains contentious. They suggest that customer involvement at an early stage may tend to gravitate towards imitative products, being less able to envision or express novelty and thus "bias innovation efforts towards incremental innovation" [[21] p.326]. User preferences may change over time and engaging with a limited range of users may result in over-specification of the product [22]. From the developer's perspective, their criteria for the success of user involvement may be different from those of the people (often academics or researchers) who actually do user engagement work [23,24]. Furthermore, user information that is based on formal methods of elicitation may be at variance with the representations of the user held by developers themselves [22] and may thus not readily be appropriated within the organisational culture and structure of the lead organisation [25]. Many of the above examples are drawn from the more general area of user involvement within the product development process, since there is limited research specifically addressing such issues within the medical device development domain. There is a lack of existing primary research that explores the challenges and benefits of involving users specifically within the medical device development process, particularly from a medical device manufacturer's perspective. Examples of the work that does exist within this domain include Grocott et al. [26], who carry out pioneering work in the field, proposing a valuable model of user engagement in medical device development, and focus on the practical issues around how user needs may be captured throughout the MDDD process [26]. Shah and Robinson [16] carry out secondary research in the form of a literature review of research that has involved users in some way in the design and development of medical devices, exploring the barriers and benefits to involving users as reported in the reviewed literature.
The findings of this study are in line with the perspective offered above: whilst user involvement in the MDDD process is likely to benefit the user in terms of developing devices better suited to users' needs, the methods involved may be perceived as highly resource intensive and hence often not a feasible option for some manufacturers.
Reflecting on the body of literature above, and in particular on the perceived benefits and barriers to user involvement within the wider domain of technology product development, it seems that there are a number of key factors that impact on effective engagement with users. Perhaps most notable is the notion that product developers consider user needs research to be disproportionately costly in light of the perceived benefits and pay-off for engaging in such activities. Furthermore, user needs research is perceived by developers to generate unmanageable volumes of data, and there does not appear to be any clear route through to informing the development of the product once the user needs data has been collected and analysed. Whilst all of these perceived drawbacks, if well founded, may be notable reasons for avoiding engagement with user needs research, it is not clear what the underlying reasons for these factors are. For example, a view commonly held by some developers is that the key function of human factors engineering methods is to serve as a means of facilitating a 'cake-frosting' exercise [27], by which 'superficial' design features may be 'painted' onto the device at the end of the development process. Such views of human factors engineering methods do not lend themselves to positive engagement or indeed to a realisation of the full complexity and pay-off that such methods may potentially deliver if they were to be deployed with methodological rigour at appropriate points within the design process [28]. Certainly from an academic and/or human factors engineer's point of view, it may be argued that many of these factors could be overcome by increased awareness and by better educating industrial developers in human factors engineering methods and their application. However, is it the case that, with more training in the area of user research methods, product developers may overcome these reservations, recognise the potential opportunities these methods promise, and develop the necessary skills to deploy methods and analyse data in a timely fashion, or are these methods indeed incompatible with the industrial context? Should user needs research be outsourced and delivered by human factors engineering experts, in order to overcome the overhead of acquiring the skills and level of understanding needed to deploy these methods effectively? Should these methods be adapted to make them more lightweight and easier to implement, hence making them more fit for purpose? These are all questions that need to be answered if the goal of incorporating the user into the product development process is to be realised. More specifically within the MDDD process, there are likely to be domain-specific factors and conditions that influence the uptake of such methods.
Therefore, although from a theoretical perspective human factors engineering methods are presented as being of value at every stage of the MDDD process, manufacturers may not actually be employing these methods in practice. If their full benefits are to be realised, more primary research is needed to better understand manufacturers' perspectives and motivations regarding such methods.
Therefore, the aim of this study is to gain first-hand and detailed insights into medical device manufacturers' attitudes towards engaging with users, and into the perceived value of, and barriers to, doing so. Furthermore, we aim to explore which methods are used, and what device manufacturers' attitudes are towards employing such methods.
Section summary
Thus far, we have detailed a range of formal methods that may be used to elicit user perspectives as part of the process of medical device development and noted the range of regulatory requirements to involve users. We have suggested that the 'in principle' value of systematically seeking user input may be somewhat at variance with the day-to-day experiences of manufacturers, where user involvement may be seen as a barrier to speedy and innovative product development. There is currently no evidence about this in respect of medical device development. The remainder of this paper addresses this issue and is structured as follows. In the next section, we describe a study involving in-depth interviews with 11 medical device manufacturers which explored the perceived value of users in the MDDD process. In the following section, the results of the analysis of these interviews are presented. The paper concludes with suggested recommendations for future research and practice in this area.
A thematic analysis of the interviews carried out with manufacturers identified four high-level themes relating to manufacturers' views of user involvement within the MDDD process. These are as follows: Who is the user?; Methods used; Perceived value and barriers to user involvement; The nature of user contributions. The remainder of this section presents the findings of our study according to these themes.
Who is the User?
Discussions relating to the range of individuals who are consulted during the MDDD process revealed a mismatch between the users who were consulted and those who would actually use the device in practice. Some manufacturers believed that the needs of the patient do not originate from the patient themselves, and that patients' needs are better articulated through a hierarchy of health professionals including surgeons and 'clinical champions'.
#2: "The need actually will probably be established by the clinical fraternity....All you have to do after that is convince the people on down the chain, from the hospital clinical researchers right down to the guy in the street who says 'It's a good idea to have one of these. That's where the need is identified. It is identified really at the top, and then it's taught down, if you follow me"
In light of this, manufacturers had a preference for seeking input from more senior health care staff over less senior staff, regardless of who would actually use the device in practice. The assumption was that senior staff members were more than capable of speaking on behalf of less senior staff, even though manufacturers acknowledged that it was likely that senior staff members would never actually use the device. This was the case even in scenarios where the patient was seen to be the main user of the device.
For example, Manufacturer #2, who is involved in the development of Automated External Defibrillators (AEDs), recognised that a significant proportion of the intended users of the device were members of the public; however, they did not see it as necessary to consult members of the public, but rather consulted senior health professionals in the early stages of device design and development. Input from nurses was also considered to be less desirable than input from more senior health care staff such as surgeons. For example, Manufacturer #11 identified nurses as the main user of their product but did not consider it necessary to consult nurses in the design and development process. It was felt that surgeons made the decisions on behalf of nurses and patients, and that it was therefore logical to consult them to identify nurse and patient needs, as is articulated below.
#11: "Surgeons may be there making the decision or recommending which models to buy, but it might be the nurses who are actually using the unit...the surgeon has made the decision, but he doesn't necessarily have to actually work with it, so in your case, the controllability aspect, the patient actually understanding how to operate the unit, the clinician has made the decision and has prescribed that particular device..."
Surgeons were also considered by Manufacturer #11 as having sufficient knowledge to act as representatives for home patients. The motivation for this was a direct result of the way in which the device is introduced and promoted to the patient. Surgeons do the marketing and promoting of the product to the patients, and therefore it was seen as important that the device is primarily designed according to the surgeons' requirements. In the quote below, Manufacturer #11 justifies their motivation for valuing the surgeons' opinion over that of the patient users.
#11: "but, by and large, most of our market research is done with the surgeons, not with the end users, rightly or wrongly...but how it will be marketed will be through the healthcare professional, who will have to sell it effectively to the patient, show them how to use it."
Similarly, where the user group was general health professionals, the opinions of a small number of 'clinical champions' were sought by Manufacturer #8, as well as those of purchasing representatives from the health authority. It was deemed more important to meet the needs of the individuals who were responsible for making purchasing decisions, or who held most influence over them, than to focus on the needs of the individuals who would be using the devices on a day-to-day basis. MDDD activity therefore often seemed to be carried out with a strong focus on how effort would translate into sales. In the following quote, Manufacturer #4 is asked who would be consulted in identifying design requirements for the development of the given medical device:
#4: P1: "...it's through the doctors to, its [name]'s contact with the hospitals out there, now then I doubt it would be at doctor level, it would be more the managerial level of the hospital..."
I2: ...so in effect you are capturing a user need in that way...
P: " ...yes."
(Note: 1 P: denotes the participant's response, 2 I: denotes the interviewer's questioning)
Manufacturer #1 also focuses on those making or influencing purchasing decisions, which includes management and administrative staff. Manufacturer #1 articulates this shift in focus when asked who they view as their customer:
#1: "Orthopaedic surgeons really. Although it is used in patients, it is the orthopaedic surgeons who decide what he will use. Sell to orthopaedic surgeons. Increasingly in the UK, we have to sell to the hospital management to justify why they should be using our bone graft substitutes as opposed to any other on the market..."
Manufacturer #8 stated that health professionals are considered to be the main user of their device. However, the opinions of purchasing representatives are the primary source used to inform device design and development, followed up by consultation with a small number of 'clinical champions' (typically high-profile surgeons and well-known experts in their field). When questioned on what value the user may add to the MDDD process, although the reply was couched in humour, it was clear that the usefulness of the user was ideally located directly in relation to sales-relevant information.
#8: "The most helpful would to give me the year three sales figures with absolute confidence. Year-1...[laughs]..."
There seems to be limited overlap between the individuals who will be using the device and those who are consulted to inform the MDDD process. In particular, priority is given to those who hold more senior positions within the health care system. Surgeons, doctors and clinical champions were therefore seen as more valuable sources for identifying user needs than those individuals who would actually use the device on a daily basis. Furthermore, from the manufacturers' point of view, the motivation for maximising sales seems to have conflated the distinction between the customer and the user: the needs of those who make purchasing decisions, or who have most influence over these decisions, are more salient than the needs of the user. This is certainly a more complex picture than the human factors engineering approach, which puts the user's needs at the centre of the design and development process.
Methods used
Given the wide range of formal methods that are available to engage with users in the MDDD process, manufacturers tended to use only a very limited range of methods to capture information from users and patients. In line with the analysis above regarding the nature of the preferred user to consult, the most typical method used to gain input from users was via informal discussions with senior health professionals. Only one of the eleven manufacturers (Manufacturer #8) stated that they regularly used formal methods throughout the MDDD process, such as focus groups and questionnaires, when developing their airway management device. Interestingly, the initial identification of a need for this device occurred as a result of six members of the company attending a postgraduate university course, which required the use of formal methods in order to explore and identify new device ideas. It was apparent that manufacturers do not feel that they have the time or resources to engage in rigorous formal user data collection methods, instead relying on a range of strategies including gut feel, instinct, and a personal belief that they understand the market place in which they operate, in order to identify and develop new devices.
The idea of employing formal methods is once again something that is not regarded as feasible, given the amount of resources available and the fact that manufacturers believe it is necessary to move quickly in order to remain competitive with their rivals.
The view is that consulting a large number of individuals is problematic in itself, as every person who is approached for feedback has a different view. Informal methods, however, are seen as offering versatile and rapid solutions, hence the belief that relying on gut feel and pressing ahead when the moment feels right is a more feasible and efficient solution. This point is articulated by Manufacturer #8 below:
#8: "The very fact that someone is willing to talk with you almost means that they have slightly different view. You can go on asking forever. It's that balance between have we got sufficient confidence in what we have here to move forward vs. just the generation of the information....being confident enough of its assurance. That's what you constantly face anyhow. I think that's the dilemma you always have. You can ask the users till the cows come home but you never get a new product. You ask 100,000 you get 99,999 different opinions!"
Manufacturer #3 further echoes the observation that formal methods are rarely used for medical device development; informal discussions and observation are seen as more versatile and fit for purpose:
#3: P: "you're introducing a medical device to people, first question they'll ask, is "what will it do to help me?" Second question will be "how long is this going to take?" In a busy clinic that's really important...If it takes too many clicks, then people won't use it because they are busy enough as it is."
I: have you been able to capture that at all?
P: "Through our interaction with users. We haven't got a specific mechanism for capturing it."
I: Do you use any formal methods for converting customer needs into product development?
P: "No."
There was typically the belief that there was little need to consult the actual users formally regarding a new innovation; rather, contacting what they referred to as a 'clinical champion' was sufficient to qualify the feasibility and validity of a given new innovation. For example, Manufacturer #8, responding to a question regarding how the feasibility of new device ideas is qualified, responded:
#8: "Every project will have a clinical champion. They will typically be involved, sometimes they come down here to meetings. We have a list of a couple of dozen clinicians that I can pick the phone up at anytime throughout the world and say "what do you think of this?"."
When asked whether any formal methods are used within their organisation, Manufacturer #6 stated that formal methods are used within the organisation, but that they are not relevant for this particular product. Once again, formal methods were in this case considered to be too bureaucratic, time consuming, and not applicable given the device and development scenario. Informal methods, such as ad hoc discussions with senior health professionals, were seen as likely to identify the majority of user design needs, and also as more appropriate.
#6: "Yes we do, but I couldn't apply that to hip replacement at this point in time. If the surgeon tells me "that this catheter is a bit too stiff, and could you make it a bit softer, if you like, or a bit more flexible, yet do exactly the same task?" we will do that - it's for everyone's benefit"
In the above example, a pragmatic approach is taken by manufacturers to make modifications to devices, adopting the belief that if a problem is encountered by one individual, then it is likely to be a problem for the majority, providing it seems to be a reasonable request.
The intuition of the manufacturer plays a major part in the process of developing products that are useful to the user, and responding to their needs.
Perceived value and barriers to user involvement
There was limited evidence that direct elicitation of user views was seen by manufacturers as being of value to the MDDD process. This appeared to be particularly the case in relation to patient users. For example, Manufacturer #11, discussing their device that was aimed purely at the home patient market, did not believe that patient involvement in the MDDD process was a particularly wise expenditure of resources. This was explicitly linked to the degree of power and influence that patients have over levels of uptake of the device. In response to being asked whether they would like to involve the patient more in the MDDD process, Manufacturer #11 replied:
#11: "if they are highly powered (i.e. influential in terms of clinical decision making), if they have no power then we have to ethically try to make sure that we don't harm the patient and [that] what we do is for their benefit, but appealing to them may actually be a waste of resources, so we have to make sure we don't pretend that they have a sway when they really don't, you know? "
Once again, the above statement reinforces the notion that patients are seen as being at the bottom of the hierarchy of influence, and hence investing resources and effort to find out their opinion is not considered an effective or efficient strategy.
Manufacturer #2, who also saw their device as being targeted at the home market, acknowledged that the patient user is becoming increasingly important, particularly as they are now selling more directly to the home patient. However, the majority of individuals involved in the MDDD process are still predominantly senior health professionals rather than patients, who may play a stronger role in the clinical trials phase, after the device has been designed and developed. The patient user is therefore considered to play a passive role, more in terms of verifying the value of an already developed device than being involved in the initial design and development stages. This is discussed in more detail in the next section.
Another factor discouraging manufacturers from involving patient users in the MDDD process is the prospect of having to obtain ethical approval in order to carry out the research. When asked whether there are any difficulties in obtaining ethical approval for involving patient users in the MDDD process, Manufacturer #3 stated:
#3: "Yes. A huge problem. In my experience if you are not affecting patient care, patient throughput, you can get 'ethics', but if you are you won't get it, or you might but it will take years. So you have to design your trial so as not to affect patient care."
Manufacturer #1 also comments on the ethical approval process and the R&D committees that must be attended to when employing more formal methods with users, both patient users and professional users, in the collection of clinical data.
#1: "R&D committees. That's another thing. You start a study and surgeon wants to collect clinical data to be sure that they want to use this. Their experience that this is the product they want to use. You have to notify the hospital R&D and then suddenly they see it as an R&D project, so they throw on their overheads and it's an extra hurdle to go through. They start reviewing it in addition to ethical board. This is what has changed in the last few years. In those days you notified the larger University hospitals. Increasingly has to be approved. Some are straightforward; others ask extra questions that delays process - UK research."
This perceived difficulty with obtaining ethical approval may be one reason why manufacturers tend not to use formal methods, or engage in systematic research activity, in order to inform the MDDD process, particularly at the early stages of development. Indeed manufacturers appear to actively avoid involving professional and patient users in the process, for fear that the MDDD process could be delayed for years as a result of the ethical approval process. As identified in the 'Methods used' section above, informal discussion with clinical staff is seen as a more realistic, pragmatic, and feasible route to informing the MDDD process, which can be carried out informally and hence without ethical clearance. Manufacturer #3 goes on to describe the strategy that they use to get around the challenge of obtaining ethical approval:
#3: "So you take the least hard route, to not affect patient care in any way whatsoever, and design your study to sit on the back of that. So the study might not be optimal, but at least you can do it."
Setting up device design and development activity so as to avoid the need for user involvement seems to be the 'method' of choice for some device manufacturers.
Despite this being seen as a potentially sub-optimal approach, perhaps less effective in evaluating the extent to which design innovations may be accepted by users, it seems a route worth taking given the alternative of incurring long delays as a result of the ethical approval process.
The nature of user contributions
The measures used to evaluate the effectiveness of newly developed devices towards the end of the MDDD process echo the findings presented earlier: formal methods employed in any part of the MDDD process are seen by manufacturers as having the potential to slow the process down and to incur additional and unnecessary costs and overheads that could otherwise be avoided if a less formal approach were taken. Most commonly, manufacturers reported that the success of a new device is typically measured by the absence of customer complaints or 'bad news' emerging as a result of the device being used in the field. Manufacturer #1, in response to being asked how the success of their device is measured, responded as follows:
#1: "The absence of bad news. The fact that we have access to the surgeons who are using the product and any adverse effects would be reported and we would be aware early on."
With regard to seeking patient feedback about the success of a medical device in providing an effective health intervention, Manufacturer #3 takes a similar 'no bad news is good news' approach to the design and functioning of their product. When asked whether any patient feedback is sought about the device, the reply was:
#3: "Not explicitly, we haven't gone out to get it, but we get feedback though the users (clinicians). It's non-invasive, so as far as the patients concerned it's not a problem to use it. No-one has said they don't want it done or had any problems with that."
Manufacturer #9, commenting on whether any formal methods are used for converting user needs into device design requirements, also indicated that they adopt a reactive stance to customer suggestions and complaints, which is their default position on such matters.
#9: "Erm, we certainly have regular meetings here where we will look at customer feedback, you know, whether it's customer complaints or customer suggestions or whatever, yes so, yes we do have a means to do that..."
Some evidence of formal user involvement did emerge from the interview data; however, it indicated that the majority of formal user involvement took place at the clinical trials stage, i.e. after the product had been developed and manufacturers were at the stage of demonstrating its efficacy and clinical effectiveness. In effect, therefore, any considerations in terms of design preferences that might benefit the user in their treatment or use of the device had already been settled prior to this formal user involvement. Manufacturer #2 highlights the notion that the patient user is primarily seen as being of value when attempting to demonstrate the clinical effectiveness of a new medical device. Interestingly, patient users are not seen as informing the general design of the device at earlier stages of the MDDD process, at least not in a formal capacity. When asked whether formal user methods are employed within the MDDD process, Manufacturer #2 responded:
#2: "There is the formal method of gaining patient data which means at the moment for example we have conducted clinical trials, clinical investigations. All sorts of clinical information gathering is going on at the [hospital name]. We are also... we tend to gather information first of all on the effectiveness or efficacy of it...Any advance, any variation that is likely to happen to the unit, any proposals for change that are going to improve the machine are all tested out in the clinical environment."
Manufacturer #5 reported that the key driver for collecting formal user data relating to the performance of a device comes from the possibility that an organisation such as the National Institute for Clinical Excellence (NICE) may decide to investigate the efficacy of their device at some unknown point. The key driver for formally collecting user-generated data is therefore to fulfil the potential future requirements of external standards or purchasing agencies. The motivation to collect user-facing data does not appear to be borne out of an inherent belief, on the manufacturer's behalf, that this user data would add any significant value in terms of fulfilling their own need to develop more effective devices, or indeed in learning more about the effectiveness and efficiency of their own device.
#5: "I think, if you take an organisation like NICE as a customer, or any of the other health technology organisations, one doesn't know whether your particular intervention is going to be assessed by them, and if so at what point, and one can see that as [the product] gains momentum and the NHS starts to look at what its spending on [this] surgery, then it may be something that NICE feel, or NICE get directed to take a look at. So it's a kind of a problem to us to understand when that's going to happen and it will certainly be a challenge; we have to therefore make sure that we are gathering the evidence in case they do."
Input from nurses was also considered to be less desirable than input from more senior health care staff such as surgeons. For example, Manufacturer #11 identified nurses as the main user of their product; however, they did not consider it necessary to consult nurses in the design and development process. It was felt that surgeons made the decisions on behalf of nurses and patients, and therefore it was logical to consult them to identify nurse and patient needs, as is articulated below.\n#11: \"Surgeons may be there making the decision or recommending which models to buy, but it might be the nurses who are actually using the unit...the surgeon has made the decision, but he doesn't necessarily have to actually work with it, so in your case, the controllability aspect, the patient actually understanding how to operate the unit, the clinician has made the decision and has prescribed that particular device...\"\nSurgeons were also considered by Manufacturer #11 as having sufficient knowledge to act as representatives for home patients. The motivation for this was a direct result of the way in which the device is introduced and promoted to the patient. Surgeons do the marketing and promoting of the product to the patients, and therefore it was seen as important that the device is primarily designed according to the surgeons' requirements. In the quote below, Manufacturer #11 justifies their motivation for valuing the surgeons' opinion over that of the patient users.\n#11: \"but, by and large, most of our market research is done with the surgeons, not with the end users, rightly or wrongly...but how it will be marketed will be through the healthcare professional, who will have to sell it effectively to the patient, show them how to use it.\"\nSimilarly, in the event of the user group being general health professionals, the opinions of a small number of 'clinical champions' were sought by Manufacturer #8, as well as those of purchasing representatives from the health authority. It was deemed more important to meet the needs of the individuals who were responsible for making purchasing decisions or held most influence over them, as opposed to focusing on the needs of the individuals that would be using the devices on a day to day basis. Therefore, MDDD activity often seemed to be carried out with a strong focus on how effort would translate to sales. In the following quote, Manufacturer #4 is asked who would be consulted in identifying design requirements for the development of the given medical device:\n#4: P: \"...it's through the doctors to, its [name]'s contact with the hospitals out there, now then I doubt it would be at doctor level, it would be more the managerial level of the hospital...\"\nI: ...so in effect you are capturing a user need in that way...\nP: \" ...yes.\"\n(Note: P denotes the participant's response; I denotes the interviewer's questioning)\nManufacturer #1 also focuses on those making/influencing purchasing decisions, which includes management and administrative staff. Manufacturer #1 articulates this shift in focus when asked who they view as their customer:\n#1: \"Orthopaedic surgeons really. Although it is used in patients, it is the orthopaedic surgeons who decide what he will use. Sell to orthopaedic surgeons.
Increasingly in the UK, we have to sell to the hospital management to justify why they should be using our bone graft substitutes as opposed to any other on the market...\"\nManufacturer #8 stated that health professionals are considered to be the main user of their device, However, the opinions of purchasing representatives are the primary source used to inform device design and development, followed up by consultation with a small number of 'clinical champions' (typically high profile surgeons and well known experts in their field). When questioned on what value the user may add value to the MDDD process, although couched in a humorous reply, it was clear that the usefulness of the user was ideally directly located in relation to sales relevant information.\n#8: \"The most helpful would to give me the year three sales figures with absolute confidence. Year-1...[laughs]...\"\nThere seems to be limited overlap between the individuals that will be using the device, and those that are consulted to inform the MDDD process. In particular, priority is given to those that hold more senior positions within the health care system. Therefore, surgeons, doctors and clinical champions were seen as more valuable sources for identifying user needs as opposed to those individual that would actually use the device on a daily basis. Furthermore, from the manufacturers' point of view, the motivation for maximising sales seems to have conflated the distinction between the customer and the user. Therefore the needs of those that make purchasing decisions or indeed have most influence over these decisions are more salient than the needs of the user. This is certainly a more complex picture than the human factors engineering approach, which puts the user's needs at the centre of the design and development process.", "Given the wide range of formal methods that are available to engage with users in the MDDD process, manufacturers tended to use only a very limited range of methods to capture information from users and patients. In line with the analysis above regarding the nature of the preferred user to consult, the most typical method used to gain input from users was via informal discussions with senior health professionals. Only one out of the eleven manufacturers (Manufacturer #8) stated that they regularly used some formal methods throughout the MDDD process, such as focus groups and questionnaires when developing their airway management device. Interestingly, the initial identification of a need for this device occurred as a result of six members of the company attending a postgraduate university course, which required the use of formal methods in order to explore and identify new device ideas. It was apparent that manufacturers do not feel that they have the time or resources to engage in rigorous formal user data collection methods, instead relying on a range of strategies including gut feel, instinct, and a personal belief that they understand the market place in which they operate, in order to identify and develop new devices.\nThe idea of employing formal methods is once again something that is not regarded as feasible, given the amount of resources available and the fact that manufacturers believe it is necessary to move quickly in order to remain competitive with their rivals. The view is that consulting a large number of individuals is problematic in itself, as every person that is approached for feedback has a different view. 
Informal methods, however, are seen as offering versatile and rapid solutions, hence the belief that relying on gut feel and pressing ahead when the moment feels right is a more feasible and efficient solution. This point is articulated by Manufacturer #8 below:\n#8: \"The very fact that someone is willing to talk with you almost means that they have slightly different view. You can go on asking forever. It's that balance between have we got sufficient confidence in what we have here to move forward vs. just the generation of the information....being confident enough of its assurance. That's what you constantly face anyhow. I think that's the dilemma you always have. You can ask the users till the cows come home but you never get a new product. You ask 100,000 you get 99,999 different opinions!\"\nManufacturer #3 further echoes the observation that formal methods are rarely used for medical device development, however, informal discussions and observation are seen as more versatile and fit for purpose, :\n#3: P:\"you're introducing a medical device to people, first question they'll ask, is \"what will it do to help me?\" Second question will be \"how long is this going to take?\" In a busy clinic that's really important...If it takes too many clicks, then people won't use it because they are busy enough as it is.\"\nI: have you been able to capture that at all?\nP: \"Through our interaction with users. We haven't got a specific mechanism for capturing it.\"\nI: Do you use any formal methods for converting customer needs into product development?\nP: \"No.\"\nThere was typically the belief that there was little need to consult the actual users formally regarding a new innovation, but rather contacting what they referred to as a 'clinical champion', was sufficient to qualify the feasibility and validity of a given new innovation. For example, Manufacturer #8, responding to a question regarding how the feasibility of new device ideas are qualified responded:\n#8: \"Every project will have a clinical champion. They will typically be involved, sometimes they come down here to meetings. We have a list of a couple of dozen clinicians that I can pick the phone up at anytime throughout the world and say \"what do you think of this?\".\nWhen asked whether there are any formal methods used within their organisation, Manufacturer #6 stated that formal methods are used within his organisation, however, they are not relevant for this particular product. Once again, formal methods, in this case example, were considered to be too bureaucratic, time consuming, and not applicable given the device and development scenario. Informal methods, such as ad hoc discussions with senior health professionals were likely to identify the majority of user design needs, and were also more appropriate.\n#6:\"Yes we do, but I couldn't apply that to hip replacement at this point in time. If the surgeon tells me \"that this catheter is a bit too stiff, and could you make it a bit softer, if you like, or a bit more flexible, yet do exactly the same task?\" we will do that - it's for everyone's benefit\"\nIn the above example, a pragmatic approach is taken by manufacturers to make modifications to devices, adopting the belief that if a problem is encountered by one individual, then it is likely to be a problem for the majority, providing it seems to be a reasonable request. 
The intuition of the manufacturer plays a major part in the process of developing products that are useful to the user, and responding to their needs.", "There was limited evidence that direct elicitation of user views was seen by manufacturers as being of value to the MDDD process. This appeared to be particularly the case in relation to patient users. For example, Manufacturer #11, discussing their device that was purely aimed at the home patient market, did not believe that patient involvement in the MDDD process was a particularly wise expenditure of resources. This was explicitly linked to the degree of power and influence that patients were perceived to have over uptake of the device. In response to being asked whether they would like to involve the patient more in the MDDD process, Manufacturer #11 replied:\n#11: \"if they are highly powered (i.e. influential in terms of clinical decision making), if they have no power then we have to ethically try to make sure that we don't harm the patient and [that] what we do is for their benefit, but appealing to them may actually be a waste of resources, so we have to make sure we don't pretend that they have a sway when they really don't, you know? \"\nOnce again, the above statement reinforces the notion that patients are seen as being at the bottom of the hierarchy of influence, and hence investing resources and effort to find out their opinion is not considered an effective or efficient strategy. Manufacturer #2, who also saw their device as being targeted at the home market, acknowledged that the patient user is becoming increasingly important, particularly as they are now selling more directly to the home patient. However, the majority of individuals involved in the MDDD process are still predominantly senior health professionals as opposed to patients, who may play a stronger role in the clinical trials phase, after the device has been designed and developed. Therefore, the patient user is considered to play a passive role, more in terms of verifying the value of an already developed device, as opposed to being involved in the initial design and development stages. This is discussed in more detail in the next section.\nAnother factor discouraging manufacturers from involving patient users in the MDDD process is the prospect of having to obtain ethical approval in order to carry out the research. When asked whether there are any difficulties in obtaining ethical approval for involving patient users in the MDDD process, Manufacturer #3 stated:\n#3: \"Yes. A huge problem. In my experience if you are not affecting patient care, patient throughput, you can get 'ethics', but if you are you won't get it, or you might but it will take years. So you have to design your trial so as not to affect patient care.\"\nManufacturer #1 also comments on the ethical approval process and the R&D committees that must be attended to when employing more formal methods with users, both patient users and professional users, in the collection of clinical data.\n#1: R&D committees. That's another thing. You start a study and surgeon wants to collect clinical data to be sure that they want to use this. Their experience that this is the product they want to use. You have to notify the hospital R&D and then suddenly they see it as an R&D project, so they throw on their overheads and it's an extra hurdle to go through. They start reviewing it in addition to ethical board. This is what has changed in the last few years.
In those days you notified the larger University hospitals. Increasingly has to be approved. Some are straightforward; others ask extra questions that delays process - UK research.\nThis perceived difficulty with obtaining ethical approval may be one reason why manufacturers tend not to use formal methods, or engage in systematic research activity in order to inform the MDDD process, particularly at the early stages of development. Indeed manufacturers appear to actively avoid involving professional and patient users in the process, in fear that the MDDD process could be delayed for years as a result of the ethical approval process. As identified in section 3.2, informal discussion with clinical staff is seen as a more realistic, pragmatic, and feasible route to informing the MDDD process, which can be carried out informally and hence without ethical clearance. Manufacturer #3 goes on to describe the strategy that they use to get around the challenge of obtaining ethical approval:\n#3: \"So you take the least hard route, to not affect patient care in any way whatsoever, and design your study to sit on the back of that. So the study might not be optimal, but at least you can do it.\"\nSetting up device design and development activity so as to avoid the need for user involvement seems to be the 'method' of choice for some device manufacturers. Despite this being seen as a potentially sub-optimal approach, perhaps less effective in evaluating the extent to which design innovations may be accepted by users, it seems a route worth taking given the alternative of incurring long delays as a result of the ethical approval process.", "The measures used to evaluate the effectiveness of newly developed devices towards the end of the MDDD process echo the findings presented earlier, that formal methods employed in any part of the MDDD process are seen by manufacturers as having the potential of slowing the process down, and incurring additional and unnecessary costs and overheads that otherwise could be avoided if a less formal approach was taken. Most commonly, manufacturers reported that the success of a new device is typically measured by the absence of receiving customer complaints or 'bad news' emerging as a result of the device being used in the field. Manufacturer #1, in response to being asked how the success of their device is measured responded as follows:\n#1: \"The absence of bad news. The fact that we have access to the surgeons who are using the product and any adverse effects would be reported and we would be aware early on.\"\nWith regards to seeking patient feedback about the success of a medical device in providing an effective health intervention, Manufacturer #3 takes a similar 'no bad news is good news' approach to the design and functioning of their product. When asked whether any patient feedback is sought about the device, the reply was:\n#3: \"Not explicitly, we haven't gone out to get it, but we get feedback though the users (clinicians). It's non-invasive, so as far as the patients concerned it's not a problem to use it. 
No-one has said they don't want it done or had any problems with that.\"\nManufacturer #9, commenting on whether any formal methods are used for converting user needs into device design requirements, also indicated that they adopt a reactive stance to customer suggestions and complaints, which is their default position on such matters.\n#9: \"Erm, we certainly have regular meetings here where we will look at customer feedback, you know, whether it's customer complaints or customer suggestions or whatever, yes so, yes we do have a means to do that...\"\nSome evidence of formal user involvement did emerge from the interview data, however, it indicated that the majority of formal user involvement took place at the clinical trials stage, i.e. after the product had been developed, and manufacturers were at the stage of demonstrating its efficacy and clinically effectiveness. Therefore, in effect any considerations in terms of design preference of the medical device that may benefit the user in terms of their treatment or use of the device had already been made prior to this formal user involvement. Manufacturer #2 highlights the notion that the patient user is primarily seen as being of value when attempting to demonstrate the clinical effectiveness of a new medical device.. Interestingly, patient users are not seen as primarily informing the general design of the device at earlier stages of the MDDD process, at least not in a formal capacity. When asked whether formal user methods are employed within the MDDD process, Manufacturer #2 responded:\n#2: \"There is the formal method of gaining patient data which means at the moment for example we have conducted clinical trials, clinical investigations. All sorts of clinical information gathering is going on at the [hospital name]. We are also... we tend to gather information first of all on the effectiveness or efficacy of it...Any advance, any variation that is likely to happen to the unit, any proposals for change that are going to improve the machine are all tested out in the clinical environment.\"\nManufacturer #5 reported that the key driver for collecting formal user data, relating to the performance of a device, comes from the possibility that an organisation such as the National Institute for Clinical Excellence (NICE) may decide to investigate the efficacy of their device at some unknown point. Therefore, the key driver for formally collecting user generated data is to fulfil the potential future requirements of external standards or purchasing agencies. The motivation to collect user facing data does not appear to be borne out of an inherent belief, on the manufacturer's behalf, that this user data would add any significant value in terms of fulfilling their own need to develop more effective devices or indeed learn more about the effectiveness and efficiency of their own device.\n#5: I think, if you take an organisation like NICE as a customer, or any of the other health technology organisations, one doesn't know whether your particular intervention is going to be assessed by them, and if so at what point, and one can see that as [the product] gains momentum and the NHS starts to look at what its spending on [this] surgery, then it may be something that NICE feel, or NICE get directed to take a look at. 
So it's a kind of a problem to us to understand when that's going to happen and it will certainly be a challenge; we have to therefore make sure that we are gathering the evidence in case they do.\"", "In this study, 11 in-depth interviews were carried out with medical device manufacturers, who were asked to comment on the role and value they believed that users have within the MDDD process. Given the small sample size, it is recognised that the results of this study should be considered as provisional. However, given the limited existing research in this area, the findings provide an important point of reference for further work.\nThe results revealed that manufacturers tend to prioritise the views of more senior health professionals above those of less senior staff and of patients. Furthermore, manufacturers' perceptions of the customer and the user have become conflated, partly due to the strong sales focus of manufacturers, who seek device design input from those individuals who make purchasing decisions, as opposed to the users of the devices. With regards to seeking input from the patient user, there was little motivation to engage in such practice, which was seen as an ineffective use of resources; patients and less senior health professionals were perceived as having little impact or influence on general device sales, largely due to the ingrained culture of patients having their needs 'taught down' to them by health care professionals.\nOnly one out of 11 manufacturers claimed to regularly use formal user centred design methods within the MDDD process. Interestingly, this individual also reported that they were familiar with such methods as a result of being introduced to them within the university setting in which they carried out the initial formulation and design of the product they were discussing. This experience appeared to have had a lasting effect, as they report the ongoing use of formal research methods to this day. It is therefore possible that a contributory factor to the lack of engagement with formal methods is a lack of education, familiarity or confidence in their use. Necessary training has indeed been found to be a factor that affects the level of uptake of usability methods in medical device development [34]. Informal methods were typically preferred by manufacturers, in the form of extemporised discussions with a small number of esteemed medical experts. There was also a belief that if a manufacturer wished to be competitive and responsive within a fast moving market, formal methods do not offer the necessary versatility and are generally not fit for purpose, despite the benefits that the academic literature suggests such methods promise if applied within the health care sector [35,36]. Research in the technology design domain suggests it is necessary to adopt a flexible and evolutionary stance when applying formal methods, to cater for the unique context and organisational cultures that present themselves within any new design challenge [37]. Similarly, there may be value in increasing awareness of the versatility of existing human factors engineering methods and, in specialist contexts, in exploring the potential development of more agile and tailored methods that cater for manufacturers' needs in the context in which they are to be applied.
It has been suggested that, as a consequence of the increased pressure from standards agencies to incorporate formal user methods into the MDDD process, some manufacturers may 'misuse' existing human factors methods [38]. The reasons for this are unclear. Is it perhaps in a bid to make methods more fit for purpose, or as a result of not being fully aware of the ways in which existing methods should be applied?\nThe perceived barriers to user involvement within the MDDD process were linked to the notion that manufacturers seek out those individuals that will be most influential in making purchasing decisions for their products. Consequently, involving patient users appeared to be of lowest priority, since it was believed that they held the lowest level of influence over whether health care organisations purchased their products. These findings are supported by our earlier preliminary work relating to barriers to user involvement [39,40]. The examples that did emerge of consulting users via formal methods tended to occur at the end of the MDDD process, after the device had been designed and developed, where users played a passive role in aiding the manufacturer in verifying the efficacy of the device, primarily for the purposes of satisfying the requirements of external standards and purchasing agencies. One key reason for avoiding user involvement was the difficulty that this presents when attempting to gain ethical approval for user elicitation studies. Manufacturers conceded that, although avoiding user involvement may perhaps be sub-optimal, it is necessary if they are to remain competitive within a fast moving market. Warlow [41] and Stewart et al. [42] both propose that over-regulation of clinical research poses a significant threat to public health, and the findings of this study support this view. In particular, our findings suggest that the seemingly unnecessary bureaucracy associated with obtaining ethical approval for non-interventional and low-risk studies, that seek to capture user opinions and requirements, leads to manufacturers excluding the voice of the user from the development process.\nThis study reveals a general perception among manufacturers that proactively involving the user slows down the MDDD process. Rather, a reactive 'no bad news is good news' stance is taken, with users' input only taken into account if it is presented in the form of complaints or feedback on devices that have already been released into the health care system. The only evidence of engagement with formal methods of user involvement is apparent when the use of such methods is mandatory, dictated to manufacturers by standards and purchasing agencies. Given the findings of this study, the appropriate employment of formal methods by manufacturers is unlikely to occur to significant levels without deliberate efforts to encourage and support manufacturers in doing so.
The following recommendations propose where some of these efforts should focus, in order to achieve increased levels of user engagement by manufacturers, as is now stipulated by IEC 62366 [14]:\n- Research to better understand the requirements of manufacturers, in terms of what is required from human factors engineering methods in order to make their use more feasible and accessible in practice.\n- Provision of training on the use and benefits of employing formal human factors engineering methods at every stage of the MDDD process.\n- Health care providers should implement formal processes to ensure better communication and synergy between those making purchasing decisions and the actual users of the devices.\n- Provisions should be made within the ethical approval process that enable medical device manufacturers to engage more easily with users, with minimal levels of bureaucracy, whilst also ensuring that all research is conducted in an ethical manner that protects healthcare staff and patients.", "The authors declare that they have no competing interests.", "All authors contributed to the conceptual design of this study. AGM and JB carried out primary data analysis and drafted the manuscript. MPC and JLM contributed to data collection and provided domain expertise. All authors contributed to redrafting the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6947/11/15/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Human Factors Engineering Methods", "The challenge for industry", "Section summary", "Methods", "Results", "Who is the User?", "Methods used", "Perceived value and barriers to user involvement", "The nature of user contributions", "Discussion and Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "From a human factors engineering perspective, ensuring the development of high quality and well designed medical devices that are in tune with patient and user needs, require formal human factors engineering methods (also known as user-centred usability engineering methods) to be used at every stage of the MDDD process [1]. Employing such formal methods ensures that the device design process considers appropriately the environment in which the device is to be used, the work patterns of users and the specific individual needs of the user, which could be any individual involved in the use of the device including health professionals, patients, and lay care givers [2]. In particular, human factors engineering methods highlight the importance of considering the needs of the user when designing and developing devices at the earliest stage of defining the device concept and then at every subsequent stage of the device development process [3]. The importance and value of focusing on user needs has been recognised as having a number of health related benefits including; improved patient safety [4,5], improved compliance and health outcomes [6], and higher levels of patient and user satisfaction [7]. Furthermore, employing human factors engineering methods throughout the MDDD process has been said to substantially reduce device development time because usability issues are identified and attended to prior launch, and hence avoid costly design changes and product recalls [8,9].\nGiven the proposed benefits of developing medical devices according to a human factors engineering perspective, medical device standards bodies are increasingly recognising the importance of these methods and the important role they play in developing safe and usable medical devices. Currently the two most important regulations for medical device developers are the European Commission (EC) Medical Device Directive 93/42/EEC [10] and the United States (US) Food and Drug Administration (FDA) regulations since medical devices must comply with these regulations in order to be sold throughout Europe and the US [11]. These regulations promote the use of human factors engineering methods within the MDDD process, for example, the US FDA regulations specify that medical device developers must demonstrate that human factors principles have been used in the design of the device so as to ensure that use-related hazards have been identified, understood and addressed. Some guidance is also given suggesting how human factors principles may be integrated into the MDDD process [12]. A further standard that medical device developers have been obliged to adhere in recent years is the International Electrotechnical Commission (IEC) standard 60601-1-6 [13], which have equivalent European Standards (ES) and British Standards (BS), requiring medical device developers to incorporate human factors engineering processes to ensure patient safety, stating that \"The manufacturer should conduct iterative design and development. Usability engineering should begin early and continue through the equipment design and development lifecycle\". More recently, the usability standard 'IEC 62366: Medical devices - Application of usability engineering to medical devices' [14] has superseded IEC 60601-1-6, and extends the requirement for medical device developers to incorporate human factors engineering methods in the development of all medical devices, not just electrical devices. 
In early 2010 IEC 62366 was harmonised under the EU Medical Device Directive, meaning that it is now a legal requirement for medical device developers to formally address the usability of a device before placing it on the market anywhere in Europe. In order to comply with this standard, medical device developers must document the process in detail within a Usability Engineering File.\n[SUBTITLE] Human Factors Engineering Methods [SUBSECTION] The MDDD process may be considered to be made up of four key stages. At stage one, user needs are established and scoping exercises are carried out with users. Stage two aims to validate and refine the concept. Stage three involves designing the device, and Stage four involves evaluation of the device.\nFigure 1 presents the medical device development lifecycle, and the associated user-centred design methods that may be used at each respective stage.\nMedical device development lifecycle adapted from [5].\nIn some of our earlier work, as part of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) research activity, we carried out a rigorous review of the methods that have been employed in the MDDD process as presented in the academic literature [5,11,15]. MATCH is a UK research initiative between five universities which is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Department of Trade and Industry (DTI). The aim of MATCH is to provide support to the healthcare sector by developing methods to assess the value of medical devices at every stage of the MDDD process, with a focus on working with industrial partners to solve real world problems.\nTo provide some insight into what a selection of these methods involve, a brief description of the most frequently occurring methods within the device development lifecycle is now provided. A detailed description of all methods, and of how these may be applied within the MDDD process, is provided in [5].\nFocus groups involve group discussion, typically with 8-10 users, with a moderator guiding and facilitating the group through relevant topics of discussion. They may be appropriate for use at stage one, two or four of the device development lifecycle. Focus groups are used in a wide variety of industry settings and may be conducted at a comparatively low cost compared with other methods such as usability testing.\nInterviews are one of the most common approaches employed in user centred research, and may be of value at stages one, two and four of the development pathway. They can be rapidly deployed, and are relatively inexpensive to carry out. Interviews enable the researcher to access a broad range of opinion regarding a device, whilst also allowing rich and detailed opinions to be collected.\nUsability testing, typically performed at stage four, assesses usability as a function of three criteria: effectiveness, efficiency and satisfaction. Usability testing protocols involve users carrying out specific tasks with a specific device, whilst usability measures are taken. Effectiveness is concerned with whether the user is able to successfully accomplish a given task. Efficiency may include a count of the number of user actions or the amount of time required to carry out a task. Satisfaction may typically be measured subjectively by means of a satisfaction questionnaire, completed after using a particular device.\nHeuristic evaluation is a more rapid form of usability test which may be deployed by developers, perhaps prior to carrying out usability tests.
Developers step through the device features and functionality and check the extent to which it complies with a pre-determined list of characteristics (heuristics/rules of thumb) that may have been defined by earlier UCD design activities. This type of evaluation would normally be applied at stage four of the development pathway, once a tangible device has been developed.\n[SUBTITLE] The challenge for industry [SUBSECTION] Although a large number of human factors engineering methods are available that may be employed within the MDDD process, previous research indicates that medical device manufacturers often avoid employing such methods due to lack of resources and the perception that such methods are often too resource intensive [16]. The literature in the area of user involvement suggests that there are a number of risks for manufacturers associated with a lack of engagement with users within the product development process. For example, Cooper [17] suggests that failure to build in the voice of the customer is one of the key reasons for the failure of developing effective and innovative products. Certainly in the realm of ICT products, Kujala [18] suggests that attending to user perspectives increases the likelihood of a successful product. Alongside this, however, other work has highlighted some of the challenges of, and barriers to, involving users in the product development process. Brown [19] and Butler [20] both suggest that the volume of data generated by field studies with users may be costly and difficult to analyse, with no obvious route to informing development. When considering the factors leading to successful innovation, van der Panne et al. [21] note that although it is incontrovertible that good market research is associated with a successful product, the role of customer involvement in innovation remains contentious. They suggest that customer involvement at an early stage may tend to gravitate towards imitative products, being less able to envision or express novelty and thus, \"bias innovation efforts towards incremental innovation\" [[21] p.326]. User preferences may change over time and engaging with a limited range of users may result in over specification of the product [22]. From the developer's perspective, their criteria for the success of user involvement may be different from those of the people (often academics or researchers) who actually do user engagement work [23,24]. Furthermore, user information that is based on formal methods of elicitation may be at variance with the representations of the user held by developers themselves [22] and may thus not readily be appropriated within the organisational culture and structure of the lead organisation [25]. Many of the above examples are drawn from the more general area of user involvement within the product development process, since there is limited research specifically addressing such issues within the medical device development domain. There is a lack of existing primary research that explores the challenges and benefits of involving users specifically within the medical device development process, particularly from a medical device manufacturer's perspective. Examples of the work that does exist within this domain include Grocott et al. [26], who carry out pioneering work in the field, proposing a valuable model of user engagement in medical device development, and focus on the practical issues around how user needs may be captured throughout the MDDD process [26].
Shah and Robinson [16] carry out secondary research in the form of a literature review of research that has involved users in some way in the design and development of medical devices, exploring the barriers and benefits to involving users as may have been reported in the reviewed literature. The findings of this study are in line with the perspective offered above that whilst user involvement in MDDD process is likely to benefit the user in terms of developing devices better suited to the users' needs, MDDD methods may be perceived as highly resource intensive and hence often not a feasible option for some manufacturers.\nReflecting on the body of literature above, and in particular on the perceived benefits and barriers to user involvement within the wider domain of technology product development, it seems that there are a number of key factors that impact on effective engagement with users. Perhaps most notable is the notion that product developers consider user needs research to be disproportionately costly, in light of the perceived benefits and pay-off for engaging in such activities. Furthermore, user needs research is perceived by developers to generate unmanageable volumes of data and there does not appear to be any clear route through to informing the development of the product once the user needs data has been collected and analysed. Whilst all of the perceived drawbacks, if well-founded, may be notable reasons for avoiding engagement with user needs research, it is not clear what the underlying reasons are for these factors. For example, a commonly held view by some developers is that the key function of human factors engineering methods is to serve as a means of facilitating a 'cake-frosting' exercise [27], by which 'superficial' design features may be 'painted' onto the device at the end of the development process. Such views of human factors engineering methods do not lend themselves to positive engagement or indeed a realisation of the full complexity and pay-off that such methods may potentially deliver if they were to be deployed with methodological rigour at appropriate points within the design process [28]. Certainly from an academic and/or human factors engineer's point of view, it may be argued that many of these factors may be overcome by increased awareness and better educating industrial developers in human factors engineering methods and their application. However, is it the case that with more training in the area of user research methods research, product developers may overcome these reservations, recognise the potential opportunities these methods promise, and develop the necessary skills to deploy methods and analyse data in a timely fashion, or are these methods indeed incompatible within the industrial context? Should user needs research be outsourced and delivered by the human factors engineering experts, in order to overcome the overhead in acquiring appropriate skills and level of understanding to actually effectively deploy these methods? Should these methods be adapted to make them more lightweight and easier to implement, hence making them more fit for purpose? These are all questions that need to be answered if the goal of incorporating the user into the product development process is to be realised. 
More specifically within the MDDD process, there are likely to be domain specific factors and conditions that influence the uptake of such methods.\nTherefore, although human factors engineering methods are presented, from a theoretical perspective, as being of value at every stage of the MDDD process, manufacturers may not actually be employing these methods in practice. If their full benefits are to be realised, more primary research is needed to better understand manufacturers' perspectives and motivations regarding such methods. Therefore, the aim of this study is to gain first-hand and detailed insights into what medical device manufacturers' attitudes are towards engaging with users, and what the perceived value and barriers are of doing so. Furthermore, we aim to explore which methods are used, and what device manufacturers' attitudes are towards employing such methods.\n[SUBTITLE] Section summary [SUBSECTION] Thus far, we have detailed a range of formal methods that may be used to elicit user perspectives as part of the process of medical device development and noted the range of regulatory requirements to involve users. We have suggested that the 'in principle' value of systematically seeking user input may be somewhat at variance with the day to day experiences of manufacturers, where user involvement may be seen as a barrier to speedy and innovative product development. There is currently no evidence about this in respect of medical device development. The remainder of this paper addresses this issue and is structured as follows. In the next section, we describe a study which involved carrying out in-depth interviews with 11 medical device manufacturers, exploring the perceived value of users in the MDDD process. In the following section, the results of the analysis of these interviews are presented. The paper concludes with suggested recommendations for future research and practice in this area.", "The MDDD process may be considered to be made up of four key stages.
At stage one, user needs are established and scoping exercises are carried out with users. Stage two aims to validate and refine the concept. Stage three involves designing the device, and Stage four involved evaluation of the device.\nFigure 1 presents the medical device development lifecycle, and the associated user-centred design methods that may be used at each respective stage.\nMedical device development lifecycle adapted from [5].\nIn some of our earlier work, as part of the Multidisciplinary Assessment of Technology Centre for Healthcare (MATCH) research activity, we carried out a rigorous review of the methods that have been employed in the MDDD process as presented in the academic literature [5,11,15]. MATCH is a UK research initiative between five universities which is funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Department of Trade and Industry (DTI). The aim of MATCH is to provide support to the healthcare sector by developing methods to assess the value of medical devices at every stage of the MDDD process, with a focus on working with industrial partners to solve real world problems.\nTo provide some insight into what a selection of these methods involve, a brief description of the most frequently occurring methods within the device development lifecycle are now provided. For a detailed description of all methods and how these may be applied within the MDDD is provided in [5].\nFocus groups involve group discussion typically between 8-10 users, with a moderator guiding and facilitating the group through relevant topics of discussion. They may be appropriate for use at stage one, two or four of the device development lifecycle. Focus groups are used in a wide variety of industry settings and may be conducted at a comparatively low cost compared with other methods such as usability testing.\nInterviews are one of the most common approaches employed in user centred research, and may be of value at stages one, two and four of the development pathway. They can be rapidly deployed, and are relatively inexpensive to carry out. Interviews enable the researcher to access a broad range of opinion regarding a device, whilst also allowing rich and detailed opinions to be collected.\nUsability testing, typically performed at stage four, advocates is a function of three criteria: effectiveness, efficiency and satisfaction. Usability testing protocols involve users carrying out specific tasks with a specific device, whilst usability measures are taken. Effectiveness is concerned with whether the user is able to successfully accomplish a given task. Efficiency may include a count of the number of user actions or the amount of time required to carry out a task. Satisfaction may typically be measured subjectively by means of a satisfaction questionnaire, completed after using a particular device.\nHeuristic evaluation is a more rapid form of usability test which may be deployed by developers, perhaps prior to carrying out usability tests. Developers step through the device features and functionality and check the extent to which it complies with pre-determined list of characteristics (heuristics/rules of thumb) that may have been defined by earlier UCD design activities. 
This type of evaluation would normally be applied at stage four of the development pathway, once a tangible device has been developed.", "Although a large number of human factors engineering methods are available that may be employed within the MDDD process, previous research indicates that medical device manufacturers often avoid employing such methods due to lack of resources and the perception that such methods are often too resource intensive [16]. The literature in the area of user involvement suggests that there are a number of risks for manufacturers associate with a lack of engagement with users within the product development process. For example, Cooper [17] suggests that failure to build in the voice of the customer is one of the key reasons for the failure of developing effective and innovative products. Certainly in the realm of ICT products Kujala [18] suggests that attending to user perspectives increase the likelihood of a successful product. Alongside this however, other work has highlighted some of the challenges of, and barriers to, involving users in the product development process. Brown [19] and Butler [20] both suggest that the volume of data generated by field studies with users may be costly, difficult to analyse with no obvious route to informing development. When considering the factors leading to successful innovation, van der Panne et al. [21] note that although it is incontrovertible that good market research is associated with a successful product, the role of customer involvement in innovation remains contentious. They suggest that customer involvement at an early stage may tend to gravitate towards imitative products, being less able to envision or express novelty and thus, \"bias innovation efforts towards incremental innovation\" [[21] p.326]. User preferences may change over time and engaging with a limited range of users may result in over specification of the product [22]. From the developer's perspective, their criteria for the success of user involvement may be different than those (often academics or researchers) who actually do user engagement work [23,24]. Furthermore, user information that is based on formal methods of elicitation may be at variance with the representations of the user held by developers themselves [22] and may thus not readily be appropriated within the organisational culture and structure of the lead organisation [25]. Many of the above examples are drawn from the more general area of user involvement within the product development process, since there is limited research in this area specifically addressing such issues within the medical device development domain. There is a lack of existing primary research that explores the challenges and benefits of involving users specifically within the medical device development process, particularly from a medical device manufacturer's perspective. Examples of some of the work that does exist within this domain includes, Grocott et al. [26] who carry out pioneering work in the field, proposing a valuable model of user engagement in medical device development, and focus on the practical issues around how user needs may be captured throughout the MDDD process [26]. Shah and Robinson [16] carry out secondary research in the form of a literature review of research that has involved users in some way in the design and development of medical devices, exploring the barriers and benefits to involving users as may have been reported in the reviewed literature. 
The findings of this study are in line with the perspective offered above that whilst user involvement in MDDD process is likely to benefit the user in terms of developing devices better suited to the users' needs, MDDD methods may be perceived as highly resource intensive and hence often not a feasible option for some manufacturers.\nReflecting on the body of literature above, and in particular on the perceived benefits and barriers to user involvement within the wider domain of technology product development, it seems that there are a number of key factors that impact on effective engagement with users. Perhaps most notable is the notion that product developers consider user needs research to be disproportionately costly, in light of the perceived benefits and pay-off for engaging in such activities. Furthermore, user needs research is perceived by developers to generate unmanageable volumes of data and there does not appear to be any clear route through to informing the development of the product once the user needs data has been collected and analysed. Whilst all of the perceived drawbacks, if well-founded, may be notable reasons for avoiding engagement with user needs research, it is not clear what the underlying reasons are for these factors. For example, a commonly held view by some developers is that the key function of human factors engineering methods is to serve as a means of facilitating a 'cake-frosting' exercise [27], by which 'superficial' design features may be 'painted' onto the device at the end of the development process. Such views of human factors engineering methods do not lend themselves to positive engagement or indeed a realisation of the full complexity and pay-off that such methods may potentially deliver if they were to be deployed with methodological rigour at appropriate points within the design process [28]. Certainly from an academic and/or human factors engineer's point of view, it may be argued that many of these factors may be overcome by increased awareness and better educating industrial developers in human factors engineering methods and their application. However, is it the case that with more training in the area of user research methods research, product developers may overcome these reservations, recognise the potential opportunities these methods promise, and develop the necessary skills to deploy methods and analyse data in a timely fashion, or are these methods indeed incompatible within the industrial context? Should user needs research be outsourced and delivered by the human factors engineering experts, in order to overcome the overhead in acquiring appropriate skills and level of understanding to actually effectively deploy these methods? Should these methods be adapted to make them more lightweight and easier to implement, hence making them more fit for purpose? These are all questions that need to be answered if the goal of incorporating the user into the product development process is to be realised. More specifically within the MDDD process, there are likely to be domain specific factors and conditions that influence the uptake of such methods.\nTherefore, from a theoretical perspective, human factors engineering methods are presented as being of value at every stage of the MDDD process, manufacturers may not actually be employing these methods in practice. If their full benefits are to be realised, more primary research is needed to better understand manufacturers' perspectives and motivations regarding such methods. 
Therefore, the aim of this study is to gain first hand and detailed insights into what medical device manufacturers' attitudes are towards engaging with users, and what the perceived value and barriers are of doing so. Furthermore, we aim to explore which methods are used, and what device manufacturers' attitudes are towards employing such methods.", "Thus far, we have detailed a range of formal methods that may be used to elicit user perspectives as part of the process of medical device development and noted the range of regulatory requirements to involve users. We have suggested that the 'in principle' value of systematically seeking user input may be somewhat at variance with the day to day experiences of manufacturers where user involvement may be seen as a barrier to speedy and innovative product development. There is currently no evidence about this in respect of medical device development. The remainder of this paper addresses this issue and is structured as follows. In the next section, we describe a study which involved carrying out in-depth interviews with 11 medical device manufacturers which explored the perceived value of users in the MDDD process. In the following section, the results of the analysis of these interviews are then presented. The paper concludes with suggested recommendations for future research and practice in this area.", "Eleven interviews were carried out with senior members of staff in each company. Recruitment was by convenience sampling: participants were identified as a result of their employment by companies that were industrial partners of the MATCH collaboration. To provide case examples for discussion, company representatives were asked to choose one medical device to discuss during the interview, which provided an example of the MDDD process they engaged in. The majority of the devices chosen for discussion were intended for use by surgeons within a clinical setting. Interviews lasted approximately one hour in duration, and were carried out by members of the MATCH research team. One of the topics of discussion was around device users. Interviewees were asked about the role the user is perceived to play within the design and development of the example medical device, what they saw as the barriers and benefits to involving users within the MDDD process, and what human factors engineering methods they used (if any) within the MDDD process. All interviews were recorded and researchers also took notes during the interviews. Before beginning, the interviewer explained the purpose and format of the interview to the participant and informed consent to participate, and for the audio recording of the interview, was obtained. Table 1 summarises the position held within the company for each interviewee, the treatment area or clinical use for the device, the intended users of the devices, and the actual individuals consulted in the MDDD process by the respective manufacturers.\nCompany details and intended device users\nA thematic analysis of the textual dataset was then carried out. Detailed descriptions of what the thematic analysis process involves are available in [29-32]. In brief, thematic analysis facilitates the effective and rigorous abstraction of salient themes and sub-themes from a complex and detailed textual dataset [33], hence is particularly suitable in this context. 
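As an illustration of the kind of structure a coding frame (described in the steps below) can take, the brief sketch below records a code name, a short description and example excerpts for each concept. The representation is a hypothetical illustration rather than the tool used in this study, which does not describe any particular software; the example codes and excerpts simply echo themes reported later in the paper.

# Hypothetical sketch of a coding frame: each code has a name, a short
# description, and example excerpts of text that fit the concept.
coding_frame = {
    "who_is_the_user": {
        "description": "Which individuals manufacturers treat as 'the user' when seeking input",
        "examples": [
            "most of our market research is done with the surgeons, not with the end users",
        ],
    },
    "methods_used": {
        "description": "Formal or informal methods used to capture user information",
        "examples": [
            "We haven't got a specific mechanism for capturing it.",
        ],
    },
}

def excerpt_counts(frame):
    # Number of coded excerpts recorded against each code.
    return {code: len(entry["examples"]) for code, entry in frame.items()}

print(excerpt_counts(coding_frame))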
The following steps were taken to analyse the data collected during the interview process; Figure 2 provides an overview of this.

Figure 2. Process of thematic analysis.

Initially, all recordings of the interview sessions were transcribed into text format. After transcription, the textual dataset was initially perused to conceptualise the overarching themes that existed within the transcripts at a high level. These were noted in a coding frame, with each concept assigned a code name, a description and examples of text that fit each concept. The dataset was then examined iteratively, enabling themes and sub-themes to be developed further. These were spliced and linked together, and text relating to each category and sub-category was appropriately labelled. The first and second authors coded the data and discussed inconsistencies where these arose until a clear consensus on the main themes was reached. When no further refinement of the categorisation could be derived, a final group of categories and sub-categories representative of the transcripts was produced. The main themes are those drawn from multiple contributions and that represent issues that are clearly central to the participants themselves. Within these themes, we have explored areas of consensus and diversity as they were presented by interviewees.

As a result of carrying out a thematic analysis of the interviews with manufacturers, four high-level themes emerged relating to manufacturers' views of user involvement within the MDDD process. These are as follows: Who is the user?; Methods used; Perceived value and barriers to user involvement; The nature of user contributions. The remainder of this section presents the findings of our study according to these themes.

Who is the User?

Discussions relating to the range of individuals that are consulted during the MDDD process revealed that there was a mismatch between the users that were consulted, and those that would actually use the device in practice. Some manufacturers believed that the needs of the patient do not originate from the patient themselves, and that patients' needs are better articulated through a hierarchy of health professionals including surgeons and 'clinical champions'.

#2: "The need actually will probably be established by the clinical fraternity....All you have to do after that is convince the people on down the chain, from the hospital clinical researchers right down to the guy in the street who says 'It's a good idea to have one of these. That's where the need is identified. It is identified really at the top, and then it's taught down, if you follow me"

In light of this, manufacturers had a preference for seeking input from more senior health care staff over less senior staff, regardless of who would actually use the device in practice. The assumption was that senior staff members were more than capable of speaking on behalf of less senior staff, even though manufacturers acknowledged that it was likely that senior staff members would never actually use the device. This was the case even in scenarios where the patient was seen to be the main user of the device.
For example, Manufacturer #2 who is involved in the development of Automated External Defibrillators (AEDs), recognised that a significant proportion of the intended users of the device included members of the public, however, they did not see it as necessary to consult members of the public, but rather consulted senior health professionals in the early stages of device design and development. Input from nurses was also considered to be less desirable than input from more senior health care staff such as surgeons. For example, Manufacturer #11 identified nurses as the main user of their product, however, did not consider it necessary to consult nurses in the design and development process. It was felt that surgeons made the decisions on behalf of nurses and patients, and therefore it was logical to consult them to identify nurse and patient needs, as is articulated below.\n#11: \"Surgeons may be there making the decision or recommending which models to buy, but it might be the nurses who are actually using the unit...the surgeon has made the decision, but he doesn't necessarily have to actually work with it, so in your case, the controllability aspect, the patient actually understanding how to operate the unit, the clinician has made the decision and has prescribed that particular device...\"\nSurgeons were also considered by Manufacturer #11 as having sufficient knowledge to act as representatives for home patients. The motivation for this was as a direct result of the way in which the device is introduced and promoted to the patient. Surgeons do the marketing and promoting of the product to the patients, and therefore it was seen as important that the device is primarily designed according to the surgeons' requirements. In the quote below, manufacturer #11 justifies their motivation for valuing the surgeons' opinion over the patient users.\n#11: \"but, by and large, most of our market research is done with the surgeons, not with the end users, rightly or wrongly...but how it will be marketed will be through the healthcare professional, who will have to sell it effectively to the patient, show them how to use it.\"\nSimilarly, in the event of the user group being general health professionals, the opinions of a small number of 'clinical champions' was sought by Manufacturer #8 as well as purchasing representatives from the health authority. It was deemed more important to meet the needs of the individuals who were responsible for making purchasing decisions or held most influence over them, as opposed to focusing on the needs of the individuals that would be using the devices on a day to day basis. Therefore MDDD activity often seemed to be carried out a strong focus on how effort would translate to sales. In the following quote, Manufacturer #4 is asked who would be consulted in identifying design requirements for the development of the given medical device:\n#4: P1: \"...it's through the doctors to, its [name]'s contact with the hospitals out there, now then I doubt it would be at doctor level, it would be more the managerial level of the hospital...\"\nI2: ...so in effect you are capturing a user need in that way...\nP: \" ...yes.\"\n(Note: 1 P: Denotes the participant's response, 2 I: Denotes the interviewer's questioning)\nManufacturer #1 also focuses on those making/influencing purchasing decisions, which includes management and administrative staff. Manufacturer #1 articulates this shift in focus when asked who they view as their customer:\n#1: \"Orthopaedic surgeons really. 
Although it is used in patients, it is the orthopaedic surgeons who decide what he will use. Sell to orthopaedic surgeons. Increasingly in the UK, we have to sell to the hospital management to justify why they should be using our bone graft substitutes as opposed to any other on the market..."

Manufacturer #8 stated that health professionals are considered to be the main user of their device. However, the opinions of purchasing representatives are the primary source used to inform device design and development, followed up by consultation with a small number of 'clinical champions' (typically high profile surgeons and well known experts in their field). When questioned on what value the user may add to the MDDD process, although couched in a humorous reply, it was clear that the usefulness of the user was located primarily in relation to sales-relevant information.

#8: "The most helpful would to give me the year three sales figures with absolute confidence. Year-1...[laughs]..."

There seems to be limited overlap between the individuals that will be using the device, and those that are consulted to inform the MDDD process. In particular, priority is given to those that hold more senior positions within the health care system. Therefore, surgeons, doctors and clinical champions were seen as more valuable sources for identifying user needs as opposed to those individuals who would actually use the device on a daily basis. Furthermore, from the manufacturers' point of view, the motivation for maximising sales seems to have conflated the distinction between the customer and the user. Therefore the needs of those that make purchasing decisions or indeed have most influence over these decisions are more salient than the needs of the user. This is certainly a more complex picture than the human factors engineering approach, which puts the user's needs at the centre of the design and development process.

Methods used

Given the wide range of formal methods that are available to engage with users in the MDDD process, manufacturers tended to use only a very limited range of methods to capture information from users and patients. In line with the analysis above regarding the nature of the preferred user to consult, the most typical method used to gain input from users was via informal discussions with senior health professionals. Only one out of the eleven manufacturers (Manufacturer #8) stated that they regularly used some formal methods throughout the MDDD process, such as focus groups and questionnaires when developing their airway management device. Interestingly, the initial identification of a need for this device occurred as a result of six members of the company attending a postgraduate university course, which required the use of formal methods in order to explore and identify new device ideas. It was apparent that manufacturers do not feel that they have the time or resources to engage in rigorous formal user data collection methods, instead relying on a range of strategies including gut feel, instinct, and a personal belief that they understand the market place in which they operate, in order to identify and develop new devices.

The idea of employing formal methods is once again something that is not regarded as feasible, given the amount of resources available and the fact that manufacturers believe it is necessary to move quickly in order to remain competitive with their rivals.
The view is that consulting a large number of individuals is problematic in itself, as every person that is approached for feedback has a different view. Informal methods, however, are seen as offering versatile and rapid solutions, hence the belief that relying on gut feel and pressing ahead when the moment feels right is a more feasible and efficient solution. This point is articulated by Manufacturer #8 below:\n#8: \"The very fact that someone is willing to talk with you almost means that they have slightly different view. You can go on asking forever. It's that balance between have we got sufficient confidence in what we have here to move forward vs. just the generation of the information....being confident enough of its assurance. That's what you constantly face anyhow. I think that's the dilemma you always have. You can ask the users till the cows come home but you never get a new product. You ask 100,000 you get 99,999 different opinions!\"\nManufacturer #3 further echoes the observation that formal methods are rarely used for medical device development, however, informal discussions and observation are seen as more versatile and fit for purpose, :\n#3: P:\"you're introducing a medical device to people, first question they'll ask, is \"what will it do to help me?\" Second question will be \"how long is this going to take?\" In a busy clinic that's really important...If it takes too many clicks, then people won't use it because they are busy enough as it is.\"\nI: have you been able to capture that at all?\nP: \"Through our interaction with users. We haven't got a specific mechanism for capturing it.\"\nI: Do you use any formal methods for converting customer needs into product development?\nP: \"No.\"\nThere was typically the belief that there was little need to consult the actual users formally regarding a new innovation, but rather contacting what they referred to as a 'clinical champion', was sufficient to qualify the feasibility and validity of a given new innovation. For example, Manufacturer #8, responding to a question regarding how the feasibility of new device ideas are qualified responded:\n#8: \"Every project will have a clinical champion. They will typically be involved, sometimes they come down here to meetings. We have a list of a couple of dozen clinicians that I can pick the phone up at anytime throughout the world and say \"what do you think of this?\".\nWhen asked whether there are any formal methods used within their organisation, Manufacturer #6 stated that formal methods are used within his organisation, however, they are not relevant for this particular product. Once again, formal methods, in this case example, were considered to be too bureaucratic, time consuming, and not applicable given the device and development scenario. Informal methods, such as ad hoc discussions with senior health professionals were likely to identify the majority of user design needs, and were also more appropriate.\n#6:\"Yes we do, but I couldn't apply that to hip replacement at this point in time. If the surgeon tells me \"that this catheter is a bit too stiff, and could you make it a bit softer, if you like, or a bit more flexible, yet do exactly the same task?\" we will do that - it's for everyone's benefit\"\nIn the above example, a pragmatic approach is taken by manufacturers to make modifications to devices, adopting the belief that if a problem is encountered by one individual, then it is likely to be a problem for the majority, providing it seems to be a reasonable request. 
The intuition of the manufacturer plays a major part in the process of developing products that are useful to the user, and responding to their needs.

Perceived value and barriers to user involvement

There was limited evidence that direct elicitation of user views was seen by manufacturers as being of value to the MDDD process. This appeared to be particularly the case in relation to patient users. For example, Manufacturer #11, discussing their device that was purely aimed at the home patient market, did not believe that patient involvement in the MDDD process was a particularly wise expenditure of resources. This was explicitly linked to the degree of power and influence patients were perceived to have over levels of uptake of the device. In response to being asked whether they would like to involve the patient more in the MDDD process, Manufacturer #11 replied:

#11: "if they are highly powered (i.e. influential in terms of clinical decision making), if they have no power then we have to ethically try to make sure that we don't harm the patient and [that] what we do is for their benefit, but appealing to them may actually be a waste of resources, so we have to make sure we don't pretend that they have a sway when they really don't, you know? "

Once again, the above statement reinforces the notion that patients are seen as being at the bottom of the hierarchy of influence, and hence investing resources and effort to find out their opinion is not considered an effective or efficient strategy.
Manufacturer #2, who also saw their device as being targeted at home market, acknowledged that the patient user is becoming increasingly important, particularly as they are now selling more directly to the home patient. However, the majority of individuals involved in the MDDD process are still predominantly senior health professionals as opposed to patients, who may play a stronger role in the clinical trials phase, after the device has been designed and developed. Therefore, the role of the patient user is considered to play a passive role, more in terms of verifying the value of an already developed device, as opposed to being involved in the initial design and development stages. This is discussed in more detail in the next section.\nAnother factor discouraging manufacturers from involving patient users in the MDDD process is the prospect of having to obtain ethical approval in order to carry out the research. When asked whether there are any difficulties in obtaining ethical approval for involving patient users in the MDDD process, Manufacturer #3 stated:\n#3: \"Yes. A huge problem. In my experience if you are not affecting patient care, patient throughput, you can get 'ethics', but if you are you won't get it, or you might but it will take years. So you have to design your trial so as not to affect patient care.\"\nManufacturer #1 also comments on the ethical approval process and the R&D committees that must be attended to when employing more formal methods users, both patient users and professional users in the collection of clinical data.\n#1: R&D committees. That's another thing. You start a study and surgeon wants to collect clinical data to be sure that they want to use this. Their experience that this is the product they want to use. You have to notify the hospital R&D and then suddenly they see it as an R&D project, so they throw on their overheads and it's an extra hurdle to go through. They start reviewing it in addition to ethical board. This is what has changed in the last few years. In those days you notified the larger University hospitals. Increasingly has to be approved. Some are straightforward; others ask extra questions that delays process - UK research.\nThis perceived difficulty with obtaining ethical approval may be one reason why manufacturers tend not to use formal methods, or engage in systematic research activity in order to inform the MDDD process, particularly at the early stages of development. Indeed manufacturers appear to actively avoid involving professional and patient users in the process, in fear that the MDDD process could be delayed for years as a result of the ethical approval process. As identified in section 3.2, informal discussion with clinical staff is seen as a more realistic, pragmatic, and feasible route to informing the MDDD process, which can be carried out informally and hence without ethical clearance. Manufacturer #3 goes on to describe the strategy that they use to get around the challenge of obtaining ethical approval:\n#3: \"So you take the least hard route, to not affect patient care in any way whatsoever, and design your study to sit on the back of that. So the study might not be optimal, but at least you can do it.\"\nSetting up device design and development activity so as to avoid the need for user involvement seems to be the 'method' of choice for some device manufacturers. 
Despite this being seen as a potentially sub-optimal approach, perhaps less effective in evaluating the extent to which design innovations may be accepted by users, it seems a route worth taking given the alternative of incurring long delays as a result of the ethical approval process.

The nature of user contributions

The measures used to evaluate the effectiveness of newly developed devices towards the end of the MDDD process echo the findings presented earlier: formal methods employed in any part of the MDDD process are seen by manufacturers as having the potential of slowing the process down, and incurring additional and unnecessary costs and overheads that could otherwise be avoided if a less formal approach was taken. Most commonly, manufacturers reported that the success of a new device is typically measured by the absence of customer complaints or 'bad news' emerging as a result of the device being used in the field. Manufacturer #1, in response to being asked how the success of their device is measured, responded as follows:

#1: "The absence of bad news. The fact that we have access to the surgeons who are using the product and any adverse effects would be reported and we would be aware early on."

With regards to seeking patient feedback about the success of a medical device in providing an effective health intervention, Manufacturer #3 takes a similar 'no bad news is good news' approach to the design and functioning of their product. When asked whether any patient feedback is sought about the device, the reply was:

#3: "Not explicitly, we haven't gone out to get it, but we get feedback though the users (clinicians). It's non-invasive, so as far as the patients concerned it's not a problem to use it.
No-one has said they don't want it done or had any problems with that.\"\nManufacturer #9, commenting on whether any formal methods are used for converting user needs into device design requirements, also indicated that they adopt a reactive stance to customer suggestions and complaints, which is their default position on such matters.\n#9: \"Erm, we certainly have regular meetings here where we will look at customer feedback, you know, whether it's customer complaints or customer suggestions or whatever, yes so, yes we do have a means to do that...\"\nSome evidence of formal user involvement did emerge from the interview data, however, it indicated that the majority of formal user involvement took place at the clinical trials stage, i.e. after the product had been developed, and manufacturers were at the stage of demonstrating its efficacy and clinically effectiveness. Therefore, in effect any considerations in terms of design preference of the medical device that may benefit the user in terms of their treatment or use of the device had already been made prior to this formal user involvement. Manufacturer #2 highlights the notion that the patient user is primarily seen as being of value when attempting to demonstrate the clinical effectiveness of a new medical device.. Interestingly, patient users are not seen as primarily informing the general design of the device at earlier stages of the MDDD process, at least not in a formal capacity. When asked whether formal user methods are employed within the MDDD process, Manufacturer #2 responded:\n#2: \"There is the formal method of gaining patient data which means at the moment for example we have conducted clinical trials, clinical investigations. All sorts of clinical information gathering is going on at the [hospital name]. We are also... we tend to gather information first of all on the effectiveness or efficacy of it...Any advance, any variation that is likely to happen to the unit, any proposals for change that are going to improve the machine are all tested out in the clinical environment.\"\nManufacturer #5 reported that the key driver for collecting formal user data, relating to the performance of a device, comes from the possibility that an organisation such as the National Institute for Clinical Excellence (NICE) may decide to investigate the efficacy of their device at some unknown point. Therefore, the key driver for formally collecting user generated data is to fulfil the potential future requirements of external standards or purchasing agencies. The motivation to collect user facing data does not appear to be borne out of an inherent belief, on the manufacturer's behalf, that this user data would add any significant value in terms of fulfilling their own need to develop more effective devices or indeed learn more about the effectiveness and efficiency of their own device.\n#5: I think, if you take an organisation like NICE as a customer, or any of the other health technology organisations, one doesn't know whether your particular intervention is going to be assessed by them, and if so at what point, and one can see that as [the product] gains momentum and the NHS starts to look at what its spending on [this] surgery, then it may be something that NICE feel, or NICE get directed to take a look at. 
So it's a kind of a problem to us to understand when that's going to happen and it will certainly be a challenge; we have to therefore make sure that we are gathering the evidence in case they do.\"\nThe measures used to evaluate the effectiveness of newly developed devices towards the end of the MDDD process echo the findings presented earlier, that formal methods employed in any part of the MDDD process are seen by manufacturers as having the potential of slowing the process down, and incurring additional and unnecessary costs and overheads that otherwise could be avoided if a less formal approach was taken. Most commonly, manufacturers reported that the success of a new device is typically measured by the absence of receiving customer complaints or 'bad news' emerging as a result of the device being used in the field. Manufacturer #1, in response to being asked how the success of their device is measured responded as follows:\n#1: \"The absence of bad news. The fact that we have access to the surgeons who are using the product and any adverse effects would be reported and we would be aware early on.\"\nWith regards to seeking patient feedback about the success of a medical device in providing an effective health intervention, Manufacturer #3 takes a similar 'no bad news is good news' approach to the design and functioning of their product. When asked whether any patient feedback is sought about the device, the reply was:\n#3: \"Not explicitly, we haven't gone out to get it, but we get feedback though the users (clinicians). It's non-invasive, so as far as the patients concerned it's not a problem to use it. No-one has said they don't want it done or had any problems with that.\"\nManufacturer #9, commenting on whether any formal methods are used for converting user needs into device design requirements, also indicated that they adopt a reactive stance to customer suggestions and complaints, which is their default position on such matters.\n#9: \"Erm, we certainly have regular meetings here where we will look at customer feedback, you know, whether it's customer complaints or customer suggestions or whatever, yes so, yes we do have a means to do that...\"\nSome evidence of formal user involvement did emerge from the interview data, however, it indicated that the majority of formal user involvement took place at the clinical trials stage, i.e. after the product had been developed, and manufacturers were at the stage of demonstrating its efficacy and clinically effectiveness. Therefore, in effect any considerations in terms of design preference of the medical device that may benefit the user in terms of their treatment or use of the device had already been made prior to this formal user involvement. Manufacturer #2 highlights the notion that the patient user is primarily seen as being of value when attempting to demonstrate the clinical effectiveness of a new medical device.. Interestingly, patient users are not seen as primarily informing the general design of the device at earlier stages of the MDDD process, at least not in a formal capacity. When asked whether formal user methods are employed within the MDDD process, Manufacturer #2 responded:\n#2: \"There is the formal method of gaining patient data which means at the moment for example we have conducted clinical trials, clinical investigations. All sorts of clinical information gathering is going on at the [hospital name]. We are also... 
we tend to gather information first of all on the effectiveness or efficacy of it...Any advance, any variation that is likely to happen to the unit, any proposals for change that are going to improve the machine are all tested out in the clinical environment.\"\nManufacturer #5 reported that the key driver for collecting formal user data, relating to the performance of a device, comes from the possibility that an organisation such as the National Institute for Clinical Excellence (NICE) may decide to investigate the efficacy of their device at some unknown point. Therefore, the key driver for formally collecting user generated data is to fulfil the potential future requirements of external standards or purchasing agencies. The motivation to collect user facing data does not appear to be borne out of an inherent belief, on the manufacturer's behalf, that this user data would add any significant value in terms of fulfilling their own need to develop more effective devices or indeed learn more about the effectiveness and efficiency of their own device.\n#5: I think, if you take an organisation like NICE as a customer, or any of the other health technology organisations, one doesn't know whether your particular intervention is going to be assessed by them, and if so at what point, and one can see that as [the product] gains momentum and the NHS starts to look at what its spending on [this] surgery, then it may be something that NICE feel, or NICE get directed to take a look at. So it's a kind of a problem to us to understand when that's going to happen and it will certainly be a challenge; we have to therefore make sure that we are gathering the evidence in case they do.\"", "Discussions relating to the range of individuals that are consulted during the MDDD process revealed that there was a mismatch between the users that were consulted, and those that would actually use the device in practice. Some manufacturers believed that the needs of the patient do not originate from the patient themselves, and that patients' needs are better articulated through a hierarchy of health professionals including surgeons and 'clinical champions'.\n#2: \"The need actually will probably be established by the clinical fraternity....All you have to do after that is convince the people on down the chain, from the hospital clinical researchers right down to the guy in the street who says 'It's a good idea to have one of these. That's where the need is identified. It is identified really at the top, and then it's taught down, if you follow me\"\nIn light of this, manufacturers had a preference for seeking input from more senior health care staff over less senior staff, regardless of who would actually use the device in practice. The assumption was that senior staff members were more than capable of speaking on behalf of less senior staff, even though manufacturers acknowledged that it was likely that senior staff members would never actually use the device. This was the case even in scenarios where the patient was seen to be the main user of the device. For example, Manufacturer #2 who is involved in the development of Automated External Defibrillators (AEDs), recognised that a significant proportion of the intended users of the device included members of the public, however, they did not see it as necessary to consult members of the public, but rather consulted senior health professionals in the early stages of device design and development. 
Input from nurses was also considered to be less desirable than input from more senior health care staff such as surgeons. For example, Manufacturer #11 identified nurses as the main user of their product but did not consider it necessary to consult nurses in the design and development process. It was felt that surgeons made the decisions on behalf of nurses and patients, and therefore it was logical to consult them to identify nurse and patient needs, as is articulated below.\n#11: \"Surgeons may be there making the decision or recommending which models to buy, but it might be the nurses who are actually using the unit...the surgeon has made the decision, but he doesn't necessarily have to actually work with it, so in your case, the controllability aspect, the patient actually understanding how to operate the unit, the clinician has made the decision and has prescribed that particular device...\"\nSurgeons were also considered by Manufacturer #11 as having sufficient knowledge to act as representatives for home patients. The motivation for this was a direct result of the way in which the device is introduced and promoted to the patient. Surgeons do the marketing and promoting of the product to the patients, and therefore it was seen as important that the device is primarily designed according to the surgeons' requirements. In the quote below, Manufacturer #11 justifies their motivation for valuing the surgeons' opinion over the patient users.\n#11: \"but, by and large, most of our market research is done with the surgeons, not with the end users, rightly or wrongly...but how it will be marketed will be through the healthcare professional, who will have to sell it effectively to the patient, show them how to use it.\"\nSimilarly, in the event of the user group being general health professionals, the opinions of a small number of 'clinical champions' were sought by Manufacturer #8, as well as those of purchasing representatives from the health authority. It was deemed more important to meet the needs of the individuals who were responsible for making purchasing decisions or held most influence over them, as opposed to focusing on the needs of the individuals that would be using the devices on a day-to-day basis. Therefore MDDD activity often seemed to be carried out with a strong focus on how effort would translate to sales. In the following quote, Manufacturer #4 is asked who would be consulted in identifying design requirements for the development of the given medical device:\n#4: P: \"...it's through the doctors to, its [name]'s contact with the hospitals out there, now then I doubt it would be at doctor level, it would be more the managerial level of the hospital...\"\nI: ...so in effect you are capturing a user need in that way...\nP: \" ...yes.\"\n(Note: P denotes the participant's response, I denotes the interviewer's questioning)\nManufacturer #1 also focuses on those making/influencing purchasing decisions, which includes management and administrative staff. Manufacturer #1 articulates this shift in focus when asked who they view as their customer:\n#1: \"Orthopaedic surgeons really. Although it is used in patients, it is the orthopaedic surgeons who decide what he will use. Sell to orthopaedic surgeons. 
Increasingly in the UK, we have to sell to the hospital management to justify why they should be using our bone graft substitutes as opposed to any other on the market...\"\nManufacturer #8 stated that health professionals are considered to be the main user of their device. However, the opinions of purchasing representatives are the primary source used to inform device design and development, followed up by consultation with a small number of 'clinical champions' (typically high-profile surgeons and well-known experts in their field). When questioned on what value the user may add to the MDDD process, although couched in a humorous reply, it was clear that the usefulness of the user was seen primarily in relation to sales-relevant information.\n#8: \"The most helpful would be to give me the year three sales figures with absolute confidence. Year-1...[laughs]...\"\nThere seems to be limited overlap between the individuals that will be using the device and those that are consulted to inform the MDDD process. In particular, priority is given to those that hold more senior positions within the health care system. Therefore, surgeons, doctors and clinical champions were seen as more valuable sources for identifying user needs as opposed to those individuals who would actually use the device on a daily basis. Furthermore, from the manufacturers' point of view, the motivation for maximising sales seems to have conflated the distinction between the customer and the user. Therefore the needs of those that make purchasing decisions or indeed have most influence over these decisions are more salient than the needs of the user. This is certainly a more complex picture than the human factors engineering approach, which puts the user's needs at the centre of the design and development process.", "Given the wide range of formal methods that are available to engage with users in the MDDD process, manufacturers tended to use only a very limited range of methods to capture information from users and patients. In line with the analysis above regarding the nature of the preferred user to consult, the most typical method used to gain input from users was via informal discussions with senior health professionals. Only one out of the eleven manufacturers (Manufacturer #8) stated that they regularly used some formal methods throughout the MDDD process, such as focus groups and questionnaires when developing their airway management device. Interestingly, the initial identification of a need for this device occurred as a result of six members of the company attending a postgraduate university course, which required the use of formal methods in order to explore and identify new device ideas. It was apparent that manufacturers do not feel that they have the time or resources to engage in rigorous formal user data collection methods, instead relying on a range of strategies including gut feel, instinct, and a personal belief that they understand the marketplace in which they operate, in order to identify and develop new devices.\nThe idea of employing formal methods is once again something that is not regarded as feasible, given the amount of resources available and the fact that manufacturers believe it is necessary to move quickly in order to remain competitive with their rivals. The view is that consulting a large number of individuals is problematic in itself, as every person that is approached for feedback has a different view. 
Informal methods, however, are seen as offering versatile and rapid solutions, hence the belief that relying on gut feel and pressing ahead when the moment feels right is a more feasible and efficient solution. This point is articulated by Manufacturer #8 below:\n#8: \"The very fact that someone is willing to talk with you almost means that they have slightly different view. You can go on asking forever. It's that balance between have we got sufficient confidence in what we have here to move forward vs. just the generation of the information....being confident enough of its assurance. That's what you constantly face anyhow. I think that's the dilemma you always have. You can ask the users till the cows come home but you never get a new product. You ask 100,000 you get 99,999 different opinions!\"\nManufacturer #3 further echoes the observation that formal methods are rarely used for medical device development; however, informal discussions and observation are seen as more versatile and fit for purpose:\n#3: P: \"you're introducing a medical device to people, first question they'll ask, is \"what will it do to help me?\" Second question will be \"how long is this going to take?\" In a busy clinic that's really important...If it takes too many clicks, then people won't use it because they are busy enough as it is.\"\nI: have you been able to capture that at all?\nP: \"Through our interaction with users. We haven't got a specific mechanism for capturing it.\"\nI: Do you use any formal methods for converting customer needs into product development?\nP: \"No.\"\nThere was typically the belief that there was little need to consult the actual users formally regarding a new innovation, but rather that contacting what they referred to as a 'clinical champion' was sufficient to qualify the feasibility and validity of a given new innovation. For example, Manufacturer #8, responding to a question regarding how the feasibility of new device ideas is qualified, responded:\n#8: \"Every project will have a clinical champion. They will typically be involved, sometimes they come down here to meetings. We have a list of a couple of dozen clinicians that I can pick the phone up at anytime throughout the world and say \"what do you think of this?\".\nWhen asked whether there are any formal methods used within their organisation, Manufacturer #6 stated that formal methods are used within his organisation; however, they are not relevant for this particular product. Once again, formal methods, in this case, were considered to be too bureaucratic, time-consuming, and not applicable given the device and development scenario. Informal methods, such as ad hoc discussions with senior health professionals, were likely to identify the majority of user design needs, and were also more appropriate.\n#6: \"Yes we do, but I couldn't apply that to hip replacement at this point in time. If the surgeon tells me \"that this catheter is a bit too stiff, and could you make it a bit softer, if you like, or a bit more flexible, yet do exactly the same task?\" we will do that - it's for everyone's benefit\"\nIn the above example, a pragmatic approach is taken by manufacturers to make modifications to devices, adopting the belief that if a problem is encountered by one individual, then it is likely to be a problem for the majority, providing it seems to be a reasonable request. 
The intuition of the manufacturer plays a major part in the process of developing products that are useful to the user, and responding to their needs.", "There was limited evidence that direct elicitation of user views was seen by manufacturers as being of value to the MDDD process. This appeared to be particularly the case in relation to patient users. For example, Manufacturer #11, discussing their device that was purely aimed at the home patient market, did not believe that patient involvement in the MDDD process was a particularly wise expenditure of resources. This was explicitly linked to the level of power and influence that patients have over uptake of the device. In response to being asked whether they would like to involve the patient more in the MDDD process, Manufacturer #11 replied:\n#11: \"if they are highly powered (i.e. influential in terms of clinical decision making), if they have no power then we have to ethically try to make sure that we don't harm the patient and [that] what we do is for their benefit, but appealing to them may actually be a waste of resources, so we have to make sure we don't pretend that they have a sway when they really don't, you know? \"\nOnce again, the above statement reinforces the notion that patients are seen as being at the bottom of the hierarchy of influence, and hence investing resources and effort to find out their opinion is not considered an effective or efficient strategy. Manufacturer #2, who also saw their device as being targeted at the home market, acknowledged that the patient user is becoming increasingly important, particularly as they are now selling more directly to the home patient. However, the majority of individuals involved in the MDDD process are still predominantly senior health professionals as opposed to patients, who may play a stronger role in the clinical trials phase, after the device has been designed and developed. Therefore, the patient user is considered to play a passive role, more in terms of verifying the value of an already developed device, as opposed to being involved in the initial design and development stages. This is discussed in more detail in the next section.\nAnother factor discouraging manufacturers from involving patient users in the MDDD process is the prospect of having to obtain ethical approval in order to carry out the research. When asked whether there are any difficulties in obtaining ethical approval for involving patient users in the MDDD process, Manufacturer #3 stated:\n#3: \"Yes. A huge problem. In my experience if you are not affecting patient care, patient throughput, you can get 'ethics', but if you are you won't get it, or you might but it will take years. So you have to design your trial so as not to affect patient care.\"\nManufacturer #1 also comments on the ethical approval process and the R&D committees that must be attended to when employing more formal methods with users, both patient users and professional users, in the collection of clinical data.\n#1: R&D committees. That's another thing. You start a study and surgeon wants to collect clinical data to be sure that they want to use this. Their experience that this is the product they want to use. You have to notify the hospital R&D and then suddenly they see it as an R&D project, so they throw on their overheads and it's an extra hurdle to go through. They start reviewing it in addition to ethical board. This is what has changed in the last few years. 
In those days you notified the larger University hospitals. Increasingly has to be approved. Some are straightforward; others ask extra questions that delays process - UK research.\nThis perceived difficulty with obtaining ethical approval may be one reason why manufacturers tend not to use formal methods, or engage in systematic research activity in order to inform the MDDD process, particularly at the early stages of development. Indeed, manufacturers appear to actively avoid involving professional and patient users in the process, for fear that the MDDD process could be delayed for years as a result of the ethical approval process. As identified in section 3.2, informal discussion with clinical staff is seen as a more realistic, pragmatic, and feasible route to informing the MDDD process, which can be carried out informally and hence without ethical clearance. Manufacturer #3 goes on to describe the strategy that they use to get around the challenge of obtaining ethical approval:\n#3: \"So you take the least hard route, to not affect patient care in any way whatsoever, and design your study to sit on the back of that. So the study might not be optimal, but at least you can do it.\"\nSetting up device design and development activity so as to avoid the need for user involvement seems to be the 'method' of choice for some device manufacturers. Despite this being seen as a potentially sub-optimal approach, perhaps less effective in evaluating the extent to which design innovations may be accepted by users, it seems a route worth taking given the alternative of incurring long delays as a result of the ethical approval process.", "The measures used to evaluate the effectiveness of newly developed devices towards the end of the MDDD process echo the findings presented earlier, that formal methods employed in any part of the MDDD process are seen by manufacturers as having the potential of slowing the process down, and incurring additional and unnecessary costs and overheads that otherwise could be avoided if a less formal approach was taken. Most commonly, manufacturers reported that the success of a new device is typically measured by the absence of customer complaints or 'bad news' emerging as a result of the device being used in the field. Manufacturer #1, in response to being asked how the success of their device is measured, responded as follows:\n#1: \"The absence of bad news. The fact that we have access to the surgeons who are using the product and any adverse effects would be reported and we would be aware early on.\"\nWith regards to seeking patient feedback about the success of a medical device in providing an effective health intervention, Manufacturer #3 takes a similar 'no bad news is good news' approach to the design and functioning of their product. When asked whether any patient feedback is sought about the device, the reply was:\n#3: \"Not explicitly, we haven't gone out to get it, but we get feedback through the users (clinicians). It's non-invasive, so as far as the patients are concerned it's not a problem to use it. 
No-one has said they don't want it done or had any problems with that.\"\nManufacturer #9, commenting on whether any formal methods are used for converting user needs into device design requirements, also indicated that they adopt a reactive stance to customer suggestions and complaints, which is their default position on such matters.\n#9: \"Erm, we certainly have regular meetings here where we will look at customer feedback, you know, whether it's customer complaints or customer suggestions or whatever, yes so, yes we do have a means to do that...\"\nSome evidence of formal user involvement did emerge from the interview data; however, it indicated that the majority of formal user involvement took place at the clinical trials stage, i.e. after the product had been developed, and manufacturers were at the stage of demonstrating its efficacy and clinical effectiveness. Therefore, in effect, any considerations of design preference that might benefit the user in terms of their treatment or use of the device had already been made prior to this formal user involvement. Manufacturer #2 highlights the notion that the patient user is primarily seen as being of value when attempting to demonstrate the clinical effectiveness of a new medical device. Interestingly, patient users are not seen as primarily informing the general design of the device at earlier stages of the MDDD process, at least not in a formal capacity. When asked whether formal user methods are employed within the MDDD process, Manufacturer #2 responded:\n#2: \"There is the formal method of gaining patient data which means at the moment for example we have conducted clinical trials, clinical investigations. All sorts of clinical information gathering is going on at the [hospital name]. We are also... we tend to gather information first of all on the effectiveness or efficacy of it...Any advance, any variation that is likely to happen to the unit, any proposals for change that are going to improve the machine are all tested out in the clinical environment.\"\nManufacturer #5 reported that the key driver for collecting formal user data, relating to the performance of a device, comes from the possibility that an organisation such as the National Institute for Clinical Excellence (NICE) may decide to investigate the efficacy of their device at some unknown point. Therefore, the key driver for formally collecting user generated data is to fulfil the potential future requirements of external standards or purchasing agencies. The motivation to collect user facing data does not appear to be borne out of an inherent belief, on the manufacturer's behalf, that this user data would add any significant value in terms of fulfilling their own need to develop more effective devices or indeed learn more about the effectiveness and efficiency of their own device.\n#5: I think, if you take an organisation like NICE as a customer, or any of the other health technology organisations, one doesn't know whether your particular intervention is going to be assessed by them, and if so at what point, and one can see that as [the product] gains momentum and the NHS starts to look at what its spending on [this] surgery, then it may be something that NICE feel, or NICE get directed to take a look at. 
So it's a kind of a problem to us to understand when that's going to happen and it will certainly be a challenge; we have to therefore make sure that we are gathering the evidence in case they do.\"", "In this study, 11 in-depth interviews were carried out with medical device manufacturers, who were asked to comment on the role and value they believed that users have within the MDDD process. Given the small sample size, it is recognised that the results of this study should be considered as provisional. However, given the limited existing research in this area, the findings provide an important point of reference for further work.\nThe results revealed that manufacturers tend to prioritise the views of more senior health professionals above those that are less senior as well as patients. Furthermore, manufacturers' perceptions of the customer and the user have become conflated, partly due to the strong sales focus of manufacturers, seeking device design input from those individuals who make purchasing decisions, as opposed to the users of the devices. With regards to seeking input from the patient user, there was little motivation to engage in such practice, which was seen as an ineffective use of resources, with patients and less senior health professionals being perceived as having little impact or influence on general device sales, largely due to the inbuilt culture of patients being 'taught down' their needs from health care professionals.\nOnly one out of 11 manufacturers claimed to regularly use formal user-centred design methods within the MDDD process. Interestingly, this individual also reported that they were familiar with such methods as a result of being introduced to them within the university setting in which they carried out the initial formulation and design of the product they were discussing. This experience appeared to have had a lasting effect, as they report the ongoing use of formal research methods to this day. It is therefore possible that a contributory factor to the lack of engagement with formal methods is a lack of education, familiarity or confidence in their use. Necessary training has indeed been found to be a factor that affects the level of uptake of usability methods in medical device development [34]. Informal methods were typically preferred by manufacturers, in the form of extemporised discussions with a small number of esteemed medical experts. There was also a belief that if a manufacturer wished to be competitive and responsive within a fast-moving market, formal methods do not offer the necessary versatility and are generally not fit for purpose, despite the proposed benefits that academic literature suggests such methods promise if applied within the health care sector [35,36]. Research in the technology design domain suggests it is necessary to adopt a flexible and evolutionary stance when applying formal methods to cater for the unique context and organisational cultures that present themselves within any new design challenge [37]. Similarly, there may be value in increasing awareness of the versatility of existing human factors engineering methods, and in specialist contexts to explore the potential development of more agile and tailored methods that cater for manufacturers' needs in the context in which they are to be applied. 
It has been suggested that, as a consequence of the increased pressure from standards agencies to incorporate formal user methods into the MDDD process, some manufacturers may 'misuse' existing human factors methods [38]. The reasons for this are unclear. Is it perhaps in a bid to make methods more fit for purpose, or as a result of not being fully aware of the ways in which existing methods should be applied?\nThe perceived barriers to user involvement within the MDDD process were linked to the notion that manufacturers seek out those individuals that will be most influential in making purchasing decisions for their products. Consequently, involving patient users appeared to be of lowest priority, since it was believed that they held the lowest level of influence over whether health care organisations purchased their products. These findings are supported by our earlier preliminary work relating to barriers to user involvement [39,40]. The examples that did emerge of consulting users via formal methods tended to occur at the end of the MDDD process, after the device had been designed and developed, where users played a passive role in aiding the manufacturer in verifying the efficacy of the device, primarily for the purposes of satisfying the requirements of external standards and purchasing agencies. One key reason for avoiding user involvement was the difficulty that this presents when attempting to gain ethical approval for user elicitation studies. Manufacturers conceded that, although avoiding user involvement may perhaps be sub-optimal, it is necessary if they are to remain competitive within a fast-moving market. Warlow [41] and Stewart et al. [42] both propose that over-regulation of clinical research poses a significant threat to public health, and this study seems to support this view. In particular, our findings suggest that the seemingly unnecessary bureaucracy associated with obtaining ethical approval for non-interventional and low-risk studies that seek to capture user opinions and requirements leads to manufacturers excluding the voice of the user from the development process.\nThis study reveals that manufacturers view proactive user involvement within the MDDD process as something that, in general, slows the process down. Rather, a reactive 'no bad news is good news' stance is taken, with user input only taken into account if it is presented in the form of complaints or feedback on devices that have already been released into the health care system. The only evidence of engagement with formal methods of user involvement is apparent when the use of such methods is mandatory, dictated to manufacturers by standards and purchasing agencies. Given the findings of this study, the appropriate employment of formal methods by manufacturers is unlikely to occur at significant levels without deliberate efforts to encourage and support manufacturers in doing so. 
The following recommendations propose where some of these efforts should focus, in order to achieve increased levels of user engagement by manufacturers, as is now stipulated by IEC 62366 [14]:\n- Research to better understand the requirements of manufacturers, in terms of what is required from human factors engineering methods in order to make their use more feasible and accessible in practice.\n- Provision of training on the use and benefits of employing formal human factors engineering methods at every stage of the MDDD process.\n- Health care providers should implement formal processes to ensure better communication and synergy between those making purchasing decisions and the actual users of the devices.\n- Provisions should be made within the ethical approval process that enable medical device manufacturers to engage more easily with users with minimal levels of bureaucracy whilst also ensuring that all research is conducted in an ethical manner that protects healthcare staff and patients.", "The authors declare that they have no competing interests.", "All authors contributed to the conceptual design of this study. AGM and JB carried out primary data analysis and drafted the manuscript. MPC and JLM contributed to data collection and provided domain expertise. All authors contributed to redrafting the manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6947/11/15/prepub\n" ]
[ null, null, null, null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Differences in allergen-induced T cell activation between allergic asthma and rhinitis: Role of CD28, ICOS and CTLA-4.
21356099
Th2 cell activation and T regulatory cell (Treg) deficiency are key features of allergy. This applies to both asthma and rhinitis. However, with the same atopic background, some patients will develop rhinitis and asthma, whereas others will display rhinitis only. Co-receptors are pivotal in determining the type of T cell activation, but their role in allergic asthma and rhinitis has not been explored. Our objective was to assess whether allergen-induced T cell activation differs between allergic rhinitis and allergic rhinitis with asthma, and to explore the role of ICOS, CD28 and CTLA-4.
BACKGROUND
T cell co-receptor and cytokine expression was assessed by flow cytometry in PBMC from 18 house dust mite (HDM) allergic rhinitics (R), 18 HDM allergic rhinitics and asthmatics (AR), 13 non allergic asthmatics (A) and 20 controls, with or without anti-co-receptor antibodies.
METHODS
In asthmatics (A+AR), a constitutive decrease of CTLA-4+ and of CD4+CD25+Foxp3+ cells was found, with an increase of IFN-γ+ cells. In allergic subjects (R + AR), allergen stimulation induced CD28 together with IL-4 and IL-13, and decreased the proportion of CTLA-4+, IL-10+ and CD4+CD25+Foxp3+ cells. Anti-ICOS and anti-CD28 antibodies blocked allergen-induced IL-4 and IL-13. IL-13 production also involved CTLA-4.
RESULTS
T cell activation differs between allergic rhinitis and asthma. In asthma, a constitutive, co-receptor-independent Th1 activation and a Treg deficiency are found. In allergic rhinitis, an allergen-induced Treg cell deficiency is seen, as well as an ICOS-, CD28- and CTLA-4-dependent Th2 activation. Allergic asthmatics display both characteristics.
CONCLUSIONS
[ "Adult", "Antigens, CD", "Antigens, Dermatophagoides", "Antigens, Differentiation, T-Lymphocyte", "Asthma", "CD28 Antigens", "CTLA-4 Antigen", "Cells, Cultured", "Cytokines", "Female", "Flow Cytometry", "Humans", "Hypersensitivity", "Inducible T-Cell Co-Stimulator Protein", "Intradermal Tests", "Lymphocyte Activation", "Male", "Middle Aged", "Rhinitis, Allergic, Perennial", "T-Lymphocyte Subsets", "T-Lymphocytes, Regulatory", "Th1 Cells", "Th2 Cells" ]
3051906
null
null
Methods
[SUBTITLE] Study population [SUBSECTION] Four groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected not to be sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms outside of viral infection, such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was made on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was made according to the GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. As controls, healthy non-smoking individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was established by a negative methacholine challenge and an induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). The positive methacholine test was defined by a drop of at least 20% of FEV1 (forced expiratory volume in 1 second) in response to 200 μg or less of methacholine. This project was approved by the local Ethics Committee and written informed consent was obtained from each patient. Four groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected not to be sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms outside of viral infection, such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was made on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was made according to the GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. As controls, healthy non-smoking individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was established by a negative methacholine challenge and an induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). 
The positive methacholine test was defined by a drop of at least 20% of FEV1 (forced expiratory volume in 1 second) in response to 200 μg or less of methacholine. This project was approved by the local Ethics Committee and written informed consent was obtained from each patient. [SUBTITLE] Isolation of PBMC [SUBSECTION] Peripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®. Peripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®. [SUBTITLE] Antigens [SUBSECTION] Recombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS. Recombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS. [SUBTITLE] Specific stimulation of T cells [SUBSECTION] Optimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers. PBMC (5 × 105) were cultured in 96-well plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. rBetv1 was used as control antigen at a concentration of 1 μg/ml. Optimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers. PBMC (5 × 105) were cultured in 96-well plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. rBetv1 was used as control antigen at a concentration of 1 μg/ml. [SUBTITLE] Surface staining [SUBSECTION] After 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect the Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). 
The Treg population was identified as CD4+CD25Hi+Foxp3+ cells. Fluorescence was detected with a 15 mW argon ion laser on a three-color FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson). After 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect the Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). The Treg population was identified as CD4+CD25Hi+Foxp3+ cells. Fluorescence was detected with a 15 mW argon ion laser on a three-color FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson). [SUBTITLE] Intracellular T cell cytokine staining [SUBSECTION] PBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ+ cells as Th1 cells. IL-10+ cells were considered as belonging to the Treg cell population. PBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ+ cells as Th1 cells. IL-10+ cells were considered as belonging to the Treg cell population. [SUBTITLE] Co-receptor study [SUBSECTION] To determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAbs were purchased from eBioscience. To determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAbs were purchased from eBioscience. [SUBTITLE] Statistical Analysis [SUBSECTION] Analysis was performed using the Statview® Software. 
Normal distributions of the variables were checked with a Kolmogorov-Smirnov test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed a statistically significant difference between groups, a multiple linear regression analysis was performed to identify whether allergy, asthma, or both could explain the variable studied. Between-group comparisons were performed using a Student's t-test. A paired t-test was used to compare differences between paired groups. A p value < 0.05 was considered statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE). Analysis was performed using the Statview® Software. Normal distributions of the variables were checked with a Kolmogorov-Smirnov test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed a statistically significant difference between groups, a multiple linear regression analysis was performed to identify whether allergy, asthma, or both could explain the variable studied. Between-group comparisons were performed using a Student's t-test. A paired t-test was used to compare differences between paired groups. A p value < 0.05 was considered statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE).
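The statistical workflow described above was run in Statview; as a purely illustrative aid, the short Python sketch below re-creates the same sequence of steps (normality check, one-way ANOVA across the four groups, multiple linear regression on asthma and allergy status, and unpaired/paired t-tests) on simulated data. Every variable name, group mean and sample value in the sketch is an assumption made for this example and is not taken from the authors' dataset.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated percentages of CTLA-4+ T cells for the four groups studied:
# controls (C), allergic rhinitics (R), allergic rhinitics with asthma (AR)
# and non allergic asthmatics (A); group sizes follow the paper (20/18/18/13).
df = pd.DataFrame({
    "group": np.repeat(["C", "R", "AR", "A"], [20, 18, 18, 13]),
    "ctla4_pos": np.concatenate([
        rng.normal(12, 3, 20),   # controls (hypothetical mean and SD)
        rng.normal(11, 3, 18),   # R
        rng.normal(9, 3, 18),    # AR
        rng.normal(9, 3, 13),    # A
    ]),
})
# Indicator variables mirroring the regression described in the text:
# 'asthma' pools A and AR, 'allergy' pools R and AR.
df["asthma"] = df["group"].isin(["AR", "A"]).astype(int)
df["allergy"] = df["group"].isin(["R", "AR"]).astype(int)

# 1. Normality check per group (the paper used a Kolmogorov-Smirnov test).
for name, sub in df.groupby("group"):
    z = (sub["ctla4_pos"] - sub["ctla4_pos"].mean()) / sub["ctla4_pos"].std(ddof=1)
    print(name, "KS p =", round(stats.kstest(z, "norm").pvalue, 3))

# 2. One-way ANOVA across the four groups.
samples = [sub["ctla4_pos"].to_numpy() for _, sub in df.groupby("group")]
print("ANOVA p =", stats.f_oneway(*samples).pvalue)

# 3. If the ANOVA is significant, regress the variable on asthma and allergy
#    status to see which factor explains it.
model = smf.ols("ctla4_pos ~ asthma + allergy", data=df).fit()
print(model.params)
print(model.pvalues)

# 4. Between-group comparison with an unpaired Student's t-test
#    (e.g. asthmatics versus controls).
asthmatics = df.loc[df["asthma"] == 1, "ctla4_pos"]
controls = df.loc[df["group"] == "C", "ctla4_pos"]
print("t-test p =", stats.ttest_ind(asthmatics, controls).pvalue)

# 5. Paired t-test, e.g. the same patients with and without allergen
#    stimulation (here simulated as a small paired shift).
baseline = rng.normal(10.0, 2.0, 18)
stimulated = baseline - rng.normal(1.5, 1.0, 18)
print("paired t-test p =", stats.ttest_rel(stimulated, baseline).pvalue)

In this coding the 'asthma' indicator pools the A and AR groups and the 'allergy' indicator pools the R and AR groups, which mirrors how the regression described in the text separates the contribution of each condition; a p value below 0.05 would be read as significant, as in the paper.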
null
null
null
null
[ "Background", "Study population", "Isolation of PBMC", "Antigens", "Specific stimulation of T cells", "Surface staining", "Intracellular T cell cytokine staining", "Co-receptor study", "Statistical Analysis", "Results", "Study population", "T cell activation and co-receptor expression before specific stimulation", "T cell activation and co-receptor expression after specific stimulation by allergens", "Role of co-receptor engagement", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Atopic diseases including allergic rhinitis and asthma are inflammatory conditions that have increased in prevalence over the past two decades [1]. The inflammatory response to common environmental allergens during allergy and asthma has been extensively studied in the past years, and has clearly determined the pivotal role of T cell activation, with a predominant Th2 cytokine production [2,3]. T regulatory (Treg) cells, characterized by the production of anti-inflammatory cytokines such as IL-10 and TGF-β [4,5], are considered responsible for the normal tolerance against auto-antigens and external antigens such as allergens [6]. Accordingly, a deficiency in Treg counts and activation was found in autoimmune diseases and allergic conditions, notably during allergen exposure [7,8] and exacerbations of severe asthma [9]. However, although this Th2/Treg imbalance applies to both allergic rhinitis and asthma, it is remarkable that, despite the same atopic background and allergen exposure, some subjects will develop both rhinitis and asthma whereas others will display rhinitis only. We have hypothesized for several years that T cell activation differs between the two conditions and, with others, we previously described a Th1 activation in asthma that was absent in non asthmatic allergy in blood, induced sputum and broncho-alveolar lavages [10-12]. However, the role of allergen in the tuning of T cell activation in allergic rhinitics with and without asthma has not yet been explored.\nAllergen-induced T cell activation depends on signals delivered from antigen presenting cells (APCs) through the antigen-specific T cell receptor as well as additional co-stimulatory signals provided by engagement of so-called co-receptors on APCs and T cells [13]. Major T cell co-receptors are CD28, inducible costimulatory molecule (ICOS) and cytotoxic T lymphocyte antigen (CTLA)-4. They belong to the immunoglobulin gene superfamily and display various kinetics of expression. CD28 is a constitutive co-stimulatory receptor binding CD80 and CD86 on APCs, delivering important signals for T cell activation and survival. Ligation of CD28 promotes the production of IL-4 and IL-5 and provides resistance to apoptosis and long-term expansion of T-cells. Like CD28, ICOS is a positive regulator of T cell activation which is up-regulated on activated T-cells. ICOS was initially shown to selectively induce high levels of IL-10 and IL-4, but is also able to stimulate both Th1 and Th2 cytokine production in vivo [14].\nCTLA-4 is also a CD80/CD86-binding protein. It is up-regulated on activated T cells and delivers mainly an inhibitory signal, playing an important role in the maintenance of peripheral tolerance [15]. Indeed, it was shown in murine Treg cells that CTLA-4 controlled homeostasis and suppressive capacity of regulatory T cells [16].\nCo-receptors thus represent important potential targets for therapeutic immunomodulation. Indeed, CD28 blockade and CTLA-4 agonists are tested for their ability to prevent graft rejection [17], and in animal models, ICOS inhibition prevented allergic inflammation [18]. 
However, the actual role of co-receptors in the context of asthma and allergy in humans is still unexplored.\nThe objective of this study was therefore to compare the pattern of T cell activation between allergic rhinitics and asthmatics upon allergen stimulation and to assess the role of co-receptors CD28, ICOS and CTLA-4 in this process.", "Four groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected to be not sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms out of viral infection such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was done on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was done according to GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. As controls, healthy non smoker individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was stated on negative methacholine challenge and induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). The positive methacholine test was defined by a drop of at least 20% of FEV1(forced expiratory volume in 1 second) in response to 200 μg or less of metacholine. This project was approved by the local Ethic Committee and written informed consent was obtained from each patient.", "Peripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®.", "Recombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS.", "Optimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers.\nPBMC (5 × 105) were cultured in 96 wells plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. 
rBetv1 was used as control antigen at a concentration of 1 μg/ml.", "After 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC, (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). The Treg population was identified as CD4+CD25Hi+Fox p 3+ cells.\nFluorescence was detected with a 15 mW argon ion laser on a three colors FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson).", "PBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ + cells as Th1 cells. IL-10+ cells were considered as belonging to Treg cell population.", "To determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAb were purchased from eBioscience.", "Analysis was performed using the Statview® Software. Normal distributions of the variables were checked with a Kolmogorov-Smirnof's test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed statistical difference between groups, a multiple linear regression analysis was done to identify if allergy, asthma, or both could explain the variable studied. Between-groups comparisons were performed using a Student's t-test. A paired t test was used to compare differences between paired groups. A p value < 0.05 was considered as statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE).", "[SUBTITLE] Study population [SUBSECTION] Sixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. 
Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. The age difference between the A+R group and other groups (A, R and C) was not significant statistically.\nSixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. The age difference between the A+R group and other groups (A, R and C) was not significant statistically.\n[SUBTITLE] T cell activation and co-receptor expression before specific stimulation [SUBSECTION] Treg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. 
ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE\nTreg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). 
By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE\n[SUBTITLE] T cell activation and co-receptor expression after specific stimulation by allergens [SUBSECTION] PBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).\nPBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. 
ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).\n[SUBTITLE] Role of co-receptor engagement [SUBSECTION] In order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. * = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. 
Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. * = p < 0.05, ** = p < 0,01, *** = p < 0,001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).\nIn order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. * = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. * = p < 0.05, ** = p < 0,01, *** = p < 0,001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).", "Sixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. 
The age difference between the A+R group and other groups (A, R and C) was not significant statistically.", "Treg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE", "PBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. 
Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).", "In order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. * = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. 
* = p < 0.05, ** = p < 0.01, *** = p < 0.001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).", "The results of our ex vivo study strongly suggest a contrasting picture of T cell activation in allergic rhinitis and asthma, with distinct patterns of Th1, Th2 and Treg profiles and expression of ICOS, CD28 and CTLA-4 co-receptors.\nIndeed, we showed that in asthma, IFN-γ production was constitutive, did not increase upon allergen stimulation, and was not blocked by any of the anti-co-receptor antibodies. Similarly, the constitutive defect in Treg and CTLA-4 expression seen in asthmatics, which was not enhanced in non allergic asthmatics after allergen stimulation, was not modified by co-receptor blockade. The Th1/Treg imbalance in asthma is therefore constitutive and independent of allergen presentation.\nThe constitutive Th1 activation in asthma has been demonstrated before [10,12,21]. It could result from the intrinsic defect in the CTLA-4+ and Treg populations, as CTLA-4, known to be involved in tolerance induction [22], could prevent asthmatic inflammation by inducing T cells to differentiate into regulatory T cells. Recently, we showed in in vivo studies a lower proportion of Treg cells in blood from severe refractory asthmatics compared to controls, a defect that was even more pronounced during exacerbations, both in blood and in induced sputum [9]. Herein we show that this lower proportion of Treg cells is present in milder stages of asthma. Relevantly, Treg cells were higher in mild than in moderate asthma whatever the allergic status. These results are concordant with the primary Treg cell deficiency suggested in asthma and allergy [23]. That the Th1/Treg imbalance is similar in allergic and non allergic asthma suggests that it is a characteristic of asthma independent of allergy, possibly triggered by infectious agents or non-specific substances such as pollutants; it must be noted, however, that the asthmatics included in the present study were controlled and had not experienced any recent exacerbation. Another hypothesis would be that the Th1/Treg imbalance in asthma is truly intrinsic and independent of any external aggression.\nIn the allergic groups, we demonstrated a Th2/Treg imbalance inducible upon allergen stimulation. That this Th2 activation was not seen in non allergic patients and could be abolished by CD28 and ICOS blockade indicates that cognate allergen presentation by antigen presenting cells was indeed responsible for it. IL-13 secretion was also suppressed by blocking CTLA-4, indicating that in peripheral cells (1) Th2 activation cannot be considered globally, Th2 cytokines being regulated distinctly, and (2) CTLA-4 is not only involved in tolerance but also in inflammation. This result is concordant with Lordan et al., who showed that allergen-induced production of IL-5 and IL-13 by PBMC from allergic asthmatics could be inhibited by blocking the CTLA-4 receptor with CTLA-4-Ig [24]. Regarding the allergen-induced Treg defect in allergics, co-receptors other than those tested here are likely involved, among which PD-1 is a candidate [25]. Indeed, Meiler et al. 
recently demonstrated in PBMC from allergic patients that the suppressive effect of IL-10-secreting T cells was partially inhibited by blocking the CTLA-4 or PD-1 co-receptors, whereas blocking both receptors simultaneously had an additive effect [26].\nThe association of allergy with ICOS over-expression before any allergen stimulation suggests a non-specific priming of T cells towards the Th2 pathway in allergic subjects. Indeed, ICOS was clearly related to Th2 activation, as shown by the anti-ICOS blockade results. Numerous studies using animal models of airway inflammation have shown that ICOS-mediated signalling is essential for the induction of Th2 cytokines [27,28]. Indeed, inhibition of ICOS suppresses allergic lung inflammation and Th2 cytokine production in mouse models [29]. However, in other models ICOS engagement induces tolerance and inhibits allergic inflammation. These distinct actions of ICOS seem related to the density of ICOS molecules per cell, with inflammation being related to a high density of co-receptors and tolerance induction to a lower number of ICOS molecules per cell [30].\nThat ICOS expression does not increase after allergen stimulation in R, by contrast with the AR group, could result from a maximal expression of ICOS in R, whereas it is still inducible in AR. Indeed, the basal level of ICOS expression is lower in the latter group than in the former. This relative defect in ICOS expression in AR patients could result from the constitutive Th1/Treg imbalance of asthmatics, which, through a Th1-driven "anti-Th2" effect, would decrease ICOS expression.\nUnder allergen stimulation, CD28 expression increased significantly in R and AR, and blockade of CD28 decreased Th2 cytokine production, indicating the involvement of CD28 in Th2 cell activation in allergy. It is noteworthy that, although statistically significant, the increase in the proportion of CD28+ cells was necessarily modest, as most T cells constitutively expressed CD28 in all groups. CD28 is a crucial co-receptor for inducing T cell cytokine production [31], and has been shown to be involved in both Th1 and Th2 activation. CD28 blockade has been proposed as an immunosuppressive strategy to prevent graft rejection, and is being tested in various inflammatory diseases. However, the practical use of CD28 blockade has been held back by the agonist action of some anti-CD28 antibodies encountered in clinical trials [32].\nOur study provides new insights into the hypothesis of Treg cell deficiency as a paradigm for allergic diseases, by showing a constitutive Treg cell deficit in asthma whatever the allergic status and an inducible Treg deficit in allergy, whatever the presence of asthma. As a consequence, the Treg cell deficiency is greatest in asthmatic allergics after allergen stimulation. This distinction between allergy and asthma contradicts our previous hypothesis of a gradient of Treg cell deficiency from allergy to asthma [23], and rather suggests that the abnormalities seen in the two diseases could be juxtaposed and independent, as shown by the multiple linear regression analysis.\nRecently, an in vivo study showed no difference in the number of Treg cells between asthmatics and controls, whereas FOXP3 protein expression within Treg cells was significantly decreased in asthmatic patients [33]. Our study was performed in blood ex vivo and therefore might not fully reflect the in vivo and in situ reality. 
However, many studies have shown that the blood compartment is relevant to in situ inflammation as far as T cells and allergy are concerned [21], and the mechanistic studies proposed here cannot be performed in situ in humans. They can be performed in vivo in animals, but the relevance to real asthma would also be uncertain.\nIn conclusion, allergy is associated with constitutive ICOS over-expression and inducible CTLA-4 under-expression together with a Th2/Treg imbalance, whereas constitutive CTLA-4 under-expression and a Th1/Treg disequilibrium appear as hallmarks of asthma. Both profiles are mixed in allergic asthma, and one can argue that asthma would occur in allergic subjects only if the unknown conditions leading to constitutive Th1 activation are present. Still missing in the puzzle is the stimulus inducing the Th2 activation present in non allergic asthma [3]. Lastly, our results demonstrate that although targeting only one type of T cell activation would be a pitfall in allergic asthma, there is a rationale for developing strategies based on targeting co-receptors in allergy.", "In conclusion, our work adds significant insights into the immune mechanisms involved in allergy and asthma and provides a rationale for new diagnostic and/or therapeutic strategies in these pathologies.", "The authors declare that they have no competing interests.", "All the authors have contributed significantly to the research and preparation of the manuscript, and they approve its submission." ]
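The statistical approach reported above (one-way ANOVA across the four groups, followed by a multiple linear regression that codes asthma (A + AR) and allergy (R + AR) as separate predictors of a T-cell readout, with coefficients reported ± standard error as in table 3) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the library choices (scipy, statsmodels), the column names and the randomly generated placeholder values for the readout (here labelled pct_icos) are assumptions for demonstration only; only the group sizes and the predictor coding are taken from the text.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical per-subject table: group label (C, A, R, AR) and one readout.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["C"] * 20 + ["A"] * 13 + ["R"] * 18 + ["AR"] * 18,
})
df["pct_icos"] = rng.normal(15.0, 5.0, size=len(df))  # placeholder values

# Code asthma (A + AR) and allergy (R + AR) as separate binary predictors,
# mirroring the grouping used in the regression described in the text.
df["asthma"] = df["group"].isin(["A", "AR"]).astype(int)
df["allergy"] = df["group"].isin(["R", "AR"]).astype(int)

# One-way ANOVA across the four groups.
f_stat, p_anova = stats.f_oneway(
    *[df.loc[df["group"] == g, "pct_icos"] for g in ["C", "A", "R", "AR"]]
)

# If the ANOVA indicates a between-group difference, fit the multiple linear
# regression and report coefficients and their standard errors.
if p_anova < 0.05:
    fit = smf.ols("pct_icos ~ asthma + allergy", data=df).fit()
    print(fit.params)
    print(fit.bse)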
[ "Background", "Methods", "Study population", "Isolation of PBMC", "Antigens", "Specific stimulation of T cells", "Surface staining", "Intracellular T cell cytokine staining", "Co-receptor study", "Statistical Analysis", "Results", "Study population", "T cell activation and co-receptor expression before specific stimulation", "T cell activation and co-receptor expression after specific stimulation by allergens", "Role of co-receptor engagement", "Discussion", "Conclusion", "Competing interests", "Authors' contributions", "Supplementary Material" ]
[ "Atopic diseases including allergic rhinitis and asthma are inflammatory conditions that have increased in prevalence over the past two decades [1]. The inflammatory response to common environmental allergens during allergy and asthma has been extensively studied in the past years, and has clearly determined the pivotal role of T cell activation, with a predominant Th2 cytokine production [2,3]. T regulatory (Treg) cells, characterized by the production of anti-inflammatory cytokines such as IL-10 and TGF-β [4,5] are considered as responsible for the normal tolerance against auto-antigens and external antigens such as allergens [6]. Accordingly, a deficiency in Treg counts and activation was found in autoimmune diseases and allergic conditions, notably during allergen exposure [7,8] and exacerbations of severe asthma [9]. However although this Th2/Treg imbalance applies both for allergic rhinitis and asthma, it is remarkable that despite a same atopic background and allergen exposure, some subjects will develop both rhinitis and asthma whereas other will display rhinitis only. We hypothesize since several years that T cell activation is different between both conditions and with others we previously described a Th1 activation in asthma that was absent in non asthmatic allergy in blood, induced sputum and broncho-alveolar lavages [10-12]. However, the role of allergen in the tuning of T cell activation in allergic rhinitics with and without asthma was not explored yet.\nAllergen-induced T cell activation depends on signals delivered from antigen presenting cells (APCs) through the antigen-specific T cell receptor as well as additional co-stimulatory signals provided by engagement of so-called co-receptors on APCs and T cells [13]. Major T cell co-receptors are CD28, inducible costimulatory molecule (ICOS) and cytotoxic T lymphocyte antigen (CTLA)-4. They belong to the immunoglobulin gene superfamily and display various kinetics of expression. CD28 is a constitutive co-stimulatory receptor binding CD80 and CD86 on APCs, delivering important signals for T cell activation and survival. Ligation of CD28 promotes the production of IL-4 and IL-5 and provides resistance to apoptosis and long-term expansion of T-cells. As CD28, ICOS is a positive regulator of T cell activation which is up-regulated on activated T-cells. ICOS was initially shown to selectively induce high levels of IL-10 and IL-4, but is also able to stimulate both Th1 and Th2 cytokine production in vivo [14].\nCTLA-4 is also a CD80/CD86-binding protein. It is up-regulated on activated T cells and delivers mainly an inhibitory signal, playing an important role in maintenance of peripheral tolerance [15]. Indeed, it was shown in murine Treg cells, that CTLA-4 controlled homeostasis and suppressive capacity of regulatory T cells [16].\nCo-receptors thus represent important potential targets for therapeutic immunomodulation. Indeed the blockade of CD28 and CTLA-4 agonists are tested for their ability to prevent graft rejection [17], and in animal models, ICOS inhibition prevented allergic inflammation [18]. 
However, the actual role of co-receptors in the context of asthma and allergy in humans is still unexplored.\nThe objective of this study was therefore to compare the pattern of T cell activation between allergic rhinitics and asthmatics upon allergen stimulation and to assess the role of co-receptors CD28, ICOS and CTLA-4 in this process.", "[SUBTITLE] Study population [SUBSECTION] Four groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected to be not sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms out of viral infection such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was done on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was done according to GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. As controls, healthy non smoker individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was stated on negative methacholine challenge and induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). The positive methacholine test was defined by a drop of at least 20% of FEV1(forced expiratory volume in 1 second) in response to 200 μg or less of metacholine. This project was approved by the local Ethic Committee and written informed consent was obtained from each patient.\nFour groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected to be not sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms out of viral infection such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was done on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was done according to GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. As controls, healthy non smoker individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. 
In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was stated on negative methacholine challenge and induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). The positive methacholine test was defined by a drop of at least 20% of FEV1(forced expiratory volume in 1 second) in response to 200 μg or less of metacholine. This project was approved by the local Ethic Committee and written informed consent was obtained from each patient.\n[SUBTITLE] Isolation of PBMC [SUBSECTION] Peripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®.\nPeripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®.\n[SUBTITLE] Antigens [SUBSECTION] Recombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS.\nRecombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS.\n[SUBTITLE] Specific stimulation of T cells [SUBSECTION] Optimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers.\nPBMC (5 × 105) were cultured in 96 wells plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. rBetv1 was used as control antigen at a concentration of 1 μg/ml.\nOptimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers.\nPBMC (5 × 105) were cultured in 96 wells plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. 
rBetv1 was used as control antigen at a concentration of 1 μg/ml.\n[SUBTITLE] Surface staining [SUBSECTION] After 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC, (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). The Treg population was identified as CD4+CD25Hi+Fox p 3+ cells.\nFluorescence was detected with a 15 mW argon ion laser on a three colors FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson).\nAfter 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC, (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). The Treg population was identified as CD4+CD25Hi+Fox p 3+ cells.\nFluorescence was detected with a 15 mW argon ion laser on a three colors FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson).\n[SUBTITLE] Intracellular T cell cytokine staining [SUBSECTION] PBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ + cells as Th1 cells. IL-10+ cells were considered as belonging to Treg cell population.\nPBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ + cells as Th1 cells. 
IL-10+ cells were considered as belonging to Treg cell population.\n[SUBTITLE] Co-receptor study [SUBSECTION] To determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAb were purchased from eBioscience.\nTo determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAb were purchased from eBioscience.\n[SUBTITLE] Statistical Analysis [SUBSECTION] Analysis was performed using the Statview® Software. Normal distributions of the variables were checked with a Kolmogorov-Smirnof's test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed statistical difference between groups, a multiple linear regression analysis was done to identify if allergy, asthma, or both could explain the variable studied. Between-groups comparisons were performed using a Student's t-test. A paired t test was used to compare differences between paired groups. A p value < 0.05 was considered as statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE).\nAnalysis was performed using the Statview® Software. Normal distributions of the variables were checked with a Kolmogorov-Smirnof's test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed statistical difference between groups, a multiple linear regression analysis was done to identify if allergy, asthma, or both could explain the variable studied. Between-groups comparisons were performed using a Student's t-test. A paired t test was used to compare differences between paired groups. A p value < 0.05 was considered as statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE).", "Four groups of patients were recruited: allergic rhinitics (R), allergic rhinitics and asthmatics (AR), non allergic asthmatics (A), and controls (C). All allergic patients were selected to display house dust mite (HDM) allergy. As rBetv1 birch pollen allergen was used as control antigen for in vitro stimulation of T cells, patients were selected to be not sensitized to birch pollen. The diagnosis of HDM allergy was determined by positive skin prick test to Dermatophagoides pteronyssinus extract (Stallergenes, France). Allergic rhinitis was defined by the presence of perennial nasal symptoms out of viral infection such as nasal obstruction, sneezing, rhinorrhea and nasal pruritus. The diagnosis of asthma was done on the basis of a history of dyspnea and wheezes with a reversible obstructive ventilatory defect or a positive methacholine challenge. The distinction between mild and moderate asthma was done according to GINA classification [19]. In patients, any inhaled corticosteroids and anti-histamines were discontinued 15 days before sampling. 
As controls, healthy non smoker individuals with normal lung function, negative methacholine challenge and negative skin prick test were included. In controls, absence of allergy was established by the negativity of 35 skin prick tests to common environmental aeroallergens, and absence of asthma was stated on negative methacholine challenge and induced sputum eosinophil count below 3% (see additional file 1: Skin testing, methacholine challenge and induced sputum procedures). The positive methacholine test was defined by a drop of at least 20% of FEV1(forced expiratory volume in 1 second) in response to 200 μg or less of metacholine. This project was approved by the local Ethic Committee and written informed consent was obtained from each patient.", "Peripheral blood mononuclear cells (PBMC) were isolated from peripheral venous blood by Ficoll-Hypaque plus (GE Healthcare, Uppsala, Sweden) density gradient centrifugation. Cells were then washed three times and resuspended in complete medium RPMI-1640 supplemented with 10% (v/v) foetal calf serum (FCS), 2 mM L-glutamine, 1 mM sodium pyruvate, 1 μM 2-mercapto-ethanol (Sigma Chemical, Saint-Louis, Missouri), 1000 U/ml Penicillin and Streptomycin. All culture reagents, except 2-mercapto-ethanol, were purchased from GIBCO®.", "Recombinant (r) Betv 1 of birch pollen (Betula verrucosa) and purified (p) Derp 1 of house dust mite (Dermatophagoides pteronyssinus) were provided by Stallergènes (Antony, France). None of the allergens contained detectable amounts of LPS.", "Optimal dose of stimulatory pDerp1 and kinetics of T cell cytokine secretion and proliferation were determined in an independent pilot study on 5 house dust mite allergics and 5 healthy volunteers.\nPBMC (5 × 105) were cultured in 96 wells plates (Falcon) in 100 μl medium containing 1 μg/ml pDerp1 at 37°C in 5%CO2 and cells were harvested after 8 days culture. 50 μl of fresh complete medium was added every 2 days in each well. rBetv1 was used as control antigen at a concentration of 1 μg/ml.", "After 8 days of culture with pDerp 1, PBMC (5 × 105) were harvested and stained with anti-CD4-PE-Cy5, anti-CD25-FITC, (Beckman Coulter, Marseille, France); anti-CD3-FITC (Dako, Trappes, France), anti-CD3-PE-Cy5 (Immunotools, Friesoythe, Germany), anti-CD28-FITC, anti-ICOS-PE, or anti-CTLA-4-PE-Cy5 (BD Pharmingen, le Pont de Claix, France) mAbs at recommended concentrations. To detect Foxp3 intracellular transcription factor, T cells were then fixed, permeabilized, and stained with anti-Foxp3-PE mAb (eBiosciences, San Diego, California). The Treg population was identified as CD4+CD25Hi+Fox p 3+ cells.\nFluorescence was detected with a 15 mW argon ion laser on a three colors FACSCan® (Becton Dickinson, Franklin Lakes, NJ, USA). Standard acquisition and analysis software were obtained through Cellquest® Software (Becton Dickinson).", "PBMC (5 × 105) were cultured for 8 days with pDerp 1. PMA (Sigma Chemical, Saint-Louis, Missouri, 50 ng/ml), Ionomycin (Euromedex, 2 μg/ml) and Monensin (Sigma Chemical, 2 μM) were added during the last 6 hours of culture. These culture conditions allow the detection of cytokines already engaged in a synthesis process in vivo [20]. Cells were harvested and stained with CD3-PE-Cy5 (Immunotools, Friesoythe, Germany). Cells were then fixed, permeabilized, and stained with antibodies to detect intracellular cytokines (anti-IFNγ-FITC, anti-IL-4-FITC, BD Pharmingen, le Pont de Claix, France; anti-IL13-PE, anti-IL-10-PE, R&D system, Lille, France). 
IL-4+ and IL-13+ cells were considered as Th2 cells, IFN-γ + cells as Th1 cells. IL-10+ cells were considered as belonging to Treg cell population.", "To determine the role of co-receptors in T cell activation, PBMC cultures were performed with or without anti-CTLA-4 (clone 14D3, 12 μg/ml), anti-ICOS (clone ISA-3, 12 μg/ml) or anti-CD28 (clone CD28.6, 3 μg/ml) monoclonal antibodies (mAb). These mAb were purchased from eBioscience.", "Analysis was performed using the Statview® Software. Normal distributions of the variables were checked with a Kolmogorov-Smirnof's test. Average percentages of positive cells and cytokine concentrations were then compared between groups (controls, non allergic asthmatics, allergic rhinitics and allergic asthmatics) using the analysis of variance (ANOVA). When the ANOVA showed statistical difference between groups, a multiple linear regression analysis was done to identify if allergy, asthma, or both could explain the variable studied. Between-groups comparisons were performed using a Student's t-test. A paired t test was used to compare differences between paired groups. A p value < 0.05 was considered as statistically significant for all statistical tests. Results are expressed as mean ± standard error (SE).", "[SUBTITLE] Study population [SUBSECTION] Sixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. The age difference between the A+R group and other groups (A, R and C) was not significant statistically.\nSixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. 
Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. The age difference between the A+R group and other groups (A, R and C) was not significant statistically.\n[SUBTITLE] T cell activation and co-receptor expression before specific stimulation [SUBSECTION] Treg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE\nTreg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). 
IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE\n[SUBTITLE] T cell activation and co-receptor expression after specific stimulation by allergens [SUBSECTION] PBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. 
Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).\nPBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). 
In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).\n[SUBTITLE] Role of co-receptor engagement [SUBSECTION] In order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. * = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. * = p < 0.05, ** = p < 0,01, *** = p < 0,001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).\nIn order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. 
* = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. * = p < 0.05, ** = p < 0,01, *** = p < 0,001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).", "Sixty-nine subjects (33 males, 36 females, mean age 37.20 ± 1.90) were included. Blood samples from 20 healthy individuals with no history of allergy or asthma, 18 allergic asthmatics (AR), 18 allergic rhinitics (R), and 13 non allergic asthmatics (A) were collected. Characteristics of the patients are shown in table 1.\nCharacteristics of the patients\nFEV1 = Forced expiratory volume in 1 second\n* Values are mean ± standard error (SE),\n** = p < 0.01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nNone of the subjects was a smoker. Patients interrupted their local or systemic steroids or antihistamines 15 days before sampling. Asthmatics were mild asthmatics for one half and moderate asthmatics for the other half. All allergic patients displayed symptoms compatible with allergic rhinitis. All non allergic asthmatics also complained from nasal symptoms. Healthy volunteers did not report any symptom.\nSputum eosinophil counts were significantly higher in asthmatics than in control subjects or allergic rhinitis, with no significant difference between allergic and non allergic asthmatics. None of the subjects was sensitized to birch. The age difference between the A+R group and other groups (A, R and C) was not significant statistically.", "Treg cells proportion, Th1 and Th2 cytokines production and co-receptors expression (CTLA-4, ICOS, CD28) in each group were first assessed by flow cytometry, prior to any specific stimulation.\nIn non-stimulated conditions, CTLA-4+ T cells were decreased in asthmatics (p < 0.05 vs controls, figure 1A), whatever their allergic status. In keeping with this result, a reduced Treg population (p < 0.025, figure 1B) was found in these patients. Relevantly, Treg cell proportions were higher in mild asthmatics than in moderate counterparts (p < 0.012, figure 1C). IFN-γ + cells were increased (p < 0.022 vs controls, figure 1D) in asthmatics. No significant difference in Th2 cytokines or IL-10 production was found (table 2) between groups.\nT cell activation and co-receptor expression before specific stimulation. CTLA-4 expression (A), Treg cells (CD4+CD25+HiFoxp3+, B), IFN-γ producing T cells (D) and ICOS expression (E) were assessed by flow cytometry in PBMC from HDM allergic rhinitics (R) (triangle, n = 18), allergic asthmatics and rhinitics (AR) (square, n = 18), non allergic asthmatics (A) (lozenge, n = 13), and controls (circle, n = 20). Treg cells were also evaluated in non allergic asthma and allergic asthma between mild and moderate asthmatics (C). Results are expressed as percentage of total T cell and compared versus controls. _ : mean of each group. 
* = p < 0.05; ** = p < 0.01\nBaseline T-cell co-receptor and cytokine expression\nPBMC from each patient were cultured in complete medium during 8 days. ICOS, CD28, IL-4, IL-13, and IL-10 expression by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus controls. * = p < 0.05, ** = p < 0,01. R = allergic rhinitis; A R = allergic asthma and rhinitis; A = non allergic asthmatics.\nICOS expression was higher in R compared to controls (p = 0.029, figure 1E), but similar in AR and controls. No significant variation was found at the level of CD28 expression between groups (table 2).\nThe multiple linear regression analysis showed that asthma (A + AR) was associated to lower ICOS and CTLA-4 expression and Treg cell proportions, but to higher IFN-γ+ T cells (table 3). By contrast, allergic rhinitis (with or without asthma) was positively linked to ICOS expression.\nMultiple linear regression analysis between asthma, allergy and allergy after specific stimulation\nValues are expressed as coefficient of regression ± SE", "PBMCs were cultured in the presence or not of pDerp1 during 8 days. T cell activation and co-receptors expression were then studied by flow cytometry.\nIn AR, Der p 1 up-regulated CD28 (89.78 ± 1.33 vs 91.01 ± 1.48; p = 0.0016) and ICOS expression, and decreased CTLA-4 (figure 2A). Furthermore, Derp1 stimulation induced an increase in IL-4+ and IL-13+ cells (figure 2A), without significant variation in IFN-γ+ cells (not shown). This increase in Th2 cells was associated to a decrease in IL-10+ cells and Treg cells (figure 2A).\nT cell activation and co-receptor expression after specific stimulation. ICOS, CTLA-4 expression, IL-4, IL-13, IL-10 producing T cells and Treg cells (CD4+CD25+HiFoxp3+) were assessed by flow cytometry in PBMC from HDM allergic asthmatics and rhinitics (AR) (A, n = 18), HDM allergic rhinitics (R) (B, n = 18), non allergic asthmatics (A) (C, n = 13), and controls (D, n = 20) stimulated or not with Derp1 allergen (1 μg/ml) during 8 days. Results are expressed as percentage of total T cells. _ : mean of each group. * = p < 0.05; ** = p < 0.01\nIn R, Derp1 also increased CD28 (89.03 ± 1.71 vs 91.00 ± 1.48; p = 0.0025) but not ICOS expression (figure 2B). It decreased CTLA-4+ cell proportions. Allergen stimulation induced an increase in Th2 cells without variation of IFN-γ + cells (not shown), and a decrease in IL10+ and Treg cells (figure 2B).\nTherefore at the exception of ICOS, that was already increased at baseline in R and thus could not increase upon stimulation, the profile of T cell activation and co-receptor expression induced by Derp1 was similar in AR and R subjects.\nAfter specific stimulation (figure 2C-D), T cells from asthmatic and non asthmatic allergics displayed higher expression of ICOS (p < 0.02) and lower expression of CTLA-4 compared to controls (p < 0.007). In addition Th2 cell proportions were higher in allergics whereas Treg cells were decreased (IL-4, p < 0.0022; IL-13, p < 0.0001; Treg, p < 0.008). CD28+ cell percentages were not different between groups after allergen-specific stimulation (not shown). 
In non allergic subjects (figure 2C-D) no significant variation was found in any of the parameters studied\nThe multiple linear regression analysis showed that after Derp 1 specific stimulation, allergy (R + AR) correlated positively with percentages of ICOS, IL-13 and IL-4-expressing T cells and negatively with CTLA-4 and IL-10-expressing T cells (table 3).\nNo variation was found in any subject for any co-receptor or cytokine expression after stimulation with irrelevant rBetv1 (not shown).", "In order to study the respective role of CD28, ICOS and CTLA-4 in T cell activation patterns in the context of allergen presentation, PBMC were stimulated with Derp1 in the absence or presence of anti-ICOS, anti-CTLA-4 or anti-CD28 mAb.\nIn allergics, whatever the asthmatic status (R + AR), anti-ICOS and anti-CD28 mAb specifically decreased IL-4+ and IL-13+ cells (figure 3A and table 4), but had no influence on IFN-γ+ cells (table 4). Anti-CTLA-4 mAb had no effect on IL-4+ cells, but unexpectedly decreased IL-13+ cell proportions (table 4).\nEffect of anti-co-receptors antibodies on IL-13 production by T cells. PBMC from allergic rhinitics (R) (triangle, n = 12), allergic rhinitic and asthmatics (AR) (square, n = 10) and non allergic asthmatics (A) (lozenge, n = 11) were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA-4 or anti-CD28 antibodies. IL-13 expressing T cells were then compared in each group versus baseline. Results are expressed as percentage of total T cells. Black line : mean of each group. * = p < 0.05; ** = p < 0.01; *** = p < 0.001\nEffect of anti-co-receptors antibodies on Treg cells, IL-10 and IFN-γ production\nPBMC from allergic non asthmatics (R, n = 12), allergic asthmatics (AR, n = 10) and non allergic asthmatics (A, n = 11), were stimulated with Derp 1 and cultured in the presence or absence of anti-ICOS, anti-CTLA4 or anti-CD28 antibodies. Treg cells, IL-10 and IFN-γ production by T-cells were assessed by flow cytometry. Results are expressed as mean of total T-cells ± SE and compared versus absence of anti-co-receptors antibodies conditions. R = allergic rhinitis; A R = allergic asthma and rhinitis. * = p < 0.05, ** = p < 0,01, *** = p < 0,001.\nIn non allergic subjects (A + controls), anti-co-receptor antibodies did not affect Th1 or Th2 cytokine production (figure 3 and table 4).", "The results of our ex vivo study strongly suggest a contrasted picture of T cell activation in allergic rhinitis and asthma, with distinct patterns of Th1, Th2 and Treg profiles and expression of ICOS, CD28 and CTLA-4 co-receptors.\nIndeed, we showed that in asthma, IFN-γ production was constitutive, did not increase upon allergen stimulation, and was not blocked by any of the anti-co-receptor antibodies. Similarly, the constitutive defect of Treg and CTLA-4 expression seen in asthmatics and not enhanced in non allergic asthmatics after allergen stimulation was not modified after co-receptors blockade. The Th1/Treg imbalance in asthma is therefore constitutive and independent of allergen presentation.\nThe constitutive Th1 activation in asthma was demonstrated before [10,12,21]. It could result from the intrinsic defect in the CTLA-4+ and Treg populations as CTLA-4, known to be involved in tolerance induction [22], could prevent the asthmatic inflammation by inducing T cells to differentiate in T regulatory cells. 
Recently, we showed in in vivo studies a lower proportion of Treg cells in blood from severe refractory asthmatics compared to controls, a deficit that was even deeper during exacerbations, both in blood and in induced sputum [9]. Here we show that this lower proportion of Treg cells is also present in milder stages of asthma. Relevantly, Treg cells were higher in mild than in moderate asthma whatever the allergic status. These results are concordant with the primary Treg cell deficiency suggested in asthma and allergy [23]. That the Th1/Treg imbalance is similar in allergic and non allergic asthma suggests that it is a characteristic of asthma independent of allergy, possibly triggered by infectious agents or non-specific substances such as pollutants; it should be noted, however, that the asthmatics included in the present study were controlled and had not experienced any recent exacerbation. Another hypothesis would be that the Th1/Treg imbalance in asthma is truly intrinsic and independent of any external aggression.\nIn the allergic groups, we demonstrated a Th2/Treg imbalance inducible upon allergen stimulation. That this Th2 activation was not seen in non allergic patients and could be broken by CD28 and ICOS blockade indicates that it is indeed cognate allergen presentation by antigen-presenting cells that is responsible for it. IL-13 secretion was also suppressed by blocking CTLA-4, indicating that in peripheral cells (1) Th2 activation cannot be considered globally, since Th2 cytokines are regulated distinctly, and (2) CTLA-4 is involved not only in tolerance but also in inflammation. This result is concordant with Lordan et al., who showed that allergen-induced production of IL-5 and IL-13 by PBMC from allergic asthmatics could be inhibited by blocking the CTLA-4 receptor with CTLA-4-Ig [24]. Regarding the allergen-induced Treg defect in allergics, co-receptors other than those tested here are likely involved, among which PD-1 is a candidate [25]. Indeed, Meiler et al. recently demonstrated in PBMC from allergic patients that the suppressive effect of IL-10-secreting T cells was partially inhibited by blocking the CTLA-4 or PD-1 co-receptors, whereas blocking both receptors simultaneously had an additive effect [26].\nThe association of allergy with ICOS over-expression before any allergen stimulation suggests a non-specific priming of T cells towards the Th2 pathway in allergic subjects. Indeed, ICOS was clearly related to Th2 activation, as shown by the results of anti-ICOS blockade. Numerous studies using animal models of airway inflammation have shown that ICOS-mediated signalling is essential for the induction of Th2 cytokines [27,28]. Indeed, inhibition of ICOS suppresses allergic lung inflammation and Th2 cytokine production in mouse models [29]. However, in other models ICOS engagement induces tolerance and inhibits allergic inflammation. These distinct actions of ICOS seem related to the density of ICOS molecules per cell, with inflammation being associated with a high density of the co-receptor and tolerance induction with a lower number of ICOS molecules per cell [30].\nThat ICOS expression does not increase after allergen stimulation in R, in contrast with the AR group, could result from ICOS expression already being maximal in R whereas it is still inducible in AR. Indeed, the basal level of ICOS expression is lower in the latter group than in the former. 
This relative defect in ICOS expression in AR patients could result from the constitutive Th1/Treg imbalance of asthmatics, which, through a Th1-driven "anti-Th2" effect, would decrease ICOS expression.\nUnder allergen stimulation, CD28 expression increased significantly in R and AR, and blockade of CD28 decreased Th2 cytokine production, indicating the involvement of CD28 in Th2 cell activation in allergy. It is noteworthy that, although statistically significant, the increase in the proportion of CD28+ cells was necessarily small, as most T cells constitutively expressed CD28 in all groups. CD28 is a crucial co-receptor for inducing T cell cytokine production [31], and has been shown to be involved in both Th1 and Th2 activation. CD28 blockade has been proposed as an immunosuppressive strategy to prevent graft rejection and is being tested in various inflammatory diseases. However, the practical use of CD28 blockade has been held back by the agonist action of some anti-CD28 antibodies encountered in clinical trials [32].\nOur study provides new insights into the hypothesis of Treg cell deficiency as a paradigm for allergic diseases, by showing a constitutive Treg cell deficit in asthma whatever the allergic status and an inducible Treg deficit in allergy, whatever the presence of asthma. As a consequence, the Treg cell deficiency is greatest in allergic asthmatics after allergen stimulation. This distinction between allergy and asthma contradicts our previous hypothesis of a gradient of Treg cell deficiency from allergy to asthma [23], and suggests instead that the abnormalities seen in the two diseases could be juxtaposed and independent, as shown by the multiple linear regression analysis.\nRecently, an in vivo study showed no difference in the number of Treg cells between asthmatics and controls, whereas FOXP3 protein expression within Treg cells was significantly decreased in asthmatic patients [33]. Our study was performed on blood ex vivo and therefore might not fully reflect the in vivo and in situ reality. However, many studies have shown that the blood compartment is relevant to in situ inflammation as far as T cells and allergy are concerned [21], and the mechanistic studies proposed here cannot be performed in situ in humans. They can be performed in vivo in animals, but the relevance to human asthma would also be uncertain.\nIn conclusion, allergy is associated with constitutive ICOS over-expression and inducible CTLA-4 under-expression with a Th2/Treg imbalance, whereas constitutive CTLA-4 under-expression and a Th1/Treg disequilibrium appear as a hallmark of asthma. Both profiles are mixed in allergic asthma, and one can argue that asthma occurs in allergic subjects only if the unknown conditions leading to constitutive Th1 activation are present. Still missing from the puzzle is the stimulus inducing the Th2 activation present in non allergic asthma [3]. 
Lastly, our results demonstrate that although targeting one type of T cell activation only would be a pitfall in allergic asthma, there is a rationale to develop strategies based on targeting co-receptors in allergy.", "In conclusion, our work adds significant insights into the immune mechanism involved in allergy and asthma and states the rationale for new diagnosis and/or therapeutic strategies in these pathologies.", "The authors declare that they have no competing interests.", "All the authors have contributed significantly to the research and preparation of the manuscript, and they approve its submission.", "Skin testing, methacholine challenge and induced sputum procedures.\nClick here for file" ]
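As a side note on the multiple linear regression summarised above (asthma and allergy status used as joint predictors of co-receptor expression, table 3), the following is a small, purely illustrative Python/statsmodels sketch of how such a model can be set up. The column names, the simulated values and the effect sizes are assumptions for demonstration only, not the study's code or data.

```python
# Illustrative sketch (assumed, not the authors' analysis code) of a multiple
# linear regression with asthma and allergy status as predictors of a T-cell
# marker, reported as coefficient of regression +/- standard error.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 69  # total number of subjects across the study groups
df = pd.DataFrame({
    "asthma":  rng.integers(0, 2, size=n),   # 1 = asthmatic (A or AR)
    "allergy": rng.integers(0, 2, size=n),   # 1 = HDM-allergic (R or AR)
})
# Simulated outcome: percentage of ICOS+ T cells with an arbitrary allergy effect.
df["icos_pct"] = 5.0 + 2.0 * df["allergy"] - 1.0 * df["asthma"] + rng.normal(0, 1.5, size=n)

model = smf.ols("icos_pct ~ asthma + allergy", data=df).fit()
print(model.params)  # regression coefficients
print(model.bse)     # standard errors (cf. "coefficient of regression ± SE")
```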
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Differential contribution of electrically evoked dorsal root reflexes to peripheral vasodilatation and plasma extravasation.
21356101
Dorsal root reflexes (DRRs) are antidromic activities traveling along the primary afferent fibers, which can be generated by peripheral stimulation or central stimulation. DRRs are thought to be involved in the generation of neurogenic inflammation, as indicated by plasma extravasation and vasodilatation. The hypothesis of this study was that electrical stimulation of the central stump of a cut dorsal root would lead to generation of DRRs, resulting in plasma extravasation and vasodilatation.
BACKGROUND
Sprague-Dawley rats were prepared under pentobarbital general anesthesia to expose the spinal cord and the L4-L6 dorsal roots. Electrical stimulation was applied either to intact dorsal roots or to the central (proximal) or peripheral (distal) stumps of cut dorsal roots while plasma extravasation or blood perfusion of the hindpaw was recorded.
METHODS
While stimulation of the peripheral stump of a dorsal root elicited plasma extravasation, electrical stimulation of the central stump of a cut dorsal root generated significant DRRs, but failed to induce plasma extravasation. However, stimulation of the central stump induced a significant increase in blood perfusion.
RESULTS
It is suggested that DRRs are involved in vasodilatation but not plasma extravasation in neurogenic inflammation in normal animals.
CONCLUSIONS
[ "Animals", "Blood Vessels", "Electric Stimulation", "Foot", "Male", "Neurogenic Inflammation", "Plasma", "Rats", "Rats, Sprague-Dawley", "Reflex", "Regional Blood Flow", "Spinal Nerve Roots", "Vasodilation" ]
3058041
null
null
Methods
[SUBTITLE] Animal preparation [SUBSECTION] A total of 19 adult male Sprague-Dawley rats weighing 300-400 g were used for this study, 8 for vasodilatation, and 11 for plasma extravasation. All procedures used in this study were approved by the Animal IACUC and followed the guidelines for the treatment of animals of the International Association for the Study of Pain [48]. Animals were initially anesthetized with sodium pentobarbital (50 mg/kg, i.p.). A catheter was placed into the jugular vein for continuous administration of anesthetic (sodium pentobarbital, 5-8 mg·kg-1·h-1 in a saline solution) and for Evans Blue injection in plasma extravasation experiments. The level of anesthesia was monitored by the stability of the level of end-tidal CO2 at around 30 mmHg and by the absence of flexion reflex. Tracheotomy was performed for artificial ventilation. The animal's body temperature was maintained at 37°C by a feedback controlled electric heating blanket. A 4-cm-long laminectomy was performed over the lumbosacral enlargement to expose the spinal cord and L4-L6 dorsal roots. The rat was held in a stereotaxic frame to prevent movement during recording. The skin over the laminectomy formed a pool and was filled with light mineral oil. [SUBTITLE] DRR recordings [SUBSECTION] A silver wire hook electrode was used to record extracellular single-unit discharges in filaments of the L4 through L6 dorsal roots. A small strand of the dorsal root was teased centrally from the main trunk and was further separated into a fine filament containing one or a few active fibers. This filament was then wrapped around the recording electrode (Figure 1A). Diagram of the experimental setup. A. The left L4 dorsal root is cut. The central stump is placed over a stimulating electrode (SE) while placing a strand from the left L5 dorsal root on a recording electrode (RE). B. A second cut is made at the left L5 dorsal root. The peripheral stump of the left L5 dorsal root is placed over a stimulating electrode. Seven minutes after Evans Blue injection (i.v.), electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min) is delivered, while a series of images is taken. C. A setup for stimulating an intact dorsal root. 
Data recording and analysis was performed by using a CED 1401Plus data acquisition system and SPIKE2 software (Cambridge Electronic Design Ltd, UK). [SUBTITLE] Electrical stimulation [SUBSECTION] Electrical stimulation of the central stump was performed in 16 animals, 8 for plasma extravasation and 8 for laser Doppler measurement. A tripolar electrode was used for electrical stimulation in order to minimize stimulus artifact and to avoid current spread. The cathode was in the middle of the array, and two anodes, one on each side of the cathode, were separated from the cathode by 1 mm [46]. The L4 or L5 dorsal roots were cut and the central stump was placed on the electrode for electrical stimulation (Figure 1A), while a teased strand from a nearby intact dorsal root was used for recording DRRs. Then stimulation was applied to this central stump of the cut dorsal root for 5 minutes at 20 V, 5 Hz, 0.5 ms pulse duration. At the end of stimulation of the central stump (L4 or L5) for plasma extravasation, the dorsal root that was used for DRR recording (L5 or L6) was cut and the peripheral stump was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1B). 
In 3 animals, intact L4 or L5 dorsal root was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1C) [SUBTITLE] Plasma extravasation measurement [SUBSECTION] Plasma extravasation measurements were performed in 11 animals, 8 for central and peripheral stump stimulation (Figure 1A) and 3 for intact dorsal root stimulation (Figure 1C). Evans Blue was injected intravenously (50 mg/kg, using the catheter in the jugular vein) for detection of the sites of plasma extravasation 7 minutes before the start of electrical stimulation. Pictures were taken by an 8 megapixel camera (Nikon Coolpix 8700) on a tripod. Constant light condition, manual set of aperture, and exposure time were maintained during the course of the experiment. Pictures of the plantar side of the rat paw were taken 2 minutes after Evans Blue injections, then every 30 seconds during the course of electrical stimulation and for another 5 minutes or longer after the end of electrical stimulation. Matlab image analysis tool (The MathWorks, Inc., MA) was used to determine the dynamic change of color in the development of plasma extravasation. The whole hindpaw was selected as a region of interest. This method was conceptually similar to a dynamic measurement of plasma extravasation by using CCD video camera that has been developed recently [49,50]. [SUBTITLE] Cutaneous blood flow measurement [SUBSECTION] Changes in cutaneous blood perfusion were measured in 8 animals to detect local vasodilatation (flare) in response to electrical stimulation of the central stump of the L4 or L5 dorsal roots. The measurements were done using Laser Doppler Imager (PeriScan PIM II, Perimed AB, Sweden). 
After 10 baseline images of the plantar side of the rat hindpaws, continuous scans were taken during and after stimulation (20 V, 5 Hz, 0.5 ms pulse duration for 5 minutes) of the central stump of the cut dorsal root. Approximately 20 images were continuously taken after the end of stimulation. It required 2 minutes to acquire one image. [SUBTITLE] Data analysis [SUBSECTION] The stored digital record of unit activity was retrieved and analyzed off-line. The frequency of DRRs was calculated for the periods before (3 min), during (5 min), and after (3 min) the electrical stimulation. Statistical significance was tested using a paired t-test. Matlab software was used to measure the intensity of colors on the rat paw. The same region of interest was selected in the set of pictures from each experiment and the change in color intensity on a gray scale was analyzed. The color intensity from Matlab is given in arbitrary units (AU) for raw data. Normalization was calculated by the following formula: [(color intensity at any time point - color intensity before stimulation) / color intensity before stimulation] × 100%. A negative value represents a darker color, suggesting plasma extravasation. One-way ANOVA followed by post hoc Fisher LSD tests was used to detect significant differences across time as compared to the baseline. For blood perfusion, a region of interest covering the whole paw was selected. The average perfusion (arbitrary units, AU) in the selected area of each image frame was used to calculate the percentage change. 
The first 10 images at baseline were averaged as control for subsequent change during and after stimulation: [(blood perfusion at any time point - average of first 10 blood perfusion images before stimulation) / average of first 10 blood perfusion images before stimulation] × 100%. Repeated measures ANOVA followed by post hoc Fisher LSD tests was used to detect significant differences over time as compared to the baseline. All values were presented as means ± SEM. A change was judged significant if p < 0.05.
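The percentage-change normalizations and the basic statistics described in the Data analysis subsection are straightforward to reproduce. Below is a minimal Python/NumPy sketch of that workflow; the original study used Matlab for image intensities and SPIKE2 for spike trains, so all function names, variable names and the numbers in the example are illustrative assumptions, not the authors' code or data.

```python
# Minimal sketch (assumed/illustrative, not the authors' code) of the analysis
# described above: percent-change normalization of ROI intensities and the
# basic statistics on DRR firing rates.
import numpy as np
from scipy import stats

def roi_mean_intensity(gray_image, roi_mask):
    """Mean gray-scale value (arbitrary units, AU) inside a region of interest."""
    return float(gray_image[roi_mask].mean())

def percent_change(values, baseline):
    """[(value - baseline) / baseline] * 100; negative = darker (extravasation)."""
    values = np.asarray(values, dtype=float)
    return (values - baseline) / baseline * 100.0

def drr_rate(spike_times_s, t_start_s, t_stop_s):
    """DRR firing rate (Hz) in a time window, from spike time stamps in seconds."""
    spikes = np.asarray(spike_times_s)
    n = np.count_nonzero((spikes >= t_start_s) & (spikes < t_stop_s))
    return n / (t_stop_s - t_start_s)

# Synthetic example values (hypothetical, for illustration only):
rates_before = np.array([0.10, 0.00, 0.22, 0.05, 0.08, 0.12, 0.03, 0.15])  # Hz
rates_during = np.array([2.10, 0.13, 5.27, 1.40, 0.90, 3.20, 0.80, 4.30])  # Hz
t_stat, p_paired = stats.ttest_rel(rates_during, rates_before)  # paired t-test

# One-way ANOVA across time points (baseline vs. two later frames); the Fisher
# LSD post hoc test is not part of SciPy and is omitted from this sketch.
intensity_baseline = np.array([110.0, 104.0, 126.0, 120.0])
intensity_t1 = np.array([112.0, 106.0, 118.0, 121.0])
intensity_t2 = np.array([115.0, 103.0, 101.0, 117.0])
f_stat, p_anova = stats.f_oneway(intensity_baseline, intensity_t1, intensity_t2)

print(f"paired t-test p = {p_paired:.3f}, one-way ANOVA p = {p_anova:.3f}")
```

A repeated-measures ANOVA over the perfusion time series could be set up analogously with statsmodels (statsmodels.stats.anova.AnovaRM); it is omitted here for brevity.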
null
null
null
null
[ "Background", "Animal preparation", "DRR recordings", "Electrical stimulation", "Plasma extravasation measurement", "Cutaneous blood flow measurement", "Data analysis", "Results", "Dorsal root reflexes can be elicited at the central dorsal root filaments by electrical stimulation of a neighboring central stump of the cut dorsal root", "Effects of electrical stimulation of the central stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral and contralateral hindpaws", "Effects of electrical stimulation of the peripheral stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw", "Effects of electrical stimulation of the intact dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw", "Effects of electrical stimulation of the central stump of the cut dorsal root on vasodilatation on the plantar surface of bilateral hindpaws", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Somatosensory information is generally considered to originate in the peripheral terminals of primary afferent neurons, and is then transmitted to the spinal cord or brain. However, activity in primary afferent neurons can be generated within the spinal cord, and the impulses can travel antidromically toward the periphery. These phenomena are called dorsal root reflexes (DRR). Dorsal root reflexes were first discovered by Gotch and Horsley [1], and were studied extensively in later works [2-9].\nDRRs are thought to contribute to neurogenic inflammation. The main components of neurogenic inflammation include, but are not limited to, arteriolar vasodilatation and plasma extravasation. Neurogenic inflammation is triggered by substances released from sensory nerve terminals, including substance P (SP) and calcitonin gene-related peptide (CGRP). SP, as well as other tachykinins such as neurokinin A (NKA) and neurokinin B (NKB), cause plasma extravasation by a specific action on NK1, NK2, and NK3 receptors [10] to increase vascular permeability [11-13]. SP and NKA play a major role in the periphery, whereas NKB is mainly found in the CNS [14]. CGRP is active in dilating cutaneous arterioles [15] via the CGRP1 receptor [11,13]. Both tachykinins and CGRP are found in the peripheral endings of sensory nerves [16-19] and released from both C and Aδ fibers [11].\nNeurogenic inflammation has been shown to be caused by antidromic electrical stimulation of afferent nerves [20-26], by intradermal injection of capsaicin [27], and in acute [28-31] or chronic [32] arthritis experiments. Enhanced afferent discharges cause the central terminals of primary afferent fibers to release excitatory amino acids, which then activate non-NMDA and NMDA receptors on GABAergic interneurons, leading to the release of GABA on primary afferent central terminals [33-36]. GABA produces excessive primary afferent depolarization (PAD) through GABAA receptors located on presynaptic terminals of primary afferents [37,38]. When PAD exceeds the threshold, DRRs are generated [39,40], which are conducted antidromically in both myelinated and unmyelinated fibers toward the periphery [41,42], and can be blocked by the spinal GABAA antagonist, bicuculline [43]. This antidromic activity could result in the release of inflammatory mediators (e.g., SP), as was shown in the knee joint [23,44].\nDRRs can be induced by electrical stimulation of peripheral nerves, both ipsilaterally and contralaterally [45], as well as by supraspinal stimulation of the periaqueductal grey [46]. In the present study we tried to mimic the incoming nociceptive input to the spinal cord by electrical stimulation of the central stump of the dorsal root, and test whether the electrically evoked DRRs can contribute to the development of neurogenic inflammation - vasodilatation and plasma extravasation. Preliminary results have been reported [47].", "A total of 19 adult male Sprague-Dawley rats weighing 300-400 g were used for this study, 8 for vasodilatation, and 11 for plasma extravasation. All procedures used in this study were approved by the Animal IACUC and followed the guidelines for the treatment of animals of the International Association for the Study of Pain [48].\nAnimals were initially anesthetized with sodium pentobarbital (50 mg/kg, i.p.). A catheter was placed into the jugular vein for continuous administration of anesthetic (sodium pentobarbital, 5-8 mg·kg-1·h-1 in a saline solution) and for Evans Blue injection in plasma extravasation experiments. 
The level of anesthesia was monitored by the stability of the level of end-tidal CO2 at around 30 mmHg and by the absence of flexion reflex. Tracheotomy was performed for artificial ventilation. The animal's body temperature was maintained at 37°C by a feedback controlled electric heating blanket. A 4-cm-long laminectomy was performed over the lumbosacral enlargement to expose the spinal cord and L4-L6 dorsal roots. The rat was held in a stereotaxic frame to prevent movement during recording. The skin over the laminectomy formed a pool and was filled with light mineral oil.", "A silver wire hook electrode was used to record extracellular single-unit discharges in filaments of the L4 through L6 dorsal roots. A small strand of the dorsal root was teased centrally from the main trunk and was further separated into a fine filament containing one or a few active fibers. This filament was then wrapped around the recording electrode (Figure 1A).\nDiagram of the experimental setup. A. The left L4 dorsal root is cut. The central stump is placed over a stimulating electrode (SE) while placing a strand from the left L5 dorsal root on a recording electrode (RE). B. A second cut is made at the left L5 dorsal root. The peripheral stump of the left L5 dorsal root is placed over a stimulating electrode. Seven minutes after Evans Blue injection (i.v.), electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min) is delivered, while a series of images is taken. C. A setup for stimulating an intact dorsal root.\nData recording and analysis was performed by using a CED 1401Plus data acquisition system and SPIKE2 software (Cambridge Electronic Design Ltd, UK).", "Electrical stimulation of the central stump was performed in 16 animals, 8 for plasma extravasation and 8 for laser Doppler measurement. A tripolar electrode was used for electrical stimulation in order to minimize stimulus artifact and to avoid current spread. The cathode was in the middle of the array, and two anodes, one on each side of the cathode, were separated from the cathode by 1 mm [46]. The L4 or L5 dorsal roots were cut and the central stump was placed on the electrode for electrical stimulation (Figure 1A), while a teased strand from a nearby intact dorsal root was used for recording DRRs. Then stimulation was applied to this central stump of the cut dorsal root for 5 minutes at 20 V, 5 Hz, 0.5 ms pulse duration.\nAt the end of stimulation of the central stump (L4 or L5) for plasma extravasation, the dorsal root that was used for DRR recording (L5 or L6) was cut and the peripheral stump was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1B).\nIn 3 animals, intact L4 or L5 dorsal root was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1C)", "Plasma extravasation measurements were performed in 11 animals, 8 for central and peripheral stump stimulation (Figure 1A) and 3 for intact dorsal root stimulation (Figure 1C). Evans Blue was injected intravenously (50 mg/kg, using the catheter in the jugular vein) for detection of the sites of plasma extravasation 7 minutes before the start of electrical stimulation. Pictures were taken by an 8 megapixel camera (Nikon Coolpix 8700) on a tripod. Constant light condition, manual set of aperture, and exposure time were maintained during the course of the experiment. 
Pictures of the plantar side of the rat paw were taken 2 minutes after Evans Blue injections, then every 30 seconds during the course of electrical stimulation and for another 5 minutes or longer after the end of electrical stimulation. Matlab image analysis tool (The MathWorks, Inc., MA) was used to determine the dynamic change of color in the development of plasma extravasation. The whole hindpaw was selected as a region of interest. This method was conceptually similar to a dynamic measurement of plasma extravasation by using CCD video camera that has been developed recently [49,50].", "Changes in cutaneous blood perfusion were measured in 8 animals to detect local vasodilatation (flare) in response to electrical stimulation of the central stump of the L4 or L5 dorsal roots. The measurements were done using Laser Doppler Imager (PeriScan PIM II, Perimed AB, Sweden). After 10 baseline images of the plantar side of the rat hindpaws, continuous scanning were taken during and after stimulation (20 V, 5 Hz, 0.5 ms pulse duration for 5 minutes) of the central stump of the cut dorsal root. Approximately 20 images were continuously taken after the end of stimulation. It required 2 minutes to acquire one image.", "The stored digital record of unit activity was retrieved and analyzed off-line. The frequency of DRRs was calculated for the periods before (3 min), during (5 min), and after (3 min) the electrical stimulation. Statistical significance was tested using paired t-test.\nMatlab software was used in order to measure the intensity of colors on the rat paw. The same region of interest was selected in the set of pictures from each experiment and the change in color intensity in a gray scale was analyzed. The color intensity from Matlab is given as arbitrary unit (AU) for raw data. Normalization was calculated by the following formula: [(color intensity at any time point - color intensity before stimulation) / color intensity before stimulation] × 100%. A negative value represents a darker color, suggesting plasma extravasation. One-way ANOVA followed by Posthoc Fisher LSD Test was used to detect significant differences across time as compared to the baseline.\nFor blood perfusion, the region of interest was selected which covered the whole paw. An average of perfusion (arbitrary unit, AU) in the selected area of each image frame was used for further calculating percentage change. The first 10 images at baseline were averaged as control for subsequent change during and after stimulation: [(blood perfusion at any time point - average of first 10 blood perfusion images before stimulation) / average of first 10 blood perfusion images before stimulation] × 100%. Repeated measures ANOVA followed by Posthoc Fisher LSD test was used to detect significant differences along the time as compared to the baseline.\nAll values were presented as means ± SEM. A change was judged significant if p < 0.05.", "[SUBTITLE] Dorsal root reflexes can be elicited at the central dorsal root filaments by electrical stimulation of a neighboring central stump of the cut dorsal root [SUBSECTION] After the left L4 dorsal root was cut, the central stump was placed on the stimulating electrode. To ensure that DRRs can be elicited, a small fascicle of neighboring dorsal root (usually L5) was teased centrally and was placed in a recording electrode (Figure 1A). Multiunit spontaneous antidromic discharges were recorded from all 8 animals that were tested for plasma extravasation. 
The discharges were irregular and usually at a very low rate but increased during electrical stimulation of L4 (Figure 2A-C). Average mean spontaneous activity was 0.09 ± 0.03 Hz (range: 0-0.22 Hz; n = 8). In most recorded units, additional DRR activity could be evoked by applying a graded mechanical stimulus (brush, pressure, and pinch) to the skin of the foot (data not shown). One cell was found whose receptive field was covering the whole body, as previously reported by others [43,51]. During electrical stimulation (20 V, 5 Hz, 5 ms), a significant increase in DRRs was observed (2.28 ± 0.76 Hz; range: 0.13-5.27 Hz; n = 8, P < 0.05). The activity of antidromic discharges returned to normal 2 min (127 ± 70 s, range from 0 to 585 s) after the termination of electrical stimulation (0.14 ± 0.04 Hz; range: 0-0.36 Hz; n = 8). Four out of eight fibers returned to baseline as soon as the stimulation was terminated; one fiber lasted as long as 10 min.\nDorsal root reflexes from a left L5 filament. A representative strand shows that DRRs can be recorded from the central stump of the dorsal root (L5) while the L4 central stump is stimulated (A). Each vertical line indicates a DRR. In this strand, about 4 fibers show DRR activity, based on their amplitudes and shapes. The horizontal line indicates the duration of electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min). During stimulation, an obvious increase of DRRs is demonstrated, which is summarized in (C). An expanded trace is shown in B. *: p < 0.05.\nAfter the left L4 dorsal root was cut, the central stump was placed on the stimulating electrode. To ensure that DRRs can be elicited, a small fascicle of neighboring dorsal root (usually L5) was teased centrally and was placed in a recording electrode (Figure 1A). Multiunit spontaneous antidromic discharges were recorded from all 8 animals that were tested for plasma extravasation. The discharges were irregular and usually at a very low rate but increased during electrical stimulation of L4 (Figure 2A-C). Average mean spontaneous activity was 0.09 ± 0.03 Hz (range: 0-0.22 Hz; n = 8). In most recorded units, additional DRR activity could be evoked by applying a graded mechanical stimulus (brush, pressure, and pinch) to the skin of the foot (data not shown). One cell was found whose receptive field was covering the whole body, as previously reported by others [43,51]. During electrical stimulation (20 V, 5 Hz, 5 ms), a significant increase in DRRs was observed (2.28 ± 0.76 Hz; range: 0.13-5.27 Hz; n = 8, P < 0.05). The activity of antidromic discharges returned to normal 2 min (127 ± 70 s, range from 0 to 585 s) after the termination of electrical stimulation (0.14 ± 0.04 Hz; range: 0-0.36 Hz; n = 8). Four out of eight fibers returned to baseline as soon as the stimulation was terminated; one fiber lasted as long as 10 min.\nDorsal root reflexes from a left L5 filament. A representative strand shows that DRRs can be recorded from the central stump of the dorsal root (L5) while the L4 central stump is stimulated (A). Each vertical line indicates a DRR. In this strand, about 4 fibers show DRR activity, based on their amplitudes and shapes. The horizontal line indicates the duration of electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min). During stimulation, an obvious increase of DRRs is demonstrated, which is summarized in (C). An expanded trace is shown in B. 
*: p < 0.05.\n[SUBTITLE] Effects of electrical stimulation of the central stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral and contralateral hindpaws [SUBSECTION] When the central stump of the left L4 dorsal root was stimulated, there was no obvious plasma extravasation observed in the ipsilateral (left) paw (Figure 3, 1st row) and contralateral (right) paw (Figure 3, 2nd row). The color intensities before stimulation were 109.57 ± 10.02 in the left paw (Figure 3G) and 103.10 ± 4.25 AU (arbitrary unit) (Figure 3G) in the right paw among 8 animals. During and after central left L4 stimulation, the color intensity for the left paw ranged from 113.72 ± 9.81 to 116.96 ± 8.96 AU (Figure 3G); the color intensity for the right paw ranged from 102.57 ± 4.53 to 106.43 ± 3.46 AU (Figure 3G). Data were analyzed by ANOVA to test differences between sides (ipsilateral and contralateral), and among effects of time (C to 10 min) following central stump stimulation. The results indicated no effect of stimulation side, F (1, 7) = 1.86, p = 0.22; no effect of time, F (20, 140) = 1.33, p = 0.17; and no effect of interaction (Side × Time), F (20, 140) = 1.25, p = 0.23.\nA series of representative images of the left and right paws while the central stump of cut L4 (1st and 2nd rows) and peripheral stump of cut L5 (3rd row) dorsal root was stimulated. The blue patches on the paw indicate plasma extravasation due to leakage of Evans Blue (EB). Note: A - before EB injection, B - after EB injection, C - 0.5 min, D - 2 min, E - 3.5 min, and F - 5 min after onset of stimulation. A summary shows the changes in color intensity (G) or changes in percentage of color intensity (H) of the ipsilateral (left) or contralateral (right) paws following electrical stimulation of either the central stump of L4 or the peripheral stump of the left L5 dorsal root. When the central stump of the left L4 dorsal root was stimulated, there were no significant differences in the color intensities (G) and percentage changes in color intensity (H) between the left (filled diamond) and right paws (open square). However, significant differences of color intensity and percentage change in color intensity were detected in the left paw when the central stump of L4 (filled diamond) and peripheral stump of L5 dorsal root (filled triangle) were stimulated. A summary of plasma extravasation induced by stimulating intact L5 dorsal root (I) shows significant plasma extravasation (as indicated by the decrease of color intensity, n = 3) 5 min after stimulation. The gray area indicates the duration of stimulation (5 min). *: p < 0.05, ***: p < 0.001, Fisher LSD test, as compared with the color intensity before electrical stimulation. +: p < 0.05, Fisher LSD test, as comparing the peripheral L5 stimulation (left paw) with the central L4 stimulation (left paw). AU: arbitrary unit; C: as a control before stimulation.\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw ranged from 4.31 ± 2.96 to 8.00 ± 2.77% (Figure 3H); the percentage change of color intensity of the right paw ranged from -0.42 ± 2.31 to 3.68 ± 2.42% (Figure 3H). Data were analyzed by ANOVA to test differences between sides of central stump stimulation (side: ipsilateral and contralateral), and among effects of time (time: C to 10 min). 
The results indicated no effect of stimulation side, F (1, 7) = 5.48, p = 0.052; no effect of time, F (20, 140) = 1.37, p = 0.15; and no effect of interaction (Side × Time), F (20, 140) = 1.23, p = 0.24.\nWhen the central stump of the left L4 dorsal root was stimulated, there was no obvious plasma extravasation observed in the ipsilateral (left) paw (Figure 3, 1st row) and contralateral (right) paw (Figure 3, 2nd row). The color intensities before stimulation were 109.57 ± 10.02 in the left paw (Figure 3G) and 103.10 ± 4.25 AU (arbitrary unit) (Figure 3G) in the right paw among 8 animals. During and after central left L4 stimulation, the color intensity for the left paw ranged from 113.72 ± 9.81 to 116.96 ± 8.96 AU (Figure 3G); the color intensity for the right paw ranged from 102.57 ± 4.53 to 106.43 ± 3.46 AU (Figure 3G). Data were analyzed by ANOVA to test differences between sides (ipsilateral and contralateral), and among effects of time (C to 10 min) following central stump stimulation. The results indicated no effect of stimulation side, F (1, 7) = 1.86, p = 0.22; no effect of time, F (20, 140) = 1.33, p = 0.17; and no effect of interaction (Side × Time), F (20, 140) = 1.25, p = 0.23.\nA series of representative images of the left and right paws while the central stump of cut L4 (1st and 2nd rows) and peripheral stump of cut L5 (3rd row) dorsal root was stimulated. The blue patches on the paw indicate plasma extravasation due to leakage of Evans Blue (EB). Note: A - before EB injection, B - after EB injection, C - 0.5 min, D - 2 min, E - 3.5 min, and F - 5 min after onset of stimulation. A summary shows the changes in color intensity (G) or changes in percentage of color intensity (H) of the ipsilateral (left) or contralateral (right) paws following electrical stimulation of either the central stump of L4 or the peripheral stump of the left L5 dorsal root. When the central stump of the left L4 dorsal root was stimulated, there were no significant differences in the color intensities (G) and percentage changes in color intensity (H) between the left (filled diamond) and right paws (open square). However, significant differences of color intensity and percentage change in color intensity were detected in the left paw when the central stump of L4 (filled diamond) and peripheral stump of L5 dorsal root (filled triangle) were stimulated. A summary of plasma extravasation induced by stimulating intact L5 dorsal root (I) shows significant plasma extravasation (as indicated by the decrease of color intensity, n = 3) 5 min after stimulation. The gray area indicates the duration of stimulation (5 min). *: p < 0.05, ***: p < 0.001, Fisher LSD test, as compared with the color intensity before electrical stimulation. +: p < 0.05, Fisher LSD test, as comparing the peripheral L5 stimulation (left paw) with the central L4 stimulation (left paw). AU: arbitrary unit; C: as a control before stimulation.\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw ranged from 4.31 ± 2.96 to 8.00 ± 2.77% (Figure 3H); the percentage change of color intensity of the right paw ranged from -0.42 ± 2.31 to 3.68 ± 2.42% (Figure 3H). Data were analyzed by ANOVA to test differences between sides of central stump stimulation (side: ipsilateral and contralateral), and among effects of time (time: C to 10 min). 
Effects of electrical stimulation of the peripheral stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw

When the peripheral stump of the left L5 dorsal root was stimulated, obvious plasma extravasation was observed in the left paw, as demonstrated by blue patches (Figure 3A-F, 3rd row). The color intensity of the left paw before stimulation was 109.57 ± 10.02 AU (arbitrary units) in the central stimulation group (n = 8, Figure 3G) and 125.94 ± 7.10 AU in the peripheral stump stimulation group (n = 7). During and after stimulation of the peripheral stump of the left L5 root, the color intensity of the left paw dropped to as low as 99.05 ± 8.06 AU. Data were analyzed by ANOVA to test for differences between stimulation sites (central vs. peripheral) and across time (C to 10 min). The results indicated no effect of stimulation site, F (1, 6) = 0.52, p = 0.5; a significant effect of time, F (20, 120) = 5.29, p < 0.001; and a significant Site × Time interaction, F (20, 120) = 7.96, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower color intensity in the left paw 2 minutes following stimulation of the peripheral stump of the dorsal root (+: p < 0.05) compared with central stump stimulation (Figure 3G), as well as compared with the intensity before stimulation (*: p < 0.05).

When the data were normalized to the color intensity before stimulation, the percentage change in color intensity of the left paw dropped to -21.28 ± 4.73% following stimulation of the peripheral stump (Figure 3H). Data were analyzed by ANOVA to test for differences between stimulation sites (central vs. peripheral) and across time (C to 10 min). The results indicated a significant effect of stimulation site, F (1, 6) = 19.44, p = 0.005; a significant effect of time, F (20, 120) = 4.20, p < 0.001; and a significant Site × Time interaction, F (20, 120) = 6.16, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the peripheral stump compared with central stump stimulation (+: p < 0.05, Figure 3H), as well as compared with the intensity before stimulation (*: p < 0.05).
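The repeated-measures ANOVAs with Fisher LSD follow-ups used in this section (for example, the Side × Time comparison of the two paws described above) could be run along the following lines. This is a hedged sketch, not the authors' original analysis code; the long-format column names ('animal', 'side', 'time', 'intensity') are assumptions, and the Fisher LSD step is rendered simply as uncorrected paired t-tests against the pre-stimulation control, to be interpreted only after a significant omnibus ANOVA.

# Hedged sketch of a Side x Time repeated-measures ANOVA with an LSD-style follow-up.
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def rm_anova(df):
    # Two-way repeated-measures ANOVA; 'side' and 'time' are within-subject factors.
    return AnovaRM(df, depvar="intensity", subject="animal",
                   within=["side", "time"]).fit()

def lsd_vs_baseline(df, side, baseline_label="C"):
    # Uncorrected paired t-tests of every time point against the pre-stimulation
    # control ("C"), i.e. a Fisher-LSD-style comparison after a significant ANOVA.
    wide = df[df["side"] == side].pivot(index="animal", columns="time", values="intensity")
    control = wide[baseline_label]
    return {t: stats.ttest_rel(wide[t], control)
            for t in wide.columns if t != baseline_label}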
Effects of electrical stimulation of the intact dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw

When the intact left L4 or L5 dorsal root was stimulated, plasma extravasation was observed in the left paw (n = 3, Figure 3A-F, 4th row). The change in color intensity of the left paw was normalized to the color intensity before stimulation (Figure 3I). The normalized change in color intensity of the left paw was -26.63 ± 5.91 at 5 min, -34.77 ± 2.81 at 10 min, -32.75 ± 3.69 at 15 min, and -35.69 ± 6.16 at 20 min after stimulation, where negative values indicate an increase in extravasation. Significant changes were found at 5, 10, 15, and 20 min after stimulation. One-way ANOVA showed a significant change after stimulation of the intact dorsal root, F (4, 8) = 31.6, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the intact dorsal root compared with before stimulation (***: p < 0.001).
Effects of electrical stimulation of the central stump of the cut dorsal root on vasodilatation on the plantar surface of bilateral hindpaws

When the central stump of the left L4 or L5 dorsal root was stimulated, obvious vasodilatation was observed in the left paw (Figure 4A-H). The change in blood perfusion of each paw was normalized from the raw data (Figure 4I) to the average of 10 baseline images and summarized (ipsilateral n = 8, contralateral n = 5, Figure 4J). Data were analyzed by ANOVA to test for differences between hindpaws (ipsilateral vs. contralateral) and across images (image numbers 1-34). The results indicated no effect of hindpaw side, F (1, 3) = 1.66, p = 0.29; a significant effect of image number, F (33, 99) = 2.11, p = 0.002; and a significant Side × Image interaction, F (33, 99) = 2.11, p = 0.002. Posthoc Fisher LSD tests indicated significantly higher blood perfusion in the left hindpaw during stimulation of the central stump of the dorsal root compared with the right hindpaw (*: p < 0.05, Figure 4I), as well as compared with the perfusion before stimulation (+: p < 0.05).

Effect of stimulation of the central stump of the cut dorsal root on blood perfusion in both hindpaws. The upper panel shows laser Doppler images from one rat as a control (A), during stimulation (B-D), and 2 (E), 14 (F), 32 (G), and 52 min (H) after the end of stimulation. Blue color indicates lower perfusion, whereas red color indicates higher perfusion. Time-response curves are plotted for the raw data (I) and the normalized changes in blood perfusion (J) in both the ipsilateral (left) and contralateral (right) hindpaws. The gray area indicates the duration of stimulation (from image 11 to 13). Note: +: p < 0.05 as compared to baseline, including all time points under the bracket; *: p < 0.05 as compared to the right side.

After normalization, an ANOVA testing for differences between hindpaws (ipsilateral vs. contralateral) and across images (image numbers 1-34) indicated no effect of hindpaw side, F (1, 3) = 0.16, p = 0.71; a significant effect of image number, F (33, 99) = 2.26, p = 0.001; and a significant Side × Image interaction, F (33, 99) = 2.91, p < 0.001. Posthoc Fisher LSD tests indicated a significant increase in blood perfusion (Figure 4I) in the left hindpaw during stimulation of the central stump of the dorsal root compared with the right hindpaw (*: p < 0.05), as well as a significant percentage increase in blood perfusion (Figure 4J) in both hindpaws compared with baseline (+: p < 0.05).
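The perfusion normalization used here - each laser Doppler frame expressed as a percentage change from the average of the first 10 baseline images - reduces to a few lines. The sketch below assumes the per-frame mean perfusion over the paw region of interest is already available as an array; it is an illustration, not the authors' code.

# Minimal sketch: normalize each frame to the mean of the first 10 baseline images.
import numpy as np

def normalize_perfusion(perfusion, n_baseline=10):
    # perfusion: 1-D array of mean laser Doppler perfusion (AU) per image frame.
    perfusion = np.asarray(perfusion, dtype=float)
    baseline = perfusion[:n_baseline].mean()
    return (perfusion - baseline) / baseline * 100.0   # percentage change from baseline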
Discussion

The goal of the present experiment was to minimize the confounders introduced by artificial antidromic stimulation of a dorsal root, or by the introduction of substances into the periphery, by interrupting the communication of the stimulated dorsal root with the periphery. Previous work has shown that electrical stimulation of one dorsal root elicits DRRs in the neighboring roots that spread along (up to 16 spinal segments in both directions from the stimulated site) and across the spinal cord [52]. This process is believed to operate in an all-or-none manner once activated [53]. Since the rat paw is innervated by nerves originating from L4-L6, we assumed that stimulating the central portion of one cut dorsal root would evoke DRRs directed to the ipsilateral paw through the remaining two dorsal roots. Indeed, in the current experiment, electrical stimulation of the central stump of the dorsal root elicited a significant increase in DRR activity in the recorded fibers of the neighboring dorsal roots.

Neuropeptides (particularly substance P and CGRP) found in the peripheral terminals of nociceptive fibers contribute to neurogenic inflammation and are released in response to antidromic stimulation [11,13,40,54]. We therefore expected that electrically evoked DRRs in the nerves innervating the rat paw would produce both plasma extravasation and vasodilation. However, bilateral vasodilation, but not plasma extravasation, was observed in response to central stump stimulation.

As previously mentioned, SP acting on tachykinin receptors increases microvascular permeability and edema formation [10,13]. CGRP, on the other hand, acting on its receptors, produces arteriolar vasodilation [13,15]. Interestingly, C-fibers contain both SP and CGRP, whereas Aδ-fibers predominantly have CGRP in their peripheral terminals [17,55,56].
In addition, antidromic stimulation of the saphenous nerve at C-fiber intensity produces both vasodilation and plasma extravasation, whereas stimulation at Aδ-fiber intensity produces only vasodilation [24,54]. It has previously been shown that 1-2 pulses to the lumbosacral dorsal roots are enough to cause a change in cutaneous microcirculation, and that 4-16 pulses at 2 Hz evoke vasodilatation lasting for several minutes [20]. Similar results have been shown with spinal cord stimulation [57]. In our study, electrical stimulation of the intact dorsal root or of the peripheral stump of the dorsal root produced both vasodilatation and plasma extravasation in the skin. However, electrical stimulation of the central stump with the same parameters did not elicit plasma extravasation on either side, but did produce vasodilatation bilaterally. This finding suggests that the stimulation parameters selected were sufficient to excite both myelinated and unmyelinated fibers in both the distal and central stumps of the dorsal root. However, stimulation of the central stump of the dorsal root triggers more DRRs in myelinated than in unmyelinated fibers in the neighboring roots, and therefore leads mostly to CGRP release and, in turn, vasodilation.

The differential release of co-localized neurotransmitters from the same terminal depending on the firing rate is another possible explanation of these results. The stimulation frequency needed to induce plasma extravasation is higher than that needed to produce vasodilation [54]. Electrical stimulation should to some extent mimic peripherally evoked orthodromic action potentials, but DRRs evoked by stimulating the central stump are much weaker than the activity evoked by direct stimulation of the distal stump, owing to the multisynaptic connectivity within the spinal cord. This may help to explain the differences in plasma extravasation resulting from stimulation of the central versus the peripheral stump. In addition, catecholamines and neuropeptides co-packaged in the same granule have been shown to be differentially released from the adrenal medulla depending on the firing rate, through a regulated, activity-dependent dilation of the granule fusion pore and a size-exclusion mechanism [58,59].

In both of the proposed mechanisms, there should be a higher probability of DRR generation in Aδ-fibers than in C-fibers in response to orthodromic stimulation of the central stump. First, there may be a differential effect of GABA on GABAA receptors on the central terminals of primary afferents; C-fibers have been shown to have a lower density of GABAA receptors than both Aδ-fibers and Aβ-fibers [60]. Second, the threshold for generation of DRRs by PADs may be higher in C-fibers than in Aδ-fibers.

In addition, the proportion of CGRP-containing afferents in the skin is much higher than that of SP-containing afferents. CGRP is present in both myelinated and unmyelinated nociceptive fibers, whereas SP is found only in small-diameter unmyelinated fibers. CGRP is also found in a larger number of unmyelinated fibers than SP [61].

Finally, the role of the sympathetic nervous system needs to be addressed, since stimulation of the central stump may increase sympathetic activity. On the one hand, sympathetic activity can decrease neuropeptide release from afferent fibers through its action on prejunctional α2-adrenoreceptors [13] and can counteract dorsal root reflex-mediated neurogenic inflammation [62].
On the other hand, the presence of the sympathetic system is important for the development of DRR-mediated neurogenic inflammation through the actions of neuropeptide Y (NPY) and norepinephrine on NPY Y2 and alpha1 receptors, respectively [63,64].

In this study, the contribution of the sympathetic nervous system during central stump stimulation was addressed by two experiments: the change in blood perfusion during stimulation of the central stump of the cut dorsal root (Figure 4) and plasma extravasation during stimulation of the intact dorsal root (Figure 3I). Stimulation of the central stump produced a significant bilateral increase in blood perfusion, suggesting that DRRs in primary afferents override sympathetic vasoconstriction, if present. Stimulation of the intact dorsal root, on the other hand (in which action potentials can travel both orthodromically and antidromically), produced plasma extravasation in the ipsilateral hindpaw, suggesting that even if the sympathetic system is activated by the orthodromic input, its effects are not strong enough to counteract the plasma extravasation induced by the antidromic spikes that reach the periphery.

Conclusion

In summary, incoming stimulation at an intensity that activates all types of nociceptive fibers produces DRRs in the intact neighboring roots as well as bilateral vasodilation of the innervated area, but not plasma extravasation. Neurogenic inflammation is a complex process that requires the co-release of multiple substances, and noxious stimulation alone does not appear capable of eliciting all signs of neurogenic inflammation. Therefore, successful treatment of neurogenic inflammation will require addressing not only the neural input to the spinal cord but also the co-factors, in both the spinal cord and the periphery, that allow that input to be converted into neuropeptide co-release. In addition, acutely elicited DRRs are not able to elicit the complete picture of neurogenic inflammation; future studies are needed to establish the contributions and nature of DRRs in chronic pain states such as arthritis or migraine.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

This study is based on the original idea of OVL and YBP. OVL performed the data collection, data analysis, and statistical analysis, and wrote the manuscript. YBP contributed to the conception and design of the study and to the analysis and interpretation of the data. All authors have read and approved the final manuscript.
[ "Somatosensory information is generally considered to originate in the peripheral terminals of primary afferent neurons, and is then transmitted to the spinal cord or brain. However, activity in primary afferent neurons can be generated within the spinal cord, and the impulses can travel antidromically toward the periphery. These phenomena are called dorsal root reflexes (DRR). Dorsal root reflexes were first discovered by Gotch and Horsley [1], and were studied extensively in later works [2-9].\nDRRs are thought to contribute to neurogenic inflammation. The main components of neurogenic inflammation include, but are not limited to, arteriolar vasodilatation and plasma extravasation. Neurogenic inflammation is triggered by substances released from sensory nerve terminals, including substance P (SP) and calcitonin gene-related peptide (CGRP). SP, as well as other tachykinins such as neurokinin A (NKA) and neurokinin B (NKB), cause plasma extravasation by a specific action on NK1, NK2, and NK3 receptors [10] to increase vascular permeability [11-13]. SP and NKA play a major role in the periphery, whereas NKB is mainly found in the CNS [14]. CGRP is active in dilating cutaneous arterioles [15] via the CGRP1 receptor [11,13]. Both tachykinins and CGRP are found in the peripheral endings of sensory nerves [16-19] and released from both C and Aδ fibers [11].\nNeurogenic inflammation has been shown to be caused by antidromic electrical stimulation of afferent nerves [20-26], by intradermal injection of capsaicin [27], and in acute [28-31] or chronic [32] arthritis experiments. Enhanced afferent discharges cause the central terminals of primary afferent fibers to release excitatory amino acids, which then activate non-NMDA and NMDA receptors on GABAergic interneurons, leading to the release of GABA on primary afferent central terminals [33-36]. GABA produces excessive primary afferent depolarization (PAD) through GABAA receptors located on presynaptic terminals of primary afferents [37,38]. When PAD exceeds the threshold, DRRs are generated [39,40], which are conducted antidromically in both myelinated and unmyelinated fibers toward the periphery [41,42], and can be blocked by the spinal GABAA antagonist, bicuculline [43]. This antidromic activity could result in the release of inflammatory mediators (e.g., SP), as was shown in the knee joint [23,44].\nDRRs can be induced by electrical stimulation of peripheral nerves, both ipsilaterally and contralaterally [45], as well as by supraspinal stimulation of the periaqueductal grey [46]. In the present study we tried to mimic the incoming nociceptive input to the spinal cord by electrical stimulation of the central stump of the dorsal root, and test whether the electrically evoked DRRs can contribute to the development of neurogenic inflammation - vasodilatation and plasma extravasation. Preliminary results have been reported [47].", "[SUBTITLE] Animal preparation [SUBSECTION] A total of 19 adult male Sprague-Dawley rats weighing 300-400 g were used for this study, 8 for vasodilatation, and 11 for plasma extravasation. All procedures used in this study were approved by the Animal IACUC and followed the guidelines for the treatment of animals of the International Association for the Study of Pain [48].\nAnimals were initially anesthetized with sodium pentobarbital (50 mg/kg, i.p.). 
A catheter was placed into the jugular vein for continuous administration of anesthetic (sodium pentobarbital, 5-8 mg·kg-1·h-1 in a saline solution) and for Evans Blue injection in plasma extravasation experiments. The level of anesthesia was monitored by the stability of the level of end-tidal CO2 at around 30 mmHg and by the absence of flexion reflex. Tracheotomy was performed for artificial ventilation. The animal's body temperature was maintained at 37°C by a feedback controlled electric heating blanket. A 4-cm-long laminectomy was performed over the lumbosacral enlargement to expose the spinal cord and L4-L6 dorsal roots. The rat was held in a stereotaxic frame to prevent movement during recording. The skin over the laminectomy formed a pool and was filled with light mineral oil.\nA total of 19 adult male Sprague-Dawley rats weighing 300-400 g were used for this study, 8 for vasodilatation, and 11 for plasma extravasation. All procedures used in this study were approved by the Animal IACUC and followed the guidelines for the treatment of animals of the International Association for the Study of Pain [48].\nAnimals were initially anesthetized with sodium pentobarbital (50 mg/kg, i.p.). A catheter was placed into the jugular vein for continuous administration of anesthetic (sodium pentobarbital, 5-8 mg·kg-1·h-1 in a saline solution) and for Evans Blue injection in plasma extravasation experiments. The level of anesthesia was monitored by the stability of the level of end-tidal CO2 at around 30 mmHg and by the absence of flexion reflex. Tracheotomy was performed for artificial ventilation. The animal's body temperature was maintained at 37°C by a feedback controlled electric heating blanket. A 4-cm-long laminectomy was performed over the lumbosacral enlargement to expose the spinal cord and L4-L6 dorsal roots. The rat was held in a stereotaxic frame to prevent movement during recording. The skin over the laminectomy formed a pool and was filled with light mineral oil.\n[SUBTITLE] DRR recordings [SUBSECTION] A silver wire hook electrode was used to record extracellular single-unit discharges in filaments of the L4 through L6 dorsal roots. A small strand of the dorsal root was teased centrally from the main trunk and was further separated into a fine filament containing one or a few active fibers. This filament was then wrapped around the recording electrode (Figure 1A).\nDiagram of the experimental setup. A. The left L4 dorsal root is cut. The central stump is placed over a stimulating electrode (SE) while placing a strand from the left L5 dorsal root on a recording electrode (RE). B. A second cut is made at the left L5 dorsal root. The peripheral stump of the left L5 dorsal root is placed over a stimulating electrode. Seven minutes after Evans Blue injection (i.v.), electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min) is delivered, while a series of images is taken. C. A setup for stimulating an intact dorsal root.\nData recording and analysis was performed by using a CED 1401Plus data acquisition system and SPIKE2 software (Cambridge Electronic Design Ltd, UK).\nA silver wire hook electrode was used to record extracellular single-unit discharges in filaments of the L4 through L6 dorsal roots. A small strand of the dorsal root was teased centrally from the main trunk and was further separated into a fine filament containing one or a few active fibers. This filament was then wrapped around the recording electrode (Figure 1A).\nDiagram of the experimental setup. A. 
The left L4 dorsal root is cut. The central stump is placed over a stimulating electrode (SE) while placing a strand from the left L5 dorsal root on a recording electrode (RE). B. A second cut is made at the left L5 dorsal root. The peripheral stump of the left L5 dorsal root is placed over a stimulating electrode. Seven minutes after Evans Blue injection (i.v.), electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min) is delivered, while a series of images is taken. C. A setup for stimulating an intact dorsal root.\nData recording and analysis was performed by using a CED 1401Plus data acquisition system and SPIKE2 software (Cambridge Electronic Design Ltd, UK).\n[SUBTITLE] Electrical stimulation [SUBSECTION] Electrical stimulation of the central stump was performed in 16 animals, 8 for plasma extravasation and 8 for laser Doppler measurement. A tripolar electrode was used for electrical stimulation in order to minimize stimulus artifact and to avoid current spread. The cathode was in the middle of the array, and two anodes, one on each side of the cathode, were separated from the cathode by 1 mm [46]. The L4 or L5 dorsal roots were cut and the central stump was placed on the electrode for electrical stimulation (Figure 1A), while a teased strand from a nearby intact dorsal root was used for recording DRRs. Then stimulation was applied to this central stump of the cut dorsal root for 5 minutes at 20 V, 5 Hz, 0.5 ms pulse duration.\nAt the end of stimulation of the central stump (L4 or L5) for plasma extravasation, the dorsal root that was used for DRR recording (L5 or L6) was cut and the peripheral stump was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1B).\nIn 3 animals, intact L4 or L5 dorsal root was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1C)\nElectrical stimulation of the central stump was performed in 16 animals, 8 for plasma extravasation and 8 for laser Doppler measurement. A tripolar electrode was used for electrical stimulation in order to minimize stimulus artifact and to avoid current spread. The cathode was in the middle of the array, and two anodes, one on each side of the cathode, were separated from the cathode by 1 mm [46]. The L4 or L5 dorsal roots were cut and the central stump was placed on the electrode for electrical stimulation (Figure 1A), while a teased strand from a nearby intact dorsal root was used for recording DRRs. Then stimulation was applied to this central stump of the cut dorsal root for 5 minutes at 20 V, 5 Hz, 0.5 ms pulse duration.\nAt the end of stimulation of the central stump (L4 or L5) for plasma extravasation, the dorsal root that was used for DRR recording (L5 or L6) was cut and the peripheral stump was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1B).\nIn 3 animals, intact L4 or L5 dorsal root was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1C)\n[SUBTITLE] Plasma extravasation measurement [SUBSECTION] Plasma extravasation measurements were performed in 11 animals, 8 for central and peripheral stump stimulation (Figure 1A) and 3 for intact dorsal root stimulation (Figure 1C). Evans Blue was injected intravenously (50 mg/kg, using the catheter in the jugular vein) for detection of the sites of plasma extravasation 7 minutes before the start of electrical stimulation. Pictures were taken by an 8 megapixel camera (Nikon Coolpix 8700) on a tripod. Constant light condition, manual set of aperture, and exposure time were maintained during the course of the experiment. 
Pictures of the plantar side of the rat paw were taken 2 minutes after Evans Blue injections, then every 30 seconds during the course of electrical stimulation and for another 5 minutes or longer after the end of electrical stimulation. Matlab image analysis tool (The MathWorks, Inc., MA) was used to determine the dynamic change of color in the development of plasma extravasation. The whole hindpaw was selected as a region of interest. This method was conceptually similar to a dynamic measurement of plasma extravasation by using CCD video camera that has been developed recently [49,50].\nPlasma extravasation measurements were performed in 11 animals, 8 for central and peripheral stump stimulation (Figure 1A) and 3 for intact dorsal root stimulation (Figure 1C). Evans Blue was injected intravenously (50 mg/kg, using the catheter in the jugular vein) for detection of the sites of plasma extravasation 7 minutes before the start of electrical stimulation. Pictures were taken by an 8 megapixel camera (Nikon Coolpix 8700) on a tripod. Constant light condition, manual set of aperture, and exposure time were maintained during the course of the experiment. Pictures of the plantar side of the rat paw were taken 2 minutes after Evans Blue injections, then every 30 seconds during the course of electrical stimulation and for another 5 minutes or longer after the end of electrical stimulation. Matlab image analysis tool (The MathWorks, Inc., MA) was used to determine the dynamic change of color in the development of plasma extravasation. The whole hindpaw was selected as a region of interest. This method was conceptually similar to a dynamic measurement of plasma extravasation by using CCD video camera that has been developed recently [49,50].\n[SUBTITLE] Cutaneous blood flow measurement [SUBSECTION] Changes in cutaneous blood perfusion were measured in 8 animals to detect local vasodilatation (flare) in response to electrical stimulation of the central stump of the L4 or L5 dorsal roots. The measurements were done using Laser Doppler Imager (PeriScan PIM II, Perimed AB, Sweden). After 10 baseline images of the plantar side of the rat hindpaws, continuous scanning were taken during and after stimulation (20 V, 5 Hz, 0.5 ms pulse duration for 5 minutes) of the central stump of the cut dorsal root. Approximately 20 images were continuously taken after the end of stimulation. It required 2 minutes to acquire one image.\nChanges in cutaneous blood perfusion were measured in 8 animals to detect local vasodilatation (flare) in response to electrical stimulation of the central stump of the L4 or L5 dorsal roots. The measurements were done using Laser Doppler Imager (PeriScan PIM II, Perimed AB, Sweden). After 10 baseline images of the plantar side of the rat hindpaws, continuous scanning were taken during and after stimulation (20 V, 5 Hz, 0.5 ms pulse duration for 5 minutes) of the central stump of the cut dorsal root. Approximately 20 images were continuously taken after the end of stimulation. It required 2 minutes to acquire one image.\n[SUBTITLE] Data analysis [SUBSECTION] The stored digital record of unit activity was retrieved and analyzed off-line. The frequency of DRRs was calculated for the periods before (3 min), during (5 min), and after (3 min) the electrical stimulation. Statistical significance was tested using paired t-test.\nMatlab software was used in order to measure the intensity of colors on the rat paw. 
The same region of interest was selected in the set of pictures from each experiment and the change in color intensity in a gray scale was analyzed. The color intensity from Matlab is given as arbitrary unit (AU) for raw data. Normalization was calculated by the following formula: [(color intensity at any time point - color intensity before stimulation) / color intensity before stimulation] × 100%. A negative value represents a darker color, suggesting plasma extravasation. One-way ANOVA followed by Posthoc Fisher LSD Test was used to detect significant differences across time as compared to the baseline.\nFor blood perfusion, the region of interest was selected which covered the whole paw. An average of perfusion (arbitrary unit, AU) in the selected area of each image frame was used for further calculating percentage change. The first 10 images at baseline were averaged as control for subsequent change during and after stimulation: [(blood perfusion at any time point - average of first 10 blood perfusion images before stimulation) / average of first 10 blood perfusion images before stimulation] × 100%. Repeated measures ANOVA followed by Posthoc Fisher LSD test was used to detect significant differences along the time as compared to the baseline.\nAll values were presented as means ± SEM. A change was judged significant if p < 0.05.\nThe stored digital record of unit activity was retrieved and analyzed off-line. The frequency of DRRs was calculated for the periods before (3 min), during (5 min), and after (3 min) the electrical stimulation. Statistical significance was tested using paired t-test.\nMatlab software was used in order to measure the intensity of colors on the rat paw. The same region of interest was selected in the set of pictures from each experiment and the change in color intensity in a gray scale was analyzed. The color intensity from Matlab is given as arbitrary unit (AU) for raw data. Normalization was calculated by the following formula: [(color intensity at any time point - color intensity before stimulation) / color intensity before stimulation] × 100%. A negative value represents a darker color, suggesting plasma extravasation. One-way ANOVA followed by Posthoc Fisher LSD Test was used to detect significant differences across time as compared to the baseline.\nFor blood perfusion, the region of interest was selected which covered the whole paw. An average of perfusion (arbitrary unit, AU) in the selected area of each image frame was used for further calculating percentage change. The first 10 images at baseline were averaged as control for subsequent change during and after stimulation: [(blood perfusion at any time point - average of first 10 blood perfusion images before stimulation) / average of first 10 blood perfusion images before stimulation] × 100%. Repeated measures ANOVA followed by Posthoc Fisher LSD test was used to detect significant differences along the time as compared to the baseline.\nAll values were presented as means ± SEM. A change was judged significant if p < 0.05.", "A total of 19 adult male Sprague-Dawley rats weighing 300-400 g were used for this study, 8 for vasodilatation, and 11 for plasma extravasation. All procedures used in this study were approved by the Animal IACUC and followed the guidelines for the treatment of animals of the International Association for the Study of Pain [48].\nAnimals were initially anesthetized with sodium pentobarbital (50 mg/kg, i.p.). 
A catheter was placed into the jugular vein for continuous administration of anesthetic (sodium pentobarbital, 5-8 mg·kg-1·h-1 in a saline solution) and for Evans Blue injection in plasma extravasation experiments. The level of anesthesia was monitored by the stability of the level of end-tidal CO2 at around 30 mmHg and by the absence of flexion reflex. Tracheotomy was performed for artificial ventilation. The animal's body temperature was maintained at 37°C by a feedback controlled electric heating blanket. A 4-cm-long laminectomy was performed over the lumbosacral enlargement to expose the spinal cord and L4-L6 dorsal roots. The rat was held in a stereotaxic frame to prevent movement during recording. The skin over the laminectomy formed a pool and was filled with light mineral oil.", "A silver wire hook electrode was used to record extracellular single-unit discharges in filaments of the L4 through L6 dorsal roots. A small strand of the dorsal root was teased centrally from the main trunk and was further separated into a fine filament containing one or a few active fibers. This filament was then wrapped around the recording electrode (Figure 1A).\nDiagram of the experimental setup. A. The left L4 dorsal root is cut. The central stump is placed over a stimulating electrode (SE) while placing a strand from the left L5 dorsal root on a recording electrode (RE). B. A second cut is made at the left L5 dorsal root. The peripheral stump of the left L5 dorsal root is placed over a stimulating electrode. Seven minutes after Evans Blue injection (i.v.), electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min) is delivered, while a series of images is taken. C. A setup for stimulating an intact dorsal root.\nData recording and analysis was performed by using a CED 1401Plus data acquisition system and SPIKE2 software (Cambridge Electronic Design Ltd, UK).", "Electrical stimulation of the central stump was performed in 16 animals, 8 for plasma extravasation and 8 for laser Doppler measurement. A tripolar electrode was used for electrical stimulation in order to minimize stimulus artifact and to avoid current spread. The cathode was in the middle of the array, and two anodes, one on each side of the cathode, were separated from the cathode by 1 mm [46]. The L4 or L5 dorsal roots were cut and the central stump was placed on the electrode for electrical stimulation (Figure 1A), while a teased strand from a nearby intact dorsal root was used for recording DRRs. Then stimulation was applied to this central stump of the cut dorsal root for 5 minutes at 20 V, 5 Hz, 0.5 ms pulse duration.\nAt the end of stimulation of the central stump (L4 or L5) for plasma extravasation, the dorsal root that was used for DRR recording (L5 or L6) was cut and the peripheral stump was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1B).\nIn 3 animals, intact L4 or L5 dorsal root was stimulated at the same parameters (20 V, 5 Hz, 0.5 ms; Figure 1C)", "Plasma extravasation measurements were performed in 11 animals, 8 for central and peripheral stump stimulation (Figure 1A) and 3 for intact dorsal root stimulation (Figure 1C). Evans Blue was injected intravenously (50 mg/kg, using the catheter in the jugular vein) for detection of the sites of plasma extravasation 7 minutes before the start of electrical stimulation. Pictures were taken by an 8 megapixel camera (Nikon Coolpix 8700) on a tripod. Constant light condition, manual set of aperture, and exposure time were maintained during the course of the experiment. 
Pictures of the plantar side of the rat paw were taken 2 minutes after Evans Blue injections, then every 30 seconds during the course of electrical stimulation and for another 5 minutes or longer after the end of electrical stimulation. Matlab image analysis tool (The MathWorks, Inc., MA) was used to determine the dynamic change of color in the development of plasma extravasation. The whole hindpaw was selected as a region of interest. This method was conceptually similar to a dynamic measurement of plasma extravasation by using CCD video camera that has been developed recently [49,50].", "Changes in cutaneous blood perfusion were measured in 8 animals to detect local vasodilatation (flare) in response to electrical stimulation of the central stump of the L4 or L5 dorsal roots. The measurements were done using Laser Doppler Imager (PeriScan PIM II, Perimed AB, Sweden). After 10 baseline images of the plantar side of the rat hindpaws, continuous scanning were taken during and after stimulation (20 V, 5 Hz, 0.5 ms pulse duration for 5 minutes) of the central stump of the cut dorsal root. Approximately 20 images were continuously taken after the end of stimulation. It required 2 minutes to acquire one image.", "The stored digital record of unit activity was retrieved and analyzed off-line. The frequency of DRRs was calculated for the periods before (3 min), during (5 min), and after (3 min) the electrical stimulation. Statistical significance was tested using paired t-test.\nMatlab software was used in order to measure the intensity of colors on the rat paw. The same region of interest was selected in the set of pictures from each experiment and the change in color intensity in a gray scale was analyzed. The color intensity from Matlab is given as arbitrary unit (AU) for raw data. Normalization was calculated by the following formula: [(color intensity at any time point - color intensity before stimulation) / color intensity before stimulation] × 100%. A negative value represents a darker color, suggesting plasma extravasation. One-way ANOVA followed by Posthoc Fisher LSD Test was used to detect significant differences across time as compared to the baseline.\nFor blood perfusion, the region of interest was selected which covered the whole paw. An average of perfusion (arbitrary unit, AU) in the selected area of each image frame was used for further calculating percentage change. The first 10 images at baseline were averaged as control for subsequent change during and after stimulation: [(blood perfusion at any time point - average of first 10 blood perfusion images before stimulation) / average of first 10 blood perfusion images before stimulation] × 100%. Repeated measures ANOVA followed by Posthoc Fisher LSD test was used to detect significant differences along the time as compared to the baseline.\nAll values were presented as means ± SEM. A change was judged significant if p < 0.05.", "[SUBTITLE] Dorsal root reflexes can be elicited at the central dorsal root filaments by electrical stimulation of a neighboring central stump of the cut dorsal root [SUBSECTION] After the left L4 dorsal root was cut, the central stump was placed on the stimulating electrode. To ensure that DRRs can be elicited, a small fascicle of neighboring dorsal root (usually L5) was teased centrally and was placed in a recording electrode (Figure 1A). Multiunit spontaneous antidromic discharges were recorded from all 8 animals that were tested for plasma extravasation. 
The discharges were irregular and usually at a very low rate but increased during electrical stimulation of L4 (Figure 2A-C). Average mean spontaneous activity was 0.09 ± 0.03 Hz (range: 0-0.22 Hz; n = 8). In most recorded units, additional DRR activity could be evoked by applying a graded mechanical stimulus (brush, pressure, and pinch) to the skin of the foot (data not shown). One cell was found whose receptive field was covering the whole body, as previously reported by others [43,51]. During electrical stimulation (20 V, 5 Hz, 5 ms), a significant increase in DRRs was observed (2.28 ± 0.76 Hz; range: 0.13-5.27 Hz; n = 8, P < 0.05). The activity of antidromic discharges returned to normal 2 min (127 ± 70 s, range from 0 to 585 s) after the termination of electrical stimulation (0.14 ± 0.04 Hz; range: 0-0.36 Hz; n = 8). Four out of eight fibers returned to baseline as soon as the stimulation was terminated; one fiber lasted as long as 10 min.\nDorsal root reflexes from a left L5 filament. A representative strand shows that DRRs can be recorded from the central stump of the dorsal root (L5) while the L4 central stump is stimulated (A). Each vertical line indicates a DRR. In this strand, about 4 fibers show DRR activity, based on their amplitudes and shapes. The horizontal line indicates the duration of electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min). During stimulation, an obvious increase of DRRs is demonstrated, which is summarized in (C). An expanded trace is shown in B. *: p < 0.05.\nAfter the left L4 dorsal root was cut, the central stump was placed on the stimulating electrode. To ensure that DRRs can be elicited, a small fascicle of neighboring dorsal root (usually L5) was teased centrally and was placed in a recording electrode (Figure 1A). Multiunit spontaneous antidromic discharges were recorded from all 8 animals that were tested for plasma extravasation. The discharges were irregular and usually at a very low rate but increased during electrical stimulation of L4 (Figure 2A-C). Average mean spontaneous activity was 0.09 ± 0.03 Hz (range: 0-0.22 Hz; n = 8). In most recorded units, additional DRR activity could be evoked by applying a graded mechanical stimulus (brush, pressure, and pinch) to the skin of the foot (data not shown). One cell was found whose receptive field was covering the whole body, as previously reported by others [43,51]. During electrical stimulation (20 V, 5 Hz, 5 ms), a significant increase in DRRs was observed (2.28 ± 0.76 Hz; range: 0.13-5.27 Hz; n = 8, P < 0.05). The activity of antidromic discharges returned to normal 2 min (127 ± 70 s, range from 0 to 585 s) after the termination of electrical stimulation (0.14 ± 0.04 Hz; range: 0-0.36 Hz; n = 8). Four out of eight fibers returned to baseline as soon as the stimulation was terminated; one fiber lasted as long as 10 min.\nDorsal root reflexes from a left L5 filament. A representative strand shows that DRRs can be recorded from the central stump of the dorsal root (L5) while the L4 central stump is stimulated (A). Each vertical line indicates a DRR. In this strand, about 4 fibers show DRR activity, based on their amplitudes and shapes. The horizontal line indicates the duration of electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min). During stimulation, an obvious increase of DRRs is demonstrated, which is summarized in (C). An expanded trace is shown in B. 
*: p < 0.05.\n[SUBTITLE] Effects of electrical stimulation of the central stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral and contralateral hindpaws [SUBSECTION] When the central stump of the left L4 dorsal root was stimulated, there was no obvious plasma extravasation observed in the ipsilateral (left) paw (Figure 3, 1st row) and contralateral (right) paw (Figure 3, 2nd row). The color intensities before stimulation were 109.57 ± 10.02 in the left paw (Figure 3G) and 103.10 ± 4.25 AU (arbitrary unit) (Figure 3G) in the right paw among 8 animals. During and after central left L4 stimulation, the color intensity for the left paw ranged from 113.72 ± 9.81 to 116.96 ± 8.96 AU (Figure 3G); the color intensity for the right paw ranged from 102.57 ± 4.53 to 106.43 ± 3.46 AU (Figure 3G). Data were analyzed by ANOVA to test differences between sides (ipsilateral and contralateral), and among effects of time (C to 10 min) following central stump stimulation. The results indicated no effect of stimulation side, F (1, 7) = 1.86, p = 0.22; no effect of time, F (20, 140) = 1.33, p = 0.17; and no effect of interaction (Side × Time), F (20, 140) = 1.25, p = 0.23.\nA series of representative images of the left and right paws while the central stump of cut L4 (1st and 2nd rows) and peripheral stump of cut L5 (3rd row) dorsal root was stimulated. The blue patches on the paw indicate plasma extravasation due to leakage of Evans Blue (EB). Note: A - before EB injection, B - after EB injection, C - 0.5 min, D - 2 min, E - 3.5 min, and F - 5 min after onset of stimulation. A summary shows the changes in color intensity (G) or changes in percentage of color intensity (H) of the ipsilateral (left) or contralateral (right) paws following electrical stimulation of either the central stump of L4 or the peripheral stump of the left L5 dorsal root. When the central stump of the left L4 dorsal root was stimulated, there were no significant differences in the color intensities (G) and percentage changes in color intensity (H) between the left (filled diamond) and right paws (open square). However, significant differences of color intensity and percentage change in color intensity were detected in the left paw when the central stump of L4 (filled diamond) and peripheral stump of L5 dorsal root (filled triangle) were stimulated. A summary of plasma extravasation induced by stimulating intact L5 dorsal root (I) shows significant plasma extravasation (as indicated by the decrease of color intensity, n = 3) 5 min after stimulation. The gray area indicates the duration of stimulation (5 min). *: p < 0.05, ***: p < 0.001, Fisher LSD test, as compared with the color intensity before electrical stimulation. +: p < 0.05, Fisher LSD test, as comparing the peripheral L5 stimulation (left paw) with the central L4 stimulation (left paw). AU: arbitrary unit; C: as a control before stimulation.\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw ranged from 4.31 ± 2.96 to 8.00 ± 2.77% (Figure 3H); the percentage change of color intensity of the right paw ranged from -0.42 ± 2.31 to 3.68 ± 2.42% (Figure 3H). Data were analyzed by ANOVA to test differences between sides of central stump stimulation (side: ipsilateral and contralateral), and among effects of time (time: C to 10 min). 
The results indicated no effect of stimulation side, F (1, 7) = 5.48, p = 0.052; no effect of time, F (20, 140) = 1.37, p = 0.15; and no effect of interaction (Side × Time), F (20, 140) = 1.23, p = 0.24.\nWhen the central stump of the left L4 dorsal root was stimulated, there was no obvious plasma extravasation observed in the ipsilateral (left) paw (Figure 3, 1st row) and contralateral (right) paw (Figure 3, 2nd row). The color intensities before stimulation were 109.57 ± 10.02 in the left paw (Figure 3G) and 103.10 ± 4.25 AU (arbitrary unit) (Figure 3G) in the right paw among 8 animals. During and after central left L4 stimulation, the color intensity for the left paw ranged from 113.72 ± 9.81 to 116.96 ± 8.96 AU (Figure 3G); the color intensity for the right paw ranged from 102.57 ± 4.53 to 106.43 ± 3.46 AU (Figure 3G). Data were analyzed by ANOVA to test differences between sides (ipsilateral and contralateral), and among effects of time (C to 10 min) following central stump stimulation. The results indicated no effect of stimulation side, F (1, 7) = 1.86, p = 0.22; no effect of time, F (20, 140) = 1.33, p = 0.17; and no effect of interaction (Side × Time), F (20, 140) = 1.25, p = 0.23.\nA series of representative images of the left and right paws while the central stump of cut L4 (1st and 2nd rows) and peripheral stump of cut L5 (3rd row) dorsal root was stimulated. The blue patches on the paw indicate plasma extravasation due to leakage of Evans Blue (EB). Note: A - before EB injection, B - after EB injection, C - 0.5 min, D - 2 min, E - 3.5 min, and F - 5 min after onset of stimulation. A summary shows the changes in color intensity (G) or changes in percentage of color intensity (H) of the ipsilateral (left) or contralateral (right) paws following electrical stimulation of either the central stump of L4 or the peripheral stump of the left L5 dorsal root. When the central stump of the left L4 dorsal root was stimulated, there were no significant differences in the color intensities (G) and percentage changes in color intensity (H) between the left (filled diamond) and right paws (open square). However, significant differences of color intensity and percentage change in color intensity were detected in the left paw when the central stump of L4 (filled diamond) and peripheral stump of L5 dorsal root (filled triangle) were stimulated. A summary of plasma extravasation induced by stimulating intact L5 dorsal root (I) shows significant plasma extravasation (as indicated by the decrease of color intensity, n = 3) 5 min after stimulation. The gray area indicates the duration of stimulation (5 min). *: p < 0.05, ***: p < 0.001, Fisher LSD test, as compared with the color intensity before electrical stimulation. +: p < 0.05, Fisher LSD test, as comparing the peripheral L5 stimulation (left paw) with the central L4 stimulation (left paw). AU: arbitrary unit; C: as a control before stimulation.\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw ranged from 4.31 ± 2.96 to 8.00 ± 2.77% (Figure 3H); the percentage change of color intensity of the right paw ranged from -0.42 ± 2.31 to 3.68 ± 2.42% (Figure 3H). Data were analyzed by ANOVA to test differences between sides of central stump stimulation (side: ipsilateral and contralateral), and among effects of time (time: C to 10 min). 
The results indicated no effect of stimulation side, F (1, 7) = 5.48, p = 0.052; no effect of time, F (20, 140) = 1.37, p = 0.15; and no effect of interaction (Side × Time), F (20, 140) = 1.23, p = 0.24.\n[SUBTITLE] Effects of electrical stimulation of the peripheral stump of the cut dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw [SUBSECTION] When the peripheral stump of the left L5 dorsal root was stimulated, there was obvious plasma extravasation observed in the left paw as demonstrated by blue patches (Figure 3A-F, 3rd row). The color intensity of the left paw before stimulation were 109.57 ± 10.02 for central stimulation group (n = 8, Figure 3G) and 125.94 ± 7.10 AU (arbitrary unit) (Figure 3G) for peripheral stump stimulation group (n = 7), respectively. During and after stimulation of peripheral stump of left L5 stimulation, the color intensity for the left paw dropped as low as 99.05 ± 8.06 AU. Data were analyzed by ANOVA to test differences between stimulation site (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated no effect of stimulation site, F (1, 6) = 0.52, p = 0.5; a significant effect of time, F (20, 120) = 5.29, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 7.96, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower color intensity in the left paw 2 minutes following stimulation of the peripheral stump of dorsal root (+: p < 0.05) as compared to central stump stimulation (Figure 3G), as well as to before stimulation (*: p < 0.05).\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw dropped to -21.28 ± 4.73% following stimulation of the peripheral stump of dorsal root (Figure 3H). Data was analyzed by ANOVA to test differences between sites of stimulation (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated significant effect of stimulation site, F (1, 6) = 19.44, p = 0.005; a significant effect of time, F (20, 120) = 4.20, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 6.16, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the peripheral stump of dorsal root (+: p < 0.05) as comparing to central stump stimulation (Figure 3H), as well as to before stimulation (*: p < 0.05).\nWhen the peripheral stump of the left L5 dorsal root was stimulated, there was obvious plasma extravasation observed in the left paw as demonstrated by blue patches (Figure 3A-F, 3rd row). The color intensity of the left paw before stimulation were 109.57 ± 10.02 for central stimulation group (n = 8, Figure 3G) and 125.94 ± 7.10 AU (arbitrary unit) (Figure 3G) for peripheral stump stimulation group (n = 7), respectively. During and after stimulation of peripheral stump of left L5 stimulation, the color intensity for the left paw dropped as low as 99.05 ± 8.06 AU. Data were analyzed by ANOVA to test differences between stimulation site (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated no effect of stimulation site, F (1, 6) = 0.52, p = 0.5; a significant effect of time, F (20, 120) = 5.29, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 7.96, p < 0.001. 
Posthoc Fisher LSD tests indicated significantly lower color intensity in the left paw 2 minutes following stimulation of the peripheral stump of dorsal root (+: p < 0.05) as compared to central stump stimulation (Figure 3G), as well as to before stimulation (*: p < 0.05).\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw dropped to -21.28 ± 4.73% following stimulation of the peripheral stump of dorsal root (Figure 3H). Data was analyzed by ANOVA to test differences between sites of stimulation (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated significant effect of stimulation site, F (1, 6) = 19.44, p = 0.005; a significant effect of time, F (20, 120) = 4.20, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 6.16, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the peripheral stump of dorsal root (+: p < 0.05) as comparing to central stump stimulation (Figure 3H), as well as to before stimulation (*: p < 0.05).\n[SUBTITLE] Effects of electrical stimulation of the intact dorsal root on plasma extravasation on the plantar surface of the ipsilateral hindpaw [SUBSECTION] When the intact left L4 or L5 dorsal root was stimulated, there was plasma extravasation observed in the left paw (n = 3, Figure 3A-F, 4th row). The color intensity change of the left paw was normalized by using the color intensity before stimulation (Figure 3I). The change of color intensity of the left paw were -26.63 ± 5.91 at 5 min, -34.77 ± 2.81 at 10 min, -32.75 ± 3.69 at 15 min, and -35.69 ± 6.16 at 20 min following stimulation, where the negative value indicates an increase in extravasation. Significant changes were found at 5, 10, 15, and 20 min after stimulation. One-way ANOVA showed a significant change after stimulation of the intact dorsal root, F (4, 8) = 31.6, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the intact dorsal root as comparing to before stimulation (***: p < 0.001).\nWhen the intact left L4 or L5 dorsal root was stimulated, there was plasma extravasation observed in the left paw (n = 3, Figure 3A-F, 4th row). The color intensity change of the left paw was normalized by using the color intensity before stimulation (Figure 3I). The change of color intensity of the left paw were -26.63 ± 5.91 at 5 min, -34.77 ± 2.81 at 10 min, -32.75 ± 3.69 at 15 min, and -35.69 ± 6.16 at 20 min following stimulation, where the negative value indicates an increase in extravasation. Significant changes were found at 5, 10, 15, and 20 min after stimulation. One-way ANOVA showed a significant change after stimulation of the intact dorsal root, F (4, 8) = 31.6, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the intact dorsal root as comparing to before stimulation (***: p < 0.001).\n[SUBTITLE] Effects of electrical stimulation of the central stump of the cut dorsal root on vasodilatation on the plantar surface of bilateral hindpaws [SUBSECTION] When the central stump of left L4 or L5 dorsal root was stimulated, there was obvious vasodilatation in the left paw (Figure 4A-H). 
The blood perfusion change of the left paw was normalized from raw data (Figure 4I) using the average of 10 baseline images and summarized (ipsilateral n = 8, contralateral n = 5, Figure 4J). Data were analyzed by ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34). The results indicated no effect of side of hindpaws, F (1, 3) = 1.66, p = 0.29; a significant effect of images numbers, F (33, 99) = 2.11, p = 0.002; and a significant effect of interaction (Side × Image), F (33, 99) = 2.11, p = 0.002. Posthoc Fisher LSD tests indicated significantly higher blood perfusion in the left hindpaw during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05) (Figure 4I), as well as to before stimulation (+: p < 0.05).\nEffect of stimulation of the central stump of the cut dorsal root on blood perfusion in hindpaws bilaterally. The upper panel shows laser Doppler images from one rat as control (A), during stimulation (B-D), and 2 (E), 14 (F), 32 (G), and 52 min (H) after the end of stimulation. Blue color indicates lower perfusion, whereas red color indicates higher perfusion. Time response curves are plotted for the raw data (I) and normalized changes in blood perfusion (J) in both the ipsilateral (Left) and contralateral (Right) hindpaws. The gray area indicates duration of stimulation (from image 11 to 13). Note +: p < 0.05 as compared to baseline, including all time points under the bracket. *: p < 0.05 as compared to the right side.\nAfter normalization, an ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34) indicated no effect of side of hindpaws, F (1, 3) = 0.16, p = 0.71; a significant effect of images, F (33, 99) = 2.26, p = 0.001; and a significant effect of interaction (Side × Image), F (33, 99) = 2.91, p < 0.001. Posthoc Fisher LSD tests indicated a significant increase of blood perfusion (Figure 4I) in the left hindpaws during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05), as well as a significant percentage increase (Figure 4J) of blood perfusion in both hindpaws as compared to the baseline (+: p < 0.05).\nWhen the central stump of left L4 or L5 dorsal root was stimulated, there was obvious vasodilatation in the left paw (Figure 4A-H). The blood perfusion change of the left paw was normalized from raw data (Figure 4I) using the average of 10 baseline images and summarized (ipsilateral n = 8, contralateral n = 5, Figure 4J). Data were analyzed by ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34). The results indicated no effect of side of hindpaws, F (1, 3) = 1.66, p = 0.29; a significant effect of images numbers, F (33, 99) = 2.11, p = 0.002; and a significant effect of interaction (Side × Image), F (33, 99) = 2.11, p = 0.002. Posthoc Fisher LSD tests indicated significantly higher blood perfusion in the left hindpaw during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05) (Figure 4I), as well as to before stimulation (+: p < 0.05).\nEffect of stimulation of the central stump of the cut dorsal root on blood perfusion in hindpaws bilaterally. The upper panel shows laser Doppler images from one rat as control (A), during stimulation (B-D), and 2 (E), 14 (F), 32 (G), and 52 min (H) after the end of stimulation. 
Blue color indicates lower perfusion, whereas red color indicates higher perfusion. Time response curves are plotted for the raw data (I) and normalized changes in blood perfusion (J) in both the ipsilateral (Left) and contralateral (Right) hindpaws. The gray area indicates duration of stimulation (from image 11 to 13). Note +: p < 0.05 as compared to baseline, including all time points under the bracket. *: p < 0.05 as compared to the right side.\nAfter normalization, an ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34) indicated no effect of side of hindpaws, F (1, 3) = 0.16, p = 0.71; a significant effect of images, F (33, 99) = 2.26, p = 0.001; and a significant effect of interaction (Side × Image), F (33, 99) = 2.91, p < 0.001. Posthoc Fisher LSD tests indicated a significant increase of blood perfusion (Figure 4I) in the left hindpaws during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05), as well as a significant percentage increase (Figure 4J) of blood perfusion in both hindpaws as compared to the baseline (+: p < 0.05).", "After the left L4 dorsal root was cut, the central stump was placed on the stimulating electrode. To ensure that DRRs can be elicited, a small fascicle of neighboring dorsal root (usually L5) was teased centrally and was placed in a recording electrode (Figure 1A). Multiunit spontaneous antidromic discharges were recorded from all 8 animals that were tested for plasma extravasation. The discharges were irregular and usually at a very low rate but increased during electrical stimulation of L4 (Figure 2A-C). Average mean spontaneous activity was 0.09 ± 0.03 Hz (range: 0-0.22 Hz; n = 8). In most recorded units, additional DRR activity could be evoked by applying a graded mechanical stimulus (brush, pressure, and pinch) to the skin of the foot (data not shown). One cell was found whose receptive field was covering the whole body, as previously reported by others [43,51]. During electrical stimulation (20 V, 5 Hz, 5 ms), a significant increase in DRRs was observed (2.28 ± 0.76 Hz; range: 0.13-5.27 Hz; n = 8, P < 0.05). The activity of antidromic discharges returned to normal 2 min (127 ± 70 s, range from 0 to 585 s) after the termination of electrical stimulation (0.14 ± 0.04 Hz; range: 0-0.36 Hz; n = 8). Four out of eight fibers returned to baseline as soon as the stimulation was terminated; one fiber lasted as long as 10 min.\nDorsal root reflexes from a left L5 filament. A representative strand shows that DRRs can be recorded from the central stump of the dorsal root (L5) while the L4 central stump is stimulated (A). Each vertical line indicates a DRR. In this strand, about 4 fibers show DRR activity, based on their amplitudes and shapes. The horizontal line indicates the duration of electrical stimulation (20 V, 5 Hz, 0.5 ms for 5 min). During stimulation, an obvious increase of DRRs is demonstrated, which is summarized in (C). An expanded trace is shown in B. *: p < 0.05.", "When the central stump of the left L4 dorsal root was stimulated, there was no obvious plasma extravasation observed in the ipsilateral (left) paw (Figure 3, 1st row) and contralateral (right) paw (Figure 3, 2nd row). The color intensities before stimulation were 109.57 ± 10.02 in the left paw (Figure 3G) and 103.10 ± 4.25 AU (arbitrary unit) (Figure 3G) in the right paw among 8 animals. 
During and after central left L4 stimulation, the color intensity for the left paw ranged from 113.72 ± 9.81 to 116.96 ± 8.96 AU (Figure 3G); the color intensity for the right paw ranged from 102.57 ± 4.53 to 106.43 ± 3.46 AU (Figure 3G). Data were analyzed by ANOVA to test differences between sides (ipsilateral and contralateral), and among effects of time (C to 10 min) following central stump stimulation. The results indicated no effect of stimulation side, F (1, 7) = 1.86, p = 0.22; no effect of time, F (20, 140) = 1.33, p = 0.17; and no effect of interaction (Side × Time), F (20, 140) = 1.25, p = 0.23.\nA series of representative images of the left and right paws while the central stump of cut L4 (1st and 2nd rows) and peripheral stump of cut L5 (3rd row) dorsal root was stimulated. The blue patches on the paw indicate plasma extravasation due to leakage of Evans Blue (EB). Note: A - before EB injection, B - after EB injection, C - 0.5 min, D - 2 min, E - 3.5 min, and F - 5 min after onset of stimulation. A summary shows the changes in color intensity (G) or changes in percentage of color intensity (H) of the ipsilateral (left) or contralateral (right) paws following electrical stimulation of either the central stump of L4 or the peripheral stump of the left L5 dorsal root. When the central stump of the left L4 dorsal root was stimulated, there were no significant differences in the color intensities (G) and percentage changes in color intensity (H) between the left (filled diamond) and right paws (open square). However, significant differences of color intensity and percentage change in color intensity were detected in the left paw when the central stump of L4 (filled diamond) and peripheral stump of L5 dorsal root (filled triangle) were stimulated. A summary of plasma extravasation induced by stimulating intact L5 dorsal root (I) shows significant plasma extravasation (as indicated by the decrease of color intensity, n = 3) 5 min after stimulation. The gray area indicates the duration of stimulation (5 min). *: p < 0.05, ***: p < 0.001, Fisher LSD test, as compared with the color intensity before electrical stimulation. +: p < 0.05, Fisher LSD test, as comparing the peripheral L5 stimulation (left paw) with the central L4 stimulation (left paw). AU: arbitrary unit; C: as a control before stimulation.\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw ranged from 4.31 ± 2.96 to 8.00 ± 2.77% (Figure 3H); the percentage change of color intensity of the right paw ranged from -0.42 ± 2.31 to 3.68 ± 2.42% (Figure 3H). Data were analyzed by ANOVA to test differences between sides of central stump stimulation (side: ipsilateral and contralateral), and among effects of time (time: C to 10 min). The results indicated no effect of stimulation side, F (1, 7) = 5.48, p = 0.052; no effect of time, F (20, 140) = 1.37, p = 0.15; and no effect of interaction (Side × Time), F (20, 140) = 1.23, p = 0.24.", "When the peripheral stump of the left L5 dorsal root was stimulated, there was obvious plasma extravasation observed in the left paw as demonstrated by blue patches (Figure 3A-F, 3rd row). The color intensity of the left paw before stimulation were 109.57 ± 10.02 for central stimulation group (n = 8, Figure 3G) and 125.94 ± 7.10 AU (arbitrary unit) (Figure 3G) for peripheral stump stimulation group (n = 7), respectively. 
During and after stimulation of the peripheral stump of the left L5 dorsal root, the color intensity for the left paw dropped as low as 99.05 ± 8.06 AU. Data were analyzed by ANOVA to test differences between stimulation site (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated no effect of stimulation site, F (1, 6) = 0.52, p = 0.5; a significant effect of time, F (20, 120) = 5.29, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 7.96, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower color intensity in the left paw 2 minutes following stimulation of the peripheral stump of the dorsal root (+: p < 0.05) as compared to central stump stimulation (Figure 3G), as well as to before stimulation (*: p < 0.05).\nBy using the color intensity before stimulation to normalize the other data, the percentage change of color intensity of the left paw dropped to -21.28 ± 4.73% following stimulation of the peripheral stump of the dorsal root (Figure 3H). Data were analyzed by ANOVA to test differences between sites of stimulation (central vs. peripheral), and among effects of time (time: C to 10 min). The results indicated a significant effect of stimulation site, F (1, 6) = 19.44, p = 0.005; a significant effect of time, F (20, 120) = 4.20, p < 0.001; and a significant effect of interaction (Site × Time), F (20, 120) = 6.16, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the peripheral stump of the dorsal root (+: p < 0.05) as compared to central stump stimulation (Figure 3H), as well as to before stimulation (*: p < 0.05).", "When the intact left L4 or L5 dorsal root was stimulated, there was plasma extravasation observed in the left paw (n = 3, Figure 3A-F, 4th row). The color intensity change of the left paw was normalized by using the color intensity before stimulation (Figure 3I). The changes in color intensity of the left paw were -26.63 ± 5.91 at 5 min, -34.77 ± 2.81 at 10 min, -32.75 ± 3.69 at 15 min, and -35.69 ± 6.16 at 20 min following stimulation, where the negative value indicates an increase in extravasation. Significant changes were found at 5, 10, 15, and 20 min after stimulation. One-way ANOVA showed a significant change after stimulation of the intact dorsal root, F (4, 8) = 31.6, p < 0.001. Posthoc Fisher LSD tests indicated significantly lower intensity in the left paw following stimulation of the intact dorsal root as compared to before stimulation (***: p < 0.001).", "When the central stump of left L4 or L5 dorsal root was stimulated, there was obvious vasodilatation in the left paw (Figure 4A-H). The blood perfusion change of the left paw was normalized from raw data (Figure 4I) using the average of 10 baseline images and summarized (ipsilateral n = 8, contralateral n = 5, Figure 4J). Data were analyzed by ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34). The results indicated no effect of side of hindpaws, F (1, 3) = 1.66, p = 0.29; a significant effect of image number, F (33, 99) = 2.11, p = 0.002; and a significant effect of interaction (Side × Image), F (33, 99) = 2.11, p = 0.002. 
Posthoc Fisher LSD tests indicated significantly higher blood perfusion in the left hindpaw during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05) (Figure 4I), as well as to before stimulation (+: p < 0.05).\nEffect of stimulation of the central stump of the cut dorsal root on blood perfusion in hindpaws bilaterally. The upper panel shows laser Doppler images from one rat as control (A), during stimulation (B-D), and 2 (E), 14 (F), 32 (G), and 52 min (H) after the end of stimulation. Blue color indicates lower perfusion, whereas red color indicates higher perfusion. Time response curves are plotted for the raw data (I) and normalized changes in blood perfusion (J) in both the ipsilateral (Left) and contralateral (Right) hindpaws. The gray area indicates duration of stimulation (from image 11 to 13). Note +: p < 0.05 as compared to baseline, including all time points under the bracket. *: p < 0.05 as compared to the right side.\nAfter normalization, an ANOVA to test differences between hindpaws (ipsilateral vs. contralateral), and among effects of images (image number: 1-34) indicated no effect of side of hindpaws, F (1, 3) = 0.16, p = 0.71; a significant effect of images, F (33, 99) = 2.26, p = 0.001; and a significant effect of interaction (Side × Image), F (33, 99) = 2.91, p < 0.001. Posthoc Fisher LSD tests indicated a significant increase of blood perfusion (Figure 4I) in the left hindpaws during stimulation of the central stump of dorsal root as compared to the right hindpaw (*: p < 0.05), as well as a significant percentage increase (Figure 4J) of blood perfusion in both hindpaws as compared to the baseline (+: p < 0.05).", "The goal of the present experiment was to minimize confounders introduced by artificial antidromic stimulation of dorsal root or introduction of substances to the periphery by interrupting the communication of the stimulated dorsal root with periphery. In the past, electrical stimulation of one dorsal root elicited DRRs in the neighboring roots and spread along (up to 16 spinal segment in both directions from the stimulated site) and across the spinal cord [52]. This process is believed to operate in all-or-none manner once activated [53]. Since a rat's paw is innervated by L4-L6 originating nerves, we assumed that stimulating the central portion of one cut dorsal root would evoke DRRs to the ipsilateral paw through the remaining two dorsal roots. In fact, in the current experiment electrical stimulation of the central stump of the dorsal root elicited a significant increase in DRR activity in the recorded fibers of the neighboring dorsal roots.\nNeuropeptides (particularly, substance P and CGRP) found in peripheral terminals of nociceptive fibers contribute to neurogenic inflammation and are released in response to antidromic stimulation [11,13,40,54]. Therefore, we expected that electrically evoked DRRs in the nerves innervating a rat's paw would produce both plasma extravasation and vasodilation. However, bilateral vasodilation but not plasma extravasation was observed in response to central stump stimulation.\nAs previously mentioned, SP acting on tachykinin receptors increases microvascular permeability and edema formation [10,13]. CGRP, on the other hand, acting on its receptors produces arteriolar vasodilation [13,15]. Interestingly, C-fibers contain both SP and CGRP, whereas Aδ-fibers predominantly have CGRP in their peripheral terminals [17,55,56]. 
In addition, antidromic stimulation of the saphenous nerve at C-fiber intensity produces both vasodilation and plasma extravasation, whereas stimulation at Aδ-fiber intensity produces only vasodilation [24,54]. It has been previously shown that 1-2 pulses to lumbosacral dorsal roots are enough to cause a change in cutaneous microcirculation, and 4-16 pulses at 2 Hz evoke vasodilatation lasting for several minutes [20]. Similar results have been shown with spinal cord stimulation [57]. In our study, electrical stimulation of the intact dorsal root or the peripheral stump of the dorsal root produced both vasodilatation and plasma extravasation in the skin. However, electrical stimulation of the central stump with the same parameters did not elicit plasma extravasation on either side, but did produce vasodilatation bilaterally. This finding suggests that the stimulation parameters selected were sufficient to excite both myelinated and unmyelinated fibers in both the distal and central stumps of the dorsal root. However, stimulation of the central stump of the dorsal root triggers more DRRs in myelinated than unmyelinated fibers in the neighboring roots, and leads mostly to CGRP release, and in turn vasodilation.\nThe differential release of co-localized neurotransmitters from the same terminal depending on the firing rate is another possible explanation for the observed results. The stimulation frequency needed to induce plasma extravasation is higher than that needed to produce vasodilation [54]. Electrical stimulation should to some extent mimic peripherally evoked orthodromic action potentials. It is true that DRRs evoked by stimulating the central stump are much weaker than those evoked by direct stimulation of the distal stump, due to the nature of multisynaptic connectivity inside the spinal cord. This may help to explain the differences in plasma extravasation resulting from stimulation of the central versus peripheral stump. In addition, co-packaged in the same granule, catecholamines and neuropeptides have been shown to be differentially released from the adrenal medulla depending on the firing rate through a regulated activity-dependent dilation of the granule fusion pore and a size-exclusion mechanism [58,59].\nIn both of the proposed mechanisms, there should be a higher probability of DRR generation in Aδ-fibers compared to C-fibers in response to central stump orthodromic stimulation. First, there may be a differential effect of GABA on GABAA receptors on the central terminals of primary afferents. C-fibers have been shown to have a lower density of GABAA receptors compared to both Aδ-fibers and Aβ-fibers [60]. Second, the threshold for generation of DRRs by PADs may be higher in C-fibers compared to Aδ-fibers.\nIn addition, the proportion of CGRP-containing afferents is much higher than that of SP-containing afferents in the skin. CGRP is present in both myelinated and unmyelinated nociceptive fibers, whereas SP is only found in small-diameter unmyelinated fibers. CGRP is also found in a larger number of unmyelinated fibers compared to SP [61].\nFinally, the role of the sympathetic nervous system needs to be addressed, since stimulation of the central stump may increase sympathetic activity. On one hand, sympathetic activity can decrease neuropeptide release from afferent fibers by its action on prejunctional α2-adrenoreceptors [13], and counteract dorsal root reflex-mediated neurogenic inflammation [62]. 
On the other hand, the presence of the sympathetic system is important for the development of DRR-mediated neurogenic inflammation through the actions of neuropeptide Y (NPY) and norepinephrine on NPY Y2 and alpha1 receptors, respectively [63,64].\nIn this study, the contribution of the sympathetic nervous system during central stump stimulation was examined in two experiments: the change in blood perfusion during stimulation of the central stump of the cut dorsal root (Figure 4) and plasma extravasation during stimulation of the intact dorsal root (Figure 3I). Stimulation of the central stump produced a significant bilateral increase in blood perfusion, suggesting that DRRs in primary afferents surpass sympathetic vasoconstriction, if present. Stimulation of the intact dorsal root, on the other hand (action potentials can travel orthodromically and antidromically), produced plasma extravasation in the ipsilateral hindpaw, suggesting that even if the sympathetic system is activated by orthodromic input, its subsequent effects are not strong enough to counteract the plasma extravasation induced by the antidromic spikes that reached the periphery.", "In summary, incoming stimulation at an intensity that activates all types of nociceptive fibers produces DRRs in the intact neighboring roots as well as bilateral vasodilation of the innervated area, but not plasma extravasation. Neurogenic inflammation is a complex process that requires the co-release of multiple substances. It seems that noxious stimulation alone is not capable of eliciting all signs of neurogenic inflammation. Therefore, successful treatment of neurogenic inflammation will require addressing not only the neural input to the spinal cord, but also the co-factors in both the spinal cord and the periphery that allow that neural input to be converted into neuropeptide co-release. In addition, acutely elicited DRRs are not able to elicit the complete picture of neurogenic inflammation; future studies are necessary to establish the contributions and nature of DRRs in chronic pain states such as arthritis or migraine.", "The authors declare that they have no competing interests.", "This study is based on the original idea of OVL and YBP. OVL performed the data collection, data analysis, and statistical analysis, and wrote the manuscript. YBP contributed to the conception and design of the study and to the analysis and interpretation of the data. All authors have read and approved the final manuscript." ]
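The colour-change and blood-perfusion normalizations described in this record's data-analysis entry reduce to a single percentage-change formula, [(value - baseline) / baseline] × 100%, with the baseline taken either from the pre-stimulation image (colour intensity) or from the average of the first 10 images (perfusion). The sketch below is purely illustrative: the original analysis was carried out in Matlab, every array name and value here is hypothetical, and Python (NumPy/SciPy) is used only to make the arithmetic and the paired t-test on DRR frequencies explicit.

    import numpy as np
    from scipy import stats

    def percent_change(values, baseline):
        # [(value - baseline) / baseline] x 100; negative colour-intensity
        # values mean a darker paw, i.e. Evans Blue extravasation.
        return (np.asarray(values, dtype=float) - baseline) / baseline * 100.0

    # Colour intensity: mean grey level in the paw region of interest, one
    # value per photograph; the pre-stimulation image is the baseline
    # (hypothetical AU values).
    roi_intensity = np.array([110.0, 112.5, 108.9, 95.2, 90.1])
    intensity_pct = percent_change(roi_intensity[1:], roi_intensity[0])

    # Blood perfusion: the first 10 baseline images are averaged as the
    # control for all later images (34 images per run; values hypothetical).
    perfusion = np.linspace(100.0, 130.0, 34)
    baseline_perf = perfusion[:10].mean()
    perfusion_pct = percent_change(perfusion[10:], baseline_perf)

    # DRR frequency before (3 min) versus during (5 min) stimulation, paired
    # per animal (hypothetical Hz values for the 8 animals in the record).
    drr_before = np.array([0.05, 0.10, 0.00, 0.22, 0.08, 0.12, 0.03, 0.09])
    drr_during = np.array([1.10, 2.80, 0.13, 4.90, 1.50, 3.20, 0.80, 3.80])
    t_stat, p_value = stats.ttest_rel(drr_before, drr_during)
    print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.4f}")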
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
The origin of a derived superkingdom: how a gram-positive bacterium crossed the desert to become an archaeon.
21356104
The tree of life is usually rooted between archaea and bacteria. We have previously presented three arguments that support placing the root of the tree of life in bacteria. The data have been dismissed because those who support the canonical rooting between the prokaryotic superkingdoms cannot imagine how the vast divide between the prokaryotic superkingdoms could be crossed.
BACKGROUND
This article was reviewed by Patrick Forterre, Eugene Koonin, and Gáspár Jékely.
REVIEWERS
We review the evidence that archaea are derived, as well as their biggest differences with bacteria. We argue that using novel data the gap between the superkingdoms is not insurmountable. We consider whether archaea are holophyletic or paraphyletic; essential to understanding their origin. Finally, we review several hypotheses on the origins of archaea and, where possible, evaluate each hypothesis using bioinformatics tools. As a result we argue for a firmicute ancestry for archaea over proposals for an actinobacterial ancestry.
RESULTS
We believe a synthesis of the hypotheses of Lake, Gupta, and Cavalier-Smith is possible where a combination of antibiotic warfare and viral endosymbiosis in the bacilli led to dramatic changes in a bacterium that resulted in the birth of archaea and eukaryotes.
CONCLUSION
[ "Archaea", "Biological Evolution", "Gram-Positive Bacteria", "Phylogeny" ]
3056875
null
null
Methods
Structural alignments were performed using CE [124]. Sequence alignments were performed using MUSCLE [125]. The alignments were visualized in Jalview [126]. Sequence trees were constructed using Phyml [127]. The essentiality of genes was determined by querying the Database of Essential Genes [31]. Drug targets were identified from DrugBank [101]. The distribution of those targets was examined using Pfam [78].
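As an illustration of how the alignment-and-tree portion of this pipeline might be run, the sketch below chains MUSCLE and PhyML from Python. The record gives no commands, versions, or file names, so everything here is an assumption: the flags correspond to MUSCLE 3.x and PhyML 3.x, the FASTA/PHYLIP file names are hypothetical, and Biopython is used only to convert between alignment formats; plain subprocess calls are used so the exact flags stay visible and easy to adjust.

    import subprocess
    from Bio import AlignIO  # Biopython, used here only for format conversion

    # 1. Align the protein sequences with MUSCLE (FASTA in, FASTA out).
    subprocess.run(["muscle", "-in", "proteins.fasta", "-out", "proteins.aln.fasta"],
                   check=True)

    # 2. Convert the alignment to relaxed PHYLIP, the input format PhyML expects.
    AlignIO.convert("proteins.aln.fasta", "fasta", "proteins.phy", "phylip-relaxed")

    # 3. Build a maximum-likelihood tree from the amino-acid alignment with PhyML.
    subprocess.run(["phyml", "--input", "proteins.phy", "--datatype", "aa"],
                   check=True)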
null
null
null
null
[ "Background", "Results", "Three reasons why archaea are derived", "Ribosomal revolutions are historical fact", "There is a great divide in DNA replication machinery, but it can be bridged", "Are the Archaea Paraphyletic or Holophyletic? We're agnostic", "Weakening the neomuran hypothesis", "Evidence that supports a firmicute ancestry for archaea", "Peroxisomes: the red herring?", "Viruses as the missing link between the prokaryotic superkingdoms", "The greatest battle ever fought", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Reviewers' comments", "Reviewer's report 1", "Author's response", "Author's response", "Author's response", "Author's response", "Author's response", "Author's response", "Author's response", "Reviewer's report 2", "Author's response", "Author's response", "Reviewer's report 3", "Author's response", "Author's response", "Author's response", "Author's response" ]
[ "Archaea were first discovered because of a distinct sequence signature in their ribosomal RNA [1]. This remains one of the strongest signals found anywhere in the phylogenetic tree. It was truly a revolution in thought when the world realized there were two distinct types of prokaryotes. Besides placement on sequence trees, there are three major areas where archaea and bacteria differ greatly. First, the structures of archaeal and bacterial ribosomes each have many unique proteins [2]. Second, archaeal membranes are composed of glycerol-ether lipids, while bacterial membranes are composed of glycerol-ester lipids [3]. The glycerols have different stereochemistries between the superkingdoms as well. Third, the DNA replication machinery of these two superkingdoms is very different; many key proteins comprising this machinery have a superkingdom specific distribution [4].\nThese differences as well as the rRNA tree have convinced most scientists that the root of the tree of life must be between the prokaryotic superkingdoms. The proposal that archaea were a different kingdom was originally considered ridiculous because no one could imagine two distinct groups of prokaryotes [5]. In 30 years we went from the prevailing opinion that the archaea were similar enough to bacteria to be just prokaryotes, to the view they are so different they must each be primordial lineages.\nLocating the root of the tree of life is a prerequisite for understanding the origin and evolution of life. There are many examples of conclusions that become radically different if one assumes a different rooting of the tree. For example, the proposal that LUCA was acellular relies on a rooting between the archaea and bacteria [6]. Each of the estimates for divergence times of the prokaryotic taxa [7] would change drastically if archaea are not the same age as LUCA.\nSeveral groups have hypothesized that the root of the tree of life lies within bacteria and place archaea as a taxon derived from gram-positive bacteria [8-10]. These hypotheses are often dismissed for two reasons: 1) they do not agree on a single rooting; 2) there is an immense gap between archaea and bacteria in sequence trees and in the systems mentioned above. We addressed the differences between these alternative rooting options in [11] and concluded that it is possible for them to converge on a single root in the gram-negative bacteria. The point of this work is to address the objection to rooting archaea within the gram-positives.\nThis work is a synthesis of many creative ideas that came before us; as a result, much of what we say here has been said in some form before by others. However, the arrangement of the pieces is, we believe, novel and sheds light on the strengths and weaknesses of the various rootings of the tree of life. First, we discuss the ideas in their original form and consider what we see as the strengths and weaknesses of each. We take the stance that closing the debate prematurely deprives one of the ability to the see the many strengths of each of these hypotheses and the large common ground between them. We then offer novel data that helps refine some of these ideas and show the potential for testing them further.\nRadhey Gupta and colleagues created a detailed tree of life using rarely fixed indels (insertion-deletions) in prokaryotic groups [9]. He concluded that the root of the tree of life is within the gram-positive bacteria, and he places archaea as derived from firmicutes. 
The major driving force in his scenario is antibiotic warfare. He argues the differences between archaea and bacteria coincide with many of the targets of antibiotics produced by gram-positive bacteria. We will review recent work that demonstrates many antibiotic binding sites have dramatically different affinities in the superkingdoms. The strength of Gupta's phylogeny rests on the fact that many of the branch orders are supported by several independent indels. However, there are several points that concern us about Gupta's hypothesis. First, we disagree with his polarization of Hsp70 which is used to justify the root of the tree of life [11]. But the focus of the present paper is the origin of archaea, so that debate is probably better left to our other work[11]. The transition between gram-positives and archaea must have been a drastic event to be confronted in any hypothesis that roots the tree of life in bacteria. Antibiotic warfare is a powerful evolutionary force, but in Gupta's hypothesis there seems to be a special battle that resulted in archaea. He does not explain why antibiotic warfare only gave rise to one other prokaryotic superkingdom. Should not one expect there to be several different modified ribosomes in response to antibiotic pressure? We will invoke antibiotic warfare as a major driver in the origin of archaea, but we feel our scenario better sets the stage for why this was a unique event. Antibiotic warfare on its own is not enough to account for the vast differences between the prokaryotic superkingdoms, but it certainly was important.\nJames Lake and colleagues has also constructed a detailed tree of life using indels [10]. His group has focused more on indels that can be polarized using paralogous out-groups. The strength of Lake et al.'s method is that it provides evidence for derived and ancestral groups, which we feel is essential for understanding evolutionary histories. The polarizations are largely independent. This allows one to refine the tree because a flawed polarization will only affect one part of the tree. Like Gupta, his group roots archaea within the firmicutes and provides several independent reasons why this makes sense [12]. Lake has also proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structure [13,14]. We find arguments like this appealing as it is a synthesis of both sequence and structural data. We discuss the strengths and weaknesses of that particular hypothesis at length below.\nThe weakness of the indel method in general is the difficulty in properly aligning paralogs as we argued in [11]. Fortunately, polarizations are mostly independent; so changing a polarization does not invalidate the whole tree, it just refines it. We argue that the refined version of Lake's tree is completely consistent with Cavalier-Smith's [11]. There are very few universal paralogs, so this method certainly needs to be supplemented with other data sources.\nCavalier-Smith has discussed the relationship and origin of the superkingdoms at length [8,15,16]. The major difference between his hypothesis and that of Gupta and Lake is the placement of the root in the gram-negative bacteria. He also roots archaea within or next to the actinobacteria. Cavalier-Smith constructed his tree by polarizing multiple types of data including indels, membrane structure, and quaternary structure. Again, if any one of these polarizations is brought into question it does not weaken the remainder. 
Cavalier-Smith included unique supporting discussions drawn from the prokaryotic fossil record. His analysis concludes that there is no fossil evidence to indicate archaea are older than eukaryotes, despite much evidence that bacteria are older than eukaryotes. That said, there are several aspects of Cavalier-Smith's tree that still do not sit well with us. His hypothesis relies on the assumption that archaea are holophyletic (eukaryotes are their sisters, not their descendants). He provides some justification for this, but we will discuss below why we believe this is not a completely safe assumption at this time. Cavalier-Smith's rooting of the neomura (his term for archaea, eukaryotes and their last common ancestor (LAECA)) is in the actinobacteria. He cites traits shared between the eukaryotes and actinobacteria to support this hypothesis, but they are only relevant if archaea are holophyletic. We provide an alternative interpretation of this distribution by invoking an actinobacterial endosymbiont near the root of eukaryotes. Cavalier-Smith argues thermophily was the major force that led to the neomuran revolution. We feel this argument falls short for the same reason as Gupta's; it just does not seem to be a unique enough selective pressure to create a novel superkingdom. Cavalier-Smith prefers the labels archaebacteria and eubacteria because he feels the labels archaea and bacteria overemphasize the difference between these superkingdoms. We disagree; these superkingdoms are fundamentally different. Despite that, we still believe archaea evolved from within bacteria.\nNone of these scenarios adequately addresses the origin of the DNA replication machinery shared between archaea and eukaryotes. Therefore we invoke the ideas of Patrick Forterre, who has proposed that cells received the ability to replicate DNA from viruses. He proposes this occurred three times, each event resulting in the birth of a superkingdom [17,18]. The amazing variation in DNA replication machinery found throughout the virosphere supports this idea. All extant cells use double-stranded DNA, but viruses can have many other forms of genetic material (reviewed in [19]). The plasticity of replication in the virus world certainly could lead to innovations of great importance in the cellular world.\nThere are two weaknesses to this view in our opinion. First, it is DNA-centric, so it necessarily neglects the many other important differences between the superkingdoms. Second, it is firmly placed within the framework of the classical rRNA tree. Forterre even assumes eukaryotes are a primordial lineage, as a consequence of taking the sequence tree too literally. We will demonstrate that this view is also highly informative if archaea are derived from bacteria. It has also been noted that other extrachromosomal elements could play key roles in the evolution of the different DNA replication systems [20], but that discussion is also firmly grounded in the canonical rooting.\nTaking all these viewpoints together, it would seem an uphill battle to argue that archaea are a derived superkingdom. One needs to provide compelling evidence that archaea are derived, so we will review our data that support that view. Any hypothesis that addresses how a bacterium could become an archaeon would have to explain dramatic changes in membranes, DNA replication, and ribosomes. We will demonstrate that the ribosome can have great plasticity under certain circumstances. 
It has been previously argued that the firmicutes have many of the enzymes needed to make archaeal membranes [21]. We will invoke viral endosymbiosis to explain the differences in DNA replication. For the reasons discussed below the hypothesis must work if archaea are paraphyletic or holophyletic. Finally, it must also address the rarity of the event that lead to this revolution. If a hypothesis could do all of these things, it would make a compelling argument for the origin of archaea.", "[SUBTITLE] Three reasons why archaea are derived [SUBSECTION] Several large indels are shared between archaea and gram-positive bacteria, and both groups only have one membrane [9]. Thus, if there is a direct relationship between the gram-positives and archaea the root is either between them, or one is derived from the other. Every piece of evidence that is polarizable implies archaea are derived from bacteria. Arguments that archaea and bacteria are so different that they both evolved from LUCA sidesteps directionality altogether. The only recent work that explicitly roots the tree in archaea is that of Wong et al. [22]. Many of their arguments are based on assumptions about the nature of LUCA and assumptions of what a primitive state would look like. None of their arguments are true polarizations. To the best of our knowledge there is no single polarized argument for an archaeal rooting that is on par with the three we shall discuss that place archaea as derived.\nThe first of these arguments is the proteasome. Proteasomes are self compartmentalized atp-dependent proteases that are found in varying degrees of complexity across the tree of life. All archaea contain a 20S proteasome which is composed of 28 subunits and is encoded by at least two genes that are clearly homologs. Therefore the 20S proteasome must be the result of duplication. Cavalier-Smith has argued that the simpler bacterial homolog HslV (heat shock locus v) could be duplicated to generate a 20S proteasome [8,16]. Loss of a subunit in the 20S proteasome would result in an open proteasome with no ATPase. Such a protein would lose the essential function of controlled degradation found in proteasomes, and does not make sense as an intermediate. It is more likely that the 20S proteasome is derived from a simpler structure. Cavalier-Smith excludes the root from archaea because all archaea contain a clearly derived protein.\nHowever, there is a counter argument to that proposal; LUCA had HslV and LACA (last archaeal common ancestor) is the point in the tree where HslV evolves into the 20S proteasome (Figure 1A). This would still exclude the root from the crown archaea, but it still allows for the possibility that the root is between the extinct stems of archaea and bacteria. Excluding the root from archaea will never be enough because one can always invoke stem lineages that show up before the derived trait. This would imply the 20S proteasome present in actinobacteria is probably the result of a horizontal transfer from archaea. However, we have observed that the two proteasome genes are often in the same operon in actinobacteria, but rarely together in archaea. This weakly polarizes the direction of the horizontal transfer to the archaea.\nTwo scenarios for interpreting the three polarizations. A) Under the canonical rooting proteasome evolution would require several selective sweeps and large-scale loss. 
The monomer PyrD B would have evolved from one of the more complex quaternary structures, and the derived insert in EF-2 would occur after LACA. B) Under the gram-negative rooting, Anbu could be ancestral to both HslV and the 20S proteasome. PyrD could evolve via stepwise increases in structural complexity, and there is no need to invoke extinct stem archaea to explain the EF_G insert. We believe these transitions argue for a gram-negative rooting.\nHowever, there is stronger evidence that narrows the root to within the bacteria. Our own work argues that the Anbu proteasome (or peptidase according to [23]) is more likely than HslV to be the 20S proteasome's direct ancestor based on both sequence data and structure predictions [24]. This argument is much stronger than Cavalier-Smith's because HslV is widespread in the gram-positives but Anbu appears to be missing in them altogether (Figure 1B). If the divide between archaea and bacteria is the earliest split in the tree, and our hypothesis on proteasome evolution is correct, then LUCA must have had Anbu. This would mean that all extant gram-positives need to have lost Anbu while the gram-negatives (that must be derived from gram-positives in this scenario) somehow retained Anbu. One would have to invoke a selective sweep of the 20S proteasome in archaea, and of HslV in the gram-positives. It is plausible that the 20S proteasome outcompeted Anbu or HslV since they are almost never found in the same genome. However, Anbu and HslV are found together in many genomes, which is evidence that neither totally displaces the other in terms of function. Our arguments about Anbu are based on structure prediction, but a crystal structure could experimentally verify those predictions. If we are correct it may be the smoking gun for a gram-negative rooting, but even without that there is ample evidence to support Cavalier-Smith's position. Even if HslV is the direct ancestor of the 20S proteasome, the root can still be excluded from all extant archaeal lineages.\nThe recent analysis of the proteins that occur in Anbu's operon [23] presented evidence that we are wrong in labeling Anbu a proteasome because it lacks an associated ATP-dependent protein required for unfolding substrates. HslV and the 20S proteasome clearly have associated ATPases dedicated to unfolding substrates. Therefore the transition to both of them is easier from Anbu as no ATPase would have to be lost. The origin of HslV and the 20S proteasome would both involve the recruitment of distinct ATPase subunits. Therefore we think this new work strengthens our hypothesis that Anbu is ancestral to the 20S proteasome because no intermediate would ever lose the regulatory ATPase. If our hypothesis is correct, proteasomes would be polyphyletic if they are defined by the presence of the ATPase subunit as suggested in [23].\nThe indel in EF-2 shared between archaea and eukaryotes has been polarized using EF-Tu as an outgroup [25]. Our alignment-free analysis of this indel agrees with the authors' conclusions despite there being a sequence artifact in their original alignment [11]. This polarization robustly excludes the root from within archaea, but does not narrow it to within bacteria.\nIn that analysis we also presented a novel structure-based argument for polarizing archaea. The quaternary structure of PyrD 1B is a heterotetramer across the firmicutes and archaea. We argue that the heterotetramer is probably derived from the homodimer PyrD 1A based on the presence of a conserved interface.
The monomeric and homodimeric versions are present in the Gram-negatives and Actinobacteria. PyrD 1B is found across a gram-positive group and archaea, so it would have to be present in their last common ancestor, which is LUCA under the canonical rooting. This could be explained by the presence of both PyrD 1A and 1B in LUCA. But that scenario would require PyrD 1A to be lost in every archaea and some firmicutes, and for there to be a reversion to the monomeric form, PyrD, across the gram-negatives and actinobacteria. PyrD 1B is probably derived, so it follows that archaea, firmicutes, and their last common ancestor are also derived.\nThe polarization of the indel in EF-2 excludes the root from the extant archaea. Our novel polarizations of Anbu and PyrD argue the root is within bacteria. If these arguments only excluded the root from all extant archaea one is left wondering why all archaea that are not clearly derived went extinct. The combination of all three arguments strongly supports the bacterial rooting of the tree. If archaea are derived, there must be some way of reconciling the major differences between them and bacteria.
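The polarizations above are, at bottom, parsimony arguments: each rooting implies a different minimum number of gains and losses for characters such as the PyrD quaternary structure or the proteasome family present in each lineage. The following is a purely illustrative sketch, not part of the original analysis; the tree topologies, taxon names and character coding are our own simplifications (loosely modelled on the PyrD argument), not data from this paper.

```python
# Minimal Fitch small-parsimony counter for a single character on a rooted
# binary tree written as nested tuples. Illustrative only; the trees and tip
# states below are hypothetical simplifications.

def fitch(tree, tip_states):
    """Return (candidate ancestral states, minimum number of state changes)."""
    if isinstance(tree, str):                      # leaf: its observed state set
        return tip_states[tree], 0
    left, right = tree
    lset, lcost = fitch(left, tip_states)
    rset, rcost = fitch(right, tip_states)
    shared = lset & rset
    if shared:                                     # children can agree: no new change
        return shared, lcost + rcost
    return lset | rset, lcost + rcost + 1          # disagreement: one extra change

# Toy coding of PyrD quaternary structure per lineage.
states = {
    "gram_negatives": {"monomer"},
    "actinobacteria": {"monomer"},
    "firmicutes":     {"heterotetramer"},
    "archaea":        {"heterotetramer"},
}

# Canonical rooting: archaea sister to all bacteria.
canonical = ("archaea", ("gram_negatives", ("actinobacteria", "firmicutes")))
# Derived-archaea rooting: archaea nested beside the firmicutes.
derived = ("gram_negatives", ("actinobacteria", ("firmicutes", "archaea")))

for label, tree in (("canonical", canonical), ("archaea-derived", derived)):
    _, cost = fitch(tree, states)
    print(f"{label} rooting: at least {cost} state change(s)")
```

On this toy coding the canonical rooting requires two changes and the derived-archaea rooting only one; the real argument, of course, also rests on the structural polarization (the heterotetramer being derivable from the dimer but not easily the reverse), which a raw change count does not capture.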
[SUBTITLE] Ribosomal revolutions are historical fact [SUBSECTION] Archaea cluster separately on phylogenetic trees based on ribosomal RNA [1]. This split has remained robust in many trees derived since then. We will discuss three scenarios that can explain this. The first scenario is that the ribosomal sequences are pretty good molecular clocks. The great split seen in the tree reflects this most ancient divide in cellular life and is in accordance with the canonical rooting.\nThe second scenario also does not contradict the canonical rooting. It goes as follows. The ribosome in LUCA was incomplete. It did not have all the proteins found in extant archaea and bacteria, only the core that is universal between them. The addition of proteins after the split of the superkingdoms would start a quantum evolutionary event. Some sites would be free to mutate to achieve increased stability, while others would be under evolutionary pressure to maintain a strict structure-function relationship. The rate of mutation at different sites on the ribosome could vary wildly and exaggerate the true distance between the superkingdoms, even if they do represent a very ancient split.\nThe third scenario, which we champion here, is that the bacterial ribosome evolved into an archaeal one. Again this would be a quantum evolutionary event and sequences of both rRNA and ribosomal proteins would evolve rapidly. The point we are trying to make is that these three scenarios would result in exactly the same sequence tree. Hence we must look towards independent lines of reasoning to determine which of these scenarios best describes the tree branching.\nWe can exclude the first scenario by comparing the structure of the ribosome in archaea and bacteria. 
In the 50S subunit there are six ribosomal proteins that are in the same position on the rRNA, but have non-homologous structures in archaea and bacteria [26,27]. These must have changed in at least one lineage since LUCA, regardless of LUCA's nature. Therefore, we should expect that the distance between archaea and bacteria would be exaggerated due to compensatory mutations in the rRNA and ribosomal proteins.\nIt is certainly reasonable to object to the third scenario because it seems implausible that a ribosome would change so much between superkingdoms, yet stay so well conserved within a superkingdom. However, there are two examples where we know that ribosome structure has indeed changed significantly. Mitochondrial ribosomes have changed dramatically from their bacterial ancestors. They have lost about half their rRNA and replaced it with additional proteins [28]. The eukaryotic ribosome evolved from an archaeal one (or technically from some sort of proto-archaeal ribosome if the archaea are holophyletic). There are eleven ribosomal proteins found only in the eukaryotes, nine of which are conserved across the superkingdom [2], and there is good separation on rRNA trees between eukaryotes and archaea. In the two cases where the ribosome structure has changed we know it changed from another fully functional ribosome. Thus, why would it be out of the question for it to happen between archaea and bacteria? There are five ribosomal proteins present across the crenarchaea, but absent in the euryarchaea [2]. These proteins were either lost or gained in one of these groups after they split. In either case there would be a transition between two complete ribosomes. In each of these cases we can clearly see that a ribosome can undergo dramatic changes in macromolecular structure when there is proper selective pressure (or relaxation of selective pressure).\nThe tree presented in [29] was constructed by concatenating 31 universal proteins. Of these, 23 are ribosomal proteins and many more are directly involved in translation. Many taxa on the tree cluster together with high bootstrap values (greater than 80%). However, there appear to be only three connections between high level taxa that are supported with that strength. The clustering of crenarchaea and euryarchaea is well supported, as is the clustering of eukaryotes and archaea. There is also a long well supported branch between the archaeal-eukaryal clade and bacteria. We doubt it is a coincidence that these splits correspond to the greatest changes in ribosomal structure on the tree. It appears the sequence tree in [29] and rRNA trees could be merely a reflection of the large changes in ribosomal structure that have occurred throughout the true tree of life. This protein set would be expected to work better as a clock within groups that have the same ribosomal proteins. Even if one uses more sophisticated tree building techniques, such as those in [30], the major changes in the ribosome are still going to be problematic. The authors concatenated many translational proteins and the resulting tree supported the paraphyly of archaea. Eukaryotes were placed near the archaeal species with the most similar ribosomal structures. However, a single gene tree of RNA polymerase alpha subunit (RPOA) supported holophyly in the same study. 
This implies some of their results are an artifact caused by structural changes in a ribosomal revolution.\nThe third scenario could certainly be weakened if it was found that all the ribosomal proteins were essential in bacteria and there was absolutely no way they could be tinkered with. We examined which ribosomal proteins are essential in eleven different bacterial species using the Database of Essential Genes [31]. There are sixteen ribosomal proteins that would need to be lost in the transition from a bacterium to an archaeon, as they are found across bacteria but never in archaea. None of these ribosomal proteins were found to be essential in all species, which is the first sign it is possible to lose and replace them. Four of the sixteen proteins are essential in all species except Mycobacterium tuberculosis (Table 1). Only four of these proteins are essential in M. tuberculosis, the least of any species in this data set.\nEssentiality of ribosomal proteins.\nThe essentiality of proteins that would need to be lost in the transition from an extant bacterial ribosome to an archaeal one varies from species to species. M. tuberculosis appears preadapted for the losses that would be necessary in the transition to an archaeon.\nTo determine whether this portion of the ribosome is significantly flexible we calculated a p-value assuming a binomial distribution. The essentiality of each subunit can be considered a success or a failure. The p-value measures the odds of seeing at most n essential subunits in a set of sixteen random ribosomal proteins. The odds of a random ribosomal protein being essential were estimated as the proportion of ribosomal proteins found to be essential in that species. This was done to eliminate experimental biases between the species sets, as some of the knockout experiments are more thorough than others. Several species had p-values under .05, but M. tuberculosis was by far the most significant with a p-value of .0031. This implies that M. tuberculosis's ribosome is under different selective pressure than most bacteria, and that it is the most preadapted ribosome in this dataset capable of evolving into an archaeal ribosome.\nIt is highly counterintuitive that nearly every universal protein could be nonessential. The difference between essential and persistent genes was discussed in [32]. The authors point out that essentiality differs in the wild and laboratory settings. Many of the ribosomal proteins listed as nonessential are still highly deleterious to lose. But the point is they can be lost under the right circumstances. It might be our proteasome centric view of the world, but we think the presence of the 20S proteasome in Mycobacterium could partially explain this observation. It has been proposed the major cost of mutations and mistranslation comes from dealing with mis-folded proteins [33]. The ribosomal proteins are among the most highly translated proteins in the cell, so there is lots of pressure to ensure they fold correctly. A highly advanced degradation system, like the 20S proteasome with a Pup targeting system [34], could greatly relax that selective pressure. If the initial tinkering is not lethal, one can easily imagine a scenario where compensatory mutations and structures could rapidly and significantly change the ribosome if there is proper selective pressure. We will describe such a scenario below.\nIt has been observed that many bacteria contain paralogs of ribosomal proteins where one form binds Zn and the other does not [35]. M. 
tuberculosis has duplicates of several ribosomal proteins, which could explain why some (but not all) of the ribosomal proteins are not essential in that genome. The authors note that thermophilic bacteria seem to prefer the Zn binding forms of the ribosomal proteins, and that there are seven Zn binding ribosomal proteins conserved across archaea and eukaryotes that are absent in bacteria. This is consistent with our ideas that major historical changes in the availability of Zn in the ocean were a significant constraint on protein structure evolution [36,37]. Bacteria vary their ribosomes to optimize for both high and low Zn conditions. One can imagine this strategy being taken to an extreme where the tweaks are not just simple displacements, but larger rearrangements. Increased availability of Zn, as the ocean became oxic, could be a factor that made toying with the ribosome favorable for the early archaea. This combined with the antibiotic pressures discussed below, could lead to a ribosomal revolution, just as the presence of two ribosomes leads to a revolution at the root of eukaryotes.
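The essentiality p-value described earlier in this subsection is a one-sided binomial tail probability: the chance of seeing at most n essential proteins among the sixteen "bacteria-only" ribosomal proteins, given each protein is essential with that species' genome-wide ribosomal-protein essentiality rate. A minimal sketch of that calculation is shown below; the counts are placeholders, not the values taken from the Database of Essential Genes.

```python
# One-sided binomial test of whether unusually few of the 16 bacteria-specific
# ribosomal proteins are essential in a given species. Placeholder numbers only.
from scipy.stats import binom

def essentiality_pvalue(n_essential, n_tested, background_rate):
    """P(X <= n_essential) with X ~ Binomial(n_tested, background_rate)."""
    return binom.cdf(n_essential, n_tested, background_rate)

# Hypothetical example: 4 of the 16 proteins are essential in a species where
# 70% of all its ribosomal proteins were scored essential in knockout data.
print(f"p-value = {essentiality_pvalue(4, 16, 0.70):.4f}")
```

Using each species' own ribosomal-protein essentiality rate as the background is what corrects for how thorough the different knockout screens were.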
[SUBTITLE] There is a great divide in DNA replication machinery, but it can be bridged [SUBSECTION] The differences between archaeal and bacterial replication machinery are vast [4]. Leipe et al. claim this difference is so great that it is unreasonable to argue that one prokaryotic superkingdom evolved from the other. They list four key functions of DNA replication that are performed by completely non-homologous proteins in archaea and bacteria: the main polymerase's polymerization domain, the phosphatase that powers the polymerase, the gap filling polymerase, and the DNA primase. We will argue that the differences between archaea and bacteria do not imply the root of the tree of life has to be between them.\nWe must keep in mind there is some flexibility in the DNA replication machinery despite the division across the superkingdoms; consider two examples. First, many proteobacteria use a PolB family polymerase as a repair protein [38], which is almost certainly the result of HGT. Second, PolD appears to have been present in LACA, but was lost in the crenarchaea [39]. These two examples illustrate major changes in the replication machinery that occurred in DNA based genomes possessing fully functional replication systems. We are arguing that an even larger event occurred between the prokaryotic superkingdoms. This event entailed viral transfers and novel innovations, but there are several proteins whose origins can be better described by vertical inheritance from the gram-positive bacteria, which we review first.\nKoonin et al. have demonstrated that many bacterial proteins have a region that is homologous to the small subunit of the archaeo-eukaryotic primase [40]. This domain is present in DNA ligase D from M. tuberculosis, which can act as a DNA-dependent RNA polymerase [41]. The rest of the protein is homologous to the ATP-dependent DNA ligase found in archaea and eukaryotes. Therefore, DNA ligase D is perfectly preadapted to replace the primase function of DnaG. The fission of the two halves of the protein would allow for the preservation of ligase activity while developing enhanced primase activity. A recent analysis of DNA ligases revealed many transfers between archaea, bacteria and viruses [42]. This history is very complicated, so it is hard to say with certainty where the archaeal enzymes originated. The large subunit of the primase may be a true innovation since it has no detectable bacterial homologs, but the small subunit of the primase and ATP-dependent DNA ligases both could have been inherited from the gram-positive ancestors of archaea.\nThe main helicase in bacteria is DnaB, while archaea use MCM6. Relevant to this discussion is the recent biochemical analysis of a protein in a prophage element in Bacillus cereus that has domains homologous to the MCM6-AAA domain as well as the small subunit of the archaeal primase [43]. The authors found that this protein was a functional helicase but had no primase activity. The narrow distribution of this prophage element implies its insertion was probably too recent to play a role in the origin of archaea. 
However, it demonstrates that there can be a selective advantage for a DNA based genome to take novel DNA handling machinery from a virus and use it in a different context. We will come back to this point later.\nBacteria use DnaA to define the origin of replication, while archaea use Cdc6. These proteins have a homologous AAA+ ATPase domain, but have little similarity otherwise. However, the bacterial protein RuvB has the same domain combination as Cdc6. RuvB, Cdc6, and DnaA were all put in the same superfamily in a recent classification of AAA+ domains [44]. RuvB is recruited to Holliday junctions by RuvA where it forms a hexamer around the DNA [45], just like Cdc6. It is plausible that Cdc6 evolved from RuvB.\nArchaea use a protein called Hjc to resolve Holliday junctions instead of the bacterial RuvABC system. Hjc is related to the alternative bacterial system RecU [46]. The only bacteria that use RecU are the firmicutes, and they also have RuvABC. We argue below archaea are derived from within the firmicutes. It is possible that the redundancy in Holliday junction systems allowed RuvB to drift in function. The homology between RecU and Hjc could be explained by the presence of a Holliday junction resolvase in LUCA under the canonical rooting. However, if the hypothetical RNA-DNA hybrid LUCA proposed in [4] was dealing with Holliday junctions we argue it probably would also need topoisomerases at that point. However, since the distribution of topoisomerases is different across the prokaryotic superkingdoms [47,48] that would imply the ancestral topoisomerase was displaced in at least one lineage. This weakens the proposal in [4]. We feel it is more likely archaeal topoisomerases evolved from bacterial ones as Cavalier-Smith has proposed [16].\nThere are certainly large differences between archaeal and bacterial DNA replication machineries. We have demonstrated the divide between replication systems has some flexibility, and this opens the door for a replication revolution. It is possible to come up with detailed scenarios for how each of the archaeal replication proteins originated. These results are summarized in Table 2. We will elaborate on this scenario below. However, there are several archaeal replication proteins that do not appear to have any homologs in bacteria; namely histones, PolD, and the large subunit of the archaeal-eukaryal primase. These are true innovations, but there really are not that many of them; certainly not enough to make the transitions seem unreasonable in light of the polarizations presented above.\nSummary of differences in DNA replication machinery of Archaea and Bacteria.\nThe list of protein functions was compiled from box 1 in [20] and table one in [129]. Italics indicate a probable horizontal transfer to a superkingdom. There are very few proteins in archaea that are true innovations. Many of their unique replication proteins could be recruited from bacterial or viral systems. A * indicates the Superfamily database was used to predict domain assignments of PDB entries not yet classified in SCOP.\nThe proposal of two independent inventions of DNA replication has recently been challenged [49]. The authors argue that ribonucleotide reduction is thermodynamically unfavorable, so convergent evolution is highly unlikely. They note that all ribonucleotide reductases have been shown to have a monophyletic origin. 
Finally, they argue that the proteins that are universally conserved imply a high fidelity replication system in LUCA that could not have been RNA based. The hypothesis that the root must be between the superkingdoms is diminished when one combines these arguments with the scenarios we have outlined here.
[SUBTITLE] Are the Archaea Paraphyletic or Holophyletic? We're agnostic [SUBSECTION] So far we have presented several independent arguments that strongly polarize archaea as a taxon derived from within bacteria. We have demonstrated that although there are vast differences in the ribosomes and DNA replication machinery between the prokaryotic superkingdoms, none of the arguments associated with their respective proteins seems insurmountable. We will soon present a novel hypothesis to account for this; but first we must pinpoint the bacterial roots of archaea. One cannot properly reason about this rooting without first discussing whether archaea are paraphyletic (eukaryotes branch within them) or holophyletic (eukaryotes are their sisters). As there is clearly a relationship between archaea and eukaryotes it is vital to differentiate between these two scenarios to understand their origins. We will review the currently available data, and argue that for now precise agnosticism seems the best course, that is, any hypothesis on the origin of archaea must accommodate both models. That said, we lean towards holophyly and our hypothesis does as well.\nEukaryotes and Archaea are sister clades under the standard three domain model. However, James Lake proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structures [13,14]. This hypothesis never gained much support because there was little phylogenetic evidence to corroborate it. However, recent work [30] has shown that there is sequence data that implies archaea are paraphyletic and eukaryotes have a crenarchaeal-like ancestor. Conversely, another analysis done around the same time supported a deep branching archaeon as the host of mitochondria [50], which would be inconsistent with the eocyte hypothesis. They demonstrate that eukaryotes inherited both crenarchaeal and euryarchaeal specific proteins, so ancestry from either group alone is not enough to explain the eukaryotic protein repertoire. However, several deep branching archaeal genomes from korarchaeota and thaumarchaea are now available and change the context of some of these conclusions [51,52]. Both of these groups appear to contain a mix of crenarchaeal and euryarchaeal genes so the observation in [50] could be explained by a member of one of these groups being ancestral to eukaryotes.\nCavalier-Smith's hypothesis on the origin of the neomura relies on a sisterhood relationship between archaea and eukaryotes [16]. As discussed below, he mainly roots the neomura using traits actinobacteria share with eukaryotes, but not archaea. This only makes sense if archaea and eukaryotes are sisters, otherwise the traits should be present in at least some archaea. He lists eight properties that are unique and ubiquitous in archaea [16]. All of these traits strongly imply that archaea are monophyletic. However, most of them do not differentiate between whether archaea are holophyletic or paraphyletic.\nFor instance, the unique isoprenoid ether lipids found in all archaeal membranes are best explained by their presence in LACA. Eukaryotes have lipids that are more similar to those of bacteria. It would be more parsimonious for archaea and eukaryotes to be sisters with a single change in lipid structure. 
Any other scenario requires a reversion in eukaryotes back to the bacterial state. Even though this is not parsimonious, it is not out of the question because the mitochondrial ancestor would have all the necessary genes to make bacterial membranes [53]. We have to admit that does not seem unreasonable relative to the innovations we are discussing in this work. This certainly seems like a case where simple parsimony in terms of any one trait, even membrane structure, will be misleading.\nThe only one of these properties that appeared really informative in regard to this problem was the split gene for RPOA. RPOA is the only single gene tree that supported the three domain model in [30], so it is clear eukaryotes did not get this protein from the mitochondrial ancestor. Reassembling the split gene is highly improbable, so there is no reason to doubt the fused genes are monophyletic. This strongly contradicts the original eocyte hypothesis. However, novel genomic data has revealed that representatives from the deep branching phyla korarchaeota and thaumarchaeota have the non-split form of this gene [51,52]. This opens the door for a more specific version of the eocyte hypothesis where eukaryotes stem from either of these groups. Therefore, we have examined what additional data have to say about these taxa. The branch order between them has not yet been resolved, but it appears safe to assume they both branch before the split between crenarchaea and euryarchaea. This branching is supported by several phylogenetic trees as well as the non-split RPOA. This assumption will be key to our subsequent reasoning in several ways.\nIt seems impossible to come up with a scenario utilizing all the traits we discuss below that is completely parsimonious for all traits at the same time. With that in mind we have tried to reason which traits can be better explained by convergent evolution than others. When we observe convergent evolution happening at an indel site we do not consider it informative. Independent loss in any form is much easier than independent invention. Loss seems to be the rule rather than the exception in archaea. Both the thaumarchaeota and korachaeaota have traits that were thought to be specific to either euryarchaea or crenarchaea. For instance euryarchaea use FtsZ for cell division while crenarchaea use the cdvABC system. Intriguingly the thaumarchaeotal genomes have orthologs of both of these systems [54]. This implies that the crenarchaea and euryarchaea each lost one of these systems. This is not the most parsimonious solution, but it is the only one that is consistent with the apparent branch order of these taxa. Many other traits have the same distribution pattern. It is clear that groups of archaea can lose proteins of major functional importance. We will attempt to address these distributions in our hypothesis below.\nBeyond the EF-1 indel that implies paraphyly, six highly conserved indels were found to be informative in describing the relationship between archaea and eukaryotes in [50]. The authors only looked at derived insertions with well conserved sequences. The authors state that four indels argue for the holophyly of archaea. There is one indel that is shared between eukaryotes and crenarchaea, as well as one shared between the euryarchaea and eukaryotes. This implies there was a reversion in at least one lineage or a horizontal transfer.\nWe have analyzed those six indels as well as EF-1 in the context of the recently sequenced deep branching genomes (Table 3). 
Only the indels that differ between archaeal groups are useful for determining their branch order. Therefore we only created alignments that contained archaeal sequences to ensure these indels were not artifacts created by including eukaryotic and bacterial sequences. Where possible we also used structural alignments from representatives of the superkingdoms to further ensure the larger indels were real (a similar methodology as used in [11]).\nAnalysis of potentially informative gene structures in korarchaeota and thaumarchaeota.\nEach indel was analyzed by creating an alignment of archaeal sequences from BLAST searches. We consider these results to be inconclusive until thaumarchaeota and korarchaeota are sampled better.\nFirst, the reported indel shared between euryarchaea and eukaryotes in the DNA repair protein RadA appears to be an artifact. The euryarchaeal and crenarchaeal sequences align well in the indel region (Additional File 1; Figure S1). This is important because it was the only line of evidence in that work that implied a relationship between euryarchaea and eukaryotes. This new alignment, in conjunction with the split RPOA gene, implies eukaryotes either descend from within the deep branching archaea or are their sisters.\nWe also argue that the two reported indels in the alignments of Beta-glucosidase/6-phospho-beta-glucosidase/beta-galactosidase (PBG) and ribosomal protein S12 are both uninformative based off the authors' own analyses (supplemental data from [50]). The indel in ribosomal S12 is conserved across all archaea and eukaryotes, so it implies nothing about their branch order. The indel in PBG is uninformative because the authors conclude the eukaryotic version of this gene is probably of bacterial origin (supplemental data from [50]). Therefore, the state of the gene in archaea implies nothing about the branch order of these groups.\nTwo of the remaining four indels are only a single residue. The glycine insertion in SecY is present in thaumarchaeota and eukaryotes, but absent in korarchaeota. That weakly implies a relationship between eukaryotes and thaumarchaeota. However, given that the insertion is present in some of the deep branching taxa, but not in all euryarchaea, implies there was at least one secondary loss of this insertion. This is reasonable since the insertion is a single glycine residue, and will not have a dramatic effect on protein structure.\nThe single residue insertion in prolyl-tRNA aminoacyl synthetase initially implied archaea were holophyletic, however, the insert is missing in the thaumarchaeal genomes. When these genes are used to seed a BLAST [55] search they hit firmicutes more so than other archaea. This implies a possible horizontal transfer to thaumarchaeota. If so this insert could still support holophyly, but that cannot be concluded with absolute certainty.\nThis leaves us with two larger indels in EF-1 and glutamyl-tRNA amidotransferase subunit D (gatD). The seven AA insert in gatD is well conserved in the archaeal alignment. A structural alignment with a bacterial homolog reveals this indel is not an artifact caused by the sequence alignment (data not shown). The phylogenetic tree for this family (presented in the supplemental data of [50]) places archaea and eukaryotes as sisters with 100% bootstrap support. This is remarkable because the archaeal proteins have a different domain combination and quaternary structure than the eukaryotes and bacteria [56]. However, it seems that tree is too good to be true. 
We have attempted to verify the history of this indel, and found that the tree in [50] was missing a bacterial paralog. E. coli has members of two paralogous families of l-asparaginases [57], and it appears only one of them was present in the initial tree. The tree in Additional File 2; Figure S2 shows that fungi and the rest of the eukaryotes received the same domain superfamily from two distinct sources. Their sequences are mixed in with some bacteria, which implies there were some recent horizontal transfers. This tree is not well resolved, but it certainly does not support the notion eukaryotes inherited this protein from their archaeal ancestor. That, as well as the differences in domain combination and quaternary structure, implies this indel is inconclusive with regards to holophyly verses paraphyly.\nEF-1 also appears inconclusive. The insert shared between crenarchaea and eukaryotes is present in thaumarchaeota, but not korarchaeota. Our alignment revealed there are actually four different forms of indel at this site in archaea (Additional File 3; Figure S3). This implies there is some plasticity in this region in archaea. This is in contrast to the bacterial alignment which has no indels in this region. A structural alignment between a bacterial representative from E. coli and an archaeal one from Sulfolobus solfataricus reveals the conserved glycines in the sequence alignments are very close in their position in both forms of this indel (Figure 2). It is possible there were two insertions near the root of archaea that preserved the position of that residue. This indel's history does not appear to be parsimonious, which weakens it usefulness as a marker. Therefore, this indel appears to weakly support archaeal paraphyly, but we consider it inconclusive.\nStructural alignment of EF-1 and EF-Tu. The structural alignment of EF-1 (1JNYA) and EF-Tu (1EFC) in A, and the corresponding sequence alignment in B, show the potential for two independent indels in this region that confounds analysis.\nThe ribosomal proteins are the other side to this story. In a previous study, five ribosomal proteins were found in at least one crenarchaeon, but not in any of the euryarchaea (L38e, L13e, S25e, S26e and S30e) [2]. These, as well as four others that are not universal in archaea, are conserved across eukaryotes. We examined what ribosomal proteins are present in the thaumarchaeal and korarchaeal genomes (Table 4). It still appears that Lake is correct that crenarchaea have more similar ribosomal proteins to eukaryotes than any other group of archaea.\nInformative ribosomal proteins in thaumarchaeota and korarchaeota.\nThis table was constructed from [2]. The values listed were taken from searches of the Pfam website. Ribosomal proteins L20A and L30E were not well defined in Pfam so BLAST searches were performed instead. These results support the eocyte hypothesis, but it is plausible that there were independent losses of ribosomal subunits in archaea based on additional data.\nThe korarchaeota are missing three ribosomal proteins found in some crenarchaea and eukaryotes. They have five ribosomal proteins that are present across eukaryotes that are absent in thaumarchaeota. There are two ways we can interpret this trend. If archaea are paraphyletic then this distribution is best explained by the invention of ribosomal proteins after LACA. LECA could branch between the korarchaeota and crenarchaea, before the RPOA gene split. 
The alternative interpretation is that archaea are holophyletic and the archaeal ancestor had all the ribosomal proteins that are in any archaeon and at least one eukaryote. There would have to be several independent losses of each of these ribosomal proteins. Again this is not parsimonious, but there is evidence it has occurred several times so we must consider it. Again, it can be argued that if a protein is present in korarchaeota and crenarchaea, but absent in euryarchaea, it must have been lost. The archaeal ribosomal proteins are more dispensable than their counterparts in the other superkingdoms [2], so they might not be a reliable marker for rooting eukaryotes in archaea.\nFor now it seems the only reasonable stance in light of all of this evidence is agnosticism. Only when thaumarchaeota and korarchaeota are sampled better, and their positions in the archaeal tree are determined robustly, will it be possible to state with confidence whether archaea are holophyletic or paraphyletic. We might always be left trying to weigh whether reversion of ribosomal proteins or indels is the more parsimonious scenario. However, several of these traits clearly exclude the root of eukaryotes from within crenarchaea and euryarchaea. Therefore, any hypotheses on the origin of eukaryotes that invoke specific taxa within those groups can be rejected with confidence (for a discussion of the many hypotheses on this subject see [58]). However, it may be possible those scenarios could be reworked to fit thaumarchaeota or korarchaeota once they are sampled better.
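The indel calls summarized in this subsection come down to a simple question about an alignment: do the sequences from a given set of taxa carry residues, or a gap, over the columns that define the insert? A minimal sketch of such a check is given below; it is not the original analysis, and the file name, sequence identifiers and column coordinates are placeholders chosen for illustration.

```python
# Check whether selected archaeal sequences carry an insert (non-gap residues)
# over a given column window of a multiple sequence alignment. The file name,
# identifiers and coordinates below are hypothetical.
from Bio import AlignIO

def insert_status(alignment, seq_ids, start, end, min_residues=3):
    """Map sequence id -> True if it has >= min_residues non-gap characters
    in alignment columns [start, end)."""
    status = {}
    for record in alignment:
        if record.id in seq_ids:
            window = str(record.seq)[start:end]
            status[record.id] = sum(c != "-" for c in window) >= min_residues
    return status

alignment = AlignIO.read("EF1_archaea_aln.fasta", "fasta")   # hypothetical file
print(insert_status(alignment,
                    {"Korarchaeum_cryptofilum", "Nitrosopumilus_maritimus"},
                    200, 211))
```

Pairing such a column-window check with a structural superposition of bacterial and archaeal representatives, as done for EF-1/EF-Tu above, is what guards against calling an indel from a sequence-alignment artifact alone.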
[SUBTITLE] Weakening the neomuran hypothesis [SUBSECTION] Now that we have argued for the true distance between the superkingdoms we can begin to address how it could be bridged. From our discussion above we feel we must be cautious about declaring the debate closed on the holophyly of archaea. Therefore, we are more interested in traits shared between a group of bacteria and all archaea than those shared with eukaryotes. Cavalier-Smith has presented fourteen reasons why the root of the neomura is probably within or next to actinobacteria [16].
Two of these traits are shared between actinobacteria and neomura, but the other twelve are only shared between eukaryotes and actinobacteria. Under this scenario these twelve traits would have been lost in the ancestor of archaea, which implies archaea are holophyletic. We will review these fourteen traits, and argue that placing the archaeal ancestor in the bacilli makes more sense. We use the term neomura to refer to the clade of eukaryotes and archaea, but when we refer to the neomuran hypothesis we refer to Cavalier-Smith's rooting of that clade in the actinobacteria.\nThe first piece of evidence that places the neomuran root near actinobacteria is the proteasome. Actinobacterial and archaeal 20S proteasomes are well separated on phylogenetic trees, which implies the presence of the 20S proteasome across these groups is not the result of recent horizontal transfers. Recently 20S proteasomes have also been found in sequenced genomes from verrucomicrobia [59] and leptospirillum metagenomic sequences [60]. This somewhat weakens the actinobacterial argument for ancestry, as archaea could have inherited a proteasome from these other groups. However, these recent findings do not weaken the polarization argument; it just excludes the root from these additional groups.\nThe second trait apparently shared between actinobacteria and all neomura is the post-translational addition of CCA to the 3' end of tRNAs. The gene performing that function in archaea is tRNA CCA-pyrophosphorylase (protein cluster PRK13300 [61]). One of the domains, PAP/Archaeal CCA-adding enzyme, does not hit any bacteria in the Superfamily database [62]. Since the CCA addition is performed by nonhomologous enzymes this is not strong evidence for rooting neomura. There is also an analogous enzyme conserved across bacilli (protein cluster PRK13299). Even if archaea inherited this function from their bacterial ancestors, it is not clear which gram-positive group provided it.\nNow we must address the remaining dozen traits shared between actinobacteria and eukaryotes. Although there were initial reports of sterol synthesis in the actinobacteria [63,64], the latest work has found no evidence for a complete pathway [65]. The authors report that the few cases of the full pathway in bacteria (all outside the actinobacteria) are probably the result of horizontal transfer. However, they find several sterol synthesis enzymes are present in many actinobacteria. They conclude these are probably the result of a transfer from eukaryotes, but this is not supported by their trees, which show good separation between eukaryotes and actinobacteria. Several sterol enzymes appear to have been inherited vertically from actinobacteria to eukaryotes. This is certainly consistent with Cavalier-Smith's hypothesis. This is a good example of the dangers of closing the debate on the position of the root too soon. Their trees clearly support an alternative hypothesis, but that data is buried in the supplemental material without discussion of the opposing view.\nInitial reports also claimed the presence of chitin in actinobacteria [66]. However, there is no gene for chitin synthase in actinobacterial genomes. Several of them have chitinase, which breaks chitin down. Also, chitin is found in metazoa and fungi, but not in archaeplastida, which implies chitin synthase was not in LECA.\nIt is true that actinobacteria have many serine/threonine signaling systems related to cyclin-dependent kinases [67]. This would be a key preadaptation to the cell cycle.
However, it has recently been shown that Bacillus subtilis also has an extensive network of such regulation [68]. Therefore this line of evidence is consistent with either gram-positive group being ancestral to neomura.\nPhosphatidylinositol is an interesting case. Recent work on this subject confirms the presence of phosphatidylinositol synthase as well as the eukaryotic form of cardiolipin synthase in many actinobacteria [69]. These enzymes are paralogs. We could not create a quality tree for this superfamily because the alignment was of low quality. However, BLAST searches showed a good separation between prokaryotic and eukaryotic sequences, which implies this is not the result of a recent HGT. It is difficult to determine exactly what family each prokaryotic homolog belongs to, so it is hard to say with certainty what other groups of bacteria have phosphatidylinositol. It is certainly possible eukaryotes inherited phosphatidylinositol from actinobacteria.\nSome actinobacteria do have an α-amylase with similar primary structure to the form found in metazoa, but a recent comprehensive study found several other bacteria that did as well [70]. The authors concluded this was probably the result of a horizontal transfer due to their position in the phylogenetic tree as well as the extremely sparse distribution of this form in actinobacteria. Therefore, this is not evidence for actinobacterial ancestry of the neomura.\nThe fatty acid synthetase (FAS) complex found in actinobacteria is unique among bacteria in that it is the same form as found in some fungi [71]. These fungi have the FAS complex split into two genes, but actinobacteria have it fused. Our phylogenetic trees are consistent with actinobacterial ancestry (Figure 3). However, the distribution of the fungal type complex in eukaryotes does not conclusively prove that this enzyme had to be in LECA. The only group outside the Fungi with this complex is the stramenopiles. However, the animal type FAS is also present in some alveolata, so there could be some functional displacements. Actinobacteria probably played a role in the evolution of this enzyme in eukaryotes, but not necessarily via the neomuran hypothesis.\nMaximum likelihood tree of fungal type Fatty Acid Synthase (FAS) complex. This tree implies eukaryotes did not get FAS from a recent transfer, but it is also not clear whether or not it was in LECA. Circles indicate the split form of the gene. This gene is split in two different places in the fungi, as indicated by the yellow and red circles.\nThe argument that the exospore structure of actinobacteria could be a precursor to eukaryotic spore structures seems sound [72], but we are unable to locate a list of proteins involved in exospore formation. Without specific protein homologs we cannot begin to evaluate this with bioinformatics. However, this argument becomes irrelevant if one invokes a viral ancestor of the nucleus as in [73].\nCavalier-Smith has also suggested that the C-terminal HEH domain found in the Ku proteins of some actinobacteria is ancestral to the HEH domain found in the eukaryotic Ku70 protein. However, the sequence analysis in [74] conclusively demonstrates eukaryotes did not inherit the HEH domain from actinobacteria. This domain is very compact and common. Therefore, it is not out of the question that it was recruited twice to the C-terminus of similar structures.
Consequently we do not take this as evidence that eukaryotes inherited Ku from actinobacteria.\nSeveral traits initially listed as unique to actinobacteria are now found in enough other bacterial groups to be considered ambiguous markers. Actinobacteria do have tyrosine kinases, but they have recently been put into a bacterial-specific family, BY-kinase [75]. This family is present across actinobacteria, firmicutes, and proteobacteria, so it does not support an actinobacterial rooting exclusively in the neomura. Many groups of bacteria have HU (histone H1 homologs) according to the Superfamily database. This protein is relatively short, so we should not expect sequence to resolve its history. It is possible this protein was inherited from actinobacteria, but there are too many other possibilities to state that with certainty. Calmodulin-like proteins are now found in many bacteria, so this trait is not specific enough to root neomura near actinobacteria, as Cavalier-Smith now admits [8]. The Superfamily database reveals that trypsin-like serine proteases are present in many groups of bacteria, but absent in archaea. This appears to be another trait that is too general to be useful for rooting neomura.
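The recurring question in this subsection is whether a trait is specific enough to one bacterial group to be phylogenetically informative. A minimal way to quantify that is to compute, for a candidate domain, the fraction of genomes in each group that carry it. The sketch below does this from a hypothetical tab-separated table of domain assignments; the file name, its layout and the example domain are assumptions for illustration, not the actual Superfamily or Pfam queries used here.

import csv
from collections import defaultdict

# Hypothetical input: one row per (genome, group, domain) assignment, standing
# in for the Superfamily/Pfam searches referred to above.
ASSIGNMENTS_FILE = "domain_assignments.tsv"   # genome<TAB>group<TAB>domain
CANDIDATE_DOMAIN = "BY-kinase"

genomes_per_group = defaultdict(set)
carriers_per_group = defaultdict(set)

with open(ASSIGNMENTS_FILE) as handle:
    for genome, group, domain in csv.reader(handle, delimiter="\t"):
        genomes_per_group[group].add(genome)
        if domain == CANDIDATE_DOMAIN:
            carriers_per_group[group].add(genome)

# A domain found at comparable frequency in several bacterial groups is an
# ambiguous marker; one confined to a single group is potentially informative.
for group, members in sorted(genomes_per_group.items()):
    fraction = len(carriers_per_group[group]) / len(members)
    print(f"{group}\t{len(carriers_per_group[group])}/{len(members)} genomes\t{fraction:.2f}")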
[SUBTITLE] Evidence that supports a firmicute ancestry for archaea [SUBSECTION] Skophammer et al. compiled several reasons to argue archaea are derived from bacilli [12]. There is an insert in ribosomal protein S12 that is present in archaea and bacilli (and maybe chloroflexi). Skophammer et al. conclude this indel is derived, but we argue elsewhere this polarization is flawed [11]. The insertion appears well conserved between archaea and bacilli regardless of whether it is ancestral or derived.\nSkophammer et al. also note that there is a shared deletion between firmicutes and archaea in PyrD.
Our own work strengthens this connection by considering the quaternary structure of PyrD. The form that has the deletion also has an additional subunit, PyrK. The sequence and structure of the firmicute PyrD 1B are both shared by archaea. Our phylogenetic analysis of this protein implies this is not the result of recent horizontal transfers [11].\nSkophammer et al. note that many enzymes involved in the biosynthesis of unique archaeal membranes have previously been found in firmicutes [21]. The isoprenoid lipid precursors of archaeal membranes are made via the mevalonate pathway, which is five enzymes long. The KEGG database [76] reveals the entire mevalonate pathway is present in several bacilli as well as some actinobacteria (KEGG module M00191). The unique stereochemistry of archaeal membranes is determined by the enzyme geranylgeranylglyceryl phosphatase. Homologs of this enzyme are present in bacilli (protein cluster PRK04169), but appear to be absent in actinobacteria. The authors of an analysis of archaeal membrane biosynthesis propose that archaea became genetically isolated from bacteria once their membrane chemistry changed [77]. They suggest that archaea branched early from within bacteria, but their hypothesis is also consistent with a later gram-positive origin. Cavalier-Smith's own analysis [8] suggests that eukaryotic enzymes that make n-linked glycoproteins, which are necessary for the loss of peptidoglycan, evolved from the firmicute specific gene EspE. Therefore, for several reasons, the firmicutes are the bacterial group most preadapted to gain archaeal membranes.\nHomologs to ribosomal proteins L30e and L7ae are found across firmicutes. This is evidence of the link between firmicutes and archaea. Pfam [78] shows this family in several other groups, but many firmicutes contain two copies of this family. One of these paralogs has been characterized as a ribosomal protein, but neither is essential [79]. We constructed phylogenetic trees to see if they are consistent with vertical inheritance (Figure 4). There is good separation between the paralogs in firmicutes, which implies the duplication occurred early in firmicutes. All archaeal and eukaryotic genomes contain at least two copies of this family. The phylogenetic tree of the archaeal and firmicute sequences places the firmicute paralogs between the archaeal paralogs. The firmicute sequences are paraphyletic, albeit with very weak support. If these proteins are the result of independent duplications the archaeal sequences should cluster together, not appear on opposite ends of the tree. However, it is possible one of the archaeal sequences evolved rapidly after duplication.\nAlignment of L7Ae paralogs in archaea and firmicutes. This tree is consistent with a firmicute origin for two archaeal ribosomal proteins.\nOne of the paralogs in Bacillus subtilis was found to localize to a different portion of the ribosome than either of the archaeal paralogs [79]. The proteins would not only have to jump superkingdoms for a transfer to occur, they would also have to bind to a different region of the rRNA without interfering with ribosome assembly. We argue it would be less disruptive for a protein already present to gradually bind a different piece of rRNA. The separation between the superkingdoms in the phylogenetic trees also argues against HGT. If this is the result of vertical inheritance only two possibilities explain it. Either the firmicutes are ancestral to archaea, or the root lies between archaea and firmicutes. 
Our polarization of PyrD 1B's quaternary structure eliminates the latter rooting as a possibility. Thus this tree appears to support a firmicute ancestry for archaea, although it may just be the result of rapid evolution of structures in different contexts in the ribosome.\nAs discussed above, almost all the firmicute genomes have a unique Holliday junction resolvase, RecU, which is only found sparsely in other bacterial groups. It is homologous to the archaeal Holliday junction resolvase, Hjc [46]. Therefore the firmicutes have a DNA repair mechanism more similar to archaea than any other bacterial group.\nHsp90 is missing in all archaeal genomes, so its presence across eukaryotes and bacteria implies it was inherited from the mitochondrial ancestor. However, a detailed analysis of this family did not reveal a relationship between eukaryotic and proteobacterial sequences [80]. Instead, the eukaryotic sequences branch within the gram-positive bacteria. The authors argue this supports the classical neomuran hypothesis, but eukaryotes are sisters to firmicutes rather than actinobacteria in that tree (albeit with moderate support). This would slightly favor firmicute over actinobacterial ancestry. In either case it supports the view that the archaeal ancestor lost Hsp90.
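A quick way to probe the L7Ae paralog question described above is to build a tree from an alignment of the archaeal and firmicute copies and check where the firmicute paralogs fall. The sketch below is only a rough, hypothetical stand-in for the analysis reported in the text (which used maximum-likelihood trees): it assumes Biopython is installed and that a pre-computed protein alignment is available under the placeholder file name.

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

# Hypothetical input: a pre-computed protein alignment (FASTA) of archaeal and
# firmicute L7Ae-family paralogs, with the taxon and paralog encoded in each
# record name. The file name is an illustrative placeholder.
alignment = AlignIO.read("l7ae_paralogs_aligned.fasta", "fasta")

# A quick neighbour-joining tree from BLOSUM62-based distances. This is only a
# rough sanity check, not the maximum-likelihood analysis reported in the text.
calculator = DistanceCalculator("blosum62")
distance_matrix = calculator.get_distance(alignment)
tree = DistanceTreeConstructor().nj(distance_matrix)

# If the duplication predates the archaea/firmicute split, the firmicute
# paralogs should fall between, rather than outside, the two archaeal paralogs.
Phylo.draw_ascii(tree)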
[SUBTITLE] Peroxisomes: the red herring? [SUBSECTION] There are several traits present in either firmicutes or actinobacteria that argue they are ancestral to either eukaryotes or archaea. The only trait that argues actinobacteria are ancestral to the neomura is the proteasome. Several more traits make compelling arguments that actinobacteria are ancestral to eukaryotes, but certainly not the dozen traits listed in [16].
In Cavalier-Smith's most recent version of the neomuran hypothesis he concludes firmicutes contributed a significant number of genes to the neomuran ancestor [81]. He proposed neomura originated as sisters of actinobacteria, and both of these taxa are descendants of firmicutes. That proposal is dependent on his argument that actinobacteria are derived from firmicutes, which is one of the less developed ideas in [8]. We believe he is wrong in his assertion that our analysis of the indel in ribosomal S12 [11] does not support firmicute ancestry of archaea. It is only shared (and well conserved) between bacilli and archaea regardless of the polarization of that indel. Cavalier-Smith is also not aware of the arguments about L7Ae paralogs and RecU we present here for the first time. So we are left with a stronger list of reasons supporting firmicute ancestry and a weaker list for actinobacterial ancestry. However, there are still some key eukaryotic proteins that appear to have descended from actinobacteria. We will try to reconcile this apparent anomaly.\nThe peroxisome is an organelle with a single membrane, found across eukaryotes, that has various oxidative functions including the synthesis of some lipids [82]. Peroxisomes have been observed to divide independently of the rest of the cell, which initially led some to question whether they had an endosymbiotic origin [83,84]. Two recent studies both concluded that the peroxisome was likely derived from the endoplasmic reticulum [85,86], which led those initial proponents of peroxisomal endosymbiosis to abandon that idea.\nHowever, [85] found that many peroxisomal proteins likely originated in cyanobacteria, α-proteobacteria, or actinobacteria. The authors suggest that the proteobacterial genes were probably transferred from the mitochondria, which is consistent with observations that mitochondrial genes are often retargeted to other organelles [87]. However, recent work argues for an endosymbiotic origin of the peroxisome from an actinobacterium [88]. These latter authors demonstrate that at least two proteins imported into the peroxisome are of actinobacterial origin, and that the peroxisomal proteome has higher average BLAST scores to actinobacteria than to any other group of prokaryotes. They argue that the retargeting of mitochondrial proteins after their genes migrate to the host's genome is easier than de novo targeting of peroxisomal proteins. They propose this masks the true history of the peroxisome.\nThe literature proposes two scenarios to explain the origin of the peroxisome: either the peroxisome was an endosymbiont, or actinobacteria were not endosymbionts. Clearly there is a third possibility: there was an actinobacterial endosymbiont, but the peroxisome is not a descendant of that membrane. That is to say, genes of an endosymbiotic origin were targeted into the peroxisome, but historically they are foreigners there. How could this be? A primitive peroxisome derived from the endomembrane system would be beneficial because it would separate dangerous oxidative chemistry from the rest of the cell. Proteins would be targeted to the organelle with relative ease since that system would be developed through mitochondrial endosymbiosis. Genes would be copied from the actinobacterial endosymbiont to the host genome (but not necessarily lost in the actinobacterium), and then imported into the peroxisome. This would be advantageous because some of these reactions would do better in that specialized environment than in their original host.
Potentially there would be less cost involved in maintaining an organelle that already existed versus an entire endosymbiont. Once enough genes were present in the host, the actinobacterial endosymbiont would essentially be a parasite, and complete gene loss would be beneficial.\nContrast the peroxisome to organelles such as plastids and mitochondria, which retained both genomes and membranes long after they became organelles. Some have questioned why some organelles retain any genes at all [89]. These authors note that most genes retained in plastids and mitochondria are membrane-spanning proteins involved in core photosynthetic and respiratory systems. They agree with an earlier proposal that these proteins must be kept in the organelle to be able to quickly respond to, and balance, redox gradients [90]. In other words, plastids and mitochondria have retained membranes and genes because their functions are centered on membrane-based chemistry. The stripped-down endosymbionts perform these functions better than a novel organelle initially could, so they are left with a few essential genes and membranes they inherited from endosymbiosis. These genes come with a high cost because the organelles need to import the machinery to translate them as well as the machinery to replicate the genes that encode them. Therefore one can hypothesize that other endosymbionts whose functions are not as membrane-centric could be replaced by organelles that are not of endosymbiotic origin. Unfortunately, plastids and mitochondria have shaped our expectations that endosymbionts will leave both membranes and genomes behind. We believe this is an overly simplistic expectation.\nWe argue actinobacterial endosymbiosis accounts for the traits shared between eukaryotes and actinobacteria, as well as the phylogenetic trees that place actinobacteria as sisters of the peroxisomal proteins. The fact that numerous mitochondrial proteins are imported into the peroxisome is evidence this endosymbiosis occurred after mitochondrial endosymbiosis. This would reconcile the apparently conflicting signals in terms of which gram-positive group is ancestral to archaea and eukaryotes. We find this scenario more reasonable than invoking an extinct lineage of gram-positives that has all the traits listed in Table 5 and Table 6. However, if a genome is sequenced that contains the actinobacterial-specific traits as well as the firmicute-specific traits listed here we would have no need to invoke endosymbiosis. It is also possible to reconcile the canonical rooting with the traits shared by actinobacteria by invoking this endosymbiotic hypothesis.\nSummary of data used to support actinobacterial ancestry of archaea.\nMany of these traits argue for an actinobacterial role in eukaryogenesis but not the origin of archaea. This list of informative characters is taken from [16].\nSummary of data that supports bacilli ancestry for archaea.\nThe bacilli are more similar to archaea in terms of DNA repair, ribosome structure, and lipid metabolism than any other group of bacteria.
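The peroxisome argument in [88] rests partly on averaging BLAST similarity per prokaryotic group. A minimal sketch of that comparison is given below; it assumes a precomputed tabular BLAST report (outfmt 6) of peroxisomal proteins against a prokaryotic database and a separate mapping of subject sequences to taxonomic groups. Both file names are hypothetical placeholders, and the script is illustrative rather than a reproduction of the published pipeline.

import csv
from collections import defaultdict

# Hypothetical inputs: a tabular BLAST report (-outfmt 6) of peroxisomal
# proteins searched against a prokaryotic database, and a mapping of subject
# sequence identifiers to taxonomic groups.
BLAST_FILE = "peroxisome_vs_prokaryotes.blast.tsv"
GROUPS_FILE = "subject_to_group.tsv"          # subject_id<TAB>group

group_of = dict(csv.reader(open(GROUPS_FILE), delimiter="\t"))

# Keep the best (highest-bitscore) hit per peroxisomal protein within each group.
best_hits = defaultdict(dict)                  # group -> query -> bitscore
with open(BLAST_FILE) as handle:
    for row in csv.reader(handle, delimiter="\t"):
        query, subject, bitscore = row[0], row[1], float(row[11])
        group = group_of.get(subject)
        if group is None:
            continue
        if bitscore > best_hits[group].get(query, 0.0):
            best_hits[group][query] = bitscore

# Rank groups by the mean of the per-protein best scores, as a crude version of
# the "average BLAST score" comparison described above.
ranked = sorted(best_hits.items(),
                key=lambda item: -(sum(item[1].values()) / len(item[1])))
for group, scores in ranked:
    mean_score = sum(scores.values()) / len(scores)
    print(f"{group}\t{len(scores)} proteins hit\tmean best bitscore {mean_score:.1f}")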
[SUBTITLE] Viruses as the missing link between the prokaryotic superkingdoms [SUBSECTION] Now that we have argued for the true distance between archaea and bacteria, the time has come to cross that desert. As we have asserted above, this is a unique event in evolution, so we must properly set the stage.
The selective pressures associated with extreme environments and antibiotic warfare are ancient; however, they cannot cause a revolution on their own, so a significant relaxation in selective pressure is necessary. We argue that viral endosymbiosis could relax selective pressure enough to start such a revolution.\nKoonin has observed that the PolB family of polymerases is the most common DNA polymerase in viruses [91]. Koonin et al. also observed that archaeo-eukaryotic DNA primase was a hallmark viral protein [19]. This hints at some connection in DNA replication between archaea, eukaryotes, and viruses. We examined the distribution of all protein families in Pfam [78] that originated at the root of archaea and eukaryotes to see if this connection could be extended. We defined Pfam families that were present in at least 90% of archaeal genomes (46 at the time) and 90% of eukaryotic genomes (35 at the time) and in less than 50% of bacterial genomes (939 at the time) as originating at the root of archaea and eukaryotes. A 90% cutoff is strict enough to imply that the protein was present in LAECA, while a 50% cutoff is loose enough to accommodate recent horizontal transfers. Most of these Pfam families are well below the 50% cutoff in bacteria.\nBy this definition there are 74 Pfam domains that originated in LAECA; 24 of these are found in at least one viral genome (Table 7). On average each of these Pfam domains is present in 36.38 viral genomes (14.36 if one excludes PolB). As an approximate measure of the significance of this result we took 10000 random samples of 74 Pfam domains that are found in at least one cellular genome to see how often one finds 24 or more in at least one viral genome. None of the random sets had that many viral Pfam domains, which implies this set is significantly enriched in viral proteins. However, we must keep in mind that our sampling of the viral world is still highly biased (discussed in [91]) and that viral genomes evolve rapidly. Viral genomes are sampled so poorly that none had the MCM domain from Pfam, even though it is found in a prophage region of some bacilli as discussed above. Further, 18 of the remaining Pfam proteins that originate in LAECA are ribosomal, which we assume are less advantageous for viruses to encode than the DNA replication machinery (although we did find several ribosomal proteins in viruses in this set).\nPfam proteins that originated near LAECA and their distribution in the viral world.\nThe Pfam families that originated in LAECA are more common in viruses than those that originated in LBCA.\nWe can also verify whether this result is significant by looking at the set of proteins that would be present in LBCA (last bacterial common ancestor), but not LAECA, under the same definition, that is, Pfam domains present in at least 90% of bacterial genomes and less than 50% of archaeal and eukaryal genomes. There are 106 such Pfam domains and 15 of them are found in at least one viral genome (p-value 0.2457). Each of those 15 is in an average of 8.33 viral genomes. It should be noted that this is an underestimate for LBCA's content since there are so many parasitic bacteria with genomic sequences available. However, in general viruses share more Pfam domains with LAECA than with LBCA.\nKoonin proposes, based on PolB's distribution, that archaea arose from an acellular ancestor and then retained the more ancient polymerase [91].
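The Pfam census and permutation test described above can be sketched in a few lines of code. The version below is illustrative only: the input files (a long-format presence table and a list of virus-associated Pfam families) are hypothetical stand-ins for the Pfam data used here, and the 90%/50% cutoffs follow the definition given in the text.

import csv
import random
from collections import defaultdict

# Hypothetical inputs: a presence table (family, genome, superkingdom) and a
# list of Pfam families observed in at least one viral genome. File names and
# layout are placeholders; the table is assumed to cover all three superkingdoms.
PRESENCE_FILE = "pfam_presence.tsv"     # family<TAB>genome<TAB>superkingdom
VIRAL_FILE = "viral_pfam_families.txt"  # one family accession per line

family_hits = defaultdict(lambda: defaultdict(set))   # family -> kingdom -> genomes
kingdom_genomes = defaultdict(set)

with open(PRESENCE_FILE) as handle:
    for family, genome, kingdom in csv.reader(handle, delimiter="\t"):
        family_hits[family][kingdom].add(genome)
        kingdom_genomes[kingdom].add(genome)

def fraction(family, kingdom):
    return len(family_hits[family][kingdom]) / len(kingdom_genomes[kingdom])

# Families inferred to originate in LAECA: >=90% of archaeal and eukaryotic
# genomes but <50% of bacterial genomes, the definition used in the text.
laeca_families = [f for f in family_hits
                  if fraction(f, "Archaea") >= 0.9
                  and fraction(f, "Eukaryota") >= 0.9
                  and fraction(f, "Bacteria") < 0.5]

viral_families = {line.strip() for line in open(VIRAL_FILE) if line.strip()}
observed = sum(1 for f in laeca_families if f in viral_families)
print(f"{len(laeca_families)} LAECA families, {observed} seen in viruses")

# Permutation test: how often does a random draw of the same number of cellular
# families contain at least as many virus-associated families?
all_families = list(family_hits)
exceed = 0
TRIALS = 10000
for _ in range(TRIALS):
    sample = random.sample(all_families, len(laeca_families))
    if sum(1 for f in sample if f in viral_families) >= observed:
        exceed += 1
print(f"empirical p-value: {exceed / TRIALS:.4f}")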
Koonin proposes, based on PolB's distribution, that archaea arose from an acellular ancestor and then retained the more ancient polymerase [91]. We find this view hard to reconcile with the three independent arguments for the derived nature of archaea provided above. Forterre has argued that DNA originated from a viral endosymbiosis in each of the superkingdoms [17], but our data argue against that scenario for the origin of bacteria. We propose the alternative hypothesis that a viral endosymbiosis occurred in bacteria and gave rise to archaea. This virus would supply the missing link in terms of DNA replication machinery between the prokaryotic superkingdoms. We think this would have to be an endosymbiosis and not just a horizontal transfer, given the distribution and interdependencies of these systems in cellular life.
To a first approximation there are three components that define the propensity of a genome to become permanently damaged. The first is the environment. Many different extreme environments are damaging to DNA, including radiation, high temperature and desiccation [92]. Second is the size of the genome. The larger the piece of DNA, the more likely damage will occur, and the more it must be mediated. Third is the state of the active repair system. If active repair is poor, even rare damage events will eventually accumulate. Therefore, we argue that systems that are extreme in any one of these three components must routinely deal with DNA damage during replication.
Archaea, in general, fit the description of extremophile better than any other major taxon. It has been proposed that the unifying trait of all archaea is adaptation to chronic energy stress [93]. The author argues that archaea outcompete bacteria in niches that are under chronic stress. Thus archaea have become successful in dealing with environments that other superkingdoms cannot handle. The author noted that archaea do better in environments that are consistently extreme, and are outcompeted by bacteria in environments that fluctuate.
A corollary of chronic energy stress is chronic DNA damage. Many of the extremophilic environments that archaea have made home severely damage DNA. On the other hand, bacteria may face only occasional stressful situations that require DNA repair. Therefore it is disadvantageous for bacteria to have their repair systems on all the time. Conversely, archaea need to constantly repair their DNA, so it would make sense if the line between their replication and repair systems is blurred. An example of this prepare-for-the-worst strategy is the unique ability of archaeal PolB to read ahead and stall replication if a uracil is encountered [94].
In terms of large genomes, eukaryotes win hands down (see figure 1 in [95]). A polymerase is more likely to encounter damage somewhere in the replication of these large genomes than in a prokaryote with a smaller genome in a similar environment. This is supported by evidence that eukaryotes use a separate repair system during replication of the large non-transcribed regions of their genome [96,97].
What other situation besides chronic DNA stress and large genome size would put similar pressure on the DNA replication machinery? We argue, somewhat counterintuitively, that a total lack of active DNA repair systems would create a similar situation. Again it is optimal for the replicative system to expect to encounter damage. Viruses fit that description perfectly, as they are unable to actively maintain their genomes without their host.
If the repair systems were turned on more and more of the time, the main replicative system would become free to drift.
Under this scenario the ancestors of archaea could mix and match bacterial repair and replication proteins with several molecular innovations and some transfers from the viral endosymbiont. The end result could be a system that is more robust to chronic stress. The canonical rooting implies that the components of the replication machinery that are homologous, but not orthologous, were independently recruited from proteins that initially processed RNA. Under either scenario the same amount of molecular innovation is required. The question then becomes, is it easier to innovate function in an RNA-based organism or in a DNA-based organism under relaxed selective pressure? We argue that the difference cannot be quantified, as both scenarios predict exactly what we observe: some proteins are orthologs, some are homologs, and some are unrelated. Therefore the way to tell the difference between these scenarios is through independent lines of evidence. The polarizations presented above imply the bacterial repair machinery was recruited to become the replication machinery of archaea.
It is also tempting to speculate that many of the features shared between viruses and the eukaryotic nucleus described in the viral eukaryogenesis hypothesis [73,98,99] could be extended to this hypothesis. Bell notes many similarities between nuclei and viral replication factories. One can imagine the ancestry of these traits going back to LAECA, with some being lost in archaea and others not developing until the root of eukaryotes. This is only consistent with our hypothesis if archaea are holophyletic, but for now it is certainly worth considering.
[SUBTITLE] The greatest battle ever fought [SUBSECTION]
So far we have demonstrated that there is robust evidence that archaea are a derived superkingdom. We have shown the bacterial ribosome could have enough plasticity to evolve into an archaeal one. We have presented evidence that there is some link between DNA replication in archaea, eukaryotes and viruses that could be the result of endosymbiosis. Now we will try to combine these into the larger story of why a bacterium would evolve into an archaeon.
As we discussed above, we feel the greatest weakness of Gupta's invocation of antibiotics is that it does not provide sufficient evolutionary pressure to cause a revolution on the scale necessary to create the differences between the prokaryotic superkingdoms. The observations of the vast differences in DNA replication machinery, and the evidence for a viral endosymbiosis in a bacillus before LAECA, set the stage for our subsequent hypothesis.
In the traditional antibiotic battle the gram-positives are capable of evolving resistance to each other. This leads to what is commonly referred to as a Red Queen game [100]. Neither group ever really gets ahead in the long-term war, as each defensive innovation is matched by an offensive one. But that does not mean there are never winners in battles on shorter time scales. Winning a battle is not a good thing in the long run. The winners will increase in population size and consume more of an environment's resources. The corollary is that they become a better target for less dominant species to kill. If a species evolves a more resistant ribosome, it just puts more pressure on the rest of the community to hit other targets in that species.
One can imagine a firmicute deeply entrenched in such warfare endowed with the gift of a complete and novel replication system from a virus. This is supported by the distribution of viral Pfam proteins discussed above. The virosphere contains so much diversity that even rare combinations of genes would eventually end up in the same capsid at the same time, as long as they have some advantage to any virus. It would be an incredibly rare event for the virus to be just right for the bacterium to take up the entire replication system. And thus the stage is partly set for why the revolution happened but once.
The core of the DNA replication system does not appear to be as common an antibiotic target as the ribosome or RNA polymerase. A search of DrugBank revealed no antibiotics that target PolC [101]. However, there are several that target gyrase. Why the difference? Inhibition of PolC just stops a population from growing, but the damage induced by the loss of a functional gyrase invokes an SOS response and leads to cell death. There are probably natural antibiotics that target PolC, but they would not be as effective as the numerous ones that target the ribosome and RNA polymerase. Thus the introduction of PolB into the bacillus genome would not be enough to start the revolution. This is supported by the fact that many proteobacteria use PolB as a repair enzyme, the result of an HGT that did not start a revolution.
As discussed above there are no bacteria that have archaeal histones. This strongly implies they are only compatible with the archaeal-eukaryal replication machinery.
Thus we argue that viral endosymbiosis provided a relaxation in selective pressure that, in combination with pressure from antibiotics targeting gyrase, led to the innovation of histones. This is a nontrivial difference from Cavalier-Smith's hypothesis that the numerous differences between the DNA-handling machinery of bacteria and archaea are the result of histones dramatically changing the way in which this machinery could interact with DNA [16]. He argues this was an adaptation to thermophily.
However, Forterre has presented several arguments against Cavalier-Smith's scenario. He argues that the bacterial histone-like proteins that have replaced the archaeal ones in Thermoplasma acidophilum work just fine with the archaeal replication machinery [17]. He also notes that many hyperthermophilic bacteria do not use histones. At the same time, hyperthermophilic bacteria exchange many genes with archaea [102]. Therefore the standard bacterial replication machinery could probably not tolerate the invention of histones even under selective pressure from an extreme environment. Euryarchaea appear to have gained DNA gyrase via several independent horizontal transfers from bacteria [47]. The fact that several euryarchaea retain both histones and gyrase is evidence against Cavalier-Smith's idea that gyrase became totally redundant with the advent of histones. That view is weakened further given that gyrase was found to be essential in several of those genomes [103].
Since pressure from thermophily alone could not force histone innovation, we invoke the viral endosymbiont hypothesis. In other bacteria an alternative system to gyrase would not be much of an advantage, as getting rid of gyrase would just put more pressure on targets like the ribosome and peptidoglycan synthesis. However, as discussed above, the bacilli have several unique ribosomal proteins. That means they could already have adaptations and preadaptations to antibiotic warfare that make them a difficult target to hit. As discussed above they have EpsE [104], which could preadapt them for functioning without peptidoglycan. Once gyrase was no longer a useful target they could quickly lose peptidoglycan from their cell walls. The loss of these two major targets would be a huge advantage and would increase pressure on the ribosome as a target.
At this point any change to the ribosome would be highly beneficial. One can imagine a Red Queen game where neomura have a distinct advantage over gram-positives but need constant innovation in their ribosomes to maintain that advantage. The observation that many archaeal-eukaryal ribosomal proteins bind Zn would be consistent with pressure to ensure proper assembly despite the antibiotics. This is supported by the fact that bacterial hyperthermophiles, whose environment interferes with ribosomal assembly, have more Zn binding sites than most other bacteria [35].
Thus the initial neomura would have an advantage in antibiotic warfare as well as the ability to replicate DNA even in the presence of damaging pressures. Their genomes could be much larger than those of extant prokaryotes. A large, robust genome would allow neomura to be oligotrophic and to handle extreme environments. This would put them in direct competition with many bacteria in diverse environments.
Their larger genome size would allow for more gene duplication, which could lead to structural innovations like the ribosomal proteins found in neomura but not bacteria.
The strongest support for this hypothesis comes from the most studied antibiotic target site in the ribosome: the 23S rRNA between ribosomal proteins L22 and L4. L22 and L4 are conserved across the superkingdoms. They bind to the same positions on the ribosome in all three superkingdoms. There are numerous crystal structures, from both prokaryotic superkingdoms, with antibiotics bound in these sites [105,106]. These studies demonstrated that nine different antibiotics that bind strongly to this site in bacteria bind with much less affinity in archaea. A2058 (E. coli numbering) is one of the sites on the 23S rRNA directly involved in binding these drugs. A2058 is conserved across 99.4% of sequenced bacterial 23S rRNAs [107]. The site is almost universally guanine in archaea and eukaryotes. The mutation A2058G makes many bacteria macrolide-resistant [108], while the reverse mutation can make archaea macrolide-sensitive [109]. These differences in antibiotic affinity are well conserved across the divide between bacteria and neomura, and appear to be the result of intense selective pressure from antibiotics.
Even though bacteria are able to gain resistance through a similar mutation, it is probably not fixed because there is a slight decrease in fitness that can be reduced with other mutations [107]. If there were constant pressure on that site, other mutations and changes in structure could relax those costs and fix that position. That would be completely consistent with the scenario outlined here. If the divide between archaea and bacteria is primordial, it is much harder to explain this difference. Ribosomal proteins L22 and L4 must have been present in LUCA. If the ancestor of archaea was an extremophile, it should not have been in competition with enough bacteria to need the resistance implied by this mutation.
It would be tempting to speculate that this mutation is an adaptation to thermophily or some other extreme environment, which would answer this nagging issue of antibiotic pressure at the root of archaea. This can be tested by examining the position in bacterial hyperthermophiles. In both the hyperthermophiles Aquifex aeolicus and Thermotoga maritima this position is 100% conserved as adenine, as it is in their thermophilic relatives (Additional file 4; Figure S4). The thermophile Thermus thermophilus has two copies of the 23S rRNA, and usually both have adenine at that position unless they are under selective pressure from antibiotics [110]. Thus the only explanation that appears to hold water is some extreme antibiotic pressure at the root of archaea.
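The conservation figures for A2058 cited above come from the literature [107,110]; as an illustration of how a single alignment column can be checked across a set of sequences, a small sketch follows. The FASTA file name and the 0-based column index standing in for E. coli position 2058 are placeholders, and mapping E. coli numbering onto a given 23S rRNA alignment has to be done separately.

```python
from collections import Counter

def read_fasta(path):
    """Minimal FASTA reader returning {header: sequence}."""
    seqs, name, chunks = {}, None, []
    with open(path) as handle:
        for line in handle:
            line = line.rstrip()
            if line.startswith(">"):
                if name is not None:
                    seqs[name] = "".join(chunks)
                name, chunks = line[1:], []
            else:
                chunks.append(line)
    if name is not None:
        seqs[name] = "".join(chunks)
    return seqs

def column_composition(aligned_seqs, column):
    """Base composition at one (0-based) alignment column, ignoring gaps."""
    counts = Counter(seq[column].upper() for seq in aligned_seqs.values()
                     if column < len(seq) and seq[column] != "-")
    total = sum(counts.values())
    return {base: n / total for base, n in counts.items()} if total else {}

if __name__ == "__main__":
    # Placeholder inputs: an aligned 23S rRNA FASTA and the column that
    # corresponds to E. coli A2058 in that particular alignment.
    alignment = read_fasta("23S_rRNA_aligned.fasta")
    print(column_composition(alignment, column=2482))
```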
The mark of antibiotic pressure can also be seen in the proteins that would have been lost at the origin of archaea. We searched Pfam and DrugBank for antibiotic targets that are conserved across bacteria but were clearly not in LACA. Eight of these are listed in Table 8. Several of these appear to have been horizontally transferred to archaea later, such as DNA gyrase. That is consistent with the scenario under discussion, because once archaea were no longer under strong antibiotic pressure these systems would be free to become essential again. It would be interesting to look at each of these eight predicted losses and see what preadaptations and environmental conditions can make them non-essential.
Drug targets found across bacteria that were probably not in LACA.
We argue these proteins were lost in the archaeal ancestor in response to a unique antibiotic warfare scenario. Targets in italics appear to have transferred to archaea after LACA.
Examples of drug target sites with resistance in archaea.
These drugs bind target sites present in both bacteria and archaea (or eukaryotes), but with very different affinities. We argue this is a molecular fossil of the unique antibiotic war that resulted in the origin of archaea.
Why would this war end, and who would the winners be? To address this question we will invoke the two novel niches that are central to the neomuran hypothesis: phagotrophy and hyperthermophily [16]. The oligotrophic neomura with large genomes would be able to form many symbioses with prokaryotes because of their diverse metabolism. Such an environment would favor the preadaptations to phagotrophy discussed in [111]. This could lead to several endosymbiotic events in a short span of time. These would force the nucleus to become a better separator in dealing with the selective pressures proposed by several hypotheses: invasion of introns [112], differing metabolisms [113] and ribosome chimerism [114]. The successful phagotroph would eat prokaryotes, so at first it would be to the advantage of the prey to try to kill the neomura. However, that is not the optimal strategy for dealing with phagotrophs. It is much better to persist inside them and eat them from the inside out, as can be seen from the numerous bacterial taxa that have independently evolved the ability to infect eukaryotes. Once it is possible to infect the phagotrophs, killing them with antibiotics becomes counterproductive. And thus a truce (or new war) would be declared on one front of the great antibiotics war.
The early eukaryotes would outcompete and eat many of the initial neomura, but would be at a disadvantage in extreme environments as they began to rely more on their cytoskeletons and larger cell size. It would be easier for the neomura to drift into more extreme environments because of their DNA replication machinery. The proto-archaea would begin to emerge as the neomura moved into previously unoccupied niches of extremophily. The conversion of their membranes would probably be the commitment step in the process. Once they began settling into environments that are constantly extreme, they would be under pressure to streamline their genomes.
This scenario is consistent with a recent study on gene content evolution in archaea that concluded that most archaeal genomes have been streamlined from larger ancestral genomes [115]. The authors conclude that the archaeal ancestor could have had 2000 gene families, and that the extant archaeal groups were mostly created through differential loss. The authors note this repeated loss is consistent with the chronic energy stress state of the archaea described in [93], as specialization and loss are highly favorable in consistently extreme environments. The trend for euryarchaeal and crenarchaeal specific traits to both be present in the deep-branching archaea is also consistent with the idea that archaea became specialized from a more generalized genomic ancestor. The redundancy in archaeal systems, such as two replicative polymerases and two cell division systems, could be remnants of the antibiotic war.
That redundancy would become unnecessary once archaea committed to extremophily. It was noted in [2] that ribosomal protein loss is much more common in archaea than in bacteria. Our hypothesis implies that the distribution of ribosomal proteins in archaea is the result of independent losses once they were no longer under antibiotic pressure. Some of these novel proteins developed other roles in dealing with extremophily, so they have been retained. The ancestral archaeal ribosome could very well have contained all of the proteins found in any archaeal genome, which would certainly weaken that aspect of the eocyte hypothesis.
What about the neomura? They would be stuck in the middle. The eukaryotes would be eating them, and they would still be in competition with bacteria. Their only viable strategy would be constant innovation, as they would not really have a novel niche. However, the wave caused by viral endosymbiosis would not go on forever. There would be diminishing returns in terms of the resistance provided by the new innovations. Eventually the innovations would become a disadvantage, as bacteria could then release compounds that only target the new systems. For instance, aphidicolin inhibits DNA replication in archaea and eukaryotes but not bacteria by targeting their unique polymerase [116,117]. So the initial advantage the neomura had in terms of antibiotic resistance was not a stable niche. They were outcompeted from three sides, and thus we are left with a hole in the middle of the branches of the tree of life that often gets mistaken for the root. This scenario is summarized in Figure 5.
Summary of our hypothesis. A viral endosymbiosis bridges the gap in DNA machinery between the superkingdoms. That triggered an antibiotic war that resulted in the birth of eukaryotes and archaea. The antibiotic war ended when archaea became extremophiles and the eukaryotes became phagotrophs. Traits shared between eukaryotes and actinobacteria are the result of endosymbiosis; the peroxisome is not the direct descendant of an actinobacterium.
Several large indels are shared between archaea and gram-positive bacteria, and both groups have only one membrane [9]. Thus, if there is a direct relationship between the gram-positives and archaea, the root is either between them or one is derived from the other. Every piece of evidence that is polarizable implies archaea are derived from bacteria. Arguments that archaea and bacteria are so different that they both evolved from LUCA sidestep directionality altogether. The only recent work that explicitly roots the tree in archaea is that of Wong et al. [22]. Many of their arguments are based on assumptions about the nature of LUCA and assumptions about what a primitive state would look like. None of their arguments are true polarizations. To the best of our knowledge there is no single polarized argument for an archaeal rooting that is on par with the three we shall discuss that place archaea as derived.
The first of these arguments is the proteasome. Proteasomes are self-compartmentalized ATP-dependent proteases that are found in varying degrees of complexity across the tree of life. All archaea contain a 20S proteasome, which is composed of 28 subunits and is encoded by at least two genes that are clearly homologs. Therefore the 20S proteasome must be the result of duplication. Cavalier-Smith has argued that the simpler bacterial homolog HslV (heat shock locus V) could be duplicated to generate a 20S proteasome [8,16]. Loss of a subunit in the 20S proteasome would result in an open proteasome with no ATPase. Such a protein would lose the essential function of controlled degradation found in proteasomes, and does not make sense as an intermediate. It is more likely that the 20S proteasome is derived from a simpler structure. Cavalier-Smith excludes the root from archaea because all archaea contain a clearly derived protein.
However, there is a counterargument to that proposal: LUCA had HslV, and LACA (last archaeal common ancestor) is the point in the tree where HslV evolves into the 20S proteasome (Figure 1A). This would still exclude the root from the crown archaea, but it still allows for the possibility that the root is between the extinct stems of archaea and bacteria. Excluding the root from archaea will never be enough, because one can always invoke stem lineages that show up before the derived trait. This would imply the 20S proteasome present in actinobacteria is probably the result of a horizontal transfer from archaea. However, we have observed that the two proteasome genes are often in the same operon in actinobacteria, but rarely together in archaea. This weakly polarizes the direction of the horizontal transfer toward the archaea.
Two scenarios for interpreting the three polarizations. A) Under the canonical rooting, proteasome evolution would require several selective sweeps and large-scale loss. The monomer PyrD B would have evolved from one of the more complex quaternary structures, and the derived insert in EF-2 would occur after LACA. B) Under the gram-negative rooting, Anbu could be ancestral to both HslV and the 20S proteasome. PyrD could evolve via stepwise increases in structural complexity, and there is no need to invoke extinct stem archaea to explain the EF-G insert.
We believe these transitions argue for a gram-negative rooting.
However, there is stronger evidence that narrows the root to within the bacteria. Our own work argues that the Anbu proteasome (or peptidase, according to [23]) is more likely than HslV to be the 20S proteasome's direct ancestor, based on both sequence data and structure predictions [24]. This argument is much stronger than Cavalier-Smith's because HslV is widespread in the gram-positives but Anbu appears to be missing in them altogether (Figure 1B). If the divide between archaea and bacteria is the earliest split in the tree, and our hypothesis on proteasome evolution is correct, then LUCA must have had Anbu. This would mean that all extant gram-positives must have lost Anbu while the gram-negatives (which must be derived from gram-positives in this scenario) somehow retained it. One would have to invoke a selective sweep of the 20S proteasome in archaea, and of HslV in the gram-positives. It is plausible that the 20S proteasome outcompeted Anbu or HslV, since they are almost never found in the same genome. However, Anbu and HslV are found together in many genomes, which is evidence that neither totally displaces the other in terms of function. Our arguments about Anbu are based on structure prediction, but a crystal structure could experimentally verify those predictions. If we are correct it may be the smoking gun for a gram-negative rooting, but even without that there is ample evidence to support Cavalier-Smith's position. Even if HslV is the direct ancestor of the 20S proteasome, the root can still be excluded from all extant archaeal lineages.
The recent analysis of the proteins that occur in Anbu's operon [23] presented evidence that we are wrong in labeling Anbu a proteasome, because it lacks an associated ATP-dependent protein required for unfolding substrates. HslV and the 20S proteasome clearly have associated ATPases dedicated to unfolding substrates. Therefore the transition to both of them is easier from Anbu, as no ATPase would have to be lost. The origins of HslV and the 20S proteasome would both involve the recruitment of distinct ATPase subunits. Therefore we think this new work strengthens our hypothesis that Anbu is ancestral to the 20S proteasome, because no intermediate would ever lose the regulatory ATPase. If our hypothesis is correct, proteasomes would be polyphyletic if they are defined by the presence of the ATPase subunit, as suggested in [23].
The indel in EF-2 shared between archaea and eukaryotes has been polarized using EF-Tu as an outgroup [25]. Our alignment-free analysis of this indel agrees with the authors' conclusions despite there being a sequence artifact in their original alignment [11]. This polarization robustly excludes the root from within archaea, but does not narrow it to within bacteria.
In that analysis we also presented a novel structure-based argument for polarizing archaea. The quaternary structure of PyrD 1B is a heterotetramer across the firmicutes and archaea. We argue that the heterotetramer is probably derived from the homodimer PyrD 1A, based on the presence of a conserved interface. The monomeric and homodimeric versions are present in the gram-negatives and actinobacteria. PyrD 1B is found across a gram-positive group and archaea, so it would have to be present in their last common ancestor, which is LUCA under the canonical rooting. This could be explained by the presence of both PyrD 1A and 1B in LUCA.
But that scenario would require PyrD 1A to be lost in every archaeon and some firmicutes, and for there to be a reversion to the monomeric form, PyrD, across the gram-negatives and actinobacteria. PyrD 1B is probably derived, so it follows that archaea, firmicutes, and their last common ancestor are also derived.
The polarization of the indel in EF-2 excludes the root from the extant archaea. Our novel polarizations of Anbu and PyrD argue the root is within bacteria. If these arguments only excluded the root from all extant archaea, one would be left wondering why all archaea that are not clearly derived went extinct. The combination of all three arguments strongly supports the bacterial rooting of the tree. If archaea are derived, there must be some way of reconciling the major differences between them and bacteria.
Archaea cluster separately on phylogenetic trees based on ribosomal RNA [1]. This split has remained robust in many trees derived since then. We will discuss three scenarios that can explain this. The first scenario is that the ribosomal sequences are reasonably good molecular clocks. The great split seen in the tree reflects this most ancient divide in cellular life and is in accordance with the canonical rooting.
The second scenario also does not contradict the canonical rooting. It goes as follows. The ribosome in LUCA was incomplete. It did not have all the proteins found in extant archaea and bacteria, only the core that is universal between them. The addition of proteins after the split of the superkingdoms would start a quantum evolutionary event. Some sites would be free to mutate to achieve increased stability, while others would be under evolutionary pressure to maintain a strict structure-function relationship. The rate of mutation at different sites on the ribosome could vary wildly and exaggerate the true distance between the superkingdoms, even if they do represent a very ancient split.
The third scenario, which we champion here, is that the bacterial ribosome evolved into an archaeal one. Again this would be a quantum evolutionary event, and the sequences of both rRNA and ribosomal proteins would evolve rapidly. The point we are trying to make is that these three scenarios would result in exactly the same sequence tree. Hence we must look towards independent lines of reasoning to determine which of these scenarios best describes the tree branching.
We can exclude the first scenario by comparing the structure of the ribosome in archaea and bacteria. In the 50S subunit there are six ribosomal proteins that are in the same position on the rRNA, but have non-homologous structures in archaea and bacteria [26,27]. These must have changed in at least one lineage since LUCA, regardless of LUCA's nature. Therefore, we should expect that the distance between archaea and bacteria would be exaggerated due to compensatory mutations in the rRNA and ribosomal proteins.
It is certainly reasonable to object to the third scenario because it seems implausible that a ribosome would change so much between superkingdoms, yet stay so well conserved within a superkingdom. However, there are two examples where we know that ribosome structure has indeed changed significantly. Mitochondrial ribosomes have changed dramatically from their bacterial ancestors. They have lost about half their rRNA and replaced it with additional proteins [28]. The eukaryotic ribosome evolved from an archaeal one (or, technically, from some sort of proto-archaeal ribosome if the archaea are holophyletic).
There are eleven ribosomal proteins found only in the eukaryotes, nine of which are conserved across the superkingdom [2], and there is good separation on rRNA trees between eukaryotes and archaea. In the two cases where the ribosome structure has changed, we know it changed from another fully functional ribosome. Thus, why would it be out of the question for it to happen between archaea and bacteria? There are five ribosomal proteins present across the crenarchaea but absent in the euryarchaea [2]. These proteins were either lost or gained in one of these groups after they split. In either case there would be a transition between two complete ribosomes. In each of these cases we can clearly see that a ribosome can undergo dramatic changes in macromolecular structure when there is proper selective pressure (or relaxation of selective pressure).
The tree presented in [29] was constructed by concatenating 31 universal proteins. Twenty-three of these are ribosomal proteins and many more are directly involved in translation. Many taxa on the tree cluster together with high bootstrap values (greater than 80%). However, there appear to be only three connections between high-level taxa that are supported with that strength. The clustering of crenarchaea and euryarchaea is well supported, as is the clustering of eukaryotes and archaea. There is also a long, well-supported branch between the archaeal-eukaryal clade and bacteria. We doubt it is a coincidence that these splits correspond to the greatest changes in ribosomal structure on the tree. It appears the sequence tree in [29] and rRNA trees could be merely a reflection of the large changes in ribosomal structure that have occurred throughout the true tree of life. This protein set would be expected to work better as a clock within groups that have the same ribosomal proteins. Even if one uses more sophisticated tree-building techniques, such as those in [30], the major changes in the ribosome are still going to be problematic. The authors concatenated many translational proteins and the resulting tree supported the paraphyly of archaea. Eukaryotes were placed near the archaeal species with the most similar ribosomal structures. However, a single gene tree of the RNA polymerase alpha subunit (RPOA) supported holophyly in the same study. This implies some of their results are an artifact caused by structural changes in a ribosomal revolution.
The third scenario could certainly be weakened if it were found that all the ribosomal proteins are essential in bacteria and there is absolutely no way they could be tinkered with. We examined which ribosomal proteins are essential in eleven different bacterial species using the Database of Essential Genes [31]. There are sixteen ribosomal proteins that would need to be lost in the transition from a bacterium to an archaeon, as they are found across bacteria but never in archaea. None of these ribosomal proteins were found to be essential in all species, which is the first sign it is possible to lose and replace them. Four of the sixteen proteins are essential in all species except Mycobacterium tuberculosis (Table 1). Only four of these proteins are essential in M. tuberculosis, the fewest of any species in this data set.
Essentiality of ribosomal proteins.
The essentiality of proteins that would need to be lost in the transition from an extant bacterial ribosome to an archaeal one varies from species to species. M. tuberculosis appears preadapted for the losses that would be necessary in the transition to an archaeon.
To determine whether this portion of the ribosome is significantly flexible, we calculated a p-value assuming a binomial distribution. The essentiality of each subunit can be considered a success or a failure. The p-value measures the odds of seeing at most n essential subunits in a set of sixteen random ribosomal proteins. The odds of a random ribosomal protein being essential were estimated as the proportion of ribosomal proteins found to be essential in that species. This was done to eliminate experimental biases between the species sets, as some of the knockout experiments are more thorough than others. Several species had p-values under 0.05, but M. tuberculosis was by far the most significant, with a p-value of 0.0031. This implies that M. tuberculosis's ribosome is under different selective pressure than most bacteria, and that it is the ribosome in this dataset most preadapted to evolving into an archaeal ribosome.
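A minimal version of that calculation is sketched below using the binomial cumulative distribution. The per-species numbers are placeholders (the real counts come from the Database of Essential Genes and Table 1), so the printed values are illustrative only.

```python
from math import comb

def binom_cdf(n_essential, n_trials, p_essential):
    """P(X <= n_essential) for X ~ Binomial(n_trials, p_essential)."""
    return sum(comb(n_trials, k) * p_essential**k * (1 - p_essential)**(n_trials - k)
               for k in range(n_essential + 1))

# Placeholder inputs: for each species, how many of the sixteen ribosomal
# proteins absent from archaea were found essential, and the overall fraction
# of that species' ribosomal proteins that are essential.
species = {
    "M. tuberculosis": (4, 0.60),   # invented numbers, not the published data
    "E. coli":         (12, 0.75),  # invented numbers, not the published data
}

for name, (n_essential, p_essential) in species.items():
    print(name, round(binom_cdf(n_essential, 16, p_essential), 4))
```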
To determine whether this portion of the ribosome is significantly flexible, we calculated a p-value assuming a binomial distribution. The essentiality of each subunit can be considered a success or a failure. The p-value measures the odds of seeing at most n essential subunits in a set of sixteen random ribosomal proteins. The odds of a random ribosomal protein being essential were estimated as the proportion of ribosomal proteins found to be essential in that species. This was done to eliminate experimental biases between the species sets, as some of the knockout experiments are more thorough than others. Several species had p-values under 0.05, but M. tuberculosis was by far the most significant, with a p-value of 0.0031. This implies that M. tuberculosis's ribosome is under different selective pressure than most bacteria, and that it is the ribosome in this dataset most preadapted to evolve into an archaeal ribosome.

It is highly counterintuitive that nearly every universal protein could be nonessential. The difference between essential and persistent genes was discussed in [32]. The authors point out that essentiality differs in the wild and in laboratory settings. Many of the ribosomal proteins listed as nonessential are still highly deleterious to lose. But the point is they can be lost under the right circumstances. It might be our proteasome-centric view of the world, but we think the presence of the 20S proteasome in Mycobacterium could partially explain this observation. It has been proposed that the major cost of mutations and mistranslation comes from dealing with mis-folded proteins [33]. The ribosomal proteins are among the most highly translated proteins in the cell, so there is strong pressure to ensure they fold correctly. A highly advanced degradation system, like the 20S proteasome with a Pup targeting system [34], could greatly relax that selective pressure. If the initial tinkering is not lethal, one can easily imagine a scenario where compensatory mutations and structures could rapidly and significantly change the ribosome if there is proper selective pressure. We will describe such a scenario below.

It has been observed that many bacteria contain paralogs of ribosomal proteins where one form binds Zn and the other does not [35]. M. tuberculosis has duplicates of several ribosomal proteins, which could explain why some (but not all) of the ribosomal proteins are not essential in that genome. The authors note that thermophilic bacteria seem to prefer the Zn-binding forms of the ribosomal proteins, and that there are seven Zn-binding ribosomal proteins conserved across archaea and eukaryotes that are absent in bacteria. This is consistent with our ideas that major historical changes in the availability of Zn in the ocean were a significant constraint on protein structure evolution [36,37]. Bacteria vary their ribosomes to optimize for both high and low Zn conditions. One can imagine this strategy being taken to an extreme where the tweaks are not just simple displacements, but larger rearrangements. Increased availability of Zn, as the ocean became oxic, could be a factor that made toying with the ribosome favorable for the early archaea.
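Returning to the essentiality analysis above, a minimal sketch of the binomial p-value calculation is given below (Python). Only the M. tuberculosis figures (4 essential proteins out of the 16 bacteria-specific ribosomal proteins, reported p-value 0.0031) come from the text; the background fraction of essential ribosomal proteins is a placeholder, since the per-species proportions are not reproduced here.

```python
from scipy.stats import binom

def flexibility_p_value(n_essential_in_set, set_size, background_essential_fraction):
    # Probability of seeing at most n_essential_in_set essential proteins in a
    # random set of set_size proteins, if each protein is essential independently
    # at the species' background rate (the binomial model described in the text).
    return binom.cdf(n_essential_in_set, set_size, background_essential_fraction)

# M. tuberculosis: 4 of the 16 "bacteria-only" ribosomal proteins are essential.
# The background fraction below is hypothetical; the real value would be the
# proportion of all ribosomal proteins found essential in the M. tuberculosis
# knockout data set.
background = 0.6  # placeholder
print(flexibility_p_value(4, 16, background))
```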
This increased availability of Zn, combined with the antibiotic pressures discussed below, could lead to a ribosomal revolution, just as the presence of two ribosomes led to a revolution at the root of eukaryotes.

The differences between archaeal and bacterial replication machinery are vast [4]. Leipe et al. claim this difference is so great that it is unreasonable to argue that one prokaryotic superkingdom evolved from the other. They list four key functions of DNA replication that are performed by completely non-homologous proteins in archaea and bacteria: the main polymerase's polymerization domain, the phosphatase that powers the polymerase, the gap-filling polymerase, and the DNA primase. We will argue that the differences between archaea and bacteria do not imply the root of the tree of life has to be between them.

We must keep in mind there is some flexibility in the DNA replication machinery despite the division across the superkingdoms; consider two examples. First, many proteobacteria use a PolB family polymerase as a repair protein [38], which is almost certainly the result of HGT. Second, PolD appears to have been present in LACA, but was lost in the crenarchaea [39]. These two examples illustrate major changes in the replication machinery that occurred in DNA-based genomes possessing fully functional replication systems. We are arguing that an even larger event occurred between the prokaryotic superkingdoms. This event entailed viral transfers and novel innovations, but there are several proteins whose origins can be better described by vertical inheritance from the gram-positive bacteria, which we review first.

Koonin et al. have demonstrated that many bacterial proteins have a region that is homologous to the small subunit of the archaeo-eukaryotic primase [40]. This domain is present in DNA ligase D from M. tuberculosis, which can act as a DNA-dependent RNA polymerase [41]. The rest of the protein is homologous to the ATP-dependent DNA ligase found in archaea and eukaryotes. Therefore, DNA ligase D is perfectly preadapted to replace the primase function of DnaG. The fission of the two halves of the protein would allow for the preservation of ligase activity while developing enhanced primase activity. A recent analysis of DNA ligases revealed many transfers between archaea, bacteria and viruses [42]. This history is very complicated, so it is hard to say with certainty where the archaeal enzymes originated. The large subunit of the primase may be a true innovation since it has no detectable bacterial homologs, but the small subunit of the primase and the ATP-dependent DNA ligases could both have been inherited from the gram-positive ancestors of archaea.

The main helicase in bacteria is DnaB, while archaea use MCM6. Relevant to this discussion is the recent biochemical analysis of a protein in a prophage element in Bacillus cereus that has domains homologous to the MCM6-AAA domain as well as the small subunit of the archaeal primase [43]. The authors found that this protein was a functional helicase but had no primase activity. The narrow distribution of this prophage element implies its insertion was probably too recent to play a role in the origin of archaea. However, it demonstrates that there can be a selective advantage for a DNA-based genome to take novel DNA-handling machinery from a virus and use it in a different context. We will come back to this point later.

Bacteria use DnaA to define the origin of replication, while archaea use Cdc6.
These proteins have a homologous AAA+ ATPase domain, but have little similarity otherwise. However, the bacterial protein RuvB has the same domain combination as Cdc6. RuvB, Cdc6, and DnaA were all put in the same superfamily in a recent classification of AAA+ domains [44]. RuvB is recruited to Holliday junctions by RuvA, where it forms a hexamer around the DNA [45], just like Cdc6. It is plausible that Cdc6 evolved from RuvB.

Archaea use a protein called Hjc to resolve Holliday junctions instead of the bacterial RuvABC system. Hjc is related to the alternative bacterial system RecU [46]. The only bacteria that use RecU are the firmicutes, and they also have RuvABC. We argue below that archaea are derived from within the firmicutes. It is possible that the redundancy in Holliday junction systems allowed RuvB to drift in function. The homology between RecU and Hjc could be explained by the presence of a Holliday junction resolvase in LUCA under the canonical rooting. However, if the hypothetical RNA-DNA hybrid LUCA proposed in [4] was dealing with Holliday junctions, we argue it would probably also have needed topoisomerases at that point. Since the distribution of topoisomerases differs across the prokaryotic superkingdoms [47,48], that would imply the ancestral topoisomerase was displaced in at least one lineage. This weakens the proposal in [4]. We feel it is more likely archaeal topoisomerases evolved from bacterial ones, as Cavalier-Smith has proposed [16].

There are certainly large differences between archaeal and bacterial DNA replication machineries. We have demonstrated that the divide between replication systems has some flexibility, and this opens the door for a replication revolution. It is possible to come up with detailed scenarios for how each of the archaeal replication proteins originated. These results are summarized in Table 2. We will elaborate on this scenario below. However, there are several archaeal replication proteins that do not appear to have any homologs in bacteria, namely histones, PolD, and the large subunit of the archaeal-eukaryal primase. These are true innovations, but there really are not that many of them; certainly not enough to make the transitions seem unreasonable in light of the polarizations presented above.

Summary of differences in DNA replication machinery of Archaea and Bacteria.

The list of protein functions was compiled from box 1 in [20] and table 1 in [129]. Italics indicate a probable horizontal transfer to a superkingdom. There are very few proteins in archaea that are true innovations. Many of their unique replication proteins could be recruited from bacterial or viral systems. A * indicates the Superfamily database was used to predict domain assignments of PDB entries not yet classified in SCOP.

The proposal of two independent inventions of DNA replication has recently been challenged [49]. The authors argue that ribonucleotide reduction is thermodynamically unfavorable, so convergent evolution is highly unlikely. They note that all ribonucleotide reductases have been shown to have a monophyletic origin. Finally, they argue that the proteins that are universally conserved imply a high-fidelity replication system in LUCA that could not have been RNA based. The hypothesis that the root must be between the superkingdoms is diminished when one combines these arguments with the scenarios we have outlined here.

So far we have presented several independent arguments that strongly polarize archaea as a taxon derived from within bacteria.
We have demonstrated that although there are vast differences between the ribosomes and DNA replication machinery of the prokaryotic superkingdoms, none of the arguments associated with their respective proteins seems insurmountable. We will soon present a novel hypothesis to account for this, but first we must pinpoint the bacterial roots of archaea. One cannot properly reason about this rooting without first discussing whether archaea are paraphyletic (eukaryotes branch within them) or holophyletic (eukaryotes are their sisters). As there is clearly a relationship between archaea and eukaryotes, it is vital to differentiate between these two scenarios to understand their origins. We will review the currently available data and argue that for now precise agnosticism seems the best course; that is, any hypothesis on the origin of archaea must accommodate both models. That said, we lean towards holophyly, and our hypothesis does as well.

Eukaryotes and Archaea are sister clades under the standard three-domain model. However, James Lake proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structures [13,14]. This hypothesis never gained much support because there was little phylogenetic evidence to corroborate it. However, recent work [30] has shown that there is sequence data that implies archaea are paraphyletic and eukaryotes have a crenarchaeal-like ancestor. Conversely, another analysis done around the same time supported a deep branching archaeon as the host of mitochondria [50], which would be inconsistent with the eocyte hypothesis. They demonstrate that eukaryotes inherited both crenarchaeal- and euryarchaeal-specific proteins, so ancestry from either group alone is not enough to explain the eukaryotic protein repertoire. However, several deep branching archaeal genomes from korarchaeota and thaumarchaeota are now available and change the context of some of these conclusions [51,52]. Both of these groups appear to contain a mix of crenarchaeal and euryarchaeal genes, so the observation in [50] could be explained by a member of one of these groups being ancestral to eukaryotes.

Cavalier-Smith's hypothesis on the origin of the neomura relies on a sisterhood relationship between archaea and eukaryotes [16]. As discussed below, he mainly roots the neomura using traits actinobacteria share with eukaryotes, but not archaea. This only makes sense if archaea and eukaryotes are sisters; otherwise the traits should be present in at least some archaea. He lists eight properties that are unique and ubiquitous in archaea [16]. All of these traits strongly imply that archaea are monophyletic. However, most of them do not differentiate between whether archaea are holophyletic or paraphyletic.

For instance, the unique isoprenoid ether lipids found in all archaeal membranes are best explained by their presence in LACA. Eukaryotes have lipids that are more similar to those of bacteria. It would be more parsimonious for archaea and eukaryotes to be sisters with a single change in lipid structure. Any other scenario requires a reversion in eukaryotes back to the bacterial state. Even though this is not parsimonious, it is not out of the question because the mitochondrial ancestor would have had all the necessary genes to make bacterial membranes [53]. We have to admit that does not seem unreasonable relative to the innovations we are discussing in this work.
This certainly seems like a case where simple parsimony in terms of any one trait, even membrane structure, will be misleading.

The only one of these properties that appeared really informative with regard to this problem was the split gene for RPOA. RPOA is the only single-gene tree that supported the three-domain model in [30], so it is clear eukaryotes did not get this protein from the mitochondrial ancestor. Reassembling the split gene is highly improbable, so there is no reason to doubt the fused genes are monophyletic. This strongly contradicts the original eocyte hypothesis. However, novel genomic data have revealed that representatives from the deep branching phyla korarchaeota and thaumarchaeota have the non-split form of this gene [51,52]. This opens the door for a more specific version of the eocyte hypothesis in which eukaryotes stem from either of these groups. Therefore, we have examined what additional data have to say about these taxa. The branch order between them has not yet been resolved, but it appears safe to assume they both branch before the split between crenarchaea and euryarchaea. This branching is supported by several phylogenetic trees as well as the non-split RPOA. This assumption will be key to our subsequent reasoning in several ways.

It seems impossible to come up with a scenario utilizing all the traits we discuss below that is completely parsimonious for all traits at the same time. With that in mind we have tried to reason which traits can be better explained by convergent evolution than others. When we observe convergent evolution happening at an indel site we do not consider it informative. Independent loss in any form is much easier than independent invention. Loss seems to be the rule rather than the exception in archaea. Both the thaumarchaeota and korarchaeota have traits that were thought to be specific to either euryarchaea or crenarchaea. For instance, euryarchaea use FtsZ for cell division while crenarchaea use the cdvABC system. Intriguingly, the thaumarchaeotal genomes have orthologs of both of these systems [54]. This implies that the crenarchaea and euryarchaea each lost one of these systems. This is not the most parsimonious solution, but it is the only one that is consistent with the apparent branch order of these taxa. Many other traits have the same distribution pattern. It is clear that groups of archaea can lose proteins of major functional importance. We will attempt to address these distributions in our hypothesis below.

Beyond the EF-1 indel that implies paraphyly, six highly conserved indels were found to be informative in describing the relationship between archaea and eukaryotes in [50]. The authors only looked at derived insertions with well-conserved sequences. They state that four indels argue for the holophyly of archaea. There is one indel that is shared between eukaryotes and crenarchaea, as well as one shared between the euryarchaea and eukaryotes. This implies there was a reversion in at least one lineage or a horizontal transfer.

We have analyzed those six indels as well as EF-1 in the context of the recently sequenced deep branching genomes (Table 3). Only the indels that differ between archaeal groups are useful for determining their branch order. Therefore we only created alignments that contained archaeal sequences, to ensure these indels were not artifacts created by including eukaryotic and bacterial sequences.
Where possible we also used structural alignments from representatives of the superkingdoms to further ensure the larger indels were real (a methodology similar to that used in [11]).

Analysis of potentially informative gene structures in korarchaeota and thaumarchaeota.

Each indel was analyzed by creating an alignment of archaeal sequences from BLAST searches. We consider these results to be inconclusive until thaumarchaeota and korarchaeota are sampled better.

First, the reported indel shared between euryarchaea and eukaryotes in the DNA repair protein RadA appears to be an artifact. The euryarchaeal and crenarchaeal sequences align well in the indel region (Additional File 1; Figure S1). This is important because it was the only line of evidence in that work that implied a relationship between euryarchaea and eukaryotes. This new alignment, in conjunction with the split RPOA gene, implies eukaryotes either descend from within the deep branching archaea or are their sisters.

We also argue that the two reported indels in the alignments of beta-glucosidase/6-phospho-beta-glucosidase/beta-galactosidase (PBG) and ribosomal protein S12 are both uninformative based on the authors' own analyses (supplemental data from [50]). The indel in ribosomal S12 is conserved across all archaea and eukaryotes, so it implies nothing about their branch order. The indel in PBG is uninformative because the authors conclude the eukaryotic version of this gene is probably of bacterial origin (supplemental data from [50]). Therefore, the state of the gene in archaea implies nothing about the branch order of these groups.

Two of the remaining four indels are only a single residue. The glycine insertion in SecY is present in thaumarchaeota and eukaryotes, but absent in korarchaeota. That weakly implies a relationship between eukaryotes and thaumarchaeota. However, the fact that the insertion is present in some of the deep branching taxa, but not in all euryarchaea, implies there was at least one secondary loss of this insertion. This is reasonable since the insertion is a single glycine residue and will not have a dramatic effect on protein structure.

The single-residue insertion in prolyl-tRNA synthetase initially implied archaea were holophyletic; however, the insert is missing in the thaumarchaeal genomes. When these genes are used to seed a BLAST [55] search they hit firmicutes more so than other archaea. This implies a possible horizontal transfer to thaumarchaeota. If so, this insert could still support holophyly, but that cannot be concluded with absolute certainty.

This leaves us with two larger indels, in EF-1 and glutamyl-tRNA amidotransferase subunit D (gatD). The seven-residue insert in gatD is well conserved in the archaeal alignment. A structural alignment with a bacterial homolog reveals this indel is not an artifact caused by the sequence alignment (data not shown). The phylogenetic tree for this family (presented in the supplemental data of [50]) places archaea and eukaryotes as sisters with 100% bootstrap support. This is remarkable because the archaeal proteins have a different domain combination and quaternary structure than the eukaryotic and bacterial ones [56]. However, it seems that tree is too good to be true. We have attempted to verify the history of this indel, and found that the tree in [50] was missing a bacterial paralog. E. coli has members of two paralogous families of L-asparaginases [57], and it appears only one of them was present in the initial tree.
The tree in Additional File 2; Figure S2 shows that fungi and the rest of the eukaryotes received the same domain superfamily from two distinct sources. Their sequences are mixed in with some bacteria, which implies there were some recent horizontal transfers. This tree is not well resolved, but it certainly does not support the notion that eukaryotes inherited this protein from their archaeal ancestor. That, as well as the differences in domain combination and quaternary structure, implies this indel is inconclusive with regard to holophyly versus paraphyly.

EF-1 also appears inconclusive. The insert shared between crenarchaea and eukaryotes is present in thaumarchaeota, but not korarchaeota. Our alignment revealed there are actually four different forms of indel at this site in archaea (Additional File 3; Figure S3). This implies there is some plasticity in this region in archaea. This is in contrast to the bacterial alignment, which has no indels in this region. A structural alignment between a bacterial representative from E. coli and an archaeal one from Sulfolobus solfataricus reveals that the glycines conserved in the sequence alignments occupy very similar positions in both forms of this indel (Figure 2). It is possible there were two insertions near the root of archaea that preserved the position of that residue. This indel's history does not appear to be parsimonious, which weakens its usefulness as a marker. Therefore, this indel appears to weakly support archaeal paraphyly, but we consider it inconclusive.

Structural alignment of EF-1 and EF-Tu. The structural alignment of EF-1 (1JNYA) and EF-Tu (1EFC) in A, and the corresponding sequence alignment in B, show the potential for two independent indels in this region that confounds analysis.

The ribosomal proteins are the other side to this story. In a previous study, five ribosomal proteins were found in at least one crenarchaeon, but not in any of the euryarchaea (L38e, L13e, S25e, S26e and S30e) [2]. These, as well as four others that are not universal in archaea, are conserved across eukaryotes. We examined which ribosomal proteins are present in the thaumarchaeal and korarchaeal genomes (Table 4). It still appears that Lake is correct that crenarchaea have ribosomal proteins more similar to those of eukaryotes than any other group of archaea does.

Informative ribosomal proteins in thaumarchaeota and korarchaeota.

This table was constructed from [2]. The values listed were taken from searches of the Pfam website. Ribosomal proteins L20A and L30E were not well defined in Pfam, so BLAST searches were performed instead. These results support the eocyte hypothesis, but based on additional data it is plausible that there were independent losses of ribosomal subunits in archaea.

The korarchaeota are missing three ribosomal proteins found in some crenarchaea and eukaryotes. They have five ribosomal proteins that are present across eukaryotes but absent in thaumarchaeota. There are two ways we can interpret this trend. If archaea are paraphyletic then this distribution is best explained by the invention of ribosomal proteins after LACA. LECA could branch between the korarchaeota and crenarchaea, before the RPOA gene split. The alternative interpretation is that archaea are holophyletic and the archaeal ancestor had all the ribosomal proteins that are in any archaeon and at least one eukaryote. There would have to be several independent losses of each of these ribosomal proteins.
Again, this is not parsimonious, but there is evidence it has occurred several times, so we must consider it. Likewise, it can be argued that if a protein is present in korarchaeota and crenarchaea, but absent in euryarchaea, it must have been lost. The archaeal ribosomal proteins are more dispensable than their counterparts in the other superkingdoms [2], so they might not be a reliable marker for rooting eukaryotes in archaea.

For now it seems the only reasonable stance in light of all of this evidence is agnosticism. Only when thaumarchaeota and korarchaeota are sampled better, and their positions in the archaeal tree are determined robustly, will it be possible to state with confidence whether archaea are holophyletic or paraphyletic. We might always be left trying to weigh whether reversion of ribosomal proteins or indels is the more parsimonious scenario. However, several of these traits clearly exclude the root of eukaryotes from within crenarchaea and euryarchaea. Therefore, any hypothesis on the origin of eukaryotes that invokes specific taxa within those groups can be rejected with confidence (for a discussion of the many hypotheses on this subject see [58]). However, it may be possible to rework those scenarios to fit thaumarchaeota or korarchaeota once they are sampled better.

Now that we have argued for the true distance between the superkingdoms, we can begin to address how it could be bridged. From our discussion above we feel we must be cautious about declaring the debate on the holophyly of archaea closed. Therefore, we are more interested in traits shared between a group of bacteria and all archaea than in those shared with eukaryotes. Cavalier-Smith has presented fourteen reasons why the root of the neomura is probably within or next to actinobacteria [16]. Two of these traits are shared between actinobacteria and neomura, but the other twelve are only shared between eukaryotes and actinobacteria. Under this scenario these twelve traits would be lost in the ancestor of archaea, which implies archaea are holophyletic. We will review these fourteen traits and argue that placing the archaeal ancestor in the bacilli makes more sense. We use the term neomura to refer to the clade of eukaryotes and archaea, but when we refer to the neomuran hypothesis we refer to Cavalier-Smith's rooting of that clade in the actinobacteria.

The first piece of evidence that places the neomuran root near actinobacteria is the proteasome. Actinobacterial and archaeal 20S proteasomes are well separated on phylogenetic trees, which implies the presence of the 20S proteasome across these groups is not the result of recent horizontal transfers. Recently, 20S proteasomes have also been found in sequenced genomes from verrucomicrobia [59] and in leptospirillum metagenomic sequences [60]. This somewhat weakens the actinobacterial argument for ancestry, as archaea could have inherited a proteasome from these other groups. However, these recent findings do not weaken the polarization argument; they just exclude the root from these additional groups.

The second trait apparently shared between actinobacteria and all neomura is the post-translational addition of CCA to the 3' end of tRNAs. The gene performing that function in archaea is tRNA CCA-pyrophosphorylase (protein cluster PRK13300 [61]). One of the domains, PAP/Archaeal CCA-adding enzyme, does not hit any bacteria in the Superfamily database [62].
Since the CCA addition is performed by nonhomologous enzymes, this is not strong evidence for rooting neomura. There is also an analogous enzyme conserved across bacilli (protein cluster PRK13299). Even if archaea inherited this function from their bacterial ancestors, it is not clear which gram-positive group provided it.

Now we must address the remaining dozen traits shared between actinobacteria and eukaryotes. Although there were initial reports of sterol synthesis in the actinobacteria [63,64], the latest work has found no evidence for a complete pathway [65]. The authors report that the few cases of the full pathway in bacteria (all outside the actinobacteria) are probably the result of horizontal transfer. However, they find that several sterol synthesis enzymes are present in many actinobacteria. They conclude these are probably the result of a transfer from eukaryotes, but this is not supported by their trees, which show good separation between eukaryotes and actinobacteria. Several sterol enzymes appear to have been inherited vertically from actinobacteria to eukaryotes. This is certainly consistent with Cavalier-Smith's hypothesis. This is a good example of the dangers of closing the debate on the position of the root too soon. Their trees clearly support an alternative hypothesis, but that data is buried in the supplemental material without discussion of the opposing view.

Initial reports also claimed the presence of chitin in actinobacteria [66]. However, there is no gene for chitin synthase in actinobacterial genomes. Several of them have chitinase, which breaks chitin down. Also, chitin is found in metazoa and fungi, but not in archaeplastida, which implies this enzyme was not in LECA.

It is true that actinobacteria have many serine/threonine signaling systems related to cyclin-dependent kinases [67]. This would be a key preadaptation for the cell cycle. However, it has recently been shown that Bacillus subtilis also has an extensive network of such regulation [68]. Therefore this line of evidence is consistent with either gram-positive group being ancestral to neomura.

Phosphatidylinositol is an interesting case. Recent work on this subject confirms the presence of phosphatidylinositol synthase as well as the eukaryotic form of cardiolipin synthase in many actinobacteria [69]. These enzymes are paralogs. We could not create a quality tree for this superfamily because the alignment was of low quality. However, BLAST searches showed a good separation between prokaryotic and eukaryotic sequences, which implies this is not the result of a recent HGT. It is difficult to determine exactly what family each prokaryotic homolog belongs to, so it is hard to say with certainty which other groups of bacteria have phosphatidylinositol. It is certainly possible eukaryotes inherited phosphatidylinositol from actinobacteria.

Some actinobacteria do have an α-amylase with primary structure similar to the form found in metazoa, but a recent comprehensive study found several other bacteria that do as well [70]. The authors concluded this was probably the result of a horizontal transfer, due to their position in the phylogenetic tree as well as the extremely sparse distribution of this form in actinobacteria. Therefore, this is not evidence for actinobacterial ancestry of the neomura.

The fatty acid synthetase (FAS) complex found in actinobacteria is unique among bacteria in that it is the same form as found in some fungi [71].
These fungi have the FAS complex split into two genes, but actinobacteria have it fused. Our phylogenetic trees are consistent with actinobacterial ancestry (Figure 3). However, the distribution of the fungal-type complex in eukaryotes does not conclusively prove that this enzyme had to be in LECA. The only group outside the Fungi with this complex is the stramenopiles. However, the animal-type FAS is also present in some alveolata, so there could be some functional displacements. Actinobacteria probably played a role in the evolution of this enzyme in eukaryotes, but not necessarily via the neomuran hypothesis.

Maximum likelihood tree of the fungal-type Fatty Acid Synthase (FAS) complex. This tree implies eukaryotes did not get FAS from a recent transfer, but it is also not clear whether or not it was in LECA. Circles indicate the split form of the gene. This gene is split in two different places in the fungi, indicated by the yellow and red circles.

The argument that the exospore structure of actinobacteria could be a precursor to eukaryotic spore structures seems sound [72], but we are unable to locate a list of proteins involved in exospore formation. Without specific protein homologs we cannot begin to evaluate this with bioinformatics. However, this argument becomes irrelevant if one invokes a viral ancestor of the nucleus as in [73].

Cavalier-Smith has also suggested that the C-terminal HEH domain found in the Ku proteins of some actinobacteria is ancestral to the HEH domain found in the eukaryotic Ku70 protein. However, the sequence analysis in [74] conclusively demonstrates eukaryotes did not inherit the HEH domain from actinobacteria. This domain is very compact and common. Therefore, it is not out of the question that it was recruited twice to the C-terminus of similar structures. Consequently we do not take this as evidence that eukaryotes inherited Ku from actinobacteria.

Several traits initially listed as unique to actinobacteria are now found in enough other bacterial groups to be considered ambiguous markers. Actinobacteria do have tyrosine kinases, but these have recently been placed in a bacteria-specific family, BY-kinase [75]. This family is present across actinobacteria, firmicutes, and proteobacteria, so it does not specifically support an actinobacterial rooting of the neomura. Many groups of bacteria have HU (histone H1 homologs) according to the Superfamily database. This protein is relatively short, so we should not expect sequence to resolve its history. It is possible this protein was inherited from actinobacteria, but there are too many other possibilities to state that with certainty. Calmodulin-like proteins are now found in many bacteria, so this trait is not specific enough to root neomura near actinobacteria, as Cavalier-Smith now admits [8]. The Superfamily database reveals that trypsin-like serine proteases are present in many groups of bacteria, but absent in archaea. This appears to be another trait that is too general to be useful for rooting neomura.

Skophammer et al. compiled several reasons to argue archaea are derived from bacilli [12]. There is an insert in ribosomal protein S12 that is present in archaea and bacilli (and maybe chloroflexi). Skophammer et al. conclude this indel is derived, but we argue elsewhere this polarization is flawed [11]. The insertion appears well conserved between archaea and bacilli regardless of whether it is ancestral or derived.
Skophammer et al. also note that there is a shared deletion between firmicutes and archaea in PyrD. Our own work strengthens this connection by considering the quaternary structure of PyrD. The form that has the deletion also has an additional subunit, PyrK. The sequence and structure of the firmicute PyrD 1B are both shared by archaea. Our phylogenetic analysis of this protein implies this is not the result of recent horizontal transfers [11].

Skophammer et al. note that many enzymes involved in the biosynthesis of the unique archaeal membranes have previously been found in firmicutes [21]. The isoprenoid lipid precursors of archaeal membranes are made via the mevalonate pathway, which is five enzymes long. The KEGG database [76] reveals the entire mevalonate pathway is present in several bacilli as well as some actinobacteria (KEGG module M00191). The unique stereochemistry of archaeal membranes is determined by the enzyme geranylgeranylglyceryl phosphate synthase. Homologs of this enzyme are present in bacilli (protein cluster PRK04169), but appear to be absent in actinobacteria. The authors of an analysis of archaeal membrane biosynthesis propose that archaea became genetically isolated from bacteria once their membrane chemistry changed [77]. They suggest that archaea branched early from within bacteria, but their hypothesis is also consistent with a later gram-positive origin. Cavalier-Smith's own analysis [8] suggests that the eukaryotic enzymes that make N-linked glycoproteins, which are necessary for the loss of peptidoglycan, evolved from the firmicute-specific gene EpsE. Therefore, for several reasons, the firmicutes are the bacterial group most preadapted to gain archaeal membranes.

Homologs of ribosomal proteins L30e and L7Ae are found across firmicutes. This is evidence of the link between firmicutes and archaea. Pfam [78] shows this family in several other groups, but many firmicutes contain two copies of this family. One of these paralogs has been characterized as a ribosomal protein, but neither is essential [79]. We constructed phylogenetic trees to see if they are consistent with vertical inheritance (Figure 4). There is good separation between the paralogs in firmicutes, which implies the duplication occurred early in firmicutes. All archaeal and eukaryotic genomes contain at least two copies of this family. The phylogenetic tree of the archaeal and firmicute sequences places the firmicute paralogs between the archaeal paralogs. The firmicute sequences are paraphyletic, albeit with very weak support. If these proteins were the result of independent duplications, the archaeal sequences should cluster together, not appear on opposite ends of the tree. However, it is possible one of the archaeal sequences evolved rapidly after duplication.

Alignment of L7Ae paralogs in archaea and firmicutes. This tree is consistent with a firmicute origin for two archaeal ribosomal proteins.

One of the paralogs in Bacillus subtilis was found to localize to a different portion of the ribosome than either of the archaeal paralogs [79]. The proteins would not only have to jump superkingdoms for a transfer to occur, they would also have to bind to a different region of the rRNA without interfering with ribosome assembly. We argue it would be less disruptive for a protein already present to gradually bind a different piece of rRNA. The separation between the superkingdoms in the phylogenetic trees also argues against HGT. If this is the result of vertical inheritance, only two possibilities explain it.
Either the firmicutes are ancestral to archaea, or the root lies between archaea and firmicutes. Our polarization of PyrD 1B's quaternary structure eliminates the latter rooting as a possibility. Thus this tree appears to support a firmicute ancestry for archaea, although it may just be the result of rapid evolution of structures in different contexts in the ribosome.

As discussed above, almost all the firmicute genomes have a unique Holliday junction resolvase, RecU, which is found only sparsely in other bacterial groups. It is homologous to the archaeal Holliday junction resolvase, Hjc [46]. Therefore the firmicutes have a DNA repair mechanism more similar to that of archaea than any other bacterial group does.

Hsp90 is missing in all archaeal genomes, so its presence across eukaryotes and bacteria implies it was inherited from the mitochondrial ancestor. However, a detailed analysis of this family did not reveal a relationship between eukaryotic and proteobacterial sequences [80]. Instead, the eukaryotic sequences branch within the gram-positive bacteria. The authors argue this supports the classical neomuran hypothesis, but eukaryotes are sisters to firmicutes rather than actinobacteria in that tree (albeit with moderate support). This would slightly favor firmicute over actinobacterial ancestry. In either case it supports the view that the archaeal ancestor lost Hsp90.

There are several traits present in either firmicutes or actinobacteria that argue they are ancestral to either eukaryotes or archaea. The only trait that argues actinobacteria are ancestral to the neomura is the proteasome. Several more traits make compelling arguments that actinobacteria are ancestral to eukaryotes, but certainly not the dozen traits listed in [16]. In Cavalier-Smith's most recent version of the neomuran hypothesis he concludes firmicutes contributed a significant number of genes to the neomuran ancestor [81]. He proposed that neomura originated as sisters of actinobacteria, and that both of these taxa are descendants of firmicutes. That proposal is dependent on his argument that actinobacteria are derived from firmicutes, which is one of the less developed ideas in [8]. We believe he is wrong in his assertion that our analysis of the indel in ribosomal S12 [11] does not support firmicute ancestry of archaea. The indel is shared (and well conserved) only between bacilli and archaea, regardless of its polarization. Cavalier-Smith was also not aware of the arguments about L7Ae paralogs and RecU that we present here for the first time. So we are left with a stronger list of reasons supporting firmicute ancestry and a weaker list for actinobacterial ancestry. However, there are still some key eukaryotic proteins that appear to have descended from actinobacteria. We will try to reconcile this apparent anomaly.

The peroxisome is an organelle with a single membrane, found across eukaryotes, that has various oxidative functions including the synthesis of some lipids [82]. Peroxisomes have been observed to divide independently of the rest of the cell, which initially led some to question whether they had an endosymbiotic origin [83,84]. Two recent studies both concluded that the peroxisome was likely derived from the endoplasmic reticulum [85,86], which led those initial proponents of peroxisomal endosymbiosis to abandon that idea.

However, [85] found that many peroxisomal proteins likely originated in cyanobacteria, α-proteobacteria, or actinobacteria.
The authors suggest that the proteobacterial genes were probably transferred from the mitochondria, which is consistent with observations that mitochondrial genes are often retargeted to other organelles [87]. However, recent work argues for an endosymbiotic origin of the peroxisome from an actinobacterium [88]. These latter authors demonstrate that at least two proteins imported into the peroxisome are of actinobacterial origin, and that the peroxisomal proteome has higher average BLAST scores to actinobacteria than to any other group of prokaryotes. They argue that the retargeting of mitochondrial proteins after their genes migrate to the host's genome is easier than de novo targeting of peroxisomal proteins. They propose this masks the true history of the peroxisome.

The literature proposes two scenarios to explain the origin of the peroxisome: either the peroxisome was an endosymbiont, or actinobacteria were not endosymbionts. Clearly there is a third possibility: there was an actinobacterial endosymbiont, but the peroxisome is not a descendant of that membrane. That is to say, genes of endosymbiotic origin were targeted into the peroxisome, but historically they are foreigners there. How could this be? A primitive peroxisome derived from the endomembrane system would be beneficial because it would separate dangerous oxidative chemistry from the rest of the cell. Proteins would be targeted to the organelle with relative ease, since that targeting system would already have developed through mitochondrial endosymbiosis. Genes would be copied from the actinobacterial endosymbiont to the host genome (but not necessarily lost in the actinobacterium), and their products then imported into the peroxisome. This would be advantageous because some of these reactions would do better in that specialized environment than in their original host. Potentially there would be less cost involved in maintaining an organelle that already existed versus an entire endosymbiont. Once enough genes were present in the host, the actinobacterial endosymbiont would essentially be a parasite, and complete gene loss would be beneficial.

Contrast the peroxisome with organelles such as plastids and mitochondria, which retained both genomes and membranes long after they became organelles. Some have questioned why some organelles retain any genes at all [89]. These authors note that most genes retained in plastids and mitochondria encode membrane-spanning proteins involved in core photosynthetic and respiratory systems. They agree with an earlier proposal that these proteins must be kept in the organelle to be able to quickly respond to, and balance, redox gradients [90]. In other words, plastids and mitochondria have retained membranes and genes because their functions are centered on membrane-based chemistry. The stripped-down endosymbionts perform these functions better than a novel organelle initially could, so they are left with a few essential genes and the membranes they inherited from endosymbiosis. These genes come with a high cost, because the organelles need to import the machinery to translate them as well as the machinery to replicate them. Therefore one can hypothesize that other endosymbionts, whose functions are not as membrane-centric, could be replaced by organelles that are not of endosymbiotic origin. Unfortunately, plastids and mitochondria have shaped our expectations that endosymbionts will leave both membranes and genomes behind.
We believe this is an overly simplistic expectation.

We argue that actinobacterial endosymbiosis accounts for the traits shared between eukaryotes and actinobacteria, as well as the phylogenetic trees that place actinobacteria as sisters of the peroxisomal proteins. The fact that numerous mitochondrial proteins are imported into the peroxisome is evidence this endosymbiosis occurred after mitochondrial endosymbiosis. This would reconcile the apparently conflicting signals in terms of which gram-positive group is ancestral to archaea and eukaryotes. We find this scenario more reasonable than invoking an extinct lineage of gram-positives that has all the traits listed in Table 5 and Table 6. However, if a genome is sequenced that contains the actinobacteria-specific traits as well as the firmicute-specific traits listed here, we would have no need to invoke endosymbiosis. It is also possible to reconcile the canonical rooting with the traits shared by actinobacteria by invoking this endosymbiotic hypothesis.

Summary of data used to support actinobacterial ancestry of archaea.

Many of these traits argue for an actinobacterial role in eukaryogenesis but not the origin of archaea. This list of informative characters is taken from [16].

Summary of data that supports bacilli ancestry for archaea.

The bacilli are more similar to archaea in terms of DNA repair, ribosome structure, and lipid metabolism than any other group of bacteria.

Now that we have argued for the true distance between archaea and bacteria, the time has come to cross that desert. As we have asserted above, this is a unique event in evolution, so we must properly set the stage. The selective pressures associated with extreme environments and antibiotic warfare are ancient; however, they cannot cause a revolution on their own, so a significant relaxation in selective pressure is also necessary. We argue that viral endosymbiosis could relax selective pressure enough to start such a revolution.

Koonin has observed that the PolB family of polymerases is the most common DNA polymerase family in viruses [91]. Koonin et al. also observed that the archaeo-eukaryotic DNA primase is a hallmark viral protein [19]. This hints at some connection in DNA replication between archaea, eukaryotes, and viruses. We examined the distribution of all protein families in Pfam [78] that originated at the root of archaea and eukaryotes to see if this connection could be extended. We defined Pfam families that were present in at least 90% of archaeal genomes (46 at the time) and 90% of eukaryotic genomes (35 at the time), and in less than 50% of bacterial genomes (939 at the time), as originating at the root of archaea and eukaryotes. A 90% cutoff is strict enough to imply that the protein was present in LAECA, while a 50% cutoff is loose enough to accommodate recent horizontal transfers. Most of these Pfam families are well below the 50% cutoff in bacteria.

By this definition there are 74 Pfam domains that originated in LAECA; 24 of these are found in at least one viral genome (Table 7). On average each of these Pfam domains is present in 36.38 viral genomes (14.36 if one excludes PolB). As an approximate measure of the significance of this result we took 10000 random samples of 74 Pfam domains that are found in at least one cellular genome, to see how often one finds 24 or more that are present in at least one viral genome. None of the random sets had that many viral Pfam domains, which implies this set is significantly enriched in viral proteins.
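A minimal sketch of this resampling test follows (Python). The data structures are synthetic stand-ins: in the real analysis the presence/absence flags would come from Pfam's per-genome tables, and laeca_set would be the 74 families meeting the 90%/90%/50% criterion described above; the names and numbers below are placeholders so the sketch runs on its own.

```python
import random

random.seed(0)

# Stand-in data: a flag per Pfam family for "found in at least one viral genome".
all_families = [f"PF{i:05d}" for i in range(5000)]
pfam_in_virus = {f: random.random() < 0.15 for f in all_families}

# Placeholder for the 74 families present in >=90% of archaeal and eukaryotic
# genomes but <50% of bacterial genomes (the LAECA set of the text).
laeca_set = all_families[:74]
observed = sum(pfam_in_virus[f] for f in laeca_set)  # 24 in the real analysis

n_trials = 10000
hits = 0
for _ in range(n_trials):
    sample = random.sample(all_families, len(laeca_set))
    if sum(pfam_in_virus[f] for f in sample) >= observed:
        hits += 1

# Empirical p-value: fraction of random, same-sized sets containing at least as
# many virus-associated families as the LAECA set. In the text, none of the
# 10000 random sets reached the observed 24, i.e. p < 1/10000.
print(hits / n_trials)
```

The p-value quoted below for the corresponding LBCA comparison can be obtained with the same kind of procedure applied to the 106 bacteria-specific families.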
However, we must keep in mind that our sampling of the viral world is still highly biased (discussed in [91]) and that viral genomes evolve rapidly. Viral genomes are sampled so poorly that none had the MCM domain from Pfam, even though it is found in a prophage region of some bacilli, as discussed above. Further, 18 of the remaining Pfam proteins that originated in LAECA are ribosomal, which we assume are less advantageous for viruses to encode than the DNA replication machinery (although we did find several ribosomal proteins in viruses in this set).

Pfam proteins that originated near LAECA and their distribution in the viral world.

The Pfam families that originated in LAECA are more common in viruses than those that originated in LBCA.

We can also check whether this result is significant by looking at the set of proteins that would be present in LBCA (the last bacterial common ancestor), but not LAECA, under the same definition; that is, Pfam domains present in at least 90% of bacterial genomes and less than 50% of archaeal and eukaryal genomes. There are 106 such Pfam domains, and 15 of them are found in at least one viral genome (p-value 0.2457). Each of those 15 is in an average of 8.33 viral genomes. It should be noted that this is an underestimate of LBCA's content, since there are so many parasitic bacteria with genomic sequences available. However, in general viruses share more Pfam domains with LAECA than with LBCA.

Koonin proposes, based on PolB's distribution, that archaea arose from an acellular ancestor and then retained the more ancient polymerase [91]. We find this view hard to reconcile with the three independent arguments for the derived nature of archaea provided above. Forterre has argued that DNA originated from a viral endosymbiosis in each of the superkingdoms [17], but our data argue against that scenario for the origin of bacteria. We propose the alternative hypothesis that viral endosymbiosis occurred in bacteria and gave rise to archaea. This virus would supply the missing link in terms of DNA replication machinery between the prokaryotic superkingdoms. We think this would have to be endosymbiosis, and not just a horizontal transfer, given the distribution and interdependencies of these systems in cellular life.

To a first approximation there are three components that define the propensity of a genome to become permanently damaged. The first is the environment. Many different extreme environments are damaging to DNA, including radiation, high temperature and desiccation [92]. Second is the size of the genome. The larger the piece of DNA, the more likely damage will occur, and the more it must be dealt with. Third is the state of the active repair system. If active repair is poor, even rare damage events will eventually accumulate. Therefore, we argue that systems that are extreme in any one of these three components must routinely deal with DNA damage during replication.

Archaea, in general, fit the description of extremophile better than any other major taxon. It has been proposed that the unifying trait of all archaea is adaptation to chronic energy stress [93]. The author argues that archaea outcompete bacteria in niches that are under chronic stress. Thus archaea have become successful in dealing with environments that other superkingdoms cannot handle. The author noted that archaea do better in environments that are consistently extreme, and are outcompeted by bacteria in environments that fluctuate.

A corollary of chronic energy stress is chronic DNA damage.
Many of the extremophilic environments archaea have made home severely damage DNA. On the other hand, bacteria may face only occasional stressful situations that require DNA repair. Therefore it is disadvantageous for bacteria to have their repair systems on all the time. Conversely, archaea need to repair their DNA constantly, so it would make sense if the line between their replication and repair systems were blurred. An example of this prepare-for-the-worst strategy is the unique ability of archaeal PolB to read ahead and stall replication if a uracil is encountered [94].

In terms of large genomes, eukaryotes win hands down (see figure 1 in [95]). A polymerase is more likely to encounter damage somewhere in the replication of these large genomes than in a prokaryote with a smaller genome in a similar environment. This is supported by evidence that eukaryotes use a separate repair system during replication of the large non-transcribed regions of their genome [96,97].

What other situation besides chronic DNA stress and large genome size would put similar pressure on the DNA replication machinery? We argue, somewhat counterintuitively, that a total lack of active DNA repair systems would create a similar situation. Again it is optimal for the replicative system to expect to encounter damage. Viruses fit that description perfectly, as they are unable to actively maintain their genomes without their host.

If the repair systems were turned on more and more of the time, the main replicative system would become free to drift. Under this scenario the ancestors of archaea could mix and match bacterial repair and replication proteins with several molecular innovations and some transfers from the viral endosymbiont. The end result could be a system that is more robust to chronic stress. The canonical rooting implies that the components of the replication machinery that are homologous, but not orthologous, were independently recruited from proteins that initially processed RNA. Under either scenario the same amount of molecular innovation is required. The question then becomes: is it easier to innovate function in an RNA-based organism, or in a DNA-based organism under relaxed selective pressure? We argue that the difference cannot be quantified, as both scenarios predict exactly what we observe: some proteins are orthologs, some are homologs, and some are unrelated. Therefore the way to tell the difference between these scenarios is independent lines of evidence. The polarizations presented above imply the bacterial repair machinery was recruited to become the replication machinery of archaea.

It is also tempting to speculate that many of the features shared between viruses and the eukaryotic nucleus described in the viral eukaryogenesis hypothesis [73,98,99] could be extended to this hypothesis. Bell notes many similarities between nuclei and viral replication factories. One can imagine the ancestry of these traits going back to LAECA, with some being lost in archaea and others not developing until the root of eukaryotes. This is only consistent with our hypothesis if archaea are holophyletic, but for now it is certainly worth considering.

So far we have demonstrated that there is robust evidence that archaea are a derived superkingdom. We have shown the bacterial ribosome could have enough plasticity to evolve into an archaeal one. We have presented evidence that there is some link between DNA replication in archaea, eukaryotes and viruses that could be the result of endosymbiosis.
Now we will try to combine these into the larger story of why a bacterium would evolve into an archaeon.

As we discussed above, we feel the greatest weakness of Gupta's invocation of antibiotics is that it does not provide sufficient evolutionary pressure to cause a revolution on the scale necessary to create the differences between the prokaryotic superkingdoms. Observations of the vast differences in DNA replication machinery, and evidence of a viral endosymbiosis in a bacillus before LAECA, set the stage for our subsequent hypothesis.

In the traditional antibiotic battle the gram-positives are capable of evolving resistance to each other. This leads to what is commonly referred to as a Red Queen game [100]. Neither group ever really gets ahead in the long-term war, as each defensive innovation is matched by an offensive one. But that does not mean there are never winners in battles on shorter time scales. Winning a battle is not a good thing in the long run. The winners will increase in population size and consume more of an environment's resources. The corollary is that they become a better target for less dominant species to kill. If a species evolves a more resistant ribosome, it just puts more pressure on the rest of the community to hit other targets in that species.

One can imagine a firmicute deeply entrenched in such warfare endowed with the gift of a complete and novel replication system from a virus. This is supported by the distribution of viral Pfam proteins discussed above. The virosphere contains so much diversity that even rare combinations of genes would eventually end up in the same capsid at the same time, as long as they have some advantage to any virus. It would be an incredibly rare event for the virus to be just right for the bacterium to take up the entire replication system. And thus the stage is partly set for why the revolution happened but once.

The core of the DNA replication system does not appear to be as common an antibiotic target as the ribosome or RNA polymerase. A search of DrugBank revealed no antibiotics that target PolC [101]. However, there are several that target gyrase. Why the difference? Inhibition of PolC just stops a population from growing, but the damage induced by the loss of a functional gyrase invokes an SOS response and leads to cell death. There are probably natural antibiotics that target PolC, but they would not be as effective as the numerous ones that target the ribosome and RNA polymerase. Thus the introduction of PolB into the bacillus genome would not be enough to start the revolution. This is supported by the fact that many proteobacteria use PolB as a repair enzyme, the result of an HGT that did not start a revolution.

As discussed above, there are no bacteria that have archaeal histones. This strongly implies they are only compatible with the archaeal-eukaryal replication machinery. Thus we argue that viral endosymbiosis was a relaxation in selective pressure that, in combination with pressure from antibiotics targeting gyrase, led to the innovation of histones. This is a non-trivial departure from Cavalier-Smith's hypothesis that the numerous differences between the DNA-handling machinery of bacteria and archaea are the result of histones dramatically changing the way in which this machinery could interact with DNA [16]. He argues this was an adaptation to thermophily.

However, Forterre has presented several arguments against Cavalier-Smith's scenario.
He argues that the bacterial histone-like proteins that have replaced the archaeal ones in Thermoplasma acidophilum work just fine with the archaeal replication machinery [17]. He also notes that many hyperthermophilic bacteria do not use histones. At the same time, hyperthermophilic bacteria exchange many genes with archaea [102], yet they have not adopted histones. Therefore the standard bacterial replication machinery could probably not tolerate the invention of histones even under selective pressure from an extreme environment. Euryarchaea appear to have gained DNA gyrase via several independent horizontal transfers from bacteria [47]. The fact that several euryarchaea retain both histones and gyrase is evidence against Cavalier-Smith's idea that gyrase became totally redundant with the advent of histones. That view is weakened further given that gyrase was found to be essential in several of those genomes [103].

Since pressure from thermophily alone could not force histone innovation, we invoke the viral endosymbiont hypothesis. In other bacteria an alternative system to gyrase would not be much of an advantage, as getting rid of gyrase would just put more pressure on targets like the ribosome and peptidoglycan synthesis. However, as discussed above, the bacilli have several unique ribosomal proteins. That means they could already have some adaptations and preadaptations to antibiotic warfare that make them a difficult target to hit. As discussed above, they have EpsE [104], which could preadapt them for functioning without peptidoglycan. Once gyrase was no longer a useful target they could quickly lose peptidoglycan in their cell walls. The loss of these two major targets would be a huge advantage and increase pressure on the ribosome as a target.

At this point any change to the ribosome would be highly beneficial. One can imagine a Red Queen game where neomura have a distinct advantage over gram-positives but need constant innovation in their ribosomes to maintain that advantage. The observation that many archaeal-eukaryal ribosomal proteins bind Zn would be consistent with pressure to ensure proper assembly despite the antibiotics. This is supported by the fact that bacterial hyperthermophiles, whose environment interferes with ribosomal assembly, have more Zn binding sites than most other bacteria [35].

Thus the initial neomura would have an advantage in antibiotic warfare as well as the ability to replicate DNA even in the presence of damaging pressures. Their genomes could be much larger than those of extant prokaryotes. A large, robust genome would allow the neomura to be oligotrophic and to handle extreme environments. This would put them in direct competition with many bacteria in diverse environments. Their larger genome size would allow for more gene duplication, which could lead to structural innovations like the ribosomal proteins found in neomura but not bacteria.

The strongest support for this hypothesis comes from the antibiotic target site most studied in the ribosome: the 23S rRNA between ribosomal proteins L22 and L4. L22 and L4 are conserved across the superkingdoms. They bind to the same positions on the ribosome in all three superkingdoms. There are numerous crystal structures, from both prokaryotic superkingdoms, with antibiotics bound in these sites [105,106]. These studies demonstrated that nine different antibiotics that bind strongly to this site in bacteria bind with much less affinity in archaea. A2058 (E. coli numbering) is one of the sites on the 23S rRNA directly involved in binding these drugs.
A2058 is conserved across 99.4% of sequenced bacterial 23S rRNAs [107] (the short sketch below shows how such a column count is tallied from an alignment). The site is almost universally guanine in archaea and eukaryotes. The mutation A2058G makes many bacteria macrolide-resistant [108], while the reverse mutation can make archaea macrolide-sensitive [109]. These differences in antibiotic affinity are well conserved across the divide between bacteria and neomura, and appear to be the result of intense selective pressure from antibiotics.

Even though bacteria are able to gain resistance through a similar mutation, it is probably not fixed because there is a slight decrease in fitness that can be reduced with other mutations [107]. If there were constant pressure on that site, other mutations and changes in structure could relax those costs and fix that position. That would be completely consistent with the scenario outlined here. If the divide between archaea and bacteria is primordial, it is much harder to explain this difference. Ribosomal proteins L22 and L4 must have been present in LUCA. If the ancestor of archaea was an extremophile they should not have been in competition with enough bacteria to need the resistance conferred by this mutation.

It would be tempting to speculate that this mutation is an adaptation to thermophily or some other extreme environment, to answer this nagging issue of antibiotic pressure at the root of archaea. This idea can be tested by examining the position in bacterial hyperthermophiles. In both the hyperthermophiles Aquifex aeolicus and Thermotoga maritima this position is 100% conserved as adenine, as it is in their thermophilic relatives (Additional file 4; Figure S4). The thermophile Thermus thermophilus has two copies of the 23S rRNA, and usually both have adenine at that position unless they are under selective pressure from antibiotics [110]. Thus the only explanation that appears to hold water is some extreme antibiotic pressure at the root of archaea.

The mark of antibiotic pressure can also be seen in the proteins that would be lost at the origin of archaea. We searched Pfam and DrugBank for antibiotic targets that are conserved across bacteria but were clearly not in LACA. Eight of these are listed in Table 8. Several of these appear to have been horizontally transferred to archaea, such as DNA gyrase. That is consistent with the scenario under discussion, because once archaea were no longer under strong antibiotic pressure these systems would be free to become essential again. It would be interesting to look at each of these eight predicted losses and see what preadaptations and environmental conditions can make them non-essential.

Table 8 caption: Drug targets found across bacteria that were probably not in LACA. We argue these proteins were lost in the archaeal ancestor in response to a unique antibiotic warfare scenario. Targets in italics appear to have transferred to archaea after LACA.

Table caption: Examples of drug target sites with resistance in archaea. These drugs bind target sites present in both bacteria and archaea (or eukaryotes), but with very different affinities. We argue this is a molecular fossil of the unique antibiotic war that resulted in the origin of archaea.
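As a methodological aside, conservation figures like the 99.4% quoted above for A2058 are simple column counts over an rRNA alignment. Below is a minimal sketch of that bookkeeping; the alignment file name and the column index standing in for E. coli position 2058 are placeholders, since the real column depends on the particular alignment used.

from collections import Counter
from Bio import AlignIO  # Biopython

# Hypothetical inputs: an aligned FASTA of bacterial 23S rRNA sequences and
# the alignment column that corresponds to E. coli position 2058.
alignment = AlignIO.read("bacterial_23S_rRNA_alignment.fasta", "fasta")
col_2058 = 2058  # placeholder coordinate, not the true alignment column

column = alignment[:, col_2058]  # one character per sequence at that column
counts = Counter(base.upper() for base in column if base not in "-.")
total = sum(counts.values())
for base, n in counts.most_common():
    print(base, n, "({:.1f}%)".format(100.0 * n / total))

Running the same count on an archaeal or eukaryotic alignment at the homologous column would show the guanine that underlies macrolide resistance dominating instead of adenine.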
Why would this war end and who would the winners be? To address this question we will invoke the two novel niches that are central to the neomuran hypothesis: phagotrophy and hyperthermophily [16]. The oligotrophic neomura with large genomes would be able to form many symbioses with prokaryotes because of their diverse metabolism. Such an environment would favor the preadaptations to phagotrophy discussed in [111]. This could lead to several endosymbiotic events in a short span of time. These would force the nucleus to become a better separator in dealing with selective pressures proposed by several hypotheses: invasion of introns [112], differing metabolisms [113] and ribosome chimerism [114]. The successful phagotroph would eat prokaryotes, so at first it would be to the advantage of the prey to try to kill the neomura. However, that is not the optimal strategy for dealing with phagotrophs. It is much better to persist inside them and eat them from the inside out, as can be seen by the numerous bacterial taxa that have independently evolved the ability to infect eukaryotes. Once it is possible to infect the phagotrophs, killing them with antibiotics becomes counterproductive. And thus a truce (or new war) would be declared on one front of the great antibiotic war.

The early eukaryotes would outcompete and eat many of the initial neomura, but would be at a disadvantage in extreme environments as they began to rely more on their cytoskeletons and larger cell size. It would be easier for the neomura to drift into more extreme environments because of their DNA replication machinery. The proto-archaea would begin to emerge as the neomura began moving into previously unoccupied niches of extremophily. The conversion of their membranes would probably be the commitment step in the process. Once they began settling into environments that are constantly extreme they would be under pressure to streamline their genomes.

This scenario is consistent with a recent study on gene content evolution in archaea that concluded that most archaeal genomes have been streamlined from larger ancestral genomes [115]. The authors conclude that the archaeal ancestor could have had 2000 gene families, and that the extant archaeal groups were mostly created through differential loss. The authors note this repeated loss is consistent with the chronic energy stress of the archaea described in [93], as specialization and loss are highly favorable in consistently extreme environments. The trend for euryarchaeal- and crenarchaeal-specific traits both to be present in the deep-branching archaea is also consistent with the idea that archaea became specialized from a more generalized genomic ancestor. The redundancy in archaeal systems, such as two replicative polymerases and two cell division systems, could be remnants of the antibiotic war. That redundancy would become unnecessary once archaea committed to extremophily. It was noted in [2] that ribosomal protein loss is much more common in archaea than in bacteria. Our hypothesis implies that the distribution of ribosomal proteins in archaea is the result of independent losses once they were no longer under antibiotic pressure. Some of these novel proteins developed other roles to deal with extremophily, so they have been retained. The ancestral archaeal ribosome could very well have contained all of the proteins found in any archaeal genome, which would certainly weaken that aspect of the eocyte hypothesis.

What about the neomura? They would be stuck in the middle. The eukaryotes would be eating them, and they would still be in competition with bacteria. Their only viable strategy would be constant innovation, as they would not really have a novel niche. However, the wave caused by viral endosymbiosis would not go on forever.
There would be diminishing returns in terms of the resistance provided by the new innovations. Eventually the innovations would become a disadvantage, as bacteria could then release compounds that only target the new systems. For instance, aphidicolin inhibits DNA replication in archaea and eukaryotes but not bacteria by targeting their unique polymerase [116,117]. So the initial advantage the neomura had in terms of antibiotic resistance was not a stable niche. They were outcompeted from three sides, and thus we are left with a hole in the middle of the branches of the tree of life that often gets mistaken for the root. This scenario is summarized in Figure 5.

Figure 5 caption: Summary of our hypothesis. A viral endosymbiosis bridges the gap in DNA machinery between the superkingdoms. That triggered an antibiotic war that resulted in the birth of eukaryotes and archaea. The antibiotic war ended when archaea became extremophiles and the eukaryotes became phagotrophs. Traits shared between eukaryotes and actinobacteria are the result of endosymbiosis; the peroxisome is not the direct descendant of an actinobacterium.

It is reasonable to ask how different archaea and bacteria would have to be for us to consider the rooting debate closed. If the genetic material were different between the superkingdoms, it would be strong evidence of life being polyphyletic. If the genetic codes were somewhat different (even a few codons), that would certainly be evidence that both groups were primordial. If membrane proteins like SecY were not universally conserved, we would take that as evidence that LUCA was acellular. The differences between the prokaryotic superkingdoms seem small if we consider that the last prokaryotic common ancestor had a membrane and a ribosome that used the same genetic code as all extant life. They have more in common than can be described by any tree.

None of the differences between archaea and bacteria are great enough to imply a transition between the superkingdoms is impossible. The three independent polarizations provide compelling evidence the transition occurred. A viral endosymbiosis in a firmicute host could be the relaxation in selective pressure that acted in combination with pressure from antibiotics to cause a revolution in terms of membranes, ribosomes, and DNA replication machinery. This is supported by the association between proteins found in viral genomes and those that appear at the root of archaea and eukaryotes. Gupta's hypothesis that antibiotics led to the differences between the superkingdoms is well supported by the data generated in the past decade.

Archaea would certainly need to have several innovations in terms of protein structure. None of these are deal breakers. They are present in extant cells, and are not found in viruses. So they would have been an innovation at some point. There is no reason to assume all structural innovations happened near the root of the tree of life. Work from our group probing the relationship between ancient ocean chemistry and protein structure evolution is an example of one source of later innovations [36,37]. The modern ocean has several orders of magnitude more Zn than the ocean of LUCA's time [118]. Many extant Zn binding sites evolved after that transition. As noted above, several of the ribosomal proteins unique to the neomura have Zn binding sites. One of the innovations needed, PolD, is predicted to have two Zn fingers [39].
Increasing levels of Zn would not be the only factor, but it is another example of how revolutionary planetary changes shape evolution, as discussed in [119]. This observation makes sense if one places the origin of archaea after the great oxidation event, and considers the fossil record as a supplement to phylogenetic data. If we look at the details we may find the rhyme and reason to the other novel structures at the root of archaea as well. There are also many structural innovations at the root of the eukaryotes [120]. The fact that archaea have many unique protein structures does not imply they are primordial.

As we have hinted above, one of the strengths of this hypothesis is that it does not rely on archaea being holophyletic. The scenario we have described implies holophyly, but if something conclusively proved archaea were ancestral to eukaryotes, it could be adapted. There is no explanation in the neomuran hypothesis for the traits shared between actinobacteria and eukaryotes besides vertical descent. The link Cavalier-Smith has justified could be the result of an endosymbiosis that did not leave its mark with an extant organelle. If archaea are paraphyletic it just means eukaryotes did not originate for the reasons we have hinted at here, but rather more along the lines of the traditional endosymbiotic hypotheses. It does not change the way we have to think about the origin and rooting of archaea, which is the central focus of this paper.

The hypothesis we have proposed can be refined with experiment. It seems that if one really wants to understand the likelihood of intermediates between archaea and bacteria, we need to understand why hybrid systems are unheard of. For instance, what other proteins need to be placed into a bacterium to allow it to use histones? How would an archaeon with a bacterial ribosome function? Trying to recreate the intermediates we believe went extinct would certainly give insight into their plausibility. It definitely would give better insight into the functional nuances of proteins with homologous function across the prokaryotic superkingdoms that appear to be highly resistant to horizontal transfer. It would be highly informative regardless of the location of the root of the tree of life.

We have drawn our data from diverse sources that are not usually the primary tools for studying evolution. Viruses have been getting more attention as players in shaping the tree of life recently [18], and better sampling will clarify the plausibility of the endosymbiosis we have proposed. However, essentiality and protein structure are non-traditional tools in this field. If essentiality experiments were performed across the ribosomes and DNA replication machinery of bacilli under different conditions, it could give us hints as to what selective pressures would need to be relaxed for the major transition to begin. Further study of natural antibiotics will also continue to increase the resolution of the hypothesis. We argue this line of experiment would be useful in its own right, since many of the firmicutes are pathogens that affect human health.

Of course, in-depth sampling of thaumarchaeota and korarchaeota is going to be invaluable to this endeavor. If the ribosomal proteins that are currently missing in these groups are found in new genomes, it would imply independent losses and make holophyly seem a little more appealing. The redundancy left in these genomes could just be the first surprise.
Deeper sampling may reveal redundancy in some of the archaeal-bacterial hybrid systems we discussed above. Finding a deep-branching archaeon that uses a bacterial system would truly validate this hypothesis.

The proteasome is an important component of our hypothesis. Rooting archaea in the bacilli does not by itself explain the presence of the proteasome across the entire superkingdom. We think Cavalier-Smith is correct in pointing it out as a link between archaea and actinobacteria, but in light of the other evidence raised here we do not find that argument convincing on its own. It is not clear in which direction the proteasome was inherited. Even if the proteasome was horizontally transferred it does not weaken the polarization of archaea; it would still be a derived structure that was present at the root of archaea, so they must still be derived.

There are many instances in the literature where data are only presented under the canonical rooting, when in fact they are better explained by an alternative rooting. This quickly leads to circular logic: a hypothesis gets buried because no data support it, and data get buried (in supplemental data or ad hoc invocations of HGT) because they do not fit the canonical rooting. As an example, look at how much data from Eugene Koonin's group we have cited in this work to support our hypothesis, even though he has made it clear he thinks this rooting is unsupportable (see his reviews of [11,81] and of this manuscript as well). We refute the view, and prevailing opinion, that there is no reasonable data to support a rooting within bacteria.

For instance, one of the biggest problems with the canonical rooting is the origin of cells. The term "RNA world" is sometimes invoked as a miracle that could explain anything that happened in evolution before cells looked the way they do now. But one thing RNA definitely cannot do is make transmembrane pores. This problem is addressed well by the obcell hypothesis of Blobel and Cavalier-Smith [121,122]. They propose that proto-cells had very little going on inside them initially. Rather, they were collections of ribozymes tethered to the outside of a cell. The details of their proposal get around the problems of transmembrane RNA structures, but also imply the first true cell had a double membrane (like the gram-negative bacteria). Our point here is not about which hypothesis is correct, but that both of them are better understood in terms of the strengths and weaknesses of the other; throwing either out is essentially operating without a null hypothesis. The differences between these hypotheses were recently reviewed in [123].

Our view is that the debate should not be closed, but we acknowledge the difficulty in making meaningful contributions to that debate due to the complexity of the problem. Clearly DNA or protein sequence data alone do not suffice to provide a satisfactory answer. Data from different scales of biology - structure, function, biochemical processes, cell morphology, etc. - as well as the fossil record and the earth's environment at different time points have to be applied. Fortunately, in our view, the increasing availability of these data and the tools to manipulate them promise to keep the debate alive, and opinion will continue to see-saw as it has done for the past 33 years since the pioneering work of Woese.

Conclusions

This novel combination of hypotheses on the origin of archaea is intended to keep the debate alive. We think Cavalier-Smith has the best method for rooting the tree.
His attention to detail and use of multiple sources of data allow one to refine his ideas, as we have done here. Lake (and Gupta) has the right root for archaea, and despite our criticism, indel polarization is a useful methodology. Gupta has the right idea about antibiotics being a major force in this story, and of course his work on indels laid the groundwork for our own work as well as Lake's. Forterre is right about viruses being major players in this event. Of course many others have shaped our thoughts on this subject, but we have clearly taken the most from the work of these four. In so doing we have tried to demonstrate the value of using opposing ideas as null hypotheses to each other.

Have we provided a scenario that explains every detail of how archaea evolved out of gram-positive bacteria? We certainly have not. What we have presented is a variety of data that attempts to show it is a plausible and defensible stance. The emergence of archaea is an amazing event in the history of life, but deciphering its origin is not simple. However, if we close the debate we close our eyes to the large body of evidence that supports the polarization of this transition.

We have tried to provide a novel view on the origin of archaea that makes it clear very little is settled on this subject. We have provided a scenario that covers most of the transition between bacteria and archaea. The ideas we propose here can be refined with further experiment and more observations. The ideas are currently supported by diverse data. The study of these hypotheses will give us insight into several tangentially related topics that are worth pursuing, such as the subtleties of antibiotic resistance in the Gram-positive ribosome. In summary, the hypothesis we present and support here reconciles many opposing viewpoints and strongly argues that archaea are derived from Bacilli.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

REV conceived the study and analyzed the data. PEB assisted in writing the manuscript. All authors read and approved the final manuscript.

Reviewer's report 1

Patrick Forterre, Universite Paris Sud and Institut Pasteur, Paris, France

In their paper, « the origin of a derived superkingdom: how a gram positive bacterium crossed the desert to become an archaeon", Valas and Bourne update the previous proposal by Gupta linking Archaea to « gram positive bacteria ». The term gram positive bacteria is really outdated, since the work of Carl Woese has shown that it has no phylogenetic meaning. In fact, the title of this paper should be: "how a Firmicutes bacterium crossed the desert to become an archaeon ». Firmicutes are one of the 20, 30 more...(it's not yet clear) bacterial phyla. It has been much more extensively studied by human for medical and biotechnological reasons, but this does not qualify it to be more than that.

Author's response: We find there are many compelling reasons to still consider the Gram-positives a monophyletic group, as discussed in [8]. We have also presented evidence to justify why we do not trust the rRNA tree as a tool for macrophylogeny, especially for two groups nicknamed the "low gc gram-positives" and the "high gc gram-positives". We have two sources that disagree on the position of these groups. The solution is not to make declarative statements that one data source makes looking at the other unreasonable, but rather to consider the strengths and weaknesses of each.
One of the goals of the paper is to unify the hypotheses that question the rooting of the tree of life between the Archaea and Bacteria. Again, we'd like to point out that despite their differences Lake, Gupta, and Cavalier-Smith all agree the Archaea are derived from a Gram-positive bacterium. So even though we narrow it down to a phylum, we think this title still reflects the larger goal of the paper.

In summary, Valas and Bourne proposed that both Archaea and Eukaryotes derived by transmutation from a member of Firmicutes, i.e. of one of the many bacterial phyla present today on our planet. This is revival of the old view that bacteria are primitive organisms that populated the planet much before all others (a sequel of Heckel monera). In fact, Bacteria are very evolved organisms, a superkingdom, that have been extremely successful sice they are now present everywhere and are usually much more abundant than members of the two other domains. It's unclear if they predated Archaea and ancient Eukarya, but they will certainly survive long after complex eukaryotes like us will have disappeared. I suspect that Archaea and Eukarya are the only two lineages that survive the extraordinary success of bacteria.

Author's response: In this manuscript we have presented three pieces of evidence that imply Bacteria did predate the Archaea. This reviewer has not addressed why he feels those are insufficient.

In my opinion, one of the reason for this success was the invention of DNA gyrase. This enzyme allows to couple directly the energetic state of the cell (the ATP/ADP ratio) to the expression of all genes at once in modulating the supercoiled state of the chromosome. Once you become addict to DNA gyrase, you can't let it go. The last bacterial common ancestor had a DNA gyrase, and all modern bacteria still have it. Some archaea succeeded to get gyrase from bacteria, they are now fully dependent of it. Plants also get gyrase from cyanobacteria, one possible reason for their success ?? The idea that a poor Firmicutes abandoned DNA gyrase to escape antigyrase drug producers does not seem realistic to me. Unfortunately for too many human patients, gyrases have found many way to become multi drug resistant without having to abandon it. In general, bacteria have been very efficient to thrive happily in all possible « deserts » that one can imagine, including hot springs up to 95°C. Hyperthermophilic bacteria or desiccation resistant bacteria are not en route to become archaea but bona fide bacteria. I cannot discuss in details all ad hoc hypotheses proposed by the authors to explain how a Firmicutes become an archaeon.

Author's response: We argue in the manuscript that extreme habitats and antibiotic warfare were not unique enough niches, and this is why we do not think Gupta's or Cavalier-Smith's hypotheses are sufficient on their own. We agree that DNA gyrase is a big deal, and it would require some very unique circumstances for it to be lost. If the archaea are adapted to chronic energy stress it would not be unreasonable for them to move away from gyrase, because the benefit you described above disappears if ATP is always scarce. The question is whether the transition is impossible or implausible. We are arguing this was a very rare event that only happened once. We feel it is more productive to try evaluating some of the hybrid systems we propose than to speculate about their impossibility. Once again, we regret a reviewer refuses to discuss details with us.

They have certainly done a huge amont of bibliographic work and hard thinking which will help them in future debates on the origin of the three domains, but in my opinion, they have reached an impasse in trying to revive Gupta's hypothesis. For me, all hypotheses that invoke the transmutation of one domain (in its modern form) into another are definitively wrong. It is the same for hypotheses in which a combination of modern archaea and bacteria produced a protoeukaryote.

Author's response: This implies there are essentially three primordial lineages; a view that we think is definitely wrong based on the currently available evidence. We have provided three robust pieces of evidence why the archaea appear younger than the bacteria, which this reviewer has completely ignored. Other work from our group demonstrates that we can constrain the evolution of eukaryotes based on the biochemical history of the ocean [36,37]; that data argues the eukaryotes are a more recent lineage than the bacteria and archaea, and is completely independent of this work. Again this reviewer provides no argument for why transmutation hypotheses are definitely wrong, or why the polarization evidence bears no weight on these questions.

I fully agree with Carl Woese who already wrote several years ago that « Modern cells are fully evolved entities. They are sufficiently complex, integrated, and "individualized" that further major change in their designs does not appear possible, which is not to say that relatively minor (but still functionally significant) variations on existing cellular themes cannot occur or that, under certain conditions, cellular design cannot degenerate". Firmicutes are modern cells, they cannot have experienced "major change in their designs" to become archaea and later on eukarya. These transmutation hypotheses put us backward in the pre-Woesian era, when evolution was viewed as a succession of steps from simple organism (moneraprokaryote-bacteria) to lower eukaryotes, then to higher eukaryotes, then to human (the scala natura). Definitely, a bacterium cannot be transmutated into an archaeon, even by a virus.

Author's response: Only time will tell whether our skeptical reading of the rRNA tree will turn out to be pre-Woesian or post-Woesian. We do not express the view that evolution is just a series of successive steps or that a bacterial cell is simple in any way, and neither does Cavalier-Smith, Gupta, or Lake in our opinion. However, within the larger process of evolution there are clear paths that were built in successive steps of increasing complexity. We think the best example of this is the proteasome's quaternary structure. Fortunately, the proteasome has an informative phylogenetic distribution that allows us to polarize the direction of its evolution. We argue the Archaea evolved from the Bacteria because their proteasome is more complicated, but that does not imply the rest of the machinery is simpler in the Bacteria. If there are markers that are clear-cut stories, how could using them for phylogenetic inference be pre-Woesian? Again, we ask why there are no polarizations that place the Bacteria as derived from the Archaea. We think the view that there is an insurmountable divide between the superkingdoms by definition leads to circular reasoning, instead of a discussion about the actual data. Ironically, we see parallels between the current situation and Woese's account of how his ideas were first received [5].

A virus take over of the replication apparatus could have created a bacterium with a novel replication apparatus, that's all. This would not have changed bacterial lipids, membranes, ribosomes, proteasomes, ATP synthases, transport sustems, metabolism,........ Possibly, one day, among the ten of thousand of bacteria whose genomes will be sequences, one will find one bacterium with an atypical replication system of recent viral origin, but I bet that this bacterium will have « bacterial ribosomes » and so on.

Author's response: We would be very surprised if the DNA replication system was that different and the rest of the cell was purely bacterial. The nice thing is this is one of the few points we disagree on that data will actually make clearer.

If one want to understand the origin of modern domains, one has to consider that they originated in a very different world that our present one. A world with many lineages (domains or protodomains) that have now disappeared, possibly back to the cellular RNA world. This is a really difficult and fascinating objective which requires to propose sometimes bold hypotheses, but these hypotheses should take into account that the divide between the three modern domains is now so great that it cannot be crossed, even by an adventurous, desperate Firmicutes.

Author's response: We again refer readers to work from our group on the evolution of the superkingdoms in relationship to the history of the ocean's biochemistry [36,37]. We will continue to incorporate new data sources that allow us to measure how different that world was instead of speculating about it. For now, the many data sources we have woven together imply there is something deeply wrong with the canonical rooting as well as the logic used to support it (see reviewer #3's comments). This reviewer's advice that we need bold hypotheses, but that the rooting must be taken as dogma, makes little sense to us in light of the many problems with that rooting.
Reviewer's report 2

Eugene V Koonin, National Center for Biotechnology Information, NIH, Bethesda, Maryland, United States

To this reviewer, the manuscript by Valas and Bourne is frustrating. These authors continue to question the primary divide in the evolution of cellular life, that between archaea and bacteria, without any legitimate grounds. Here they go deeper into this falsehood by trying to present arguments for one bacterial root of archaea as opposed to another that has been proposed by a different author, in an equally faulty manner. Another innovation here is adding insult to injury: "This data have been dismissed because those who support the canonical rooting between the prokaryotic superkingdoms cannot imagine how the vast divide between the prokaryotic superkingdoms could be crossed." This allegation is a substantial part of the exceptionally brief abstract of Valas and Bourne. No comment seems to be required.

My general view, which I see no reason whatsoever to change, is expressed in the following quote from my review of a previous publication by the same authors:

"The nature of the primary divide in prokaryotes - and actually among all cellular life forms - is clear, and it is between archaea and bacteria. This view is supported by the fundamental differences between archaeal and bacterial systems of DNA replication, core transcription, translation, and membrane biogenesis - essentially, all central cellular systems (not just the replication system as noted in the present paper). I believe these differences are sufficient to close the "root debate" (regardless of the appropriateness or lack thereof of the very notion of a root in this context) and to base analyses and discussions aimed at the elucidation of the nature of LUCA on that foundation." [11]

Perhaps it is worth adding the results of a recent comprehensive analysis of phylogenetic trees for prokaryotic proteins that firmly supports the primary divide between archaea and bacteria [128].

Author's response

A large part of the motivation for this manuscript was the review of our previous work [11]. Clearly there are others besides us who do not find things as clear-cut as this reviewer (see reviewer #3's comments). We think there are many reasons to support the canonical rooting, as well as reasons to question it. We have presented our views on much of this evidence. The reviewer has again refused to discuss our data in any detail, implying it is obvious why we are wrong. We feel we have greatly strengthened our previous argument by looking at the big picture in terms of the Gram-negative rooting. This reviewer claimed that rooting was unsupportable because it is so obvious that Archaea did not evolve from Bacteria. We feel we have strengthened the case for that rooting, but clearly we have not swayed this reviewer. We do not think there is anything more to say on this subject, so we point readers to the discussion between this reviewer and Cavalier-Smith in [81].

The only other comment I wish to make is the extreme carelessness with which the manuscript is written. The abstract consists of 6 sentences of which two are obviously ungrammatical. Furthermore, in the Conclusion section of the abstract, the astonished reader finds "antibiotic warfare and a viral endosymbiosis" for which no argument and no mention has been made in the Results section. Perhaps the authors can get rid of these and other similar problems in a revision, but I do want to keep it in the record that this is how the manuscript was submitted for review.

Author's response

We apologize for any issues with the form that took away from the content, and we hope the final version is improved. The paper is written somewhat recklessly because it is what it is: the end of a Ph.D. dissertation. We feel it was the right time to get these ideas out because of our perception that the canonical rooting is too dogmatic. This review has only supported our view that this manuscript was needed. We think a reader should be astonished by the end of a short abstract before a long, reckless paper; it gets them to read the paper.

We find it interesting that this reviewer had no comment on two aspects of this work which can be judged independently of the rooting issue: the holophyly of the archaea and the actinobacteria's role in eukaryogenesis. We have presented much evidence that the conclusion this reviewer reached on the former, using indel data, is flawed. It would have been informative to hear this reviewer's opinion on that analysis.
Reviewer's report 3

Gaspar Jekely, European Molecular Biology Laboratory, Heidelberg, Germany

In this paper Valas and Bourne address a very difficult problem in evolutionary cell biology, that of the origin of Archaea (archaebacteria). They do this after arguing at length for the bacterial rooting of the tree of life. Such attempts are very welcome, since these areas are extremely controversial and important, yet few people seem to notice that there is a problem there, namely that the conventional rooting of the tree of life between archaea and bacteria is far from being proven and is not as trivial as it seems. The evidence for this rooting, coming from paralogous gene rootings, is highly questionable and gives conflicting results when different paralogs are analyzed.

Author's response

We thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2 do. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.

Notwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite weak. I also have the feeling that the three indels and the distribution of the proteasome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineages. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.

Author's response

The strength of our evidence rests in the difference between polarization and parsimony, which we have expressed before [11]:

"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication so a non-duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appear to be many polarizable transitions and hopefully there are many more waiting to be discovered."

We do not think the polarizations make this an open-and-shut case. However, we find they are sufficient to question the canonical rooting and to search for more evidence to support the alternative rooting we advocate here.

The rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lysogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbable form of infection with a virus that had collected from the virosphere the right combination of genes, to hand it over to the cell. In this way they try to create a unique event that led to the unique origin of the archaea. I am not sure that invoking such a hypothetical, extremely rare event, for which there is no evidence, solves the problem. I acknowledge the enrichment of proteins of possible viral origin in the stem archaea, but this slight statistical enrichment does not mean that there was only one virus infection involved. One could just as well imagine a series of viral gene transfers in the framework of the antibiotic warfare scenario that provided the novel enzymes in a step-by-step manner. Given the random sequence of events and the nature of the transferred genes, this could also lead to a lineage with unique identity. This scenario is at present the weakest part of the paper and should be worked out much better in a more focused paper.

Author's response

We completely agree with this assessment of our viral endosymbiotic scenario. Endosymbiosis is probably not the best term, but we want to stress the radical nature of this interaction. We do not think our assumption about a rare virus is off base. The virosphere is very large and viruses are experts at manipulating genetic material in novel ways. The DNA-handling enzymes did evolve twice despite how unlikely that was. We think a radical turnover of the machinery is only possible in a virus. Successive viral transfers could explain the data too, but this would still be a rare event to account for so many genes. An event like this would not leave much of a mark besides a statistical enrichment (if even that). If that enrichment is real, the question becomes why the Archaea interact differently with viruses than the Bacteria do. That answer cannot be developed much further at present, but a better sampling of the virosphere is definitely going to help here.

There are several other ideas in the paper that are potentially interesting, but not well worked out. The proposition of an actinobacterial symbiont during early eukaryote evolution is one example. Such a hypothesis could possibly be spelled out in a full paper, with a detailed scenario and all the evidence that seems to support such a model. In this form it is just a proposition that is hard to judge thoroughly, and is very easy to dismiss. In general, my recommendation would be to refocus the paper around one key idea, namely the origin of archaeal DNA-handling enzymes by quantum evolution and from viral sources as a result of an antibiotics arms race. If the authors spell out this scenario clearly, together with the supporting evidence, but without going into the details of the rooting issue (discussed already in their previous Biology Direct paper) and the origin of eukaryotes (this could be done in a separate paper), this could become a much more useful and potentially influential manuscript. The title could then be changed accordingly. In the present form the title gives the impression that the authors wish to explain everything, which is far from being the case (for example the unique membrane chemistry or archaeal flagella are not covered).

Author's response

This is a fair assessment of the manuscript. It is certainly overly ambitious and in many ways incomplete. The goal of this paper was to unite the various ideas about bacterial rootings of the Archaea. The fact that many of our ideas could be developed further is more evidence that this debate is not as closed as the two other reviewers have declared. While we certainly have omitted many details, we have chosen to take a big-picture view. There is an obvious connection between the problem of rooting the tree of life and the origin of the superkingdoms. We think we can only judge a rooting hypothesis by assessing how well it addresses these questions. We think the canonical rooting is insufficient when one begins asking questions about the origins of the superkingdoms. We hope that readers will pick and choose which ideas they like and continue to develop and test them.
Firmicutes are modern cells, they cannot have experienced \"major change in their designs\" to become archaea and later on eukarya. These transmutation hypotheses put us backward in the pre-Woesian era, when evolution was viewed as a succession of steps from simple organism (moneraprokaryote-bacteria) to lower eukaryotes, then to higher eukaryotes, then to human (the scala natura). Definitely, a bacterium cannot be transmutated into an archaeon, even by a virus.\n[SUBTITLE] Author's response [SUBSECTION] Only time will tell whether our skeptical reading of the rRNA tree will turn out to be pre-Woesian or post-Woesian. We do not express the view that evolution is just a series of successive steps or that a bacterial cell is simple in any way, and neither does Cavalier-Smith, Gupta, or Lake in our opinion. However within the larger process of evolution there are clear paths that were built in successive steps of increasing complexity. We think the best example of this is the proteasome's quaternary structure. Fortunately, the proteasome has an informative phylogenetic distribution that allows us to polarize the direction of its evolution. We argue the Archaea evolved from the Bacteria because their proteasome is more complicated, but that does do not imply the rest of the machinery is simpler in the Bacteria. If there are markers that are clear cut stories, how could using them for phylogenetic inference be pre-Woesian? Again, we ask why there are no polarizations that place the Bacteria as derived from the Archaea? We think the view that there is an insurmountable divide between the superkingdoms by definition to leads to circular reasoning, instead of a discussion about the actual data. Ironically, we see parallels between the current situation and Woese's account of how his ideas were first received [5].\nA virus take over of the replication apparatus could have created a bacterium with a novel replication apparatus, that's all. This would not have changed bacterial lipids, membranes, ribosomes, proteasomes, ATP synthases, transport sustems, metabolism,........ Possibly, one day, among the ten of thousand of bacteria whose genomes will be sequences, one will find one bacterium with an atypical replication system of recent viral origin, but I bet that this bacterium will have « bacterial ribosomes » and so on.\nOnly time will tell whether our skeptical reading of the rRNA tree will turn out to be pre-Woesian or post-Woesian. We do not express the view that evolution is just a series of successive steps or that a bacterial cell is simple in any way, and neither does Cavalier-Smith, Gupta, or Lake in our opinion. However within the larger process of evolution there are clear paths that were built in successive steps of increasing complexity. We think the best example of this is the proteasome's quaternary structure. Fortunately, the proteasome has an informative phylogenetic distribution that allows us to polarize the direction of its evolution. We argue the Archaea evolved from the Bacteria because their proteasome is more complicated, but that does do not imply the rest of the machinery is simpler in the Bacteria. If there are markers that are clear cut stories, how could using them for phylogenetic inference be pre-Woesian? Again, we ask why there are no polarizations that place the Bacteria as derived from the Archaea? 
We think the view that there is an insurmountable divide between the superkingdoms by definition to leads to circular reasoning, instead of a discussion about the actual data. Ironically, we see parallels between the current situation and Woese's account of how his ideas were first received [5].\nA virus take over of the replication apparatus could have created a bacterium with a novel replication apparatus, that's all. This would not have changed bacterial lipids, membranes, ribosomes, proteasomes, ATP synthases, transport sustems, metabolism,........ Possibly, one day, among the ten of thousand of bacteria whose genomes will be sequences, one will find one bacterium with an atypical replication system of recent viral origin, but I bet that this bacterium will have « bacterial ribosomes » and so on.\n[SUBTITLE] Author's response [SUBSECTION] We would be very surprised if the DNA replication system was that different and rest of the cell was purely bacterial. The nice thing is this is one of the few points we disagree on that data will actually make clearer.\nIf one want to understand the origin of modern domains, one has to consider that they originated in a very different world that our present one. A world with many lineages (domains or protodomains) that have now disappeared, possibly back to the cellular RNA world. This is a really difficult and fascinating objective which requires to propose sometimes bold hypotheses, but these hypotheses should take into account that the divide between the three modern domains is now so great that it cannot be crossed, even by an adventurous, desperate Firmicutes.\nWe would be very surprised if the DNA replication system was that different and rest of the cell was purely bacterial. The nice thing is this is one of the few points we disagree on that data will actually make clearer.\nIf one want to understand the origin of modern domains, one has to consider that they originated in a very different world that our present one. A world with many lineages (domains or protodomains) that have now disappeared, possibly back to the cellular RNA world. This is a really difficult and fascinating objective which requires to propose sometimes bold hypotheses, but these hypotheses should take into account that the divide between the three modern domains is now so great that it cannot be crossed, even by an adventurous, desperate Firmicutes.\n[SUBTITLE] Author's response [SUBSECTION] We again refer readers to work from our group on the evolution of the superkingdoms in relationship to history of ocean's biochemistry [36,37]. We will continue to incorporate new data sources that allow us to measure how different that world was instead of speculating about it. For now, the many data sources we have woven together imply there is something deeply wrong with the canonical rooting as well the logic used to support it (see reviewer #3's comments). This reviewer's advice that we need bold hypotheses, but the rooting must be taken as dogma makes little sense to us in light of the many problems with that rooting.\nWe again refer readers to work from our group on the evolution of the superkingdoms in relationship to history of ocean's biochemistry [36,37]. We will continue to incorporate new data sources that allow us to measure how different that world was instead of speculating about it. For now, the many data sources we have woven together imply there is something deeply wrong with the canonical rooting as well the logic used to support it (see reviewer #3's comments). 
Reviewer's report 2

Eugene V Koonin, National Center for Biotechnology Information, NIH, Bethesda, Maryland, United States

To this reviewer, the manuscript by Valas and Bourne is frustrating. These authors continue to question the primary divide in the evolution of cellular life, that between archaea and bacteria, without any legitimate grounds. Here they go deeper into this falsehood by trying to present arguments for one bacterial root of archaea as opposed to another that has been proposed by a different author, in an equally faulty manner.
Another innovation here is adding insult to injury: "This data have been dismissed because those who support the canonical rooting between the prokaryotic superkingdoms cannot imagine how the vast divide between the prokaryotic superkingdoms could be crossed." This allegation is a substantial part of the exceptionally brief abstract of Valas and Bourne. No comment seems to be required.

My general view, which I see no reason whatsoever to change, is expressed in the following quote from my review of a previous publication by the same authors:

"The nature of the primary divide in prokaryotes - and actually among all cellular life forms - is clear, and it is between archaea and bacteria. This view is supported by the fundamental differences between archaeal and bacterial systems of DNA replication, core transcription, translation, and membrane biogenesis - essentially, all central cellular systems (not just the replication system as noted in the present paper). I believe these differences are sufficient to close the "root debate" (regardless of the appropriateness or lack thereof of the very notion of a root in this context) and to base analyses and discussions aimed at the elucidation of the nature of LUCA on that foundation." [11]

Perhaps it is worth adding the results of a recent comprehensive analysis of phylogenetic trees for prokaryotic proteins that firmly supports the primary divide between archaea and bacteria [128].

Author's response

A large part of the motivation for this manuscript was the review of our previous work [11]. Clearly there are others besides us who do not find things as clear cut as this reviewer does (see reviewer #3's comments). We think there are many reasons to support the canonical rooting, as well as reasons to question it. We have presented our views on much of this evidence. The reviewer has again refused to discuss our data in any detail, implying it is obvious why we are wrong. We feel we have greatly strengthened our previous argument by looking at the big picture in terms of the Gram-negative rooting. This reviewer claimed that rooting was unsupportable because it is so obvious that Archaea did not evolve from Bacteria. We feel we have strengthened that view, but clearly we have not swayed this reviewer. We do not think there is anything more to say on this subject, so we point readers to the discussion between this reviewer and Cavalier-Smith in [81].

The only other comment I wish to make is the extreme carelessness with which the manuscript is written. The abstract consists of 6 sentences of which two are obviously ungrammatical. Furthermore, in the Conclusion section of the abstract, the astonished reader finds "antibiotic warfare and a viral endosymbiosis" for which no argument and no mention has been made in the Results section. Perhaps the authors can get rid of these and other similar problems in a revision, but I do want to keep it in the record that this is how the manuscript was submitted for review.

Author's response

We apologize for any issues with the form that took away from the content, and we hope the final version is improved. The paper is written somewhat recklessly because it is what it is: the end of a Ph.D. dissertation. We feel it was the right time to get these ideas out because of our perception that the canonical rooting is too dogmatic. This review has only supported our view that this manuscript was needed. We think a reader should be astonished by the end of a short abstract before a long, reckless paper; it gets them to read the paper.

We find it interesting that this reviewer had no comment on two aspects of this work which can be judged independently of the rooting issue: the holophyly of the archaea and the actinobacteria's role in eukaryogenesis. We have presented much evidence that the conclusion this reviewer reached on the former, using indel data, is flawed. It would have been informative to hear this reviewer's opinion on that analysis.

Reviewer's report 3

Gaspar Jekely, European Molecular Biology Laboratory, Heidelberg, Germany

In this paper Valas and Bourne address a very difficult problem in evolutionary cell biology, that of the origin of Archaea (archaebacteria). They do this after arguing at length for the bacterial rooting of the tree of life. Such attempts are very welcome, since these areas are extremely controversial and important, yet few people seem to notice that there is a problem there, namely that the conventional rooting of the tree of life between archaea and bacteria is far from being proven and is not as trivial as it seems. The evidence for this rooting, coming from paralogous gene rootings, is highly questionable and gives conflicting results when different paralogs are analyzed.

Author's response

We thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2 do. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.

Notwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other.
However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite weak. I also have the feeling that the three indels and the distribution of the proteasome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineages. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.

Author's response

The strength of our evidence rests in the difference between polarization and parsimony, which we have expressed before [11]:

"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication so a non duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appear to be many polarizable transitions and hopefully there are many more waiting to be discovered."

We do not think the polarizations make this an open and shut case. However, we find they are sufficient to question the canonical rooting and to search for more evidence to support the alternative rooting we favour here.

The rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lysogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbable form of infection with a virus that had collected from the virosphere the right combination of genes to hand over to the cell. In this way they try to create a unique event that led to the unique origin of the archaea. I am not sure that invoking such a hypothetical, extremely rare event, for which there is no evidence, solves the problem. I acknowledge the enrichment of proteins of possible viral origin in the stem archaea, but this slight statistical enrichment does not mean that there was only one virus infection involved. One could just as well imagine a series of viral gene transfers in the framework of the antibiotic warfare scenario that provided the novel enzymes in a step-by-step manner. Given the random sequence of events and the nature of the transferred genes, this could also lead to a lineage with unique identity. This scenario is at present the weakest part of the paper and should be worked out much better in a more focused paper.

Author's response

We completely agree with this assessment of our viral endosymbiotic scenario. Endosymbiosis is probably not the best term, but we want to stress the radical nature of this interaction. We do not think our assumption about a rare virus is off base. The virosphere is very large, and viruses are experts at manipulating genetic material in novel ways. The DNA handling enzymes did evolve twice despite how unlikely it was. We think a radical turnover of the machinery is only possible in a virus. Successive viral transfers could explain the data too, but this would still be a rare event to account for so many genes. An event like this would not leave much of a mark besides a statistical enrichment (if even that). If that enrichment is real, the question becomes why the Archaea interact differently with viruses than the Bacteria do. That answer cannot be developed much further at present, but a better sampling of the virosphere is definitely going to help here.

There are several other ideas in the paper that are potentially interesting, but not well worked out. The proposition of an actinobacterial symbiont during early eukaryote evolution is one example. Such a hypothesis could possibly be spelled out in a full paper, with a detailed scenario and all the evidence that seems to support such a model. In this form it is just a proposition that is hard to judge thoroughly, and is very easy to dismiss. In general, my recommendation would be to refocus the paper around one key idea, namely the origin of archaeal DNA-handling enzymes by quantum evolution and from viral sources as a result of an antibiotics arms race. If the authors spell out this scenario clearly, together with the supporting evidence, but without going into the details of the rooting issue (discussed already in their previous Biology Direct paper) and the origin of eukaryotes (this could be done in a separate paper), this could become a much more useful and potentially influential manuscript. The title could then be changed accordingly. In the present form the title gives the impression that the authors wish to explain everything, which is far from being the case (for example, the unique membrane chemistry or the archaeal flagella are not covered).

Author's response

This is a fair assessment of the manuscript. It is certainly overly ambitious and in many ways incomplete. The goal of this paper was to unite the various ideas about bacterial rootings of the Archaea. The fact that many of our ideas could be developed further is more evidence that this debate is not as closed as the two other reviewers have declared. While we certainly have omitted many details, we have chosen to take a big-picture view. There is an obvious connection between the problem of rooting the tree of life and the origin of the superkingdoms. We think we can only judge a rooting hypothesis by assessing how well it addresses these questions. We think the canonical rooting is insufficient when one begins asking questions about the origins of the superkingdoms. We hope that readers will pick and choose which ideas they like and continue to develop and test them.
[ "Archaea were first discovered because of a distinct sequence signature in their ribosomal RNA [1]. This remains one of the strongest signals found anywhere in the phylogenetic tree. It was truly a revolution in thought when the world realized there were two distinct types of prokaryotes. Besides placement on sequence trees, there are three major areas where archaea and bacteria differ greatly. First, the structures of archaeal and bacterial ribosomes each have many unique proteins [2]. Second, archaeal membranes are composed of glycerol-ether lipids, while bacterial membranes are composed of glycerol-ester lipids [3]. The glycerols have different stereochemistries between the superkingdoms as well. Third, the DNA replication machinery of these two superkingdoms is very different; many key proteins comprising this machinery have a superkingdom specific distribution [4].\nThese differences as well as the rRNA tree have convinced most scientists that the root of the tree of life must be between the prokaryotic superkingdoms. The proposal that archaea were a different kingdom was originally considered ridiculous because no one could imagine two distinct groups of prokaryotes [5]. In 30 years we went from the prevailing opinion that the archaea were similar enough to bacteria to be just prokaryotes, to the view they are so different they must each be primordial lineages.\nLocating the root of the tree of life is a prerequisite for understanding the origin and evolution of life. There are many examples of conclusions that become radically different if one assumes a different rooting of the tree. For example, the proposal that LUCA was acellular relies on a rooting between the archaea and bacteria [6]. Each of the estimates for divergence times of the prokaryotic taxa [7] would change drastically if archaea are not the same age as LUCA.\nSeveral groups have hypothesized that the root of the tree of life lies within bacteria and place archaea as a taxon derived from gram-positive bacteria [8-10]. These hypotheses are often dismissed for two reasons: 1) they do not agree on a single rooting; 2) there is an immense gap between archaea and bacteria in sequence trees and in the systems mentioned above. We addressed the differences between these alternative rooting options in [11] and concluded that it is possible for them to converge on a single root in the gram-negative bacteria. The point of this work is to address the objection to rooting archaea within the gram-positives.\nThis work is a synthesis of many creative ideas that came before us; as a result, much of what we say here has been said in some form before by others. However, the arrangement of the pieces is, we believe, novel and sheds light on the strengths and weaknesses of the various rootings of the tree of life. First, we discuss the ideas in their original form and consider what we see as the strengths and weaknesses of each. We take the stance that closing the debate prematurely deprives one of the ability to the see the many strengths of each of these hypotheses and the large common ground between them. We then offer novel data that helps refine some of these ideas and show the potential for testing them further.\nRadhey Gupta and colleagues created a detailed tree of life using rarely fixed indels (insertion-deletions) in prokaryotic groups [9]. He concluded that the root of the tree of life is within the gram-positive bacteria, and he places archaea as derived from firmicutes. 
The major driving force in his scenario is antibiotic warfare. He argues that the differences between archaea and bacteria coincide with many of the targets of antibiotics produced by gram-positive bacteria. We will review recent work that demonstrates many antibiotic binding sites have dramatically different affinities in the superkingdoms. The strength of Gupta's phylogeny rests on the fact that many of the branch orders are supported by several independent indels. However, there are several points that concern us about Gupta's hypothesis. First, we disagree with his polarization of Hsp70, which is used to justify the root of the tree of life [11]. But the focus of the present paper is the origin of archaea, so that debate is probably better left to our other work [11]. The transition between gram-positives and archaea must have been a drastic event, and it has to be confronted by any hypothesis that roots the tree of life in bacteria. Antibiotic warfare is a powerful evolutionary force, but in Gupta's hypothesis there seems to be a special battle that resulted in archaea. He does not explain why antibiotic warfare only gave rise to one other prokaryotic superkingdom. Should not one expect there to be several different modified ribosomes in response to antibiotic pressure? We will invoke antibiotic warfare as a major driver in the origin of archaea, but we feel our scenario better sets the stage for why this was a unique event. Antibiotic warfare on its own is not enough to account for the vast differences between the prokaryotic superkingdoms, but it certainly was important.\nJames Lake and colleagues have also constructed a detailed tree of life using indels [10]. His group has focused more on indels that can be polarized using paralogous outgroups. The strength of Lake et al.'s method is that it provides evidence for derived and ancestral groups, which we feel is essential for understanding evolutionary histories. The polarizations are largely independent. This allows one to refine the tree because a flawed polarization will only affect one part of the tree. Like Gupta, his group roots archaea within the firmicutes and provides several independent reasons why this makes sense [12]. Lake has also proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structure [13,14]. We find arguments like this appealing because they are a synthesis of both sequence and structural data. We discuss the strengths and weaknesses of that particular hypothesis at length below.\nThe weakness of the indel method in general is the difficulty in properly aligning paralogs, as we argued in [11]. Fortunately, polarizations are mostly independent, so changing a polarization does not invalidate the whole tree; it just refines it. We argue that the refined version of Lake's tree is completely consistent with Cavalier-Smith's [11]. There are very few universal paralogs, so this method certainly needs to be supplemented with other data sources.\nCavalier-Smith has discussed the relationship and origin of the superkingdoms at length [8,15,16]. The major difference between his hypothesis and those of Gupta and Lake is the placement of the root in the gram-negative bacteria. He also roots archaea within or next to the actinobacteria. Cavalier-Smith constructed his tree by polarizing multiple types of data, including indels, membrane structure, and quaternary structure. Again, if any one of these polarizations is brought into question it does not weaken the remainder.
Cavalier-Smith included unique supporting discussions from the prokaryotic fossil record. His analysis concludes that there is no fossil evidence to indicate archaea are older than eukaryotes, despite much evidence that bacteria are older than eukaryotes. That said, there are several aspects of Cavalier-Smith's tree that still do not sit well with us. His hypothesis relies on the assumption that archaea are holophyletic (eukaryotes are their sisters, not their descendants). He provides some justification for this, but we will discuss below why we believe this is not a completely safe assumption at this time. Cavalier-Smith's rooting of the neomura (his term for archaea, eukaryotes and their last common ancestor (LAECA)) is in the actinobacteria. He cites traits shared between the eukaryotes and actinobacteria to support this hypothesis, but they are only relevant if archaea are holophyletic. We provide an alternative interpretation of this distribution by invoking an actinobacterial endosymbiont near the root of eukaryotes. Cavalier-Smith argues thermophily was the major force that led to the neomuran revolution. We feel this argument falls short for the same reason as Gupta's; it just does not seem to be a unique enough selective pressure to create a novel superkingdom. Cavalier-Smith prefers the labels archaebacteria and eubacteria because he feels the labels archaea and bacteria overemphasize the difference between these superkingdoms. We disagree: these superkingdoms are fundamentally different. Despite that, we still believe archaea evolved from within bacteria.\nNone of these scenarios adequately addresses the origin of the DNA replication machinery shared between archaea and eukaryotes. Therefore we invoke the ideas of Patrick Forterre, who has proposed that cells received the ability to replicate DNA from viruses. He proposes this occurred three times, each event resulting in the birth of a superkingdom [17,18]. The amazing variation in DNA replication machinery found throughout the virosphere supports this idea. All extant cells use double-stranded DNA, but viruses can have many other forms of genetic material (reviewed in [19]). The plasticity of replication in the virus world certainly could lead to innovations of great importance in the cellular world.\nThere are two weaknesses to this view in our opinion. First, it is DNA-centric, so it necessarily neglects the many other important differences between the superkingdoms. Second, it is firmly placed within the framework of the classical rRNA tree. Forterre even assumes eukaryotes are a primordial lineage, as a consequence of taking the sequence tree too literally. We will demonstrate that this view is also highly informative if archaea are derived from bacteria. It has also been noted that other extrachromosomal elements could play key roles in the evolution of the different DNA replication systems [20], but that discussion is also firmly grounded in the canonical rooting.\nTaking all these viewpoints together, it would seem an uphill battle to argue that archaea are a derived superkingdom. One needs to provide compelling evidence that archaea are derived, so we will review our data that support that view. Any hypothesis that addresses how a bacterium could become an archaeon would have to explain dramatic changes in membranes, DNA replication, and ribosomes. We will demonstrate that the ribosome can have great plasticity under certain circumstances.
It has been previously argued that the firmicutes have many of the enzymes needed to make archaeal membranes [21]. We will invoke viral endosymbiosis to explain the differences in DNA replication. For the reasons discussed below, the hypothesis must work whether archaea are paraphyletic or holophyletic. Finally, it must also address the rarity of the event that led to this revolution. If a hypothesis could do all of these things, it would make a compelling argument for the origin of archaea.", "[SUBTITLE] Three reasons why archaea are derived [SUBSECTION] Several large indels are shared between archaea and gram-positive bacteria, and both groups only have one membrane [9]. Thus, if there is a direct relationship between the gram-positives and archaea, the root is either between them, or one is derived from the other. Every piece of evidence that is polarizable implies archaea are derived from bacteria. Arguments that archaea and bacteria are so different that they both evolved from LUCA sidestep directionality altogether. The only recent work that explicitly roots the tree in archaea is that of Wong et al. [22]. Many of their arguments are based on assumptions about the nature of LUCA and assumptions of what a primitive state would look like. None of their arguments are true polarizations. To the best of our knowledge there is no single polarized argument for an archaeal rooting that is on par with the three we shall discuss that place archaea as derived.\nThe first of these arguments is the proteasome. Proteasomes are self-compartmentalized, ATP-dependent proteases that are found in varying degrees of complexity across the tree of life. All archaea contain a 20S proteasome, which is composed of 28 subunits and is encoded by at least two genes that are clearly homologs. Therefore the 20S proteasome must be the result of duplication. Cavalier-Smith has argued that the simpler bacterial homolog HslV (heat shock locus v) could be duplicated to generate a 20S proteasome [8,16]. Loss of a subunit in the 20S proteasome would result in an open proteasome with no ATPase. Such a protein would lose the essential function of controlled degradation found in proteasomes, and does not make sense as an intermediate. It is more likely that the 20S proteasome is derived from a simpler structure. Cavalier-Smith excludes the root from archaea because all archaea contain a clearly derived protein.\nHowever, there is a counterargument to that proposal: LUCA had HslV, and LACA (the last archaeal common ancestor) is the point in the tree where HslV evolves into the 20S proteasome (Figure 1A). This would still exclude the root from the crown archaea, but it still allows for the possibility that the root is between the extinct stems of archaea and bacteria. Excluding the root from archaea will never be enough because one can always invoke stem lineages that show up before the derived trait. This would imply the 20S proteasome present in actinobacteria is probably the result of a horizontal transfer from archaea. However, we have observed that the two proteasome genes are often in the same operon in actinobacteria, but rarely together in archaea. This weakly polarizes the direction of the horizontal transfer to the archaea.\nTwo scenarios for interpreting the three polarizations. A) Under the canonical rooting, proteasome evolution would require several selective sweeps and large-scale loss.
The monomer PyrD B would have evolved from one of the more complex quaternary structures, and the derived insert in EF-2 would occur after LACA. B) Under the gram-negative rooting, Anbu could be ancestral to both HslV and the 20S proteasome. PyrD could evolve via stepwise increases in structural complexity, and there is no need to invoke extinct stem archaea to explain the EF-G insert. We believe these transitions argue for a gram-negative rooting.\nHowever, there is stronger evidence that narrows the root to within the bacteria. Our own work argues that the Anbu proteasome (or peptidase, according to [23]) is more likely than HslV to be the 20S proteasome's direct ancestor, based on both sequence data and structure predictions [24]. This argument is much stronger than Cavalier-Smith's because HslV is widespread in the gram-positives but Anbu appears to be missing in them altogether (Figure 1B). If the divide between archaea and bacteria is the earliest split in the tree, and our hypothesis on proteasome evolution is correct, then LUCA must have had Anbu. This would mean that all extant gram-positives need to have lost Anbu while the gram-negatives (which must be derived from gram-positives in this scenario) somehow retained Anbu. One would have to invoke a selective sweep of the 20S proteasome in archaea, and of HslV in the gram-positives. It is plausible that the 20S proteasome outcompeted Anbu or HslV, since they are almost never found in the same genome. However, Anbu and HslV are found together in many genomes, which is evidence that neither totally displaces the other in terms of function. Our arguments about Anbu are based on structure prediction, but a crystal structure could experimentally verify those predictions. If we are correct it may be the smoking gun for a gram-negative rooting, but even without that there is ample evidence to support Cavalier-Smith's position. Even if HslV is the direct ancestor of the 20S proteasome, the root can still be excluded from all extant archaeal lineages.\nThe recent analysis of the proteins that occur in Anbu's operon [23] presented evidence that we are wrong in labeling Anbu a proteasome, because it lacks an associated ATP-dependent protein required for unfolding substrates. HslV and the 20S proteasome clearly have associated ATPases dedicated to unfolding substrates. Therefore the transition to both of them is easier from Anbu, as no ATPase would have to be lost. The origin of HslV and the 20S proteasome would both involve the recruitment of distinct ATPase subunits. Therefore we think this new work strengthens our hypothesis that Anbu is ancestral to the 20S proteasome, because no intermediate would ever lose the regulatory ATPase. If our hypothesis is correct, proteasomes would be polyphyletic if they are defined by the presence of the ATPase subunit, as suggested in [23].\nThe indel in EF-2 shared between archaea and eukaryotes has been polarized using EF-Tu as an outgroup [25]. Our alignment-free analysis of this indel agrees with the authors' conclusions despite there being a sequence artifact in their original alignment [11]. This polarization robustly excludes the root from within archaea, but does not narrow it to within bacteria.\nIn that analysis we also presented a novel structure-based argument for polarizing archaea. The quaternary structure of PyrD 1B is a heterotetramer across the firmicutes and archaea. We argue that the heterotetramer is probably derived from the homodimer PyrD 1A based on the presence of a conserved interface.
The monomeric and homodimeric versions are present in the Gram-negatives and Actinobacteria. PyrD 1B is found across a gram-positive group and archaea, so it would have to be present in their last common ancestor, which is LUCA under the canonical rooting. This could be explained by the presence of both PyrD 1A and 1B in LUCA. But that scenario would require PyrD 1A to be lost in every archaeon and some firmicutes, and for there to be a reversion to the monomeric form, PyrD, across the gram-negatives and actinobacteria. PyrD 1B is probably derived, so it follows that archaea, firmicutes, and their last common ancestor are also derived.\nThe polarization of the indel in EF-2 excludes the root from the extant archaea. Our novel polarizations of Anbu and PyrD argue the root is within bacteria. If these arguments only excluded the root from all extant archaea one is left wondering why all archaea that are not clearly derived went extinct. The combination of all three arguments strongly supports the bacterial rooting of the tree. If archaea are derived, there must be some way of reconciling the major differences between them and bacteria.\n[SUBTITLE] Ribosomal revolutions are historical fact [SUBSECTION] Archaea cluster separately on phylogenetic trees based on ribosomal RNA [1]. This split has remained robust in many trees derived since then. We will discuss three scenarios that can explain this. The first scenario is that the ribosomal sequences are pretty good molecular clocks. The great splits seen in the tree reflect this most ancient divide in cellular life and are in accordance with the canonical rooting.\nThe second scenario also does not contradict the canonical rooting. It goes as follows. The ribosome in LUCA was incomplete. It did not have all the proteins found in extant archaea and bacteria, only the core that is universal between them. The addition of proteins after the split of the superkingdoms would start a quantum evolutionary event. Some sites would be free to mutate to achieve increased stability, while others would be under evolutionary pressure to maintain a strict structure-function relationship. The rate of mutation at different sites on the ribosome could vary wildly and exaggerate the true distance between the superkingdoms, even if they do represent a very ancient split.\nThe third scenario, which we champion here, is that the bacterial ribosome evolved into an archaeal one. Again this would be a quantum evolutionary event and sequences of both rRNA and ribosomal proteins would evolve rapidly. The point we are trying to make is that these three scenarios would result in exactly the same sequence tree. Hence we must look towards independent lines of reasoning to determine which of these scenarios best describes the tree branching.\nWe can exclude the first scenario by comparing the structure of the ribosome in archaea and bacteria.
In the 50S subunit there are six ribosomal proteins that are in the same position on the rRNA, but have non-homologous structures in archaea and bacteria [26,27]. These must have changed in at least one lineage since LUCA, regardless of LUCA's nature. Therefore, we should expect that the distance between archaea and bacteria would be exaggerated due to compensatory mutations in the rRNA and ribosomal proteins.\nIt is certainly reasonable to object to the third scenario because it seems implausible that a ribosome would change so much between superkingdoms, yet stay so well conserved within a superkingdom. However, there are two examples where we know that ribosome structure has indeed changed significantly. Mitochondrial ribosomes have changed dramatically from their bacterial ancestors. They have lost about half their rRNA and replaced it with additional proteins [28]. The eukaryotic ribosome evolved from an archaeal one (or technically from some sort of proto-archaeal ribosome if the archaea are holophyletic). There are eleven ribosomal proteins found only in the eukaryotes, nine of which are conserved across the superkingdom [2], and there is good separation on rRNA trees between eukaryotes and archaea. In the two cases where the ribosome structure has changed we know it changed from another fully functional ribosome. Thus, why would it be out of the question for it to happen between archaea and bacteria? There are five ribosomal proteins present across the crenarchaea, but absent in the euryarchaea [2]. These proteins were either lost or gained in one of these groups after they split. In either case there would be a transition between two complete ribosomes. In each of these cases we can clearly see that a ribosome can undergo dramatic changes in macromolecular structure when there is proper selective pressure (or relaxation of selective pressure).\nThe tree presented in [29] was constructed by concatenating 31 universal proteins. Twenty-three of these are ribosomal proteins and many more are directly involved in translation. Many taxa on the tree cluster together with high bootstrap values (greater than 80%). However, there appear to be only three connections between high-level taxa that are supported with that strength. The clustering of crenarchaea and euryarchaea is well supported, as is the clustering of eukaryotes and archaea. There is also a long, well-supported branch between the archaeal-eukaryal clade and bacteria. We doubt it is a coincidence that these splits correspond to the greatest changes in ribosomal structure on the tree. It appears the sequence tree in [29] and rRNA trees could be merely a reflection of the large changes in ribosomal structure that have occurred throughout the true tree of life. This protein set would be expected to work better as a clock within groups that have the same ribosomal proteins. Even if one uses more sophisticated tree-building techniques, such as those in [30], the major changes in the ribosome are still going to be problematic. The authors concatenated many translational proteins and the resulting tree supported the paraphyly of archaea. Eukaryotes were placed near the archaeal species with the most similar ribosomal structures. However, a single gene tree of RNA polymerase alpha subunit (RPOA) supported holophyly in the same study.
This implies some of their results are an artifact caused by structural changes in a ribosomal revolution.\nThe third scenario could certainly be weakened if it were found that all the ribosomal proteins were essential in bacteria and there was absolutely no way they could be tinkered with. We examined which ribosomal proteins are essential in eleven different bacterial species using the Database of Essential Genes [31]. There are sixteen ribosomal proteins that would need to be lost in the transition from a bacterium to an archaeon, as they are found across bacteria but never in archaea. None of these ribosomal proteins were found to be essential in all species, which is the first sign it is possible to lose and replace them. Four of the sixteen proteins are essential in all species except Mycobacterium tuberculosis (Table 1). Only four of these proteins are essential in M. tuberculosis, the least of any species in this data set.\nEssentiality of ribosomal proteins.\nThe essentiality of proteins that would need to be lost in the transition from an extant bacterial ribosome to an archaeal one varies from species to species. M. tuberculosis appears preadapted for the losses that would be necessary in the transition to an archaeon.\nTo determine whether this portion of the ribosome is significantly flexible we calculated a p-value assuming a binomial distribution. The essentiality of each subunit can be considered a success or a failure. The p-value measures the odds of seeing at most n essential subunits in a set of sixteen random ribosomal proteins. The odds of a random ribosomal protein being essential were estimated as the proportion of ribosomal proteins found to be essential in that species. This was done to eliminate experimental biases between the species sets, as some of the knockout experiments are more thorough than others. Several species had p-values under 0.05, but M. tuberculosis was by far the most significant, with a p-value of 0.0031. This implies that M. tuberculosis's ribosome is under different selective pressure than most bacteria, and that it is the ribosome in this dataset most preadapted to evolve into an archaeal ribosome.\nIt is highly counterintuitive that nearly every universal protein could be nonessential. The difference between essential and persistent genes was discussed in [32]. The authors point out that essentiality differs in the wild and laboratory settings. Many of the ribosomal proteins listed as nonessential are still highly deleterious to lose. But the point is they can be lost under the right circumstances. It might be our proteasome-centric view of the world, but we think the presence of the 20S proteasome in Mycobacterium could partially explain this observation. It has been proposed that the major cost of mutations and mistranslation comes from dealing with misfolded proteins [33]. The ribosomal proteins are among the most highly translated proteins in the cell, so there is a lot of pressure to ensure they fold correctly. A highly advanced degradation system, like the 20S proteasome with a Pup targeting system [34], could greatly relax that selective pressure. If the initial tinkering is not lethal, one can easily imagine a scenario where compensatory mutations and structures could rapidly and significantly change the ribosome if there is proper selective pressure. We will describe such a scenario below.\nIt has been observed that many bacteria contain paralogs of ribosomal proteins where one form binds Zn and the other does not [35]. M.
tuberculosis has duplicates of several ribosomal proteins, which could explain why some (but not all) of the ribosomal proteins are not essential in that genome. The authors note that thermophilic bacteria seem to prefer the Zn binding forms of the ribosomal proteins, and that there are seven Zn binding ribosomal proteins conserved across archaea and eukaryotes that are absent in bacteria. This is consistent with our ideas that major historical changes in the availability of Zn in the ocean were a significant constraint on protein structure evolution [36,37]. Bacteria vary their ribosomes to optimize for both high and low Zn conditions. One can imagine this strategy being taken to an extreme where the tweaks are not just simple displacements, but larger rearrangements. Increased availability of Zn, as the ocean became oxic, could be a factor that made toying with the ribosome favorable for the early archaea. This, combined with the antibiotic pressures discussed below, could lead to a ribosomal revolution, just as the presence of two ribosomes leads to a revolution at the root of eukaryotes.
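The binomial calculation described earlier in this subsection is simple enough to restate as a short script. The sketch below is ours, not the authors' code: the function name and the background counts in the example call are illustrative assumptions, while the sixteen-protein set size, the four essential proteins in M. tuberculosis, and the idea of estimating the background essentiality rate from each species' own ribosomal proteins come from the description above.

# Minimal sketch of the binomial p-value described above (not the original analysis code).
from scipy.stats import binom

def flexibility_pvalue(essential_in_set, set_size, essential_ribosomal, total_ribosomal):
    # Background rate: the fraction of this species' ribosomal proteins reported
    # essential, which absorbs differences in how thorough each knockout screen was.
    p = essential_ribosomal / total_ribosomal
    # Probability of seeing at most `essential_in_set` essential proteins in a
    # random draw of `set_size` ribosomal proteins.
    return binom.cdf(essential_in_set, set_size, p)

# Illustrative call: 4 of the 16 bacteria-specific ribosomal proteins are essential
# in M. tuberculosis; the background counts (40 of 54) are hypothetical placeholders,
# not values taken from the Database of Essential Genes.
print(round(flexibility_pvalue(4, 16, 40, 54), 4))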
[SUBTITLE] There is a great divide in DNA replication machinery, but it can be bridged [SUBSECTION] The differences between archaeal and bacterial replication machinery are vast [4]. Leipe et al. claim this difference is so great that it is unreasonable to argue that one prokaryotic superkingdom evolved from the other. They list four key functions of DNA replication that are performed by completely non-homologous proteins in archaea and bacteria: the main polymerase's polymerization domain, the phosphatase that powers the polymerase, the gap-filling polymerase, and the DNA primase. We will argue that the differences between archaea and bacteria do not imply the root of the tree of life has to be between them.\nWe must keep in mind there is some flexibility in the DNA replication machinery despite the division across the superkingdoms; consider two examples. First, many proteobacteria use a PolB family polymerase as a repair protein [38], which is almost certainly the result of HGT. Second, PolD appears to have been present in LACA, but was lost in the crenarchaea [39]. These two examples illustrate major changes in the replication machinery that occurred in DNA-based genomes possessing fully functional replication systems. We are arguing that an even larger event occurred between the prokaryotic superkingdoms. This event entailed viral transfers and novel innovations, but there are several proteins whose origins can be better described by vertical inheritance from the gram-positive bacteria, which we review first.\nKoonin et al. have demonstrated that many bacterial proteins have a region that is homologous to the small subunit of the archaeo-eukaryotic primase [40]. This domain is present in DNA ligase D from M. tuberculosis, which can act as a DNA-dependent RNA polymerase [41]. The rest of the protein is homologous to the ATP-dependent DNA ligase found in archaea and eukaryotes. Therefore, DNA ligase D is perfectly preadapted to replace the primase function of DnaG. The fission of the two halves of the protein would allow for the preservation of ligase activity while developing enhanced primase activity. A recent analysis of DNA ligases revealed many transfers between archaea, bacteria and viruses [42]. This history is very complicated, so it is hard to say with certainty where the archaeal enzymes originated. The large subunit of the primase may be a true innovation since it has no detectable bacterial homologs, but the small subunit of the primase and ATP-dependent DNA ligases both could have been inherited from the gram-positive ancestors of archaea.\nThe main helicase in bacteria is DnaB, while archaea use MCM6. Relevant to this discussion is the recent biochemical analysis of a protein in a prophage element in Bacillus cereus that has domains homologous to the MCM6-AAA domain as well as the small subunit of the archaeal primase [43]. The authors found that this protein was a functional helicase but had no primase activity. The narrow distribution of this prophage element implies its insertion was probably too recent to play a role in the origin of archaea.
However, it demonstrates that there can be a selective advantage for a DNA based genome to take novel DNA handling machinery from a virus and use it in a different context. We will come back to this point later.\nBacteria use DnaA to define the origin of replication, while archaea use Cdc6. These proteins have a homologous AAA+ ATPase domain, but have little similarity otherwise. However, the bacterial protein RuvB has the same domain combination as Cdc6. RuvB, Cdc6, and DnaA were all put in the same superfamily in a recent classification of AAA+ domains [44]. RuvB is recruited to Holliday junctions by RuvA where it forms a hexamer around the DNA [45], just like Cdc6. It is plausible that Cdc6 evolved from RuvB.\nArchaea use a protein called Hjc to resolve Holliday junctions instead of the bacterial RuvABC system. Hjc is related to the alternative bacterial system RecU [46]. The only bacteria that use RecU are the firmicutes, and they also have RuvABC. We argue below archaea are derived from within the firmicutes. It is possible that the redundancy in Holliday junction systems allowed RuvB to drift in function. The homology between RecU and Hjc could be explained by the presence of a Holliday junction resolvase in LUCA under the canonical rooting. However, if the hypothetical RNA-DNA hybrid LUCA proposed in [4] was dealing with Holliday junctions we argue it probably would also need topoisomerases at that point. However, since the distribution of topoisomerases is different across the prokaryotic superkingdoms [47,48] that would imply the ancestral topoisomerase was displaced in at least one lineage. This weakens the proposal in [4]. We feel it is more likely archaeal topoisomerases evolved from bacterial ones as Cavalier-Smith has proposed [16].\nThere are certainly large differences between archaeal and bacterial DNA replication machineries. We have demonstrated the divide between replication systems has some flexibility, and this opens the door for a replication revolution. It is possible to come up with detailed scenarios for how each of the archaeal replication proteins originated. These results are summarized in Table 2. We will elaborate on this scenario below. However, there are several archaeal replication proteins that do not appear to have any homologs in bacteria; namely histones, PolD, and the large subunit of the archaeal-eukaryal primase. These are true innovations, but there really are not that many of them; certainly not enough to make the transitions seem unreasonable in light of the polarizations presented above.\nSummary of differences in DNA replication machinery of Archaea and Bacteria.\nThe list of protein functions was compiled from box 1 in [20] and table one in [129]. Italics indicate a probable horizontal transfer to a superkingdom. There are very few proteins in archaea that are true innovations. Many of their unique replication proteins could be recruited from bacterial or viral systems. A * indicates the Superfamily database was used to predict domain assignments of PDB entries not yet classified in SCOP.\nThe proposal of two independent inventions of DNA replication has recently been challenged [49]. The authors argue that ribonucleotide reduction is thermodynamically unfavorable, so convergent evolution is highly unlikely. They note that all ribonucleotide reductases have been shown to have a monophyletic origin. 
Finally, they argue that the proteins that are universally conserved imply a high-fidelity replication system in LUCA that could not have been RNA-based. The hypothesis that the root must be between the superkingdoms is diminished when one combines these arguments with the scenarios we have outlined here.\n[SUBTITLE] Are the Archaea Paraphyletic or Holophyletic? We're agnostic [SUBSECTION] So far we have presented several independent arguments that strongly polarize archaea as a taxon derived from within bacteria. We have demonstrated that although there are vast differences in the ribosomes and DNA replication machinery between the prokaryotic superkingdoms, none of the arguments associated with their respective proteins seems insurmountable. We will soon present a novel hypothesis to account for this, but first we must pinpoint the bacterial roots of archaea. One cannot properly reason about this rooting without first discussing whether archaea are paraphyletic (eukaryotes branch within them) or holophyletic (eukaryotes are their sisters). As there is clearly a relationship between archaea and eukaryotes, it is vital to differentiate between these two scenarios to understand their origins. We will review the currently available data, and argue that for now precise agnosticism seems the best course; that is, any hypothesis on the origin of archaea must accommodate both models. That said, we lean towards holophyly and our hypothesis does as well.\nEukaryotes and Archaea are sister clades under the standard three-domain model. However, James Lake proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structures [13,14]. This hypothesis never gained much support because there was little phylogenetic evidence to corroborate it. However, recent work [30] has shown that there is sequence data that implies archaea are paraphyletic and eukaryotes have a crenarchaeal-like ancestor. Conversely, another analysis done around the same time supported a deep branching archaeon as the host of mitochondria [50], which would be inconsistent with the eocyte hypothesis. They demonstrate that eukaryotes inherited both crenarchaeal- and euryarchaeal-specific proteins, so ancestry from either group alone is not enough to explain the eukaryotic protein repertoire. However, several deep branching archaeal genomes from korarchaeota and thaumarchaeota are now available and change the context of some of these conclusions [51,52]. Both of these groups appear to contain a mix of crenarchaeal and euryarchaeal genes, so the observation in [50] could be explained by a member of one of these groups being ancestral to eukaryotes.\nCavalier-Smith's hypothesis on the origin of the neomura relies on a sisterhood relationship between archaea and eukaryotes [16]. As discussed below, he mainly roots the neomura using traits actinobacteria share with eukaryotes, but not archaea. This only makes sense if archaea and eukaryotes are sisters; otherwise the traits should be present in at least some archaea. He lists eight properties that are unique and ubiquitous in archaea [16]. All of these traits strongly imply that archaea are monophyletic. However, most of them do not differentiate between whether archaea are holophyletic or paraphyletic.\nFor instance, the unique isoprenoid ether lipids found in all archaeal membranes are best explained by their presence in LACA. Eukaryotes have lipids that are more similar to those of bacteria. It would be more parsimonious for archaea and eukaryotes to be sisters with a single change in lipid structure.
Any other scenario requires a reversion in eukaryotes back to the bacterial state. Even though this is not parsimonious, it is not out of the question because the mitochondrial ancestor would have had all the necessary genes to make bacterial membranes [53]. We have to admit this does not seem unreasonable relative to the innovations we are discussing in this work. This certainly seems like a case where simple parsimony in terms of any one trait, even membrane structure, will be misleading.
The only one of these properties that appeared really informative in regard to this problem was the split gene for RPOA. RPOA is the only single-gene tree that supported the three-domain model in [30], so it is clear eukaryotes did not get this protein from the mitochondrial ancestor. Reassembling the split gene is highly improbable, so there is no reason to doubt the fused genes are monophyletic. This strongly contradicts the original eocyte hypothesis. However, novel genomic data have revealed that representatives from the deep-branching phyla korarchaeota and thaumarchaeota have the non-split form of this gene [51,52]. This opens the door for a more specific version of the eocyte hypothesis where eukaryotes stem from either of these groups. Therefore, we have examined what additional data have to say about these taxa. The branch order between them has not yet been resolved, but it appears safe to assume they both branch before the split between crenarchaea and euryarchaea. This branching is supported by several phylogenetic trees as well as the non-split RPOA. This assumption will be key to our subsequent reasoning in several ways.
It seems impossible to come up with a scenario utilizing all the traits we discuss below that is completely parsimonious for all traits at the same time. With that in mind, we have tried to reason which traits can be better explained by convergent evolution than others. When we observe convergent evolution happening at an indel site we do not consider it informative. Independent loss in any form is much easier than independent invention. Loss seems to be the rule rather than the exception in archaea. Both the thaumarchaeota and korarchaeota have traits that were thought to be specific to either euryarchaea or crenarchaea. For instance, euryarchaea use FtsZ for cell division while crenarchaea use the cdvABC system. Intriguingly, the thaumarchaeotal genomes have orthologs of both of these systems [54]. This implies that the crenarchaea and euryarchaea each lost one of these systems. This is not the most parsimonious solution, but it is the only one that is consistent with the apparent branch order of these taxa. Many other traits have the same distribution pattern. It is clear that groups of archaea can lose proteins of major functional importance. We will attempt to address these distributions in our hypothesis below.
Beyond the EF-1 indel that implies paraphyly, six highly conserved indels were found to be informative in describing the relationship between archaea and eukaryotes in [50]. The authors only looked at derived insertions with well-conserved sequences. They state that four indels argue for the holophyly of archaea. There is one indel that is shared between eukaryotes and crenarchaea, as well as one shared between the euryarchaea and eukaryotes. This implies there was a reversion in at least one lineage or a horizontal transfer.
We have analyzed those six indels as well as EF-1 in the context of the recently sequenced deep-branching genomes (Table 3).
Only the indels that differ between archaeal groups are useful for determining their branch order. Therefore we only created alignments that contained archaeal sequences, to ensure these indels were not artifacts created by including eukaryotic and bacterial sequences. Where possible we also used structural alignments from representatives of the superkingdoms to further ensure the larger indels were real (a methodology similar to that used in [11]).
Analysis of potentially informative gene structures in korarchaeota and thaumarchaeota.
Each indel was analyzed by creating an alignment of archaeal sequences from BLAST searches. We consider these results to be inconclusive until thaumarchaeota and korarchaeota are sampled better.
First, the reported indel shared between euryarchaea and eukaryotes in the DNA repair protein RadA appears to be an artifact. The euryarchaeal and crenarchaeal sequences align well in the indel region (Additional File 1; Figure S1). This is important because it was the only line of evidence in that work that implied a relationship between euryarchaea and eukaryotes. This new alignment, in conjunction with the split RPOA gene, implies eukaryotes either descend from within the deep-branching archaea or are their sisters.
We also argue that the two reported indels in the alignments of Beta-glucosidase/6-phospho-beta-glucosidase/beta-galactosidase (PBG) and ribosomal protein S12 are both uninformative based on the authors' own analyses (supplemental data from [50]). The indel in ribosomal S12 is conserved across all archaea and eukaryotes, so it implies nothing about their branch order. The indel in PBG is uninformative because the authors conclude the eukaryotic version of this gene is probably of bacterial origin (supplemental data from [50]). Therefore, the state of the gene in archaea implies nothing about the branch order of these groups.
Two of the remaining four indels are only a single residue. The glycine insertion in SecY is present in thaumarchaeota and eukaryotes, but absent in korarchaeota. That weakly implies a relationship between eukaryotes and thaumarchaeota. However, the fact that the insertion is present in some of the deep-branching taxa, but not in all euryarchaea, implies there was at least one secondary loss of this insertion. This is reasonable since the insertion is a single glycine residue and will not have a dramatic effect on protein structure.
The single-residue insertion in prolyl-tRNA aminoacyl synthetase initially implied archaea were holophyletic; however, the insert is missing in the thaumarchaeal genomes. When these genes are used to seed a BLAST [55] search they hit firmicutes more strongly than they hit other archaea. This implies a possible horizontal transfer to thaumarchaeota. If so, this insert could still support holophyly, but that cannot be concluded with absolute certainty.
This leaves us with two larger indels in EF-1 and glutamyl-tRNA amidotransferase subunit D (gatD). The seven-AA insert in gatD is well conserved in the archaeal alignment. A structural alignment with a bacterial homolog reveals this indel is not an artifact caused by the sequence alignment (data not shown). The phylogenetic tree for this family (presented in the supplemental data of [50]) places archaea and eukaryotes as sisters with 100% bootstrap support. This is remarkable because the archaeal proteins have a different domain combination and quaternary structure than the eukaryotic and bacterial ones [56]. However, it seems that tree is too good to be true.
We have attempted to verify the history of this indel, and found that the tree in [50] was missing a bacterial paralog. E. coli has members of two paralogous families of L-asparaginases [57], and it appears only one of them was present in the initial tree. The tree in Additional File 2; Figure S2 shows that fungi and the rest of the eukaryotes received the same domain superfamily from two distinct sources. Their sequences are mixed in with some bacteria, which implies there were some recent horizontal transfers. This tree is not well resolved, but it certainly does not support the notion that eukaryotes inherited this protein from their archaeal ancestor. That, as well as the differences in domain combination and quaternary structure, implies this indel is inconclusive with regard to holophyly versus paraphyly.
EF-1 also appears inconclusive. The insert shared between crenarchaea and eukaryotes is present in thaumarchaeota, but not korarchaeota. Our alignment revealed there are actually four different forms of indel at this site in archaea (Additional File 3; Figure S3). This implies there is some plasticity in this region in archaea. This is in contrast to the bacterial alignment, which has no indels in this region. A structural alignment between a bacterial representative from E. coli and an archaeal one from Sulfolobus solfataricus reveals the conserved glycines in the sequence alignments are very close in their position in both forms of this indel (Figure 2). It is possible there were two insertions near the root of archaea that preserved the position of that residue. This indel's history does not appear to be parsimonious, which weakens its usefulness as a marker. Therefore, this indel appears to weakly support archaeal paraphyly, but we consider it inconclusive.
Structural alignment of EF-1 and EF-Tu. The structural alignment of EF-1 (1JNYA) and EF-Tu (1EFC) in A, and the corresponding sequence alignment in B, show the potential for two independent indels in this region that confounds analysis.
The ribosomal proteins are the other side to this story. In a previous study, five ribosomal proteins were found in at least one crenarchaeon, but not in any of the euryarchaea (L38e, L13e, S25e, S26e and S30e) [2]. These, as well as four others that are not universal in archaea, are conserved across eukaryotes. We examined what ribosomal proteins are present in the thaumarchaeal and korarchaeal genomes (Table 4). It still appears that Lake is correct that crenarchaea have ribosomal proteins more similar to those of eukaryotes than any other group of archaea does.
Informative ribosomal proteins in thaumarchaeota and korarchaeota.
This table was constructed from [2]. The values listed were taken from searches of the Pfam website. Ribosomal proteins L20A and L30E were not well defined in Pfam so BLAST searches were performed instead. These results support the eocyte hypothesis, but it is plausible that there were independent losses of ribosomal subunits in archaea based on additional data.
The korarchaeota are missing three ribosomal proteins found in some crenarchaea and eukaryotes. They have five ribosomal proteins that are present across eukaryotes but absent in thaumarchaeota. There are two ways we can interpret this trend. If archaea are paraphyletic, then this distribution is best explained by the invention of ribosomal proteins after LACA. LECA could branch between the korarchaeota and crenarchaea, before the RPOA gene split.
The alternative interpretation is that archaea are holophyletic and the archaeal ancestor had all the ribosomal proteins that are in any archaeon and at least one eukaryote. There would have to be several independent losses of each of these ribosomal proteins. Again this is not parsimonious, but there is evidence it has occurred several times, so we must consider it. Similarly, it can be argued that if a protein is present in korarchaeota and crenarchaea, but absent in euryarchaea, it must have been lost. The archaeal ribosomal proteins are more dispensable than their counterparts in the other superkingdoms [2], so they might not be a reliable marker for rooting eukaryotes in archaea.
For now it seems the only reasonable stance in light of all of this evidence is agnosticism. Only when thaumarchaeota and korarchaeota are sampled better, and their positions in the archaeal tree are determined robustly, will it be possible to state with confidence whether archaea are holophyletic or paraphyletic. We might always be left trying to weigh whether reversion of ribosomal proteins or indels is the more parsimonious scenario. However, several of these traits clearly exclude the root of eukaryotes from within crenarchaea and euryarchaea. Therefore, any hypothesis on the origin of eukaryotes that invokes specific taxa within those groups can be rejected with confidence (for a discussion of the many hypotheses on this subject see [58]). However, it may be possible to rework those scenarios to fit thaumarchaeota or korarchaeota once they are sampled better.
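For readers who want to repeat this kind of indel survey, the sketch below illustrates one way the archaeal-only alignments described above could be generated and inspected. It is only an outline under assumed inputs, not the exact pipeline used here: the input FASTA of BLAST hits, the use of MAFFT, and the alignment coordinates of the indel window are placeholders that would need to be adapted to the protein in question.

```python
# Illustrative sketch (not the original pipeline): align archaeal homologs and
# report which sequences carry residues in a chosen indel window.
# Assumes MAFFT is installed and "archaeal_homologs.fasta" holds BLAST hits.
import subprocess
from Bio import AlignIO

# Align the unaligned homologs with MAFFT; --auto lets MAFFT pick a strategy.
with open("archaeal_homologs_aln.fasta", "w") as out:
    subprocess.run(["mafft", "--auto", "archaeal_homologs.fasta"],
                   stdout=out, check=True)

alignment = AlignIO.read("archaeal_homologs_aln.fasta", "fasta")

# Alignment columns (0-based) spanning the putative indel, chosen by eye.
INDEL_START, INDEL_END = 240, 247

for record in alignment:
    segment = str(record.seq[INDEL_START:INDEL_END])
    state = "insert" if segment.replace("-", "") else "gap"
    print(f"{record.id}\t{segment}\t{state}")
```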
Weakening the neomuran hypothesis
Now that we have argued for the true distance between the superkingdoms, we can begin to address how it could be bridged. From our discussion above we feel we must be cautious about declaring the debate closed on the holophyly of archaea. Therefore, we are more interested in traits shared between a group of bacteria and all archaea than those shared with eukaryotes. Cavalier-Smith has presented fourteen reasons why the root of the neomura is probably within or next to actinobacteria [16].
Two of these traits are shared between actinobacteria and neomura, but the other twelve are only shared between eukaryotes and actinobacteria. Under this scenario these twelve traits would be lost in the ancestor of archaea, which implies archaea are holophyletic. We will review these fourteen traits, and argue that placing the archaeal ancestor in the bacilli makes more sense. We use the term neomura to refer to the clade of eukaryotes and archaea, but when we refer to the neomuran hypothesis we refer to Cavalier-Smith's rooting of that clade in the actinobacteria.\nThe first piece of evidence that places the neomuran root near actinobacteria is the proteasome. Actinobacterial and archaeal 20s proteasomes are well separated on phylogenetic trees which implies the presence of the 20s proteasome across these groups is not the result of recent horizontal transfers. Recently 20S proteasomes have also been found in sequenced genomes from verrucomicrobia [59] and leptospirillum metagenomic sequences [60]. This somewhat weakens the actinobacterial argument for ancestry, as archaea could have inherited a proteasome from these other groups. However, these recent findings do not weaken the polarization argument; it just excludes the root from these additional groups.\nThe second trait apparently shared between actinobacteria and all neomura is the post translational addition of CCA to the 3' end of tRNAs. The gene performing that function in archaea is tRNA CCA-pyrophosphorylase (protein cluster PRK13300 [61]). One of the domains, PAP/Archaeal CCA-adding enzyme, does not hit any bacteria in the Superfamily database [62]. Since the CCA addition is performed by nonhomologous enzymes this is not strong evidence for rooting neomura. There is also an analogous enzyme conserved across bacilli (protein cluster PRK13299). Even if archaea inherited this function from their bacterial ancestors, it is not clear which gram-positive group provided it.\nNow we must address the remaining dozen traits shared between actinobacteria and eukaryotes. Although there were initial reports of sterol synthesis in the actinobacteria [63,64], the latest work has found no evidence for a complete pathway [65]. The authors report that the few cases of the full pathway in bacteria (all outside the actinobacteria) are probably the result of horizontal transfer. However, they find several sterol synthesis enzymes are present in many actinobacteria. They conclude these are probably the result of a transfer from eukaryotes, but this is not supported by their trees, which show good separation between eukaryotes and actinobacteria. Several sterol enzymes appear to have been inherited vertically from actinobacteria to eukaryotes. This is certainly consistent with Cavalier-Smith's hypothesis. This is a good example of the dangers of closing the debate on the position of the root too soon. Their trees clearly support an alternative hypothesis, but that data is buried in the supplemental material without discussion of the opposing view.\nInitial reports also claimed the presence of chitin in actinobacteria [66]. However, there is no gene for chitin synthase in actinobacterial genomes. Several of them have chitinase which breaks chitin down. Also, chitin is found in metazoa and fungi, but not in archaeplastida which implies this enzyme was not in LECA.\nIt is true that actinobacteria have many serine/threonine signaling systems related to cyclin-dependent kinases [67]. This would be a key preadaptation to the cell cycle. 
However, it has recently been shown that Bacillus subtilis also has an extensive network of such regulation [68]. Therefore this line of evidence is consistent with either gram-positive group being ancestral to neomura.
Phosphatidylinositol is an interesting case. Recent work on this subject confirms the presence of phosphatidylinositol synthase as well as the eukaryotic form of cardiolipin synthase in many actinobacteria [69]. These enzymes are paralogs. We could not create a quality tree for this superfamily because the alignment was of low quality. However, BLAST searches showed a good separation between prokaryotic and eukaryotic sequences, which implies this is not the result of a recent HGT. It is difficult to determine exactly what family each prokaryotic homolog belongs to, so it is hard to say with certainty what other groups of bacteria have phosphatidylinositol. It is certainly possible eukaryotes inherited phosphatidylinositol from actinobacteria.
Some actinobacteria do have an α-amylase with similar primary structure to the form found in metazoa, but a recent comprehensive study found several other bacteria that did as well [70]. The authors concluded this was probably the result of a horizontal transfer due to their position in the phylogenetic tree as well as the extremely sparse distribution of this form in actinobacteria. Therefore, this is not evidence for actinobacterial ancestry of the neomura.
The fatty acid synthetase (FAS) complex found in actinobacteria is unique among bacteria in that it is the same form as found in some fungi [71]. These fungi have the FAS complex split into two genes, but actinobacteria have it fused. Our phylogenetic trees are consistent with actinobacterial ancestry (Figure 3). However, the distribution of the fungal-type complex in eukaryotes does not conclusively prove that this enzyme had to be in LECA. The only group outside the Fungi with this complex is the stramenopiles. However, the animal-type FAS is also present in some alveolata, so there could be some functional displacements. Actinobacteria probably played a role in the evolution of this enzyme in eukaryotes, but not necessarily via the neomuran hypothesis.
Maximum likelihood tree of fungal-type Fatty Acid Synthase (FAS) complex. This tree implies eukaryotes did not get FAS from a recent transfer, but it is also not clear whether or not it was in LECA. Circles indicate the split form of the gene. This gene is split in two different places in the fungi, indicated by the yellow and red circles.
The argument that the exospore structure of actinobacteria could be a precursor to eukaryotic spore structures seems sound [72], but we are unable to locate a list of proteins involved in exospore formation. Without specific protein homologs we cannot begin to evaluate this with bioinformatics. However, this argument becomes irrelevant if one invokes a viral ancestor of the nucleus as in [73].
Cavalier-Smith has also suggested that the C-terminal HEH domain found in the Ku proteins of some actinobacteria is ancestral to the HEH domain found in the eukaryotic Ku70 protein. However, the sequence analysis in [74] conclusively demonstrates eukaryotes did not inherit the HEH domain from actinobacteria. This domain is very compact and common. Therefore, it is not out of the question that it was recruited twice to the C-terminus of similar structures.
Consequently, we do not take this as evidence that eukaryotes inherited Ku from actinobacteria.
Several traits initially listed as unique to actinobacteria are now found in enough other bacterial groups to be considered ambiguous markers. Actinobacteria do have tyrosine kinases, but these have recently been placed in a bacteria-specific family, BY-kinase [75]. This family is present across actinobacteria, firmicutes, and proteobacteria, so it does not exclusively support an actinobacterial rooting of the neomura. Many groups of bacteria have HU (histone H1 homologs) according to the Superfamily database. This protein is relatively short, so we should not expect sequence to resolve its history. It is possible this protein was inherited from actinobacteria, but there are too many other possibilities to state that with certainty. Calmodulin-like proteins are now found in many bacteria, so this trait is not specific enough to root neomura near actinobacteria, as Cavalier-Smith now admits [8]. The Superfamily database reveals that trypsin-like serine proteases are present in many groups of bacteria, but absent in archaea. This appears to be another trait that is too general to be useful for rooting neomura.
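Many of the arguments in this section reduce to asking whether a marker protein is confined to actinobacteria or scattered across other bacterial groups. The sketch below shows how a crude presence/absence survey could be scripted with a remote BLASTP search; it is a rough illustration rather than the procedure used for the analyses above. The query file name and the keyword-based grouping are assumptions, and a serious survey would resolve hit taxonomy via the NCBI taxonomy database instead of matching strings in hit definitions.

```python
# Illustrative sketch (not the original analysis): tally remote BLASTP hits to a
# few named bacterial groups by scanning hit definition lines for group names.
from collections import Counter
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

query = next(SeqIO.parse("marker_protein.fasta", "fasta"))  # hypothetical query
result_handle = NCBIWWW.qblast("blastp", "nr", query.format("fasta"),
                               hitlist_size=500, expect=1e-10)
record = NCBIXML.read(result_handle)

groups = ["Actinobacteria", "Firmicutes", "Bacillus", "Proteobacteria", "Archaea"]
counts = Counter()
for hit in record.alignments:
    for group in groups:
        if group.lower() in hit.hit_def.lower():
            counts[group] += 1

for group in groups:
    print(f"{group}: {counts[group]} hits")
```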
Evidence that supports a firmicute ancestry for archaea
Skophammer et al. compiled several reasons to argue archaea are derived from bacilli [12]. There is an insert in ribosomal protein S12 that is present in archaea and bacilli (and maybe chloroflexi). Skophammer et al. conclude this indel is derived, but we argue elsewhere this polarization is flawed [11]. The insertion appears well conserved between archaea and bacilli regardless of whether it is ancestral or derived.
Skophammer et al. also note that there is a shared deletion between firmicutes and archaea in PyrD.
Our own work strengthens this connection by considering the quaternary structure of PyrD. The form that has the deletion also has an additional subunit, PyrK. The sequence and structure of the firmicute PyrD 1B are both shared by archaea. Our phylogenetic analysis of this protein implies this is not the result of recent horizontal transfers [11].\nSkophammer et al. note that many enzymes involved in the biosynthesis of unique archaeal membranes have previously been found in firmicutes [21]. The isoprenoid lipid precursors of archaeal membranes are made via the mevalonate pathway, which is five enzymes long. The KEGG database [76] reveals the entire mevalonate pathway is present in several bacilli as well as some actinobacteria (KEGG module M00191). The unique stereochemistry of archaeal membranes is determined by the enzyme geranylgeranylglyceryl phosphatase. Homologs of this enzyme are present in bacilli (protein cluster PRK04169), but appear to be absent in actinobacteria. The authors of an analysis of archaeal membrane biosynthesis propose that archaea became genetically isolated from bacteria once their membrane chemistry changed [77]. They suggest that archaea branched early from within bacteria, but their hypothesis is also consistent with a later gram-positive origin. Cavalier-Smith's own analysis [8] suggests that eukaryotic enzymes that make n-linked glycoproteins, which are necessary for the loss of peptidoglycan, evolved from the firmicute specific gene EspE. Therefore, for several reasons, the firmicutes are the bacterial group most preadapted to gain archaeal membranes.\nHomologs to ribosomal proteins L30e and L7ae are found across firmicutes. This is evidence of the link between firmicutes and archaea. Pfam [78] shows this family in several other groups, but many firmicutes contain two copies of this family. One of these paralogs has been characterized as a ribosomal protein, but neither is essential [79]. We constructed phylogenetic trees to see if they are consistent with vertical inheritance (Figure 4). There is good separation between the paralogs in firmicutes, which implies the duplication occurred early in firmicutes. All archaeal and eukaryotic genomes contain at least two copies of this family. The phylogenetic tree of the archaeal and firmicute sequences places the firmicute paralogs between the archaeal paralogs. The firmicute sequences are paraphyletic, albeit with very weak support. If these proteins are the result of independent duplications the archaeal sequences should cluster together, not appear on opposite ends of the tree. However, it is possible one of the archaeal sequences evolved rapidly after duplication.\nAlignment of L7Ae paralogs in archaea and firmicutes. This tree is consistent with a firmicute origin for two archaeal ribosomal proteins.\nOne of the paralogs in Bacillus subtilis was found to localize to a different portion of the ribosome than either of the archaeal paralogs [79]. The proteins would not only have to jump superkingdoms for a transfer to occur, they would also have to bind to a different region of the rRNA without interfering with ribosome assembly. We argue it would be less disruptive for a protein already present to gradually bind a different piece of rRNA. The separation between the superkingdoms in the phylogenetic trees also argues against HGT. If this is the result of vertical inheritance only two possibilities explain it. Either the firmicutes are ancestral to archaea, or the root lies between archaea and firmicutes. 
Our polarization of PyrD 1B's quaternary structure eliminates the latter rooting as a possibility. Thus this tree appears to support a firmicute ancestry for archaea, although it may just be the result of rapid evolution of structures in different contexts in the ribosome.
As discussed above, almost all the firmicute genomes have a unique Holliday junction resolvase, RecU, which is only found sparsely in other bacterial groups. It is homologous to the archaeal Holliday junction resolvase, Hjc [46]. Therefore the firmicutes have a DNA repair mechanism more similar to that of archaea than any other bacterial group does.
Hsp90 is missing in all archaeal genomes, so its presence across eukaryotes and bacteria implies it was inherited from the mitochondrial ancestor. However, a detailed analysis of this family did not reveal a relationship between eukaryotic and proteobacterial sequences [80]. Instead, the eukaryotic sequences branch within the gram-positive bacteria. The authors argue this supports the classical neomuran hypothesis, but eukaryotes are sisters to firmicutes rather than actinobacteria in that tree (albeit with moderate support). This would slightly favor firmicute over actinobacterial ancestry. In either case it supports the view that the archaeal ancestor lost Hsp90.
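The L7Ae/L30e argument above turns on whether archaeal and firmicute paralogs interleave on a tree rather than forming two superkingdom-specific clusters. As a quick illustration of how such a topology could be inspected, the sketch below builds a neighbour-joining tree from a pre-computed alignment with Biopython; the alignment file name is a placeholder, and the published analysis used a maximum-likelihood method, so this should be read as a sanity check rather than a reproduction of Figure 4.

```python
# Illustrative sketch: distance-based (NJ) tree of aligned L7Ae/L30e homologs,
# used only to eyeball whether archaeal and firmicute paralogs interleave.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

aln = AlignIO.read("l7ae_homologs_aln.fasta", "fasta")   # hypothetical alignment

calculator = DistanceCalculator("blosum62")               # protein distances
distance_matrix = calculator.get_distance(aln)

tree = DistanceTreeConstructor().nj(distance_matrix)
tree.ladderize()
Phylo.draw_ascii(tree)                                     # quick text rendering
```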
Peroxisomes: the red herring?
There are several traits present in either firmicutes or actinobacteria that argue they are ancestral to either eukaryotes or archaea. The only trait that argues actinobacteria are ancestral to the neomura is the proteasome. Several more traits make compelling arguments that actinobacteria are ancestral to eukaryotes, but certainly not the dozen traits listed in [16].
In Cavalier-Smith's most recent version of the neomuran hypothesis he concludes that firmicutes contributed a significant number of genes to the neomuran ancestor [81]. He proposed that the neomura originated as sisters of actinobacteria, and that both of these taxa are descendants of firmicutes. That proposal is dependent on his argument that actinobacteria are derived from firmicutes, which is one of the less developed ideas in [8]. We believe he is wrong in his assertion that our analysis of the indel in ribosomal S12 [11] does not support firmicute ancestry of archaea. It is only shared (and well conserved) between bacilli and archaea regardless of the polarization of that indel. Cavalier-Smith is also not aware of the arguments about L7Ae paralogs and RecU we present here for the first time. So we are left with a stronger list of reasons supporting firmicute ancestry and a weaker list for actinobacterial ancestry. However, there are still some key eukaryotic proteins that appear to have descended from actinobacteria. We will try to reconcile this apparent anomaly.
The peroxisome is an organelle with a single membrane, found across eukaryotes, that has various oxidative functions including the synthesis of some lipids [82]. Peroxisomes have been observed to divide independently of the rest of the cell, which initially led some to question whether they had an endosymbiotic origin [83,84]. Two recent studies both concluded that the peroxisome was likely derived from the endoplasmic reticulum [85,86], which led those initial proponents of peroxisomal endosymbiosis to abandon that idea.
However, [85] found that many peroxisomal proteins likely originated in cyanobacteria, α-proteobacteria, or actinobacteria. The authors suggest that the proteobacterial genes were probably transferred from the mitochondria, which is consistent with observations that mitochondrial genes are often retargeted to other organelles [87]. However, recent work argues for an endosymbiotic origin of the peroxisome from an actinobacterium [88]. These latter authors demonstrate that at least two proteins imported into the peroxisome are of actinobacterial origin, and that the peroxisomal proteome has higher average BLAST scores to actinobacteria than to any other group of prokaryotes. They argue that the retargeting of mitochondrial proteins after their genes migrate to the host's genome is easier than de novo targeting of peroxisomal proteins. They propose this masks the true history of the peroxisome.
The literature proposes two scenarios to explain the origin of the peroxisome: either the peroxisome was an endosymbiont, or actinobacteria were not endosymbionts. Clearly there is a third possibility: there was an actinobacterial endosymbiont, but the peroxisome is not a descendant of that membrane. That is to say, genes of endosymbiotic origin were targeted into the peroxisome, but historically they are foreigners there. How could this be? A primitive peroxisome derived from the endomembrane system would be beneficial because it would separate dangerous oxidative chemistry from the rest of the cell. Proteins would be targeted to the organelle with relative ease since that system would already have been developed through mitochondrial endosymbiosis. Genes would be copied from the actinobacterial endosymbiont to the host genome (but not necessarily lost in the actinobacterium), and then imported into the peroxisome. This would be advantageous because some of these reactions would do better in that specialized environment than in their original host.
Potentially there would be less cost involved in maintaining an organelle that already existed versus an entire endosymbiont. Once enough genes were present in the host, the actinobacterial endosymbiont would essentially be a parasite, and complete gene loss would be beneficial.
Contrast the peroxisome to organelles such as plastids and mitochondria, which retained both genomes and membranes long after they became organelles. Some have questioned why some organelles retain any genes at all [89]. These authors note that most genes retained in plastids and mitochondria are membrane-spanning proteins involved in core photosynthetic and respiratory systems. They agree with an earlier proposal that these proteins must be kept in the organelle to be able to quickly respond to, and balance, redox gradients [90]. In other words, plastids and mitochondria have retained membranes and genes because their functions are centered on membrane-based chemistry. The stripped-down endosymbionts perform these functions better than a novel organelle initially could, so they are left with a few essential genes and membranes they inherited from endosymbiosis. These genes come with a high cost because the organelles need to import the machinery to translate them as well as the machinery to replicate the genes that encode them. Therefore one can hypothesize that other endosymbionts whose functions are not as membrane-centric could be replaced by organelles that are not of endosymbiotic origin. Unfortunately, plastids and mitochondria have shaped our expectations that endosymbionts will leave both membranes and genomes behind. We believe this is an overly simplistic expectation.
We argue actinobacterial endosymbiosis accounts for the traits shared between eukaryotes and actinobacteria, as well as the phylogenetic trees that place actinobacteria as sisters of the peroxisomal proteins. The fact that numerous mitochondrial proteins are imported into the peroxisome is evidence this endosymbiosis occurred after mitochondrial endosymbiosis. This would reconcile the apparently conflicting signals in terms of which gram-positive group is ancestral to archaea and eukaryotes. We find this scenario more reasonable than invoking an extinct lineage of gram-positives that has all the traits listed in Table 5 and Table 6. However, if a genome is sequenced that contains the actinobacterial-specific traits as well as the firmicute-specific traits listed here, we would have no need to invoke endosymbiosis. It is also possible to reconcile the canonical rooting with the traits shared by actinobacteria by invoking this endosymbiotic hypothesis.
Table 5. Summary of data used to support actinobacterial ancestry of archaea. Many of these traits argue for an actinobacterial role in eukaryogenesis but not the origin of archaea. This list of informative characters is taken from [16].
Table 6. Summary of data that supports bacilli ancestry for archaea. The bacilli are more similar to archaea in terms of DNA repair, ribosome structure, and lipid metabolism than any other group of bacteria.
Viruses as the missing link between the prokaryotic superkingdoms
Now that we have argued for the true distance between archaea and bacteria, the time has come to cross that desert. As we have asserted above, this is a unique event in evolution, so we must properly set the stage.
The selective pressures associated with extreme environments and antibiotic warfare are ancient; however, they cannot cause a revolution on their own, so a significant relaxation in selective pressure is necessary. We argue that viral endosymbiosis could relax selective pressure enough to start such a revolution.
Koonin has observed that the PolB family of polymerases is the most common DNA polymerase in viruses [91]. Koonin et al. also observed that the archaeo-eukaryotic DNA primase is a hallmark viral protein [19]. This hints at some connection in DNA replication between archaea, eukaryotes, and viruses. We examined the distribution of all protein families in Pfam [78] that originated at the root of archaea and eukaryotes to see if this connection could be extended. We defined Pfam families that were present in at least 90% of archaeal genomes (46 at the time) and 90% of eukaryotic genomes (35 at the time) and in less than 50% of bacterial genomes (939 at the time) as originating at the root of archaea and eukaryotes. A 90% cutoff is strict enough to imply that the protein was present in LAECA, while a 50% cutoff is loose enough to accommodate recent horizontal transfers. Most of these Pfam families are well below the 50% cutoff in bacteria.
By this definition there are 74 Pfam domains that originated in LAECA; 24 of these are found in at least one viral genome (Table 7). On average each of these Pfam domains is present in 36.38 viral genomes (14.36 if one excludes PolB). As an approximate measure of the significance of this result we took 10,000 random samples of 74 Pfam domains that are found in at least one cellular genome to see how often one finds 24 or more in at least one viral genome. None of the random sets had that many viral Pfam domains, which implies this set is significantly enriched in viral proteins. However, we must keep in mind that our sampling of the viral world is still highly biased (discussed in [91]) and that viral genomes evolve rapidly. Viral genomes are sampled so poorly that none had the MCM domain from Pfam, even though it is found in a prophage region of some bacilli as discussed above. Further, 18 of the remaining Pfam proteins that originate in LAECA are ribosomal, which we assume are less advantageous for viruses to encode than the DNA replication machinery (although we did find several ribosomal proteins in viruses in this set).
Table 7. Pfam proteins that originated near LAECA and their distribution in the viral world. The Pfam families that originated in LAECA are more common in viruses than those that originated in LBCA.
We can also verify whether this result is significant by looking at the set of proteins that would be present in LBCA (last bacterial common ancestor), but not LAECA, under the same definition, that is, Pfam domains present in at least 90% of bacterial genomes and less than 50% of archaeal and eukaryal genomes. There are 106 such Pfam domains and 15 of them are found in at least one viral genome (p-value 0.2457). Each of those 15 is in an average of 8.33 viral genomes. It should be noted that this is an underestimate for LBCA's content since there are so many parasitic bacteria with genomic sequences available. However, in general viruses share more Pfam domains with LAECA than LBCA.
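The screen and the randomization described above are simple presence/absence operations, so they are easy to re-run. The sketch below shows the logic under the assumption that one has a table listing, for each Pfam family, the fraction of archaeal, eukaryotic and bacterial genomes containing it and the number of viral genomes containing it; the file name pfam_coverage.tsv and its column names are hypothetical placeholders, not part of the original analysis.

    # Sketch: select Pfam families inferred to originate in LAECA (>= 90% of
    # archaeal and eukaryotic genomes, < 50% of bacterial genomes) and test by
    # permutation whether they are enriched in viruses. Input file and column
    # names are assumptions.
    import csv
    import random

    with open("pfam_coverage.tsv") as handle:
        rows = list(csv.DictReader(handle, delimiter="\t"))

    # Families found in at least one cellular genome (the sampling pool).
    cellular = [r for r in rows if float(r["frac_archaea"]) > 0
                or float(r["frac_eukaryotes"]) > 0
                or float(r["frac_bacteria"]) > 0]

    laeca = [r for r in cellular
             if float(r["frac_archaea"]) >= 0.9
             and float(r["frac_eukaryotes"]) >= 0.9
             and float(r["frac_bacteria"]) < 0.5]
    observed = sum(1 for r in laeca if int(r["n_viral_genomes"]) > 0)
    print(len(laeca), "LAECA-origin families;", observed, "found in at least one virus")

    # Permutation test: how often does a random draw of the same size contain
    # at least as many virus-associated families?
    hits = 0
    for _ in range(10000):
        sample = random.sample(cellular, len(laeca))
        if sum(1 for r in sample if int(r["n_viral_genomes"]) > 0) >= observed:
            hits += 1
    print("empirical p-value:", hits / 10000)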
Koonin proposes, based on PolB's distribution, that archaea arose from an acellular ancestor and then retained the more ancient polymerase [91]. We find this view hard to reconcile with the three independent arguments for the derived nature of archaea provided above. Forterre has argued that DNA originated from a viral endosymbiosis in each of the superkingdoms [17], but our data argue against that scenario for the origin of bacteria. We propose the alternative hypothesis that viral endosymbiosis occurred in bacteria and gave rise to archaea. This virus would supply the missing link in terms of DNA replication machinery between the prokaryotic superkingdoms. We think this would have to be endosymbiosis and not just a horizontal transfer given the distribution and interdependencies of these systems in cellular life.
To a first approximation there are three components that define the propensity of a genome to get permanently damaged. The first is the environment. Many different extreme environments are damaging to DNA, including radiation, high temperature and desiccation [92]. Second is the size of the genome. The larger the piece of DNA, the more likely damage will occur, and the more it must be mitigated. Third is the state of the active repair system. If active repair is poor, even rare damage events will eventually accumulate. Therefore, we argue that systems that are extreme in any one of these three components must routinely deal with DNA damage during replication.
Archaea, in general, fit the description of extremophile better than any other major taxon. It has been proposed that the unifying trait of all archaea is adaptation to chronic energy stress [93]. The author argues that archaea outcompete bacteria in niches that are under chronic stress. Thus archaea have become successful in dealing with environments that other superkingdoms cannot handle. The author noted that archaea do better in environments that are consistently extreme, and are outcompeted by bacteria in environments that fluctuate.
A corollary of chronic energy stress is chronic DNA damage. Many of the extremophilic environments archaea have made home severely damage DNA. On the other hand, bacteria may face occasional stressful situations and require DNA repair. Therefore it is disadvantageous for bacteria to have their repair systems on all the time. Conversely, archaea need to constantly repair their DNA, so it would make sense if the line is blurred between their replication and repair systems. An example of this 'prepare for the worst' strategy is the unique ability of PolB to read ahead and stall replication if a uracil is encountered in archaea [94].
In terms of large genomes eukaryotes win hands down (see figure 1 in [95]). A polymerase is more likely to encounter damage somewhere in the replication of these large genomes than in a prokaryote with a smaller genome in a similar environment. This is supported by evidence that eukaryotes use a separate repair system during replication of the large non-transcribed regions of their genome [96,97].
What other situation besides chronic DNA stress and large genome size would put similar pressure on the DNA replication machinery? We argue, somewhat counterintuitively, that a total lack of active DNA repair systems would create a similar situation. Again it is optimal for the replicative system to expect to encounter damage. Viruses fit that description perfectly as they are unable to actively maintain their genomes without their host.
If the repair systems were turned on more and more of the time, the main replicative system would become free to drift.
Under this scenario the ancestors of archaea could mix and match bacterial repair and replication proteins with several molecular innovations and some transfers from the viral endosymbiont. The end result could be a system that is more robust to chronic stress. The canonical rooting implies that the components of the replication machinery that are homologous, but not orthologous, were independently recruited from proteins that initially processed RNA. Under either scenario the same amount of molecular innovation is required. The question then becomes, is it easier to innovate function in an RNA-based organism or a DNA-based organism under relaxed selective pressure? We argue that the difference cannot be quantified, as both scenarios predict exactly what we observe: some proteins are orthologs, some are homologs, and some are unrelated. Therefore the way to tell the difference between these scenarios is through independent lines of evidence. The polarizations presented above imply the bacterial repair machinery was recruited to become the replication machinery of archaea.
It is also tempting to speculate that many of the features shared between viruses and the eukaryotic nucleus described in the viral eukaryogenesis hypothesis [73,98,99] could be extended to this hypothesis. Bell notes many similarities between nuclei and viral replication factories. One can imagine the ancestry of these traits going back to LAECA with some being lost in archaea, and others not developing until the root of eukaryotes. This is only consistent with our hypothesis if archaea are holophyletic, but for now it is certainly worth considering.
The greatest battle ever fought
So far we have demonstrated that there is robust evidence that archaea are a derived superkingdom. We have shown the bacterial ribosome could have enough plasticity to evolve into an archaeal one. We have presented evidence that there is some link between DNA replication in archaea, eukaryotes and viruses that could be the result of endosymbiosis. Now we will try to combine these into the larger story of why a bacterium would evolve into an archaeon.
As we discussed above, we feel the greatest weakness of Gupta's invocation of antibiotics is that it does not provide sufficient evolutionary pressure to cause a revolution on the scale necessary to create the differences between the prokaryotic superkingdoms. Observations of the vast differences in DNA replication machinery and evidence of a viral endosymbiosis in a bacillus before LAECA will set the stage for our subsequent hypothesis.
In the traditional antibiotic battle the gram-positives are capable of evolving resistance to each other. This leads to what is commonly referred to as a Red Queen game [100]. Neither group ever really gets ahead in the long-term war as each defensive innovation is matched by an offensive one. But that does not mean there are never winners in battles on shorter time scales. Winning a battle is not a good thing in the long run. The winners will increase in population size and consume more of an environment's resources. The corollary is that they become a better target for less dominant species to kill. If a species evolves a more resistant ribosome it just puts more pressure on the rest of the community to hit other targets in that species.
One can imagine a firmicute deeply entrenched in such warfare endowed with the gift of a complete and novel replication system from a virus. This is supported by the distribution of viral Pfam proteins discussed above. The virosphere contains so much diversity that even rare combinations of genes would eventually end up in the same capsid at the same time as long as they have some advantage to any virus. It would be an incredibly rare event for the virus to be just right for the bacterium to take up the entire replication system. And thus the stage is partly set for why the revolution happened but once.
The core of the DNA replication system does not appear to be as common an antibiotic target as the ribosome or RNA polymerase. A search of DrugBank revealed no antibiotics that target PolC [101]. However, there are several that target gyrase. Why the difference? Inhibition of PolC just stops a population from growing, but the damage induced by the loss of a functional gyrase invokes an SOS response and leads to cell death. There are probably natural antibiotics that target PolC, but they would not be as effective as the numerous ones that target the ribosome and RNA polymerase. Thus the introduction of PolB into the bacillus genome would not be enough to start the revolution. This is supported by the fact that many proteobacteria use PolB as a repair enzyme, the result of an HGT that did not start a revolution.
As discussed above there are no bacteria that have archaeal histones. This strongly implies they are only compatible with the archaeal-eukaryal replication machinery.
Thus we argue that viral endosymbiosis was a relaxation in selective pressure that, in combination with pressure from antibiotics targeting gyrase, led to the innovation of histones. This is a non-trivial difference from Cavalier-Smith's hypothesis that the numerous differences between the DNA-handling machinery of bacteria and archaea are the result of histones dramatically changing the way in which this machinery could interact with DNA [16]. He argues this was an adaptation to thermophily.
However, Forterre has presented several arguments against Cavalier-Smith's scenario. He argues that the bacterial histone-like proteins that have replaced the archaeal ones in Thermoplasma acidophilum work just fine with the archaeal replication machinery [17]. He also notes that many hyperthermophilic bacteria do not use histones. At the same time hyperthermophilic bacteria exchange many genes with archaea [102]. Therefore the standard bacterial replication machinery could probably not tolerate the invention of histones even under selective pressure from an extreme environment. Euryarchaea appear to have gained DNA gyrase via several independent horizontal transfers from bacteria [47]. The fact that several euryarchaea retain both histones and gyrase is evidence against Cavalier-Smith's idea that gyrase became totally redundant with the advent of histones. That view is weakened further given that gyrase was found to be essential in several of those genomes [103].
Since pressure from thermophily alone could not force histone innovation, we invoke the viral endosymbiont hypothesis. In other bacteria an alternative system to gyrase would not be much of an advantage, as getting rid of gyrase would just put more pressure on targets like the ribosome and peptidoglycan synthesis. However, as discussed above, the bacilli have several unique ribosomal proteins. That means they could already have some adaptations and preadaptations to antibiotic warfare that make them a difficult target to hit. As discussed above they have EpsE [104], which could preadapt them for functioning without peptidoglycan. Once gyrase was no longer a useful target they could quickly lose peptidoglycan in their cell walls. The loss of these two major targets would be a huge advantage and increase pressure on the ribosomes as a target.
At this point any change to the ribosome would be highly beneficial. One can imagine a Red Queen game where neomura have a distinct advantage over gram-positives but need constant innovation in their ribosomes to maintain that advantage. The observation that many archaeal-eukaryal ribosomal proteins bind Zn would be consistent with pressure to ensure proper assembly despite the antibiotics. This is supported by the fact that bacterial hyperthermophiles, whose environment interferes with ribosomal assembly, have more Zn binding sites than most other bacteria [35].
Thus the initial neomura would have an advantage in antibiotic warfare as well as the ability to replicate DNA even in the presence of damaging pressures. Their genomes could be much larger than those of extant prokaryotes. A large robust genome would allow the neomura to be oligotrophic and handle extreme environments. This would put them in direct competition with many bacteria in diverse environments.
Their larger genome size would allow for more gene duplication, which could lead to structural innovations like the ribosomal proteins found in neomura but not bacteria.
The strongest support for this hypothesis comes from the antibiotic target site most studied in the ribosome: the 23S rRNA between ribosomal proteins L22 and L4. L22 and L4 are conserved across the superkingdoms. They bind to the same positions on the ribosome in all three superkingdoms. There are numerous crystal structures, from both prokaryotic superkingdoms, with antibiotics bound in these sites [105,106]. These studies demonstrated that nine different antibiotics that bind strongly to this site in bacteria bind with much less affinity in archaea. A2058 (E. coli numbering) is one of the sites on the 23S rRNA directly involved in binding these drugs. A2058 is conserved across 99.4% of sequenced bacterial 23S rRNAs [107]. The site is almost universally guanine in archaea and eukaryotes. The mutation A2058G makes many bacteria macrolide-resistant [108], while the reverse mutation can make archaea macrolide-sensitive [109]. These differences in antibiotic affinity are well conserved across the divide between bacteria and neomura, and appear to be the result of intense selective pressure from antibiotics.
Even though bacteria are able to gain resistance through a similar mutation, it is probably not fixed because there is a slight decrease in fitness that can be reduced with other mutations [107]. If there were constant pressure on that site, other mutations and changes in structure could relax those costs and fix that position. That would be completely consistent with the scenario outlined here. If the divide between archaea and bacteria is primordial, it is much harder to explain this difference. Ribosomal proteins L22 and L4 must have been present in LUCA. If the ancestor of archaea was an extremophile they should not have been in competition with enough bacteria to need the resistance conferred by this mutation.
It would be tempting to speculate that this mutation is an adaptation to thermophily or some other extreme environment to answer this nagging issue of antibiotic pressure at the root of archaea. This can be tested by examining the position in bacterial hyperthermophiles. In both the hyperthermophiles Aquifex aeolicus and Thermotoga maritima this position is 100% conserved as adenine, as it is in their thermophilic relatives (Additional file 4; Figure S4). The thermophile Thermus thermophilus has two copies of the 23S rRNA, and usually both have adenine at that position unless they are under selective pressure from antibiotics [110]. Thus the only explanation that appears to hold water is some extreme antibiotic pressure at the root of archaea.
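The conservation figures quoted for this site are straightforward to recompute from an alignment. The sketch below tallies the residues found in a single alignment column across a set of 23S rRNA sequences; the file name and the column index are hypothetical placeholders, and the column corresponding to E. coli position 2058 would first have to be located using an aligned E. coli reference sequence.

    # Sketch: count the bases at the alignment column corresponding to E. coli
    # 23S rRNA position A2058. File name and column index are placeholders.
    from collections import Counter
    from Bio import AlignIO

    aln = AlignIO.read("23s_rrna_bacteria.aln", "fasta")   # hypothetical alignment
    col = 3250                                             # hypothetical column for E. coli 2058
    counts = Counter(str(rec.seq[col]).upper() for rec in aln)
    total = sum(counts.values())
    for base, n in counts.most_common():
        print(base, n, "{:.1f}%".format(100 * n / total))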
The mark of antibiotic pressure can also be seen in the proteins that would be lost at the origin of archaea. We searched Pfam and DrugBank for antibiotic targets that are conserved across bacteria but were clearly not in LACA. Eight of these are listed in Table 8. Several of these appear to have been horizontally transferred to archaea, such as DNA gyrase. That is consistent with the scenario under discussion because once archaea were no longer under strong antibiotic pressure these systems would be free to become essential again. It would be interesting to look at each of these eight predicted losses and see what preadaptations and environmental conditions can make them non-essential.
Table 8. Drug targets found across bacteria that were probably not in LACA. We argue these proteins were lost in the archaeal ancestor in response to a unique antibiotic warfare scenario. Targets in italics appear to have transferred to archaea after LACA.
Table 9. Examples of drug target sites with resistance in archaea. These drugs bind target sites present in both bacteria and archaea (or eukaryotes), but with very different affinities. We argue this is a molecular fossil of the unique antibiotic war that resulted in the origin of archaea.
Why would this war end and who would the winners be? To address this question we will invoke the two novel niches that are central to the neomuran hypothesis: phagotrophy and hyperthermophily [16]. The oligotrophic neomura with large genomes would be able to form many symbioses with prokaryotes because of their diverse metabolism. Such an environment would favor the preadaptations to phagotrophy discussed in [111]. This could lead to several endosymbiotic events in a short span of time. These would force the nucleus to become a better separator in dealing with selective pressures proposed by several hypotheses: invasion of introns [112], differing metabolisms [113] and ribosome chimerism [114]. The successful phagotroph would eat prokaryotes, so at first it would be to the advantage of the prey to try to kill the neomura. However, that is not the optimal strategy for dealing with phagotrophs. It is much better to persist inside them and eat them from the inside out, as can be seen by the numerous bacterial taxa that have independently evolved the ability to infect eukaryotes. Once it is possible to infect the phagotrophs, killing them with antibiotics becomes counterproductive. And thus a truce (or new war) would be declared on one front of the great antibiotic war.
The early eukaryotes would outcompete and eat many of the initial neomura, but would be at a disadvantage in extreme environments as they began to rely more on their cytoskeletons and larger cell size. It would be easier for the neomura to drift into more extreme environments because of their DNA replication machinery. The proto-archaea would begin to emerge as the neomura began moving into previously unoccupied niches of extremophily. The conversion of their membranes would probably be the commitment step in the process. Once they began settling into environments that are constantly extreme they would be under pressure to streamline their genomes.
This scenario is consistent with a recent study on gene content evolution in archaea that concluded that most archaeal genomes have been streamlined from larger ancestral genomes [115]. The authors conclude that the archaeal ancestor could have had 2000 gene families, and that the extant archaeal groups were mostly created through differential loss. The authors note this repeated loss is consistent with the chronic energy stress of the archaea described in [93], as specialization and loss are highly favorable in consistently extreme environments. The trend for euryarchaeal- and crenarchaeal-specific traits to both be present in the deep-branching archaea is also consistent with the idea that archaea became specialized from a more generalized genomic ancestor. The redundancy in archaeal systems, such as two replicative polymerases and two cell division systems, could be remnants of the antibiotic war.
That redundancy would become unnecessary once archaea committed to extremophily. It was noted in [2] that ribosomal protein loss is much more common in archaea than in bacteria. Our hypothesis implies that the distribution of ribosomal proteins in archaea is the result of independent losses once they were no longer under antibiotic pressure. Some of these novel proteins developed other roles to deal with extremophily, so they have been retained. The ancestral archaeal ribosome could very well have contained all of the proteins found in any archaeal genome, which would certainly weaken that aspect of the eocyte hypothesis.
What about the neomura? They would be stuck in the middle. The eukaryotes would be eating them, and they would still be in competition with bacteria. Their only viable strategy would be constant innovation, as they would not really have a novel niche. However, the wave caused by viral endosymbiosis would not go on forever. There would be diminishing returns in terms of the resistance provided by the new innovations. Eventually the innovations would become a disadvantage as bacteria could then release compounds that only target the new systems. For instance aphidicolin inhibits DNA replication in archaea and eukaryotes but not bacteria by targeting their unique polymerase [116,117]. So the initial advantage the neomura have in terms of antibiotic resistance is not a stable niche. They were outcompeted from three sides, and thus we are left with a hole in the middle of the branches of the tree of life that often gets mistaken for the root. This scenario is summarized in Figure 5.
Figure 5. Summary of our hypothesis. A viral endosymbiosis bridges the gap in DNA machinery between the superkingdoms. That triggered an antibiotic war that resulted in the birth of eukaryotes and archaea. The antibiotic war ended when archaea became extremophiles and the eukaryotes became phagotrophs. Traits shared between eukaryotes and actinobacteria are the result of endosymbiosis; the peroxisome is not the direct descendant of an actinobacterium.
The antibiotic war ended when archaea became extremophiles and the eukaryotes became phagotrophs. Traits shared between eukaryotes and actinobacteria are the result of endosymbiosis; the peroxisome is not the direct descendent of an actinobacterium.", "Several large indels are shared between archaea and gram-positive bacteria, and both groups only have one membrane [9]. Thus, if there is a direct relationship between the gram-positives and archaea the root is either between them, or one is derived from the other. Every piece of evidence that is polarizable implies archaea are derived from bacteria. Arguments that archaea and bacteria are so different that they both evolved from LUCA sidesteps directionality altogether. The only recent work that explicitly roots the tree in archaea is that of Wong et al. [22]. Many of their arguments are based on assumptions about the nature of LUCA and assumptions of what a primitive state would look like. None of their arguments are true polarizations. To the best of our knowledge there is no single polarized argument for an archaeal rooting that is on par with the three we shall discuss that place archaea as derived.\nThe first of these arguments is the proteasome. Proteasomes are self compartmentalized atp-dependent proteases that are found in varying degrees of complexity across the tree of life. All archaea contain a 20S proteasome which is composed of 28 subunits and is encoded by at least two genes that are clearly homologs. Therefore the 20S proteasome must be the result of duplication. Cavalier-Smith has argued that the simpler bacterial homolog HslV (heat shock locus v) could be duplicated to generate a 20S proteasome [8,16]. Loss of a subunit in the 20S proteasome would result in an open proteasome with no ATPase. Such a protein would lose the essential function of controlled degradation found in proteasomes, and does not make sense as an intermediate. It is more likely that the 20S proteasome is derived from a simpler structure. Cavalier-Smith excludes the root from archaea because all archaea contain a clearly derived protein.\nHowever, there is a counter argument to that proposal; LUCA had HslV and LACA (last archaeal common ancestor) is the point in the tree where HslV evolves into the 20S proteasome (Figure 1A). This would still exclude the root from the crown archaea, but it still allows for the possibility that the root is between the extinct stems of archaea and bacteria. Excluding the root from archaea will never be enough because one can always invoke stem lineages that show up before the derived trait. This would imply the 20S proteasome present in actinobacteria is probably the result of a horizontal transfer from archaea. However, we have observed that the two proteasome genes are often in the same operon in actinobacteria, but rarely together in archaea. This weakly polarizes the direction of the horizontal transfer to the archaea.\nTwo scenarios for interpreting the three polarizations. A) Under the canonical rooting proteasome evolution would require several selective sweeps and large-scale loss. The monomer PyrD B would have evolved from one of the more complex quaternary structures, and the derived insert in EF-2 would occur after LACA. B) Under the gram-negative rooting, Anbu could be ancestral to both HslV and the 20S proteasome. PyrD could evolve via stepwise increases in structural complexity, and there is no need to invoke extinct stem archaea to explain the EF_G insert. 
We believe these transitions argue for a gram-negative rooting.
However, there is stronger evidence that narrows the root to within the bacteria. Our own work argues that the Anbu proteasome (or peptidase according to [23]) is more likely than HslV to be the 20S proteasome's direct ancestor based on both sequence data and structure predictions [24]. This argument is much stronger than Cavalier-Smith's because HslV is widespread in the gram-positives but Anbu appears to be missing in them altogether (Figure 1B). If the divide between archaea and bacteria is the earliest split in the tree, and our hypothesis on proteasome evolution is correct, then LUCA must have had Anbu. This would mean that all extant gram-positives need to have lost Anbu while the gram-negatives (which must be derived from gram-positives in this scenario) somehow retained Anbu. One would have to invoke a selective sweep of the 20S proteasome in archaea, and of HslV in the gram-positives. It is plausible that the 20S proteasome outcompeted Anbu or HslV since they are almost never found in the same genome. However, Anbu and HslV are found together in many genomes, which is evidence that neither totally displaces the other in terms of function. Our arguments about Anbu are based on structure prediction, but a crystal structure could experimentally verify those predictions. If we are correct it may be the smoking gun for a gram-negative rooting, but even without that there is ample evidence to support Cavalier-Smith's position. Even if HslV is the direct ancestor of the 20S proteasome, the root can still be excluded from all extant archaeal lineages.
The recent analysis of the proteins that occur in Anbu's operon [23] presented evidence that we are wrong in labeling Anbu a proteasome because it lacks an associated ATP-dependent protein required for unfolding substrates. HslV and the 20S proteasome clearly have associated ATPases dedicated to unfolding substrates. Therefore the transition to both of them is easier from Anbu, as no ATPase would have to be lost. The origin of HslV and the 20S proteasome would both involve the recruitment of distinct ATPase subunits. Therefore we think this new work strengthens our hypothesis that Anbu is ancestral to the 20S proteasome, because no intermediate would ever lose the regulatory ATPase. If our hypothesis is correct, proteasomes would be polyphyletic if they are defined by the presence of the ATPase subunit as suggested in [23].
The indel in EF-2 shared between archaea and eukaryotes has been polarized using EF-Tu as an outgroup [25]. Our alignment-free analysis of this indel agrees with the authors' conclusions despite there being a sequence artifact in their original alignment [11]. This polarization robustly excludes the root from within archaea, but does not narrow it to within bacteria.
In that analysis we also presented a novel structure-based argument for polarizing archaea. The quaternary structure of PyrD 1B is a heterotetramer across the firmicutes and archaea. We argue that the heterotetramer is probably derived from the homodimer PyrD 1A based on the presence of a conserved interface. The monomeric and homodimeric versions are present in the gram-negatives and actinobacteria. PyrD 1B is found across a gram-positive group and archaea, so it would have to be present in their last common ancestor, which is LUCA under the canonical rooting. This could be explained by the presence of both PyrD 1A and 1B in LUCA.
But that scenario would require PyrD 1A to be lost in every archaeon and some firmicutes, and for there to be a reversion to the monomeric form, PyrD, across the gram-negatives and actinobacteria. PyrD 1B is probably derived, so it follows that archaea, firmicutes, and their last common ancestor are also derived.
The polarization of the indel in EF-2 excludes the root from the extant archaea. Our novel polarizations of Anbu and PyrD argue the root is within bacteria. If these arguments only excluded the root from all extant archaea, one would be left wondering why all archaea that are not clearly derived went extinct. The combination of all three arguments strongly supports the bacterial rooting of the tree. If archaea are derived, there must be some way of reconciling the major differences between them and bacteria.
Archaea cluster separately on phylogenetic trees based on ribosomal RNA [1]. This split has remained robust in many trees derived since then. We will discuss three scenarios that can explain this. The first scenario is that the ribosomal sequences are reasonably good molecular clocks. The great split seen in the tree reflects this most ancient divide in cellular life and is in accordance with the canonical rooting.
The second scenario also does not contradict the canonical rooting. It goes as follows. The ribosome in LUCA was incomplete. It did not have all the proteins found in extant archaea and bacteria, only the core that is universal between them. The addition of proteins after the split of the superkingdoms would start a quantum evolutionary event. Some sites would be free to mutate to achieve increased stability, while others would be under evolutionary pressure to maintain a strict structure-function relationship. The rate of mutation at different sites on the ribosome could vary wildly and exaggerate the true distance between the superkingdoms, even if they do represent a very ancient split.
The third scenario, which we champion here, is that the bacterial ribosome evolved into an archaeal one. Again this would be a quantum evolutionary event and sequences of both rRNA and ribosomal proteins would evolve rapidly. The point we are trying to make is that these three scenarios would result in exactly the same sequence tree. Hence we must look towards independent lines of reasoning to determine which of these scenarios best describes the tree branching.
We can exclude the first scenario by comparing the structure of the ribosome in archaea and bacteria. In the 50S subunit there are six ribosomal proteins that are in the same position on the rRNA, but have non-homologous structures in archaea and bacteria [26,27]. These must have changed in at least one lineage since LUCA, regardless of LUCA's nature. Therefore, we should expect that the distance between archaea and bacteria would be exaggerated due to compensatory mutations in the rRNA and ribosomal proteins.
It is certainly reasonable to object to the third scenario because it seems implausible that a ribosome would change so much between superkingdoms, yet stay so well conserved within a superkingdom. However, there are two examples where we know that ribosome structure has indeed changed significantly. Mitochondrial ribosomes have changed dramatically from their bacterial ancestors. They have lost about half their rRNA and replaced it with additional proteins [28]. The eukaryotic ribosome evolved from an archaeal one (or technically from some sort of proto-archaeal ribosome if the archaea are holophyletic).
There are eleven ribosomal proteins found only in the eukaryotes, nine of which are conserved across the superkingdom [2], and there is good separation on rRNA trees between eukaryotes and archaea. In the two cases where the ribosome structure has changed we know it changed from another fully functional ribosome. Thus, why would it be out of the question for it to happen between archaea and bacteria? There are five ribosomal proteins present across the crenarchaea, but absent in the euryarchaea [2]. These proteins were either lost or gained in one of these groups after they split. In either case there would be a transition between two complete ribosomes. In each of these cases we can clearly see that a ribosome can undergo dramatic changes in macromolecular structure when there is proper selective pressure (or relaxation of selective pressure).
The tree presented in [29] was constructed by concatenating 31 universal proteins. Twenty-three of these are ribosomal proteins and many more are directly involved in translation. Many taxa on the tree cluster together with high bootstrap values (greater than 80%). However, there appear to be only three connections between high-level taxa that are supported with that strength. The clustering of crenarchaea and euryarchaea is well supported, as is the clustering of eukaryotes and archaea. There is also a long, well-supported branch between the archaeal-eukaryal clade and bacteria. We doubt it is a coincidence that these splits correspond to the greatest changes in ribosomal structure on the tree. It appears the sequence tree in [29] and rRNA trees could be merely a reflection of the large changes in ribosomal structure that have occurred throughout the true tree of life. This protein set would be expected to work better as a clock within groups that have the same ribosomal proteins. Even if one uses more sophisticated tree-building techniques, such as those in [30], the major changes in the ribosome are still going to be problematic. The authors concatenated many translational proteins and the resulting tree supported the paraphyly of archaea. Eukaryotes were placed near the archaeal species with the most similar ribosomal structures. However, a single-gene tree of the RNA polymerase alpha subunit (RPOA) supported holophyly in the same study. This implies some of their results are an artifact caused by structural changes in a ribosomal revolution.
The third scenario could certainly be weakened if it were found that all the ribosomal proteins were essential in bacteria and there was absolutely no way they could be tinkered with. We examined which ribosomal proteins are essential in eleven different bacterial species using the Database of Essential Genes [31]. There are sixteen ribosomal proteins that would need to be lost in the transition from a bacterium to an archaeon, as they are found across bacteria but never in archaea. None of these ribosomal proteins were found to be essential in all species, which is the first sign it is possible to lose and replace them. Four of the sixteen proteins are essential in all species except Mycobacterium tuberculosis (Table 1). Only four of these proteins are essential in M. tuberculosis, the least of any species in this data set.
Essentiality of ribosomal proteins.
The essentiality of proteins that would need to be lost in the transition from an extant bacterial ribosome to an archaeal one varies from species to species. M.
tuberculosis appears preadapted for the losses that would be necessary in the transition to an archaeon.\nTo determine whether this portion of the ribosome is significantly flexible we calculated a p-value assuming a binomial distribution. The essentiality of each subunit can be considered a success or a failure. The p-value measures the odds of seeing at most n essential subunits in a set of sixteen random ribosomal proteins. The odds of a random ribosomal protein being essential were estimated as the proportion of ribosomal proteins found to be essential in that species. This was done to eliminate experimental biases between the species sets, as some of the knockout experiments are more thorough than others. Several species had p-values under .05, but M. tuberculosis was by far the most significant with a p-value of .0031. This implies that M. tuberculosis's ribosome is under different selective pressure than most bacteria, and that it is the most preadapted ribosome in this dataset capable of evolving into an archaeal ribosome.\nIt is highly counterintuitive that nearly every universal protein could be nonessential. The difference between essential and persistent genes was discussed in [32]. The authors point out that essentiality differs in the wild and laboratory settings. Many of the ribosomal proteins listed as nonessential are still highly deleterious to lose. But the point is they can be lost under the right circumstances. It might be our proteasome centric view of the world, but we think the presence of the 20S proteasome in Mycobacterium could partially explain this observation. It has been proposed the major cost of mutations and mistranslation comes from dealing with mis-folded proteins [33]. The ribosomal proteins are among the most highly translated proteins in the cell, so there is lots of pressure to ensure they fold correctly. A highly advanced degradation system, like the 20S proteasome with a Pup targeting system [34], could greatly relax that selective pressure. If the initial tinkering is not lethal, one can easily imagine a scenario where compensatory mutations and structures could rapidly and significantly change the ribosome if there is proper selective pressure. We will describe such a scenario below.\nIt has been observed that many bacteria contain paralogs of ribosomal proteins where one form binds Zn and the other does not [35]. M. tuberculosis has duplicates of several ribosomal proteins, which could explain why some (but not all) of the ribosomal proteins are not essential in that genome. The authors note that thermophilic bacteria seem to prefer the Zn binding forms of the ribosomal proteins, and that there are seven Zn binding ribosomal proteins conserved across archaea and eukaryotes that are absent in bacteria. This is consistent with our ideas that major historical changes in the availability of Zn in the ocean were a significant constraint on protein structure evolution [36,37]. Bacteria vary their ribosomes to optimize for both high and low Zn conditions. One can imagine this strategy being taken to an extreme where the tweaks are not just simple displacements, but larger rearrangements. Increased availability of Zn, as the ocean became oxic, could be a factor that made toying with the ribosome favorable for the early archaea. 
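The binomial p-value described above for ribosomal-protein essentiality is simple to reproduce. The sketch below is illustrative, not the original pipeline: the count of sixteen informative proteins and the four essential in M. tuberculosis come from the text, but the other per-species counts and the background essentiality fractions are placeholders that would in practice be taken from each species' knockout data in the Database of Essential Genes.

```python
from scipy.stats import binom

def essentiality_pvalue(n_informative, n_essential, background_fraction):
    """Probability of seeing at most n_essential essential subunits in a
    random draw of n_informative ribosomal proteins, given the species'
    overall fraction of essential ribosomal proteins (binomial model)."""
    return binom.cdf(n_essential, n_informative, background_fraction)

# Sixteen ribosomal proteins are found across bacteria but never in archaea.
N_INFORMATIVE = 16

# species -> (essential among the sixteen, background essentiality fraction);
# all values except the M. tuberculosis count of four are placeholders.
species = {
    "Mycobacterium tuberculosis": (4, 0.60),
    "Escherichia coli": (12, 0.75),
    "Bacillus subtilis": (13, 0.80),
}

for name, (n_essential, background) in species.items():
    p = essentiality_pvalue(n_essential, N_INFORMATIVE, background)
    print(f"{name:30s} p = {p:.4f}")
```

A small p-value for a species indicates that the sixteen bacteria-specific proteins are unusually dispensable there, relative to that species' ribosomal proteins as a whole.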
This, combined with the antibiotic pressures discussed below, could lead to a ribosomal revolution, just as the presence of two ribosomes leads to a revolution at the root of eukaryotes.
The differences between archaeal and bacterial replication machinery are vast [4]. Leipe et al. claim this difference is so great that it is unreasonable to argue that one prokaryotic superkingdom evolved from the other. They list four key functions of DNA replication that are performed by completely non-homologous proteins in archaea and bacteria: the main polymerase's polymerization domain, the phosphatase that powers the polymerase, the gap-filling polymerase, and the DNA primase. We will argue that the differences between archaea and bacteria do not imply the root of the tree of life has to be between them.
We must keep in mind there is some flexibility in the DNA replication machinery despite the division across the superkingdoms; consider two examples. First, many proteobacteria use a PolB family polymerase as a repair protein [38], which is almost certainly the result of HGT. Second, PolD appears to have been present in LACA, but was lost in the crenarchaea [39]. These two examples illustrate major changes in the replication machinery that occurred in DNA-based genomes possessing fully functional replication systems. We are arguing that an even larger event occurred between the prokaryotic superkingdoms. This event entailed viral transfers and novel innovations, but there are several proteins whose origins can be better described by vertical inheritance from the gram-positive bacteria, which we review first.
Koonin et al. have demonstrated that many bacterial proteins have a region that is homologous to the small subunit of the archaeo-eukaryotic primase [40]. This domain is present in DNA ligase D from M. tuberculosis, which can act as a DNA-dependent RNA polymerase [41]. The rest of the protein is homologous to the ATP-dependent DNA ligase found in archaea and eukaryotes. Therefore, DNA ligase D is perfectly preadapted to replace the primase function of DnaG. The fission of the two halves of the protein would allow for the preservation of ligase activity while developing enhanced primase activity. A recent analysis of DNA ligases revealed many transfers between archaea, bacteria and viruses [42]. This history is very complicated, so it is hard to say with certainty where the archaeal enzymes originated. The large subunit of the primase may be a true innovation since it has no detectable bacterial homologs, but the small subunit of the primase and ATP-dependent DNA ligases could both have been inherited from the gram-positive ancestors of archaea.
The main helicase in bacteria is DnaB, while archaea use MCM6. Relevant to this discussion is the recent biochemical analysis of a protein in a prophage element in Bacillus cereus that has domains homologous to the MCM6-AAA domain as well as the small subunit of the archaeal primase [43]. The authors found that this protein was a functional helicase but had no primase activity. The narrow distribution of this prophage element implies its insertion was probably too recent to play a role in the origin of archaea. However, it demonstrates that there can be a selective advantage for a DNA-based genome to take novel DNA-handling machinery from a virus and use it in a different context. We will come back to this point later.
Bacteria use DnaA to define the origin of replication, while archaea use Cdc6.
These proteins have a homologous AAA+ ATPase domain, but have little similarity otherwise. However, the bacterial protein RuvB has the same domain combination as Cdc6. RuvB, Cdc6, and DnaA were all put in the same superfamily in a recent classification of AAA+ domains [44]. RuvB is recruited to Holliday junctions by RuvA where it forms a hexamer around the DNA [45], just like Cdc6. It is plausible that Cdc6 evolved from RuvB.\nArchaea use a protein called Hjc to resolve Holliday junctions instead of the bacterial RuvABC system. Hjc is related to the alternative bacterial system RecU [46]. The only bacteria that use RecU are the firmicutes, and they also have RuvABC. We argue below archaea are derived from within the firmicutes. It is possible that the redundancy in Holliday junction systems allowed RuvB to drift in function. The homology between RecU and Hjc could be explained by the presence of a Holliday junction resolvase in LUCA under the canonical rooting. However, if the hypothetical RNA-DNA hybrid LUCA proposed in [4] was dealing with Holliday junctions we argue it probably would also need topoisomerases at that point. However, since the distribution of topoisomerases is different across the prokaryotic superkingdoms [47,48] that would imply the ancestral topoisomerase was displaced in at least one lineage. This weakens the proposal in [4]. We feel it is more likely archaeal topoisomerases evolved from bacterial ones as Cavalier-Smith has proposed [16].\nThere are certainly large differences between archaeal and bacterial DNA replication machineries. We have demonstrated the divide between replication systems has some flexibility, and this opens the door for a replication revolution. It is possible to come up with detailed scenarios for how each of the archaeal replication proteins originated. These results are summarized in Table 2. We will elaborate on this scenario below. However, there are several archaeal replication proteins that do not appear to have any homologs in bacteria; namely histones, PolD, and the large subunit of the archaeal-eukaryal primase. These are true innovations, but there really are not that many of them; certainly not enough to make the transitions seem unreasonable in light of the polarizations presented above.\nSummary of differences in DNA replication machinery of Archaea and Bacteria.\nThe list of protein functions was compiled from box 1 in [20] and table one in [129]. Italics indicate a probable horizontal transfer to a superkingdom. There are very few proteins in archaea that are true innovations. Many of their unique replication proteins could be recruited from bacterial or viral systems. A * indicates the Superfamily database was used to predict domain assignments of PDB entries not yet classified in SCOP.\nThe proposal of two independent inventions of DNA replication has recently been challenged [49]. The authors argue that ribonucleotide reduction is thermodynamically unfavorable, so convergent evolution is highly unlikely. They note that all ribonucleotide reductases have been shown to have a monophyletic origin. Finally, they argue that the proteins that are universally conserved imply a high fidelity replication system in LUCA that could not have been RNA based. The hypothesis that the root must be between the superkingdoms is diminished when one combines these arguments with the scenarios we have outlined here.", "So far we have presented several independent arguments that strongly polarize archaea as a taxa derived from within bacteria. 
We have demonstrated that although there are vast differences between the ribosomes and DNA replication machinery between the prokaryotic superkingdoms, none of the arguments associated with their respective proteins seems insurmountable. We will soon present a novel hypothesis to account for this; but first we must pinpoint the bacterial roots of archaea. One cannot properly reason about this rooting without first discussing whether archaea are paraphyletic (eukaryotes branch within them) or holophyletic (eukaryotes are their sisters). As there is clearly a relationship between archaea and eukaryotes it is vital to differentiate between these two scenarios to understand their origins. We will review the current available data, and argue that for now precise agnosticism seems the best course, that is, any hypothesis on the origin of archaea must accommodate both models. That said, we lean towards holophyly and our hypothesis does as well.\nEukaryotes and Archaea are sister clades under the standard three domain model. However, James Lake proposed that eukaryotes had a crenarchaeal (eocyte) origin based on a shared indel in EF-1 and similarities in their ribosomal structures [13,14]. This hypothesis never gained much support because there was little phylogenetic evidence to corroborate it. However, recent work [30] has shown that there is sequence data that implies archaea are paraphyletic and eukaryotes have a crenarchaeal-like ancestor. Conversely, another analysis done around the same time supported a deep branching archaeon as the host of mitochondria [50], which would be inconsistent with the eocyte hypothesis. They demonstrate that eukaryotes inherited both crenarchaeal and euryarchaeal specific proteins, so ancestry from either group alone is not enough to explain the eukaryotic protein repertoire. However, several deep branching archaeal genomes from korarchaeota and thaumarchaea are now available and change the context of some of these conclusions [51,52]. Both of these groups appear to contain a mix of crenarchaeal and euryarchaeal genes so the observation in [50] could be explained by a member of one of these groups being ancestral to eukaryotes.\nCavalier-Smith's hypothesis on the origin of the neomura relies on a sisterhood relationship between archaea and eukaryotes [16]. As discussed below, he mainly roots the neomura using traits actinobacteria share with eukaryotes, but not archaea. This only makes sense if archaea and eukaryotes are sisters, otherwise the traits should be present in at least some archaea. He lists eight properties that are unique and ubiquitous in archaea [16]. All of these traits strongly imply that archaea are monophyletic. However, most of them do not differentiate between whether archaea are holophyletic or paraphyletic.\nFor instance, the unique isoprenoid ether lipids found in all archaeal membranes are best explained by their presence in LACA. Eukaryotes have lipids that are more similar to those of bacteria. It would be more parsimonious for archaea and eukaryotes to be sisters with a single change in lipid structure. Any other scenario requires a reversion in eukaryotes back to the bacterial state. Even though this is not parsimonious, it is not out of the question because the mitochondrial ancestor would have all the necessary genes to make bacterial membranes [53]. We have to admit that does not seem unreasonable relative to the innovations we are discussing in this work. 
This certainly seems like a case where simple parsimony in terms of any one trait, even membrane structure, will be misleading.
The only one of these properties that appeared really informative in regard to this problem was the split gene for RPOA. RPOA is the only single-gene tree that supported the three-domain model in [30], so it is clear eukaryotes did not get this protein from the mitochondrial ancestor. Reassembling the split gene is highly improbable, so there is no reason to doubt the fused genes are monophyletic. This strongly contradicts the original eocyte hypothesis. However, novel genomic data have revealed that representatives from the deep-branching phyla korarchaeota and thaumarchaeota have the non-split form of this gene [51,52]. This opens the door for a more specific version of the eocyte hypothesis where eukaryotes stem from either of these groups. Therefore, we have examined what additional data have to say about these taxa. The branch order between them has not yet been resolved, but it appears safe to assume they both branch before the split between crenarchaea and euryarchaea. This branching is supported by several phylogenetic trees as well as the non-split RPOA. This assumption will be key to our subsequent reasoning in several ways.
It seems impossible to come up with a scenario utilizing all the traits we discuss below that is completely parsimonious for all traits at the same time. With that in mind we have tried to reason which traits can be better explained by convergent evolution than others. When we observe convergent evolution happening at an indel site we do not consider it informative. Independent loss in any form is much easier than independent invention. Loss seems to be the rule rather than the exception in archaea. Both the thaumarchaeota and korarchaeota have traits that were thought to be specific to either euryarchaea or crenarchaea. For instance, euryarchaea use FtsZ for cell division while crenarchaea use the cdvABC system. Intriguingly, the thaumarchaeotal genomes have orthologs of both of these systems [54]. This implies that the crenarchaea and euryarchaea each lost one of these systems. This is not the most parsimonious solution, but it is the only one that is consistent with the apparent branch order of these taxa. Many other traits have the same distribution pattern. It is clear that groups of archaea can lose proteins of major functional importance. We will attempt to address these distributions in our hypothesis below.
Beyond the EF-1 indel that implies paraphyly, six highly conserved indels were found to be informative in describing the relationship between archaea and eukaryotes in [50]. The authors only looked at derived insertions with well-conserved sequences. The authors state that four indels argue for the holophyly of archaea. There is one indel that is shared between eukaryotes and crenarchaea, as well as one shared between the euryarchaea and eukaryotes. This implies there was a reversion in at least one lineage or a horizontal transfer.
We have analyzed those six indels as well as EF-1 in the context of the recently sequenced deep-branching genomes (Table 3). Only the indels that differ between archaeal groups are useful for determining their branch order. Therefore we only created alignments that contained archaeal sequences, to ensure these indels were not artifacts created by including eukaryotic and bacterial sequences.
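As an illustration of this archaea-only screening step, the sketch below scores one candidate indel region in a protein alignment, reporting for each archaeal sequence whether the insert is present and which group the sequence belongs to. It is a sketch only: the alignment file, the column coordinates, and the genus-to-group lookup are assumed placeholders, and Biopython is used purely for parsing.

```python
from Bio import AlignIO

# Hypothetical inputs: an archaea-only alignment of the protein of interest
# and the alignment columns spanning the candidate indel.
ALIGNMENT_FILE = "gatD_archaea_only.aln.fasta"  # placeholder name
INDEL_COLUMNS = slice(112, 119)                 # placeholder coordinates

GROUP_OF_GENUS = {
    "Sulfolobus": "crenarchaea", "Pyrobaculum": "crenarchaea",
    "Methanococcus": "euryarchaea", "Haloferax": "euryarchaea",
    "Nitrosopumilus": "thaumarchaeota", "Korarchaeum": "korarchaeota",
}

alignment = AlignIO.read(ALIGNMENT_FILE, "fasta")
for record in alignment:
    segment = str(record.seq)[INDEL_COLUMNS]
    state = "insert present" if segment.replace("-", "") else "insert absent"
    genus = record.id.split("_")[0]
    group = GROUP_OF_GENUS.get(genus, "unassigned")
    print(f"{record.id:35s} {group:15s} {segment} -> {state}")
```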
Where possible we also used structural alignments from representatives of the superkingdoms to further ensure the larger indels were real (a similar methodology as used in [11]).\nAnalysis of potentially informative gene structures in korarchaeota and thaumarchaeota.\nEach indel was analyzed by creating an alignment of archaeal sequences from BLAST searches. We consider these results to be inconclusive until thaumarchaeota and korarchaeota are sampled better.\nFirst, the reported indel shared between euryarchaea and eukaryotes in the DNA repair protein RadA appears to be an artifact. The euryarchaeal and crenarchaeal sequences align well in the indel region (Additional File 1; Figure S1). This is important because it was the only line of evidence in that work that implied a relationship between euryarchaea and eukaryotes. This new alignment, in conjunction with the split RPOA gene, implies eukaryotes either descend from within the deep branching archaea or are their sisters.\nWe also argue that the two reported indels in the alignments of Beta-glucosidase/6-phospho-beta-glucosidase/beta-galactosidase (PBG) and ribosomal protein S12 are both uninformative based off the authors' own analyses (supplemental data from [50]). The indel in ribosomal S12 is conserved across all archaea and eukaryotes, so it implies nothing about their branch order. The indel in PBG is uninformative because the authors conclude the eukaryotic version of this gene is probably of bacterial origin (supplemental data from [50]). Therefore, the state of the gene in archaea implies nothing about the branch order of these groups.\nTwo of the remaining four indels are only a single residue. The glycine insertion in SecY is present in thaumarchaeota and eukaryotes, but absent in korarchaeota. That weakly implies a relationship between eukaryotes and thaumarchaeota. However, given that the insertion is present in some of the deep branching taxa, but not in all euryarchaea, implies there was at least one secondary loss of this insertion. This is reasonable since the insertion is a single glycine residue, and will not have a dramatic effect on protein structure.\nThe single residue insertion in prolyl-tRNA aminoacyl synthetase initially implied archaea were holophyletic, however, the insert is missing in the thaumarchaeal genomes. When these genes are used to seed a BLAST [55] search they hit firmicutes more so than other archaea. This implies a possible horizontal transfer to thaumarchaeota. If so this insert could still support holophyly, but that cannot be concluded with absolute certainty.\nThis leaves us with two larger indels in EF-1 and glutamyl-tRNA amidotransferase subunit D (gatD). The seven AA insert in gatD is well conserved in the archaeal alignment. A structural alignment with a bacterial homolog reveals this indel is not an artifact caused by the sequence alignment (data not shown). The phylogenetic tree for this family (presented in the supplemental data of [50]) places archaea and eukaryotes as sisters with 100% bootstrap support. This is remarkable because the archaeal proteins have a different domain combination and quaternary structure than the eukaryotes and bacteria [56]. However, it seems that tree is too good to be true. We have attempted to verify the history of this indel, and found that the tree in [50] was missing a bacterial paralog. E. coli has members of two paralogous families of l-asparaginases [57], and it appears only one of them was present in the initial tree. 
The tree in Additional File 2; Figure S2 shows that fungi and the rest of the eukaryotes received the same domain superfamily from two distinct sources. Their sequences are mixed in with some bacteria, which implies there were some recent horizontal transfers. This tree is not well resolved, but it certainly does not support the notion that eukaryotes inherited this protein from their archaeal ancestor. That, as well as the differences in domain combination and quaternary structure, implies this indel is inconclusive with regard to holophyly versus paraphyly.
EF-1 also appears inconclusive. The insert shared between crenarchaea and eukaryotes is present in thaumarchaeota, but not korarchaeota. Our alignment revealed there are actually four different forms of indel at this site in archaea (Additional File 3; Figure S3). This implies there is some plasticity in this region in archaea. This is in contrast to the bacterial alignment, which has no indels in this region. A structural alignment between a bacterial representative from E. coli and an archaeal one from Sulfolobus solfataricus reveals the conserved glycines in the sequence alignments are very close in their position in both forms of this indel (Figure 2). It is possible there were two insertions near the root of archaea that preserved the position of that residue. This indel's history does not appear to be parsimonious, which weakens its usefulness as a marker. Therefore, this indel appears to weakly support archaeal paraphyly, but we consider it inconclusive.
Structural alignment of EF-1 and EF-Tu. The structural alignment of EF-1 (1JNYA) and EF-Tu (1EFC) in A, and the corresponding sequence alignment in B, show the potential for two independent indels in this region, which confounds analysis.
The ribosomal proteins are the other side to this story. In a previous study, five ribosomal proteins were found in at least one crenarchaeon, but not in any of the euryarchaea (L38e, L13e, S25e, S26e and S30e) [2]. These, as well as four others that are not universal in archaea, are conserved across eukaryotes. We examined which ribosomal proteins are present in the thaumarchaeal and korarchaeal genomes (Table 4). It still appears that Lake is correct that crenarchaea have ribosomal proteins more similar to those of eukaryotes than any other group of archaea.
Informative ribosomal proteins in thaumarchaeota and korarchaeota.
This table was constructed from [2]. The values listed were taken from searches of the Pfam website. Ribosomal proteins L20A and L30E were not well defined in Pfam so BLAST searches were performed instead. These results support the eocyte hypothesis, but it is plausible that there were independent losses of ribosomal subunits in archaea based on additional data.
The korarchaeota are missing three ribosomal proteins found in some crenarchaea and eukaryotes. They have five ribosomal proteins that are present across eukaryotes that are absent in thaumarchaeota. There are two ways we can interpret this trend. If archaea are paraphyletic then this distribution is best explained by the invention of ribosomal proteins after LACA. LECA could branch between the korarchaeota and crenarchaea, before the RPOA gene split. The alternative interpretation is that archaea are holophyletic and the archaeal ancestor had all the ribosomal proteins that are in any archaeon and at least one eukaryote. There would have to be several independent losses of each of these ribosomal proteins.
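The number of losses implied by the holophyly reading can be made explicit. The sketch below assumes the branch order argued for above (korarchaeota and thaumarchaeota branching before the crenarchaeal/euryarchaeal split) and uses illustrative presence/absence calls rather than the actual values from Table 4; for each protein it counts the maximal clades lacking the protein, which is the minimum number of independent losses if the protein was present in LACA and never regained.

```python
# Minimal Dollo-style loss counting on an assumed archaeal backbone.
# The topology and the presence/absence calls are illustrative placeholders.

TREE = ("korarchaeota", ("thaumarchaeota", ("crenarchaea", "euryarchaea")))

presence = {
    "L38e": {"korarchaeota": False, "thaumarchaeota": False,
             "crenarchaea": True, "euryarchaea": False},
    "S25e": {"korarchaeota": True, "thaumarchaeota": False,
             "crenarchaea": True, "euryarchaea": False},
}

def count_absent_clades(node, present):
    """Return (all_absent, n): n is the number of maximal clades inside this
    subtree that entirely lack the protein. If the protein was present in the
    subtree's root and never regained, n is the minimum number of losses."""
    if isinstance(node, str):
        return (not present[node], 0)
    results = [count_absent_clades(child, present) for child in node]
    if all(absent for absent, _ in results):
        return (True, 0)
    n = sum(count + (1 if absent else 0) for absent, count in results)
    return (False, n)

for protein, calls in presence.items():
    all_absent, n = count_absent_clades(TREE, calls)
    losses = 1 if all_absent else n
    print(f"{protein}: at least {losses} independent loss(es) if present in LACA")
```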
Again this is not parsimonious, but there is evidence it has occurred several times so we must consider it. Again, it can be argued that if a protein is present in korarchaeota and crenarchaea, but absent in euryarchaea, it must have been lost. The archaeal ribosomal proteins are more dispensable than their counterparts in the other superkingdoms [2], so they might not be a reliable marker for rooting eukaryotes in archaea.\nFor now it seems the only reasonable stance in light of all of this evidence is agnosticism. Only when thaumarchaeota and korarchaeota are sampled better, and their positions in the archaeal tree are determined robustly, will it be possible to state with confidence whether archaea are holophyletic or paraphyletic. We might always be left trying to weigh whether reversion of ribosomal proteins or indels is the more parsimonious scenario. However, several of these traits clearly exclude the root of eukaryotes from within crenarchaea and euryarchaea. Therefore, any hypotheses on the origin of eukaryotes that invokes specific taxa within those groups can be rejected with confidence (for a discussion of the many hypotheses on this subject see [58]). However, it may be possible those scenarios could be reworked to fit thaumarchaeota or korarchaeota once they are sampled better.", "Now that we have argued for the true distance between the superkingdoms we can begin to address how it could be bridged. From our discussion above we feel we must be cautious about declaring the debate closed on the holophyly of archaea. Therefore, we are more interested in traits shared between a group of bacteria and all archaea than those shared with eukaryotes. Cavalier-Smith has presented fourteen reasons why the root of the neomura is probably within or next to actinobacteria [16]. Two of these traits are shared between actinobacteria and neomura, but the other twelve are only shared between eukaryotes and actinobacteria. Under this scenario these twelve traits would be lost in the ancestor of archaea, which implies archaea are holophyletic. We will review these fourteen traits, and argue that placing the archaeal ancestor in the bacilli makes more sense. We use the term neomura to refer to the clade of eukaryotes and archaea, but when we refer to the neomuran hypothesis we refer to Cavalier-Smith's rooting of that clade in the actinobacteria.\nThe first piece of evidence that places the neomuran root near actinobacteria is the proteasome. Actinobacterial and archaeal 20s proteasomes are well separated on phylogenetic trees which implies the presence of the 20s proteasome across these groups is not the result of recent horizontal transfers. Recently 20S proteasomes have also been found in sequenced genomes from verrucomicrobia [59] and leptospirillum metagenomic sequences [60]. This somewhat weakens the actinobacterial argument for ancestry, as archaea could have inherited a proteasome from these other groups. However, these recent findings do not weaken the polarization argument; it just excludes the root from these additional groups.\nThe second trait apparently shared between actinobacteria and all neomura is the post translational addition of CCA to the 3' end of tRNAs. The gene performing that function in archaea is tRNA CCA-pyrophosphorylase (protein cluster PRK13300 [61]). One of the domains, PAP/Archaeal CCA-adding enzyme, does not hit any bacteria in the Superfamily database [62]. 
Since the CCA addition is performed by nonhomologous enzymes this is not strong evidence for rooting neomura. There is also an analogous enzyme conserved across bacilli (protein cluster PRK13299). Even if archaea inherited this function from their bacterial ancestors, it is not clear which gram-positive group provided it.\nNow we must address the remaining dozen traits shared between actinobacteria and eukaryotes. Although there were initial reports of sterol synthesis in the actinobacteria [63,64], the latest work has found no evidence for a complete pathway [65]. The authors report that the few cases of the full pathway in bacteria (all outside the actinobacteria) are probably the result of horizontal transfer. However, they find several sterol synthesis enzymes are present in many actinobacteria. They conclude these are probably the result of a transfer from eukaryotes, but this is not supported by their trees, which show good separation between eukaryotes and actinobacteria. Several sterol enzymes appear to have been inherited vertically from actinobacteria to eukaryotes. This is certainly consistent with Cavalier-Smith's hypothesis. This is a good example of the dangers of closing the debate on the position of the root too soon. Their trees clearly support an alternative hypothesis, but that data is buried in the supplemental material without discussion of the opposing view.\nInitial reports also claimed the presence of chitin in actinobacteria [66]. However, there is no gene for chitin synthase in actinobacterial genomes. Several of them have chitinase which breaks chitin down. Also, chitin is found in metazoa and fungi, but not in archaeplastida which implies this enzyme was not in LECA.\nIt is true that actinobacteria have many serine/threonine signaling systems related to cyclin-dependent kinases [67]. This would be a key preadaptation to the cell cycle. However, it has recently been shown that Bacillus subtilis also has an extensive network of such regulation [68]. Therefore this line of evidence is consistent with either gram-positive group being ancestral to neomura.\nPhosphatidylinositol is an interesting case. Recent work on this subject confirms the presence of phosphatidylinositol synthase as well as the eukaryotic form of cardiolipin synthase in many actinobacteria [69]. These enzymes are paralogs. We could not create a quality tree for this superfamily because the alignment was of low quality. However BLAST searches showed a good separation between prokaryotic and eukaryotic sequences that implies this is not the result of a recent HGT. It is difficult to determine exactly what family each prokaryotic homolog belongs to, so it is hard to say with certainty what other groups of bacteria have phosphatidylinositol. It is certainly possible eukaryotes inherited phosphatidylinositol from actinobacteria.\nSome actinobacteria do have an α-amylase with similar primary structure to the form found in metazoa, but a recent comprehensive study found several other bacteria that did as well [70]. The authors concluded this was probably the result of a horizontal transfer due to their position in the phylogenetic tree as well the extremely sparse distribution of this form in actinobacteria. Therefore, this is not evidence for actinobacterial ancestry of the neomura.\nThe fatty acid synthetase (FAS) complex found in actinobacteria is unique among bacteria in that it is the same form as found in some fungi [71]. 
These fungi have the FAS complex split into two genes, but actinobacteria have it fused. Our phylogenetic trees are consistent with actinobacterial ancestry (Figure 3). However, the distribution of the fungal type complex in eukaryotes does not conclusively prove that this enzyme had to be in LECA. The only group outside the Fungi with this complex are stramenopiles. However, the animal type FAS is also present in some alveolata, so there could be some functional displacements. Actinobacteria probably played a role in the evolution of this enzyme in eukaryotes, but not necessarily via the neomuran hypothesis.\nMaximum likelihood tree of fungal type Fatty Acid Synthase (FAS) complex. This tree implies eukaryotes did not get FAS from a recent transfer, but it is also not clear whether or not it was in LECA. Circles indicate the split form of the gene. This gene is split in two different places in the fungi indicated by the yellow and red circles.\nThe argument that the exospore structure of actinobacteria could be a precursor to eukaryotic spore structures seems sound [72], but we are unable to locate a list of proteins involved in exospore formation. Without specific proteins homologs we cannot begin to evaluate this with bioinformatics. However, this argument becomes irrelevant if one invokes a viral ancestor of the nucleus as in [73].\nCavalier-Smith has also suggested that the C-terminal HEH domain found in the Ku proteins of some actinobacteria is ancestral to the HEH domain found in the eukaryotic Ku70 protein. However, the sequence analysis in [74] conclusively demonstrates eukaryotes did not inherit the HEH domain from actinobacteria. This domain is very compact and common. Therefore, it is not out of the question that it was recruited twice to the C-terminus of similar structures. Consequently we do not take this as evidence that eukaryotes inherited Ku from actinobacteria.\nSeveral traits initially listed as unique to actinobacteria are now found in enough other bacterial groups to now be considered ambiguous markers. Actinobacteria do have tyrosine kinases, but they have recently been put into a bacterial specific family, BY-kinase [75]. This family is present across actinobacteria, firmicutes, and proteobacteria, so it is does not support an actinobacterial rooting exclusively in the neomura. Many groups of bacteria have HU (histone H1 homologs) according the Superfamily database. This protein is relatively short, so we should not expect sequence to resolve its history. It is possible this protein was inherited from actinobacteria, but there are too many other possibilities to state that with certainty. Calmodulin-like proteins are now found in many bacteria, so this trait is not specific enough to root neomura near actinobacteria as Cavalier-Smith now admits [8]. The Superfamily database reveals that trypsin-like serine proteases are present in many groups of bacteria, but absent in archaea. This appears to be another trait that is too general to be useful for rooting neomura.", "Skophammer et al. compiled several reasons to argue archaea are derived from bacilli [12]. There is an insert in ribosomal protein S12 that is present in archaea and bacilli (and maybe chloroflexi). Skophammer et al. conclude this indel is derived, but we argue elsewhere this polarization is flawed [11]. The insertion appears well conserved between archaea and bacilli regardless of whether it is ancestral or derived.\nSkophammer et al. 
also note that there is a shared deletion between firmicutes and archaea in PyrD. Our own work strengthens this connection by considering the quaternary structure of PyrD. The form that has the deletion also has an additional subunit, PyrK. The sequence and structure of the firmicute PyrD 1B are both shared by archaea. Our phylogenetic analysis of this protein implies this is not the result of recent horizontal transfers [11].\nSkophammer et al. note that many enzymes involved in the biosynthesis of unique archaeal membranes have previously been found in firmicutes [21]. The isoprenoid lipid precursors of archaeal membranes are made via the mevalonate pathway, which is five enzymes long. The KEGG database [76] reveals the entire mevalonate pathway is present in several bacilli as well as some actinobacteria (KEGG module M00191). The unique stereochemistry of archaeal membranes is determined by the enzyme geranylgeranylglyceryl phosphatase. Homologs of this enzyme are present in bacilli (protein cluster PRK04169), but appear to be absent in actinobacteria. The authors of an analysis of archaeal membrane biosynthesis propose that archaea became genetically isolated from bacteria once their membrane chemistry changed [77]. They suggest that archaea branched early from within bacteria, but their hypothesis is also consistent with a later gram-positive origin. Cavalier-Smith's own analysis [8] suggests that eukaryotic enzymes that make n-linked glycoproteins, which are necessary for the loss of peptidoglycan, evolved from the firmicute specific gene EspE. Therefore, for several reasons, the firmicutes are the bacterial group most preadapted to gain archaeal membranes.\nHomologs to ribosomal proteins L30e and L7ae are found across firmicutes. This is evidence of the link between firmicutes and archaea. Pfam [78] shows this family in several other groups, but many firmicutes contain two copies of this family. One of these paralogs has been characterized as a ribosomal protein, but neither is essential [79]. We constructed phylogenetic trees to see if they are consistent with vertical inheritance (Figure 4). There is good separation between the paralogs in firmicutes, which implies the duplication occurred early in firmicutes. All archaeal and eukaryotic genomes contain at least two copies of this family. The phylogenetic tree of the archaeal and firmicute sequences places the firmicute paralogs between the archaeal paralogs. The firmicute sequences are paraphyletic, albeit with very weak support. If these proteins are the result of independent duplications the archaeal sequences should cluster together, not appear on opposite ends of the tree. However, it is possible one of the archaeal sequences evolved rapidly after duplication.\nAlignment of L7Ae paralogs in archaea and firmicutes. This tree is consistent with a firmicute origin for two archaeal ribosomal proteins.\nOne of the paralogs in Bacillus subtilis was found to localize to a different portion of the ribosome than either of the archaeal paralogs [79]. The proteins would not only have to jump superkingdoms for a transfer to occur, they would also have to bind to a different region of the rRNA without interfering with ribosome assembly. We argue it would be less disruptive for a protein already present to gradually bind a different piece of rRNA. The separation between the superkingdoms in the phylogenetic trees also argues against HGT. If this is the result of vertical inheritance only two possibilities explain it. 
Either the firmicutes are ancestral to archaea, or the root lies between archaea and firmicutes. Our polarization of PyrD 1B's quaternary structure eliminates the latter rooting as a possibility. Thus this tree appears to support a firmicute ancestry for archaea, although it may just be the result of rapid evolution of structures in different contexts in the ribosome.
As discussed above, almost all the firmicute genomes have a unique Holliday junction resolvase, RecU, which is only found sparsely in other bacterial groups. It is homologous to the archaeal Holliday junction resolvase, Hjc [46]. Therefore the firmicutes have a DNA repair mechanism more similar to archaea than any other bacterial group.
Hsp90 is missing in all archaeal genomes, so its presence across eukaryotes and bacteria implies it was inherited from the mitochondrial ancestor. However, a detailed analysis of this family did not reveal a relationship between eukaryotic and proteobacterial sequences [80]. Instead, the eukaryotic sequences branch within the gram-positive bacteria. The authors argue this supports the classical neomuran hypothesis, but eukaryotes are sisters to firmicutes rather than actinobacteria in that tree (albeit with moderate support). This would slightly favor firmicute over actinobacterial ancestry. In either case it supports the view that the archaeal ancestor lost Hsp90.
There are several traits present in either firmicutes or actinobacteria that argue they are ancestral to either eukaryotes or archaea. The only trait that argues actinobacteria are ancestral to the neomura is the proteasome. Several more traits make compelling arguments that actinobacteria are ancestral to eukaryotes, but certainly not the dozen traits listed in [16]. In Cavalier-Smith's most recent version of the neomuran hypothesis he concludes firmicutes contributed a significant number of genes to the neomuran ancestor [81]. He proposed neomura originated as sisters of actinobacteria, and that both of these taxa are descendants of firmicutes. That proposal is dependent on his argument that actinobacteria are derived from firmicutes, which is one of the less developed ideas in [8]. We believe he is wrong in his assertion that our analysis of the indel in ribosomal S12 [11] does not support firmicute ancestry of archaea. It is only shared (and well conserved) between bacilli and archaea regardless of the polarization of that indel. Cavalier-Smith is also not aware of the arguments about L7Ae paralogs and RecU we present here for the first time. So we are left with a stronger list of reasons supporting firmicute ancestry and a weaker list for actinobacterial ancestry. However, there are still some key eukaryotic proteins that appear to have descended from actinobacteria. We will try to reconcile this apparent anomaly.
The peroxisome is an organelle with a single membrane, found across eukaryotes, that has various oxidative functions including the synthesis of some lipids [82]. Peroxisomes have been observed to divide independently of the rest of the cell, which initially led some to question whether they had an endosymbiotic origin [83,84]. Two recent studies both concluded that the peroxisome was likely derived from the endoplasmic reticulum [85,86], which led those initial proponents of peroxisomal endosymbiosis to abandon that idea.
However, [85] found that many peroxisomal proteins likely originated in cyanobacteria, α-proteobacteria, or actinobacteria.
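A rough version of the kind of best-hit comparison used in these studies can be scripted from standard BLAST tabular output. The sketch below is an assumption-laden illustration, not the published pipeline: the input file name and the accession-to-group lookup are placeholders, and summarizing each peroxisomal protein by the taxonomic group of its best-scoring prokaryotic hit is only a crude stand-in for a proper phylogenetic assignment.

```python
import csv
from collections import defaultdict

# Hypothetical inputs: BLAST tabular output (-outfmt 6) of peroxisomal
# proteins searched against prokaryotic proteomes, and a lookup from
# subject accession to taxonomic group.
BLAST_TSV = "peroxisome_vs_prokaryotes.tsv"      # placeholder file
GROUP_OF = {"WP_000000001.1": "actinobacteria"}  # placeholder mapping

best_hit = {}  # query -> (bit score, group of the best-scoring subject)
with open(BLAST_TSV) as handle:
    for row in csv.reader(handle, delimiter="\t"):
        query, subject, bitscore = row[0], row[1], float(row[11])
        group = GROUP_OF.get(subject, "other")
        if query not in best_hit or bitscore > best_hit[query][0]:
            best_hit[query] = (bitscore, group)

by_group = defaultdict(list)
for bitscore, group in best_hit.values():
    by_group[group].append(bitscore)

for group, values in sorted(by_group.items()):
    mean = sum(values) / len(values)
    print(f"{group:20s} best hits: {len(values):4d}  mean bit score: {mean:7.1f}")
```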
The authors suggest that the proteobacterial genes were probably transferred from the mitochondria, which is consistent with observations that mitochondrial genes are often retargeted to other organelles [87]. However, recent work argues for an endosymbiotic origin of the peroxisome from an actinobacterium [88]. These latter authors demonstrate that at least two proteins imported into the peroxisome are of actinobacterial origin, and that the peroxisomal proteome has higher average BLAST scores to actinobacteria than to any other group of prokaryotes. They argue that the retargeting of mitochondrial proteins after their genes migrate to the host's genome is easier than de novo targeting of peroxisomal proteins. They propose this masks the true history of the peroxisome.
The literature proposes two scenarios to explain the origin of the peroxisome: either the peroxisome was an endosymbiont, or actinobacteria were not endosymbionts. Clearly there is a third possibility: there was an actinobacterial endosymbiont, but the peroxisome is not a descendant of that membrane. That is to say, genes of an endosymbiotic origin were targeted into the peroxisome, but historically they are foreigners there. How could this be? A primitive peroxisome derived from the endomembrane system would be beneficial because it would separate dangerous oxidative chemistry from the rest of the cell. Proteins would be targeted to the organelle with relative ease since that system would already have been developed through mitochondrial endosymbiosis. Genes would be copied from the actinobacterial endosymbiont to the host genome (but not necessarily lost in the actinobacterium), and their products would then be imported into the peroxisome. This would be advantageous because some of these reactions would do better in that specialized environment than in their original host. Potentially there would be less cost involved in maintaining an organelle that already existed versus an entire endosymbiont. Once enough genes were present in the host, the actinobacterial endosymbiont would essentially be a parasite, and complete gene loss would be beneficial.
Contrast the peroxisome to organelles such as plastids and mitochondria, which retained both genomes and membranes long after they became organelles. Some have questioned why some organelles retain any genes at all [89]. These authors note that most genes retained in plastids and mitochondria encode membrane-spanning proteins involved in core photosynthetic and respiratory systems. They agree with an earlier proposal that these proteins must be kept in the organelle to be able to quickly respond to, and balance, redox gradients [90]. In other words, plastids and mitochondria have retained membranes and genes because their functions are centered on membrane-based chemistry. The stripped-down endosymbionts perform these functions better than a novel organelle initially could, so they are left with a few essential genes and the membranes they inherited from endosymbiosis. These genes come with a high cost because the organelles need to import the machinery to translate them as well as the machinery to replicate the genes that encode them. Therefore one can hypothesize that other endosymbionts whose functions are not as membrane-centric could be replaced by organelles that are not of endosymbiotic origin. Unfortunately, plastids and mitochondria have shaped our expectations that endosymbionts will leave both membranes and genomes behind.
We believe this is an overly simplistic expectation.

We argue that actinobacterial endosymbiosis accounts for the traits shared between eukaryotes and actinobacteria, as well as the phylogenetic trees that place actinobacteria as sisters of the peroxisomal proteins. The fact that numerous mitochondrial proteins are imported into the peroxisome is evidence this endosymbiosis occurred after mitochondrial endosymbiosis. This would reconcile the apparently conflicting signals in terms of which gram-positive group is ancestral to archaea and eukaryotes. We find this scenario more reasonable than invoking an extinct lineage of gram-positives that has all the traits listed in Table 5 and Table 6. However, if a genome is sequenced that contains the actinobacterial-specific traits as well as the firmicute-specific traits listed here, we would have no need to invoke endosymbiosis. It is also possible to reconcile the canonical rooting with the traits shared by actinobacteria by invoking this endosymbiotic hypothesis.

Summary of data used to support actinobacterial ancestry of archaea. Many of these traits argue for an actinobacterial role in eukaryogenesis but not the origin of archaea. This list of informative characters is taken from [16].

Summary of data that supports bacilli ancestry for archaea. The bacilli are more similar to archaea in terms of DNA repair, ribosome structure, and lipid metabolism than any other group of bacteria.

Now that we have argued for the true distance between archaea and bacteria, the time has come to cross that desert. As we have asserted above, this is a unique event in evolution, so we must properly set the stage. The selective pressures associated with extreme environments and antibiotic warfare are ancient; however, they cannot cause a revolution on their own, so a significant relaxation in selective pressure is necessary. We argue that viral endosymbiosis could relax selective pressure enough to start such a revolution.

Koonin has observed that the PolB family of polymerases is the most common DNA polymerase in viruses [91]. Koonin et al. also observed that archaeo-eukaryotic DNA primase was a hallmark viral protein [19]. This hints at some connection in DNA replication between archaea, eukaryotes, and viruses. We examined the distribution of all protein families in Pfam [78] that originated at the root of archaea and eukaryotes to see if this connection could be extended. We defined Pfam families that were present in at least 90% of archaeal genomes (46 at the time) and 90% of eukaryotic genomes (35 at the time) and in less than 50% of bacterial genomes (939 at the time) as originating at the root of archaea and eukaryotes. A 90% cutoff is strict enough to imply that the protein was present in LAECA, while a 50% cutoff is loose enough to accommodate recent horizontal transfers. Most of these Pfam families are well below the 50% cutoff in bacteria.

By this definition there are 74 Pfam domains that originated in LAECA; 24 of these are found in at least one viral genome (Table 7). On average each of these Pfam domains is present in 36.38 viral genomes (14.36 if one excludes PolB). As an approximate measure of the significance of this result we took 10000 random samples of 74 Pfam domains that are found in at least one cellular genome to see how often one finds 24 or more in at least one viral genome. None of the random sets had that many viral Pfam domains, which implies this set is significantly enriched in viral proteins. A sketch of this filtering and resampling procedure is given below.
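To make the selection criteria and the resampling test concrete, the sketch below shows one way the procedure just described could be implemented. It is a minimal illustration, not the authors' actual code: the input structure and field names are hypothetical, while the 90%/50% cutoffs, the set size of 74, and the 10,000 random samples come from the text. The same function, with the cutoffs swapped between the bacterial and archaeal/eukaryal columns, yields the LBCA control set discussed next.

```python
import random

# Hypothetical input: for each Pfam family, the fraction of archaeal,
# eukaryotic, and bacterial genomes containing it, plus the number of
# viral genomes in which it occurs. Field names are illustrative only.
# pfam_presence = {"PFxxxxx": {"arch": 0.95, "euk": 0.97,
#                              "bact": 0.12, "viral_genomes": 3}, ...}

def laeca_families(pfam_presence, arch_cut=0.9, euk_cut=0.9, bact_cut=0.5):
    """Families inferred to originate at the root of archaea + eukaryotes (LAECA)."""
    return [fam for fam, p in pfam_presence.items()
            if p["arch"] >= arch_cut and p["euk"] >= euk_cut and p["bact"] < bact_cut]

def viral_count(pfam_presence, families):
    """Number of families in the set found in at least one viral genome."""
    return sum(1 for fam in families if pfam_presence[fam]["viral_genomes"] > 0)

def enrichment_pvalue(pfam_presence, families, n_samples=10000, seed=1):
    """Empirical p-value: draw random sets of the same size from all families
    found in at least one cellular genome, and count how often a random set
    contains at least as many virus-associated families as the observed set."""
    rng = random.Random(seed)
    observed = viral_count(pfam_presence, families)
    universe = list(pfam_presence)  # families present in >= 1 cellular genome
    hits = 0
    for _ in range(n_samples):
        sample = rng.sample(universe, len(families))
        if viral_count(pfam_presence, sample) >= observed:
            hits += 1
    # Add-one correction so an empty tail does not give a p-value of exactly zero.
    return observed, (hits + 1) / (n_samples + 1)
```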
However, we must keep in mind that our sampling of the viral world is still highly biased (discussed in [91]) and that viral genomes evolve rapidly. Viral genomes are sampled so poorly that none had the MCM domain from Pfam, even though it is found in a prophage region of some bacilli as discussed above. Further, 18 of the remaining Pfam proteins that originate in LAECA are ribosomal, which we assume are less advantageous for viruses to encode than the DNA replication machinery (although we did find several ribosomal proteins in viruses in this set).

Pfam proteins that originated near LAECA and their distribution in the viral world. The Pfam families that originated in LAECA are more common in viruses than those that originated in LBCA.

We can also verify whether this result is significant by looking at the set of proteins that would be present in LBCA (the last bacterial common ancestor), but not LAECA, under the same definition, that is, Pfam domains present in at least 90% of bacterial genomes and less than 50% of archaeal and eukaryal genomes. There are 106 such Pfam domains and 15 of them are found in at least one viral genome (p-value 0.2457). Each of those 15 is in an average of 8.33 viral genomes. It should be noted that this is an underestimate of LBCA's content since there are so many parasitic bacteria with genomic sequences available. However, in general viruses share more Pfam domains with LAECA than with LBCA.

Koonin proposes, based on PolB's distribution, that archaea arose from an acellular ancestor and then retained the more ancient polymerase [91]. We find this view hard to reconcile with the three independent arguments for the derived nature of archaea provided above. Forterre has argued that DNA originated from a viral endosymbiosis in each of the superkingdoms [17], but our data argue against that scenario for the origin of bacteria. We propose the alternative hypothesis that a viral endosymbiosis occurred in bacteria and gave rise to archaea. This virus would supply the missing link in terms of DNA replication machinery between the prokaryotic superkingdoms. We think this would have to be an endosymbiosis and not just a horizontal transfer, given the distribution and interdependencies of these systems in cellular life.

To a first approximation there are three components that define the propensity of a genome to become permanently damaged. The first is the environment. Many different extreme environments are damaging to DNA, including radiation, high temperature and desiccation [92]. Second is the size of the genome. The larger the piece of DNA, the more likely damage will occur, and the more it must be remediated. Third is the state of the active repair system. If active repair is poor, even rare damage events will eventually accumulate. Therefore, we argue that systems that are extreme in any one of these three components must routinely deal with DNA damage during replication.

Archaea, in general, fit the description of extremophile better than any other major taxon. It has been proposed that the unifying trait of all archaea is adaptation to chronic energy stress [93]. The author argues that archaea outcompete bacteria in niches that are under chronic stress. Thus archaea have become successful in dealing with environments that other superkingdoms cannot handle. The author noted that archaea do better in environments that are consistently extreme, and are outcompeted by bacteria in environments that fluctuate.

A corollary of chronic energy stress is chronic DNA damage.
Many of the extremophilic environments archaea have made home severely damage DNA. On the other hand, bacteria may face occasional stressful situations and require DNA repair. Therefore it is disadvantageous for bacteria to have their repair systems on all the time. Conversely, archaea need to constantly repair their DNA, so it would make sense if the line is blurred between their replication and repair systems. An example of this prepare for the worst strategy is the unique ability of PolB to read ahead and stall replication if a uracil is encountered in archaea [94].\nIn terms of large genomes eukaryotes win hands down (see figure 1 in [95]). A polymerase is more likely to encounter damage somewhere in the replication of these large genomes than a prokaryote with a smaller genome in a similar environment. This is supported by evidence that eukaryotes use a separate repair system during replication of the large non-transcribed regions of their genome [96,97].\nWhat other situation besides chronic DNA stress and large genome size would put similar pressure on the DNA replication machinery? We argue, somewhat counter intuitively, that a total lack of active DNA repair systems would create a similar situation. Again it is optimal for the replicative system to expect to encounter damage. Viruses fit that description perfectly as they are unable to actively maintain their genomes without their host.\nIf the repair systems were turned on more and more of the time, the main replicative system would become free to drift. Under this scenario the ancestors of archaea could mix and match bacterial repair and replication proteins with several molecular innovations and some transfers from the viral endosymbiont. The end result could be a system that is more robust to chronic stress. The canonical rooting implies that the components of the replication machinery that are homologous, but not orthologous, were independently recruited from proteins that initially processed RNA. Under either scenario the same amount of molecular innovation is required. The question then becomes, is it easier to innovate function in a RNA based organism or a DNA based organism under relaxed selective pressure? We argue that the difference cannot be quantified, as both scenarios predict exactly what we observe: some proteins are orthologs, some are homologs, and some are unrelated. Therefore the way to tell the difference between these scenarios is independent lines of evidence. The polarizations presented above imply the bacterial repair machinery was recruited to become the replication machinery of archaea.\nIt is also tempting to speculate that many of the features shared between viruses and the eukaryotic nucleus described in the viral eukaryogenesis hypothesis [73,98,99] could be extended to this hypothesis. Bell notes many similarities between nuclei and viral replication factories. One can imagine the ancestry of these traits going back to LAECA with some being lost in archaea, and others not developing until the root of eukaryotes. This is only consistent with our hypothesis if archaea are holophyletic, but for now it is certainly worth considering.", "So far we have demonstrated that there is robust evidence that archaea are a derived superkingdom. We have shown the bacterial ribosome could have enough plasticity to evolve into an archaeal one. We have presented evidence that there is some link between DNA replication in archaea, eukaryotes and viruses that could be the result of endosymbiosis. 
Now we will try to combine these into the larger story of why a bacterium would evolve into an archaeon.

As we discussed above, we feel the greatest weakness of Gupta's invocation of antibiotics is that it does not provide sufficient evolutionary pressure to cause a revolution on the scale necessary to create the differences between the prokaryotic superkingdoms. Observations of the vast differences in DNA replication machinery and evidence of a viral endosymbiosis in a bacillus before LAECA will set the stage for our subsequent hypothesis.

In the traditional antibiotic battle the gram-positives are capable of evolving resistance to each other. This leads to what is commonly referred to as a Red Queen game [100]. Neither group ever really gets ahead in the long-term war, as each defensive innovation is matched by an offensive one. But that does not mean there are never winners in battles on shorter time scales. Winning a battle is not a good thing in the long run. The winners will increase in population size and consume more of an environment's resources. The corollary is that they become a better target for less dominant species to kill. If a species evolves a more resistant ribosome it just puts more pressure on the rest of the community to hit other targets in that species.

One can imagine a firmicute deeply entrenched in such warfare endowed with the gift of a complete and novel replication system from a virus. This is supported by the distribution of viral Pfam proteins discussed above. The virosphere contains so much diversity that even rare combinations of genes would eventually end up in the same capsid at the same time, as long as they have some advantage to any virus. It would be an incredibly rare event for the virus to be just right for the bacterium to take up the entire replication system. And thus the stage is partly set for why the revolution happened but once.

The core of the DNA replication system does not appear to be as common an antibiotic target as the ribosome or RNA polymerase. A search of DrugBank revealed no antibiotics that target PolC [101]. However, there are several that target gyrase. Why the difference? Inhibition of PolC just stops a population from growing, but the damage induced by the loss of a functional gyrase invokes an SOS response and leads to cell death. There are probably natural antibiotics that target PolC, but they would not be as effective as the numerous ones that target the ribosome and RNA polymerase. Thus the introduction of PolB into the bacillus genome would not be enough to start the revolution. This is supported by the fact that many proteobacteria use PolB as a repair enzyme, the result of an HGT that did not start a revolution.

As discussed above, there are no bacteria that have archaeal histones. This strongly implies they are only compatible with the archaeal-eukaryal replication machinery. Thus we argue that viral endosymbiosis was a relaxation in selective pressure that, in combination with pressure from antibiotics targeting gyrase, led to the innovation of histones. This differs in a non-trivial way from Cavalier-Smith's hypothesis that the numerous differences between the DNA-handling machinery of bacteria and archaea are the result of histones dramatically changing the way in which this machinery could interact with DNA [16]. He argues this was an adaptation to thermophily.

However, Forterre has presented several arguments against Cavalier-Smith's scenario.
He argues that the bacterial histone-like proteins that have replaced the archaeal ones in Thermoplasma acidophilum work just fine with the archaeal replication machinery [17]. He also notes that many hyperthermophilic bacteria do not use histones. At the same time, hyperthermophilic bacteria exchange many genes with archaea [102]. Therefore the standard bacterial replication machinery could probably not tolerate the invention of histones even under selective pressure from an extreme environment. Euryarchaea appear to have gained DNA gyrase via several independent horizontal transfers from bacteria [47]. The fact that several euryarchaea retain both histones and gyrase is evidence against Cavalier-Smith's idea that gyrase became totally redundant with the advent of histones. That view is weakened further given that gyrase was found to be essential in several of those genomes [103].

Since pressure from thermophily alone could not force histone innovation, we invoke the viral endosymbiont hypothesis. In other bacteria an alternative system to gyrase would not be much of an advantage, as getting rid of gyrase would just put more pressure on targets like the ribosome and peptidoglycan synthesis. However, as discussed above, the bacilli have several unique ribosomal proteins. That means they could already have some adaptations and preadaptations to antibiotic warfare that make them a difficult target to hit. As discussed above they have EpsE [104], which could preadapt them for functioning without peptidoglycan. Once gyrase was no longer a useful target they could quickly lose peptidoglycan in their cell walls. The loss of these two major targets would be a huge advantage and increase pressure on the ribosomes as a target.

At this point any change to the ribosome would be highly beneficial. One can imagine a Red Queen game where the neomura have a distinct advantage over gram-positives but need constant innovation in their ribosomes to maintain that advantage. The observation that many archaeal-eukaryal ribosomal proteins bind Zn would be consistent with pressure to ensure proper assembly despite the antibiotics. This is supported by the fact that bacterial hyperthermophiles, whose environment interferes with ribosomal assembly, have more Zn binding sites than most other bacteria [35].

Thus the initial neomura would have an advantage in antibiotic warfare as well as the ability to replicate DNA even in the presence of damaging pressures. Their genomes could be much larger than those of extant prokaryotes. A large, robust genome would allow the neomura to be oligotrophic and handle extreme environments. This would put them in direct competition with many bacteria in diverse environments. Their larger genome size would allow for more gene duplication, which could lead to structural innovations like the ribosomal proteins found in neomura but not bacteria.

The strongest support for this hypothesis comes from the antibiotic target site most studied in the ribosome: the 23S RNA between ribosomal proteins L22 and L4. L22 and L4 are conserved across the superkingdoms. They bind to the same positions on the ribosome in all three superkingdoms. There are numerous crystal structures, from both prokaryotic superkingdoms, with antibiotics bound in these sites [105,106]. These studies demonstrated that nine different antibiotics that bind strongly to this site in bacteria bind with much less affinity in archaea. A2058 (E. coli numbering) is one of the sites on the 23S RNA directly involved in binding these drugs.
A2058 is conserved across 99.4% of sequenced bacterial 23S rRNAs [107]. The site is almost universally guanine in archaea and eukaryotes. The mutation A2058G makes many bacteria macrolide-resistant [108], while the reverse mutation can make archaea macrolide-sensitive [109]. These differences in antibiotic affinity are well conserved across the divide between bacteria and neomura, and appear to be the result of intense selective pressure from antibiotics.

Even though bacteria are able to gain resistance through a similar mutation, it is probably not fixed because there is a slight decrease in fitness that can be reduced with other mutations [107]. If there were constant pressure on that site, other mutations and changes in structure could relax those costs and fix that position. That would be completely consistent with the scenario outlined here. If the divide between archaea and bacteria is primordial, it is much harder to explain this difference. Ribosomal proteins L22 and L4 must have been present in LUCA. If the ancestor of archaea was an extremophile they should not have been in competition with enough bacteria to need the resistance inferred by this mutation.

It would be tempting to speculate that this mutation is an adaptation to thermophily or some other extreme environment, to answer this nagging issue of antibiotic pressure at the root of archaea. This idea can be tested by examining the position in bacterial hyperthermophiles. In both the hyperthermophiles Aquifex aeolicus and Thermotoga maritima this position is 100% conserved as adenine, as it is in their thermophilic relatives (Additional file 4; Figure S4). The thermophile Thermus thermophilus has two copies of the 23S RNA where usually both have adenine at that position unless they are under selective pressure from antibiotics [110]. Thus the only explanation that appears to hold water is some extreme antibiotic pressure at the root of archaea.

The mark of antibiotic pressure can also be seen in the proteins that would have been lost at the origin of archaea. We searched Pfam and DrugBank for antibiotic targets that are conserved across bacteria but were clearly not in LACA. Eight of these are listed in Table 8. Several of these appear to have been horizontally transferred to archaea, such as DNA gyrase. That is consistent with the scenario under discussion because once archaea were no longer under strong antibiotic pressure these systems would be free to become essential again. It would be interesting to look at each of these eight predicted losses and see what preadaptations and environmental conditions can make them non-essential.

Drug targets found across bacteria that were probably not in LACA. We argue these proteins were lost in the archaeal ancestor in response to a unique antibiotic warfare scenario. Targets in italics appear to have transferred to archaea after LACA.

Examples of drug target sites with resistance in archaea. These drugs bind target sites present in both bacteria and archaea (or eukaryotes), but with very different affinities. We argue this is a molecular fossil of the unique antibiotic war that resulted in the origin of archaea.

Why would this war end and who would the winners be? To address this question we will invoke the two novel niches that are central to the neomuran hypothesis: phagotrophy and hyperthermophily [16]. The oligotrophic neomura with large genomes would be able to form many symbioses with prokaryotes because of their diverse metabolism.
Such an environment would favor the preadaptations to phagotrophy discussed in [111]. This could lead to several endosymbiotic events in a short span of time. These would force the nucleus to become a better separator in dealing with the selective pressures proposed by several hypotheses: invasion of introns [112], differing metabolisms [113] and ribosome chimerism [114]. The successful phagotroph would eat prokaryotes, so at first it would be to the advantage of the prey to try to kill the neomura. However, that is not the optimal strategy for dealing with phagotrophs. It is much better to persist inside them and eat them from the inside out, as can be seen from the numerous bacterial taxa that have independently evolved the ability to infect eukaryotes. Once it is possible to infect the phagotrophs, killing them with antibiotics becomes counterproductive. And thus a truce (or new war) would be declared on one front of the great antibiotics war.

The early eukaryotes would outcompete and eat many of the initial neomura, but would be at a disadvantage in extreme environments as they began to rely more on their cytoskeletons and larger cell size. It would be easier for the neomura to drift into more extreme environments because of their DNA replication machinery. The proto-archaea would begin to emerge as the neomura began moving into previously unoccupied niches of extremophily. The conversion of their membranes would probably be the commitment step in the process. Once they began settling into environments that are constantly extreme they would be under pressure to streamline their genomes.

This scenario is consistent with a recent study on gene content evolution in archaea that concluded that most archaeal genomes have been streamlined from larger ancestral genomes [115]. The authors conclude that the archaeal ancestor could have had 2000 gene families, and that the extant archaeal groups were mostly created through differential loss. The authors note this repeated loss is consistent with the chronic energy stress state of the archaea described in [93], as specialization and loss are highly favorable in consistently extreme environments. The trend for euryarchaeal- and crenarchaeal-specific traits to both be present in the deep branching archaea is also consistent with the idea that archaea became specialized from a more generalized genomic ancestor. The redundancy in archaeal systems such as two replicative polymerases and two cell division systems could be remnants of the antibiotic war. That redundancy would become unnecessary once archaea committed to extremophily. It was noted in [2] that ribosomal protein loss is much more common in archaea than in bacteria. Our hypothesis implies that the distribution of ribosomal proteins in archaea is the result of independent losses once they were no longer under antibiotic pressure. Some of these novel proteins developed other roles to deal with extremophily, so they have been retained. The ancestral archaeal ribosome could very well have contained all of the proteins found in any archaeal genome, which would certainly weaken that aspect of the eocyte hypothesis.

What about the neomura? They would be stuck in the middle. The eukaryotes would be eating them, and they would still be in competition with bacteria. Their only viable strategy would be constant innovation, as they would not really have a novel niche. However, the wave caused by viral endosymbiosis would not go on forever.
There would be diminishing returns in terms of the resistance provided by the new innovations. Eventually the innovations would become a disadvantage as bacteria can then release compounds that only target the new systems. For instance aphidicolin inhibits DNA replication in archaea and eukaryotes but not bacteria by targeting their unique polymerase [116,117]. So the initial advantage the neomura have in terms of antibiotic resistance is not a stable niche. They were outcompeted from three sides, and thus we are left with a hole in the middle of the branches of the tree of life that often gets mistaken as the root. This scenario is summarized in Figure 5.\nSummary of our hypothesis. A viral endosymbiosis bridges the gap in DNA machinery between the superkingdoms. That triggered an antibiotic war that resulted in the birth of eukaryotes and archaea. The antibiotic war ended when archaea became extremophiles and the eukaryotes became phagotrophs. Traits shared between eukaryotes and actinobacteria are the result of endosymbiosis; the peroxisome is not the direct descendent of an actinobacterium.", "It is reasonable to ask how different archaea and bacteria would have to be for us to consider the rooting debate closed. If the genetic material were different between the superkingdoms it would be strong evidence of life being polyphyletic. If the genetic codes were somewhat different (even a few codons), that would certainly be evidence that both groups were primordial. If membrane proteins like SecY were not universally conserved, we would take that as evidence LUCA was acellular. The differences between the prokaryotic superkingdoms seem small if we consider that the last prokaryotic common ancestor had a membrane and a ribosome that used the same genetic code as all extant life. They have more in common than can be described by any tree.\nNone of the differences between archaea and bacteria are great enough to imply a transition between the superkingdoms is impossible. The three independent polarizations provide compelling evidence the transition occurred. A viral endosymbiosis in a firmicute host could be the relaxation in selective pressure that acted in combination with pressure from antibiotics to cause a revolution in terms of membranes, ribosomes, and DNA replication machinery. This is supported by the association between proteins found in viral genomes and those that appear at the root of archaea and eukaryotes. Gupta's hypothesis that antibiotics led to the differences between the superkingdoms is well supported by the data generated in the past decade.\nArchaea would certainly need to have several innovations in terms of protein structure. None of these are deal breakers. They are present in extant cells, and are not found in viruses. So they would have been an innovation at some point. There is no reason to assume all structural innovations happened near the root of the tree of life. Work from our group probing the relationship between ancient ocean chemistry and protein structure evolution is an example of one source of later innovations [36,37]. The modern ocean has several orders of magnitude more Zn than the ocean of LUCA's time [118]. Many Zn extant binding sites evolved after that transition. As noted above, several of the ribosomal proteins unique to the neomura have Zn binding sites. One of the innovations needed, PolD, is predicted to have two Zn fingers [39]. 
Increasing levels of Zn would not be the only factor, but it is another example of how revolutionary planetary changes shape evolution, as discussed in [119]. This observation makes sense if one places the origin of archaea after the great oxidation event, and considers the fossil record as a supplement to phylogenetic data. If we look at the details we may find the rhyme and reason behind the other novel structures at the root of archaea as well. There are also many structural innovations at the root of the eukaryotes [120]. The fact that archaea have many unique protein structures does not imply they are primordial.

As we have hinted above, one of the strengths of this hypothesis is that it does not rely on archaea being holophyletic. The scenario we have described implies holophyly, but if something conclusively proved archaea were ancestral to eukaryotes, it could be adapted. There is no explanation in the neomuran hypothesis for the traits shared between actinobacteria and eukaryotes besides vertical descent. The link Cavalier-Smith has justified could be the result of an endosymbiosis that did not leave its mark with an extant organelle. If archaea are paraphyletic it just means eukaryotes did not originate for the reasons we have hinted at here, but rather more along the lines of the traditional endosymbiotic hypotheses. It does not change the way we have to think about the origin and rooting of archaea, which is the central focus of this paper.

The hypothesis we have proposed can be refined with experiment. It seems that if one really wants to understand the likelihood of intermediates between archaea and bacteria, we need to understand why hybrid systems are unheard of. For instance, what other proteins need to be placed into a bacterium to allow it to use histones? How would an archaeon with a bacterial ribosome function? Trying to recreate the intermediates we believe went extinct would certainly give insight into their plausibility. It definitely would give better insight into the functional nuances of proteins with homologous function across the prokaryotic superkingdoms that appear to be highly resistant to horizontal transfer. It would be highly informative regardless of the location of the root of the tree of life.

We have drawn our data from diverse sources that are not usually the primary tools for studying evolution. Viruses have been getting more attention as players in shaping the tree of life recently [18], and better sampling will clarify the plausibility of the endosymbiosis we have proposed. However, essentiality and protein structure are non-traditional tools in this field. If essentiality experiments were performed across the ribosomes and DNA replication machinery of bacilli under different conditions, it could give us hints as to what selective pressures would need to be relaxed for the major transition to begin. Further study of natural antibiotics will also continue to increase the resolution of the hypothesis. We argue this line of experiment would be useful in its own right, since many of the firmicutes are pathogens that affect human health.

Of course in-depth sampling of thaumarchaeota and korarchaeota is going to be invaluable to this endeavor. If the ribosomal proteins that are currently missing in these groups are found in new genomes it would imply independent losses and make holophyly seem a little more appealing. The redundancy left in these genomes could just be the first surprise.
Deeper sampling may reveal redundancy in some of the archaeal-bacterial hybrid systems we discussed above. Finding a deep branching archaeon that uses a bacterial system would truly validate this hypothesis.\nThe proteasome is an important component of our hypothesis. If one roots archaea in bacilli it does not explain the presence of the proteasome across the entire superkingdom. We think Cavalier-Smith is correct in pointing it out as a link between archaea and actinobacteria, but in light of the other evidence raised here we do not find that argument convincing on its own. It is not clear what direction the proteasome was inherited. Even if the proteasome was horizontally transferred it does not weaken the polarization of archaea; it would still be a derived structure that was present at the root of archaea, so they must still be derived.\nThere are many instances in the literature where data are only presented under the canonical rooting, when in fact it is better explained by an alternative rooting. This quickly leads to circular logic; a hypothesis gets buried because no data supports it, data gets buried (in supplemental data or ad hoc invocations of HGT) because it does not fit with the canonical rooting. As an example look at how much data from Eugene Koonin's group we have cited in this work to support our hypothesis even though he has made it clear he thinks this rooting is unsupportable (see his reviews of [11,81] and this manuscript as well). We refute the view, and prevailing opinion, that there is no reasonable data to support a rooting within bacteria.\nFor instance, one of the biggest problems with the canonical rooting is the origin of cells. The term \"RNA world\" is sometimes invoked as a miracle that could explain anything that happened in evolution before cells look the way they do now. But one thing RNA definitely cannot do is to make transmembrane pores. This problem is addressed well by the obcell hypothesis of Blobel and Cavalier-Smith [121,122]. They propose proto-cells had very little going on inside them initially. Rather, they were collections of ribozymes tethered to the outside of a cell. The details of their proposal get around the problems of transmembrane RNA structures, but also implies the first true cell had a double membrane (like the gram-negative bacteria). Our point here is not about which hypothesis is correct; but that both of them are understood better in terms of the strengths and weaknesses of the other; throwing either out is essentially operating without a null hypothesis. The differences in these hypotheses was recently reviewed in [123].\nOur view is that the debate should not be closed, but we acknowledge the difficulty in making meaningful contributions to that debate due to the complexity of the problem. Clearly DNA or protein sequence data alone does not suffice to provide a satisfactory answer. Data from different scales of biology - structure, function, biochemical processes, cell morphology etc. as well as the fossil record and earth's environment at different time points have to be applied. Fortunately, in our view, the increasing availability of these data and the tools to manipulate that data promise to keep the debate alive and opinion will continue to see-saw as it has done for the past 33 years since the pioneering work of Woese.", "This novel combination of hypotheses on the origin of archaea is intended to keep the debate alive. We think Cavalier-Smith has the best method for rooting the tree. 
His attention to detail and multiple sources of data allow one to refine his ideas, as we have done here. Lake (and Gupta) has the right root for archaea, and despite our criticism, indel polarization is a useful methodology. Gupta has the right idea about antibiotics being a major force in this story, and of course his work on indels laid the groundwork for our own work as well as Lake's. Forterre is right about viruses being major players in this event. Of course many others have shaped our thoughts on this subject, but we have clearly taken the most from the work of these four. In so doing we have tried to demonstrate the value of using opposing ideas as null hypotheses to each other.

Have we provided a scenario that explains every detail of how archaea evolved out of gram-positive bacteria? We certainly have not. What we have presented is a variety of data that attempts to show it is a plausible and defensible stance. The emergence of archaea is an amazing event in the history of life, but deciphering its origin is not simple. However, if we close the debate we close our eyes to the large body of evidence that supports the polarization of this transition.

We have tried to provide a novel view on the origin of archaea that makes it clear very little is settled on this subject. We have provided a scenario that covers most of the transition between bacteria and archaea. The ideas we propose here can be refined with further experiment and more observations. The ideas are currently supported by diverse data. The study of these hypotheses will give us insight into several tangentially related topics that are worth pursuing, such as the subtleties of antibiotic resistance in the ribosome in Gram-positives. In summary, the hypothesis we present and support here reconciles many opposing viewpoints and strongly argues that archaea are derived from Bacilli.

Methods

Structural alignments were performed using CE [124]. Sequence alignments were performed using MUSCLE [125]. The alignments were visualized in Jalview [126]. Sequence trees were constructed using PhyML [127]. The essentiality of genes was determined by querying the Database of Essential Genes [31]. Drug targets were identified from DrugBank [101]. The distribution of those targets was examined using Pfam [78].
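As an illustration of how such a pipeline might be wired together, the sketch below drives MUSCLE and PhyML from Python. It is not the authors' code: the file names are placeholders, the command lines follow MUSCLE v3 and PhyML 3 conventions, and the exact options used in the study (substitution model, bootstrap replicates, format conversion) are assumptions.

```python
import subprocess

# Placeholder file names, not taken from the paper.
seqs = "family.fasta"   # unaligned sequences for one protein family
aln = "family.afa"      # MUSCLE output (FASTA alignment)
phy = "family.phy"      # PhyML input (PHYLIP alignment)

# Align the sequences with MUSCLE (v3-style flags; newer releases differ).
subprocess.run(["muscle", "-in", seqs, "-out", aln], check=True)

# PhyML expects PHYLIP input, so the FASTA alignment would first be
# converted (e.g. with Biopython's AlignIO); that step is omitted here.

# Build a maximum-likelihood tree with PhyML (v3-style options: amino-acid
# data, 100 bootstrap replicates; option names vary between releases).
subprocess.run(["phyml", "--input", phy, "--datatype", "aa",
                "--bootstrap", "100"], check=True)
```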
Competing interests

The authors declare that they have no competing interests.

Authors' contributions

REV conceived the study and analyzed the data. PEB assisted in writing the manuscript. All authors read and approved the final manuscript.

Reviewer's report 1

Patrick Forterre, Universite Paris Sud and Institut Pasteur, Paris, France

In their paper, « the origin of a derived superkingdom: how a gram positive bacterium crossed the desert to become an archaeon », Valas and Bourne update the previous proposal by Gupta linking Archaea to « gram positive bacteria ». The term gram positive bacteria is really outdated, since the work of Carl Woese has shown that it has no phylogenetic meaning. In fact, the title of this paper should be: "how a Firmicutes bacterium crossed the desert to become an archaeon". Firmicutes are one of the 20, 30 or more (it's not yet clear) bacterial phyla. It has been much more extensively studied by humans for medical and biotechnological reasons, but this does not qualify it to be more than that.

Author's response

We find there are many compelling reasons to still consider the Gram-positives a monophyletic group, as discussed in [8]. We have also presented evidence to justify why we do not trust the rRNA tree as a tool for macrophylogeny, especially for two groups nicknamed the "low gc gram-positives" and the "high gc gram-positives". We have two sources that disagree on the position of these groups. The solution is not to make declarative statements that one data source makes looking at the other unreasonable, but rather to consider the strengths and weaknesses of each. One of the goals of the paper is to unify the hypotheses that question the rooting of the tree of life between the Archaea and Bacteria. Again, we'd like to point out that despite their differences Lake, Gupta, and Cavalier-Smith all agree the Archaea are derived from a Gram-positive bacterium. So even though we narrow it down to a phylum, we think this title still reflects the larger goal of the paper.

In summary, Valas and Bourne proposed that both Archaea and Eukaryotes derived by transmutation from a member of Firmicutes, i.e. of one of the many bacterial phyla present today on our planet. This is a revival of the old view that bacteria are primitive organisms that populated the planet much before all others (a sequel of Haeckel's monera). In fact, Bacteria are very evolved organisms, a superkingdom, that have been extremely successful since they are now present everywhere and are usually much more abundant than members of the two other domains. It's unclear if they predated Archaea and ancient Eukarya, but they will certainly survive long after complex eukaryotes like us will have disappeared. I suspect that Archaea and Eukarya are the only two lineages that survive the extraordinary success of bacteria.
Author's response

In this manuscript we have presented three pieces of evidence that imply Bacteria did predate the Archaea. This reviewer has not addressed why he feels those are insufficient.

In my opinion, one of the reasons for this success was the invention of DNA gyrase. This enzyme allows the cell to couple directly its energetic state (the ATP/ADP ratio) to the expression of all genes at once by modulating the supercoiled state of the chromosome. Once you become addicted to DNA gyrase, you can't let it go. The last bacterial common ancestor had a DNA gyrase, and all modern bacteria still have it. Some archaea succeeded in getting gyrase from bacteria; they are now fully dependent on it. Plants also got gyrase from cyanobacteria, one possible reason for their success? The idea that a poor Firmicutes abandoned DNA gyrase to escape antigyrase drug producers does not seem realistic to me. Unfortunately for too many human patients, gyrases have found many ways to become multi drug resistant without having to abandon it. In general, bacteria have been very efficient at thriving happily in all possible « deserts » that one can imagine, including hot springs up to 95°C. Hyperthermophilic bacteria or desiccation resistant bacteria are not en route to become archaea but bona fide bacteria. I cannot discuss in detail all the ad hoc hypotheses proposed by the authors to explain how a Firmicutes became an archaeon.

Author's response

We argue in the manuscript that extreme habitats and antibiotic warfare were not unique enough niches, and this is why we do not think Gupta's or Cavalier-Smith's hypotheses are sufficient on their own. We agree that DNA gyrase is a big deal, and it would require some very unique circumstances for it to be lost. If the archaea are adapted to chronic energy stress it would not be unreasonable for them to move away from gyrase, because the benefit you described above disappears if ATP is always scarce. The question is whether the transition is impossible or implausible. We are arguing this was a very rare event that only happened once. We feel it is more productive to try evaluating some of the hybrid systems we propose than to speculate about their impossibility. Once again, we regret that a reviewer refuses to discuss details with us.
They have certainly done a huge amount of bibliographic work and hard thinking which will help them in future debates on the origin of the three domains, but in my opinion, they have reached an impasse in trying to revive Gupta's hypothesis. For me, all hypotheses that invoke the transmutation of one domain (in its modern form) into another are definitively wrong. It is the same for hypotheses in which a combination of modern archaea and bacteria produced a protoeukaryote.

Author's response

This implies there are essentially three primordial lineages, a view that we think is definitely wrong based on the currently available evidence. We have provided three robust pieces of evidence for why the archaea appear younger than the bacteria, which this reviewer has completely ignored. Other work from our group demonstrates that we can constrain the evolution of eukaryotes based on the biochemical history of the ocean [36,37]; that data argues the eukaryotes are a more recent lineage than the bacteria and archaea, and is completely independent of this work. Again this reviewer provides no argument for why transmutation hypotheses are definitely wrong, or why the polarization evidence bears no weight on these questions.

I fully agree with Carl Woese, who already wrote several years ago that « Modern cells are fully evolved entities. They are sufficiently complex, integrated, and "individualized" that further major change in their designs does not appear possible, which is not to say that relatively minor (but still functionally significant) variations on existing cellular themes cannot occur or that, under certain conditions, cellular design cannot degenerate ». Firmicutes are modern cells; they cannot have experienced "major change in their designs" to become archaea and later on eukarya.
These transmutation hypotheses put us backward in the pre-Woesian era, when evolution was viewed as a succession of steps from simple organisms (monera-prokaryote-bacteria) to lower eukaryotes, then to higher eukaryotes, then to human (the scala natura). Definitely, a bacterium cannot be transmutated into an archaeon, even by a virus.

Author's response

Only time will tell whether our skeptical reading of the rRNA tree will turn out to be pre-Woesian or post-Woesian. We do not express the view that evolution is just a series of successive steps or that a bacterial cell is simple in any way, and neither do Cavalier-Smith, Gupta, or Lake in our opinion. However, within the larger process of evolution there are clear paths that were built in successive steps of increasing complexity. We think the best example of this is the proteasome's quaternary structure. Fortunately, the proteasome has an informative phylogenetic distribution that allows us to polarize the direction of its evolution. We argue the Archaea evolved from the Bacteria because their proteasome is more complicated, but that does not imply the rest of the machinery is simpler in the Bacteria. If there are markers that are clear-cut stories, how could using them for phylogenetic inference be pre-Woesian? Again, we ask why there are no polarizations that place the Bacteria as derived from the Archaea. We think the view that there is an insurmountable divide between the superkingdoms by definition leads to circular reasoning, instead of a discussion about the actual data. Ironically, we see parallels between the current situation and Woese's account of how his ideas were first received [5].

A virus takeover of the replication apparatus could have created a bacterium with a novel replication apparatus, that's all. This would not have changed bacterial lipids, membranes, ribosomes, proteasomes, ATP synthases, transport systems, metabolism... Possibly, one day, among the tens of thousands of bacteria whose genomes will be sequenced, one will find one bacterium with an atypical replication system of recent viral origin, but I bet that this bacterium will have « bacterial ribosomes » and so on.
Author's response

We would be very surprised if the DNA replication system was that different and the rest of the cell was purely bacterial. The nice thing is that this is one of the few points we disagree on that data will actually make clearer.

If one wants to understand the origin of the modern domains, one has to consider that they originated in a very different world than our present one. A world with many lineages (domains or protodomains) that have now disappeared, possibly back to the cellular RNA world. This is a really difficult and fascinating objective which requires one to propose sometimes bold hypotheses, but these hypotheses should take into account that the divide between the three modern domains is now so great that it cannot be crossed, even by an adventurous, desperate Firmicutes.
Author's response

We again refer readers to work from our group on the evolution of the superkingdoms in relation to the history of the ocean's biochemistry [36,37]. We will continue to incorporate new data sources that allow us to measure how different that world was instead of speculating about it. For now, the many data sources we have woven together imply there is something deeply wrong with the canonical rooting as well as the logic used to support it (see reviewer #3's comments). This reviewer's advice that we need bold hypotheses but that the rooting must be taken as dogma makes little sense to us in light of the many problems with that rooting.
Reviewer's report 2

Eugene V Koonin, National Center for Biotechnology Information, NIH, Bethesda, Maryland, United States

To this reviewer, the manuscript by Valas and Bourne is frustrating. These authors continue to question the primary divide in the evolution of cellular life, that between archaea and bacteria, without any legitimate grounds. Here they go deeper into this falsehood by trying to present arguments for one bacterial root of archaea as opposed to another that has been proposed by a different author, in an equally faulty manner. Another innovation here is adding insult to injury: "This data have been dismissed because those who support the canonical rooting between the prokaryotic superkingdoms cannot imagine how the vast divide between the prokaryotic superkingdoms could be crossed." This allegation is a substantial part of the exceptionally brief abstract of Valas and Bourne. No comment seems to be required.

My general view, which I see no reason whatsoever to change, is expressed in the following quote from my review of a previous publication by the same authors:

"The nature of the primary divide in prokaryotes - and actually among all cellular life forms - is clear, and it is between archaea and bacteria. This view is supported by the fundamental differences between archaeal and bacterial systems of DNA replication, core transcription, translation, and membrane biogenesis - essentially, all central cellular systems (not just the replication system as noted in the present paper). I believe these differences are sufficient to close the "root debate" (regardless of the appropriateness or lack thereof of the very notion of a root in this context) and to base analyses and discussions aimed at the elucidation of the nature of LUCA on that foundation." [11]

Perhaps it is worth adding the results of a recent comprehensive analysis of phylogenetic trees for prokaryotic proteins that firmly supports the primary divide between archaea and bacteria [128].

Author's response: A large part of the motivation for this manuscript was the review of our previous work [11]. Clearly there are others besides us who do not find things as clear cut as this reviewer (see reviewer #3's comments). We think there are many reasons to support the canonical rooting, as well as reasons to question it. We have presented our views on much of this evidence. The reviewer has again refused to discuss our data in any detail, implying it is obvious why we are wrong. We feel we have greatly strengthened our previous argument by looking at the big picture in terms of the Gram-negative rooting. This reviewer claimed that rooting was unsupportable because it is so obvious that Archaea did not evolve from Bacteria. We feel we have strengthened that view, but clearly we have not swayed this reviewer. We do not think there is anything more to say on this subject, so we point readers to the discussion between this reviewer and Cavalier-Smith in [81].

The only other comment I wish to make is the extreme carelessness with which the manuscript is written. The abstract consists of 6 sentences of which two are obviously ungrammatical. Furthermore, in the Conclusion section of the abstract, the astonished reader finds "antibiotic warfare and a viral endosymbiosis" for which no argument and no mention has been made in the Results section. Perhaps the authors can get rid of these and other similar problems in a revision, but I do want to keep it in the record that this is how the manuscript was submitted for review.

Author's response: We apologize for any issues with the form that took away from the content, and we hope the final version is improved. The paper is written somewhat recklessly because it is what it is: the end of a Ph.D. dissertation. We feel it was the right time to get these ideas out because of our perception that the canonical rooting is too dogmatic. This review has only supported our view that this manuscript was needed. We think a reader should be astonished by the end of a short abstract before a long, reckless paper; it gets them to read the paper.

We find it interesting that this reviewer had no comment on two aspects of this work which can be judged independently of the rooting issue: the holophyly of the archaea and the actinobacteria's role in eukaryogenesis. We have presented much evidence that the conclusion this reviewer reached on the former, using indel data, is flawed. It would have been informative to hear this reviewer's opinion on that analysis.
Reviewer's report 3

Gaspar Jekely, European Molecular Biology Laboratory, Heidelberg, Germany

In this paper Valas and Bourne address a very difficult problem in evolutionary cell biology, that of the origin of Archaea (archaebacteria). They do this after arguing at length for the bacterial rooting of the tree of life. Such attempts are very welcome, since these areas are extremely controversial and important, yet few people seem to notice that there is a problem there, namely that the conventional rooting of the tree of life between archaea and bacteria is far from proven and not as trivial as it seems. The evidence for this rooting, coming from paralogous gene rootings, is highly questionable and gives conflicting results when different paralogs are analyzed.

Author's response: We thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.

Notwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite weak. I also have the feeling that the three indels and the distribution of the proteasome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineages. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.

Author's response: The strength of our evidence rests in the difference between polarization and parsimony, which we have expressed before [11]:

"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication, so a non-duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appear to be many polarizable transitions, and hopefully there are many more waiting to be discovered."

We do not think the polarizations make this an open and shut case. However, we find they are sufficient to question the canonical rooting and to search for more evidence for the alternative rooting we support here.
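The trade-off being argued here, a rooting that needs fewer independent gains versus a rooting rescued by assuming ancestral presence followed by differential losses, can be made concrete with a small weighted-parsimony count. The sketch below is only an illustration under stated assumptions: the five hypothetical taxa, the two toy rootings and the gain/loss costs are invented for the example and are not taken from the paper or its analyses.

```python
# A minimal Sankoff-style weighted parsimony count for a single binary
# character (presence/absence of a 20S-like proteasome) scored on two
# alternative rootings of a toy five-taxon tree. Topology, taxon names
# and costs are illustrative assumptions, not data from the paper.
INF = float("inf")

def sankoff(tree, tip_states, cost):
    """Return {state: minimum cost of the subtree} for a rooted tree.

    `tree` is a taxon name (str) or a tuple of child subtrees;
    `tip_states` maps taxon -> 0/1; `cost[a][b]` is the cost of a -> b.
    """
    states = (0, 1)
    if isinstance(tree, str):
        return {s: (0 if tip_states[tree] == s else INF) for s in states}
    child_tables = [sankoff(child, tip_states, cost) for child in tree]
    return {s: sum(min(cost[s][t] + table[t] for t in states)
                   for table in child_tables)
            for s in states}

# Presence (1) / absence (0) of the character in five hypothetical taxa.
tips = {"ArchaeonA": 1, "ArchaeonB": 1, "Actinobacterium": 1,
        "Firmicute": 0, "GramNegative": 0}

# Rooting 1: the canonical root between Archaea and Bacteria.
canonical = (("ArchaeonA", "ArchaeonB"),
             (("Actinobacterium", "Firmicute"), "GramNegative"))

# Rooting 2: a bacterial root with the Archaea nested beside Actinobacteria.
bacterial = ("GramNegative",
             ("Firmicute", ("Actinobacterium", ("ArchaeonA", "ArchaeonB"))))

for gain, loss in [(1, 1), (3, 1)]:        # equal costs, then cheap losses
    cost = {0: {0: 0, 1: gain}, 1: {0: loss, 1: 0}}
    for label, tree in [("canonical root", canonical), ("bacterial root", bacterial)]:
        best = min(sankoff(tree, tips, cost).values())
        print(f"gain={gain}, loss={loss}: {label} needs a minimum cost of {best}")
```

Under equal costs the nested rooting explains the toy distribution with a single gain while the canonical rooting needs two events; once losses are assumed much cheaper than gains, the two rootings score the same. That is the reviewer's point that less parsimonious scenarios are easy to defend, and it is why the response above appeals to polarization (which direction of change is structurally plausible) rather than to raw event counts.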
The rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lysogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbable form of infection with a virus that had collected from the virosphere the right combination of genes, to hand it over to the cell. In this way they try to create a unique event that led to the unique origin of the archaea. I am not sure that invoking such a hypothetical, extremely rare event, for which there is no evidence, solves the problem. I acknowledge the enrichment of proteins of possible viral origin in the stem archaea, but this slight statistical enrichment does not mean that there was only one virus infection involved. One could just as well imagine a series of viral gene transfers in the framework of the antibiotic warfare scenario that provided the novel enzymes in a step-by-step manner. Given the random sequence of events and the nature of the transferred genes, this could also lead to a lineage with a unique identity. This scenario is at present the weakest part of the paper and should be worked out much better in a more focused paper.

Author's response: We completely agree with this assessment of our viral endosymbiotic scenario. Endosymbiosis is probably not the best term, but we want to stress the radical nature of this interaction. We do not think our assumption about a rare virus is off base. The virosphere is very large and viruses are experts at manipulating genetic material in novel ways. The DNA-handling enzymes did evolve twice despite how unlikely it was. We think a radical turnover of the machinery is only possible in a virus. Successive viral transfers could explain the data too, but this would still be a rare event to account for so many genes. An event like this would not leave much of a mark besides a statistical enrichment (if even that). If that enrichment is real, the question becomes why the Archaea interact differently with viruses than the Bacteria do. That answer cannot be developed much further at present, but a better sampling of the virosphere is definitely going to help here.

There are several other ideas in the paper that are potentially interesting, but not well worked out. The proposition of an actinobacterial symbiont during early eukaryote evolution is one example. Such a hypothesis could possibly be spelled out in a full paper, with a detailed scenario and all the evidence that seems to support such a model. In this form it is just a proposition that is hard to judge thoroughly, and is very easy to dismiss. In general, my recommendation would be to refocus the paper around one key idea, namely the origin of archaeal DNA-handling enzymes by quantum evolution and from viral sources as a result of an antibiotics arms race. If the authors spell out this scenario clearly, together with the supporting evidence, but without going into the details of the rooting issue (discussed already in their previous Biology Direct paper) and the origin of eukaryotes (which could be done in a separate paper), this could become a much more useful and potentially influential manuscript. The title could then be changed accordingly. In the present form the title gives the impression that the authors wish to explain everything, which is far from being the case (for example, the unique membrane chemistry or archaeal flagella are not covered).

Author's response: This is a fair assessment of the manuscript. It is certainly overly ambitious and in many ways incomplete. The goal of this paper was to unite the various ideas about bacterial rootings of the Archaea. The fact that many of our ideas could be developed further is more evidence that this debate is not as closed as the two other reviewers have declared. While we certainly have omitted many details, we have chosen to take a big-picture view. There is an obvious connection between the problems of rooting the tree of life and the origin of the superkingdoms. We think we can only judge a rooting hypothesis by assessing how well it addresses these questions. We think the canonical rooting is insufficient when one begins asking questions about the origins of the superkingdoms. We hope that readers will pick and choose which ideas they like and continue to develop and test them.
There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.\nNotwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite week. I also have the feeling that the three indels and the distribution of the proteosome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineage. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.\nWe thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.\nNotwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite week. I also have the feeling that the three indels and the distribution of the proteosome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineage. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.\n[SUBTITLE] Author's response [SUBSECTION] The strength of our evidence rests in the difference between polarization and parsimony which we have expressed before [11]:\n\"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication so a non duplicated structure must precede it. 
The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appears to be many polarizable transitions and hopefully there are many more waiting to be discovered. \"\nWe do not think the polarizations make this an open and shut case. However, we find they are sufficient to question the canonical rooting and search for more evidence to support the alternative rooting we support here.\nThe rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lyzogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbably form of infection with a virus that had collected from the virosphere the right combination of genes, to hand it over to the cell. In this way they try create a unique event that led to the unique origin of the archaea. I am not sure that invoking such a hypothetical, extremely rare event, for which there is no evidence, solves the problem. I acknowledge the enrichment of proteins of possible viral origin in the stem archaea, but this slight statistical enrichment does not mean that there was only one virus infection involved. One could just as well imagine a series of viral gene transfers in the framework of the antibiotic warfare scenario that provided the novel enzymes in a step-by step manner. Given the random sequence of events and the nature of the transferred genes, this could also lead to a lineage with unique identity. This scenario is at present the weakest part of the paper and should be worked out much better in a more focused paper.\nThe strength of our evidence rests in the difference between polarization and parsimony which we have expressed before [11]:\n\"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication so a non duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appears to be many polarizable transitions and hopefully there are many more waiting to be discovered. \"\nWe do not think the polarizations make this an open and shut case. 
Patrick Forterre, Universite Paris Sud and Institut Pasteur, Paris, France

In their paper, "The origin of a derived superkingdom: how a gram-positive bacterium crossed the desert to become an archaeon", Valas and Bourne update the previous proposal by Gupta linking Archaea to "gram-positive bacteria". The term gram-positive bacteria is really outdated, since the work of Carl Woese has shown that it has no phylogenetic meaning. In fact, the title of this paper should be "How a Firmicutes bacterium crossed the desert to become an archaeon". The Firmicutes are one of the 20 to 30 or more (it is not yet clear) bacterial phyla. They have been much more extensively studied by humans for medical and biotechnological reasons, but this does not qualify them to be more than that.

Author's response

We find there are many compelling reasons to still consider the Gram-positives a monophyletic group, as discussed in [8]. We have also presented evidence to justify why we do not trust the rRNA tree as a tool for macrophylogeny, especially for the two groups nicknamed the "low-GC Gram-positives" and the "high-GC Gram-positives". We have two sources that disagree on the position of these groups. The solution is not to make declarative statements that one data source makes looking at the other unreasonable, but rather to consider the strengths and weaknesses of each. One of the goals of the paper is to unify the hypotheses that question the rooting of the tree of life between the Archaea and Bacteria. Again, we would like to point out that despite their differences Lake, Gupta, and Cavalier-Smith all agree that the Archaea are derived from a Gram-positive bacterium. So even though we narrow it down to a phylum, we think this title still reflects the larger goal of the paper.

In summary, Valas and Bourne proposed that both Archaea and Eukaryotes derived by transmutation from a member of the Firmicutes, i.e. one of the many bacterial phyla present today on our planet. This is a revival of the old view that bacteria are primitive organisms that populated the planet much before all others (a sequel of Haeckel's Monera). In fact, Bacteria are very evolved organisms, a superkingdom, that have been extremely successful since they are now present everywhere and are usually much more abundant than members of the two other domains. It is unclear if they predated Archaea and ancient Eukarya, but they will certainly survive long after complex eukaryotes like us have disappeared. I suspect that Archaea and Eukarya are the only two lineages that survive the extraordinary success of bacteria.

Author's response

In this manuscript we have presented three pieces of evidence that imply Bacteria did predate the Archaea. This reviewer has not addressed why he feels those are insufficient.

In my opinion, one of the reasons for this success was the invention of DNA gyrase. This enzyme allows the cell to couple its energetic state (the ATP/ADP ratio) directly to the expression of all genes at once by modulating the supercoiled state of the chromosome. Once you become addicted to DNA gyrase, you cannot let it go. The last bacterial common ancestor had a DNA gyrase, and all modern bacteria still have it. Some archaea succeeded in acquiring gyrase from bacteria; they are now fully dependent on it. Plants also got gyrase from cyanobacteria, one possible reason for their success? The idea that a poor Firmicutes abandoned DNA gyrase to escape anti-gyrase drug producers does not seem realistic to me. Unfortunately for too many human patients, gyrases have found many ways to become multidrug resistant without having to be abandoned. In general, bacteria have been very efficient at thriving happily in all possible "deserts" that one can imagine, including hot springs up to 95°C. Hyperthermophilic or desiccation-resistant bacteria are not en route to becoming archaea but are bona fide bacteria. I cannot discuss in detail all the ad hoc hypotheses proposed by the authors to explain how a Firmicutes became an archaeon.

Author's response

We argue in the manuscript that extreme habitats and antibiotic warfare were not unique enough niches, and this is why we do not think Gupta's or Cavalier-Smith's hypotheses are sufficient on their own. We agree that DNA gyrase is a big deal, and it would require some very unique circumstances for it to be lost. If the archaea are adapted to chronic energy stress, it would not be unreasonable for them to move away from gyrase, because the benefit you described above disappears if ATP is always scarce. The question is whether the transition is impossible or implausible. We are arguing this was a very rare event that only happened once. We feel it is more productive to try evaluating some of the hybrid systems we propose than to speculate about their impossibility. Once again, we regret that a reviewer refuses to discuss details with us.

They have certainly done a huge amount of bibliographic work and hard thinking, which will help them in future debates on the origin of the three domains, but in my opinion they have reached an impasse in trying to revive Gupta's hypothesis. For me, all hypotheses that invoke the transmutation of one domain (in its modern form) into another are definitively wrong. The same holds for hypotheses in which a combination of modern archaea and bacteria produced a protoeukaryote.

Author's response

This implies there are essentially three primordial lineages, a view that we think is definitely wrong based on the currently available evidence. We have provided three robust pieces of evidence why the archaea appear younger than the bacteria, which this reviewer has completely ignored. Other work from our group demonstrates that we can constrain the evolution of eukaryotes based on the biochemical history of the ocean [36,37]; those data argue that the eukaryotes are a more recent lineage than the bacteria and archaea, and are completely independent of this work. Again, this reviewer provides no argument for why transmutation hypotheses are definitely wrong, or why the polarization evidence bears no weight on these questions.

I fully agree with Carl Woese, who wrote several years ago that "Modern cells are fully evolved entities. They are sufficiently complex, integrated, and 'individualized' that further major change in their designs does not appear possible, which is not to say that relatively minor (but still functionally significant) variations on existing cellular themes cannot occur or that, under certain conditions, cellular design cannot degenerate". Firmicutes are modern cells; they cannot have experienced "major change in their designs" to become archaea and later on eukarya. These transmutation hypotheses take us back to the pre-Woesian era, when evolution was viewed as a succession of steps from simple organisms (Monera-prokaryote-bacteria) to lower eukaryotes, then to higher eukaryotes, then to humans (the scala naturae). Definitely, a bacterium cannot be transmuted into an archaeon, even by a virus.

Author's response

Only time will tell whether our skeptical reading of the rRNA tree will turn out to be pre-Woesian or post-Woesian. We do not express the view that evolution is just a series of successive steps or that a bacterial cell is simple in any way, and neither do Cavalier-Smith, Gupta, or Lake in our opinion. However, within the larger process of evolution there are clear paths that were built in successive steps of increasing complexity. We think the best example of this is the proteasome's quaternary structure. Fortunately, the proteasome has an informative phylogenetic distribution that allows us to polarize the direction of its evolution. We argue the Archaea evolved from the Bacteria because their proteasome is more complicated, but that does not imply the rest of the machinery is simpler in the Bacteria. If there are markers that are clear-cut stories, how could using them for phylogenetic inference be pre-Woesian? Again, we ask why there are no polarizations that place the Bacteria as derived from the Archaea. We think the view that there is an insurmountable divide between the superkingdoms by definition leads to circular reasoning instead of a discussion about the actual data. Ironically, we see parallels between the current situation and Woese's account of how his ideas were first received [5].

A viral takeover of the replication apparatus could have created a bacterium with a novel replication apparatus, that is all. This would not have changed bacterial lipids, membranes, ribosomes, proteasomes, ATP synthases, transport systems, metabolism, and so on. Possibly, one day, among the tens of thousands of bacteria whose genomes will be sequenced, one will find a bacterium with an atypical replication system of recent viral origin, but I bet that this bacterium will have "bacterial ribosomes" and so on.

Author's response

We would be very surprised if the DNA replication system were that different while the rest of the cell was purely bacterial. The nice thing is that this is one of the few points we disagree on that data will actually make clearer.

If one wants to understand the origin of the modern domains, one has to consider that they originated in a world very different from our present one: a world with many lineages (domains or protodomains) that have now disappeared, going back possibly to the cellular RNA world. This is a really difficult and fascinating objective which requires proposing sometimes bold hypotheses, but these hypotheses should take into account that the divide between the three modern domains is now so great that it cannot be crossed, even by an adventurous, desperate Firmicutes.

Author's response

We again refer readers to work from our group on the evolution of the superkingdoms in relation to the history of the ocean's biochemistry [36,37]. We will continue to incorporate new data sources that allow us to measure how different that world was instead of speculating about it. For now, the many data sources we have woven together imply there is something deeply wrong with the canonical rooting as well as with the logic used to support it (see reviewer #3's comments). This reviewer's advice that we need bold hypotheses, but that the rooting must be taken as dogma, makes little sense to us in light of the many problems with that rooting.
Eugene V Koonin, National Center for Biotechnology Information, NIH, Bethesda, Maryland, United States

To this reviewer, the manuscript by Valas and Bourne is frustrating. These authors continue to question the primary divide in the evolution of cellular life, that between archaea and bacteria, without any legitimate grounds. Here they go deeper into this falsehood by trying to present arguments for one bacterial root of archaea as opposed to another that has been proposed by a different author, in an equally faulty manner. Another innovation here is adding insult to injury: "This data have been dismissed because those who support the canonical rooting between the prokaryotic superkingdoms cannot imagine how the vast divide between the prokaryotic superkingdoms could be crossed." This allegation is a substantial part of the exceptionally brief abstract of Valas and Bourne. No comment seems to be required.

My general view, which I see no reason whatsoever to change, is expressed in the following quote from my review of a previous publication by the same authors:

"The nature of the primary divide in prokaryotes - and actually among all cellular life forms - is clear, and it is between archaea and bacteria. This view is supported by the fundamental differences between archaeal and bacterial systems of DNA replication, core transcription, translation, and membrane biogenesis - essentially, all central cellular systems (not just the replication system as noted in the present paper). I believe these differences are sufficient to close the "root debate" (regardless of the appropriateness or lack thereof of the very notion of a root in this context) and to base analyses and discussions aimed at the elucidation of the nature of LUCA on that foundation." [11]

Perhaps it is worth adding the results of a recent comprehensive analysis of phylogenetic trees for prokaryotic proteins that firmly supports the primary divide between archaea and bacteria [128].

Author's response

A large part of the motivation for this manuscript was the review of our previous work [11]. Clearly there are others besides us who do not find things as clear-cut as this reviewer does (see reviewer #3's comments). We think there are many reasons to support the canonical rooting, as well as reasons to question it. We have presented our views on much of this evidence. The reviewer has again refused to discuss our data in any detail, implying it is obvious why we are wrong. We feel we have greatly strengthened our previous argument by looking at the big picture in terms of the Gram-negative rooting. This reviewer claimed that rooting was unsupportable because it is so obvious that Archaea did not evolve from Bacteria. We feel we have strengthened that view, but clearly we have not swayed this reviewer. We do not think there is anything more to say on this subject, so we point readers to the discussion between this reviewer and Cavalier-Smith in [81].

The only other comment I wish to make concerns the extreme carelessness with which the manuscript is written. The abstract consists of six sentences, of which two are obviously ungrammatical. Furthermore, in the Conclusion section of the abstract, the astonished reader finds "antibiotic warfare and a viral endosymbiosis", for which no argument and no mention has been made in the Results section. Perhaps the authors can get rid of these and other similar problems in a revision, but I do want to keep it on the record that this is how the manuscript was submitted for review.

Author's response

We apologize for any issues with the form that took away from the content, and we hope the final version is improved. The paper is written somewhat recklessly because it is what it is: the end of a Ph.D. dissertation. We felt it was the right time to get these ideas out because of our perception that the canonical rooting is too dogmatic. This review has only supported our view that this manuscript was needed. We think a reader should be astonished by the end of a short abstract before a long, reckless paper; it gets them to read the paper.

We find it interesting that this reviewer had no comment on two aspects of this work which can be judged independently of the rooting issue: the holophyly of the archaea and the actinobacteria's role in eukaryogenesis. We have presented much evidence that the conclusion this reviewer reached on the former, using indel data, is flawed. It would have been informative to hear this reviewer's opinion on that analysis.
Gaspar Jekely, European Molecular Biology Laboratory, Heidelberg, Germany

In this paper Valas and Bourne address a very difficult problem in evolutionary cell biology, that of the origin of Archaea (archaebacteria). They do this after arguing at length for a bacterial rooting of the tree of life. Such attempts are very welcome, since these areas are extremely controversial and important, yet few people seem to notice that there is a problem there, namely that the conventional rooting of the tree of life between archaea and bacteria is far from proven and not as trivial as it seems. The evidence for this rooting, coming from paralogous gene rootings, is highly questionable and gives conflicting results when different paralogs are analyzed.

Author's response

We thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2 do. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.

Notwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite weak. I also have the feeling that the three indels and the distribution of the proteasome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineages. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.

Author's response

The strength of our evidence rests in the difference between polarization and parsimony, which we have expressed before [11]: "To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20S proteasome is the result of a duplication, so a non-duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20S proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appear to be many polarizable transitions, and hopefully many more are waiting to be discovered."

We do not think the polarizations make this an open-and-shut case. However, we find them sufficient to question the canonical rooting and to search for more evidence for the alternative rooting we support here.

The rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lysogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbable form of infection with a virus that had collected from the virosphere the right combination of genes to hand over to the cell. In this way they try to create a unique event that led to the unique origin of the archaea. I am not sure that invoking such a hypothetical, extremely rare event, for which there is no evidence, solves the problem. I acknowledge the enrichment of proteins of possible viral origin in the stem archaea, but this slight statistical enrichment does not mean that there was only one virus infection involved. One could just as well imagine a series of viral gene transfers in the framework of the antibiotic warfare scenario that provided the novel enzymes in a step-by-step manner. Given the random sequence of events and the nature of the transferred genes, this could also lead to a lineage with unique identity. This scenario is at present the weakest part of the paper and should be worked out much better in a more focused paper.

Author's response

We completely agree with this assessment of our viral endosymbiotic scenario. Endosymbiosis is probably not the best term, but we want to stress the radical nature of this interaction. We do not think our assumption about a rare virus is off base. The virosphere is very large, and viruses are experts at manipulating genetic material in novel ways. The DNA-handling enzymes did evolve twice, despite how unlikely that was. We think a radical turnover of the machinery is only possible in a virus. Successive viral transfers could explain the data too, but this would still be a rare event to account for so many genes. An event like this would not leave much of a mark besides a statistical enrichment (if even that). If that enrichment is real, the question becomes why the Archaea interact differently with viruses than the Bacteria do. That answer cannot be developed much more at present, but a better sampling of the virosphere is definitely going to help here.
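How much weight such an enrichment can carry is itself a quantitative question. The sketch below is purely illustrative: the gene counts are hypothetical placeholders rather than numbers from our analysis, and it only shows the sort of one-sided Fisher's exact test that would put a figure on an excess of putative viral-origin genes among stem-archaeal innovations.

# Hypothetical counts for illustration only; not results from this manuscript.
from scipy.stats import fisher_exact

viral_stem, other_stem = 12, 88  # putative viral-origin vs. other genes among stem-archaeal innovations
viral_bg, other_bg = 40, 960     # the same split in the background gene set

table = [[viral_stem, other_stem],
         [viral_bg, other_bg]]
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, one-sided P = {p_value:.4f}")

A significant one-sided P would support an excess of viral-origin genes on the archaeal stem, but, as the reviewer notes, it would say nothing about whether those genes arrived in one event or in several.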
We think a radical turnover of the machinery is only possible in a virus. Successive viral transfers could explain the data too, but this would still be a rare event to account for so many genes. An event like this would not leave much of mark besides a statistical enrichment (if even that). If that enrichment is real the question becomes why do the Archaea interact differently with viruses than Bacteria? That answer cannot be developed too much more at present, but a better sampling of the virosphere is definitely going to help here.\nThere are several other ideas in the paper that are potentially interesting, but not well worked out. The proposition of an actinobacterial symbiont during early eukaryote evolution is one example. Such a hypothesis could possibly be spelled out in a full paper, with a detailed scenario and all the evidence that seems to support such a model. In this form it is just a proposition that is hard to judge thoroughly, and is very easy to dismiss. In general, my recommendation would be to refocus the paper around one key idea, namely the origin of archaeal DNA-handling enzymes by quantum evolution and from viral sources as a result of an antibiotics arms race. If the authors spell out this scenario clearly, together with the supporting evidence, but without going into the details of the rooting issue (discussed already in their previous Biology Direct paper) and the origin of eukaryotes (this could be done in a separate paper), this could become a much more useful and potentially influential manuscript. The title could then be changed accordingly. In the present form the title gives the impression that the authors wish to explain everything, which is far from being the case (for example the unique membrane chemistry or archaeal flagella are not covered).\n[SUBTITLE] Author's response [SUBSECTION] This is a fair assessment of the manuscript. It is certainly overly ambitious and in many ways incomplete. The goal of this paper was to unite the various ideas about bacterial rootings of the Archaea. The fact that many of our ideas could be developed further is more evidence this debate is not as closed as the two other reviewers have declared. While we certainly have omitted many details, we have chosen to take a big picture view. There is an obvious connection between the problems rooting of the tree of life and the origin of the superkingdoms. We think we can only judge a rooting hypothesis by assessing how well it addresses these questions. We think the canonical rooting is insufficient when one begins asking questions on the origins of the superkingdoms. We hope that readers will pick and choose which ideas they like and continue to develop and test them.\nThis is a fair assessment of the manuscript. It is certainly overly ambitious and in many ways incomplete. The goal of this paper was to unite the various ideas about bacterial rootings of the Archaea. The fact that many of our ideas could be developed further is more evidence this debate is not as closed as the two other reviewers have declared. While we certainly have omitted many details, we have chosen to take a big picture view. There is an obvious connection between the problems rooting of the tree of life and the origin of the superkingdoms. We think we can only judge a rooting hypothesis by assessing how well it addresses these questions. We think the canonical rooting is insufficient when one begins asking questions on the origins of the superkingdoms. 
We hope that readers will pick and choose which ideas they like and continue to develop and test them.", "We thank this reviewer for demonstrating that not everyone thinks our line of questioning is as unreasonable as reviewers 1 and 2. There are certainly problems with any rooting, and the question is still very open at this point in our minds. Many experts share the opinions of those reviewers, but we feel these points are often swept under the rug when invoking that rooting.\nNotwithstanding these problems, conventional wisdom holds, as the authors rightly point out, that the position of the root is between the two domains (superphyla) bacteria and archaea, since these are the groups that are most distinct from each other. However, such rooting based on maximum divergence can often be wrong, since an ancestral group can give rise to highly derived groups. I don't have particular problems with uprooting the tree of life and abandoning the conventional rooting, but I find the evidence, as presented in this paper, quite weak. I also have the feeling that the three indels and the distribution of the proteasome will not convince too many people to favour one rooting over the other. In all cases one can conceive scenarios that are in agreement with the conventional rooting, such as the presence of both forms in the last common ancestor and then differential losses in the stem bacterial and archaeal lineage. I acknowledge that some of this would be less parsimonious than under a bacterial rooting, but given that there are only very few characters that can be used, such less parsimonious scenarios can still easily be defended.", "The strength of our evidence rests in the difference between polarization and parsimony which we have expressed before [11]:\n\"To us parsimony can be used to analyze events where gain and loss have nearly equal probabilities, while polarizations imply that one direction would evolve more easily than the other. Consider the example of the proteasome discussed in detail in Cavalier-Smith 2002. A parsimony argument would be that the 20s proteasome is the result of a duplication so a non duplicated structure must precede it. The polarization argument involves considering the structure and function of proteasomes as well as the fitness of the intermediates to argue that evolution towards the 20s proteasome is much more plausible than the reverse direction. There are probably many cases where evolution has not been parsimonious, and we do not think parsimony is a safe or productive assumption. However, there appears to be many polarizable transitions and hopefully there are many more waiting to be discovered. \"\nWe do not think the polarizations make this an open and shut case. However, we find they are sufficient to question the canonical rooting and search for more evidence to support the alternative rooting we support here.\nThe rooting issue aside, my main concern is that the scenario for the origin of archaea is not worked out well enough at the moment, and this contrasts with the length and ambition of the paper. The authors invoke an endosymbiosis with a virus to explain the origin of the archaeal DNA-handling enzymes. What does this exactly mean? Is it a lysogenic virus? The term viral endosymbiosis does not seem to be the best choice here. The authors then invoke a very improbable form of infection with a virus that had collected from the virosphere the right combination of genes, to hand it over to the cell. 
", "Supplemental Figure 1. Alignment of RadA sequences from representative archaea.\nClick here for file\nSupplemental Figure 2. Maximum likelihood tree of GatD argues for multiple horizontal transfers. This tree is not well resolved, but it does not support archaeal ancestry for eukaryotic proteins. Euryarchaeal sequences are highlighted in green, crenarchaea are magenta, thaumarchaeota are cyan, and korarchaeota are blue. The region of the indel is highlighted in red. There is no informative indel in this gene as was initially reported.\nClick here for file\nSupplemental Figure 3. Sequence alignment of EF-1 from representative archaea. Euryarchaeal sequences are highlighted in green, crenarchaea are magenta, thaumarchaea are cyan, and korarchaeota are blue. The region of the indel is highlighted in red. This alignment implies several reversions. Therefore this indel is not robust enough to determine whether archaea are holophyletic or paraphyletic.\nClick here for file\nSupplemental Figure 4. 23s rRNA A2058 (E. coli numbering) is well conserved across bacterial hyperthermophiles. This implies the conserved guanine in that position in archaea is not an adaptation to thermophily.\nClick here for file" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Safety profile and clinical activity of multiple subcutaneous doses of MEDI-528, a humanized anti-interleukin-9 monoclonal antibody, in two randomized phase 2a studies in subjects with asthma.
21356110
Interleukin-9 (IL-9)-targeted therapies may offer a novel approach for treating asthmatics. Two randomized placebo-controlled studies were conducted to assess the safety profile and potential efficacy of multiple subcutaneous doses of MEDI-528, a humanized anti-IL-9 monoclonal antibody, in asthmatics.
BACKGROUND
Study 1: adults (18-65 years) with mild asthma received MEDI-528 (0.3, 1, 3 mg/kg) or placebo subcutaneously twice weekly for 4 weeks. Study 2: adults (18-50 years) with stable, mild to moderate asthma and exercise-induced bronchoconstriction received 50 mg MEDI-528 or placebo subcutaneously twice weekly for 4 weeks. Adverse events (AEs), pharmacokinetics (PK), immunogenicity, asthma control (including asthma exacerbations), and exercise challenge test were evaluated in study 1, study 2, or both.
METHODS
In study 1 (N = 36), MEDI-528 showed linear serum PK; no anti-MEDI-528 antibodies were detected. Asthma control: 1/27 MEDI-528-treated subjects had 1 asthma exacerbation, and 2/9 placebo-treated subjects had a total of 4 asthma exacerbations (one considered a serious AE). In study 2, MEDI-528 (n = 7) elicited a trend in the reduction in mean maximum decrease in FEV1 post-exercise compared to placebo (n = 2) (-6.49% MEDI-528 vs -12.60% placebo; -1.40% vs -20.10%; -5.04% vs -15.20% at study days 28, 56, and 150, respectively). Study 2 was halted prematurely due to a serious AE in an asymptomatic MEDI-528-treated subject who had an abnormal brain magnetic resonance imaging that was found to be an artifact on further evaluation.
RESULTS
In these studies, MEDI-528 showed an acceptable safety profile and findings suggestive of clinical activity that support continued study in subjects with mild to moderate asthma.
CONCLUSIONS
[ "Adolescent", "Adult", "Antibodies, Monoclonal", "Antibodies, Monoclonal, Humanized", "Asthma", "Asthma, Exercise-Induced", "Dose-Response Relationship, Drug", "Double-Blind Method", "Drug-Related Side Effects and Adverse Reactions", "Female", "Humans", "Injections, Subcutaneous", "Interleukin-9", "Male", "Middle Aged", "Quality of Life", "Respiratory Function Tests", "Treatment Outcome", "Young Adult" ]
3058114
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] Adults aged 18-65 years with mild persistent asthma (forced expiratory volume in 1 second [FEV1] or peak expiratory flow [PEF] ≥80% of predicted) receiving therapy with short-acting β2-agonists (SABA), inhaled corticosteroids (ICS) <264 μg/day fluticasone or equivalent, or both (study 1) and adults aged 18-50 years with stable mild to moderate persistent asthma receiving therapy with SABA, ICS <800 μg/day budesonide or equivalent, and EIB (decrease in FEV1 of ≥15% from baseline during screening) (study 2) were eligible [25]. Exclusion criteria included lung disease other than asthma, use of systemic immunosuppressive drugs, and smoking history ≥10 pack-years. Long-acting β2-agonists, cromolyn sodium, nedocromil sodium, leukotriene receptor antagonists, theophylline, and omalizumab were not allowed (studies 1 and 2). 
[SUBTITLE] Study design [SUBSECTION] Study 1 was a randomized, double-blind, placebo-controlled, dose-escalation, multicenter study evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528. For each cohort (0.3 mg/kg, 1 mg/kg, or 3 mg/kg), subjects were randomized 3:1 via an interactive voice response system (IVRS) to receive MEDI-528 or placebo as SC injections twice weekly for 4 weeks through study day 24; thereafter, subjects were monitored for 126 days. Dosing at each next higher dose group commenced after all evaluable subjects from the previous lower dose group completed evaluations on study day 56 with acceptable safety profiles. Subjects who received ≥7 doses of the study drug were considered evaluable. Those not evaluable were replaced, unless they withdrew from the study due to safety reasons. The primary outcome for this study was the safety and tolerability of multiple SC doses of MEDI-528. Secondary outcomes included PK and immunogenicity of MEDI-528 in this subject population. Exploratory outcomes included effects of MEDI-528 on pulmonary function, asthma exacerbations, symptoms, rescue SABA use, and quality of life. Study 2 was a randomized, double-blind, placebo-controlled, multicenter study evaluating the safety and tolerability profiles of multiple SC doses of MEDI-528 in three cohorts of 50, 100, and 200 mg versus placebo. Subjects were randomized 2:1 via IVRS to receive MEDI-528 or placebo as an SC injection twice weekly for 4 weeks; thereafter, subjects were monitored for 126 days. Subjects who received ≥4 doses of study drug (unless they discontinued for safety reasons) and had ≥2 exercise challenge tests (baseline and post-therapy) were considered evaluable. The primary outcome of this study was the safety and tolerability of multiple SC doses of MEDI-528 in adult subjects with stable asthma and EIB. Secondary objectives included the effect of MEDI-528 on EIB and immunogenicity. Exploratory outcomes included the effects of MEDI-528 on spirometry, airway hyperresponsiveness as measured by methacholine challenge testing, asthma exacerbations, asthma symptoms, rescue SABA use, quality of life, and nasal allergy symptoms in this population. In both studies, all subjects and protocol-associated personnel were blinded to the individual subject treatment assignment until the last subject in each cohort completed the study and the databases were locked. Both studies were conducted in accordance with the Declaration of Helsinki and were approved by an institutional review board/independent ethics committee at each participating site. Written informed consent was obtained from each subject before study entry. 
[SUBTITLE] Safety profile [SUBSECTION] In both studies, AEs and SAEs were monitored after the first dose through day 150. AEs were graded by severity (mild, moderate, severe) and relationship to study drug (none, remote, possible, probable, definite) as determined by each investigator. Other safety measures included routine laboratory tests, vital signs, electrocardiograms (ECGs), and physical examinations. Physical examination included assessments for splenomegaly (palpable spleen), lymphadenopathy, and neurologic abnormalities. In both studies, a noncontrast MRI of the brain was performed at screening and day 28. MRI was added to the current studies and other MEDI-528 studies [24] based on preclinical toxicology findings of lymphohistiocytic perivascular infiltrates in the brains of cynomolgus monkeys seen in both treated and control animals. The study and peer-review pathologists considered this a spontaneous background finding unrelated to treatment. Subsequent MEDI-528 toxicology studies in monkeys and of MM9C1 in mice found no evidence of macroscopic or microscopic pathologic changes in the brain; MRI is therefore not required for subsequent clinical studies of MEDI-528 (data on file, MedImmune, LLC). Subjects in both studies who received any dose of study drug were included in the safety analyses. 
[SUBTITLE] Pharmacokinetics and immunogenicity [SUBSECTION] In study 1, blood samples for measuring serum concentrations of MEDI-528 were collected before dosing and at specified times throughout the study. As previously described [24], a validated enzyme-linked immunosorbent assay (ELISA) was used for these measurements. Unknown values with calculated concentrations below the assay's lower limit of quantitation (< 1.25 μg/mL) were reported as less than the limit of quantitation. A double-antigen sandwich ELISA was performed to evaluate anti-MEDI-528 antibodies [24].
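The pharmacokinetics paragraph above notes that assay readings below the lower limit of quantitation (<1.25 μg/mL) were reported as "less than the limit" rather than as raw numbers. The following minimal Python sketch illustrates one way such censoring could be applied before summarising serum concentrations; the helper name and the example readings are hypothetical and are not taken from the study data.

```python
# Minimal sketch (not study code): flag ELISA readings below the lower limit of
# quantitation (LOQ) before summarising serum concentrations, as described above.
# Example readings are invented for illustration only.

LOQ = 1.25  # μg/mL, assay lower limit of quantitation quoted in the text

def censor_below_loq(concentrations, loq=LOQ):
    """Return (value, is_censored) pairs; censored values are reported as '< LOQ'."""
    out = []
    for c in concentrations:
        if c < loq:
            out.append((loq, True))    # reported as "< 1.25 μg/mL", not as the raw number
        else:
            out.append((c, False))
    return out

# Hypothetical predose and trough readings for one subject (μg/mL)
readings = [0.4, 11.7, 13.7, 0.9]
for value, censored in censor_below_loq(readings):
    print(f"< {value} μg/mL" if censored else f"{value} μg/mL")
```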
[SUBTITLE] Exercise challenge test [SUBSECTION] In study 2, exercise challenge was performed at baseline prior to dosing and on study days 28, 56, and 150 after dosing [26]. A response to therapy was defined as a maximum decrease in post-exercise FEV1 of <10%, based on American Thoracic Society guidelines [27]. Spirometry was performed 15 minutes before initiation of the treadmill test and 5, 10, 15, 20, and 30 minutes after the treadmill test was completed. At each time point, maximum FEV1 was determined. The decrease in FEV1 at each time point after the treadmill test was calculated as a percentage of the best baseline FEV1 (see the code sketch below). 
[SUBTITLE] Asthma control and quality of life [SUBSECTION] In both studies, an asthma exacerbation was defined as a worsening of asthma requiring oral corticosteroids, a doubling of the ICS dose from baseline, hospitalization, emergency department visit, or an unscheduled asthma-related visit to a health care provider. Asthma symptom scores were recorded twice daily for the duration of the study. Symptoms were assessed on a scale of 0 (no symptoms) to 4 (marked discomfort). Rescue SABA use (puffs/day) was recorded daily by subjects for the duration of the study. Subjects completed the Asthma Quality of Life Questionnaire (AQLQ) [28] at baseline and after dosing. 
[SUBTITLE] Additional evaluations [SUBSECTION] Skin prick testing of common food and aeroallergens was performed during screening for both studies and on study day 28 for study 1 [29]. In both studies, spirometry was performed according to existing guidelines [30].
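The exercise challenge subsection above defines the post-exercise decline as a percentage of the best baseline FEV1 and a responder as a subject whose maximum decline is below 10%. The sketch below illustrates that arithmetic under those definitions; the subject values and function names are invented for illustration and are not the study's analysis code.

```python
# Minimal sketch (not study code) of the post-exercise FEV1 calculation described above:
# the decline at each time point is expressed as a percentage of the best baseline FEV1,
# and a responder is a subject whose maximum decline is < 10%. Values are invented.

def percent_decline(baseline_best_fev1, post_exercise_fev1):
    """Percentage change from the best baseline FEV1 (negative = decline)."""
    return (post_exercise_fev1 - baseline_best_fev1) / baseline_best_fev1 * 100.0

def max_decline_and_response(baseline_best_fev1, post_exercise_series, threshold=-10.0):
    """Return the largest percentage fall over the 5-30 minute time points and
    whether the subject meets the <10% maximum-decrease responder definition."""
    declines = [percent_decline(baseline_best_fev1, v) for v in post_exercise_series]
    max_decline = min(declines)          # most negative value = maximum fall
    return max_decline, max_decline > threshold

# Hypothetical subject: best baseline FEV1 of 3.50 L; FEV1 (L) at 5/10/15/20/30 min post-exercise
max_fall, responder = max_decline_and_response(3.50, [3.40, 3.30, 3.36, 3.42, 3.48])
print(f"maximum decline: {max_fall:.1f}%  responder: {responder}")
```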
[SUBTITLE] Statistical analyses [SUBSECTION] Formal sample size calculations were not applicable for assessment of the primary objective (safety/tolerability profile). No statistical hypothesis testing was performed for this end point. Data analyses were conducted using the SAS System (SAS Institute Inc., Cary, NC). AEs and SAEs were described using the MedDRA Adverse Event Thesaurus by system organ class, severity, and relationship to study drug through 18 weeks after the last dose (day 150). Laboratory values with a higher toxicity grade than that observed at baseline were recorded as AEs. The day 0 value before study drug administration was used as a baseline for laboratory parameters. Study discontinuation blood samples were summarized at the closest nominal time point that did not already have a value. The original design for study 2 included 3 cohorts (50 mg, 100 mg, and 200 mg MEDI-528), each with 18 subjects randomized 2:1 to receive MEDI-528 or placebo. Sample size calculations were based on a two-sample t test of the reduction in the maximum decline of FEV1 after exercise challenge testing. The power to detect a statistically significant difference in the maximum decline in FEV1 was greater than 80% based on an assumption of a 20% fall in the placebo group and a 65% reduction in the maximum decline of FEV1 in the combined group (36 subjects) on active treatment versus placebo (18 subjects). Serum concentrations and PK parameters were analyzed using WinNonlin (Pharsight, St. Louis, MO) and descriptive statistics summarized for each MEDI-528 treatment group. The number of subjects exhibiting anti-MEDI-528 antibodies was summarized, and all valid assay results from subjects who received any study drug were included in immunogenicity summaries. No formal statistical hypothesis tests were conducted for exploratory variables (pulmonary function, asthma exacerbations, symptom scores, rescue SABA use, quality of life). These variables were examined for their medical/clinical implications. Two-sample t tests were used to explore the change from baseline in FEV1 and asthma symptom score between MEDI-528 and placebo groups. The Fisher exact test was used to explore the difference in asthma exacerbation proportions between the placebo and MEDI-528 groups. The total number of exacerbations per subject during the study was noted.
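The statistical analyses subsection above describes an a priori power calculation for study 2 (a 20% mean fall in post-exercise FEV1 on placebo, a 65% reduction on active treatment, 36 versus 18 subjects) and a Fisher exact test on exacerbation proportions. The sketch below reproduces the general shape of those calculations in Python rather than SAS; the common standard deviation of 12 percentage points is an assumption made only for illustration, since the publication does not report the value used.

```python
# Minimal sketch (not the study's SAS/WinNonlin code) of the two calculations described
# above: an approximate power estimate for the study 2 exercise-challenge comparison and
# a Fisher exact test on exacerbation proportions. The 12-percentage-point SD is an
# assumption for illustration only.

from statsmodels.stats.power import TTestIndPower
from scipy.stats import fisher_exact

placebo_fall = 20.0                          # assumed mean maximum % fall in FEV1 on placebo
treated_fall = placebo_fall * (1 - 0.65)     # 65% reduction on active treatment
assumed_sd = 12.0                            # hypothetical common SD (percentage points)

effect_size = (placebo_fall - treated_fall) / assumed_sd
power = TTestIndPower().solve_power(effect_size=effect_size,
                                    nobs1=36,        # combined active group
                                    ratio=18 / 36,   # 18 placebo subjects
                                    alpha=0.05,
                                    alternative='two-sided')
print(f"approximate power: {power:.2f}")

# Exacerbation proportions as reported later in the Results for study 1:
# 1 of 27 MEDI-528-treated vs 2 of 9 placebo-treated subjects (reported P = 0.148)
odds_ratio, p_value = fisher_exact([[1, 26], [2, 7]])
print(f"Fisher exact P = {p_value:.3f}")
```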
null
null
null
null
[ "Background", "Subjects", "Study design", "Safety profile", "Pharmacokinetics and immunogenicity", "Excercise challenge test", "Asthma control and quality of life", "Additional evaluations", "Statistical analyses", "Results", "Demographics and baseline characteristics", "Safety profile", "Pharmacokinetics and immunogenicity", "Pulmonary function", "Asthma control and quality of life", "Exercise challenge test", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Asthma continues to be a significant health problem [1], with nearly 8% of the US population reported to have asthma in 2006 [2]. In one study, approximately 30% of >3,400 asthmatics failed to achieve control despite regular use of combination therapy with high-dose inhaled corticosteroids (ICS) and long-acting β2-agonists [3].\nInterleukin (IL)-9, a 144 amino acid-long protein secreted by CD4+ T-helper 2 (Th2) cells, mast cells, eosinophils, and neutrophils [4-7], may be associated with airway hyperresponsiveness (AHR) and inflammation [8-11]. Evidence supporting IL-9 as a potential target treatment for asthma emerged from a series of genetic experiments linking AHR to a region on chromosome 13 in mice, which contains the IL-9 gene and is syntenic with the 5q31-q33 chromosome in humans [8].\nOverexpression of IL-9 in murine models of asthma has been shown to cause airway inflammation with pulmonary infiltration of eosinophils and lymphocytes, airway obstruction, and mast cell hyperplasia [9,10,12]. In contrast, anti−IL-9 antibody therapy has led to reduced levels of AHR in murine models of allergen-induced asthma [13,14].\nBlocking IL-9 expression inhibits airway inflammation in a mast cell-dependent murine model of asthma. Mast cell-deficient animals demonstrated reduced lung inflammation and AHR compared with wild-type control mice [15]. An IL-9-neutralizing monoclonal antibody effectively reduced lung recovery of mast cell precursors and inflammatory cells after allergen challenge [16]. These findings suggest that IL-9 promotes asthma pathology in a mast cell-dependent manner through the proliferation of mast cell precursors or the recruitment of immature mast cells to lung tissue, or both.\nMast cell degranulation and release of spasmogenic mediators have been reported to cause bronchoconstriction in subjects with exercise-induced asthma [17,18]. Exercise challenge is an indirect airway challenge that results in airway narrowing due to the release of mediators from mast cell degranulation, as opposed to direct airway challenges such as methacholine that act directly on the airway smooth muscle to produce bronchoconstriction [19].\nAdditionally, in asthmatics, bronchial biopsy specimens revealed increased IL-9 immunoreactive cells and IL-9 mRNA, protein, and receptor levels compared with those of healthy controls [20-23]. These data suggest that IL-9-targeted therapies may offer a novel approach for treating patients with asthma and may reduce exercise-induced bronchoconstriction (EIB).\nMEDI-528 is a humanized anti-IL-9 monoclonal antibody. Results from 2 open-label, phase 1 studies demonstrated that MEDI-528, administered as a single intravenous or subcutaneous (SC) dose, had an acceptable safety profile in healthy volunteers, with no serious adverse events (AEs) and a linear pharmacokinetic (PK) profile [24].\nWe report the results of 2 studies evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528, and the potential reduction of EIB in subjects with mild to moderate asthma. 
Study 2 was halted prematurely due to a serious AE (SAE) in an asymptomatic MEDI-528-treated subject who had an abnormal brain magnetic resonance imaging (MRI) that was found to be an artifact on further evaluation.", "Adults aged 18-65 years with mild persistent asthma (forced expiratory volume in 1 second [FEV1] or peak expiratory flow [PEF] ≥80% of predicted) receiving therapy with short-acting β2-agonists (SABA), inhaled corticosteroids (ICS) <264 μg/day fluticasone or equivalent, or both (study 1) and adults aged 18-50 years with stable mild to moderate persistent asthma receiving therapy with SABA, ICS <800 μg/day budesonide or equivalent, and EIB (decrease in FEV1 of ≥15% from baseline during screening) (study 2) were eligible [25].\nExclusion criteria included lung disease other than asthma, use of systemic immunosuppressive drugs, and smoking history ≥10 pack-years. Long-acting β2-agonists, cromolyn sodium, nedocromil sodium, leukotriene receptor antagonists, theophylline, and omalizumab were not allowed (studies 1 and 2).", "Study 1 was a randomized, double-blind, placebo-controlled, dose-escalation, multicenter study evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528. For each cohort (0.3 mg/kg, 1 mg/kg, or 3 mg/kg), subjects were randomized 3:1 via an interactive voice response system (IVRS) to receive MEDI-528 or placebo as SC injections twice weekly for 4 weeks through study day 24; thereafter, subjects were monitored for 126 days. Dosing at each next higher dose group commenced after all evaluable subjects from the previous lower dose group completed evaluations on study day 56 with acceptable safety profiles. Subjects who received ≥7 doses of the study drug were considered evaluable. Those not evaluable were replaced, unless they withdrew from the study due to safety reasons. The primary outcome for this study was the safety and tolerability of multiple SC doses of MEDI-528. Secondary outcomes included PK and immunogenecity of MEDI-528 in this subject population. Exploratory outcomes included effects of MEDI-528 on pulmonary function, asthma exacerbations, symptoms, rescue SABA use, and quality of life.\nStudy 2 was a randomized, double-blind, placebo-controlled, multicenter study evaluating the safety and tolerability profiles of multiple SC doses of MEDI-528 in three cohorts of 50, 100, and 200 mg versus placebo. Subjects were randomized 2:1 via IVRS to receive MEDI-528 or placebo as an SC injection twice weekly for 4 weeks; thereafter, subjects were monitored for 126 days. Subjects who received ≥4 doses of study drug (unless they discontinued for safety reasons) and had ≥2 exercise challenge tests (baseline and post-therapy) were considered evaluable. The primary outcome of this study was the safety and tolerability of multiple SC doses of MEDI-528 in adult subjects with stable asthma and EIB. Secondary objectives included the effect of MEDI-528 on EIB and immunogenicity. 
Exploratory outcomes included the effects of MEDI-528 on spirometry, airway hyperresponsiveness as measured by methacholine challenge testing, asthma exacerbations, asthma symptoms, rescue SABA use, quality of life, and nasal allergy symptoms in this population.\nIn both studies, all subjects and protocol-associated personnel were blinded to the individual subject treatment assignment until the last subject in each cohort completed the study and the databases were locked.\nBoth studies were conducted in accordance with the Declaration of Helsinki and were approved by an institutional review board/independent ethics committee at each participating site. Written informed consent was obtained from each subject before study entry.", "In both studies, AEs and SAEs were monitored after the first dose through day 150. AEs were graded by severity (mild, moderate, severe) and relationship to study drug (none, remote, possible, probable, definite) as determined by each investigator. Other safety measures included routine laboratory tests, vital signs, electrocardiograms (ECGs), and physical examinations. Physical examination included assessments for splenomegaly (palpable spleen), lymphadenopathy, and neurologic abnormalities.\nIn both studies, a noncontrast MRI of the brain was performed at screening and day 28. MRI was added to the current studies and other MEDI-528 studies [24] based on preclinical toxicology findings of lymphohistiocytic perivascular infiltrates in the brains of cynomolgus monkeys seen in both treated and control animals. The study and peer-review pathologists considered this a spontaneous background finding unrelated to treatment. Subsequent MEDI-528 toxicology studies in monkeys and of MM9C1 in mice found no evidence of macroscopic or microscopic pathologic changes in the brain; MRI is therefore not required for subsequent clinical studies of MEDI-528 (data on file, MedImmune, LLC).\nSubjects in both studies who received any dose of study drug were included in the safety analyses.", "In study 1, blood samples for measuring serum concentrations of MEDI-528 were collected before dosing and at specified times throughout the study. As previously described [24], a validated enzyme-linked immunosorbent assay (ELISA) was used for these measurements. Unknown values with calculated concentrations below the assay's lower limit of quantitation (< 1.25 μg/mL) were reported as less than the limit of quantitation.\nA double-antigen sandwich ELISA was performed to evaluate anti-MEDI-528 antibodies [24].", "In study 2, exercise challenge was performed at baseline prior to dosing and on study days 28, 56, and 150 after dosing [26]. A response to therapy was defined as a maximum decrease in post-exercise FEV1 of <10%, based on American Thoracic Society guidelines [27]. Spirometry was performed 15 minutes before initiation of the treadmill test and 5, 10, 15, 20, and 30 minutes after the treadmill test was completed. At each time point, maximum FEV1 was determined. The decrease in FEV1 at each time point after the treadmill test was calculated as a percentage of the best baseline FEV1.", "In both studies, an asthma exacerbation was defined as a worsening of asthma requiring oral corticosteroids, a doubling of the ICS dose from baseline, hospitalization, emergency department visit, or an unscheduled asthma-related visit to a health care provider. Asthma symptom scores were recorded twice daily for the duration of the study. 
Symptoms were assessed on a scale of 0 (no symptoms) to 4 (marked discomfort). Rescue SABA use (puffs/day) was recorded daily by subjects for the duration of the study. Subjects completed the Asthma Quality of Life Questionnaire (AQLQ) [28] at baseline and after dosing.", "Skin prick testing of common food and aeroallergens was performed during screening for both studies and on study day 28 for study 1 [29]. In both studies, spirometry was performed according to existing guidelines [30].", "Formal sample size calculations were not applicable for assessment of the primary objective (safety/tolerability profile). No statistical hypothesis testing was performed for this end point. Data analyses were conducted using the SAS System (SAS Institute Inc., Cary, NC).\nAEs and SAEs were described using the MedDRA Adverse Event Thesaurus by system organ class, severity, and relationship to study drug through 18 weeks after the last dose (day 150). Laboratory values with a higher toxicity grade than that observed at baseline were recorded as AEs. The day 0 value before study drug administration was used as a baseline for laboratory parameters. Study discontinuation blood samples were summarized at the closest nominal time point that did not already have a value.\nThe original design for study 2 included 3 cohorts (50 mg, 100 mg, and 200 mg MEDI-528), each with 18 subjects randomized 2:1 to receive MEDI-528 or placebo. Sample size calculations were based on two-sample t test of the reduction in the maximum decline of FEV1 after exercise challenge testing. The power to detect a statistically significant difference in the maximum decline in FEV1 was greater than 80% based on an assumption of a 20% fall in the placebo group and 65% reduction in the maximum decline of FEV1 in the combined group (36 subjects) on active treatment versus placebo (18 subjects).\nSerum concentrations and PK parameters were analyzed using WinNonlin (Pharsight, St. Louis, MO) and descriptive statistics summarized for each MEDI-528 treatment group. The number of subjects exhibiting anti-MEDI-528 antibodies was summarized, and all valid assay results from subjects who received any study drug were included in immunogenicity summaries.\nNo formal statistical hypothesis tests were conducted for exploratory variables (pulmonary function, asthma exacerbations, symptom scores, rescue SABA use, quality of life). These variables were examined for their medical/clinical implications.\nTwo-sample t tests were used to explore the change from baseline in FEV1 and asthma symptom score between MEDI-528 and placebo groups. The Fisher exact test was used to explore the difference in asthma exacerbation proportions between the placebo and MEDI-528 groups. The total number of exacerbations per subject during the study was noted.", "[SUBTITLE] Demographics and baseline characteristics [SUBSECTION] In study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). 
There were more ex-smokers in the placebo group, otherwise the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).\nIn study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). There were more ex-smokers in the placebo group, otherwise the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).\n[SUBTITLE] Safety profile [SUBSECTION] The most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotranferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing. This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. 
One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.\nThe most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotranferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing. This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. 
A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.\n[SUBTITLE] Pharmacokinetics and immunogenicity [SUBSECTION] In study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.\nIn study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. 
Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.\n[SUBTITLE] Pulmonary function [SUBSECTION] Pulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.\nPulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.\n[SUBTITLE] Asthma control and quality of life [SUBSECTION] Asthma control and quality of life results are shown in Table 5. In both studies, a trend toward improvement in the overall mean asthma symptom scores was noted during the treatment period in all subjects compared with baseline values. Overall mean rescue SABA use during both studies was comparable among groups, and overall mean AQLQ scores were comparable between the MEDI-528 groups and placebo at baseline and during the study.\nIn study 1, fewer subjects having ≥1 asthma exacerbation were observed in the combined MEDI-528 group (n = 1 of 27) compared with the placebo group (n = 2 of 9; P = 0.148). The one MEDI-528-treated subject was receiving the lowest dose. The 2 placebo-treated subjects had a total of 4 asthma exacerbation episodes (1 subject had 1 episode; 1 subject had 3 episodes).\nIn study 2, no asthma exacerbations were reported.\nAsthma control and quality of life results are shown in Table 5. In both studies, a trend toward improvement in the overall mean asthma symptom scores was noted during the treatment period in all subjects compared with baseline values. Overall mean rescue SABA use during both studies was comparable among groups, and overall mean AQLQ scores were comparable between the MEDI-528 groups and placebo at baseline and during the study.\nIn study 1, fewer subjects having ≥1 asthma exacerbation were observed in the combined MEDI-528 group (n = 1 of 27) compared with the placebo group (n = 2 of 9; P = 0.148). The one MEDI-528-treated subject was receiving the lowest dose. The 2 placebo-treated subjects had a total of 4 asthma exacerbation episodes (1 subject had 1 episode; 1 subject had 3 episodes).\nIn study 2, no asthma exacerbations were reported.\n[SUBTITLE] Exercise challenge test [SUBSECTION] In study 2, multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo (Table 6). The mean absolute maximum decline in FEV1 at study day 56 was -0.04 L for the MEDI-528 group compared with -0.60 L for the placebo group (P < 0.01). Differences at all other data points did not achieve statistical significance (data not shown). 
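The pharmacokinetic summary above reports an accumulation index derived from trough concentrations after the first and last doses and an approximately dose-proportional rise in maximum concentration. The sketch below shows how those two quantities can be computed; the first-dose trough and the 1-mg/kg Cmax value are assumed for illustration, while 13.7 and 105.5 μg/mL are the reported means for the 0.3- and 3-mg/kg cohorts.

```python
# Minimal sketch (not the WinNonlin analysis) of two quantities summarised above:
# the accumulation index (trough after the last dose / trough after the first dose)
# and a crude dose-proportionality check on a log-log scale. The first-dose trough
# and the 1-mg/kg Cmax below are assumed values used only for illustration.

import math

def accumulation_index(trough_first_dose, trough_last_dose):
    return trough_last_dose / trough_first_dose

# Hypothetical troughs (μg/mL) for one dose level
print(f"accumulation index: {accumulation_index(1.6, 11.7):.1f}")

# Dose proportionality: a log(Cmax) vs log(dose) slope near 1 suggests proportionality.
doses = [0.3, 1.0, 3.0]             # mg/kg
cmax = [13.7, 45.0, 105.5]          # μg/mL; the 1-mg/kg value is assumed
x = [math.log(d) for d in doses]
y = [math.log(c) for c in cmax]
n = len(x)
slope = (n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)) / (n * sum(a * a for a in x) - sum(x) ** 2)
print(f"log-log slope: {slope:.2f}")
```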
Time to return to 90% of baseline FEV1 following exercise challenge was shorter during the screening period prior to dosing with study drug and at all subsequent study days for the MEDI-528 group as compared with placebo. Furthermore, the time to return to 90% of baseline FEV1 improved in the MEDI-528 group at study days 28 and 56, while there was no improvement in the placebo group. A post hoc analysis showed that 6 of 7 MEDI-528-treated subjects were responders at study day 28, 7 of 7 at day 56, and 6 of 6 at day 150. There were no placebo-treated responders at days 28 and 56, and there was 1 responder at day 150 (Figure 3).\nMean (SD) maximum percentage change in FEV1 after exercise in study 2.\n*For day 150, n = 6. P values are based on two-sample t test between placebo and MEDI-528 groups.\nMean maximum percentage decline in FEV1 after exercise for each individual subject in study 2. Multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo.\nIn study 2, multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo (Table 6). The mean absolute maximum decline in FEV1 at study day 56 was -0.04 L for the MEDI-528 group compared with -0.60 L for the placebo group (P < 0.01). Differences at all other data points did not achieve statistical significance (data not shown). Time to return to 90% of baseline FEV1 following exercise challenge was shorter during the screening period prior to dosing with study drug and at all subsequent study days for the MEDI-528 group as compared with placebo. Furthermore, the time to return to 90% of baseline FEV1 improved in the MEDI-528 group at study days 28 and 56, while there was no improvement in the placebo group. A post hoc analysis showed that 6 of 7 MEDI-528-treated subjects were responders at study day 28, 7 of 7 at day 56, and 6 of 6 at day 150. There were no placebo-treated responders at days 28 and 56, and there was 1 responder at day 150 (Figure 3).\nMean (SD) maximum percentage change in FEV1 after exercise in study 2.\n*For day 150, n = 6. P values are based on two-sample t test between placebo and MEDI-528 groups.\nMean maximum percentage decline in FEV1 after exercise for each individual subject in study 2. Multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo.", "In study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). 
There were more ex-smokers in the placebo group, otherwise the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).", "The most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotranferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing. This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. 
MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.", "In study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.", "Pulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.", "Asthma control and quality of life results are shown in Table 5. In both studies, a trend toward improvement in the overall mean asthma symptom scores was noted during the treatment period in all subjects compared with baseline values. Overall mean rescue SABA use during both studies was comparable among groups, and overall mean AQLQ scores were comparable between the MEDI-528 groups and placebo at baseline and during the study.\nIn study 1, fewer subjects having ≥1 asthma exacerbation were observed in the combined MEDI-528 group (n = 1 of 27) compared with the placebo group (n = 2 of 9; P = 0.148). The one MEDI-528-treated subject was receiving the lowest dose. The 2 placebo-treated subjects had a total of 4 asthma exacerbation episodes (1 subject had 1 episode; 1 subject had 3 episodes).\nIn study 2, no asthma exacerbations were reported.", "In study 2, multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo (Table 6). The mean absolute maximum decline in FEV1 at study day 56 was -0.04 L for the MEDI-528 group compared with -0.60 L for the placebo group (P < 0.01). Differences at all other data points did not achieve statistical significance (data not shown). 
Time to return to 90% of baseline FEV1 following exercise challenge was shorter during the screening period prior to dosing with study drug and at all subsequent study days for the MEDI-528 group as compared with placebo. Furthermore, the time to return to 90% of baseline FEV1 improved in the MEDI-528 group at study days 28 and 56, while there was no improvement in the placebo group. A post hoc analysis showed that 6 of 7 MEDI-528-treated subjects were responders at study day 28, 7 of 7 at day 56, and 6 of 6 at day 150. There were no placebo-treated responders at days 28 and 56, and there was 1 responder at day 150 (Figure 3).\nMean (SD) maximum percentage change in FEV1 after exercise in study 2.\n*For day 150, n = 6. P values are based on two-sample t test between placebo and MEDI-528 groups.\nMean maximum percentage decline in FEV1 after exercise for each individual subject in study 2. Multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo.", "In both studies, subjects tolerated multiple SC doses of MEDI-528, with AE, severe AE and SAE rates similar to placebo-treated subjects. No subjects developed anti-MEDI-528 antibodies. MEDI-528 exhibited linear PK, with peak and trough concentrations increasing in an approximately dose-proportional manner over the dose range studied. The half-life values (35-38 days) were consistent with those observed in the high-dose groups of the single-dose SC study of healthy volunteers [24], namely, the 3-mg/kg (half-life, 44 days) and 9-mg/kg (half-life, 33 days) doses. In addition, these results are in accord with findings from the 2 studies conducted in healthy volunteers [24]. In those studies, MEDI-528 had an acceptable safety profile, with no SAEs or deaths reported.\nThroughout study 1, pulmonary function was essentially unchanged and SABA use was comparable among groups. Overall, asthma symptom scores showed a trend toward improvement in all subjects. A trend toward fewer subjects having ≥1 asthma exacerbation was observed in MEDI-528-treated subjects in study 1 compared with placebo-treated subjects. This finding is of unclear significance as blocking IL-9 may result in a reduction in AHR [13,14] and AHR has not been clearly related to asthma exacerbations.\nAlthough the actions of IL-9 are not fully understood, IL-9 is believed to play an important role in the trafficking and function of mast cells [15,16]. Local IL-9 production from inflammatory cells would theoretically result in the recruitment and differentiation of mast cell progenitors from the bone marrow to the lung. Blocking IL-9 would therefore not be expected to have an immediate clinical effect, but this would rather be dependent on the loss of resident mast cells in the tissue.\nThe results of study 2 are intriguing in that they suggest that blocking IL-9 with MEDI-528 may have an effect on EIB, which is dependent on mast cell degranulation [19]. The maximum effect of MEDI-528 was seen at study day 56, which would be consistent with a later onset of action. Future studies should consider that a period of chronic dosing may be required before seeing maximum clinical effect. Conclusions from study 2 are unfortunately limited in that it was prematurely halted.\nBecause of the small sample size in both studies, firm conclusions regarding the clinical activity of MEDI-528 cannot be drawn from these results. 
The potential benefits of MEDI-528 observed in both studies should be interpreted with caution as these studies were also limited by the mild disease severity of the study population.", "In conclusion, prior and current results suggest that MEDI-528 has acceptable safety and tolerability profiles and provide evidence of clinical activity in mild to moderate asthmatics. Further studies are warranted to assess the clinical efficacy of MEDI-528 for treating patients with inadequately controlled asthma.", "CLF, SDM, and DP received research funding from MedImmune, LLC, for the conduct of these studies. JMP, CKO, CL, GJR, WIW and NAM are employees of MedImmune, LLC. BW was an employee of MedImmune, LLC, at the time of the study and manuscript submission. These studies were sponsored by MedImmune, LLC.", "CLF, SDM, and DP were involved in the collection of data and interpretation of the results; JMP, CKO, CL, GJR, WIW, BW, and NAM were involved in the design of the study, analysis of the data, and interpretation of the results; all authors critically reviewed and revised the manuscript and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2466/11/14/prepub\n" ]
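The exercise-challenge endpoint summarized in the results above (maximum percentage decline in FEV1 after exercise, a responder defined as a maximum post-exercise decline of <10%, and group comparison by two-sample t test) can be illustrated with a short sketch. This is not the authors' analysis code (the studies report using SAS); it is a minimal Python illustration with hypothetical FEV1 values, and all function and variable names are invented for the example.

```python
from scipy import stats

def max_pct_decline(baseline_fev1, post_exercise_fev1):
    """Maximum percentage decline in FEV1 relative to the best baseline value."""
    declines = [100.0 * (baseline_fev1 - v) / baseline_fev1 for v in post_exercise_fev1]
    return max(declines)

def is_responder(max_decline_pct, threshold=10.0):
    """Responder: maximum post-exercise FEV1 decline of <10%, the ATS-based criterion used in study 2."""
    return max_decline_pct < threshold

# Hypothetical per-subject data: best baseline FEV1 (L) and FEV1 at 5, 10, 15, 20, 30 min post-exercise
medi528 = [(3.2, [3.1, 3.0, 3.1, 3.2, 3.2]), (2.9, [2.8, 2.8, 2.9, 2.9, 2.9])]
placebo = [(3.0, [2.4, 2.3, 2.5, 2.7, 2.9]), (3.4, [2.8, 2.7, 2.9, 3.1, 3.3])]

medi_declines = [max_pct_decline(b, post) for b, post in medi528]
plac_declines = [max_pct_decline(b, post) for b, post in placebo]

print("MEDI-528 responders:", sum(is_responder(d) for d in medi_declines), "of", len(medi_declines))
print("Placebo responders:", sum(is_responder(d) for d in plac_declines), "of", len(plac_declines))

# Two-sample t test on the maximum percentage decline, mirroring the group comparison described in study 2
t_stat, p_value = stats.ttest_ind(medi_declines, plac_declines)
print(f"t = {t_stat:.2f}, P = {p_value:.3f}")
```

With the hypothetical values shown, both MEDI-528 subjects and neither placebo subject would meet the <10% responder criterion; the real per-subject data are shown only graphically in Figure 3.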
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
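The study 1 pharmacokinetic findings reported above (approximately dose-proportional maximum and trough concentrations, and an accumulation index derived from first- and last-dose troughs) can be checked numerically. The sketch below is illustrative only: it uses the two reported mean values for the 0.3-mg/kg and 3-mg/kg dose levels, the helper names are invented, and the first-dose trough is a placeholder because only the resulting accumulation-index range (5-10) is reported.

```python
def dose_normalized(values_by_dose):
    """Divide each mean concentration by its dose; roughly constant ratios suggest dose proportionality."""
    return {dose: conc / dose for dose, conc in values_by_dose.items()}

def accumulation_index(trough_after_first_dose, trough_after_last_dose):
    """Accumulation index as footnoted in Table 4: last-dose trough divided by first-dose trough."""
    return trough_after_last_dose / trough_after_first_dose

# Reported mean values after the last dose in study 1 (µg/mL), keyed by dose (mg/kg)
cmax = {0.3: 13.7, 3.0: 105.5}
ctrough = {0.3: 11.7, 3.0: 90.4}

print("Dose-normalized Cmax:", dose_normalized(cmax))       # ~45.7 vs ~35.2 µg/mL per mg/kg
print("Dose-normalized trough:", dose_normalized(ctrough))  # ~39.0 vs ~30.1 µg/mL per mg/kg

# Placeholder first-dose trough (individual values are not reported in the text); the article
# states only that the resulting index fell between 5 and 10 across dose levels.
print("Accumulation index (example):", round(accumulation_index(1.5, 11.7), 1))
```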
[ "Background", "Methods", "Subjects", "Study design", "Safety profile", "Pharmacokinetics and immunogenicity", "Excercise challenge test", "Asthma control and quality of life", "Additional evaluations", "Statistical analyses", "Results", "Demographics and baseline characteristics", "Safety profile", "Pharmacokinetics and immunogenicity", "Pulmonary function", "Asthma control and quality of life", "Exercise challenge test", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Asthma continues to be a significant health problem [1], with nearly 8% of the US population reported to have asthma in 2006 [2]. In one study, approximately 30% of >3,400 asthmatics failed to achieve control despite regular use of combination therapy with high-dose inhaled corticosteroids (ICS) and long-acting β2-agonists [3].\nInterleukin (IL)-9, a 144 amino acid-long protein secreted by CD4+ T-helper 2 (Th2) cells, mast cells, eosinophils, and neutrophils [4-7], may be associated with airway hyperresponsiveness (AHR) and inflammation [8-11]. Evidence supporting IL-9 as a potential target treatment for asthma emerged from a series of genetic experiments linking AHR to a region on chromosome 13 in mice, which contains the IL-9 gene and is syntenic with the 5q31-q33 chromosome in humans [8].\nOverexpression of IL-9 in murine models of asthma has been shown to cause airway inflammation with pulmonary infiltration of eosinophils and lymphocytes, airway obstruction, and mast cell hyperplasia [9,10,12]. In contrast, anti−IL-9 antibody therapy has led to reduced levels of AHR in murine models of allergen-induced asthma [13,14].\nBlocking IL-9 expression inhibits airway inflammation in a mast cell-dependent murine model of asthma. Mast cell-deficient animals demonstrated reduced lung inflammation and AHR compared with wild-type control mice [15]. An IL-9-neutralizing monoclonal antibody effectively reduced lung recovery of mast cell precursors and inflammatory cells after allergen challenge [16]. These findings suggest that IL-9 promotes asthma pathology in a mast cell-dependent manner through the proliferation of mast cell precursors or the recruitment of immature mast cells to lung tissue, or both.\nMast cell degranulation and release of spasmogenic mediators have been reported to cause bronchoconstriction in subjects with exercise-induced asthma [17,18]. Exercise challenge is an indirect airway challenge that results in airway narrowing due to the release of mediators from mast cell degranulation, as opposed to direct airway challenges such as methacholine that act directly on the airway smooth muscle to produce bronchoconstriction [19].\nAdditionally, in asthmatics, bronchial biopsy specimens revealed increased IL-9 immunoreactive cells and IL-9 mRNA, protein, and receptor levels compared with those of healthy controls [20-23]. These data suggest that IL-9-targeted therapies may offer a novel approach for treating patients with asthma and may reduce exercise-induced bronchoconstriction (EIB).\nMEDI-528 is a humanized anti-IL-9 monoclonal antibody. Results from 2 open-label, phase 1 studies demonstrated that MEDI-528, administered as a single intravenous or subcutaneous (SC) dose, had an acceptable safety profile in healthy volunteers, with no serious adverse events (AEs) and a linear pharmacokinetic (PK) profile [24].\nWe report the results of 2 studies evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528, and the potential reduction of EIB in subjects with mild to moderate asthma. 
Study 2 was halted prematurely due to a serious AE (SAE) in an asymptomatic MEDI-528-treated subject who had an abnormal brain magnetic resonance imaging (MRI) that was found to be an artifact on further evaluation.", "[SUBTITLE] Subjects [SUBSECTION] Adults aged 18-65 years with mild persistent asthma (forced expiratory volume in 1 second [FEV1] or peak expiratory flow [PEF] ≥80% of predicted) receiving therapy with short-acting β2-agonists (SABA), inhaled corticosteroids (ICS) <264 μg/day fluticasone or equivalent, or both (study 1) and adults aged 18-50 years with stable mild to moderate persistent asthma receiving therapy with SABA, ICS <800 μg/day budesonide or equivalent, and EIB (decrease in FEV1 of ≥15% from baseline during screening) (study 2) were eligible [25].\nExclusion criteria included lung disease other than asthma, use of systemic immunosuppressive drugs, and smoking history ≥10 pack-years. Long-acting β2-agonists, cromolyn sodium, nedocromil sodium, leukotriene receptor antagonists, theophylline, and omalizumab were not allowed (studies 1 and 2).\nAdults aged 18-65 years with mild persistent asthma (forced expiratory volume in 1 second [FEV1] or peak expiratory flow [PEF] ≥80% of predicted) receiving therapy with short-acting β2-agonists (SABA), inhaled corticosteroids (ICS) <264 μg/day fluticasone or equivalent, or both (study 1) and adults aged 18-50 years with stable mild to moderate persistent asthma receiving therapy with SABA, ICS <800 μg/day budesonide or equivalent, and EIB (decrease in FEV1 of ≥15% from baseline during screening) (study 2) were eligible [25].\nExclusion criteria included lung disease other than asthma, use of systemic immunosuppressive drugs, and smoking history ≥10 pack-years. Long-acting β2-agonists, cromolyn sodium, nedocromil sodium, leukotriene receptor antagonists, theophylline, and omalizumab were not allowed (studies 1 and 2).\n[SUBTITLE] Study design [SUBSECTION] Study 1 was a randomized, double-blind, placebo-controlled, dose-escalation, multicenter study evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528. For each cohort (0.3 mg/kg, 1 mg/kg, or 3 mg/kg), subjects were randomized 3:1 via an interactive voice response system (IVRS) to receive MEDI-528 or placebo as SC injections twice weekly for 4 weeks through study day 24; thereafter, subjects were monitored for 126 days. Dosing at each next higher dose group commenced after all evaluable subjects from the previous lower dose group completed evaluations on study day 56 with acceptable safety profiles. Subjects who received ≥7 doses of the study drug were considered evaluable. Those not evaluable were replaced, unless they withdrew from the study due to safety reasons. The primary outcome for this study was the safety and tolerability of multiple SC doses of MEDI-528. Secondary outcomes included PK and immunogenecity of MEDI-528 in this subject population. Exploratory outcomes included effects of MEDI-528 on pulmonary function, asthma exacerbations, symptoms, rescue SABA use, and quality of life.\nStudy 2 was a randomized, double-blind, placebo-controlled, multicenter study evaluating the safety and tolerability profiles of multiple SC doses of MEDI-528 in three cohorts of 50, 100, and 200 mg versus placebo. Subjects were randomized 2:1 via IVRS to receive MEDI-528 or placebo as an SC injection twice weekly for 4 weeks; thereafter, subjects were monitored for 126 days. 
Subjects who received ≥4 doses of study drug (unless they discontinued for safety reasons) and had ≥2 exercise challenge tests (baseline and post-therapy) were considered evaluable. The primary outcome of this study was the safety and tolerability of multiple SC doses of MEDI-528 in adult subjects with stable asthma and EIB. Secondary objectives included the effect of MEDI-528 on EIB and immunogenicity. Exploratory outcomes included the effects of MEDI-528 on spirometry, airway hyperresponsiveness as measured by methacholine challenge testing, asthma exacerbations, asthma symptoms, rescue SABA use, quality of life, and nasal allergy symptoms in this population.\nIn both studies, all subjects and protocol-associated personnel were blinded to the individual subject treatment assignment until the last subject in each cohort completed the study and the databases were locked.\nBoth studies were conducted in accordance with the Declaration of Helsinki and were approved by an institutional review board/independent ethics committee at each participating site. Written informed consent was obtained from each subject before study entry.\nStudy 1 was a randomized, double-blind, placebo-controlled, dose-escalation, multicenter study evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528. For each cohort (0.3 mg/kg, 1 mg/kg, or 3 mg/kg), subjects were randomized 3:1 via an interactive voice response system (IVRS) to receive MEDI-528 or placebo as SC injections twice weekly for 4 weeks through study day 24; thereafter, subjects were monitored for 126 days. Dosing at each next higher dose group commenced after all evaluable subjects from the previous lower dose group completed evaluations on study day 56 with acceptable safety profiles. Subjects who received ≥7 doses of the study drug were considered evaluable. Those not evaluable were replaced, unless they withdrew from the study due to safety reasons. The primary outcome for this study was the safety and tolerability of multiple SC doses of MEDI-528. Secondary outcomes included PK and immunogenecity of MEDI-528 in this subject population. Exploratory outcomes included effects of MEDI-528 on pulmonary function, asthma exacerbations, symptoms, rescue SABA use, and quality of life.\nStudy 2 was a randomized, double-blind, placebo-controlled, multicenter study evaluating the safety and tolerability profiles of multiple SC doses of MEDI-528 in three cohorts of 50, 100, and 200 mg versus placebo. Subjects were randomized 2:1 via IVRS to receive MEDI-528 or placebo as an SC injection twice weekly for 4 weeks; thereafter, subjects were monitored for 126 days. Subjects who received ≥4 doses of study drug (unless they discontinued for safety reasons) and had ≥2 exercise challenge tests (baseline and post-therapy) were considered evaluable. The primary outcome of this study was the safety and tolerability of multiple SC doses of MEDI-528 in adult subjects with stable asthma and EIB. Secondary objectives included the effect of MEDI-528 on EIB and immunogenicity. 
Exploratory outcomes included the effects of MEDI-528 on spirometry, airway hyperresponsiveness as measured by methacholine challenge testing, asthma exacerbations, asthma symptoms, rescue SABA use, quality of life, and nasal allergy symptoms in this population.\nIn both studies, all subjects and protocol-associated personnel were blinded to the individual subject treatment assignment until the last subject in each cohort completed the study and the databases were locked.\nBoth studies were conducted in accordance with the Declaration of Helsinki and were approved by an institutional review board/independent ethics committee at each participating site. Written informed consent was obtained from each subject before study entry.\n[SUBTITLE] Safety profile [SUBSECTION] In both studies, AEs and SAEs were monitored after the first dose through day 150. AEs were graded by severity (mild, moderate, severe) and relationship to study drug (none, remote, possible, probable, definite) as determined by each investigator. Other safety measures included routine laboratory tests, vital signs, electrocardiograms (ECGs), and physical examinations. Physical examination included assessments for splenomegaly (palpable spleen), lymphadenopathy, and neurologic abnormalities.\nIn both studies, a noncontrast MRI of the brain was performed at screening and day 28. MRI was added to the current studies and other MEDI-528 studies [24] based on preclinical toxicology findings of lymphohistiocytic perivascular infiltrates in the brains of cynomolgus monkeys seen in both treated and control animals. The study and peer-review pathologists considered this a spontaneous background finding unrelated to treatment. Subsequent MEDI-528 toxicology studies in monkeys and of MM9C1 in mice found no evidence of macroscopic or microscopic pathologic changes in the brain; MRI is therefore not required for subsequent clinical studies of MEDI-528 (data on file, MedImmune, LLC).\nSubjects in both studies who received any dose of study drug were included in the safety analyses.\nIn both studies, AEs and SAEs were monitored after the first dose through day 150. AEs were graded by severity (mild, moderate, severe) and relationship to study drug (none, remote, possible, probable, definite) as determined by each investigator. Other safety measures included routine laboratory tests, vital signs, electrocardiograms (ECGs), and physical examinations. Physical examination included assessments for splenomegaly (palpable spleen), lymphadenopathy, and neurologic abnormalities.\nIn both studies, a noncontrast MRI of the brain was performed at screening and day 28. MRI was added to the current studies and other MEDI-528 studies [24] based on preclinical toxicology findings of lymphohistiocytic perivascular infiltrates in the brains of cynomolgus monkeys seen in both treated and control animals. The study and peer-review pathologists considered this a spontaneous background finding unrelated to treatment. 
Subsequent MEDI-528 toxicology studies in monkeys and of MM9C1 in mice found no evidence of macroscopic or microscopic pathologic changes in the brain; MRI is therefore not required for subsequent clinical studies of MEDI-528 (data on file, MedImmune, LLC).\nSubjects in both studies who received any dose of study drug were included in the safety analyses.\n[SUBTITLE] Pharmacokinetics and immunogenicity [SUBSECTION] In study 1, blood samples for measuring serum concentrations of MEDI-528 were collected before dosing and at specified times throughout the study. As previously described [24], a validated enzyme-linked immunosorbent assay (ELISA) was used for these measurements. Unknown values with calculated concentrations below the assay's lower limit of quantitation (< 1.25 μg/mL) were reported as less than the limit of quantitation.\nA double-antigen sandwich ELISA was performed to evaluate anti-MEDI-528 antibodies [24].\nIn study 1, blood samples for measuring serum concentrations of MEDI-528 were collected before dosing and at specified times throughout the study. As previously described [24], a validated enzyme-linked immunosorbent assay (ELISA) was used for these measurements. Unknown values with calculated concentrations below the assay's lower limit of quantitation (< 1.25 μg/mL) were reported as less than the limit of quantitation.\nA double-antigen sandwich ELISA was performed to evaluate anti-MEDI-528 antibodies [24].\n[SUBTITLE] Excercise challenge test [SUBSECTION] In study 2, exercise challenge was performed at baseline prior to dosing and on study days 28, 56, and 150 after dosing [26]. A response to therapy was defined as a maximum decrease in post-exercise FEV1 of <10%, based on American Thoracic Society guidelines [27]. Spirometry was performed 15 minutes before initiation of the treadmill test and 5, 10, 15, 20, and 30 minutes after the treadmill test was completed. At each time point, maximum FEV1 was determined. The decrease in FEV1 at each time point after the treadmill test was calculated as a percentage of the best baseline FEV1.\nIn study 2, exercise challenge was performed at baseline prior to dosing and on study days 28, 56, and 150 after dosing [26]. A response to therapy was defined as a maximum decrease in post-exercise FEV1 of <10%, based on American Thoracic Society guidelines [27]. Spirometry was performed 15 minutes before initiation of the treadmill test and 5, 10, 15, 20, and 30 minutes after the treadmill test was completed. At each time point, maximum FEV1 was determined. The decrease in FEV1 at each time point after the treadmill test was calculated as a percentage of the best baseline FEV1.\n[SUBTITLE] Asthma control and quality of life [SUBSECTION] In both studies, an asthma exacerbation was defined as a worsening of asthma requiring oral corticosteroids, a doubling of the ICS dose from baseline, hospitalization, emergency department visit, or an unscheduled asthma-related visit to a health care provider. Asthma symptom scores were recorded twice daily for the duration of the study. Symptoms were assessed on a scale of 0 (no symptoms) to 4 (marked discomfort). Rescue SABA use (puffs/day) was recorded daily by subjects for the duration of the study. 
Subjects completed the Asthma Quality of Life Questionnaire (AQLQ) [28] at baseline and after dosing.\nIn both studies, an asthma exacerbation was defined as a worsening of asthma requiring oral corticosteroids, a doubling of the ICS dose from baseline, hospitalization, emergency department visit, or an unscheduled asthma-related visit to a health care provider. Asthma symptom scores were recorded twice daily for the duration of the study. Symptoms were assessed on a scale of 0 (no symptoms) to 4 (marked discomfort). Rescue SABA use (puffs/day) was recorded daily by subjects for the duration of the study. Subjects completed the Asthma Quality of Life Questionnaire (AQLQ) [28] at baseline and after dosing.\n[SUBTITLE] Additional evaluations [SUBSECTION] Skin prick testing of common food and aeroallergens was performed during screening for both studies and on study day 28 for study 1 [29]. In both studies, spirometry was performed according to existing guidelines [30].\nSkin prick testing of common food and aeroallergens was performed during screening for both studies and on study day 28 for study 1 [29]. In both studies, spirometry was performed according to existing guidelines [30].\n[SUBTITLE] Statistical analyses [SUBSECTION] Formal sample size calculations were not applicable for assessment of the primary objective (safety/tolerability profile). No statistical hypothesis testing was performed for this end point. Data analyses were conducted using the SAS System (SAS Institute Inc., Cary, NC).\nAEs and SAEs were described using the MedDRA Adverse Event Thesaurus by system organ class, severity, and relationship to study drug through 18 weeks after the last dose (day 150). Laboratory values with a higher toxicity grade than that observed at baseline were recorded as AEs. The day 0 value before study drug administration was used as a baseline for laboratory parameters. Study discontinuation blood samples were summarized at the closest nominal time point that did not already have a value.\nThe original design for study 2 included 3 cohorts (50 mg, 100 mg, and 200 mg MEDI-528), each with 18 subjects randomized 2:1 to receive MEDI-528 or placebo. Sample size calculations were based on two-sample t test of the reduction in the maximum decline of FEV1 after exercise challenge testing. The power to detect a statistically significant difference in the maximum decline in FEV1 was greater than 80% based on an assumption of a 20% fall in the placebo group and 65% reduction in the maximum decline of FEV1 in the combined group (36 subjects) on active treatment versus placebo (18 subjects).\nSerum concentrations and PK parameters were analyzed using WinNonlin (Pharsight, St. Louis, MO) and descriptive statistics summarized for each MEDI-528 treatment group. The number of subjects exhibiting anti-MEDI-528 antibodies was summarized, and all valid assay results from subjects who received any study drug were included in immunogenicity summaries.\nNo formal statistical hypothesis tests were conducted for exploratory variables (pulmonary function, asthma exacerbations, symptom scores, rescue SABA use, quality of life). These variables were examined for their medical/clinical implications.\nTwo-sample t tests were used to explore the change from baseline in FEV1 and asthma symptom score between MEDI-528 and placebo groups. The Fisher exact test was used to explore the difference in asthma exacerbation proportions between the placebo and MEDI-528 groups. 
The total number of exacerbations per subject during the study was noted.\nFormal sample size calculations were not applicable for assessment of the primary objective (safety/tolerability profile). No statistical hypothesis testing was performed for this end point. Data analyses were conducted using the SAS System (SAS Institute Inc., Cary, NC).\nAEs and SAEs were described using the MedDRA Adverse Event Thesaurus by system organ class, severity, and relationship to study drug through 18 weeks after the last dose (day 150). Laboratory values with a higher toxicity grade than that observed at baseline were recorded as AEs. The day 0 value before study drug administration was used as a baseline for laboratory parameters. Study discontinuation blood samples were summarized at the closest nominal time point that did not already have a value.\nThe original design for study 2 included 3 cohorts (50 mg, 100 mg, and 200 mg MEDI-528), each with 18 subjects randomized 2:1 to receive MEDI-528 or placebo. Sample size calculations were based on two-sample t test of the reduction in the maximum decline of FEV1 after exercise challenge testing. The power to detect a statistically significant difference in the maximum decline in FEV1 was greater than 80% based on an assumption of a 20% fall in the placebo group and 65% reduction in the maximum decline of FEV1 in the combined group (36 subjects) on active treatment versus placebo (18 subjects).\nSerum concentrations and PK parameters were analyzed using WinNonlin (Pharsight, St. Louis, MO) and descriptive statistics summarized for each MEDI-528 treatment group. The number of subjects exhibiting anti-MEDI-528 antibodies was summarized, and all valid assay results from subjects who received any study drug were included in immunogenicity summaries.\nNo formal statistical hypothesis tests were conducted for exploratory variables (pulmonary function, asthma exacerbations, symptom scores, rescue SABA use, quality of life). These variables were examined for their medical/clinical implications.\nTwo-sample t tests were used to explore the change from baseline in FEV1 and asthma symptom score between MEDI-528 and placebo groups. The Fisher exact test was used to explore the difference in asthma exacerbation proportions between the placebo and MEDI-528 groups. The total number of exacerbations per subject during the study was noted.", "Adults aged 18-65 years with mild persistent asthma (forced expiratory volume in 1 second [FEV1] or peak expiratory flow [PEF] ≥80% of predicted) receiving therapy with short-acting β2-agonists (SABA), inhaled corticosteroids (ICS) <264 μg/day fluticasone or equivalent, or both (study 1) and adults aged 18-50 years with stable mild to moderate persistent asthma receiving therapy with SABA, ICS <800 μg/day budesonide or equivalent, and EIB (decrease in FEV1 of ≥15% from baseline during screening) (study 2) were eligible [25].\nExclusion criteria included lung disease other than asthma, use of systemic immunosuppressive drugs, and smoking history ≥10 pack-years. Long-acting β2-agonists, cromolyn sodium, nedocromil sodium, leukotriene receptor antagonists, theophylline, and omalizumab were not allowed (studies 1 and 2).", "Study 1 was a randomized, double-blind, placebo-controlled, dose-escalation, multicenter study evaluating the safety, tolerability, PK, and immunogenicity profiles of multiple SC doses of MEDI-528. 
For each cohort (0.3 mg/kg, 1 mg/kg, or 3 mg/kg), subjects were randomized 3:1 via an interactive voice response system (IVRS) to receive MEDI-528 or placebo as SC injections twice weekly for 4 weeks through study day 24; thereafter, subjects were monitored for 126 days. Dosing at each next higher dose group commenced after all evaluable subjects from the previous lower dose group completed evaluations on study day 56 with acceptable safety profiles. Subjects who received ≥7 doses of the study drug were considered evaluable. Those not evaluable were replaced, unless they withdrew from the study due to safety reasons. The primary outcome for this study was the safety and tolerability of multiple SC doses of MEDI-528. Secondary outcomes included PK and immunogenecity of MEDI-528 in this subject population. Exploratory outcomes included effects of MEDI-528 on pulmonary function, asthma exacerbations, symptoms, rescue SABA use, and quality of life.\nStudy 2 was a randomized, double-blind, placebo-controlled, multicenter study evaluating the safety and tolerability profiles of multiple SC doses of MEDI-528 in three cohorts of 50, 100, and 200 mg versus placebo. Subjects were randomized 2:1 via IVRS to receive MEDI-528 or placebo as an SC injection twice weekly for 4 weeks; thereafter, subjects were monitored for 126 days. Subjects who received ≥4 doses of study drug (unless they discontinued for safety reasons) and had ≥2 exercise challenge tests (baseline and post-therapy) were considered evaluable. The primary outcome of this study was the safety and tolerability of multiple SC doses of MEDI-528 in adult subjects with stable asthma and EIB. Secondary objectives included the effect of MEDI-528 on EIB and immunogenicity. Exploratory outcomes included the effects of MEDI-528 on spirometry, airway hyperresponsiveness as measured by methacholine challenge testing, asthma exacerbations, asthma symptoms, rescue SABA use, quality of life, and nasal allergy symptoms in this population.\nIn both studies, all subjects and protocol-associated personnel were blinded to the individual subject treatment assignment until the last subject in each cohort completed the study and the databases were locked.\nBoth studies were conducted in accordance with the Declaration of Helsinki and were approved by an institutional review board/independent ethics committee at each participating site. Written informed consent was obtained from each subject before study entry.", "In both studies, AEs and SAEs were monitored after the first dose through day 150. AEs were graded by severity (mild, moderate, severe) and relationship to study drug (none, remote, possible, probable, definite) as determined by each investigator. Other safety measures included routine laboratory tests, vital signs, electrocardiograms (ECGs), and physical examinations. Physical examination included assessments for splenomegaly (palpable spleen), lymphadenopathy, and neurologic abnormalities.\nIn both studies, a noncontrast MRI of the brain was performed at screening and day 28. MRI was added to the current studies and other MEDI-528 studies [24] based on preclinical toxicology findings of lymphohistiocytic perivascular infiltrates in the brains of cynomolgus monkeys seen in both treated and control animals. The study and peer-review pathologists considered this a spontaneous background finding unrelated to treatment. 
Subsequent MEDI-528 toxicology studies in monkeys and of MM9C1 in mice found no evidence of macroscopic or microscopic pathologic changes in the brain; MRI is therefore not required for subsequent clinical studies of MEDI-528 (data on file, MedImmune, LLC).\nSubjects in both studies who received any dose of study drug were included in the safety analyses.", "In study 1, blood samples for measuring serum concentrations of MEDI-528 were collected before dosing and at specified times throughout the study. As previously described [24], a validated enzyme-linked immunosorbent assay (ELISA) was used for these measurements. Unknown values with calculated concentrations below the assay's lower limit of quantitation (< 1.25 μg/mL) were reported as less than the limit of quantitation.\nA double-antigen sandwich ELISA was performed to evaluate anti-MEDI-528 antibodies [24].", "In study 2, exercise challenge was performed at baseline prior to dosing and on study days 28, 56, and 150 after dosing [26]. A response to therapy was defined as a maximum decrease in post-exercise FEV1 of <10%, based on American Thoracic Society guidelines [27]. Spirometry was performed 15 minutes before initiation of the treadmill test and 5, 10, 15, 20, and 30 minutes after the treadmill test was completed. At each time point, maximum FEV1 was determined. The decrease in FEV1 at each time point after the treadmill test was calculated as a percentage of the best baseline FEV1.", "In both studies, an asthma exacerbation was defined as a worsening of asthma requiring oral corticosteroids, a doubling of the ICS dose from baseline, hospitalization, emergency department visit, or an unscheduled asthma-related visit to a health care provider. Asthma symptom scores were recorded twice daily for the duration of the study. Symptoms were assessed on a scale of 0 (no symptoms) to 4 (marked discomfort). Rescue SABA use (puffs/day) was recorded daily by subjects for the duration of the study. Subjects completed the Asthma Quality of Life Questionnaire (AQLQ) [28] at baseline and after dosing.", "Skin prick testing of common food and aeroallergens was performed during screening for both studies and on study day 28 for study 1 [29]. In both studies, spirometry was performed according to existing guidelines [30].", "Formal sample size calculations were not applicable for assessment of the primary objective (safety/tolerability profile). No statistical hypothesis testing was performed for this end point. Data analyses were conducted using the SAS System (SAS Institute Inc., Cary, NC).\nAEs and SAEs were described using the MedDRA Adverse Event Thesaurus by system organ class, severity, and relationship to study drug through 18 weeks after the last dose (day 150). Laboratory values with a higher toxicity grade than that observed at baseline were recorded as AEs. The day 0 value before study drug administration was used as a baseline for laboratory parameters. Study discontinuation blood samples were summarized at the closest nominal time point that did not already have a value.\nThe original design for study 2 included 3 cohorts (50 mg, 100 mg, and 200 mg MEDI-528), each with 18 subjects randomized 2:1 to receive MEDI-528 or placebo. Sample size calculations were based on two-sample t test of the reduction in the maximum decline of FEV1 after exercise challenge testing. 
The power to detect a statistically significant difference in the maximum decline in FEV1 was greater than 80% based on an assumption of a 20% fall in the placebo group and 65% reduction in the maximum decline of FEV1 in the combined group (36 subjects) on active treatment versus placebo (18 subjects).\nSerum concentrations and PK parameters were analyzed using WinNonlin (Pharsight, St. Louis, MO) and descriptive statistics summarized for each MEDI-528 treatment group. The number of subjects exhibiting anti-MEDI-528 antibodies was summarized, and all valid assay results from subjects who received any study drug were included in immunogenicity summaries.\nNo formal statistical hypothesis tests were conducted for exploratory variables (pulmonary function, asthma exacerbations, symptom scores, rescue SABA use, quality of life). These variables were examined for their medical/clinical implications.\nTwo-sample t tests were used to explore the change from baseline in FEV1 and asthma symptom score between MEDI-528 and placebo groups. The Fisher exact test was used to explore the difference in asthma exacerbation proportions between the placebo and MEDI-528 groups. The total number of exacerbations per subject during the study was noted.", "[SUBTITLE] Demographics and baseline characteristics [SUBSECTION] In study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). There were more ex-smokers in the placebo group, otherwise the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).\nIn study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). 
There were more ex-smokers in the placebo group, otherwise the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).\n[SUBTITLE] Safety profile [SUBSECTION] The most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotranferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing. This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. 
MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.\nThe most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotranferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing. This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.\n[SUBTITLE] Pharmacokinetics and immunogenicity [SUBSECTION] In study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. 
Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.\nIn study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.\n[SUBTITLE] Pulmonary function [SUBSECTION] Pulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.\nPulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). 
[SUBTITLE] Pulmonary function [SUBSECTION] Pulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.\n[SUBTITLE] Asthma control and quality of life [SUBSECTION] Asthma control and quality of life results are shown in Table 5. In both studies, a trend toward improvement in the overall mean asthma symptom scores was noted during the treatment period in all subjects compared with baseline values. Overall mean rescue SABA use during both studies was comparable among groups, and overall mean AQLQ scores were comparable between the MEDI-528 groups and placebo at baseline and during the study.\nIn study 1, fewer subjects having ≥1 asthma exacerbation were observed in the combined MEDI-528 group (n = 1 of 27) compared with the placebo group (n = 2 of 9; P = 0.148). The one MEDI-528-treated subject was receiving the lowest dose. The 2 placebo-treated subjects had a total of 4 asthma exacerbation episodes (1 subject had 1 episode; 1 subject had 3 episodes).\nIn study 2, no asthma exacerbations were reported.\n[SUBTITLE] Exercise challenge test [SUBSECTION] In study 2, multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo (Table 6). The mean absolute maximum decline in FEV1 at study day 56 was -0.04 L for the MEDI-528 group compared with -0.60 L for the placebo group (P < 0.01). Differences at all other data points did not achieve statistical significance (data not shown). Time to return to 90% of baseline FEV1 following exercise challenge was shorter during the screening period prior to dosing with study drug and at all subsequent study days for the MEDI-528 group as compared with placebo. Furthermore, the time to return to 90% of baseline FEV1 improved in the MEDI-528 group at study days 28 and 56, while there was no improvement in the placebo group. A post hoc analysis showed that 6 of 7 MEDI-528-treated subjects were responders at study day 28, 7 of 7 at day 56, and 6 of 6 at day 150. There were no placebo-treated responders at days 28 and 56, and there was 1 responder at day 150 (Figure 3).\nMean (SD) maximum percentage change in FEV1 after exercise in study 2.\n*For day 150, n = 6. P values are based on two-sample t test between placebo and MEDI-528 groups.\nMean maximum percentage decline in FEV1 after exercise for each individual subject in study 2. Multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo.",
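The two exercise-challenge endpoints described above (maximum percentage fall in FEV1 and time to return to 90% of baseline) can be derived from a post-exercise spirometry series as in the minimal sketch below; the time points and FEV1 values are invented for illustration and are not study data.

```python
# Illustrative only: derives the two exercise-challenge endpoints used above from a
# hypothetical post-exercise FEV1 time course (minutes after exercise -> FEV1 in L).
baseline_fev1 = 3.20
post_exercise = [(5, 2.55), (10, 2.60), (15, 2.80), (20, 2.95), (30, 3.05)]

# Maximum percentage decrease in FEV1 relative to baseline.
max_pct_fall = max((baseline_fev1 - fev1) / baseline_fev1 * 100 for _, fev1 in post_exercise)

# Time to return to 90% of baseline FEV1 (first time point at or above the threshold).
threshold = 0.9 * baseline_fev1
time_to_90 = next((t for t, fev1 in post_exercise if fev1 >= threshold), None)

print(f"max fall: {max_pct_fall:.1f}%")               # ~20.3% with these placeholder values
print(f"time to 90% of baseline: {time_to_90} min")   # 20 min (threshold here is 2.88 L)
```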
"In study 1, 36 subjects were randomized between June 2007 and February 2008 at 8 sites, and 33 completed the study through day 150. Two MEDI-528-treated subjects were lost to follow-up and 1 placebo-treated subject withdrew due to an asthma exacerbation requiring hospitalization. All 36 subjects were evaluable (ie, received ≥7 doses of study drug) and included in the safety analyses (Figure 1A). There were more ex-smokers in the placebo group; otherwise, the groups' baseline characteristics were comparable (Table 1).\nFlow diagram for subjects in studies 1 (A) and 2 (B).\nDemographics and Baseline Characteristics (ITT Population)\n*Measurements from evaluable subjects (those who received ≥7 or ≥4 doses of the study drug in study 1 and study 2, respectively); ITT = intent to treat (all randomized subjects); SD = standard deviation; FEV1 = forced expiratory volume in 1 second.\nIn study 2, 11 subjects were randomized and dosed with 50 mg of MEDI-528 or placebo between February and May 2008 at 4 sites, and 9 completed the study through day 150 (Figure 1B). Nine subjects were evaluable (ie, received ≥4 doses of study drug). Two placebo-treated subjects were not evaluable (1 received only 2 doses of study drug and 1 was lost to follow-up). All 11 subjects were included in the safety analyses. Baseline characteristics were similar between study groups (Table 1).", "The most frequently reported AEs in study 1 are listed in Table 2. Severe AEs were reported in 4 placebo-treated subjects (vomiting, n = 1; elevated lipase, n = 2; asthma, n = 1) and 2 subjects receiving MEDI-528 0.3 mg/kg (severe diarrhea, n = 1; elevated alanine aminotransferase [ALT], n = 1). The severe asthma AE in the 1 placebo-treated subject was an SAE (asthma exacerbation requiring hospitalization); this subject was discontinued from the study. The elevated ALT seen in the MEDI-528-treated subject was noted on the first day of dosing.
This subject received no further drug and the elevated levels resolved by study day 42. No deaths were noted during this study.\nMost Frequently Reported Adverse Events in Study 1 (Safety Population*)\nValues are shown in descending order of frequency in the total MEDI-528 group.\n*Consisted of all subjects who received the study drug.\nAbnormal but clinically nonsignificant ECG results were detected during the study in 5 placebo-treated subjects, 5 subjects in the MEDI-528 0.3-mg/kg group, and 2 subjects each in the MEDI-528 1-mg/kg and 3-mg/kg groups. One subject in the MEDI-528 3-mg/kg group exhibited an asymptomatic elevation of troponin levels at a single time point on study day 84. No significant changes in the central nervous system were observed from the brain MRI or focused neurological examinations.\nThe most frequently reported AEs in study 2 are listed in Table 3. A total of 5 severe AEs occurred in 4 subjects; 3 placebo-treated subjects had 1 event each (eye infection, cough, drug hypersensitivity) and 1 MEDI-528-treated subject had 2 events (sunburn, back pain). No significant ECG changes occurred and no elevations in troponin levels were observed. One SAE occurred in a MEDI-528-treated subject (abnormal brain MRI results). The subject had a 6- × 4-mm left-sided pontine hyperintensity noted on the day 28 MRI that was not present at baseline. The investigator considered the event possibly related to study drug, resulting in a clinical hold of the study. A repeat MRI with gadolinium contrast showed no abnormal findings or pontine hyperintensity. Review by an independent neuroradiologist determined the initial MRI finding to be an artifact. The clinical hold was lifted, but the study was discontinued due to the length of the delay.\nMost Frequently Reported Adverse Events in Study 2 (Safety Population*)\nValues are shown in descending order of frequency in the MEDI-528 group. MRI = magnetic resonance imaging.\n*Consisted of all subjects who received the study drug.", "In study 1, limited PK parameters were estimable because the dosing interval for MEDI-528 was not constant, alternating between 3 days and 4 days, and PK sampling was sparse.\nAfter the last MEDI-528 dose, maximum serum concentrations were generally achieved between 3 and 4 days across dose levels (Table 4). Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, increased in an approximately dose-proportional manner from 13.7 μg/mL to 105.5 μg/mL; similar dose proportionality was noted for trough concentrations, which increased from 11.7 μg/mL to 90.4 μg/mL. Mean half-life was similar across dose levels (range, 35-38 days).\nMEDI-528 Multiple-Dose Pharmacokinetic Parameters in Study 1\nValues are mean ± SD. Cmax = maximum concentration; Tmax = time to maximum concentration; T1/2 = half-life.\n*Accumulation index was based on trough concentration after first and last dose.\n†n = 8.\n‡n = 7.\nComparison of trough concentrations after the first and last doses yielded accumulation index values between 5 and 10 across dose levels. The fluctuation of MEDI-528 concentrations within a dosing interval was small, consistent with the frequency of dosing and half-life (Figure 2).\nMEDI-528 mean serum concentrations in study 1. Mean maximum concentration after the last dose of 0.3 mg/kg to 3 mg/kg, respectively, mean half-life, trough concentrations after the first and last doses were measured. 
Mean concentrations of MEDI-528 increased in a dose-proportional manner and peaked after the last dose of the study drug.\nNo anti-MEDI-528 antibodies, defined as antibody titer of >10, were detected in any group during study 1 or study 2.", "Pulmonary function was generally unchanged throughout both studies. FEV1 values were comparable among groups at baseline and at the end of the studies (Table 5). In study 1, FEV1 percent predicted values were comparable at baseline and end of study (Table 5).\nExploratory Analyses in Studies 1 and 2 (Evaluable Population)\nValues are mean ± SD.\nAQLQ = Asthma Quality of Life Questionnaire; FEV1 = forced expiratory volume in 1 second.", "Asthma control and quality of life results are shown in Table 5. In both studies, a trend toward improvement in the overall mean asthma symptom scores was noted during the treatment period in all subjects compared with baseline values. Overall mean rescue SABA use during both studies was comparable among groups, and overall mean AQLQ scores were comparable between the MEDI-528 groups and placebo at baseline and during the study.\nIn study 1, fewer subjects having ≥1 asthma exacerbation were observed in the combined MEDI-528 group (n = 1 of 27) compared with the placebo group (n = 2 of 9; P = 0.148). The one MEDI-528-treated subject was receiving the lowest dose. The 2 placebo-treated subjects had a total of 4 asthma exacerbation episodes (1 subject had 1 episode; 1 subject had 3 episodes).\nIn study 2, no asthma exacerbations were reported.", "In study 2, multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo (Table 6). The mean absolute maximum decline in FEV1 at study day 56 was -0.04 L for the MEDI-528 group compared with -0.60 L for the placebo group (P < 0.01). Differences at all other data points did not achieve statistical significance (data not shown). Time to return to 90% of baseline FEV1 following exercise challenge was shorter during the screening period prior to dosing with study drug and at all subsequent study days for the MEDI-528 group as compared with placebo. Furthermore, the time to return to 90% of baseline FEV1 improved in the MEDI-528 group at study days 28 and 56, while there was no improvement in the placebo group. A post hoc analysis showed that 6 of 7 MEDI-528-treated subjects were responders at study day 28, 7 of 7 at day 56, and 6 of 6 at day 150. There were no placebo-treated responders at days 28 and 56, and there was 1 responder at day 150 (Figure 3).\nMean (SD) maximum percentage change in FEV1 after exercise in study 2.\n*For day 150, n = 6. P values are based on two-sample t test between placebo and MEDI-528 groups.\nMean maximum percentage decline in FEV1 after exercise for each individual subject in study 2. Multiple doses of MEDI-528 resulted in a reduction in the mean maximum percentage decrease in FEV1 after exercise as compared to placebo.", "In both studies, subjects tolerated multiple SC doses of MEDI-528, with AE, severe AE and SAE rates similar to placebo-treated subjects. No subjects developed anti-MEDI-528 antibodies. MEDI-528 exhibited linear PK, with peak and trough concentrations increasing in an approximately dose-proportional manner over the dose range studied. The half-life values (35-38 days) were consistent with those observed in the high-dose groups of the single-dose SC study of healthy volunteers [24], namely, the 3-mg/kg (half-life, 44 days) and 9-mg/kg (half-life, 33 days) doses. 
In addition, these results are in accord with findings from the 2 studies conducted in healthy volunteers [24]. In those studies, MEDI-528 had an acceptable safety profile, with no SAEs or deaths reported.\nThroughout study 1, pulmonary function was essentially unchanged and SABA use was comparable among groups. Overall, asthma symptom scores showed a trend toward improvement in all subjects. A trend toward fewer subjects having ≥1 asthma exacerbation was observed in MEDI-528-treated subjects in study 1 compared with placebo-treated subjects. This finding is of unclear significance as blocking IL-9 may result in a reduction in AHR [13,14] and AHR has not been clearly related to asthma exacerbations.\nAlthough the actions of IL-9 are not fully understood, IL-9 is believed to play an important role in the trafficking and function of mast cells [15,16]. Local IL-9 production from inflammatory cells would theoretically result in the recruitment and differentiation of mast cell progenitors from the bone marrow to the lung. Blocking IL-9 would therefore not be expected to have an immediate clinical effect, but this would rather be dependent on the loss of resident mast cells in the tissue.\nThe results of study 2 are intriguing in that they suggest that blocking IL-9 with MEDI-528 may have an effect on EIB, which is dependent on mast cell degranulation [19]. The maximum effect of MEDI-528 was seen at study day 56, which would be consistent with a later onset of action. Future studies should consider that a period of chronic dosing may be required before seeing maximum clinical effect. Conclusions from study 2 are unfortunately limited in that it was prematurely halted.\nBecause of the small sample size in both studies, firm conclusions regarding the clinical activity of MEDI-528 cannot be drawn from these results. The potential benefits of MEDI-528 observed in both studies should be interpreted with caution as these studies were also limited by the mild disease severity of the study population.", "In conclusion, prior and current results suggest that MEDI-528 has acceptable safety and tolerability profiles and provide evidence of clinical activity in mild to moderate asthmatics. Further studies are warranted to assess the clinical efficacy of MEDI-528 for treating patients with inadequately controlled asthma.", "CLF, SDM, and DP received research funding from MedImmune, LLC, for the conduct of these studies. JMP, CKO, CL, GJR, WIW and NAM are employees of MedImmune, LLC. BW was an employee of MedImmune, LLC, at the time of the study and manuscript submission. These studies were sponsored by MedImmune, LLC.", "CLF, SDM, and DP were involved in the collection of data and interpretation of the results; JMP, CKO, CL, GJR, WIW, BW, and NAM were involved in the design of the study, analysis of the data, and interpretation of the results; all authors critically reviewed and revised the manuscript and approved the final version.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2466/11/14/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Serum adiponectin and transient elastography as non-invasive markers for postoperative biliary atresia.
21356120
Biliary atresia (BA) is a progressive inflammatory disorder of the extrahepatic bile ducts leading to the obliteration of bile flow. The purpose of this study was to determine serum adiponectin in BA patients and to investigate the relationship of adiponectin with clinical parameters and liver stiffness scores.
BACKGROUND
Sixty BA patients post Kasai operation and 20 controls were enrolled. The mean age of BA patients and controls was 9.6 ± 0.7 and 10.1 ± 0.7 years, respectively. BA patients were classified into two groups according to their serum total bilirubin (TB) levels (non-jaundice, TB < 2 mg/dl vs. jaundice, TB ≥ 2 mg/dl) and liver stiffness (insignificant fibrosis, liver stiffness < 7 kPa vs. significant fibrosis, liver stiffness ≥ 7 kPa). Serum adiponectin levels were analyzed by enzyme-linked immunosorbent assay. Liver stiffness scores were examined by transient elastography (FibroScan).
METHODS
BA patients had markedly higher serum adiponectin levels (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) and liver stiffness than controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001). Serum adiponectin levels were significantly elevated in BA patients with jaundice compared with those without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001). In addition, BA patients with significant liver fibrosis had remarkably greater serum adiponectin than insignificant fibrosis counterparts (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). Subsequent analysis revealed that serum adiponectin was positively correlated with total bilirubin, hyaluronic acid, and liver stiffness (r = 0.58, r = 0.46, and r = 0.60, P < 0.001, respectively).
RESULTS
Serum adiponectin and liver stiffness values were higher in BA patients compared with normal participants. The elevated serum adiponectin levels also positively correlated with the degree of hepatic dysfunction and liver fibrosis. Accordingly, serum adiponectin and transient elastography could serve as the useful non-invasive biomarkers for monitoring the severity and progression in postoperative BA.
CONCLUSIONS
[ "Adiponectin", "Biliary Atresia", "Biomarkers", "Case-Control Studies", "Child", "Disease Progression", "Elasticity Imaging Techniques", "Female", "Humans", "Jaundice", "Jejunostomy", "Liver Cirrhosis", "Male", "Postoperative Period", "Treatment Outcome" ]
3053237
null
null
Methods
All parents of children were informed of the purpose of the study and of any interventions involved in this study. Written informed consents were obtained from participants' parents prior to the children entering the study. This study complied with the ethical guidelines of the 1975 Declaration of Helsinki, and was approved by the Institutional Review Board of the Faculty of Medicine, Chulalongkorn University. [SUBTITLE] Study Population [SUBSECTION] Sixty BA patients (32 girls and 28 boys with mean age of 9.6 ± 0.7 years) and 20 healthy children (10 girls and 10 boys with mean age of 10.1 ± 0.7 years) were recruited in this study. All BA patients had undergone hepatic portojejunostomy with Roux-en-Y reconstruction (original Kasai procedure), and they were generally in good health, with no signs of suspected infection or bleeding abnormalities at the time of blood sampling. None of them had undergone liver transplantation. Healthy controls, who attended the Well Baby Clinic at King Chulalongkorn Memorial Hospital for vaccination, had normal physical findings and no underlying disease. BA patients were classified into two groups according to serum total bilirubin (TB), serum alanine aminotransferase (ALT), and liver stiffness score. Based on their jaundice status, BA children were divided into a non-jaundice group (TB < 2 mg/dl) and a persistent jaundice group (TB ≥ 2 mg/dl). Subsequently, BA patients were categorized into a non-significant fibrosis group (liver stiffness < 7 kPa) and a significant fibrosis group (liver stiffness ≥ 7 kPa). The cut-off point of liver stiffness score for significant fibrosis was based on the study by Castera L, et al. [22], with sensitivity of 67% and specificity of 89%.
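As a concrete illustration of the grouping rule just described, the short sketch below assigns hypothetical patients to the jaundice and fibrosis groups using the stated cut-offs (TB ≥ 2 mg/dl; liver stiffness ≥ 7 kPa); the patient records and function name are illustrative, not taken from the study.

```python
# Illustrative sketch of the grouping described above; the patient data are hypothetical.
def classify_patient(tb_mg_dl: float, stiffness_kpa: float) -> dict:
    """Apply the study's cut-offs: persistent jaundice if TB >= 2 mg/dl,
    significant fibrosis if liver stiffness >= 7 kPa."""
    return {
        "jaundice": tb_mg_dl >= 2.0,
        "significant_fibrosis": stiffness_kpa >= 7.0,
    }

patients = [
    {"id": "BA-01", "tb": 0.8, "stiffness": 6.2},
    {"id": "BA-02", "tb": 4.5, "stiffness": 31.0},
]
for p in patients:
    print(p["id"], classify_patient(p["tb"], p["stiffness"]))
# BA-01 {'jaundice': False, 'significant_fibrosis': False}
# BA-02 {'jaundice': True, 'significant_fibrosis': True}
```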
[SUBTITLE] Laboratory methods [SUBSECTION] After overnight fast, samples of peripheral venous blood were collected in the morning from every participant, centrifuged for 15 min at 1000 × g, and stored immediately at -80°C for further analysis. Quantitative determination of adiponectin concentration in serum was performed using a commercially available enzyme-linked immunosorbent assay (ELISA) (R&D Systems, Inc., Minneapolis, MN, USA). According to the manufacturer's protocol, 50 μl of recombinant human adiponectin standards and serum samples were pipetted into each well, which had been pre-coated with specific antibody for adiponectin. After incubating for 2 h at room temperature, every well was washed thoroughly with wash buffer 4 times. 200 μl of a horseradish peroxidase-conjugated monoclonal antibody specific for adiponectin was then added to each well and incubated for a further 2 h at room temperature. After 4 washes, substrate solution was pipetted into the wells and the microplate was incubated for 30 min at room temperature, protected from light. Lastly, the reaction was stopped with stop solution and the color intensity was measured with an automated microplate reader at 450 nm. The adiponectin concentration was determined from a standard optical density-concentration curve. Twofold serial dilutions of recombinant human adiponectin at concentrations of 3.9-250 ng/ml were used as standards. The manufacturer-reported precision was 2.5-4.7% (intra-assay) and 5.8-6.9% (inter-assay). The sensitivity of this assay was 0.246 ng/ml.\nThe liver function tests, including serum albumin, total bilirubin (TB), direct bilirubin (DB), aspartate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP), were measured using a Hitachi 912 automated machine at the central laboratory of our hospital. The aspartate aminotransferase to platelets ratio index (APRI) was calculated as follows: (AST/upper limit of normal) × 100/platelet count (10⁹/l) [23].
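For readers who want to reproduce the APRI formula quoted above, a minimal sketch follows; the AST upper limit of normal (40 IU/l) and the example values are assumptions for illustration and are not values reported by the study.

```python
# Minimal sketch of the APRI formula quoted in the text:
# APRI = (AST / upper limit of normal) * 100 / platelet count (10^9/L).
def apri(ast_iu_l: float, platelets_10e9_l: float, ast_uln_iu_l: float = 40.0) -> float:
    """Aspartate aminotransferase to platelets ratio index.
    The ULN of 40 IU/l is an assumed illustrative value; use the local laboratory ULN."""
    return (ast_iu_l / ast_uln_iu_l) * 100.0 / platelets_10e9_l

# Hypothetical example: AST 120 IU/l, platelets 90 x 10^9/L -> APRI ~3.33
print(round(apri(120, 90), 2))
```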
[SUBTITLE] Liver stiffness measurement [SUBSECTION] Transient elastography (FibroScan) measured liver stiffness between 25 and 65 mm below the skin surface, which is approximately equivalent to the volume of a cylinder 1 cm wide and 4 cm long. The measurements were performed by placing the FibroScan transducer probe on an intercostal space over the right lobe of the liver, with patients lying in the dorsal decubitus position with maximal abduction of the right arm. The target location for measurement was a liver portion that was at least 6 cm thick and free of major vascular structures. Measurements were repeated until 10 validated results were achieved with a success rate of at least 80%. The median value of the 10 validated scores was considered the elastic modulus of the liver, expressed in kilopascals (kPa). [SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was performed using SPSS software version 16.0 for Windows. The Mann-Whitney U test was used to compare serum adiponectin concentrations between groups. Correlations between serum adiponectin levels and other serological markers and liver stiffness scores were calculated using Pearson's correlation coefficient (r). Data were expressed as mean ± SEM. P-values < 0.05 were considered statistically significant.
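A brief sketch of how the two analyses named above could be reproduced outside SPSS, using scipy; the arrays are hypothetical placeholders, not study data.

```python
# Illustrative sketch of the analyses described above (the study used SPSS 16.0;
# scipy is shown here). The arrays are hypothetical placeholders.
from scipy.stats import mannwhitneyu, pearsonr

adiponectin_jaundice = [22.1, 25.3, 24.0, 27.8, 21.5]      # ug/ml, hypothetical
adiponectin_no_jaundice = [9.8, 11.2, 10.5, 12.4, 10.9]    # ug/ml, hypothetical

u_stat, p_value = mannwhitneyu(adiponectin_jaundice, adiponectin_no_jaundice,
                               alternative="two-sided")
print(f"Mann-Whitney U = {u_stat}, P = {p_value:.4f}")

adiponectin = [9.8, 11.2, 15.4, 22.1, 27.8]                # ug/ml, hypothetical
liver_stiffness = [5.5, 7.9, 18.0, 32.5, 48.0]             # kPa, hypothetical
r, p = pearsonr(adiponectin, liver_stiffness)
print(f"Pearson r = {r:.2f}, P = {p:.4f}")
```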
null
null
null
null
[ "Background", "Study Population", "Laboratory methods", "Liver stiffness measurement", "Statistical analysis", "Results", "Comparisons between BA patients and healthy controls", "Comparisons between BA patients with and without persistent jaundice", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Biliary atresia (BA) is a progressive, inflammatory, fibrosclerotic cholangiopathy resulting in complete obliteration of the extrahepatic bile ducts [1]. The obstruction of bile flow leads to worsening cholestasis, hepatic fibrosis, biliary cirrhosis, end-stage liver disease, and death within a few years [2]. Currently, Kasai operation or hepatoportoenterostomy constitutes the initial surgical treatment of choice for infants with BA. Although Kasai procedure can successfully establish bile flow to the gastrointestinal tract, a number of BA children progress to hepatic cirrhosis, portal hypertension and ultimately require liver transplantation [3]. To date, the etiology and pathogenesis of BA have not been completely understood; however, several mechanisms have been proposed including genetic defects, perinatal viral infections, morphogenic abnormalities, immune mediated bile duct injuries, and autoimmune disorders involving the bile ducts [4,5].\nBile duct inflammation, cytokine responses, and bile acid toxicity are three major contributors of liver parenchymal destruction and hepatic fibrosis in BA patients [2]. After hepatic stellate cells (HSC) are activated, these key effecter cells in hepatic fibrogenesis are transformed into extracellular matrix-producing myofibroblast. This process results in the production and the accumulation of collagen and other extracellular matrix in liver parenchyma, thus initiating and perpetuating the liver fibrosis [6,7]. Recent studies showed the role of adipokines in hepatic fibrogenesis of various chronic liver diseases [8]. In this study, we focused on a unique adipokine, adiponectin.\nAdiponectin, a 244 amino acid polypeptide, is the most abundant adipokine exclusively produced and secreted by adipocytes into systemic circulation in trimeric, hexameric, and larger multimeric high-molecular-weight (HMW) forms [9,10]. Adiponectin is structurally homologue to tumor necrosis factor-α (TNF-α); however, these two molecules antagonize each other's effects in the target organs [11]. Adiponectin exerts its anti-inflammatory effects through the reduction of pro-inflammatory cytokines release including TNF-α and interleukin-6, and inducing the expression of anti-inflammatory cytokines, such as interleukin-10 [12,13]. Adiponectin is also renowned for its anti-diabetic, anti-atherosclerotic, and anti-obesity effects. It is believed that adiponectin plays a protective role in liver diseases. In animal studies, adiponectin-knockout mice developed more severe carbon tetrachloride-induced liver fibrosis compared with wild type mice, and adiponectin injection prior to carbon tetrachloride treatment could prevent it [14]. In non-alcoholic obese mice, administration of recombinant adiponectin could attenuate hepatomegaly, hepatic steatosis, and aminotransferase abnormality [15]. Moreover, elevated adiponectin levels correlate positively with the severity of liver cirrhosis and negatively with hepatic protein synthesis [11,16]. In contrast, low adiponectin levels have been shown in non-alcoholic fatty liver disease [17]. Adiponectin levels correlate negatively with liver fat and hepatic insulin resistance in diabetic patients [18]. 
Adiponectin is currently a subject of research interest since it has the potential to be a useful marker for liver fibrosis, and a possible target for a new therapeutic approach.\nRecently, it has been reported that a number of cytokines and growth factors have been studied in BA patients including osteopontin [19], basic fibroblast growth factor [20], and stem cell factor [21]. To the best of our knowledge, there have been no published studied on serum adiponectin levels from various clinical stages of BA. This is the first study to evaluate the correlation of serum adiponectin, liver stiffness and clinical outcomes in postoperative BA. In the present study, we postulated that serum adiponectin could be associated with the severity of clinical outcomes and the liver stiffness in BA patients, and to prove this hypothesis, we analyzed serum adiponectin and liver stiffness in BA patients compared with healthy controls. Therefore, the purpose of this study was to determine serum adiponectin levels collected from BA patients and to examine the possible correlations of serum adiponectin and outcome parameters of postoperative BA patients.", "Sixty BA patients (32 girls and 28 boys with mean age of 9.6 ± 0.7 years) and 20 healthy children (10 girls and 10 boys with mean age of 10.1 ± 0.7 years) were recruited in this study. All BA patients had undergone hepatic portojejunostomy with Roux-en-Y reconstruction (original Kasai procedure), and they were generally in good health; no signs of suspected infection or bleeding abnormalities at the time of blood sampling. None of them had undergone liver transplantation. Healthy controls who attended the Well Baby Clinic at King Chulalongkorn Memorial hospital for vaccination had normal physical findings and no underlying disease. BA patients were classified into two groups according to serum total bilirubin (TB), serum alanine aminotransferase (ALT), and liver stiffness score. Based on their jaundice status, BA children were divided into a non-jaundice group (TB < 2 mg/dl) and a persistent jaundice group (TB ≥ 2 mg/dl). Subsequently, BA patients were categorized into a non-significant fibrosis group (liver stiffness < 7 kPa) and a significant fibrosis group (liver stiffness≥7 kPa). The cut-off point of liver stiffness score for significant fibrosis was based on the study by Castera L, et al. [22] with sensitivity of 67% and specificity of 89%.", "After overnight fast, samples of peripheral venous blood were collected in the morning from every participant, centrifuged for 15 min at 1000 × g, and stored immediately at -80°C for further analysis. Quantitative determination of adiponectin concentration in serum was performed using commercially available enzyme-linked immunosorbent assay (ELISA) (R&D Systems, Inc., Minneapolis, MN, USA). According to the manufacturer's protocol, 50 μl of recombinant human adiponectin standards and serum samples were pipetted into each well, which has been pre-coated with specific antibody for adiponectin. After incubating for 2 h at room temperature, every well was washed thoroughly with wash buffer for 4 times. 200 μl of a horseradish peroxidase-conjugated monoclonal antibody specific for adiponectin was then added to each well and incubated for a further 2 h at room temperature. After 4 washes, substrate solution was pipetted into the wells and then microplate was incubated for 30 min at room temperature with protection from light. 
Lastly, the reaction was stopped by the stop solution and the color intensity was measured with an automated microplate reader at 450 nm. The adiponectin concentration was determined by a standard optical density-concentration curve. Twofold serial dilutions of recombinant human adiponectin with a concentration of 3.9-250 ng/ml were used as standards. The manufacturer reported precision was 2.5-4.7% (intra-assay) and 5.8-6.9% (inter-assay). The sensitivity of this assay was 0.246 ng/ml.\nThe liver function tests including serum albumin, total bilirubin (TB), direct bilirubin (DB), aspatate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were measured using a Hitachi 912 automated machine at the central laboratory of our hospital. The aspartate aminotransferase to platelets ratio index (APRI) was calculated as follows: (AST/upper limit of normal) × 100/platelet count (109/l) [23].", "Transient elastography (FibroScan) measured the liver stiffness between 25 to 65 mm from the skin surface, which is approximately equivalent to the volume of a cylinder of 1 cm wide and 4 cm long. The measurements were performed by placing a transducer probe of FibroScan on the intercostal space at the area of right lobe of liver with patients lying in a dorsal decubitus position with maximal abduction of right arm. The target location for measurement was a liver portion that was at least 6 cm thick, and was free of major vascular structures. The measurements were performed until 10 validated results were achieved with a success rate of at least 80%. The median value of 10 validated scores was considered the elastic modulus of the liver, and it was expressed in kilopascals (kPa).", "Statistical analysis was performed using SPSS software version 16.0 for Windows. Mann-Whitney U test was used to compare the difference of serum adiponectin concentrations between groups. Correlation between serum adiponectin levels and other serological markers, and liver stiffness scores were calculated using Pearson's correlation coefficient (r). Data were expressed as mean ± SEM. P-values < 0.05 were considered to be statistically significant.", "[SUBTITLE] Comparisons between BA patients and healthy controls [SUBSECTION] A total of 60 BA patients and 20 healthy controls were enrolled in this study. The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.\nA total of 60 BA patients and 20 healthy controls were enrolled in this study. 
The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.\n[SUBTITLE] Comparisons between BA patients with and without persistent jaundice [SUBSECTION] We further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.\nWe further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. 
Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.", "A total of 60 BA patients and 20 healthy controls were enrolled in this study. The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.", "We further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). 
Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.", "Biliary atresia is an intractable liver disorder affecting infants and children. Despite early diagnosis and successful Kasai operation, the great majority of BA patients inevitably develop liver fibrosis, portal hypertension, and liver failure. Therefore, the investigation of fibrogenic progression in BA is undoubtedly important. Liver biopsy is considered a gold standard in diagnosing liver fibrosis and determining its severity. However, it is a painful and invasive procedure with infrequent but possible life-threatening complications [24]. Furthermore, there have been questions concerning the accuracy of liver biopsy, which is adversely affected sampling errors, intra- and inter-observer variability. These problems may result in false staging of liver fibrosis [25,22]. In an attempt to develop noninvasive methods for the assessment of liver fibrosis, transient elastography or FibroScan has emerged as the most promising tool in BA patients [26].\nTransient elastography or FibroScan (Echosens, Paris, France) is a novel, rapid, and non-invasive technique for measuring the degree of liver fibrosis, and it can be performed in the out-patient setting. The transducer probe creates mild amplitude and low frequency (50 Hz) vibration, which induces an elastic shear wave in the tissues underneath. A pulse-echo ultrasound is used to follow the propagation of the elastic shear wave and to measure its velocity, which is in direct proportion to tissue stiffness [25,22]. According to the equation E = ρV2 (E = Elastic modulus, V = Shear velocity, ρ = mass density), the stiffer the tissue is, the faster the wave can pass through it. 
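A worked illustration of the stiffness-velocity relation quoted above. The snippet follows the relation as written in the text (E = ρV²) with an assumed tissue density of about 1000 kg/m³; note that descriptions of transient elastography elsewhere often include a factor of 3 (E = 3ρVs²) for nearly incompressible tissue, which changes the numbers but not the qualitative point that stiffer tissue transmits the shear wave faster.

```python
# Worked illustration of the relation quoted in the text (E = rho * V^2),
# using an assumed soft-tissue density of ~1000 kg/m^3. The velocities below
# are hypothetical examples, not measured study values.
RHO_KG_M3 = 1000.0  # assumed density of soft tissue

def stiffness_kpa(shear_velocity_m_s: float) -> float:
    """Elastic modulus in kPa from shear-wave velocity, per the text's E = rho * V^2."""
    return RHO_KG_M3 * shear_velocity_m_s ** 2 / 1000.0  # Pa -> kPa

for v in (1.0, 2.2, 5.5):  # hypothetical shear-wave velocities in m/s
    print(f"V = {v} m/s -> E ~ {stiffness_kpa(v):.1f} kPa")
# 1.0 m/s -> 1.0 kPa; 2.2 m/s -> 4.8 kPa; 5.5 m/s -> 30.2 kPa
```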
FibroScan can measure liver stiffness in a volume of 100 times bigger than that obtained from liver biopsy and is, therefore, a better representative for the whole liver parenchyma [22].\nHepatic stellate cell activation followed by extracellular matrix production and accumulation is a major mechanism contributing to the development of liver fibrosis [6]. A number of cytokines are believed to play essential roles in this process and they become topics of research interest in an attempt to evaluate the use of serum cytokine as a biochemical marker of liver fibrosis. In the present study, we investigated the relationship of serum adiponectin with clinical outcomes and liver stiffness scores in postoperative BA patients.\nAdiponectin - also known as complement-related protein 30 (Acrp30), adipose most abundant gene transcript (apM1) and adipoQ - is mostly synthesized by adipose tissue [27]. Various animal models and clinical researches showed that adiponectin mediated anti-obesity, anti-atherosclerotic, and anti-inflammatory effects [28]. Direct effects on hepatocytes via a specific receptor (AdipoR2 receptor) and anti-inflammatory properties are partly mediated by its antagonism against TNF-α [29]. This raises the postulation on the potential hepatoprotective role of adiponectin against liver fibrosis and cirrhosis. However, the possible role of adiponectin in the pathogenesis of BA remains as yet unclear.\nThe present study showed that serum adiponectin levels were significantly elevated in BA patients compared with healthy controls. In addition, serum adiponectin levels were substantially higher in BA patients with persistent jaundice than those without jaundice. Subsequent analysis revealed that serum adiponectin was positively correlated with serum total bilirubin, suggesting that serum adiponectin was associated with jaundice status in BA patients. Furthermore, jaundice status in BA patients seems to be an indicator for intrahepatic biliary obstruction. Thus, these findings indicate that adiponectin may play a potential role in the pathogenesis of hepatocellular damage in BA.\nTo our knowledge, this study demonstrates for the first time increased serum adiponectin levels in BA patients. We also found that serum adiponectin positively correlated with AST, ALP, hyaluronic acid, and liver stiffness, but negatively correlated with serum albumin. These results support that serum adiponectin is associated with clinical outcomes (jaundice status, hepatic dysfunction, and liver fibrosis) in BA patients post Kasai operation. Elevated serum adiponectin has been documented in various liver diseases, including acute hepatitis, chronic hepatitis, liver cirrhosis, hepatocellular carcinoma, and primary biliary cirrhosis [11,30-32]. In agreement with our findings, Tacke and colleagues demonstrated that adiponectin was increased and associated with inflammation and hepatic damage in chronic liver disease [11]. Elevated adiponectin concentrations following bile duct ligation in mice and in human bile from cholestatic patients suggest that biliary secretion is involved in adiponectin clearance. High adiponectin levels in patients with liver cirrhosis correlated positively with the severity of cirrhosis and negatively with hepatic protein synthesis, indicating that adiponectin might be used as a marker for liver cell injury [11,16]. These findings suggest that elevated circulating adiponectin is related to hepatic damage and hence reflects liver dysfunction. 
Accordingly, our results also revealed that serum adiponectin was positively associated with degree of liver stiffness determined using FibroScan. Liver stiffness values are well correlated with advanced stages of hepatic fibrosis and cirrhosis in children [33]. Wolf and coworkers showed that mice received adiponectin before concanavalin A treatment developed less hepatic damage [12]. Adiponectin expression was upregulated in concanavalin A-induced liver failure. Therefore, adiponectin could play a possible role in the regulation of hepatic inflammation. Future studies on HMW form of adiponectin may help identify more pieces of the inflammatory jigsaw of BA; nevertheless, the challenge remains to piece them together to originate a rational solid hypothesis pertaining to their exact role.\nSeveral possible mechanisms may be responsible for the significant elevation of adiponectin in BA patients, particularly in those with a poor outcome. Increased serum adiponectin could be attributable to imbalance between adiponectin production and adiponectin clearance. In advance BA stages, reduced biliary clearance of adiponectin may plausibly contribute to elevated serum adiponectin levels. Moreover, extrahepatic organs can produce and secrete adiponectin in circulation. The higher adiponectin levels might be regarded as indicating hepatic injury and cholestasis in BA patients. Further clinical studies could render more valuable information on the pathophysiological roles of adiponectin in BA.\nIt should be noted, however, that there are some limitations in this study. Firstly, the sample size of participants was not large enough to arrive at definite conclusions. Secondly, we investigated only those subjects who attended King Chulalongkorn Memorial Hospital, a tertiary care center, for assessment or treatment of BA. As this investigation was designed as a cross-sectional study, therefore we could not determine the causal relationship between adiponectin and liver fibrosis. Other limitation would be the use of FibroScan instead of liver biopsy to evaluate the stage of liver fibrosis, which was not the definitive diagnosis. Furthermore, the sensitivity for significant fibrosis of 67% is not high enough to use Castera score as a screening tool of liver fibrosis in BA patients [22]. Prospective studies with a longitudinal design and hepatic expression of adiponectin would provide useful information on the role of adiponectin in hepatic fibrogenesis.", "This is the first study to demonstrate the elevation of serum adiponectin, hyaluronic acid, and liver stiffness values in BA patients. Serum adiponectin were correlated well with clinical parameters and the degree of liver fibrosis determined by FibroScan. Accordingly, serum adiponectin, hyaluronic acid, and transient elastrography could be used as non-invasive biomarkers reflecting the severity and progression of disease in BA patients post Kasai operation. Further studies will be needed to determine the precise role of adiponectin in the process of liver fibrogenesis.", "ALT: alanine aminotransferase; ALP: alkaline phosphatase; APRI: aspartate aminotransferase to platelets ratio index; AST: aspatate aminotransferase; BA: biliary atresia; DB: direct bilirubin; HSC: hepatic stellate cells; TNF-α: tumor necrosis factor-α; TB: total bilirubin; kPa: kilopascals; HMW: high-molecular-weight.", "The authors declare that they have no competing interests.", "SH and MC have conceived the study, analyzed the data, and have written the manuscript. 
AT, KP, and WU performed laboratory analysis. VC, PV, and YP were involved in the diagnosis and recruitment of cases. YP was responsible for the design of the study. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-230X/11/16/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study Population", "Laboratory methods", "Liver stiffness measurement", "Statistical analysis", "Results", "Comparisons between BA patients and healthy controls", "Comparisons between BA patients with and without persistent jaundice", "Discussion", "Conclusion", "Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Biliary atresia (BA) is a progressive, inflammatory, fibrosclerotic cholangiopathy resulting in complete obliteration of the extrahepatic bile ducts [1]. The obstruction of bile flow leads to worsening cholestasis, hepatic fibrosis, biliary cirrhosis, end-stage liver disease, and death within a few years [2]. Currently, Kasai operation or hepatoportoenterostomy constitutes the initial surgical treatment of choice for infants with BA. Although Kasai procedure can successfully establish bile flow to the gastrointestinal tract, a number of BA children progress to hepatic cirrhosis, portal hypertension and ultimately require liver transplantation [3]. To date, the etiology and pathogenesis of BA have not been completely understood; however, several mechanisms have been proposed including genetic defects, perinatal viral infections, morphogenic abnormalities, immune mediated bile duct injuries, and autoimmune disorders involving the bile ducts [4,5].\nBile duct inflammation, cytokine responses, and bile acid toxicity are three major contributors of liver parenchymal destruction and hepatic fibrosis in BA patients [2]. After hepatic stellate cells (HSC) are activated, these key effecter cells in hepatic fibrogenesis are transformed into extracellular matrix-producing myofibroblast. This process results in the production and the accumulation of collagen and other extracellular matrix in liver parenchyma, thus initiating and perpetuating the liver fibrosis [6,7]. Recent studies showed the role of adipokines in hepatic fibrogenesis of various chronic liver diseases [8]. In this study, we focused on a unique adipokine, adiponectin.\nAdiponectin, a 244 amino acid polypeptide, is the most abundant adipokine exclusively produced and secreted by adipocytes into systemic circulation in trimeric, hexameric, and larger multimeric high-molecular-weight (HMW) forms [9,10]. Adiponectin is structurally homologue to tumor necrosis factor-α (TNF-α); however, these two molecules antagonize each other's effects in the target organs [11]. Adiponectin exerts its anti-inflammatory effects through the reduction of pro-inflammatory cytokines release including TNF-α and interleukin-6, and inducing the expression of anti-inflammatory cytokines, such as interleukin-10 [12,13]. Adiponectin is also renowned for its anti-diabetic, anti-atherosclerotic, and anti-obesity effects. It is believed that adiponectin plays a protective role in liver diseases. In animal studies, adiponectin-knockout mice developed more severe carbon tetrachloride-induced liver fibrosis compared with wild type mice, and adiponectin injection prior to carbon tetrachloride treatment could prevent it [14]. In non-alcoholic obese mice, administration of recombinant adiponectin could attenuate hepatomegaly, hepatic steatosis, and aminotransferase abnormality [15]. Moreover, elevated adiponectin levels correlate positively with the severity of liver cirrhosis and negatively with hepatic protein synthesis [11,16]. In contrast, low adiponectin levels have been shown in non-alcoholic fatty liver disease [17]. Adiponectin levels correlate negatively with liver fat and hepatic insulin resistance in diabetic patients [18]. 
Adiponectin is currently a subject of research interest since it has the potential to be a useful marker for liver fibrosis, and a possible target for a new therapeutic approach.\nRecently, it has been reported that a number of cytokines and growth factors have been studied in BA patients including osteopontin [19], basic fibroblast growth factor [20], and stem cell factor [21]. To the best of our knowledge, there have been no published studied on serum adiponectin levels from various clinical stages of BA. This is the first study to evaluate the correlation of serum adiponectin, liver stiffness and clinical outcomes in postoperative BA. In the present study, we postulated that serum adiponectin could be associated with the severity of clinical outcomes and the liver stiffness in BA patients, and to prove this hypothesis, we analyzed serum adiponectin and liver stiffness in BA patients compared with healthy controls. Therefore, the purpose of this study was to determine serum adiponectin levels collected from BA patients and to examine the possible correlations of serum adiponectin and outcome parameters of postoperative BA patients.", "All parents of children were informed of the purpose of the study and of any interventions involved in this study. Written informed consents were obtained from participants' parents prior to the children entering the study. This study complied with the ethical guidelines of the 1975 Declaration of Helsinki, and was approved by the Institutional Review Board of the Faculty of Medicine, Chulalongkorn University.\n[SUBTITLE] Study Population [SUBSECTION] Sixty BA patients (32 girls and 28 boys with mean age of 9.6 ± 0.7 years) and 20 healthy children (10 girls and 10 boys with mean age of 10.1 ± 0.7 years) were recruited in this study. All BA patients had undergone hepatic portojejunostomy with Roux-en-Y reconstruction (original Kasai procedure), and they were generally in good health; no signs of suspected infection or bleeding abnormalities at the time of blood sampling. None of them had undergone liver transplantation. Healthy controls who attended the Well Baby Clinic at King Chulalongkorn Memorial hospital for vaccination had normal physical findings and no underlying disease. BA patients were classified into two groups according to serum total bilirubin (TB), serum alanine aminotransferase (ALT), and liver stiffness score. Based on their jaundice status, BA children were divided into a non-jaundice group (TB < 2 mg/dl) and a persistent jaundice group (TB ≥ 2 mg/dl). Subsequently, BA patients were categorized into a non-significant fibrosis group (liver stiffness < 7 kPa) and a significant fibrosis group (liver stiffness≥7 kPa). The cut-off point of liver stiffness score for significant fibrosis was based on the study by Castera L, et al. [22] with sensitivity of 67% and specificity of 89%.\nSixty BA patients (32 girls and 28 boys with mean age of 9.6 ± 0.7 years) and 20 healthy children (10 girls and 10 boys with mean age of 10.1 ± 0.7 years) were recruited in this study. All BA patients had undergone hepatic portojejunostomy with Roux-en-Y reconstruction (original Kasai procedure), and they were generally in good health; no signs of suspected infection or bleeding abnormalities at the time of blood sampling. None of them had undergone liver transplantation. Healthy controls who attended the Well Baby Clinic at King Chulalongkorn Memorial hospital for vaccination had normal physical findings and no underlying disease. 
BA patients were classified into two groups according to serum total bilirubin (TB), serum alanine aminotransferase (ALT), and liver stiffness score. Based on their jaundice status, BA children were divided into a non-jaundice group (TB < 2 mg/dl) and a persistent jaundice group (TB ≥ 2 mg/dl). Subsequently, BA patients were categorized into a non-significant fibrosis group (liver stiffness < 7 kPa) and a significant fibrosis group (liver stiffness≥7 kPa). The cut-off point of liver stiffness score for significant fibrosis was based on the study by Castera L, et al. [22] with sensitivity of 67% and specificity of 89%.\n[SUBTITLE] Laboratory methods [SUBSECTION] After overnight fast, samples of peripheral venous blood were collected in the morning from every participant, centrifuged for 15 min at 1000 × g, and stored immediately at -80°C for further analysis. Quantitative determination of adiponectin concentration in serum was performed using commercially available enzyme-linked immunosorbent assay (ELISA) (R&D Systems, Inc., Minneapolis, MN, USA). According to the manufacturer's protocol, 50 μl of recombinant human adiponectin standards and serum samples were pipetted into each well, which has been pre-coated with specific antibody for adiponectin. After incubating for 2 h at room temperature, every well was washed thoroughly with wash buffer for 4 times. 200 μl of a horseradish peroxidase-conjugated monoclonal antibody specific for adiponectin was then added to each well and incubated for a further 2 h at room temperature. After 4 washes, substrate solution was pipetted into the wells and then microplate was incubated for 30 min at room temperature with protection from light. Lastly, the reaction was stopped by the stop solution and the color intensity was measured with an automated microplate reader at 450 nm. The adiponectin concentration was determined by a standard optical density-concentration curve. Twofold serial dilutions of recombinant human adiponectin with a concentration of 3.9-250 ng/ml were used as standards. The manufacturer reported precision was 2.5-4.7% (intra-assay) and 5.8-6.9% (inter-assay). The sensitivity of this assay was 0.246 ng/ml.\nThe liver function tests including serum albumin, total bilirubin (TB), direct bilirubin (DB), aspatate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were measured using a Hitachi 912 automated machine at the central laboratory of our hospital. The aspartate aminotransferase to platelets ratio index (APRI) was calculated as follows: (AST/upper limit of normal) × 100/platelet count (109/l) [23].\nAfter overnight fast, samples of peripheral venous blood were collected in the morning from every participant, centrifuged for 15 min at 1000 × g, and stored immediately at -80°C for further analysis. Quantitative determination of adiponectin concentration in serum was performed using commercially available enzyme-linked immunosorbent assay (ELISA) (R&D Systems, Inc., Minneapolis, MN, USA). According to the manufacturer's protocol, 50 μl of recombinant human adiponectin standards and serum samples were pipetted into each well, which has been pre-coated with specific antibody for adiponectin. After incubating for 2 h at room temperature, every well was washed thoroughly with wash buffer for 4 times. 200 μl of a horseradish peroxidase-conjugated monoclonal antibody specific for adiponectin was then added to each well and incubated for a further 2 h at room temperature. 
After 4 washes, substrate solution was pipetted into the wells and then microplate was incubated for 30 min at room temperature with protection from light. Lastly, the reaction was stopped by the stop solution and the color intensity was measured with an automated microplate reader at 450 nm. The adiponectin concentration was determined by a standard optical density-concentration curve. Twofold serial dilutions of recombinant human adiponectin with a concentration of 3.9-250 ng/ml were used as standards. The manufacturer reported precision was 2.5-4.7% (intra-assay) and 5.8-6.9% (inter-assay). The sensitivity of this assay was 0.246 ng/ml.\nThe liver function tests including serum albumin, total bilirubin (TB), direct bilirubin (DB), aspatate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were measured using a Hitachi 912 automated machine at the central laboratory of our hospital. The aspartate aminotransferase to platelets ratio index (APRI) was calculated as follows: (AST/upper limit of normal) × 100/platelet count (109/l) [23].\n[SUBTITLE] Liver stiffness measurement [SUBSECTION] Transient elastography (FibroScan) measured the liver stiffness between 25 to 65 mm from the skin surface, which is approximately equivalent to the volume of a cylinder of 1 cm wide and 4 cm long. The measurements were performed by placing a transducer probe of FibroScan on the intercostal space at the area of right lobe of liver with patients lying in a dorsal decubitus position with maximal abduction of right arm. The target location for measurement was a liver portion that was at least 6 cm thick, and was free of major vascular structures. The measurements were performed until 10 validated results were achieved with a success rate of at least 80%. The median value of 10 validated scores was considered the elastic modulus of the liver, and it was expressed in kilopascals (kPa).\nTransient elastography (FibroScan) measured the liver stiffness between 25 to 65 mm from the skin surface, which is approximately equivalent to the volume of a cylinder of 1 cm wide and 4 cm long. The measurements were performed by placing a transducer probe of FibroScan on the intercostal space at the area of right lobe of liver with patients lying in a dorsal decubitus position with maximal abduction of right arm. The target location for measurement was a liver portion that was at least 6 cm thick, and was free of major vascular structures. The measurements were performed until 10 validated results were achieved with a success rate of at least 80%. The median value of 10 validated scores was considered the elastic modulus of the liver, and it was expressed in kilopascals (kPa).\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was performed using SPSS software version 16.0 for Windows. Mann-Whitney U test was used to compare the difference of serum adiponectin concentrations between groups. Correlation between serum adiponectin levels and other serological markers, and liver stiffness scores were calculated using Pearson's correlation coefficient (r). Data were expressed as mean ± SEM. P-values < 0.05 were considered to be statistically significant.\nStatistical analysis was performed using SPSS software version 16.0 for Windows. Mann-Whitney U test was used to compare the difference of serum adiponectin concentrations between groups. 
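The APRI formula quoted above is simple enough to compute directly. The following is a minimal sketch in Python (the study itself used SPSS, not code); the function name and the way the inputs are passed (AST together with the laboratory's upper limit of normal, and platelets in 10^9/L) are illustrative assumptions, not part of the original methods.

```python
def apri(ast_iu_per_l: float, ast_upper_limit_normal: float, platelets_10e9_per_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index (APRI).

    APRI = (AST / upper limit of normal) x 100 / platelet count (10^9/L),
    as described in the Laboratory methods section [23].
    """
    return (ast_iu_per_l / ast_upper_limit_normal) * 100.0 / platelets_10e9_per_l


# Illustrative values only (not taken from the study data):
# AST 80 IU/L, upper limit of normal 40 IU/L, platelets 150 x 10^9/L.
print(round(apri(80, 40, 150), 3))  # -> 1.333
```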
Correlation between serum adiponectin levels and other serological markers, and liver stiffness scores were calculated using Pearson's correlation coefficient (r). Data were expressed as mean ± SEM. P-values < 0.05 were considered to be statistically significant.", "Sixty BA patients (32 girls and 28 boys with mean age of 9.6 ± 0.7 years) and 20 healthy children (10 girls and 10 boys with mean age of 10.1 ± 0.7 years) were recruited in this study. All BA patients had undergone hepatic portojejunostomy with Roux-en-Y reconstruction (original Kasai procedure), and they were generally in good health; no signs of suspected infection or bleeding abnormalities at the time of blood sampling. None of them had undergone liver transplantation. Healthy controls who attended the Well Baby Clinic at King Chulalongkorn Memorial hospital for vaccination had normal physical findings and no underlying disease. BA patients were classified into two groups according to serum total bilirubin (TB), serum alanine aminotransferase (ALT), and liver stiffness score. Based on their jaundice status, BA children were divided into a non-jaundice group (TB < 2 mg/dl) and a persistent jaundice group (TB ≥ 2 mg/dl). Subsequently, BA patients were categorized into a non-significant fibrosis group (liver stiffness < 7 kPa) and a significant fibrosis group (liver stiffness≥7 kPa). The cut-off point of liver stiffness score for significant fibrosis was based on the study by Castera L, et al. [22] with sensitivity of 67% and specificity of 89%.", "After overnight fast, samples of peripheral venous blood were collected in the morning from every participant, centrifuged for 15 min at 1000 × g, and stored immediately at -80°C for further analysis. Quantitative determination of adiponectin concentration in serum was performed using commercially available enzyme-linked immunosorbent assay (ELISA) (R&D Systems, Inc., Minneapolis, MN, USA). According to the manufacturer's protocol, 50 μl of recombinant human adiponectin standards and serum samples were pipetted into each well, which has been pre-coated with specific antibody for adiponectin. After incubating for 2 h at room temperature, every well was washed thoroughly with wash buffer for 4 times. 200 μl of a horseradish peroxidase-conjugated monoclonal antibody specific for adiponectin was then added to each well and incubated for a further 2 h at room temperature. After 4 washes, substrate solution was pipetted into the wells and then microplate was incubated for 30 min at room temperature with protection from light. Lastly, the reaction was stopped by the stop solution and the color intensity was measured with an automated microplate reader at 450 nm. The adiponectin concentration was determined by a standard optical density-concentration curve. Twofold serial dilutions of recombinant human adiponectin with a concentration of 3.9-250 ng/ml were used as standards. The manufacturer reported precision was 2.5-4.7% (intra-assay) and 5.8-6.9% (inter-assay). The sensitivity of this assay was 0.246 ng/ml.\nThe liver function tests including serum albumin, total bilirubin (TB), direct bilirubin (DB), aspatate aminotransferase (AST), alanine aminotransferase (ALT), and alkaline phosphatase (ALP) were measured using a Hitachi 912 automated machine at the central laboratory of our hospital. 
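Concentrations in a sandwich ELISA such as the one described above are read off a standard optical density-concentration curve built from the twofold serial dilutions of recombinant adiponectin (3.9-250 ng/ml). Purely to illustrate that interpolation step - the kit software and the exact curve model used in the study are not stated - the sketch below fits nothing fancy and simply interpolates unknown samples on a log-log curve with NumPy; all optical densities and sample values are hypothetical.

```python
import numpy as np

# Hypothetical standard curve: twofold dilutions of the adiponectin standard
# (3.9-250 ng/ml) and their measured optical densities at 450 nm.
standard_conc_ng_ml = np.array([3.9, 7.8, 15.6, 31.25, 62.5, 125.0, 250.0])
standard_od_450nm = np.array([0.08, 0.15, 0.28, 0.52, 0.95, 1.70, 2.90])

def od_to_concentration(od_values):
    """Interpolate sample concentrations from the standard curve on a log-log scale."""
    od = np.asarray(od_values, dtype=float)
    log_conc = np.interp(np.log(od), np.log(standard_od_450nm), np.log(standard_conc_ng_ml))
    return np.exp(log_conc)

# Diluted serum samples whose ODs fall within the range of the standards.
sample_od = [0.40, 1.10]
print(od_to_concentration(sample_od))  # ng/ml, before applying any dilution factor
```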
The aspartate aminotransferase to platelets ratio index (APRI) was calculated as follows: (AST/upper limit of normal) × 100/platelet count (109/l) [23].", "Transient elastography (FibroScan) measured the liver stiffness between 25 to 65 mm from the skin surface, which is approximately equivalent to the volume of a cylinder of 1 cm wide and 4 cm long. The measurements were performed by placing a transducer probe of FibroScan on the intercostal space at the area of right lobe of liver with patients lying in a dorsal decubitus position with maximal abduction of right arm. The target location for measurement was a liver portion that was at least 6 cm thick, and was free of major vascular structures. The measurements were performed until 10 validated results were achieved with a success rate of at least 80%. The median value of 10 validated scores was considered the elastic modulus of the liver, and it was expressed in kilopascals (kPa).", "Statistical analysis was performed using SPSS software version 16.0 for Windows. Mann-Whitney U test was used to compare the difference of serum adiponectin concentrations between groups. Correlation between serum adiponectin levels and other serological markers, and liver stiffness scores were calculated using Pearson's correlation coefficient (r). Data were expressed as mean ± SEM. P-values < 0.05 were considered to be statistically significant.", "[SUBTITLE] Comparisons between BA patients and healthy controls [SUBSECTION] A total of 60 BA patients and 20 healthy controls were enrolled in this study. The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.\nA total of 60 BA patients and 20 healthy controls were enrolled in this study. The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. 
BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.\n[SUBTITLE] Comparisons between BA patients with and without persistent jaundice [SUBSECTION] We further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.\nWe further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. 
BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.", "A total of 60 BA patients and 20 healthy controls were enrolled in this study. The characteristics of participants in both groups were demonstrated in Table 1. Mean age and gender ratio in controls and BA patients were not different whereas serum adiponectin levels in BA patients were markedly elevated compared with those in controls (15.5 ± 1.1 vs. 11.1 ± 1.1 μg/ml, P = 0.03) (Figure 1). Furthermore, BA patients had significantly higher hyaluronic acid than controls (50.3 ± 7.1 vs. 23.9 ± 1.5 ng/ml, P = 0.03). Additionally, liver stiffness scores in BA patients were dramatically higher than those in controls (30.1 ± 3.0 vs. 5.1 ± 0.5 kPa, P < 0.001).\nDemographic data, biochemical characteristics, and liver stiffness scores of controls and biliary atresia patients.\nThe data was expressed as mean ± SEM. BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index; NA, not applicable\nComparison of serum adiponectin levels in biliary atresia patients and healthy controls. The data was expressed as mean ± SEM.", "We further categorized BA patients into jaundice (n = 20) and non-jaundice group (n = 40). As presented in Table 2, BA patients with jaundice had significantly higher serum bilirubin, AST, ALT, ALP, APRI and liver stiffness values compared to those without jaundice. Moreover, serum adiponectin levels in BA patients with persistent jaundice were greater than those in BA patients without jaundice (24.4 ± 1.4 vs. 11.0 ± 0.7 μg/ml, P < 0.001) (Figure 2). Similarly, BA patients with significant fibrosis (n = 44) possessed remarkably higher serum adiponectin than those with insignificant fibrosis (n = 16) (17.7 ± 1.2 vs. 9.4 ± 1.1 μg/ml, P < 0.001). We also found that BA patients with persistent jaundice had substantially higher levels of serum hyaluronic acid than those without jaundice (93.2 ± 16.7 vs. 28.9 ± 3.4 ng/ml, P < 0.001).\nComparison between biliary atresia patients without and with jaundice.\nThe data was expressed as mean ± SEM. 
BA, biliary atresia; BMI, body mass index; AST, aspartate aminotransferase; ALT, alanine aminotransferase; ALP, alkaline phosphatase; APRI, aspartate aminotransferase to platelets ratio index\nComparison of serum adiponectin levels in controls, biliary atresia patients without jaundice, and biliary atresia patients with jaundice. The data was expressed as mean ± SEM.\nFurther analysis demonstrated that serum adiponectin levels positively correlated with TB (r = 0.58, P < 0.001), serum hyaluronic acid (r = 0.46, P < 0.001), AST (r = 0.41, P = 0.001), ALP (r = 0.30, P = 0.02), and liver stiffness values (r = 0.60, P < 0.001). Conversely, serum levels of adiponectin inversely correlated with serum albumin (r = -0.65, P < 0.001). Correlations between serum adiponectin and total bilirubin, hyaluronic acid, liver stiffness, and serum albumin were shown in Figure 3.\nCorrelation analysis in biliary atresia patients. Serum adiponectin is correlated with (A) total bilirubin (r = 0.58, P < 0.001), (B) hyaluronic acid (r = 0.46, P < 0.001), (C) liver stiffness (r = 0.60, P < 0.001) and (D) albumin (r = -0.65, P < 0.001) in patients with biliary atresia.", "Biliary atresia is an intractable liver disorder affecting infants and children. Despite early diagnosis and successful Kasai operation, the great majority of BA patients inevitably develop liver fibrosis, portal hypertension, and liver failure. Therefore, the investigation of fibrogenic progression in BA is undoubtedly important. Liver biopsy is considered a gold standard in diagnosing liver fibrosis and determining its severity. However, it is a painful and invasive procedure with infrequent but possible life-threatening complications [24]. Furthermore, there have been questions concerning the accuracy of liver biopsy, which is adversely affected sampling errors, intra- and inter-observer variability. These problems may result in false staging of liver fibrosis [25,22]. In an attempt to develop noninvasive methods for the assessment of liver fibrosis, transient elastography or FibroScan has emerged as the most promising tool in BA patients [26].\nTransient elastography or FibroScan (Echosens, Paris, France) is a novel, rapid, and non-invasive technique for measuring the degree of liver fibrosis, and it can be performed in the out-patient setting. The transducer probe creates mild amplitude and low frequency (50 Hz) vibration, which induces an elastic shear wave in the tissues underneath. A pulse-echo ultrasound is used to follow the propagation of the elastic shear wave and to measure its velocity, which is in direct proportion to tissue stiffness [25,22]. According to the equation E = ρV2 (E = Elastic modulus, V = Shear velocity, ρ = mass density), the stiffer the tissue is, the faster the wave can pass through it. FibroScan can measure liver stiffness in a volume of 100 times bigger than that obtained from liver biopsy and is, therefore, a better representative for the whole liver parenchyma [22].\nHepatic stellate cell activation followed by extracellular matrix production and accumulation is a major mechanism contributing to the development of liver fibrosis [6]. A number of cytokines are believed to play essential roles in this process and they become topics of research interest in an attempt to evaluate the use of serum cytokine as a biochemical marker of liver fibrosis. 
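The stiffness value reported by the device follows from the relation E = ρV² quoted above, and the clinical read-out used in this study is the median of 10 validated acquisitions. A small sketch of that arithmetic is shown below; the assumed tissue density of about 1000 kg/m³ and the shear-wave velocities are illustrative assumptions, not values given in the text.

```python
from statistics import median

TISSUE_DENSITY_KG_M3 = 1000.0  # assumed approximate density of liver tissue (not stated in the text)

def elastic_modulus_kpa(shear_velocity_m_s: float) -> float:
    """E = rho * V^2, converted from Pa to kilopascals (kPa)."""
    return TISSUE_DENSITY_KG_M3 * shear_velocity_m_s ** 2 / 1000.0

# Ten hypothetical validated shear-wave velocities (m/s) from one examination.
velocities = [1.4, 1.5, 1.5, 1.6, 1.4, 1.5, 1.7, 1.5, 1.6, 1.5]
stiffness_values = [elastic_modulus_kpa(v) for v in velocities]

# The liver stiffness reported for the patient is the median of the 10 validated values.
print(round(median(stiffness_values), 1), "kPa")
```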
In the present study, we investigated the relationship of serum adiponectin with clinical outcomes and liver stiffness scores in postoperative BA patients.\nAdiponectin - also known as complement-related protein 30 (Acrp30), adipose most abundant gene transcript (apM1) and adipoQ - is mostly synthesized by adipose tissue [27]. Various animal models and clinical researches showed that adiponectin mediated anti-obesity, anti-atherosclerotic, and anti-inflammatory effects [28]. Direct effects on hepatocytes via a specific receptor (AdipoR2 receptor) and anti-inflammatory properties are partly mediated by its antagonism against TNF-α [29]. This raises the postulation on the potential hepatoprotective role of adiponectin against liver fibrosis and cirrhosis. However, the possible role of adiponectin in the pathogenesis of BA remains as yet unclear.\nThe present study showed that serum adiponectin levels were significantly elevated in BA patients compared with healthy controls. In addition, serum adiponectin levels were substantially higher in BA patients with persistent jaundice than those without jaundice. Subsequent analysis revealed that serum adiponectin was positively correlated with serum total bilirubin, suggesting that serum adiponectin was associated with jaundice status in BA patients. Furthermore, jaundice status in BA patients seems to be an indicator for intrahepatic biliary obstruction. Thus, these findings indicate that adiponectin may play a potential role in the pathogenesis of hepatocellular damage in BA.\nTo our knowledge, this study demonstrates for the first time increased serum adiponectin levels in BA patients. We also found that serum adiponectin positively correlated with AST, ALP, hyaluronic acid, and liver stiffness, but negatively correlated with serum albumin. These results support that serum adiponectin is associated with clinical outcomes (jaundice status, hepatic dysfunction, and liver fibrosis) in BA patients post Kasai operation. Elevated serum adiponectin has been documented in various liver diseases, including acute hepatitis, chronic hepatitis, liver cirrhosis, hepatocellular carcinoma, and primary biliary cirrhosis [11,30-32]. In agreement with our findings, Tacke and colleagues demonstrated that adiponectin was increased and associated with inflammation and hepatic damage in chronic liver disease [11]. Elevated adiponectin concentrations following bile duct ligation in mice and in human bile from cholestatic patients suggest that biliary secretion is involved in adiponectin clearance. High adiponectin levels in patients with liver cirrhosis correlated positively with the severity of cirrhosis and negatively with hepatic protein synthesis, indicating that adiponectin might be used as a marker for liver cell injury [11,16]. These findings suggest that elevated circulating adiponectin is related to hepatic damage and hence reflects liver dysfunction. Accordingly, our results also revealed that serum adiponectin was positively associated with degree of liver stiffness determined using FibroScan. Liver stiffness values are well correlated with advanced stages of hepatic fibrosis and cirrhosis in children [33]. Wolf and coworkers showed that mice received adiponectin before concanavalin A treatment developed less hepatic damage [12]. Adiponectin expression was upregulated in concanavalin A-induced liver failure. Therefore, adiponectin could play a possible role in the regulation of hepatic inflammation. 
Future studies on HMW form of adiponectin may help identify more pieces of the inflammatory jigsaw of BA; nevertheless, the challenge remains to piece them together to originate a rational solid hypothesis pertaining to their exact role.\nSeveral possible mechanisms may be responsible for the significant elevation of adiponectin in BA patients, particularly in those with a poor outcome. Increased serum adiponectin could be attributable to imbalance between adiponectin production and adiponectin clearance. In advance BA stages, reduced biliary clearance of adiponectin may plausibly contribute to elevated serum adiponectin levels. Moreover, extrahepatic organs can produce and secrete adiponectin in circulation. The higher adiponectin levels might be regarded as indicating hepatic injury and cholestasis in BA patients. Further clinical studies could render more valuable information on the pathophysiological roles of adiponectin in BA.\nIt should be noted, however, that there are some limitations in this study. Firstly, the sample size of participants was not large enough to arrive at definite conclusions. Secondly, we investigated only those subjects who attended King Chulalongkorn Memorial Hospital, a tertiary care center, for assessment or treatment of BA. As this investigation was designed as a cross-sectional study, therefore we could not determine the causal relationship between adiponectin and liver fibrosis. Other limitation would be the use of FibroScan instead of liver biopsy to evaluate the stage of liver fibrosis, which was not the definitive diagnosis. Furthermore, the sensitivity for significant fibrosis of 67% is not high enough to use Castera score as a screening tool of liver fibrosis in BA patients [22]. Prospective studies with a longitudinal design and hepatic expression of adiponectin would provide useful information on the role of adiponectin in hepatic fibrogenesis.", "This is the first study to demonstrate the elevation of serum adiponectin, hyaluronic acid, and liver stiffness values in BA patients. Serum adiponectin were correlated well with clinical parameters and the degree of liver fibrosis determined by FibroScan. Accordingly, serum adiponectin, hyaluronic acid, and transient elastrography could be used as non-invasive biomarkers reflecting the severity and progression of disease in BA patients post Kasai operation. Further studies will be needed to determine the precise role of adiponectin in the process of liver fibrogenesis.", "ALT: alanine aminotransferase; ALP: alkaline phosphatase; APRI: aspartate aminotransferase to platelets ratio index; AST: aspatate aminotransferase; BA: biliary atresia; DB: direct bilirubin; HSC: hepatic stellate cells; TNF-α: tumor necrosis factor-α; TB: total bilirubin; kPa: kilopascals; HMW: high-molecular-weight.", "The authors declare that they have no competing interests.", "SH and MC have conceived the study, analyzed the data, and have written the manuscript. AT, KP, and WU performed laboratory analysis. VC, PV, and YP were involved in the diagnosis and recruitment of cases. YP was responsible for the design of the study. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-230X/11/16/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Dynamic changes of serum soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) reflect sepsis severity and can predict prognosis: a prospective study.
21356122
We examined the utility of serum levels of soluble triggering receptor expressed on myeloid cells-1 (sTREM-1) for the diagnoses, severity assessments, and predicting the prognoses of patients with sepsis and compared sTREM-1 values with those of C-reactive protein (CRP) and procalcitonin (PCT).
BACKGROUND
Fifty-two patients with sepsis were included: 15 sepsis cases and 37 severe sepsis cases (severe sepsis + septic shock). Serum levels of sTREM-1, CRP, and PCT were determined on days 1, 3, 5, 7, 10, and 14 after admission to an ICU.
METHODS
Serum sTREM-1 levels of patients with severe sepsis were significantly higher than for those with sepsis on day 1 (240.6 pg/ml vs. 118.3 pg/ml; P < 0.01), but CRP and PCT levels were not significantly different between the two groups. The area under an ROC curve for sTREM-1 for severe sepsis patients was 0.823 (95% confidence interval: 0.690-0.957). Using 222.5 pg/ml of sTREM-1 as the cut-off value, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92, and the negative likelihood ratio was 0.434. Based on 28-day survivals, sTREM-1 levels in the surviving group showed a tendency to decrease over time, while they tended to gradually increase in the non-surviving group. sTREM-1 levels in the non-surviving group were higher than those in the surviving group at all time points, whereas CRP and PCT levels showed a tendency to decrease over time in both groups. sTREM-1 levels and Sequential Organ Failure Assessment (SOFA) scores were positively correlated (r = 0.443; P < 0.001), and this correlation coefficient was greater than the correlation coefficients for both CRP and PCT.
RESULTS
Serum sTREM-1 levels reflected the severity of sepsis more accurately than those of CRP and PCT and were more sensitive for dynamic evaluations of sepsis prognosis.
CONCLUSIONS
[ "Adult", "Aged", "Biomarkers", "C-Reactive Protein", "Calcitonin", "Calcitonin Gene-Related Peptide", "Female", "Humans", "Male", "Membrane Glycoproteins", "Middle Aged", "Predictive Value of Tests", "Prognosis", "Prospective Studies", "Protein Precursors", "Receptors, Immunologic", "Sensitivity and Specificity", "Sepsis", "Serum", "Severity of Illness Index", "Triggering Receptor Expressed on Myeloid Cells-1" ]
3056794
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] Between September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study. Patients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001). Between September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study. Patients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001). [SUBTITLE] Data collection [SUBSECTION] Demographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined. Demographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined. [SUBTITLE] Assays [SUBSECTION] sTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). 
CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). All assays were performed according to the manufacturer's instructions. sTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). All assays were performed according to the manufacturer's instructions. [SUBTITLE] Statistical analysis [SUBSECTION] Quantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. Statistical analysis used SPSS Statistics 17.0. Quantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. Statistical analysis used SPSS Statistics 17.0.
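The analyses described above were run in SPSS Statistics 17.0. Purely as an illustration of the same battery of tests, the sketch below shows how they could be reproduced with SciPy; every array is a randomly generated placeholder, not study data, and the variable names are assumptions made for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder data for two groups (e.g., sepsis, n = 15, vs. severe sepsis, n = 37).
age_a, age_b = rng.normal(55, 18, 15), rng.normal(58, 19, 37)                   # normally distributed
strem1_a, strem1_b = rng.lognormal(4.8, 0.6, 15), rng.lognormal(5.4, 0.6, 37)   # skewed biomarker
gender_table = np.array([[9, 6], [22, 15]])                                     # males/females per group
sofa = rng.integers(1, 15, 52)
strem1_all = rng.lognormal(5.0, 0.7, 52)

# Student's t-test for normally distributed variables (age, APACHE II, temperature, WBC).
t_stat, t_p = stats.ttest_ind(age_a, age_b)

# Mann-Whitney U test for non-normally distributed variables (sTREM-1, CRP, PCT, SOFA).
u_stat, u_p = stats.mannwhitneyu(strem1_a, strem1_b, alternative="two-sided")

# Chi-square test for proportions (e.g., gender distribution between groups).
chi2, chi_p, dof, expected = stats.chi2_contingency(gender_table)

# Spearman correlation between SOFA scores and a biomarker.
rho, rho_p = stats.spearmanr(sofa, strem1_all)

print(t_p, u_p, chi_p, rho, rho_p)
```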
null
null
null
null
[ "Background", "Subjects", "Data collection", "Assays", "Statistical analysis", "Results", "Patient Characteristics", "Comparisons of initial sTREM-1, CRP and PCT levels", "Dynamic changes of sTREM-1, CRP and PCT levels", "Associations between SOFA scores and sTREM-1, CRP and PCT levels", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Sepsis is the most important cause of morbidity and mortality in the intensive care unit; however, sepsis lacks specific clinical manifestations. Thus, it is highly desirable to find sensitive and specific indicators of infection that can be easily collected, that accurately reflect infection severity and prognosis and are clinically important. Current common clinical indicators of infection include pyrexia, white blood cell counts, C-reactive protein (CRP) and procalcitonin (PCT).\nTriggering receptor expressed on myeloid cells-1 (TREM-1), discovered by Bouchon et al. in 2000 [1], is a member of the immunoglobulin superfamily of receptors that is specifically expressed on the surfaces of monocytes and neutrophils. TREM-1 expression is increased in infectious diseases and is associated with the release of soluble TREM-1 (sTREM-1). One study by Gibot et al. [2] demonstrated that the value of plasma sTREM-1 levels as an indicator of sepsis was superior to CRP and PCT, although other studies reported that the value of sTREM-1 for diagnosing sepsis was inferior to CRP and PCT [3-5].\nThe purpose of this study was to track changes in serum sTREM-1, CRP and PCT levels in patients with sepsis and to compare the predictive values of these three factors for assessing sepsis and establishing prognosis.", "Between September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study.\nPatients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001).", "Demographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined.", "sTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). 
All assays were performed according to the manufacturer's instructions.", "Quantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. Statistical analysis used SPSS Statistics 17.0.", "[SUBTITLE] Patient Characteristics [SUBSECTION] Fifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).\nFifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).\n[SUBTITLE] Comparisons of initial sTREM-1, CRP and PCT levels [SUBSECTION] Serum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. 
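The cut-off of 222.5 pg/ml reported above was selected at the maximum Youden Index, and the accompanying sensitivity, specificity, predictive values and likelihood ratios follow from the resulting 2 × 2 table. The sketch below illustrates those two calculations with scikit-learn and NumPy; the labels and scores are synthetic stand-ins, not the study data, and the thresholds it finds will differ from the published cut-off.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)

# Synthetic example: 1 = severe sepsis, 0 = sepsis; scores loosely mimic sTREM-1 (pg/ml).
y_true = np.array([1] * 37 + [0] * 15)
scores = np.concatenate([rng.normal(260, 80, 37), rng.normal(130, 50, 15)])

# ROC curve and area under the curve.
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)

# Youden Index J = sensitivity + specificity - 1 = TPR - FPR; take the maximising threshold.
youden = tpr - fpr
cutoff = thresholds[np.argmax(youden)]

# Diagnostic metrics at that cut-off from the 2 x 2 table.
pred = scores >= cutoff
tp = np.sum(pred & (y_true == 1)); fn = np.sum(~pred & (y_true == 1))
fp = np.sum(pred & (y_true == 0)); tn = np.sum(~pred & (y_true == 0))
sens, spec = tp / (tp + fn), tn / (tn + fp)
ppv, npv = tp / (tp + fp), tn / (tn + fn)
lr_pos = sens / (1 - spec) if spec < 1 else float("inf")
lr_neg = (1 - sens) / spec

print(f"AUC={auc:.3f} cutoff={cutoff:.1f} sens={sens:.3f} spec={spec:.3f} "
      f"PPV={ppv:.3f} NPV={npv:.3f} LR+={lr_pos:.2f} LR-={lr_neg:.2f}")
```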
Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.\nSerum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.\n[SUBTITLE] Dynamic changes of sTREM-1, CRP and PCT levels [SUBSECTION] To assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). 
Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.\nTo assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.\n[SUBTITLE] Associations between SOFA scores and sTREM-1, CRP and PCT levels [SUBSECTION] SOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 
9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).\nSOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).", "Fifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).", "Serum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). 
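The SOFA-marker associations reported above are Spearman rank correlations, with PCT log-transformed as stated. A short sketch of that calculation follows; the paired observations are hypothetical placeholders. Note that because Spearman's coefficient is rank-based, a strictly monotonic log transform of PCT leaves the coefficient unchanged and mainly matters for plotting.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical paired observations (e.g. pooled across sampling days)
sofa = rng.integers(0, 18, 200).astype(float)
strem1 = 100 + 15 * sofa + rng.normal(0, 80, 200)
crp = 60 + 4 * sofa + rng.normal(0, 60, 200)
pct = np.exp(0.15 * sofa + rng.normal(0, 1.0, 200))   # strictly positive and skewed

for name, marker in [("sTREM-1", strem1), ("CRP", crp), ("log10 PCT", np.log10(pct))]:
    r, p = stats.spearmanr(sofa, marker)
    print(f"SOFA vs {name}: r = {r:.3f}, p = {p:.3g}")
```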
These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.", "To assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.", "SOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. 
Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).", "During the course of sepsis, TREM-1 amplifies infection-induced inflammatory response signals primarily through the mediation of adapter protein DAP12 on the cell surface. sTREM-1 is the soluble form of TREM-1 that lacks the transmembrane and intracellular domains. These two domains are cleaved from TREM-1 on the membrane surface by proteolysis [8]. In 2009, a meta-analysis [9] found that the sensitivity of sTREM-1 for diagnosing bacterial infection was 0.82 (95% confidence interval [CI]: 0.68-0.90) and the specificity was 0.86 (95% CI: 0.77-0.91). These high values suggested that sTREM-1 was a reasonably reliable indicator for diagnosing bacterial infections.\nCompared with previous studies, our study found that sTREM-1 was a valuable tool for diagnosing severe sepsis (severe sepsis + septic shock) with organ dysfunction. sTREM-1 levels in the severe sepsis group were significantly higher than those in the sepsis group starting at day 1 of ICU admission. Thus, sTREM-1 measurements may be useful for an early diagnosis of severe sepsis and timely intervention. There were no significant differences for either CRP or PCT levels between the sepsis and severe sepsis groups, implying that serum sTREM-1 was superior to CRP and PCT for diagnosing severe sepsis.\nIn our study, the sensitivity and specificity of sTREM-1 for diagnosing severe sepsis were 59.5% and 93.3%, respectively. The low sensitivity value may have been related to the small sample size, and follow-up studies should include larger samples. SOFA scores had a slightly higher sensitivity with a lower specificity. Therefore, combining both sTREM-1 levels and SOFA scores may be more valuable for diagnosing severe sepsis compared with any single indicator.\nIn our study, patients with more severe conditions had higher SOFA scores. Additionally, SOFA scores in the non-surviving group tended to increase over time, while they decreased gradually in the surviving group. Thus, SOFA scores can dynamically reflect an improvement in a patient's condition, or even the progression of sepsis. Consistent with the results of Dimopoulou et al. [10], we also found that serum sTREM-1 levels were positively correlated with the SOFA scores. Compared with CRP and PCT, sTREM-1 had a higher correlation with SOFA scores, suggesting that sTREM-1 was more closely related to disease conditions than both CRP and PCT.\nDuring the 14-day observation period, sTREM-1 levels in the non-surviving group increased gradually over time, whereas sTREM-1 levels tended to gradually decrease in the surviving group. This indicates that the inflammatory indicator sTREM-1 may be associated with sepsis prognosis. 
That is, a progressively decreasing expression of sTREM-1 indicates that the infection-induced inflammatory responses are being controlled and that a patient's prognosis is good.\nsTREM-1 is primarily produced by the hydrolysis and shedding of membrane-bound TREM-1. Thus, progressive increases of sTREM-1 levels indicate that the total expression of TREM-1 continuously increases and that more pro-inflammatory cytokines and mediators are being released in the body. TREM-1 levels are further increased via positive feedback mechanisms, suggesting the persistence or progression of excessive inflammatory responses and poor prognoses.\nDifferent from sTREM-1 levels, CRP and PCT levels in the surviving group were lower than those in the non-surviving group, indicating that patients with higher expressions of CRP and PCT may have poorer prognoses (although CRP and PCT levels tended to decrease in both groups over time). Overall, these results suggest that dynamic changes of sTREM-1 may better reflect the body's state of inflammatory response and sepsis severity simultaneously making sTREM-1 superior to CRP and PCT. Progressive increases in serum sTREM-1 levels is an indicator of a poor prognosis.\nThe changing trends for sTREM-1, CRP and PCT identified in the current study were consistent with those in previous studies [11,12]. Nonetheless, it remains debatable whether or not initial sTREM-1 levels are directly related to prognosis. Gibot et al. [11] found that initial sTREM-1 levels in a non-surviving group were lower than those in a surviving group, and suggested that prognosis would be poorer for patients with lower initial sTREM-1 levels. One possible reason for this finding is a hypothesis that sTREM-1 may compete with membrane-bound TREM-1 for ligand binding and thereby attenuate the transmission of infectious signals from membrane-bound TREM-1 into cells. Thus, the release of pro-inflammatory cytokines and mediators may be reduced and excessive inflammatory reactions and injury may be abrogated. Another possible reason is that inhibitory DAP12 receptors are present on the cell membrane that can bind with sTREM-1. DAP12 receptors negatively regulate TLR signalling pathways, thereby preventing excessive inflammatory responses. Thus, sTREM-1 may have certain anti-inflammatory and protective effects [13]. Patients with low levels of sTREM-1 are prone to excessive inflammatory responses and have poor prognoses.\nOur results are consistent with those reported by Giamarellos-Bourboulis et al. [12] who reported that initial sTREM-1 levels in a non-surviving group were higher than those in a surviving group. There are several possible reasons for this observation. First, sTREM-1 is produced when membrane-bound TREM-1 is cleaved from the cell surface. High expression levels of sTREM-1 reflect the high expression of membrane-bound TREM-1. The initial disease conditions were more severe in the non-surviving group and systemic inflammatory reactions were obvious. The expression of membrane-bound TREM-1 increased and the inflammatory reaction caused injury to the body's cells and tissues resulting in the production of endogenous inflammatory pathogenic factors and an increase in the expression of TREM-1. As a result, the initial sTREM-1 levels in the non-surviving group were higher than those in the surviving group. Second, although sTREM-1 may have some anti-inflammatory effects, infections were not well controlled in the non-surviving group. 
As a result, inflammatory responses overwhelmed the body's compensatory anti-inflammatory capacities. This excessive inflammatory response was persistent resulting in a continuous high expression of membrane-bound TREM-1 and an increase in the total amount of sTREM-1 produced by its hydrolysis. Third, because some of our patients developed sepsis at other hospitals and were transferred to the ICU due to poor therapeutic effects, the observed initial sTREM-1 levels might not actually have been the levels at the onset of sepsis. Instead, the measured levels might actually have been at a later stage in the non-surviving group, which were higher than those measured in the surviving group. Finally, the small sample size for this study could also have influenced the results.\nAlthough the relationship between initial sTREM-1 levels and prognosis remains controversial, multiple studies had similar findings showing that patients with progressively decreasing sTREM-1 levels had better prognoses, whereas patients with progressively increasing sTREM-1 levels had poorer prognoses.", "In summary, sTREM-1 levels can reflect sepsis severity more accurately than CRP and PCT and the dynamic changes of sTREM-1 are more sensitive for predicting prognosis. However, the sample size for this study was quite small and larger studies are needed.", "The authors declare that they have no competing interests.", "JZ designed the study, carried it out, performed the data analysis and wrote the first draft of the manuscript. DS and YJ conceived the initial idea for using sTREM-1 levels for infectious diseases and supplemented the study design. DF guided the data analysis and the use of medical statistics. LX was responsible for protocol revisions, data analysis and final draft revision. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/53/prepub\n" ]
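The Results above also report within-group changes in the three markers across days 1, 3, 5, 7, 10 and 14, but the longitudinal test used is not named. One conventional non-parametric option for repeated measurements at fixed time points is the Friedman test; the sketch below illustrates that option only, is not the authors' stated method, and uses hypothetical data restricted to patients with complete follow-up, which the test requires.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Hypothetical sTREM-1 trajectories (pg/ml) for 30 survivors with complete follow-up:
# one row per patient, one column per sampling day (1, 3, 5, 7, 10, 14)
days = np.array([1, 3, 5, 7, 10, 14])
baseline = rng.lognormal(5.0, 0.4, size=30)
trend = np.exp(-0.08 * days)                      # survivors: levels drift downwards
noise = rng.lognormal(0.0, 0.2, size=(30, days.size))
strem1 = baseline[:, None] * trend[None, :] * noise

# Friedman test: do the repeated measurements differ systematically across days?
stat, p = stats.friedmanchisquare(*(strem1[:, j] for j in range(days.size)))
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```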
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Subjects", "Data collection", "Assays", "Statistical analysis", "Results", "Patient Characteristics", "Comparisons of initial sTREM-1, CRP and PCT levels", "Dynamic changes of sTREM-1, CRP and PCT levels", "Associations between SOFA scores and sTREM-1, CRP and PCT levels", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Sepsis is the most important cause of morbidity and mortality in the intensive care unit; however, sepsis lacks specific clinical manifestations. Thus, it is highly desirable to find sensitive and specific indicators of infection that can be easily collected, that accurately reflect infection severity and prognosis and are clinically important. Current common clinical indicators of infection include pyrexia, white blood cell counts, C-reactive protein (CRP) and procalcitonin (PCT).\nTriggering receptor expressed on myeloid cells-1 (TREM-1), discovered by Bouchon et al. in 2000 [1], is a member of the immunoglobulin superfamily of receptors that is specifically expressed on the surfaces of monocytes and neutrophils. TREM-1 expression is increased in infectious diseases and is associated with the release of soluble TREM-1 (sTREM-1). One study by Gibot et al. [2] demonstrated that the value of plasma sTREM-1 levels as an indicator of sepsis was superior to CRP and PCT, although other studies reported that the value of sTREM-1 for diagnosing sepsis was inferior to CRP and PCT [3-5].\nThe purpose of this study was to track changes in serum sTREM-1, CRP and PCT levels in patients with sepsis and to compare the predictive values of these three factors for assessing sepsis and establishing prognosis.", "[SUBTITLE] Subjects [SUBSECTION] Between September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study.\nPatients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001).\nBetween September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study.\nPatients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. 
This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001).\n[SUBTITLE] Data collection [SUBSECTION] Demographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined.\nDemographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined.\n[SUBTITLE] Assays [SUBSECTION] sTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). All assays were performed according to the manufacturer's instructions.\nsTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). All assays were performed according to the manufacturer's instructions.\n[SUBTITLE] Statistical analysis [SUBSECTION] Quantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. Statistical analysis used SPSS Statistics 17.0.\nQuantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. 
Statistical analysis used SPSS Statistics 17.0.", "Between September 2009 and March 2010, inpatients were included who were in the intensive care units (ICU) of the Department of Respiratory Disease, the Emergency Department, and the Department of Surgery of the Chinese People's Liberation Army General Hospital. These patients were diagnosed with sepsis, severe sepsis, or septic shock according to the 1991 ACCP/SCCM Joint Meeting [6] and by the diagnostic criteria developed at the 2001 International Sepsis Definition Conference [7]. Patients were excluded if they were < 18 years old, died within 24 hours of admission, had neutropenia (< 500 neutrophils/mm3), had an acquired immunodeficiency syndrome, or refused to participate in this study.\nPatients were divided into a sepsis group and a severe sepsis group (severe sepsis + septic shock), and additional analysis was based on 28-day survivals for a surviving group (≥ 28 days survival) and those who died (< 28 days survival). Patients or their family members were fully informed and signed informed consent forms. This study was approved by the Ethics Committee of the Chinese PLA General Hospital (project number 20090923-001).", "Demographic and disease data of patients included age, gender, chief complaint for admission, vital signs, routine blood test results, liver and kidney functions, coagulation indicators, Acute Physiologic Assessment and Chronic Health Evaluation (APACHE) II scores, and Sequential Organ Failure Assessment (SOFA) scores. These were recorded on days 1, 3, 5, 7, 10, and 14. Serum was collected at these same time points and sTREM-1, CRP and PCT levels were determined.", "sTREM-1 was determined using a double antibody sandwich ELISA (Quantikine Human TREM-1 Immunoassay ELISA Kit, R & D Systems, Minneapolis, MN, USA, product number DTRM10B). CRP was determined using scattering turbidimetry (CardioPhase hsCRP, Siemens, Germany) and PCT was measured using an enzyme-linked fluorescence analysis kit (ELFA, VIDAS BRAHMS PCT kit, bioMerieux SA, France). All assays were performed according to the manufacturer's instructions.", "Quantitative data with normal distributions, including age, APACHE II scores, body temperature, and white blood cell counts (WBC), are given as means ± standard deviations (SD). Student's t-test was used to compare means between two groups. Quantitative data that were not normally distributed, including sTREM-1, CRP, PCT and SOFA scores, were summarized as medians (interquartile ranges) and compared by non-parametric tests (Mann-Whitney U test). Proportions were used to express qualitative data and the differences in proportions between groups were compared using a chi-square test. Spearman correlation coefficients were used to assess associations between SOFA scores and sTREM-1, CRP and PCT levels. Statistical analysis used SPSS Statistics 17.0.", "[SUBTITLE] Patient Characteristics [SUBSECTION] Fifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). 
However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).\nFifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).\n[SUBTITLE] Comparisons of initial sTREM-1, CRP and PCT levels [SUBSECTION] Serum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.\nSerum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. 
Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.\n[SUBTITLE] Dynamic changes of sTREM-1, CRP and PCT levels [SUBSECTION] To assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.\nTo assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. 
For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.\n[SUBTITLE] Associations between SOFA scores and sTREM-1, CRP and PCT levels [SUBSECTION] SOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).\nSOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. 
Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).", "Fifty-two patients with sepsis were included in this study. There were 15 cases with sepsis and 37 cases with severe sepsis. Thirty-one patients were male (59.6%) and the mean patient age was 56 ± 19 years. Initial sites of infection were the lungs (50%), blood (23%), and abdomen (13%). Patients' ages and day 1 temperatures were not significantly different between the two groups (P > 0.05). However, the APACHE II scores and white blood cell counts in the severe sepsis group were higher than those of the sepsis group (P = 0.006 and P = 0.003, respectively).", "Serum sTREM-1 levels of patients in the severe sepsis group were significantly higher than those in the sepsis group on day 1 (240.6 pg/ml vs. 118.3 pg/ml, P < 0.01), but there were no significant differences in CRP or PCT levels between the two groups. SOFA scores of the severe sepsis group were significantly higher than those of the sepsis group (6 vs. 3; P < 0.001; Table 1).\nSerum sTREM-1, CRP, PCT levels and SOFA scores on day of enrollment.\n*: Data are presented as median (interquartile range).\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for the two groups were constructed based on statistically significant differences (Figure 1) for calculating the areas under the ROC curves (Table 2). These results showed that combining both sTREM-1 and SOFA scores was better than any single indicator for diagnosing severe sepsis. We used the maximum Youden Index (YI) to select cut-off criteria values. Using a cut-off value for sTREM-1 of 222.5 pg/ml, the sensitivity was 59.5%, the specificity was 93.3%, the positive predictive value was 95.6%, the negative predictive value was 48.3%, the positive likelihood ratio was 8.92 and the negative likelihood ratio was 0.434 for severe sepsis diagnosis. Using a cut-off value for SOFA scores of 3.5, the sensitivity was 78.4% and the specificity was 66.7% for severe sepsis diagnosis.\nROC curves for APACHE II scores, WBCs, SOFA scores and sTREM-1 levels for distinguishing severe sepsis from sepsis.\nArea under ROC curve on diagnosing severe sepsis.", "To assess dynamic changes in serum levels of serum sTREM-1, CRP and PCT levels, patients were divided into either a surviving group (≥ 28 days survival) or a non-surviving group (< 28 days survival) based on 28-day survivals (Table 3). The number of patients in the non-surviving group decreased with increasing mortality. Thus, the actual numbers of patients in the non-surviving group were 16, 16, 14, 14, 12, and 9 on days 1, 3, 5, 7, 10, and 14, respectively. 
For patients who died within 14 days of admission, the last measurements obtained before these patients died were used for each of the time points after a patient's death.\nComparison of patient demographics and clinical data between the survival group (≥ 28 days) and the non-survival group (< 28 days) based on 28-day survival.\n*: Data are presented as mean ± standard deviation, number, or median (interquartile range).\nMedian serum sTREM-1, CRP and PCT levels determined on days 1, 3, 5, 7, 10, and 14, and were compared between the surviving and non-surviving groups (Figure 2). Serum sTREM-1, CRP and PCT levels in the non-surviving group tended to be higher than those in the surviving group. In particular, serum sTREM-1, CRP and PCT levels in the non-surviving group were higher than those in the surviving group on days 1, 3, 5, and 7, although the differences were not statistically significant between the two groups (P > 0.05). Serum sTREM-1, CRP and PCT levels in the non-surviving group were, however, significantly higher than those in the surviving group on days 10 and 14 (P < 0.05).\nSerum levels of (A) sTREM-1, (B) CRP, and (C) PCT measured over 14 days in patients diagnosed with sepsis based on 28-day survival.\nLongitudinally, serum sTREM-1, CRP and PCT levels in the surviving group showed a tendency to decrease over time (P < 0.001). In the non-surviving group, the sTREM-1 levels tended to gradually increase with time, but the changes in sTREM-1 levels were not statistically significant (P = 0.222). In contrast, CRP and PCT levels tended to decrease over time, especially within the first 3 days.\nPatients with higher serum levels of sTREM-1, CRP and PCT had poorer prognoses. Compared with the tendency for changing levels of CRP and PCT, the gradual increase of sTREM-1 levels was a better reflection of the progression of sepsis, which was of greater value for predicting death.", "SOFA scores in the surviving group were lower than those in the non-surviving group on day 1 (4.0 vs. 9.5; Z = -3.387; P = 0.001). SOFA scores in the surviving group gradually decreased as the course of the disease progressed. There was no apparent decrease in SOFA scores for the non-surviving group. Rather, SOFA scores in the non-surviving group showed a tendency to increase during the last days (Figure 3), suggesting that SOFA scores were closely related to the severity and prognosis of sepsis.\nComparison of SOFA scores between patients in the surviving and non-surviving groups.\nAs shown in Figure 4 Spearman correlation analysis was used evaluate the associations between SOFA scores and serum sTREM-1, CRP and PCT (logarithmic values of PCT were used for this analysis). The correlation coefficients (r) were 0.443, 0.257, and 0.406, respectively (P < 0.001).\nCorrelation between SOFA scores and serum (A) sTREM-1, (B) CRP, and (C) PCT levels. The Spearman correlation coefficients (r) were 0.443, 0.257, and 0.406 (P < 0.001) between SOFA and serum sTREM-1, CRP, and PCT respectively (the logarithmic value of PCT was used for analysis).", "During the course of sepsis, TREM-1 amplifies infection-induced inflammatory response signals primarily through the mediation of adapter protein DAP12 on the cell surface. sTREM-1 is the soluble form of TREM-1 that lacks the transmembrane and intracellular domains. These two domains are cleaved from TREM-1 on the membrane surface by proteolysis [8]. 
In 2009, a meta-analysis [9] found that the sensitivity of sTREM-1 for diagnosing bacterial infection was 0.82 (95% confidence interval [CI]: 0.68-0.90) and the specificity was 0.86 (95% CI: 0.77-0.91). These high values suggested that sTREM-1 was a reasonably reliable indicator for diagnosing bacterial infections.\nCompared with previous studies, our study found that sTREM-1 was a valuable tool for diagnosing severe sepsis (severe sepsis + septic shock) with organ dysfunction. sTREM-1 levels in the severe sepsis group were significantly higher than those in the sepsis group starting at day 1 of ICU admission. Thus, sTREM-1 measurements may be useful for an early diagnosis of severe sepsis and timely intervention. There were no significant differences for either CRP or PCT levels between the sepsis and severe sepsis groups, implying that serum sTREM-1 was superior to CRP and PCT for diagnosing severe sepsis.\nIn our study, the sensitivity and specificity of sTREM-1 for diagnosing severe sepsis were 59.5% and 93.3%, respectively. The low sensitivity value may have been related to the small sample size, and follow-up studies should include larger samples. SOFA scores had a slightly higher sensitivity with a lower specificity. Therefore, combining both sTREM-1 levels and SOFA scores may be more valuable for diagnosing severe sepsis compared with any single indicator.\nIn our study, patients with more severe conditions had higher SOFA scores. Additionally, SOFA scores in the non-surviving group tended to increase over time, while they decreased gradually in the surviving group. Thus, SOFA scores can dynamically reflect an improvement in a patient's condition, or even the progression of sepsis. Consistent with the results of Dimopoulou et al. [10], we also found that serum sTREM-1 levels were positively correlated with the SOFA scores. Compared with CRP and PCT, sTREM-1 had a higher correlation with SOFA scores, suggesting that sTREM-1 was more closely related to disease conditions than both CRP and PCT.\nDuring the 14-day observation period, sTREM-1 levels in the non-surviving group increased gradually over time, whereas sTREM-1 levels tended to gradually decrease in the surviving group. This indicates that the inflammatory indicator sTREM-1 may be associated with sepsis prognosis. That is, a progressively decreasing expression of sTREM-1 indicates that the infection-induced inflammatory responses are being controlled and that a patient's prognosis is good.\nsTREM-1 is primarily produced by the hydrolysis and shedding of membrane-bound TREM-1. Thus, progressive increases of sTREM-1 levels indicate that the total expression of TREM-1 continuously increases and that more pro-inflammatory cytokines and mediators are being released in the body. TREM-1 levels are further increased via positive feedback mechanisms, suggesting the persistence or progression of excessive inflammatory responses and poor prognoses.\nDifferent from sTREM-1 levels, CRP and PCT levels in the surviving group were lower than those in the non-surviving group, indicating that patients with higher expressions of CRP and PCT may have poorer prognoses (although CRP and PCT levels tended to decrease in both groups over time). Overall, these results suggest that dynamic changes of sTREM-1 may better reflect the body's state of inflammatory response and sepsis severity simultaneously making sTREM-1 superior to CRP and PCT. 
Progressive increases in serum sTREM-1 levels is an indicator of a poor prognosis.\nThe changing trends for sTREM-1, CRP and PCT identified in the current study were consistent with those in previous studies [11,12]. Nonetheless, it remains debatable whether or not initial sTREM-1 levels are directly related to prognosis. Gibot et al. [11] found that initial sTREM-1 levels in a non-surviving group were lower than those in a surviving group, and suggested that prognosis would be poorer for patients with lower initial sTREM-1 levels. One possible reason for this finding is a hypothesis that sTREM-1 may compete with membrane-bound TREM-1 for ligand binding and thereby attenuate the transmission of infectious signals from membrane-bound TREM-1 into cells. Thus, the release of pro-inflammatory cytokines and mediators may be reduced and excessive inflammatory reactions and injury may be abrogated. Another possible reason is that inhibitory DAP12 receptors are present on the cell membrane that can bind with sTREM-1. DAP12 receptors negatively regulate TLR signalling pathways, thereby preventing excessive inflammatory responses. Thus, sTREM-1 may have certain anti-inflammatory and protective effects [13]. Patients with low levels of sTREM-1 are prone to excessive inflammatory responses and have poor prognoses.\nOur results are consistent with those reported by Giamarellos-Bourboulis et al. [12] who reported that initial sTREM-1 levels in a non-surviving group were higher than those in a surviving group. There are several possible reasons for this observation. First, sTREM-1 is produced when membrane-bound TREM-1 is cleaved from the cell surface. High expression levels of sTREM-1 reflect the high expression of membrane-bound TREM-1. The initial disease conditions were more severe in the non-surviving group and systemic inflammatory reactions were obvious. The expression of membrane-bound TREM-1 increased and the inflammatory reaction caused injury to the body's cells and tissues resulting in the production of endogenous inflammatory pathogenic factors and an increase in the expression of TREM-1. As a result, the initial sTREM-1 levels in the non-surviving group were higher than those in the surviving group. Second, although sTREM-1 may have some anti-inflammatory effects, infections were not well controlled in the non-surviving group. As a result, inflammatory responses overwhelmed the body's compensatory anti-inflammatory capacities. This excessive inflammatory response was persistent resulting in a continuous high expression of membrane-bound TREM-1 and an increase in the total amount of sTREM-1 produced by its hydrolysis. Third, because some of our patients developed sepsis at other hospitals and were transferred to the ICU due to poor therapeutic effects, the observed initial sTREM-1 levels might not actually have been the levels at the onset of sepsis. Instead, the measured levels might actually have been at a later stage in the non-surviving group, which were higher than those measured in the surviving group. 
Finally, the small sample size for this study could also have influenced the results.\nAlthough the relationship between initial sTREM-1 levels and prognosis remains controversial, multiple studies had similar findings showing that patients with progressively decreasing sTREM-1 levels had better prognoses, whereas patients with progressively increasing sTREM-1 levels had poorer prognoses.", "In summary, sTREM-1 levels can reflect sepsis severity more accurately than CRP and PCT and the dynamic changes of sTREM-1 are more sensitive for predicting prognosis. However, the sample size for this study was quite small and larger studies are needed.", "The authors declare that they have no competing interests.", "JZ designed the study, carried it out, performed the data analysis and wrote the first draft of the manuscript. DS and YJ conceived the initial idea for using sTREM-1 levels for infectious diseases and supplemented the study design. DF guided the data analysis and the use of medical statistics. LX was responsible for protocol revisions, data analysis and final draft revision. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/53/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Reactive strategies for containing developing outbreaks of pandemic influenza.
21356128
In 2009 and the early part of 2010, the northern hemisphere had to cope with the first waves of the new influenza A (H1N1) pandemic. Despite high-profile vaccination campaigns in many countries, delays in administration of vaccination programs were common, and high vaccination coverage levels were not achieved. This experience suggests the need to explore the epidemiological and economic effectiveness of additional, reactive strategies for combating pandemic influenza.
BACKGROUND
We use a stochastic model of pandemic influenza to investigate realistic strategies that can be used in reaction to developing outbreaks. The model is calibrated to documented illness attack rates and basic reproductive number (R0) estimates, and constructed to represent a typical mid-sized North American city.
METHODS
Our model predicts an average illness attack rate of 34.1% in the absence of intervention, with total costs associated with morbidity and mortality of US$81 million for such a city. Attack rates and economic costs can be reduced to 5.4% and US$37 million, respectively, when low-coverage reactive vaccination and limited antiviral use are combined with practical, minimally disruptive social distancing strategies, including short-term, as-needed closure of individual schools, even when vaccine supply-chain-related delays occur. Results improve with increasing vaccination coverage and higher vaccine efficacy.
RESULTS
Such combination strategies can be substantially more effective than vaccination alone from epidemiological and economic standpoints, and warrant strong consideration by public health authorities when reacting to future outbreaks of pandemic influenza.
CONCLUSIONS
[ "Adolescent", "Adult", "Child", "Child, Preschool", "Disease Outbreaks", "Female", "Humans", "Immunization Programs", "Incidence", "Infant", "Infant, Newborn", "Influenza A Virus, H1N1 Subtype", "Influenza, Human", "Male", "Middle Aged", "Ontario", "Pandemics", "Prevalence", "Preventive Health Services", "Stochastic Processes", "Young Adult" ]
3317583
null
null
Methods
[SUBTITLE] The simulation model [SUBSECTION] We developed a portable and adaptable stochastic, individual-level simulation model of influenza spread within a structured population. The simulator is similar to models developed by Longini et al. [7,16]. The simulation population of 649,565 people was generated stochastically to represent a typical North American city, namely, Hamilton (Ontario), Canada, which was chosen due to availability of demographic and epidemiological data necessary for constructing and calibrating the simulator. Our population is a collection of heterogeneous individuals with various attributes that impact whom they interact with (and hence whom they may infect or get infected by). More specifically, each individual has the following stochastically generated attributes: age, household, playgroup or daycare attended (for pre-school children), school attended (for school-age children), workgroup (for working adults), household census tract and workplace census subdivision, community, and neighborhood. As in [16], a community consists of approximately 2000 people living within the same census tract, and a neighborhood consists of approximately 500 people living within proximity to each other within the same community; also see the recent papers [17] and [18], which incorporate more-detailed individual-level behavior involving larger populations. Age and household-size distributions, shown in Figures 1 and 2, were matched to 2001 Canadian census data [19,20]. Household census tract assignments were made so that census tract population sizes were consistent with 2006 census statistics [21]. Workgroups were formed to match 2006 employment statistics [22] as well as census statistics on the geographical distribution of workers [23]. Rather than representing entire workplace institutions, we formed workgroups of size 20 to represent the typical number of co-workers an individual is likely to have close contact with during the day. Average playgroup, daycare, and lower and upper secondary school (i.e., middle and high school) contact group sizes were chosen for similar reasons; see the Appendix. (Figure 1: Age distribution for simulated population. Figure 2: Household size distribution for simulated population.) Susceptible people are assumed to have daily contacts with other individuals in their contact groups, i.e., their household and school or workgroups, as well as with people in their neighborhood and community. Infection of susceptibles depends on the number of infected persons in their contact groups, on the vaccine and antiviral-use status of susceptibles and their infectious contacts, and on age- and contact-group-specific per-contact transmission probabilities (Table 1). This disease transmission model is based on previously described models [7,16], and is detailed in the Appendix. People infected with influenza first pass through a latent / incubation period, during which they do not have influenza symptoms. They are not infectious until the last day of the period; at that point, they become half as infectious as if they were to develop symptoms in the subsequent period. During that subsequent infectious period, 67% will develop influenza symptoms and 33% will be asymptomatic (and will be half as infectious as those who are symptomatic) [7]. The model allows for people to withdraw from all of their mixing groups, except the household, if they become infected or have an infected child. (Table 1: Per-contact influenza infection transmission probabilities within contact groups. Table notes: 1. Within households, the probability that a symptomatic child (age 18 years or less) infects a susceptible child is 0.8; that a symptomatic child infects a susceptible adult (at least 19 years old), or that a symptomatic adult infects a susceptible child, is 0.3; that a symptomatic adult infects a susceptible adult is 0.4 [16]. 2. Probability that a susceptible person in the age or school group is infected through contact with a symptomatic person in the group.) The simulator is calibrated to match documented illness attack rates and basic reproductive numbers (R0). Baseline (no-intervention) scenario age-group-specific attack rates were derived using 2009 estimates for the H1N1 basic reproductive number in Ontario [14,24,25] (see Table 2). These rates take into account reduced risk in adults born prior to 1957 [24]. A compartmental model parameterized in this way was well-calibrated to observed attack rates during the Fall pandemic wave in Ontario [25]. The simulator’s R0 value of 1.4 is also consistent with other published reports [4,26,27]. (Table 2: Age-group-specific H1N1 influenza illness attack rates in Ontario, Canada, 2009, and calibrated attack rates. Table note: 1. See the discussion in [25].)
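To make the transmission step described above concrete, the following minimal Python sketch computes a susceptible person's daily infection probability within one contact group as one minus the product of per-contact escape probabilities, with asymptomatic contacts half as infectious and efficacies against susceptibility and infectiousness applied multiplicatively. This is an illustration only, not the authors' C++ implementation; the function name, the multiplicative combination of efficacies, and the example per-contact probability of 0.04 are our assumptions (the Table 1 values are not reproduced here).

```python
import random

def infection_probability(base_p, infectious_contacts, ve_s=0.0):
    """Daily probability that a susceptible person is infected within one contact group.

    base_p is the group's per-contact transmission probability (Table 1). Each contact
    is a dict; asymptomatic contacts are half as infectious, and efficacies against
    infectiousness (ve_i for vaccine, ave_i for antivirals) scale that contact's
    transmission, while ve_s is the susceptible person's own current efficacy against
    susceptibility. Escape probabilities multiply across all infectious contacts.
    """
    escape = 1.0
    for c in infectious_contacts:
        p = base_p * (1.0 - ve_s) * (1.0 - c.get("ve_i", 0.0)) * (1.0 - c.get("ave_i", 0.0))
        if not c.get("symptomatic", True):
            p *= 0.5  # asymptomatic cases are half as infectious
        escape *= 1.0 - p
    return 1.0 - escape

# Example: three symptomatic, unvaccinated classmates and a hypothetical
# per-contact school transmission probability of 0.04.
random.seed(1)
p_today = infection_probability(0.04, [{"symptomatic": True}] * 3)
is_infected_today = random.random() < p_today
```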
[SUBTITLE] Intervention strategies [SUBSECTION] We modeled a baseline case where no intervention takes place, along with strategies representing various combinations of vaccination, antiviral treatment and household prophylaxis, school closure, and general social distancing (see the results in Tables 3 and 4 and Supplementary Data Table S1 provided in “Additional File 1”). Each component of the strategies is described in detail below. Interventions are triggered in a particular simulation run when the overall illness attack rate reaches 0.01%. Twenty runs of the simulator were performed for each intervention strategy, from which average illness attack rates were calculated. We briefly describe the interventions under consideration.
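Before detailing the individual interventions, a minimal illustration of the trigger and averaging logic just described; the names, and the assumption that the trigger is evaluated on cumulative symptomatic cases, are ours.

```python
POPULATION_SIZE = 649_565
TRIGGER_ATTACK_RATE = 0.0001  # interventions start once 0.01% of the population has been ill (~65 people)

def interventions_triggered(cumulative_cases: int) -> bool:
    """True once the overall illness attack rate in a run reaches the trigger threshold."""
    return cumulative_cases / POPULATION_SIZE >= TRIGGER_ATTACK_RATE

def mean_attack_rate(final_case_counts: list) -> float:
    """Average overall illness attack rate across the 20 runs performed per strategy."""
    return sum(n / POPULATION_SIZE for n in final_case_counts) / len(final_case_counts)
```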
[SUBTITLE] Vaccination [SUBSECTION] We model both pre-vaccination as well as reactive strategies, with reactive vaccination programs beginning immediately, 30 days, or 60 days after the trigger. The delays model disruptions in vaccine production and supply chains. We allow enough doses to cover either 35% or 70% of the population. In reactive strategies, we consider cases where (i) all vaccines become available at the same time, and (ii) the doses become available in three equal-sized batches, two weeks apart, due to additional production and supply-chain disruptions. We study a low-efficacy single-dose vaccine (efficacy against susceptibility to infection, VEs = 0.3, and efficacy against infectiousness, VEi = 0.2) as well as a moderate-efficacy vaccine (VEs = 0.4, VEi = 0.5) [28]. Vaccine efficacy refers to the reduction, after vaccination, in the probability of becoming infected due to contact with an infected person (VEs), or to the reduction, after vaccination, in the probability of infecting a susceptible contact (VEi). Vaccine efficacy does not refer to the fraction of individuals having an immunogenic response to the vaccine (which is typically much larger than our measures). Each day, our model randomly vaccinates any remaining unvaccinated individuals who are either uninfected or in the latent or asymptomatic phases of infection, all with equal probability based on the number of available doses. Moreover, protection from the vaccine builds over time, with 50% of the vaccine’s efficacy realized upon vaccination, and full protection after two weeks.
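The sketch below illustrates the daily dose allocation and the build-up of protection described above. The text specifies only the two endpoints of the ramp (50% of full efficacy on the day of vaccination, full protection at two weeks); the linear interpolation between them, and all names, are our assumptions.

```python
import random

def current_efficacy(days_since_vaccination: int, full_efficacy: float) -> float:
    """Protection starts at 50% of the vaccine's efficacy and is complete by day 14;
    a linear ramp between those two points is assumed here."""
    return full_efficacy * (0.5 + 0.5 * min(days_since_vaccination, 14) / 14)

def allocate_daily_doses(population, doses_today, day, rng=random):
    """Randomly vaccinate eligible people (unvaccinated and either uninfected or in the
    latent/asymptomatic phases), each with equal probability, up to today's supply."""
    eligible = [p for p in population
                if not p["vaccinated"]
                and p["status"] in ("susceptible", "latent", "asymptomatic")]
    for person in rng.sample(eligible, min(doses_today, len(eligible))):
        person["vaccinated"] = True
        person["vaccination_day"] = day

# Example with the low-efficacy vaccine (VEs = 0.3): half protection on day 0, full by day 14.
assert abs(current_efficacy(0, 0.3) - 0.15) < 1e-9
assert abs(current_efficacy(14, 0.3) - 0.30) < 1e-9
```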
[SUBTITLE] Antiviral treatment and household prophylaxis [SUBSECTION] We investigate strategies involving treatment of infected individuals with a five-day course of antivirals, as well as strategies that also allow for ten-day prophylaxis of the infected individuals’ household members. We assume that 1% of individuals do not complete their course. We use an antiviral efficacy against susceptibility (AVEs) of 0.3 and against infectiousness (AVEi) of 0.7 [16]. Individuals receive direct benefit from antivirals only while they are taking them. Antiviral use is considered alone and in combination with other intervention strategies. It is assumed that antiviral courses are available for 10% of the population and that they are distributed to infected individuals and their household members until the supply is exhausted.
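A minimal sketch of the stockpile logic above, assuming courses are handed out in the order cases are identified; the class and method names are illustrative rather than the authors' implementation.

```python
class AntiviralStockpile:
    """Courses for 10% of the population, dispensed to newly identified cases
    (five-day treatment) and, in prophylaxis scenarios, to their household members
    (ten-day prophylaxis) until the supply is exhausted."""

    def __init__(self, population_size, coverage=0.10):
        self.courses_left = int(coverage * population_size)

    def dispense(self, case, household, prophylaxis=True):
        given = []
        if self.courses_left > 0:
            self.courses_left -= 1
            given.append((case, 5))            # 5-day treatment course for the case
        if prophylaxis:
            for member in household:
                if member is case or self.courses_left == 0:
                    continue
                self.courses_left -= 1
                given.append((member, 10))     # 10-day prophylaxis course
        return given

stockpile = AntiviralStockpile(population_size=649_565)  # 64,956 courses available
```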
[SUBTITLE] School closure and social distancing [SUBSECTION] We implement a rolling school closure model, where a daycare or school closes for five days if five or more cases are identified in that group. Given that infected individuals are on average infectious for 4.1 days (see Figure 3), closing schools for fewer than 5 days is unlikely to be very effective. It is possible for these groups to close more than once during the simulation. We also model a reduction in workplace and general community contacts of 20% (i.e., 20% of infected individuals in each contact group will not infect other members of the group). This represents the exercise of a general level of caution, including a modest limitation of contacts within workgroups (e.g., by invoking occasional telecommuting and other self-limiting behaviors, holding fewer large meetings, etc.) and also within the general community (e.g., reduction in attendance in social groups and larger community events, etc.). (Figure 3: Simulation flowchart and modeled influenza natural history.)
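The rolling closure rule and the contact reduction can be sketched as follows. This is a simplified illustration: the text does not say how identified case counts are reset after a closure, so the reset behaviour and the names used here are assumptions.

```python
from collections import defaultdict

CLOSURE_TRIGGER_CASES = 5            # close a daycare/school once 5 cases are identified in it
CLOSURE_LENGTH_DAYS = 5              # short, as-needed closures; a group may close repeatedly
COMMUNITY_CONTACT_REDUCTION = 0.20   # 20% of infected individuals removed from work/community mixing

class SchoolClosureTracker:
    def __init__(self):
        self.case_count = defaultdict(int)    # cases identified per group since its last closure
        self.closed_until = defaultdict(int)  # simulation day on which each group reopens

    def report_case(self, group_id, today):
        self.case_count[group_id] += 1
        if self.case_count[group_id] >= CLOSURE_TRIGGER_CASES and today >= self.closed_until[group_id]:
            self.closed_until[group_id] = today + CLOSURE_LENGTH_DAYS
            self.case_count[group_id] = 0     # allow the same group to close again later

    def is_open(self, group_id, today):
        return today >= self.closed_until[group_id]
```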
[SUBTITLE] Economic cost estimation [SUBSECTION] We determine economic costs associated with the influenza outbreaks and modeled intervention strategies using methods described by Meltzer et al. [29]. We include medical spending due to illness, costs of antivirals and vaccines, and costs associated with teachers and other working adults staying home due to their own illness, illness of dependent children, or due to school closure. Medical spending includes co-payments and net payments for outpatient visits and hospitalization, as well as prescription and over-the-counter medications for influenza and complications or secondary infections. Costs are stratified by age-group and by low- or high-risk status of individuals with respect to complications of influenza. We also include the present value of earnings lost due to premature mortality. Cost estimates and probabilities of risk status and of complications and death were taken from Meltzer et al. [29], with costs inflated using 2008 consumer price index and medical price index estimates [30-33]. These costs are combined with the data on age-specific attack rates, utilized vaccination doses, and days of school closure obtained from our simulation model. Details of the cost calculations are given in the Appendix.
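A simplified sketch of how these pieces combine into a total cost. The decomposition and parameter names below are ours, and all unit costs are inputs; in the paper they are taken from Meltzer et al. [29] and inflated to 2008 dollars.

```python
def total_outbreak_cost(cases_by_age, cost_per_case, doses_used, cost_per_dose,
                        courses_used, cost_per_course,
                        caregiver_days_lost_to_closures, value_of_workday):
    """Combine simulation outputs (age-specific case counts, vaccine doses and antiviral
    courses used, person-days of parental work lost to school closures) with per-unit
    costs. cost_per_case[a] is assumed here to already average over risk status and to
    include outpatient/hospital care, medications, work loss, and the present value of
    earnings lost to premature mortality for age group a."""
    illness_cost = sum(cases_by_age[a] * cost_per_case[a] for a in cases_by_age)
    intervention_cost = doses_used * cost_per_dose + courses_used * cost_per_course
    closure_cost = caregiver_days_lost_to_closures * value_of_workday
    return illness_cost + intervention_cost + closure_cost
```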
null
null
null
null
[ "Background", "The simulation model", "Intervention strategies", "Vaccination", "Antiviral treatment and household prophylaxis", "School closure and social distancing", "Economic cost estimation", "Results", "Discussion", "Conclusions", "Appendix", "Simulation model", "Population structure", "Influenza transmission model", "Economic cost calculations", "Authors' contributions", "Competing interests" ]
[ "In April, 2009, the World Health Organization (WHO) announced the emergence of a new influenza A (H1N1) virus, and on June 11, 2009, it declared that the world was at the start of a new influenza pandemic [1]. WHO reported more than 414,000 laboratory-confirmed cases of H1N1 [2] — a gross underestimate, as many countries simply stopped counting individual cases. The US Centers for Disease Control and Prevention reported widespread influenza activity in forty-six states, with influenza-like illness (ILI) activity in October 2009 higher than what is seen during the peak of many regular flu seasons; and further, “Almost all of the influenza viruses identified … are 2009 H1N1 influenza A viruses” [3]. Countries found themselves in the position of having to react to contain already developing Fall outbreaks of influenza due to the new pandemic strain, a position they are likely to find themselves in again if and when future waves of pandemic influenza occur.\nResearch has suggested that mass vaccination of 60–70% of the population prior to the start of the flu season could effectively contain outbreaks due to pandemic strains [4-7]; and the public health preparedness plans of most countries have, accordingly, emphasized vaccination intervention strategies. However, the recent experience with H1N1 suggests that high vaccination coverage levels are difficult to achieve. In the case of H1N1, vaccination programs in most northern hemisphere countries started only after the virus was widely circulating. Furthermore, in some countries, supplies of vaccine were limited [8], delivery and administration occurred over a period of several months [9,10], and there were reports of public skepticism regarding the necessity and safety of vaccination [11,12], all of which were strong indicators suggesting that high vaccination coverage would be difficult to achieve. While many institutions in the US and elsewhere strongly encouraged and, in some cases, required workers to be vaccinated against seasonal influenza in 2009, H1N1 vaccination guidelines were focused mostly on people in certain age and high-risk groups [13]. Delays, limited and untimely vaccination supplies, and public reluctance to be vaccinated are likely to reduce the effectiveness of vaccination campaigns [4,5].\nThe issues outlined above for the recent outbreak of H1N1 are likely to occur again in future outbreaks of pandemic influenza. In this paper, we explore the effectiveness of realistic reactive intervention strategies implemented after the beginning of outbreaks of pandemic influenza. We calibrate our model based on data for the H1N1 pandemic (see Tuite et al. [14]), and we investigate the impacts of (i) the moderate vaccination coverage levels which, based on past experience, are likely to be realized, as well as high levels which would be more ideal; (ii) very limited treatment of cases with antivirals and prophylaxis of cases’ households with antivirals; and (iii) limited and practical social distancing measures such as five-day closure of individual schools on an as-needed basis, encouragement of liberal leave policies in the workplace, and encouragement of self-isolation. Intervention strategies that combine these approaches are also studied (cf. Halloran et al. [15]). 
For all intervention strategies, we provide cost estimates associated with morbidity and mortality that take into account direct medical costs as well as economic consequences resulting from school closures and work loss.", "We developed a portable and adaptable stochastic, individual-level simulation model of influenza spread within a structured population. The simulator is similar to models developed by Longini et al. [7,16]. The simulation population of 649,565 people was generated stochastically to represent a typical North American city, namely, Hamilton (Ontario), Canada, which was chosen due to availability of demographic and epidemiological data necessary for constructing and calibrating the simulator. Our population is a collection of heterogeneous individuals with various attributes that impact whom they interact with (and hence whom they may infect or get infected by). More specifically, each individual has the following stochastically generated attributes: age, household, playgroup or daycare attended (for pre-school children), school attended (for school-age children), workgroup (for working adults), household census tract and workplace census subdivision, community, and neighborhood. As in [16], a community consists of approximately 2000 people living within the same census tract, and a neighborhood consists of approximately 500 people living within proximity to each other within the same community; also see the recent papers [17] and [18], which incorporate more-detailed individual-level behavior involving larger populations. Age and household-size distributions, shown in Figures 1 and 2, were matched to 2001 Canadian census data [19,20]. Household census tract assignments were made so that census tract population sizes were consistent with 2006 census statistics [21]. Workgroups were formed to match 2006 employment statistics [22] as well as census statistics on the geographical distribution of workers [23]. Rather than representing entire workplace institutions, we formed workgroups of size 20 to represent the typical number of co-workers an individual is likely to have close contact with during the day. Average playgroup, daycare, and lower and upper secondary school (i.e., middle and high school) contact group sizes were chosen for similar reasons; see the Appendix.\nAge distribution for simulated population\nHousehold size distribution for simulated population\nSusceptible people are assumed to have daily contacts with other individuals in their contact groups, i.e., their household and school or workgroups, as well as with people in their neighborhood and community. Infection of susceptibles depends on the number of infected persons in their contact groups, on the vaccine and antiviral-use status of susceptibles and their infectious contacts, and on age- and contact-group-specific per-contact transmission probabilities (Table 1). This disease transmission model is based on previously described models [7,16], and is detailed in the Appendix. People infected with influenza first pass through a latent / incubation period, during which they do not have influenza symptoms. They are not infectious until the last day of the period; at that point, they become half as infectious as if they were to develop symptoms in the subsequent period. During that subsequent infectious period, 67% will develop influenza symptoms and 33% will be asymptomatic (and will be half as infectious as those who are symptomatic) [7]. 
The model allows for people to withdraw from all of their mixing groups, except the household, if they become infected or have an infected child.\nPer-contact influenza infection transmission probabilities within contact groups\n1. Within households, the probability that a symptomatic child (age 18 years or less) infects a susceptible child is 0.8; that a symptomatic child infects a susceptible adult (at least 19 years old), or that a symptomatic adult infects a susceptible child, is 0.3; that a symptomatic adult infects a susceptible adult is 0.4 [16].\n2. Probability that a susceptible person in the age or school group is infected through contact with a symptomatic person in the group.\nThe simulator is calibrated to match documented illness attack rates and basic reproductive numbers (R0). Baseline (no-intervention) scenario age-group-specific attack rates were derived using 2009 estimates for the H1N1 basic reproductive number in Ontario [14,24,25] (see Table 2). These rates take into account reduced risk in adults born prior to 1957 [24]. A compartmental model parameterized in this way was well-calibrated to observed attack rates during the Fall pandemic wave in Ontario [25]. The simulator’s R0 value of 1.4 is also consistent with other published reports [4,26,27].\nAge-group-specific H1N1 influenza illness attack rates in Ontario, Canada, 2009, and calibrated attack rates\n1. See the discussion in [25].", "We modeled a baseline case where no intervention takes place, along with strategies representing various combinations of vaccination, antiviral treatment and household prophylaxis, school closure, and general social distancing (see the results in Tables 3 and 4 and Supplementary Data Table S1 provided in “Additional File 1”). Each component of the strategies is described in detail below. Interventions are triggered in a particular simulation run when the overall illness attack rate reaches 0.01%. Twenty runs of the simulator were performed for each intervention strategy, from which average illness attack rates were calculated. We briefly describe the interventions under consideration.\n[SUBTITLE] Vaccination [SUBSECTION] We model both pre-vaccination as well as reactive strategies, with reactive vaccination programs beginning immediately, 30 days, or 60 days after the trigger. The delays model disruptions in vaccine production and supply chains. We allow enough doses to cover either 35% or 70% of the population. In reactive strategies, we consider cases where (i) all vaccines become available at the same time, and (ii) the doses become available in three equal-sized batches, two weeks apart, due to additional production and supply-chain disruptions. We study a low-efficacy single-dose vaccine (efficacy against susceptibility to infection, VEs = 0.3, and efficacy against infectiousness, VEi = 0.2) as well as a moderate-efficacy vaccine (VEs = 0.4, VEi = 0.5) [28]. Vaccine efficacy refers to the reduction, after vaccination, in the probability of becoming infected due to contact with an infected person (VEs), or to the reduction, after vaccination, in the probability of infecting a susceptible contact (VEi). 
Vaccine efficacy does not refer to the fraction of individuals having an immunogenic response to the vaccine (which is typically much larger than our measures).\nEach day, our model randomly vaccinates any remaining unvaccinated individuals who are either uninfected or in the latent or asymptomatic phases of infection, all with equal probability based on the number of available doses. Moreover, protection from the vaccine builds over time, with 50% of the vaccine’s efficacy realized upon vaccination, and full protection after two weeks.\n[SUBTITLE] Antiviral treatment and household prophylaxis [SUBSECTION] We investigate strategies involving treatment of infected individuals with a five-day course of antivirals, as well as strategies that also allow for ten-day prophylaxis of the infected individuals’ household members. We assume that 1% of individuals do not complete their course. We use an antiviral efficacy against susceptibility (AVEs) of 0.3 and against infectiousness (AVEi) of 0.7 [16]. Individuals receive direct benefit from antivirals only while they are taking them. Antiviral use is considered alone and in combination with other intervention strategies. It is assumed that antiviral courses are available for 10% of the population and that they are distributed to infected individuals and their household members until the supply is exhausted.\n[SUBTITLE] School closure and social distancing [SUBSECTION] We implement a rolling school closure model, where a daycare or school closes for five days if five or more cases are identified in that group. Given that infected individuals are on average infectious for 4.1 days (see Figure 3), closing schools for fewer than 5 days is unlikely to be very effective. It is possible for these groups to close more than once during the simulation. We also model a reduction in workplace and general community contacts of 20% (i.e., 20% of infected individuals in each contact group will not infect other members of the group). This represents the exercise of a general level of caution, including a modest limitation of contacts within workgroups (e.g., by invoking occasional telecommuting and other self-limiting behaviors, holding fewer large meetings, etc.) and also within the general community (e.g., reduction in attendance in social groups and larger community events, etc.).\nSimulation flowchart and modeled influenza natural history", "We model both pre-vaccination as well as reactive strategies, with reactive vaccination programs beginning immediately, 30 days, or 60 days after the trigger. The delays model disruptions in vaccine production and supply chains. We allow enough doses to cover either 35% or 70% of the population. In reactive strategies, we consider cases where (i) all vaccines become available at the same time, and (ii) the doses become available in three equal-sized batches, two weeks apart, due to additional production and supply-chain disruptions. We study a low-efficacy single-dose vaccine (efficacy against susceptibility to infection, VEs = 0.3, and efficacy against infectiousness, VEi = 0.2) as well as a moderate-efficacy vaccine (VEs = 0.4, VEi = 0.5) [28]. Vaccine efficacy refers to the reduction, after vaccination, in the probability of becoming infected due to contact with an infected person (VEs), or to the reduction, after vaccination, in the probability of infecting a susceptible contact (VEi). 
Vaccine efficacy does not refer to the fraction of individuals having an immunogenic response to the vaccine (which is typically much larger than our measures).\nEach day, our model randomly vaccinates any remaining unvaccinated individuals who are either uninfected or in the latent or asymptomatic phases of infection, all with equal probability based on the number of available doses. Moreover, protection from the vaccine builds over time, with 50% of the vaccine’s efficacy realized upon vaccination, and full protection after two weeks.", "We investigate strategies involving treatment of infected individuals with a five-day course of antivirals, as well as strategies that also allow for ten-day prophylaxis of the infected individuals’ household members. We assume that 1% of individuals do not complete their course. We use an antiviral efficacy against susceptibility (AVEs) of 0.3 and against infectiousness (AVEi) of 0.7 [16]. Individuals receive direct benefit from antivirals only while they are taking them. Antiviral use is considered alone and in combination with other intervention strategies. It is assumed that antiviral courses are available for 10% of the population and that they are distributed to infected individuals and their household members until the supply is exhausted.", "We implement a rolling school closure model, where a daycare or school closes for five days if five or more cases are identified in that group. Given that infected individuals are on average infectious for 4.1 days (see Figure 3), closing schools for fewer than 5 days is unlikely to be very effective. It is possible for these groups to close more than once during the simulation. We also model a reduction in workplace and general community contacts of 20% (i.e., 20% of infected individuals in each contact group will not infect other members of the group). This represents the exercise of a general level of caution, including a modest limitation of contacts within workgroups (e.g., by invoking occasional telecommuting and other self-limiting behaviors, holding fewer large meetings, etc.) and also within the general community (e.g., reduction in attendance in social groups and larger community events, etc.).\nSimulation flowchart and modeled influenza natural history", "We determine economic costs associated with the influenza outbreaks and modeled intervention strategies using methods described by Meltzer et al. [29]. We include medical spending due to illness, costs of antivirals and vaccines, and costs associated with teachers and other working adults staying home due to their own illness, illness of dependent children, or due to school closure. Medical spending includes co-payments and net payments for outpatient visits and hospitalization, as well as prescription and over-the-counter medications for influenza and complications or secondary infections. Costs are stratified by age-group and by low- or high-risk status of individuals with respect to complications of influenza. We also include the present value of earnings lost due to premature mortality.\nCost estimates and probabilities of risk status and of complications and death were taken from Meltzer et al. [29], with costs inflated using 2008 consumer price index and medical price index estimates [30-33]. These costs are combined with the data on age-specific attack rates, utilized vaccination doses, and days of school closure obtained from our simulation model. 
Details of the cost calculations are given in the Appendix.", "With no intervention, the average overall illness attack rate is 34.1%, with an estimated total cost of $81.1 million (Table 3). Pre-vaccination of 35% of the population with a low-efficacy vaccine reduces the average overall illness attack rate to 26.1% (total cost $71.1 million), and with a moderate-efficacy vaccine to 18.8% (total cost $53.7 million). Not surprisingly, pre-vaccination of 70% of the population is more effective (overall average illness attack rate 12.0%, total cost $47.0 million for a low-efficacy vaccine; and 0.2% and $19.3 million with a moderate-efficacy vaccine; see Table 4).\nAverage overall illness attack rates and total costs of interventions with 35% vaccination coverage\n1. Abbreviations for modeled interventions: V (vaccination of up to 35% of the population), L (low efficacy), M (moderate efficacy), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing).\n2. Initial supply-chain delays which prevent immediate initiation of vaccination programs after the intervention trigger occurs.\n3. Additional supply-chain delays, after initiation of the vaccination program, as a result of which vaccines become available in three equal batches, spaced two weeks apart.\nAverage overall illness attack rates and total costs of interventions with 70% vaccination coverage\n1. Abbreviations for modeled interventions: V (vaccination of up to 70% of the population), L (low efficacy), M (moderate efficacy), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing).\n2. Initial supply-chain delays which prevent immediate initiation of vaccination programs after the intervention trigger occurs.\n3. Additional supply-chain delays, after initiation of the vaccination program, as a result of which vaccines become available in three equal batches, spaced two weeks apart\nReactive vaccination alone, of 35% of the population with a low-efficacy vaccine delivered in three batches, reduces the overall average illness attack rate to 28.8% (or 22.8% with a moderate-efficacy vaccine), with a total cost of $77.7 million ($63.1 million with a moderate-efficacy vaccine). Thirty- and 60-day delays in initiation of reactive vaccination, with vaccines delivered in three batches, result in attack rates of 29.5% (total cost $79.3 million) and 32.2% (total cost $86.0 million), respectively, for a low-efficacy vaccine, and 24.6% (total cost $67.5 million) and 30.8% (total cost $82.5 million), respectively, for a moderate-efficacy vaccine. Clearly, with a 60-day delay, interventions occur too late in the epidemic to have any meaningful effect (see Figure 4).\nDaily attack rates for (i) the case of 70% coverage of low-efficacy vaccine with 60-day initial delay, and (ii) the baseline case. For case (i), the vaccine is given on the 60th day followed by receipt of vaccine after two additional two-week delays (see arrows). Note that vaccine given on the 60th day decreases the attack rate compared to the baseline; but the two subsequent receipts of vaccine do not result in additional benefits.\nAntiviral use at low (10%) coverage alone results in an overall attack rate of 31.3% (total cost $75.9 million). 
School closure and social distancing alone result in an attack rate of 24.0%, with a total cost of $125.0 million.\nSuppose we combine reactive low-efficacy vaccination of 35% of the population delivered in three batches, antivirals (10% coverage), and school closure and social distancing. Then the overall average illness attack rate is 4.5% (total cost $32.2 million) if no delays occur in the initiation of vaccination, and 5.4% (total cost $36.8 million) if a 60-day delay occurs. With a moderate-efficacy vaccine, the attack rate for this last scenario reduces to 2.4% (total cost $22.0 million). Similar relationships between interventions are apparent for interventions with 70% vaccination coverage, shown in Table 4. Vaccination coverage of 70% with a moderate-efficacy vaccine, combined with antiviral treatment and school closure, is highly effective, even with an initial 60-day delay and additional supply-chain disruptions (average illness attack rate 1.4%, total cost $27.4 million).\nWe note that the results when all vaccines are available at the same time are better than those involving delivery in batches, and sometimes significantly so, especially for a moderate-efficacy vaccine (Tables 3 and 4). Figures 5A through 5D illustrate the comparative illness attack rates of the various intervention strategies discussed above for all combinations of low/moderate-efficacy vaccine delivered in three batches and at 35% / 70% coverage as a function of the initial delay in vaccination implementation due to supply-chain disruptions. The impact of vaccinating 70% of the population, rather than 35%, ranges from moderate to substantial, with the increased coverage being most beneficial when the vaccine is delivered in a timely manner, and the vaccine is either of moderate efficacy or of low efficacy applied in combination with other intervention strategies.\nAverage overall illness attack rates (%) for modeled interventions. Average overall illness attack rates for the following scenarios: no intervention; pre-vaccination; reactive vaccination with delays in initiation of 0, 30, and 60 days after the intervention trigger of a 0.01% overall illness attack rate; antiviral treatment or household prophylaxis with 10% population coverage (intervention “A”); rolling, as-needed five-day individual school closures and social distancing (20% reduction in workgroup and general community contacts—intervention “S”); antiviral use plus vaccination; school closure, and social distancing plus vaccination; antiviral use, school closure and social distancing, plus vaccination (“A+S”). Vaccination coverage is 35% of the population in Figures 5A and 5B; it is 70% of the population in Figures 5C and 5D. In reactive vaccination scenarios, additional supply-chain disruptions are assumed, such that vaccines are available in three equal batches, spaced two weeks apart, after initiation of vaccination programs. In Figures 5A and 5C, a low-efficacy vaccine is assumed (efficacy against susceptibility, VEs, 0.3; efficacy against infectiousness, VEi, 0.2). In Figures 5B and 5D, a moderate-efficacy vaccine is assumed (VEs, 0.5; VEi, 0.5).\nComplete (age-stratified and overall) average illness attack results for all modeled interventions are given in Supplementary Data Table S1. 
The comparative effectiveness of interventions is similar when age-group-specific results are studied.\nFigure 6A illustrates attack rate and total cost combinations for interventions that result in at least a 75% reduction in cost compared to no intervention. The closer to the origin, the more desirable an intervention is in terms of total cost and average illness attack rate. Aside from pre-vaccination strategies, we see that 70% reactive vaccination with a moderate-efficacy vaccine and school closure and social distancing, or even 35% reactive vaccination with a moderate-efficacy vaccine, antiviral use, and school closure, also result in substantial reductions in cost and attack rates. Figure 6B illustrates attack rate and cost results for interventions that result in more-modest 50%–75% reductions in cost compared to no intervention. Once again, several strategies combining vaccination, antiviral use, and school closure/social distancing are competitive with pre-vaccination.\nTotal cost of modeled intervention strategies (US$m) vs. average illness attack rate (%) Figure 6A shows results for interventions with cost reductions of more than 75% compared with no intervention, and Figure 6B shows results for interventions with cost reductions of 50%−75% compared with no intervention. Abbreviations for modeled interventions: PV (pre-vaccination), V (vaccination), L (low-efficacy), M (moderate efficacy), 35 (35% coverage of population), 70 (70% coverage), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing). Multiple occurrences of each plotting symbol may occur; occurrences at higher costs and illness attack rates represent interventions with longer supply-chain delays.", "Previously published research has shown that pre-vaccination of 60%–70% of the population can contain seasonal as well as pandemic influenza, but that delays in vaccination can greatly reduce the effectiveness of the vaccination programs [4-7]. Our model confirms these results for moderate-efficacy vaccines (Tables 3, 4, and S1). However, vaccination efforts in countries such as the US, Canada, and others began well after the first waves of H1N1 activity, and it is reasonable to believe that the same will be true in future outbreaks of pandemic influenza. In particular, in the event of an outbreak, it will likely take time to achieve high levels of vaccination coverage, and, if past experience with seasonal influenza vaccination campaigns is an indication, it is plausible that only low or moderate coverage will eventually be achieved. The results of our simulation model show that delayed and low-coverage reactive vaccination strategies (with a low-efficacy vaccine, plus limited use of antivirals) will not be enough to mitigate the pandemic or to significantly reduce total costs associated with influenza morbidity and mortality (based on results from Table 3, average illness attack rates are only reduced by 26% and total costs by 13%, compared to no intervention).\nAccording to our model, combining rolling, limited-duration, as-needed closures of individual schools and a practical social distancing policy with 35% reactive low-efficacy vaccination coverage and low-level (10%) antiviral use can reduce illness attack rates by 89% compared to no intervention, as well as total costs by 64%. Similarly, combining interventions in this manner reduces overall attack rates by 99% and costs by 84% when a moderate-efficacy vaccine is available. 
This strategy remains highly effective even when delays in implementing vaccination of up to 60 days occur. Previously published results have left open the question of how costly interventions involving school closure might be [5]. Our results show that reactive combination strategies that include practical school closure measures, when diligently implemented, can substantially reduce the total costs associated with influenza morbidity and mortality.

Our model has several limitations. We do not consider vaccination strategies targeted to high-risk groups, which could reduce costs associated with complications from influenza. We have not modeled co-circulating strains of seasonal and pandemic influenza or possible resistance to antiviral drugs (although, to mitigate this limitation, our model assumes only low coverage with antivirals, as well as interventions without antivirals). As is always the case with simulation models, continuing follow-up analyses are needed, including: (i) sensitivity to model parameters; (ii) sensitivity to model intervention triggers (e.g., overall illness attack rate, numbers of cases detected in schools, etc.); (iii) sensitivity to R0, which can be heterogeneous across cities and countries; and (iv) results for new H1N1 natural history and transmission parameters, and new cost estimates for complications resulting from H1N1 illness, as they become known.

Our model has several strengths. We model a large, realistic, heterogeneous population, base the simulation model on well-studied and documented stochastic simulators, calibrate to actual H1N1 attack rates and most-likely R0 values, and have the ability to model large numbers of scenarios in a relatively short amount of time on a desktop platform. The model also provides cost estimates that are useful for making policy decisions about potentially expensive interventions. In particular, we model and analyze a variety of interventions and combinations of interventions in terms of costs and efficacy. We also take into consideration reactive strategies incorporating supply-chain delays, and we identify strategies that effectively contain outbreaks and costs even in the presence of supply-chain delays, low vaccine efficacy, and low vaccine coverage.

Conclusions

Our model illustrates the epidemiological effectiveness of a combination strategy involving short-term closures of individual schools on an as-needed basis, other practical social distancing activities, reactive vaccination of 35% or more of the population, and limited use of antivirals for treatment and prophylaxis. The model also quantifies the cost savings for this and alternative reactive strategies. Public health authorities should consider placing renewed emphasis on such combination strategies when reacting to possible additional waves of the current pandemic, or to new waves of future pandemics.

Appendix

In this Appendix, we provide details on the simulation model as well as economic cost considerations.

Simulation model

Our simulator is similar to those developed by Longini et al. for high-end computing platforms [7,16]; our simulator is programmed in C++ and runs on desktop platforms.
Population structure and influenza transmission model details are given below.

Population structure

As discussed in the main text, the stochastically generated attributes for each person in our population of 649,565 included: age, household, playgroup or daycare attended (for pre-school children), school attended (for children 5–18 years of age), workgroup (for working adults and working 16–18 year old children), household census tract and workplace census subdivision, community (approximately 2000 people), and neighborhood (approximately 500 people). Thus individuals belong to three or four contact groups. In particular, each individual belongs to a household, neighborhood, and community. In addition, children younger than 16 belong to either a playgroup, daycare, or school, depending on age; most children in age range 16–18 belong to a school or workgroup; and most adults in age range 19–59 belong to a workgroup. Preschool children were categorized as belonging to a playgroup / daycare, each with 50% probability. We separated secondary schools into middle schools and high schools based on grade to allow different contact group sizes and to make our model more representative of mid-sized US cities. The numbers of playgroups, daycares, elementary, middle, and high schools in each community were based on Longini et al. [16], and were combined with the number of individuals in each category in our simulation population to obtain the contact group sizes. The number of working adults (19–59 years old) was based on census data [23]; and the number of working children (16–18 years old) was based on Ontario data on drop-out rates [34] and the employment rate for ages 15–24 [23].
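As a rough illustration of how one such synthetic individual might be generated, the following self-contained Python sketch assigns the attributes described above. The age-band weights, group counts, and in-school/employment fractions are illustrative placeholders, not the calibrated census-based values used in the actual model.

    import random

    # Placeholder age bands and weights; the real model draws ages from Canadian census data.
    AGE_BANDS = [(0, 4, 0.05), (5, 15, 0.13), (16, 18, 0.04), (19, 59, 0.58), (60, 100, 0.20)]

    def sample_age(rng):
        lo, hi, _ = rng.choices(AGE_BANDS, weights=[w for _, _, w in AGE_BANDS])[0]
        return rng.randint(lo, hi)

    def make_person(person_id, rng):
        age = sample_age(rng)
        person = {
            "id": person_id,
            "age": age,
            "household": rng.randrange(250_000),   # placeholder household index
            "neighborhood": rng.randrange(1_300),  # neighborhoods of roughly 500 people
            "community": rng.randrange(325),       # communities of roughly 2000 people
            "daygroup": None,                      # playgroup, daycare, or school
            "workgroup": None,
        }
        if age <= 4:
            # preschoolers attend a playgroup or a daycare, each with 50% probability
            person["daygroup"] = "playgroup" if rng.random() < 0.5 else "daycare"
        elif age <= 15:
            person["daygroup"] = "school"          # elementary, middle, or high school by age
        elif age <= 18:
            # most 16-18 year olds attend school; the rest join a workgroup of about 20 co-workers
            if rng.random() < 0.90:                # placeholder in-school fraction
                person["daygroup"] = "school"
            else:
                person["workgroup"] = rng.randrange(20_000)
        elif age <= 59:
            if rng.random() < 0.75:                # placeholder employment rate
                person["workgroup"] = rng.randrange(20_000)
        return person

    rng = random.Random(1)
    population = [make_person(i, rng) for i in range(1_000)]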
Influenza transmission model

The simulator models influenza transmission over a 180-day period, within the contact groups previously defined. Figure 3 depicts a flowchart of the model. The modeled natural history and simulator dynamics parameters, described below and shown in Figure 3, were based on Longini et al. [7,19].

To initiate influenza outbreaks, simulations are seeded with approximately 100 randomly selected initial infectives, with all other individuals considered susceptible (state 0). Susceptible people have the opportunity, each day, to become infected in their contact groups. As discussed in the main text, the daily probability of infection for each susceptible person is determined by the number of infectious contacts in his contact groups and by the per-contact probability of transmission for each type of contact. For example, the probability of a susceptible child who attends daycare being infected on a particular day is:

    1 – [Pr(child is not infected in the household)
         × Pr(child is not infected in the neighborhood)
         × Pr(child is not infected in the community)
         × Pr(child is not infected at the daycare center)].

Within each contact group, the probability of infection of a susceptible individual depends on the number of infectious individuals in the group. For example, suppose that k1 children and k2 adults in a household are infectious on a particular day. Then the probability of a susceptible household member being infected in that household on that day is:

    1 – [Pr(not infected by a particular infected child in the household)^k1
         × Pr(not infected by a particular infected adult in the household)^k2].

The numbers of infectious people in the contact groups (e.g., k1 and k2) are random variables that are updated at the beginning of each day.

Age- and contact-group-specific per-contact probabilities of transmission of infection are given in Table 1. The probability that infection is transmitted from an infected person to a susceptible person also depends on whether the infectious person is symptomatic or asymptomatic. Table 1 shows the rates for symptomatic individuals; the transmission rates for asymptomatic individuals are half of those shown in Table 1. These probabilities are based on Longini et al. [7,16], with adjustments made to calibrate baseline (no intervention) results to age-group-specific illness attack rates and R0 estimates for novel A (H1N1) in Ontario [14,24,25]; see Table 2.
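To make the calculation above concrete, here is a minimal Python sketch of the daily infection probability for one susceptible person: each class of infectious contact contributes an escape factor of (1 - p)^k, and the daily infection probability is one minus the product of all escape factors. Apart from the household child-to-child value of 0.8 quoted in the Table 1 footnote, the per-contact probabilities in the example are illustrative stand-ins for the Table 1 entries.

    def daily_infection_probability(contact_groups):
        # contact_groups: one list per contact group the susceptible person belongs to,
        # each containing (per_contact_probability, number_of_infectious_contacts) pairs.
        escape = 1.0
        for group in contact_groups:
            for p, k in group:
                escape *= (1.0 - p) ** k
        return 1.0 - escape

    # Example: a susceptible daycare child with one symptomatic child at home,
    # two infectious contacts at daycare, and light neighborhood/community exposure.
    groups = [
        [(0.8, 1)],        # household: one symptomatic child (Table 1 footnote value)
        [(0.015, 2)],      # daycare: illustrative per-contact probability
        [(0.00004, 5)],    # neighborhood: illustrative
        [(0.00002, 10)],   # community: illustrative
    ]
    print(daily_infection_probability(groups))   # approximately 0.806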
Once infected, people enter a 1–3 day latent period (state 1; average length 1.9 days). They are assumed to become infectious on the last day of the latent period, and are half as infectious as they will be after the latent period ends. After the latent period, 67% of infectives become symptomatic (state 2), and 33% are asymptomatic (state 3). These infectious states last between 3 and 6 days. Symptomatic infectives are assumed to be twice as infectious as asymptomatics, and have a chance of withdrawing home during each day of illness (see Figure 3); upon withdrawal, they only make contacts within their household and neighborhood, with transmission probabilities doubled in the household contact group, until they recover. If a school child withdraws home due to illness, one adult in the household also stays home. Each day in states 2 and 3, an infectious person has a chance to exit the state and be removed from the simulation (i.e., to recover or die; state 4). Probabilities for transition into and out of states are given in Figure 3 and are based on Longini et al. [7,16].
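The disease course just described can be sketched as a small state machine. In the snippet below, the latent-period weights and the 67%/33% symptomatic split follow the text; the day-by-day withdrawal and exit probabilities live in Figure 3 and are therefore replaced by simple placeholders.

    import random

    def sample_disease_course(rng):
        # State 1: latent period of 1-3 days (placeholder weights chosen so the mean is 1.9 days).
        latent_days = rng.choices([1, 2, 3], weights=[0.3, 0.5, 0.2])[0]
        # After the latent period, 67% become symptomatic (state 2) and 33% asymptomatic (state 3).
        state = 2 if rng.random() < 0.67 else 3
        # Infectious period of 3-6 days, after which the person is removed (state 4).
        infectious_days = rng.randint(3, 6)
        # Symptomatic people may withdraw home on any day of illness (placeholder daily probability).
        withdraws = state == 2 and any(rng.random() < 0.5 for _ in range(infectious_days))
        return {"latent_days": latent_days, "state": state,
                "infectious_days": infectious_days, "withdraws_home": withdraws}

    rng = random.Random(0)
    print([sample_disease_course(rng) for _ in range(3)])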
Economic cost calculations

The total cost of each intervention scenario includes the cost of vaccine doses and antiviral courses used, if any; costs associated with parents staying at home with sick children and with school teachers, parents, and children staying home due to school closure; costs due to illness-related absence from work; medical costs associated with illness, including outpatient visits, prescription and over-the-counter drugs, and hospitalization; and lost earnings due to death.

We use methods described by Meltzer et al. [29] to quantify most medical and work-loss costs (see also [33]). Table 5 shows the proportions of illnesses assumed to be at high risk for complications among children (0–18 years old), younger adults (19–59 years old), and seniors (over 60). Table 6 shows estimated rates of outpatient visits, hospitalizations, and death used in our calculations for children, adults, and seniors at high risk and not at high risk of complications. We chose the ‘low’ rate estimates presented in Meltzer et al. [29], which we believe to be most consistent with the relatively low R0 (1.4) for our model. Outpatient visit, hospitalization, and death costs are shown in Table 7; cost figures from Meltzer et al. [29] have been inflated using 2008 consumer price and medical price indexes [30-32]. All the above costs were combined with age-specific attack rates obtained from our simulation model. In addition, we assume average costs of $25 per vaccine dose or antiviral course used, consistent with previous reports [35]. Table 8 shows other costs associated with vaccination (i.e., the cost of lost time, travel, and side effects). These costs are based on [34], inflated as described above. The vaccination costs are combined with the number of used vaccination doses obtained from our simulation model. We assume that 1% of antiviral users discontinue use due to side effects; medical and other costs associated with these side effects are not included in our model.

Table 5. Proportions of influenza cases at high risk for complications (1)
1. Proportions taken from Meltzer et al. [29], and adapted to our age groups.

Table 6. Outpatient visit, hospitalization, and death rates, by age group and risk status for complications (1)
1. Rates taken from Meltzer et al. [29].

Table 7. Frequency and costs (in US$) associated with influenza-related outpatient visits, hospitalizations, and deaths (1)
1. Estimates based on figures from Meltzer et al. [29]. Cost estimates inflated by 2008 consumer and medical price indices [30-32] as appropriate.

Table 8. Costs and impacts of vaccination (1)
1. Estimates based on figures from Meltzer et al. [29]. Travel and side effect cost estimates inflated by 2008 consumer and medical price indices [30-32] as appropriate.

To estimate costs of ill individuals staying home and work loss associated with parents staying at home with sick children, we multiplied the number of days (obtained from our simulation model) by the inflation-adjusted average value of lost days from Table 7. Similarly, we estimated the average number of teachers at schools and daycares by dividing the total number of such teachers in Hamilton [36] among the schools and daycares in our model.
To estimate the cost of lost teacher productivity due to school closures, we multiplied the number of days schools and daycares are closed in our simulation model by the average number of teachers at Hamilton schools and daycares and by the average value of a day of lost work obtained from Table 7.

Table S1 shows age-stratified and overall illness attack rates for all modeled scenarios, along with total cost estimates. Figure 7 depicts the total cost (US$) plotted vs. average overall illness attack rate for each intervention.

Figure 7. Total cost of modeled intervention strategies versus the average illness attack rate.
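The cost calculation described above can be summarized in a short sketch. The population sizes, per-case rates, and dollar figures below are illustrative placeholders standing in for the entries of Tables 5–8; only the structure of the calculation (cases = population × attack rate, then per-case rates × unit costs, plus intervention and work-loss costs) follows the text.

    POPULATION = {"children": 150_000, "adults": 380_000, "seniors": 120_000}   # placeholder sizes

    # (per-case probability, unit cost in US$) -- placeholders for Tables 6 and 7
    OUTPATIENT = {"children": (0.45, 120.0), "adults": (0.40, 150.0), "seniors": (0.55, 180.0)}
    HOSPITAL   = {"children": (0.004, 15_000.0), "adults": (0.006, 20_000.0), "seniors": (0.015, 25_000.0)}
    DEATH      = {"children": (0.0001, 1_500_000.0), "adults": (0.0008, 1_200_000.0), "seniors": (0.005, 80_000.0)}
    WORK_LOSS_PER_CASE = 3 * 180.0   # placeholder: days lost per case x value of a lost day

    def total_cost(attack_rates, vaccine_doses=0, antiviral_courses=0, dose_cost=25.0):
        # Cost of the intervention itself ($25 per dose or course, as assumed in the text).
        cost = (vaccine_doses + antiviral_courses) * dose_cost
        for group, rate in attack_rates.items():
            cases = POPULATION[group] * rate
            for table in (OUTPATIENT, HOSPITAL, DEATH):
                probability, unit_cost = table[group]
                cost += cases * probability * unit_cost
            cost += cases * WORK_LOSS_PER_CASE
        return cost

    # Example: age-specific attack rates from one simulated scenario (placeholder values).
    print(total_cost({"children": 0.30, "adults": 0.20, "seniors": 0.10}, vaccine_doses=227_000))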
Authors' contributions

Study conception and design: AN, SA, DG, KLT
Simulation model development: AN, WC, KLT, SA, MLL, DG, BS, DNF
Analysis and interpretation of simulation results: AN, SA, DG, WC
Drafting of manuscript: SA, AN, DG
All authors read and approved the final manuscript.

Competing interests

DNF has received grant matching funds from Sanofi Pasteur, which manufactures a vaccine for use against influenza A (H1N1)-2009 outside Canada.
[ "Background", "Methods", "The simulation model", "Intervention strategies", "Vaccination", "Antiviral treatment and household prophylaxis", "School closure and social distancing", "Economic cost estimation", "Results", "Discussion", "Conclusions", "Appendix", "Simulation model", "Population structure", "Influenza transmission model", "Economic cost calculations", "Authors' contributions", "Competing interests", "Supplementary Material" ]
[ "In April, 2009, the World Health Organization (WHO) announced the emergence of a new influenza A (H1N1) virus, and on June 11, 2009, it declared that the world was at the start of a new influenza pandemic [1]. WHO reported more than 414,000 laboratory-confirmed cases of H1N1 [2] — a gross underestimate, as many countries simply stopped counting individual cases. The US Centers for Disease Control and Prevention reported widespread influenza activity in forty-six states, with influenza-like illness (ILI) activity in October 2009 higher than what is seen during the peak of many regular flu seasons; and further, “Almost all of the influenza viruses identified … are 2009 H1N1 influenza A viruses” [3]. Countries found themselves in the position of having to react to contain already developing Fall outbreaks of influenza due to the new pandemic strain, a position they are likely to find themselves in again if and when future waves of pandemic influenza occur.\nResearch has suggested that mass vaccination of 60–70% of the population prior to the start of the flu season could effectively contain outbreaks due to pandemic strains [4-7]; and the public health preparedness plans of most countries have, accordingly, emphasized vaccination intervention strategies. However, the recent experience with H1N1 suggests that high vaccination coverage levels are difficult to achieve. In the case of H1N1, vaccination programs in most northern hemisphere countries started only after the virus was widely circulating. Furthermore, in some countries, supplies of vaccine were limited [8], delivery and administration occurred over a period of several months [9,10], and there were reports of public skepticism regarding the necessity and safety of vaccination [11,12], all of which were strong indicators suggesting that high vaccination coverage would be difficult to achieve. While many institutions in the US and elsewhere strongly encouraged and, in some cases, required workers to be vaccinated against seasonal influenza in 2009, H1N1 vaccination guidelines were focused mostly on people in certain age and high-risk groups [13]. Delays, limited and untimely vaccination supplies, and public reluctance to be vaccinated are likely to reduce the effectiveness of vaccination campaigns [4,5].\nThe issues outlined above for the recent outbreak of H1N1 are likely to occur again in future outbreaks of pandemic influenza. In this paper, we explore the effectiveness of realistic reactive intervention strategies implemented after the beginning of outbreaks of pandemic influenza. We calibrate our model based on data for the H1N1 pandemic (see Tuite et al. [14]), and we investigate the impacts of (i) the moderate vaccination coverage levels which, based on past experience, are likely to be realized, as well as high levels which would be more ideal; (ii) very limited treatment of cases with antivirals and prophylaxis of cases’ households with antivirals; and (iii) limited and practical social distancing measures such as five-day closure of individual schools on an as-needed basis, encouragement of liberal leave policies in the workplace, and encouragement of self-isolation. Intervention strategies that combine these approaches are also studied (cf. Halloran et al. [15]). 
For all intervention strategies, we provide cost estimates associated with morbidity and mortality that take into account direct medical costs as well as economic consequences resulting from school closures and work loss.

Methods

The simulation model

We developed a portable and adaptable stochastic, individual-level simulation model of influenza spread within a structured population. The simulator is similar to models developed by Longini et al. [7,16]. The simulation population of 649,565 people was generated stochastically to represent a typical North American city, namely, Hamilton (Ontario), Canada, which was chosen due to availability of demographic and epidemiological data necessary for constructing and calibrating the simulator. Our population is a collection of heterogeneous individuals with various attributes that impact whom they interact with (and hence whom they may infect or get infected by). More specifically, each individual has the following stochastically generated attributes: age, household, playgroup or daycare attended (for pre-school children), school attended (for school-age children), workgroup (for working adults), household census tract and workplace census subdivision, community, and neighborhood. As in [16], a community consists of approximately 2000 people living within the same census tract, and a neighborhood consists of approximately 500 people living within proximity to each other within the same community; also see the recent papers [17] and [18], which incorporate more-detailed individual-level behavior involving larger populations. Age and household-size distributions, shown in Figures 1 and 2, were matched to 2001 Canadian census data [19,20]. Household census tract assignments were made so that census tract population sizes were consistent with 2006 census statistics [21]. Workgroups were formed to match 2006 employment statistics [22] as well as census statistics on the geographical distribution of workers [23]. Rather than representing entire workplace institutions, we formed workgroups of size 20 to represent the typical number of co-workers an individual is likely to have close contact with during the day. Average playgroup, daycare, and lower and upper secondary school (i.e., middle and high school) contact group sizes were chosen for similar reasons; see the Appendix.

Figure 1. Age distribution for simulated population.
Figure 2. Household size distribution for simulated population.

Susceptible people are assumed to have daily contacts with other individuals in their contact groups, i.e., their household and school or workgroups, as well as with people in their neighborhood and community. Infection of susceptibles depends on the number of infected persons in their contact groups, on the vaccine and antiviral-use status of susceptibles and their infectious contacts, and on age- and contact-group-specific per-contact transmission probabilities (Table 1). This disease transmission model is based on previously described models [7,16], and is detailed in the Appendix. People infected with influenza first pass through a latent / incubation period, during which they do not have influenza symptoms. They are not infectious until the last day of the period; at that point, they become half as infectious as if they were to develop symptoms in the subsequent period. During that subsequent infectious period, 67% will develop influenza symptoms and 33% will be asymptomatic (and will be half as infectious as those who are symptomatic) [7]. The model allows for people to withdraw from all of their mixing groups, except the household, if they become infected or have an infected child.
The model allows for people to withdraw from all of their mixing groups, except the household, if they become infected or have an infected child.\nPer-contact influenza infection transmission probabilities within contact groups\n1. Within households, the probability that a symptomatic child (age 18 years or less) infects a susceptible child is 0.8; that a symptomatic child infects a susceptible adult (at least 19 years old), or that a symptomatic adult infects a susceptible child, is 0.3; that a symptomatic adult infects a susceptible adult is 0.4 [16].\n2. Probability that a susceptible person in the age or school group is infected through contact with a symptomatic person in the group.\nThe simulator is calibrated to match documented illness attack rates and basic reproductive numbers (R0). Baseline (no-intervention) scenario age-group-specific attack rates were derived using 2009 estimates for the H1N1 basic reproductive number in Ontario [14,24,25] (see Table 2). These rates take into account reduced risk in adults born prior to 1957 [24]. A compartmental model parameterized in this way was well-calibrated to observed attack rates during the Fall pandemic wave in Ontario [25]. The simulator’s R0 value of 1.4 is also consistent with other published reports [4,26,27].\nAge-group-specific H1N1 influenza illness attack rates in Ontario, Canada, 2009, and calibrated attack rates\n1. See the discussion in [25].
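To make the population and contact-group structure concrete, the following is a minimal C++ sketch of how an individual and the groups described above might be represented. All type and member names (Person, Population, DayGroupType, and so on) are illustrative assumptions and not the simulator's actual data structures.

#include <vector>

// Illustrative sketch only: names and layout are assumptions, not the
// simulator's actual implementation.
enum class DayGroupType { None, Playgroup, Daycare, Elementary, MiddleSchool, HighSchool, Workgroup };

struct Person {
    int age = 0;                    // years
    int householdId = -1;           // everyone belongs to a household
    int neighborhoodId = -1;        // roughly 500 people
    int communityId = -1;           // roughly 2000 people (one census tract)
    int censusTract = -1;           // household census tract
    int workplaceSubdivision = -1;  // workplace census subdivision, if employed
    DayGroupType dayGroupType = DayGroupType::None;
    int dayGroupId = -1;            // playgroup/daycare/school/workgroup index, if any
};

struct Population {
    std::vector<Person> people;                         // ~650,000 individuals
    std::vector<std::vector<int>> householdMembers;     // person indices per household
    std::vector<std::vector<int>> neighborhoodMembers;  // person indices per neighborhood
    std::vector<std::vector<int>> communityMembers;     // person indices per community
    std::vector<std::vector<int>> dayGroupMembers;      // person indices per day group
};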
[SUBTITLE] Intervention strategies [SUBSECTION] We modeled a baseline case where no intervention takes place, along with strategies representing various combinations of vaccination, antiviral treatment and household prophylaxis, school closure, and general social distancing (see the results in Tables 3 and 4 and Supplementary Data Table S1 provided in “Additional File 1”). Each component of the strategies is described in detail below. Interventions are triggered in a particular simulation run when the overall illness attack rate reaches 0.01%. Twenty runs of the simulator were performed for each intervention strategy, from which average illness attack rates were calculated. We briefly describe the interventions under consideration.
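As a hedged illustration of how the intervention trigger and multi-run averaging might be organized, the sketch below uses a hypothetical Simulation interface; runOneDay, attackRateSoFar, and startInterventions are assumed names standing in for the simulator's internals rather than its real API.

// Hypothetical interface standing in for the simulator's internals; the
// member functions here are assumptions, not the actual code.
class Simulation {
public:
    virtual ~Simulation() = default;
    virtual void runOneDay() = 0;                // advance transmission by one day
    virtual double attackRateSoFar() const = 0;  // cumulative illness attack rate, 0..1
    virtual void startInterventions() = 0;       // activate the configured intervention package
};

// One 180-day run: interventions begin once the overall illness attack rate
// reaches 0.01% (0.0001). Twenty such runs would be averaged per strategy.
double runWithTrigger(Simulation& sim) {
    bool triggered = false;
    for (int day = 0; day < 180; ++day) {
        if (!triggered && sim.attackRateSoFar() >= 0.0001) {
            sim.startInterventions();
            triggered = true;
        }
        sim.runOneDay();
    }
    return sim.attackRateSoFar();
}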
[SUBTITLE] Vaccination [SUBSECTION] We model both pre-vaccination as well as reactive strategies, with reactive vaccination programs beginning immediately, 30 days, or 60 days after the trigger. The delays model disruptions in vaccine production and supply chains. We allow enough doses to cover either 35% or 70% of the population. In reactive strategies, we consider cases where (i) all vaccines become available at the same time, and (ii) the doses become available in three equal-sized batches, two weeks apart, due to additional production and supply-chain disruptions. We study a low-efficacy single-dose vaccine (efficacy against susceptibility to infection, VEs = 0.3, and efficacy against infectiousness, VEi = 0.2) as well as a moderate-efficacy vaccine (VEs = 0.4, VEi = 0.5) [28]. Vaccine efficacy refers to the reduction, after vaccination, in the probability of becoming infected due to contact with an infected person (VEs), or to the reduction, after vaccination, in the probability of infecting a susceptible contact (VEi). Vaccine efficacy does not refer to the fraction of individuals having an immunogenic response to the vaccine (which is typically much larger than our measures).\nEach day, our model randomly vaccinates any remaining unvaccinated individuals who are either uninfected or in the latent or asymptomatic phases of infection, all with equal probability based on the number of available doses. Moreover, protection from the vaccine builds over time, with 50% of the vaccine’s efficacy realized upon vaccination, and full protection after two weeks.
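A minimal C++ sketch of the daily vaccination step described above follows. The names are assumptions, and the linear build-up of protection between day 0 and day 14 is also an assumption, since the text only fixes the two endpoints (50% of efficacy on the day of vaccination, full protection after two weeks).

#include <algorithm>
#include <random>
#include <vector>

// Illustrative sketch; names are assumptions, not the simulator's actual code.
struct VaccineEfficacy {
    double VEs;  // efficacy against susceptibility (e.g., 0.3 low, 0.4 moderate)
    double VEi;  // efficacy against infectiousness  (e.g., 0.2 low, 0.5 moderate)
};

// Fraction of full efficacy realized d days after vaccination: 50% at day 0,
// full at day 14; the linear ramp in between is an assumption.
double protectionFraction(int daysSinceVaccination) {
    if (daysSinceVaccination < 0) return 0.0;
    if (daysSinceVaccination >= 14) return 1.0;
    return 0.5 + 0.5 * (daysSinceVaccination / 14.0);
}

// Spend today's available doses on randomly chosen eligible people
// (unvaccinated and either uninfected, latent, or asymptomatic).
void vaccinateOneDay(std::vector<int> eligible,         // indices of eligible people
                     std::vector<int>& vaccinationDay,  // -1 means unvaccinated
                     int today,
                     long long& dosesRemaining,
                     std::mt19937& rng) {
    std::shuffle(eligible.begin(), eligible.end(), rng);
    for (int person : eligible) {
        if (dosesRemaining <= 0) break;
        if (vaccinationDay[person] != -1) continue;  // already vaccinated
        vaccinationDay[person] = today;
        --dosesRemaining;
    }
}

In a transmission step, the per-contact probability of infection for a vaccinated susceptible would then be scaled by 1 - VEs × protectionFraction(today - vaccinationDay), and the infectiousness of a vaccinated infective by 1 - VEi × protectionFraction(today - vaccinationDay).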
[SUBTITLE] Antiviral treatment and household prophylaxis [SUBSECTION] We investigate strategies involving treatment of infected individuals with a five-day course of antivirals, as well as strategies that also allow for ten-day prophylaxis of the infected individuals’ household members. We assume that 1% of individuals do not complete their course. We use an antiviral efficacy against susceptibility (AVEs) of 0.3 and against infectiousness (AVEi) of 0.7 [16]. Individuals receive direct benefit from antivirals only while they are taking them. Antiviral use is considered alone and in combination with other intervention strategies. It is assumed that antiviral courses are available for 10% of the population and that they are distributed to infected individuals and their household members until the supply is exhausted.
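The antiviral logic can be sketched in C++ as follows; the function and field names are assumptions, and the 1% of people who do not complete their course are represented here simply by truncating the course length, which is one possible reading of the text rather than the study's exact rule.

#include <random>
#include <vector>

// Illustrative sketch; names are assumptions, not the simulator's actual code.
struct AntiviralParams {
    double AVEs = 0.3;          // efficacy against susceptibility while on drug
    double AVEi = 0.7;          // efficacy against infectiousness while on drug
    int treatmentDays = 5;      // course length for identified cases
    int prophylaxisDays = 10;   // course length for household contacts
    double dropoutProb = 0.01;  // fraction who do not complete the course
};

// When a symptomatic case is identified, treat the case and give prophylaxis
// to household members while the stockpile (enough for ~10% of the population)
// lasts. courseEndDay[i] is the last simulation day person i is on antivirals.
void dispenseForCase(int caseId,
                     const std::vector<int>& householdMembers,
                     std::vector<int>& courseEndDay,
                     int today,
                     long long& coursesRemaining,
                     const AntiviralParams& p,
                     std::mt19937& rng) {
    std::bernoulli_distribution dropout(p.dropoutProb);
    auto startCourse = [&](int person, int days) {
        if (coursesRemaining <= 0) return;
        int length = dropout(rng) ? days / 2 : days;  // assumed effect of non-completion
        courseEndDay[person] = today + length - 1;
        --coursesRemaining;
    };
    startCourse(caseId, p.treatmentDays);
    for (int member : householdMembers) {
        if (member != caseId) startCourse(member, p.prophylaxisDays);
    }
}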
[SUBTITLE] School closure and social distancing [SUBSECTION] We implement a rolling school closure model, where a daycare or school closes for five days if five or more cases are identified in that group. Given that infected individuals are on average infectious for 4.1 days (see Figure 3), closing schools for fewer than 5 days is unlikely to be very effective. It is possible for these groups to close more than once during the simulation. We also model a reduction in workplace and general community contacts of 20% (i.e., 20% of infected individuals in each contact group will not infect other members of the group). This represents the exercise of a general level of caution, including a modest limitation of contacts within workgroups (e.g., by invoking occasional telecommuting and other self-limiting behaviors, holding fewer large meetings, etc.) and also within the general community (e.g., reduction in attendance in social groups and larger community events, etc.).\nSimulation flowchart and modeled influenza natural history
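A hedged C++ sketch of the rolling closure rule is given below: a daycare or school closes for five days once five or more cases are identified in it, and the same group may close again later. The counters and names are illustrative assumptions.

// Illustrative sketch; names are assumptions, not the simulator's actual code.
struct SchoolStatus {
    int identifiedCasesSinceReopen = 0;  // cases detected since the last reopening
    int closedUntilDay = -1;             // last day of the current closure, if any
    bool isClosed(int day) const { return day <= closedUntilDay; }
};

// Called once per day for each daycare/school after new cases are tallied.
// Triggers a five-day closure when five or more cases have been identified;
// the same group may close more than once during the simulation.
void updateClosure(SchoolStatus& s, int newIdentifiedCases, int today) {
    if (s.isClosed(today)) return;         // already closed; group contacts suspended
    s.identifiedCasesSinceReopen += newIdentifiedCases;
    if (s.identifiedCasesSinceReopen >= 5) {
        s.closedUntilDay = today + 4;      // closed today through the next four days
        s.identifiedCasesSinceReopen = 0;  // reset so the group can close again later
    }
}

The 20% reduction in workgroup and general community contacts could be layered on top of this by randomly selecting 20% of infectives in those groups to make no infectious contacts there.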
[SUBTITLE] Economic cost estimation [SUBSECTION] We determine economic costs associated with the influenza outbreaks and modeled intervention strategies using methods described by Meltzer et al. [29]. We include medical spending due to illness, costs of antivirals and vaccines, and costs associated with teachers and other working adults staying home due to their own illness, illness of dependent children, or due to school closure. Medical spending includes co-payments and net payments for outpatient visits and hospitalization, as well as prescription and over-the-counter medications for influenza and complications or secondary infections. Costs are stratified by age-group and by low- or high-risk status of individuals with respect to complications of influenza. We also include the present value of earnings lost due to premature mortality.\nCost estimates and probabilities of risk status and of complications and death were taken from Meltzer et al. [29], with costs inflated using 2008 consumer price index and medical price index estimates [30-33]. These costs are combined with the data on age-specific attack rates, utilized vaccination doses, and days of school closure obtained from our simulation model. Details of the cost calculations are given in the Appendix.",
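To show how these pieces combine into a single figure, here is a simplified C++ sketch of the total-cost calculation. The cost and probability inputs are placeholders to be filled from Meltzer et al. [29] (inflated to 2008 dollars) and from the simulation outputs, and the structure shown is an assumption rather than the exact bookkeeping used in the study.

#include <vector>

// Simplified sketch; numeric inputs would come from Meltzer et al. [29] and
// the simulation outputs, and the field names are assumptions.
struct AgeGroupOutcome {
    long long cases = 0;             // ill individuals in this age group (from the simulator)
    double highRiskFraction = 0;     // probability of being high risk for complications
    double costPerCaseLowRisk = 0;   // outpatient/hospital/drug costs plus work loss
    double costPerCaseHighRisk = 0;
    double deathProb = 0;            // probability an ill person dies
    double valueOfLostEarnings = 0;  // present value of earnings lost per death
};

double totalCost(const std::vector<AgeGroupOutcome>& byAge,
                 long long vaccineDosesUsed, double costPerDose,
                 long long antiviralCoursesUsed, double costPerCourse,
                 long long schoolClosureDays, double costPerClosureDay) {
    double cost = 0.0;
    for (const auto& g : byAge) {
        double perCase = g.highRiskFraction * g.costPerCaseHighRisk
                       + (1.0 - g.highRiskFraction) * g.costPerCaseLowRisk;
        cost += g.cases * perCase;                              // morbidity-related costs
        cost += g.cases * g.deathProb * g.valueOfLostEarnings;  // mortality-related costs
    }
    cost += vaccineDosesUsed * costPerDose;
    cost += antiviralCoursesUsed * costPerCourse;
    cost += schoolClosureDays * costPerClosureDay;              // parents/teachers staying home
    return cost;
}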
"With no intervention, the average overall illness attack rate is 34.1%, with an estimated total cost of $81.1 million (Table 3). Pre-vaccination of 35% of the population with a low-efficacy vaccine reduces the average overall illness attack rate to 26.1% (total cost $71.1 million), and with a moderate-efficacy vaccine to 18.8% (total cost $53.7 million).
Not surprisingly, pre-vaccination of 70% of the population is more effective (overall average illness attack rate 12.0%, total cost $47.0 million for a low-efficacy vaccine; and 0.2% and $19.3 million with a moderate-efficacy vaccine; see Table 4).\nAverage overall illness attack rates and total costs of interventions with 35% vaccination coverage\n1. Abbreviations for modeled interventions: V (vaccination of up to 35% of the population), L (low efficacy), M (moderate efficacy), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing).\n2. Initial supply-chain delays which prevent immediate initiation of vaccination programs after the intervention trigger occurs.\n3. Additional supply-chain delays, after initiation of the vaccination program, as a result of which vaccines become available in three equal batches, spaced two weeks apart.\nAverage overall illness attack rates and total costs of interventions with 70% vaccination coverage\n1. Abbreviations for modeled interventions: V (vaccination of up to 70% of the population), L (low efficacy), M (moderate efficacy), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing).\n2. Initial supply-chain delays which prevent immediate initiation of vaccination programs after the intervention trigger occurs.\n3. Additional supply-chain delays, after initiation of the vaccination program, as a result of which vaccines become available in three equal batches, spaced two weeks apart\nReactive vaccination alone, of 35% of the population with a low-efficacy vaccine delivered in three batches, reduces the overall average illness attack rate to 28.8% (or 22.8% with a moderate-efficacy vaccine), with a total cost of $77.7 million ($63.1 million with a moderate-efficacy vaccine). Thirty- and 60-day delays in initiation of reactive vaccination, with vaccines delivered in three batches, result in attack rates of 29.5% (total cost $79.3 million) and 32.2% (total cost $86.0 million), respectively, for a low-efficacy vaccine, and 24.6% (total cost $67.5 million) and 30.8% (total cost $82.5 million), respectively, for a moderate-efficacy vaccine. Clearly, with a 60-day delay, interventions occur too late in the epidemic to have any meaningful effect (see Figure 4).\nDaily attack rates for (i) the case of 70% coverage of low-efficacy vaccine with 60-day initial delay, and (ii) the baseline case. For case (i), the vaccine is given on the 60th day followed by receipt of vaccine after two additional two-week delays (see arrows). Note that vaccine given on the 60th day decreases the attack rate compared to the baseline; but the two subsequent receipts of vaccine do not result in additional benefits.\nAntiviral use at low (10%) coverage alone results in an overall attack rate of 31.3% (total cost $75.9 million). School closure and social distancing alone result in an attack rate of 24.0%, with a total cost of $125.0 million.\nSuppose we combine reactive low-efficacy vaccination of 35% of the population delivered in three batches, antivirals (10% coverage), and school closure and social distancing. Then the overall average illness attack rate is 4.5% (total cost $32.2 million) if no delays occur in the initiation of vaccination, and 5.4% (total cost $36.8 million) if a 60-day delay occurs. With a moderate-efficacy vaccine, the attack rate for this last scenario reduces to 2.4% (total cost $22.0 million). 
Similar relationships between interventions are apparent for interventions with 70% vaccination coverage, shown in Table 4. Vaccination coverage of 70% with a moderate-efficacy vaccine, combined with antiviral treatment and school closure, is highly effective, even with an initial 60-day delay and additional supply-chain disruptions (average illness attack rate 1.4%, total cost $27.4 million).\nWe note that the results when all vaccines are available at the same time are better than those involving delivery in batches, and sometimes significantly so, especially for a moderate-efficacy vaccine (Tables 3 and 4). Figures 5A through 5D illustrate the comparative illness attack rates of the various intervention strategies discussed above for all combinations of low/moderate-efficacy vaccine delivered in three batches and at 35% / 70% coverage as a function of the initial delay in vaccination implementation due to supply-chain disruptions. The impact of vaccinating 70% of the population, rather than 35%, ranges from moderate to substantial, with the increased coverage being most beneficial when the vaccine is delivered in a timely manner, and the vaccine is either of moderate efficacy or of low efficacy applied in combination with other intervention strategies.\nAverage overall illness attack rates (%) for modeled interventions. Average overall illness attack rates for the following scenarios: no intervention; pre-vaccination; reactive vaccination with delays in initiation of 0, 30, and 60 days after the intervention trigger of a 0.01% overall illness attack rate; antiviral treatment or household prophylaxis with 10% population coverage (intervention “A”); rolling, as-needed five-day individual school closures and social distancing (20% reduction in workgroup and general community contacts—intervention “S”); antiviral use plus vaccination; school closure, and social distancing plus vaccination; antiviral use, school closure and social distancing, plus vaccination (“A+S”). Vaccination coverage is 35% of the population in Figures 5A and 5B; it is 70% of the population in Figures 5C and 5D. In reactive vaccination scenarios, additional supply-chain disruptions are assumed, such that vaccines are available in three equal batches, spaced two weeks apart, after initiation of vaccination programs. In Figures 5A and 5C, a low-efficacy vaccine is assumed (efficacy against susceptibility, VEs, 0.3; efficacy against infectiousness, VEi, 0.2). In Figures 5B and 5D, a moderate-efficacy vaccine is assumed (VEs, 0.5; VEi, 0.5).\nComplete (age-stratified and overall) average illness attack results for all modeled interventions are given in Supplementary Data Table S1. The comparative effectiveness of interventions is similar when age-group-specific results are studied.\nFigure 6A illustrates attack rate and total cost combinations for interventions that result in at least a 75% reduction in cost compared to no intervention. The closer to the origin, the more desirable an intervention is in terms of total cost and average illness attack rate. Aside from pre-vaccination strategies, we see that 70% reactive vaccination with a moderate-efficacy vaccine and school closure and social distancing, or even 35% reactive vaccination with a moderate-efficacy vaccine, antiviral use, and school closure, also result in substantial reductions in cost and attack rates. Figure 6B illustrates attack rate and cost results for interventions that result in more-modest 50%–75% reductions in cost compared to no intervention. 
Once again, several strategies combining vaccination, antiviral use, and school closure/social distancing are competitive with pre-vaccination.\nTotal cost of modeled intervention strategies (US$m) vs. average illness attack rate (%) Figure 6A shows results for interventions with cost reductions of more than 75% compared with no intervention, and Figure 6B shows results for interventions with cost reductions of 50%−75% compared with no intervention. Abbreviations for modeled interventions: PV (pre-vaccination), V (vaccination), L (low-efficacy), M (moderate efficacy), 35 (35% coverage of population), 70 (70% coverage), A (antiviral treatment and household prophylaxis of up to 10% of the population), S (school closure and social distancing). Multiple occurrences of each plotting symbol may occur; occurrences at higher costs and illness attack rates represent interventions with longer supply-chain delays.", "Previously published research has shown that pre-vaccination of 60%–70% of the population can contain seasonal as well as pandemic influenza, but that delays in vaccination can greatly reduce the effectiveness of the vaccination programs [4-7]. Our model confirms these results for moderate-efficacy vaccines (Tables 3, 4, and S1). However, vaccination efforts in countries such as the US, Canada, and others began well after the first waves of H1N1 activity, and it is reasonable to believe that the same will be true in future outbreaks of pandemic influenza. In particular, in the event of an outbreak, it will likely take time to achieve high levels of vaccination coverage, and, if past experience with seasonal influenza vaccination campaigns is an indication, it is plausible that only low or moderate coverage will eventually be achieved. The results of our simulation model show that delayed and low-coverage reactive vaccination strategies (with a low-efficacy vaccine, plus limited use of antivirals) will not be enough to mitigate the pandemic or to significantly reduce total costs associated with influenza morbidity and mortality (based on results from Table 3, average illness attack rates are only reduced by 26% and total costs by 13%, compared to no intervention).\nAccording to our model, combining rolling, limited-duration, as-needed closures of individual schools and a practical social distancing policy with 35% reactive low-efficacy vaccination coverage and low-level (10%) antiviral use can reduce illness attack rates by 89% compared to no intervention, as well as total costs by 64%. Similarly, combining interventions in this manner reduces overall attack rates by 99% and costs by 84% when a moderate-efficacy vaccine is available. This strategy remains highly effective even when delays in implementing vaccination of up to 60 days occur. Previously published results have left open the question of how costly interventions involving school closure might be [5]. Our results show that reactive combination strategies that include practical school closure measures, when diligently implemented, can reduce total costs associated with influenza morbidity and mortality substantially.\nOur model has several limitations. We do not consider vaccination strategies targeted to high-risk groups, which could reduce costs associated with complications from influenza. 
We have not modeled co-circulating strains of seasonal and pandemic influenza or possible resistance to antiviral drugs (although, to mitigate this limitation, our model assumes only low coverage with antivirals, as well as interventions without antivirals). As is always the case with simulation models, continuing follow-up analyses are needed, including: (i) sensitivity to model parameters; (ii) sensitivity to model intervention triggers (e.g., overall illness attack rate, numbers of cases detected in schools, etc.); (iii) sensitivity to R0, which can be heterogeneous across cities and countries; and (iv) results for new H1N1 natural history and transmission parameters, and new cost estimates for complications resulting from H1N1 illness, as they become known.\nOur model has several strengths. We model a large, realistic, heterogeneous population, base the simulation model on well-studied and documented stochastic simulators, calibrate to actual H1N1 attack rates and most-likely R0 values, and have the ability to model large numbers of scenarios in a relatively short amount of time on a desktop platform. The model also provides cost estimates that are useful for making policy decisions about potentially expensive interventions. In particular, we model and analyze a variety of interventions and combinations of interventions in terms of costs and efficacy. We also take into consideration reactive strategies incorporating supply-chain delays, and we identify strategies that effectively contain outbreaks and costs even in the presence of supply-chain delays, low vaccine efficacy, and low vaccine coverage.", "Our model illustrates the epidemiological effectiveness of a combination strategy involving short-term closures of individual schools on an as-needed basis, other practical social distancing activities, reactive vaccination of 35% or more of the population, and limited use of antivirals for treatment and prophylaxis. The model also quantifies the cost savings for this and alternative reactive strategies. Public health authorities should consider placing renewed emphasis on such combination strategies when reacting to possible additional waves of the current pandemic, or to new waves of future pandemics.", "In this Appendix, we provide details on the simulation model as well as economic cost considerations.\n[SUBTITLE] Simulation model [SUBSECTION] Our simulator is similar to those developed by Longini et al. for high-end computing platforms [7,16]; our simulator is programmed in C++ and runs on desktop platforms. Population structure and influenza transmission model details are given below.\n[SUBTITLE] Population structure [SUBSECTION] As discussed in the main text, the stochastically generated attributes for each person in our population of 649,565 included: age, household, playgroup or daycare attended (for pre-school children), school attended (for children 5–18 years of age), workgroup (for working adults and working 16–18 year old children), household census tract and workplace census subdivision, community (approximately 2000 people), and neighborhood (approximately 500 people). Thus individuals belong to three or four contact groups. In particular, each individual belongs to a household, neighborhood, and community. In addition, children younger than 16 belong to either a playgroup, daycare, or school, depending on age; most children in age range 16–18 belong to a school or workgroup; and most adults in age range 19–59 belong to a workgroup. 
Preschool children were categorized as belonging to a playgroup / daycare, each with 50% probability. We separated secondary schools into middle schools and high schools based on grade to allow different contact group sizes and to make our model more representative of mid-sized US cities. The numbers of playgroups, daycares, elementary, middle, and high schools in each community were based on Longini et al. [16], and were combined with the number of individuals in each category in our simulation population to obtain the contact group sizes. The number of working adults (19–59 years old) was based on census data [23]; and the number of working children (16–18 years old) was based on Ontario data on drop-out rates [34] and the employment rate for ages 15–24 [23].
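As a small illustration of how age-dependent group membership might be assigned, consider the following C++ sketch. The 50% playgroup/daycare split comes from the text; the age cut-offs and the drop-out and employment rates are placeholders standing in for the Ontario figures in [34], [23], and [22], and are assumptions rather than the values actually used.

#include <random>

// Illustrative sketch; enum values, age cut-offs, and rates are assumptions.
enum class DayGroupType { None, Playgroup, Daycare, Elementary, MiddleSchool, HighSchool, Workgroup };

DayGroupType assignDayGroup(int age, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    if (age <= 4)  return u(rng) < 0.5 ? DayGroupType::Playgroup : DayGroupType::Daycare;
    if (age <= 10) return DayGroupType::Elementary;
    if (age <= 13) return DayGroupType::MiddleSchool;
    if (age <= 15) return DayGroupType::HighSchool;
    if (age <= 18) {
        // Placeholder rates: pDropOut and pEmployed would be taken from the
        // Ontario drop-out [34] and ages 15-24 employment [23] statistics.
        const double pDropOut = 0.10, pEmployed = 0.55;
        if (u(rng) < pDropOut)
            return u(rng) < pEmployed ? DayGroupType::Workgroup : DayGroupType::None;
        return DayGroupType::HighSchool;
    }
    if (age <= 59) {
        const double pWorking = 0.75;  // placeholder for the census employment rate [22,23]
        return u(rng) < pWorking ? DayGroupType::Workgroup : DayGroupType::None;
    }
    return DayGroupType::None;
}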
[SUBTITLE] Influenza transmission model [SUBSECTION] The simulator models influenza transmission over a 180-day period, within the contact groups previously defined. Figure 3 depicts a flowchart of the model. The modeled natural history and simulator dynamics parameters, described below and shown in Figure 3, were based on Longini et al. [7,19].\nTo initiate influenza outbreaks, simulations are seeded with approximately 100 randomly selected initial infectives, with all other individuals considered susceptible (state 0). Susceptible people have the opportunity, each day, to become infected in their contact groups. As discussed in the main text, the daily probability of infection for each susceptible person is determined by the number of infectious contacts in his contact groups and by the per-contact probability of transmission for each type of contact. For example, the probability of a susceptible child who attends daycare being infected on a particular day is:\n1 – [Pr(child is not infected in the household) × Pr(child is not infected in the neighborhood) × Pr(child is not infected in the community) × Pr(child is not infected at the daycare center)].\nWithin each contact group, the probability of infection of a susceptible individual depends on the number of infectious individuals in the group. For example, suppose that k1 children and k2 adults in a household are infectious on a particular day. Then the probability of a susceptible household member being infected in that household on that day is:\n1 – [Pr(not infected by a particular infected child in the household)^k1 × Pr(not infected by a particular infected adult in the household)^k2].\nThe numbers of infectious people in the contact groups (e.g., k1 and k2) are random variables that are updated at the beginning of each day.\nAge- and contact-group-specific per-contact probabilities of transmission of infection are given in Table 1. The probability that infection is transmitted from an infected person to a susceptible person also depends on whether the infectious person is symptomatic or asymptomatic; Table 1 shows the rates for symptomatic individuals, and the transmission rates for asymptomatic individuals are half of those shown in Table 1. These probabilities are based on Longini et al. [7,16], with adjustments made to calibrate baseline (no intervention) results to age-group-specific illness attack rates and R0 estimates for novel A (H1N1) in Ontario [14,24,25]; see Table 2.\nOnce infected, people enter a 1–3 day latent period (state 1; average length 1.9 days). They are assumed to become infectious on the last day of the latent period, and are half as infectious as they will be after the latent period ends. After the latent period, 67% of infectives become symptomatic (state 2), and 33% are asymptomatic (state 3). These infectious states last between 3 and 6 days. Symptomatic infectives are assumed to be twice as infectious as asymptomatics, and have a chance of withdrawing home during each day of illness (see Figure 3); upon withdrawal, they only make contacts within their household and neighborhood, with transmission probabilities doubled in the household contact group, until they recover. If a school child withdraws home due to illness, one adult in the household also stays home. Each day in states 2 and 3, an infectious person has a chance to exit the state and be removed from the simulation (i.e., to recover or die — state 4). Probabilities for transition into and out of states are given in Figure 3 and are based on Longini et al. [7,16].
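The escape-probability calculation above can be made concrete with a short Python sketch. The per-contact transmission probabilities below are placeholders rather than the calibrated Table 1 values, and the only rules encoded are the ones stated in the text: escape probabilities multiply across infectious contacts and contact groups, and asymptomatic infectives transmit at half the symptomatic rate.

```python
def group_escape_prob(p_transmit, n_symptomatic, n_asymptomatic):
    """Probability of NOT being infected in one contact group on one day.

    p_transmit is the per-contact daily transmission probability from a
    symptomatic infective; asymptomatic infectives transmit at half that rate.
    """
    escape_symptomatic = (1.0 - p_transmit) ** n_symptomatic
    escape_asymptomatic = (1.0 - 0.5 * p_transmit) ** n_asymptomatic
    return escape_symptomatic * escape_asymptomatic

def daily_infection_prob(groups):
    """1 minus the product of escape probabilities over all contact groups.

    `groups` maps a group name to (p_transmit, n_symptomatic, n_asymptomatic).
    """
    escape = 1.0
    for p_transmit, n_sym, n_asym in groups.values():
        escape *= group_escape_prob(p_transmit, n_sym, n_asym)
    return 1.0 - escape

# Example for a daycare child; transmission probabilities are placeholders,
# not the calibrated per-contact rates from Table 1.
groups = {
    "household":    (0.08, 1, 0),      # one symptomatic infective at home
    "neighborhood": (0.00004, 3, 2),
    "community":    (0.00001, 10, 5),
    "daycare":      (0.015, 2, 1),
}
print(round(daily_infection_prob(groups), 4))
```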
[SUBTITLE] Economic cost calculations [SUBSECTION] The total cost of each intervention scenario includes the cost of vaccine doses and antiviral courses used, if any; costs associated with parents staying at home with sick children and school teachers, parents, and children staying home due to school closure; costs due to illness-related absence from work; medical costs associated with illness, including outpatient visits, prescription and over-the-counter drugs, and hospitalization; and lost earnings due to death.\nWe use methods described by Meltzer et al.
[29] to quantify most medical and work-loss costs (see also [33]). Table 5 shows the proportions of illnesses assumed to be at high risk for complications among children (0–18 years old), younger adults (19–59 years old) and seniors (over 60). Table 6 shows estimated rates of outpatient visits, hospitalizations, and death used in our calculations for children, adults, and seniors at high risk and not at high risk of complications. We chose the ‘low’ rate estimates presented in Meltzer et al. [29], which we believe to be most consistent with the relatively low R0 (1.4) for our model. Outpatient visit, hospitalization, and death costs are shown in Table 7; cost figures from Meltzer et al. [29] have been inflated using 2008 consumer price and medical price indexes [30-32]. All the above costs were combined with age-specific attack rates obtained from our simulation model. In addition, we assume average costs of $25 per vaccine dose or antiviral course used, consistent with previous reports [35]. Table 8 shows other costs associated with vaccination (i.e., the cost of lost time, travel, and side effects). These costs are based on [34], inflated as described above. The vaccination costs are combined with the number of used vaccination doses obtained from our simulation model. We assume that 1% of antiviral users discontinue use due to side effects; medical and other costs associated with these side effects are not included in our model.\nProportions of influenza cases at high risk for complications1\n1. Proportions taken from Meltzer et al. [29], and adapted to our age groups\nOutpatient visit, hospitalization, and death rates, by age group and risk status for complications1\n1. Rates taken from Meltzer et al. [29]\nFrequency and costs (in US$) associated with influenza-related outpatient visits, hospitalizations, and deaths1\n1. Estimates based on figures from Meltzer et al. [29]. Cost estimates inflated by 2008 consumer and medical price indices [30-32] as appropriate.\nCosts and impacts of vaccination1\n1. Estimates based on figures from Meltzer et al. [29]. Travel and side effect cost estimates inflated by 2008 consumer and medical price indices [30-32] as appropriate.\nTo estimate costs of ill individuals staying home and work-loss associated with parents staying at home with sick children, we multiplied the number of days (obtained from our simulation model) with the inflation-adjusted average value of lost days from Table 7. Similarly, we estimated the average number of teachers at schools and daycares by dividing the total number of such teachers in Hamilton [36] among the schools and daycares in our model. To estimate the cost of lost teacher productivity due to school closures, we multiplied the number of days schools and daycares are closed in our simulation model by the average number of teachers at Hamilton schools and daycares and by the average value of a day of lost work obtained from Table 7.\nTable S1 shows age-stratified and overall illness attack rates for all modeled scenarios, along with total cost estimates. Figure 7 depicts the total cost (US$) plotted vs. 
average overall illness attack rate for each intervention.\nTotal cost of modeled intervention strategies versus the average illness attack rate", "Study conception and design: AN, SA, DG, KLT\nSimulation model development: AN, WC, KLT, SA, MLL, DG, BS, DNF\nAnalysis and interpretation of simulation results: AN, SA, DG, WC\nDrafting of manuscript: SA, AN, DG\nAll authors read and approved the final manuscript.", "DNF has received grant matching funds from Sanofi Pasteur, which manufactures a vaccine for use against influenza A (H1N1)-2009 outside Canada.", "Supplementary Data for Reactive Strategies for Containing Developing Outbreaks of Pandemic Influenza" ]
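As a numeric illustration of the roll-up described in the Economic cost calculations subsection above, the sketch below combines age-specific illness counts with outpatient, hospitalization, and death rates and unit costs. All rates, costs, and illness counts here are invented placeholders; the study itself uses the 'low' rate estimates and inflation-adjusted costs from Meltzer et al. [29] summarized in Tables 5–8.

```python
# Placeholder rates and unit costs (illustrative only; not the Meltzer et al. values).
RATES = {  # per ill person, by risk status: (outpatient, hospitalization, death)
    "high_risk":     (0.65, 0.020, 0.0040),
    "not_high_risk": (0.40, 0.005, 0.0005),
}
UNIT_COSTS = {"outpatient": 120.0, "hospitalization": 9000.0, "death": 1_000_000.0}
HIGH_RISK_FRACTION = {"children": 0.06, "adults": 0.15, "seniors": 0.40}

def direct_medical_cost(ill_by_age):
    """Roll up outpatient, hospitalization, and death costs over age and risk groups."""
    total = 0.0
    for age_group, n_ill in ill_by_age.items():
        frac_high = HIGH_RISK_FRACTION[age_group]
        for risk, n in (("high_risk", n_ill * frac_high),
                        ("not_high_risk", n_ill * (1.0 - frac_high))):
            outpatient, hosp, death = RATES[risk]
            total += n * (outpatient * UNIT_COSTS["outpatient"]
                          + hosp * UNIT_COSTS["hospitalization"]
                          + death * UNIT_COSTS["death"])
    return total

# Example: illness counts from one hypothetical simulation run.
ill_by_age = {"children": 60000, "adults": 90000, "seniors": 15000}
print(f"Direct medical cost: ${direct_medical_cost(ill_by_age):,.0f}")
```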
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Public health interventions for epidemics: implications for multiple infection waves.
21356131
Epidemics with multiple infection waves have been documented for some human diseases, most notably during past influenza pandemics. While pathogen evolution, co-infection, and behavioural changes have been proposed as possible mechanisms for the occurrence of subsequent outbreaks, the effect of public health interventions remains undetermined.
BACKGROUND
We develop mean-field and stochastic epidemiological models for disease transmission, and perform simulations to show how control measures, such as drug treatment and isolation of ill individuals, can influence the epidemic profile and generate sequences of infection waves with different characteristics.
METHODS
We demonstrate the impact of parameters representing the effectiveness and adverse consequences of intervention measures, such as treatment and emergence of drug resistance, on the spread of a pathogen in the population. If pathogen resistant strains evolve under drug pressure, multiple outbreaks are possible with variability in their characteristics, magnitude, and timing. In this context, the level of drug use and isolation capacity play an important role in the occurrence of subsequent outbreaks. Our simulations for influenza infection as a case study indicate that the intensive use of these interventions during the early stages of the epidemic could delay the spread of disease, but it may also result in later infection waves with possibly larger magnitudes.
RESULTS
The findings highlight the importance of intervention parameters in the process of public health decision-making, and in evaluating control measures when facing substantial uncertainty regarding the epidemiological characteristics of an emerging infectious pathogen. Critical factors that influence population health including evolutionary responses of the pathogen under the pressure of different intervention measures during an epidemic should be considered for the design of effective strategies that address short-term targets compatible with long-term disease outcomes.
CONCLUSIONS
[ "Disease Outbreaks", "Disease Transmission, Infectious", "Epidemiologic Methods", "Humans", "Influenza, Human", "Models, Statistical", "Public Health", "Stochastic Processes" ]
3317576
null
null
Methods
[SUBTITLE] The model structure [SUBSECTION] To formulate the models for describing the disease epidemic, we assume that the population is initially entirely susceptible to the infectious pathogen. It is assumed that the infection can be treated with drugs, but the pathogen may develop resistance during the course of treatment, with potential for transmission. Since resistance emergence may impose a fitness cost on pathogen replication and transmission [11], we assume that the drug-resistant pathogen is less transmissible than the drug-sensitive pathogen. Treatment is assumed to reduce transmissibility of the drug-sensitive infection, but remains ineffective against drug-resistant infection. We also assume that recovery from infection confers immunity to re-infection with either drug-sensitive or resistant pathogens. Considering epidemics with relatively short time-courses, we ignore the effect of recruitment, natural death, and other demographic variables of the population. With the assumption of homogeneous mixing, we divide the population into classes of susceptibles (S); individuals exposed (not yet infectious) to sensitive (E) and resistant (E_r) infections; untreated individuals infected with sensitive (I) and resistant (I_r) infections; treated individuals infected with sensitive (I_T) and resistant (I_T,r) infections; isolated individuals infected with either sensitive or resistant infection (J); and recovered individuals (R). Figure 1 shows the movements of individuals between these classes during the course of an epidemic. With parameters described in Table 1, the dynamics of the mean-field model can be mathematically expressed by the system of differential equations (1).\nFigure 1. The model: model diagram for the movements of individuals between population compartments.\nTable 1. Parameter values: baseline values of the parameters used for simulations of the models, with sources from published literature. For a given value of R0, the baseline transmission rate β can be calculated using the expression R0 = βS0/γ.\nDetails of the model in its stochastic form are provided in the Appendix.\n[SUBTITLE] Reproduction numbers [SUBSECTION] A key parameter in disease epidemiology is the basic reproduction number of the invading pathogen, commonly denoted by R0, which is the average number of new infections generated by a single infected case introduced into an entirely susceptible (non-immune) population [12]. The quantity R0 can be used to estimate the growth rate of an epidemic (during the initial phase) and the total number of infections (final size of the epidemic) [13]. When public health interventions are implemented, the reproduction number of disease is affected by parameters that determine the effectiveness of control measures; we therefore introduce the control reproduction number (Rc) to evaluate the impact of such parameters on transmissibility of the pathogen and epidemic dynamics. Applying a previously established method [12,14] to model (1), we obtain Rc in terms of the reproduction numbers of the sensitive and resistant infections, which are expressed by equations (2), where S0 is the size of the susceptible population at the onset of the outbreak. In the absence of treatment and isolation, Rc reduces to the basic reproduction number of the sensitive pathogen, given by R0 = βS0/γ. Using the expression in (2) for the reproduction number of the sensitive infection, one can calculate the critical value p* at which this reproduction number equals 1, so that the spread of the sensitive infection can be contained for p > p*. Rewriting this condition in terms of R0 gives an explicit expression for p*. However, the spread of disease caused by the sensitive pathogen cannot be controlled if R0 exceeds the threshold R* = (γ + α)/(δT qγ), which results in p* > 1. Since 0 ≤ q ≤ 1, for the parameter values used in simulations (Table 1), disease control becomes infeasible if R0 > 2.5. Similarly, a critical value at which the reproduction number of the resistant infection equals 1 can be obtained from (2), and the spread of the resistant pathogen is contained when the corresponding control parameter exceeds this critical value; the resulting expression highlights the importance of isolation for controlling the spread of resistant infection.
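Because the displayed system (1) is not included in this record, the following Python sketch should be read as an illustrative simplification rather than the authors' model: a deterministic two-strain SEIR-type system in which a fraction p of drug-sensitive cases is treated, a small fraction of treated cases gives rise to de novo resistance, and the resistant strain has relative transmissibility 0.8 as stated in the text. Isolation is omitted for brevity, and all other parameter values are assumptions chosen only to produce a plausible epidemic curve.

```python
# Minimal two-strain SEIR-type sketch with treatment and de novo resistance.
# This is NOT the authors' system (1); most parameters below are assumptions.
def simulate(days=200, dt=0.1):
    N = 10_000.0
    beta = 0.40          # transmission rate of the sensitive strain (assumed)
    delta_r = 0.8        # relative transmissibility of the resistant strain (from the text)
    delta_t = 0.4        # relative transmissibility under effective treatment (assumed)
    sigma = 1.0 / 2.0    # 1 / latent period in days (assumed)
    gamma = 1.0 / 4.0    # recovery rate (assumed)
    p = 0.5              # fraction of sensitive cases treated (assumed)
    emerge = 0.02        # fraction of treated cases developing resistance (assumed)

    S, E, Er, I, It, Ir, R = N - 10.0, 10.0, 0.0, 0.0, 0.0, 0.0, 0.0
    history = []
    for step in range(int(days / dt)):
        foi_s = beta * (I + delta_t * It) / N   # force of infection, sensitive strain
        foi_r = beta * delta_r * Ir / N         # force of infection, resistant strain
        new_E, new_Er = foi_s * S, foi_r * S
        dS = -(new_E + new_Er)
        dE = new_E - sigma * E
        dEr = new_Er - sigma * Er
        dI = (1.0 - p) * sigma * E - gamma * I
        dIt = p * (1.0 - emerge) * sigma * E - gamma * It
        dIr = sigma * Er + p * emerge * sigma * E - gamma * Ir
        dR = gamma * (I + It + Ir)
        S, E, Er = S + dS * dt, E + dE * dt, Er + dEr * dt
        I, It, Ir, R = I + dI * dt, It + dIt * dt, Ir + dIr * dt, R + dR * dt
        history.append((step * dt, I + It, Ir))
    return history

if __name__ == "__main__":
    # Print sensitive and resistant prevalence every 10 simulated days.
    for t, sens, res in simulate()[::100]:
        print(f"day {t:6.1f}  sensitive {sens:8.1f}  resistant {res:8.1f}")
```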
null
null
null
null
[ "Background", "The model structure", "Reproduction numbers", "Simulations and results", "Discussion", "Appendix: Stochastic model", "Algorithm for stochastic simulations", "Stochastic simulations", "Authors’ contributions", "Competing interests" ]
[ "Epidemics of infectious diseases have been observed throughout history, with substantial variability in their dynamical patterns. The 1918 influenza pandemic is a notorious case documented as the most devastating epidemic with over 50 million deaths and multiple outbreaks in many geographic areas worldwide [1,2]. Distinct pandemic infection waves were recorded with an 8 to 15 week interval; the latter were more severe than the first and were associated with the majority of deaths [2,3]. Although several factors may be involved, such as the effect of seasonal changes, demographics, and evolution of the virus, the true mechanism by which subsequent waves occur is not fully understood. Nor is it clearly understood how different control measures and strategies for deployment of limited health resources may interfere with disease dynamics and the occurrence of later infection waves.\nRecent epidemiological and modelling studies have attempted to provide explanatory theories for the mechanisms of multiple outbreaks of an infectious pathogen capable of establishing an epidemic [2,4-10]. Spontaneous behavioural changes (e.g., a change in the number of contacts due to modified behaviour of susceptible individuals) have been shown to affect the course of infection events and produce subsequent outbreaks in an epidemic episode [9]. This has been further investigated through modelling “concerned awareness” of individuals that may result in contagion dynamics of fear and disease [6], and the implementation of public health control measures (e.g., social distancing) that may interfere with the individuals’ contact patterns during the epidemic [5]. Co-infection has also been suggested as a possible explanation for multiple infection outbreaks as a result of increased transmissibility in co-infected individuals and non-synchronicity in the time course of the two co-circulating infections [8]. Other possible mechanisms include transient post-infection immunity and evolutionary changes that may occur in the characteristics of the infectious pathogens [2,4,10].\nIn this study, we consider the occurrence of multiple infection waves of a pathogen from a public health perspective, and develop mathematical models to investigate how intervention measures may affect the transmission dynamics in a population. Specifically, we are interested in exploring the impact of changes in policy-relevant parameters on the patterns of disease spread during the course of an epidemic. These parameters may reflect the effectiveness of intervention strategies (e.g., treatment or isolation of infected cases) in reducing disease transmission, or their epidemiological consequences (e.g., emergence of drug resistance), and may therefore play an important role in determining the outcome of disease control activities. The significance of this work thus relates to the process of public health decision-making, in particular when confronting the emergence of a novel infectious disease with substantial uncertainty regarding the epidemiological characteristics of the invading pathogen.\nFor the purpose of this investigation, we develop both mean-field and stochastic epidemiological models that describe the transmission dynamics of a disease in the population, and incorporate treatment and isolation of infected cases as control measures. We parameterize these models to simulate the spread of influenza as a case study, and determine the impact of control parameters on disease dynamics. 
We illustrate the occurrence of multiple infection waves associated with different treatment levels and the development of drug resistance in the population under the scenario of limited capacity for treatment and isolation of infectious individuals. We compare the results obtained by simulating the mean-field model with those observed in the stochastic model, and discuss our findings in the context of epidemiology and public health.", "To formulate the models for describing a disease epidemic, we assume that the population is initially entirely susceptible to the infectious pathogen. It is assumed that the infection can be treated with drugs, but the pathogen may develop resistance during the course of treatment with potential for transmission. Since resistance emergence may impose a fitness cost on pathogen replication and transmission [11], we assume that the drug-resistant pathogen is less transmissible than the drug-sensitive pathogen. Treatment is assumed to reduce transmissibility of the drug-sensitive infection, but remains ineffective against drug-resistant infection. We also assume that recovery from infection confers immunity to re-infection with either drug-sensitive or resistant pathogens. Considering epidemics with relatively short time-courses, we ignore the effect of recruitment, natural death, and other demographic variables of the population.\nWith the assumption of homogeneous mixing, we divide the population into classes of susceptibles ( S ); individuals exposed (not yet infectious) to sensitive ( E ) and resistant ( Er ) infections; untreated individuals infected with sensitive ( I ) and resistant ( Ir ) infections; treated individuals infected with sensitive ( IT ) and resistant ( IT,r ) infections; isolated individuals infected with either sensitive or resistant infection ( J ); and recovered individuals ( R ). Figure 1 shows the movements of individuals between these classes during the course of an epidemic. With parameters described in Table 1, the dynamics of the mean-field model can be mathematically expressed by the system of differential equations (1); a hedged numerical sketch of this system is given after the section texts below.\nThe model. Model diagram for the movements of individuals between population compartments.\nParameter values.\nBaseline values of the parameters used for simulations of the models with sources from published literature. For a given value of R0, the baseline transmission rate β can be calculated using the expression R0 = βS0/γ.\nDetails of the model in its stochastic form are provided in the Appendix.", "A key parameter in disease epidemiology is the basic reproduction number of the invading pathogen, commonly denoted by R0, which is the average number of new infections generated by a single infected case introduced into an entirely susceptible (non-immune) population [12]. The quantity R0 can be used to estimate the growth rate of an epidemic (during the initial phase) and the total number of infections (final size of the epidemic) [13]. When public health interventions are implemented, the reproduction number of disease is affected by parameters that determine the effectiveness of control measures, and we therefore introduce the control reproduction number ( Rc ) to evaluate the impact of such parameters on transmissibility of the pathogen and epidemic dynamics.
Applying a previously established method [12,14] to model (1), we obtain the control reproduction number as the larger of Rc,s and Rc,r, the reproduction numbers of the sensitive and resistant infections respectively, which are expressed by equations (2), where S0 is the size of the susceptible population at the onset of the outbreak (a hedged reconstruction of these expressions is sketched after the section texts below). In the absence of treatment and isolation, Rc reduces to the basic reproduction number of the sensitive pathogen, given by R0 = βS0/γ. Using the expression for Rc,s in (2), one can calculate the critical value p* at which Rc,s = 1, and therefore the spread of the sensitive infection can be contained for p > p*. Rewriting Rc,s = 1 in terms of R0 gives an explicit expression for p*. However, the spread of disease caused by the sensitive pathogen cannot be controlled if R0 exceeds the threshold R* = (γ + α)/(δT qγ), which results in p* > 1. Since 0 ≤ q ≤ 1, for the parameter values used in simulations (Table 1), disease control becomes infeasible if R0 > 2.5. Similarly, setting Rc,r = 1 yields a critical value that determines when the spread of the resistant pathogen is contained; rewriting this condition in terms of R0 highlights the importance of isolation for controlling the spread of resistant infection.", "To simulate the models, we considered influenza infection as a case study, for which emergence and spread of drug-resistance during an outbreak can result from treatment of infected individuals. We assumed that the epidemic is triggered by a drug-sensitive influenza virus, and investigated the role of several key model parameters in changing the epidemic patterns and generating multiple waves of infection. These parameters include the fractions of infected individuals identified for treatment or isolation, and the basic reproduction number of the disease, which varies within the estimated range published in the literature (Table 1). Since public health resources may be limited during an epidemic, we also defined a parameter (Tc ) as the capacity for treatment of infected individuals including those who are isolated (i.e., the percentage of the total population that can be treated). To illustrate various scenarios, we initially seeded a susceptible population of size S0 = 10,000 with E0 = 10 individuals exposed to the sensitive virus, and assumed that treatment can result in the emergence of resistance with the relative transmissibility δr = 0.8 during the outbreak. Other parameter values are given in Table 1.\nThe mean-field model was simulated for a number of scenarios to show the occurrence of multiple infection waves during an epidemic episode (Figure 2). These simulations indicate that variation in the transmissibility of the pathogen (determined by R0 ), as well as parameters that govern the effectiveness of control measures, can significantly impact the epidemic profile, leading to sequences of infection waves with different magnitudes and time-courses. To explore the causes of these multiple outbreaks, we plotted time-courses of treated and untreated sensitive (black curves) and resistant infections (red curves), corresponding to the epidemic profiles in Figures 2a-2d. As illustrated in Figures 2a-2b, large-scale use of treatment (combined with isolation) suppresses the spread of the sensitive infection quickly, but leads to the emergence and spread of resistance that causes the first wave of infection.
Due to the limited capacity of treatment and isolation (run-out scenario), a second wave of infection follows as a result of wide-spread resistance (red curves), which declines once a sizable portion of the susceptible population is infected and the level of susceptibility falls below a threshold that is sufficient to block the transmission of the resistant pathogen with reduced fitness. However, this level of susceptibility may still be above the threshold required for disease containment, and therefore the sensitive pathogen can cause the third wave of infection (black curves).\nMultiple infection waves during an epidemic episode. Simulations for the time-courses of epidemic using parameter values given in Table 1 with: (a) R0 = 1.6; p = 0.68; q = 0.8; Tc = 12% (three infection waves); (b) R0 =1.7; p = 0.66; q = 0.71; Tc = 11% (three infection waves); (c) R0 = 1.9; p = 0.47; q = 0.7; Tc = 18% (two infection waves); and (d) R0 = 2; p = 0.8; q = 0.64; Tc = 18% (two infection waves). Black and red curves correspond respectively to the total untreated and treated sensitive ( I + IT ) and resistant ( Ir + IT,r ) infections without isolation.\nAs the reproduction number of the sensitive infection increases (Figures 2c-2d), higher treatment levels are required for the resistant infection to prevail and cause a significant outbreak [18]. For a reduced level of treatment and a higher transmissibility of the sensitive virus, corresponding to the epidemic profile in Figure 2c, we observed two infection waves, both of which are caused by the spread of the sensitive virus, with generation of very few cases of resistant infection. In this scenario, run-out occurs before the epidemic is contained, and a second infection wave takes place. Similar dynamics can occur with two subsequent waves of resistant infections for a significantly higher treatment level (Figure 2d). However, the second wave that occurs after the treatment capacity is fully dispensed (run-out scenario) leads to a major reduction in susceptibility of the population, thereby ending the epidemic. These simulations indicate that multiple infection waves could occur due to limited resources for treatment/isolation of infected cases, the ways that such resources are deployed during the outbreak, the evolutionary responses of the pathogen to control measures (e.g., emergence of drug resistance), or a combination thereof. We performed further experiments with small changes in these parameters, and observed significant influences on the epidemic dynamics that can be associated with the elimination or creation of an infection wave. It is worth noting that the above scenarios can take place even with sufficient drug stockpiles for which run-out does not occur, if a policy for adaptation (e.g., reduction) of treatment at the population level is implemented due to wide-spread drug-resistance [15].\nFor comparison purposes, we simulated the stochastic version of the model using a Markov Chain Monte Carlo method and observed sequences of infection waves for different sets of parameter values (see Appendix). Consistent with previous observations [4], the stochastic model displays a later peak time of infection waves (with lower magnitudes) than the homogeneous mean-field model. This depends not only on the treatment level, but also on other parameters involved in the spread of sensitive and resistant infections, such as the reduction in potentially infectious contacts and the fitness of resistance.
Furthermore, stochastic effects can play a significant role in determining disease dynamics even during the outbreak, well past the initial establishment phase of the epidemic. This is illustrated in Figure 4c of the Appendix, where the epidemic dies out after the first outbreak in the stochastic model, whereas a second wave of infection with a larger magnitude than the first takes place in the mean-field model.\nIn addition to parameters pertaining to the nature of disease and effectiveness of interventions, the number of infected cases at the onset of an epidemic can greatly influence the dynamics of disease. Our simulations (Figure 3) indicate that small changes in the initial number of infections may result in different epidemic profiles exhibiting more than one infection wave. This suggests that the true dynamics of an emerging disease (with an unknown initial number of infections) may not be predicted with certainty, even when reliable estimates of other pathogen-related and intervention parameters are available.\nThe effect of changes in the initial number of infections on epidemic profiles. Simulations for the time-courses of epidemic (total number of infections without isolation: I + IT + Ir + IT,r) using parameter values given in Table 1. Other parameters are: (a,b) R0 =1.9; p = 0.72; q = 0.71; Tc = 22.2% and initial infected cases of (a) E0 = 6 (three infection waves) and (b) E0 = 12 (single infection wave); (c,d) R0 = 2.5; p = 0.55; q = 0.41; Tc = 26% and initial infected cases of (c) E0 = 8 (single infection wave) and (d) E0 = 10 (two infection waves).", "Stellar advances in the prevention and management of infectious diseases have been achieved since the great influenza pandemic of 1918. Yet, emerging pathogens often inflict incalculable devastation on humanity. The global mobility enabled by rapid international transportation between populations makes the impact of such diseases even more dramatic, with potential socioeconomic upheaval. This was recognized in 2003 with the appearance of severe acute respiratory syndrome (SARS) as the first major infectious disease threat of the 21st century [21], and was recently experienced with the worldwide spread of a swine-origin influenza A virus H1N1, which led the World Health Organization to declare this virus the cause of an influenza pandemic on June 11, 2009 [22]. Public health responses to the emergence of new diseases often involve difficult decisions on optimal use of health resources over very short timelines. Such decisions are further confounded by substantial uncertainties regarding the epidemiological characteristics of the novel infectious pathogen, the effectiveness of public health intervention strategies, and the evolutionary responses of the pathogen under the pressure of control measures [23]. From a population health perspective, it is therefore imperative to look beyond short-term targets and account for long-term disease outcomes in strategy development and implementation. This is particularly important for preventing multiple infection outbreaks that may result from imprudent use of resources or unintended adverse consequences of disease containment strategies.\nGiven the historical evidence for the occurrence of multiple infection waves [2,3,7], several modelling studies have attempted to provide explanatory theories for these events in a single epidemic course [2,5,6,8-10].
In this study, we developed mean-field and stochastic models to investigate possible causes of sequential outbreaks from a public health perspective. Our results show that epidemic dynamics can be substantially affected by factors that influence policy design and implementation (e.g., treatment level or isolation of infected individuals), and parameters that determine the effectiveness and consequences of control measures (e.g., reduction in infectiousness due to treatment or emergence of drug-resistance). Furthermore, the initial number of infections can influence disease outcomes. While mean-field and stochastic models may exhibit similar epidemic behaviour, we also observed differences in their predictions in terms of the speed with which disease spreads through the population (with further delay in the peak time of outbreaks in the stochastic model); the magnitudes of infection outbreaks; and more importantly, the occurrence of infection waves (see Appendix). The latter is particularly influenced by stochastic effects, in addition to the structure of contact patterns and heterogeneity in population interactions [4]. Previous work [4,24] provides a solid foundation for extension of this study through the development of network dynamical models of disease transmission in which heterogeneous contacts between individuals are accounted for.\nIn this study, we simplified the models and included compartments corresponding to some possible stages of a disease; yet we understand that different pathogens may cause infections with different clinical manifestations and infectiousness periods. For example, influenza is known to have a short latent period of less than 2 days before becoming infectious [17], followed by a pre-symptomatic infection during which disease can be transmitted without showing clinical symptoms; however, the latent period of SARS is estimated to be longer and may be comparable to the duration of a complete course of influenza infection [17]. It is also well-documented that influenza can be transmitted in asymptomatic form without developing clinical symptoms [25], while evidence for asymptomatic transmission of SARS is rather scant. These discrepancies in infection stages of human diseases, combined with the ability of the pathogens to overcome the pressures that are applied to limit their replication and spread, can profoundly impact not only the feasibility and effectiveness of control measures, but also the dynamics of disease over the course of an epidemic. Our study highlights these considerations for further investigation, while demonstrating possible mechanisms for the occurrence of multiple infection waves in a single epidemic. Future research in this direction should address some limitations of the present study, including a systematic exploration of parameter space to characterize which intervention parameter regimes are more likely to give rise to sequences of infection outbreaks, and to determine the sensitivity of model outputs (epidemic dynamics) to parameter changes.\nAlthough the models considered here are simulated for influenza infection as a case study, understanding the interplay between intervention parameters, evolutionary responses of the pathogens, and epidemic dynamics remains a critical objective of public health for many diseases [26], including HIV, tuberculosis, malaria, and several bacterial infections. Such diseases often share common features, including the emergence and prevalence of drug-resistant pathogens under the pressure of drug treatment.
The initial rise of resistance is generally associated with fitness costs that make the resistant pathogen less capable of competing with the sensitive pathogen (as the dominant competitor) in a given host population [11]. However, evolutionary mechanisms (e.g., compensatory mutations [27]) may improve the fitness of resistant pathogens, and therefore intervention measures may result in further selection of resistance, as has been documented for the global spread of seasonal influenza drug resistance that appears to be associated with fitness enhancement processes [28]. This suggests that future modelling efforts should integrate factors that govern pathogen-host interactions with the mechanisms of disease epidemiology to guide public health in devising novel and effective means of infection control.", "With the same population compartments as defined in the mean-field model described in the main text, we develop a stochastic model for disease transmission dynamics to investigate the epidemic patterns with random effects. We consider time t as a continuous variable and, for t ∈ [0, ∞), define a random vector whose components are the numbers of individuals in each model compartment at time t, together with the changes that occur to this vector over Δt units of time. The transition probability associated with these changes is defined in (3), in which the function Θ(·) describes the status of an individual in a subpopulation (i.e., Θ(·) = –1: an individual leaves the subpopulation; Θ(·) = 0: no changes occur to the individual's status in the subpopulation; Θ(·) = 1: an individual enters the subpopulation). We assume that Δt is sufficiently small, so that at most one change of status can occur during the time interval Δt, which can be viewed as a Markov chain process. The resulting stochastic model can be described as a continuous-time Markov model, with the transition probabilities given in Table 2.\nPossible transitions between model compartments that can occur in Δt units of time", "For simulating the stochastic model, we used the Markov Chain Monte Carlo method, with an initial E(0) = 10 individuals exposed to sensitive infection in a population of S0 = 10,000 susceptibles. A key parameter in these simulations is the step-size of the Monte Carlo method. Using a fixed step-size requires a large number of steps to guarantee that the transitions between subpopulations take place and disease transmission can occur, which is computationally very demanding in terms of both time and resources. To reduce this computational load, we implemented an adaptive step-size method [29] to estimate the transition time to the next event (Δt) by calculating the sum of the frequencies of all possible events, given by η = β(I + δTIT)S(t) + δrβ(Ir + IT,r)S(t) + (1 – p)µE(E + Er) + pqµE(E + Er) + αIT + p(1 – q)µE(E + Er) + γ(I + Ir + IT,r + J + IT). Then, by choosing Δt = U1/η, where U1 is a uniform random deviate in the interval [0,1], we ordered all possible events as increasing fractions of η and generated another uniform deviate (U2 ∈ [0,1]) to determine the nature of the next event. For convergence of the results, we ran these simulations for 1000 samples and took the average of the sample realizations of the stochastic process to generate infection curves. An illustrative sketch of this event-driven loop is given after the section texts below.", "We ran stochastic simulations with parameter values given in Table 1 to illustrate the possibility of multiple infection waves for different scenarios with variation in the basic reproduction number, the fractions of treated and isolated ill individuals, and the capacity for treatment and isolation. Figure 4a shows that, since the transmission of the sensitive infection is largely blocked by a high treatment level, resistance emerges and causes the first infection wave of the outbreak. The second wave of resistant infections follows after the capacity of treatment and isolation (Tc) is exhausted, and declines when susceptibility of the population falls below a certain threshold that is sufficient to end the resistant outbreak (red curve). However, due to the higher fitness of the sensitive infection, a third wave of the outbreak occurs, which results in depletion of the susceptible population to levels sufficient for ending the epidemic (black curve). We observed similar behaviour in the mean-field model, as illustrated by the blue curve in Figure 4a. When the treatment level is reduced by a significant margin, the generated resistant infection is out-competed by the sensitive infection, which has a higher fitness advantage (Figure 4b), and only outbreaks of the sensitive infection occur; the second wave takes place after the capacity of treatment is fully dispensed (black curve). While the mean-field model also produces similar results (blue curve), we observed differences in the behaviour of the stochastic model. A small reduction in the fraction of isolated individuals leads to the elimination of the second wave in the stochastic model, while the mean-field model still produces a second wave with an even larger magnitude than the first wave of the outbreak (Figure 4c). This suggests that not only are stochastic effects important during the early stages of disease onset, but they can also play a critical role in shaping the epidemic well beyond the establishment phase of the disease.\nStochastic simulations. Stochastic simulations for the time-courses of the epidemic (including sensitive and resistant infections without isolation) using parameter values given in Table 1 of the main text with: (a) R0 = 1.9, p = 0.65, q = 0.72, Tc = 19.5% (three infection waves); (b) R0 = 1.9, p = 0.5, q = 0.65, Tc = 15% (two infection waves); and (c) R0 = 1.9, p = 0.5, q = 0.66, Tc = 16% (one infection wave). Black and red curves correspond respectively to the sensitive (untreated and treated: I + IT) and resistant (untreated and treated: Ir + IT,r) infections. Blue curves illustrate the corresponding scenarios for the total number of infections (I + IT + Ir + IT,r) during the epidemic simulated in the mean-field model. In all simulations, the initial number of infected cases is E0 = 10.", "Developed mean-field model and performed simulations: LW, SM. Developed stochastic model and performed simulations: YH, SM. Designed the study and wrote the paper: JW, SM. All the authors have read the final version of the paper and approved it.", "The authors declare that they have no competing interests." ]
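The compartmental structure described under "The model structure" above can be turned directly into a numerical sketch. Since the system of differential equations (1) is not reproduced in this record, the right-hand side below is a hedged reconstruction assembled from the compartment list and the event frequencies quoted under "Algorithm for stochastic simulations" (the η expression). In particular, the split of identified exposed cases between treatment and isolation via p and q, the reading of α as the rate at which treated sensitive cases acquire resistance, and all numerical rate values are assumptions made for illustration, not statements or Table 1 values from the source.

```python
from scipy.integrate import solve_ivp

# Hedged reconstruction of the mean-field system (1).
# Compartments: S, E, Er, I, Ir, IT, ITr, J, R (as listed in "The model structure").
# Assumptions (not from the source): exposed cases leave E/Er at rate mu_E and split into
# untreated (1 - p), treated (p*q) and isolated (p*(1 - q)) cases; alpha is the rate at
# which treated sensitive cases (IT) acquire resistance (ITr); isolated cases (J) do not transmit.
def rhs(t, y, beta, delta_T, delta_r, mu_E, gamma, alpha, p, q):
    S, E, Er, I, Ir, IT, ITr, J, R = y
    lam_s = beta * (I + delta_T * IT) * S        # force of infection, sensitive strain
    lam_r = delta_r * beta * (Ir + ITr) * S      # force of infection, resistant strain
    dS   = -lam_s - lam_r
    dE   = lam_s - mu_E * E
    dEr  = lam_r - mu_E * Er
    dI   = (1 - p) * mu_E * E - gamma * I
    dIr  = (1 - p) * mu_E * Er - gamma * Ir
    dIT  = p * q * mu_E * E - (gamma + alpha) * IT
    dITr = p * q * mu_E * Er + alpha * IT - gamma * ITr
    dJ   = p * (1 - q) * mu_E * (E + Er) - gamma * J
    dR   = gamma * (I + Ir + IT + ITr + J)
    return [dS, dE, dEr, dI, dIr, dIT, dITr, dJ, dR]

# Illustrative run: S0 = 10,000 susceptibles seeded with E0 = 10 exposed cases, R0 = 1.9
# so that beta = R0 * gamma / S0 (from R0 = beta * S0 / gamma); other rates are placeholders.
S0, E0 = 10_000, 10
gamma, mu_E, alpha = 1 / 4.1, 1 / 1.9, 0.02                # assumed rates (per day)
beta = 1.9 * gamma / S0
params = (beta, 0.4, 0.8, mu_E, gamma, alpha, 0.65, 0.72)  # delta_T, delta_r, p, q assumed
y0 = [S0 - E0, E0, 0, 0, 0, 0, 0, 0, 0]
sol = solve_ivp(rhs, (0, 300), y0, args=params, dense_output=True)
print("final susceptibles:", sol.y[0, -1])
```

Changing p, q and the run length in this sketch is enough to reproduce qualitatively the single-wave versus multi-wave behaviour discussed above, although the run-out of treatment capacity (Tc) would need an additional bookkeeping variable that is omitted here.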
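The expressions labelled (2) and the closed form for p* are likewise missing from the record, so the following LaTeX fragment is a hedged reconstruction rather than a quotation of the source. It is built only to be consistent with two facts the text does retain: Rc reduces to R0 = βS0/γ in the absence of treatment and isolation, and containment of the sensitive strain becomes infeasible (p* > 1) exactly when R0 exceeds R* = (γ + α)/(δT q γ). That consistency forces the reading of q used here (the share of identified cases treated without isolation); if q is instead read as the isolated share, q and (1 - q) swap throughout and the structure of the thresholds is unchanged.

```latex
% Hedged reconstruction of (2); not a quotation of the source text. Requires \usepackage{amsmath}.
\begin{align*}
  R_c &= \max\{R_{c,s},\, R_{c,r}\},\\[2pt]
  R_{c,s} &= \beta S_0\!\left[\frac{1-p}{\gamma} + \frac{p\,q\,\delta_T}{\gamma+\alpha}\right]
          = R_0\!\left[(1-p) + \frac{p\,q\,\delta_T\,\gamma}{\gamma+\alpha}\right],\\[2pt]
  R_{c,r} &= \frac{\delta_r \beta S_0}{\gamma}\,\bigl[1 - p(1-q)\bigr]
          = \delta_r R_0\,\bigl[1 - p(1-q)\bigr].
\end{align*}
% Setting R_{c,s} = 1 and solving for p gives the containment threshold for the sensitive strain:
\begin{equation*}
  p^{*} = \frac{R_0 - 1}{R_0\left(1 - \dfrac{q\,\delta_T\,\gamma}{\gamma+\alpha}\right)},
  \qquad
  p^{*} > 1 \;\Longleftrightarrow\; R_0 > R^{*} = \frac{\gamma+\alpha}{\delta_T\, q\, \gamma}.
\end{equation*}
```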
[ "Background", "Methods", "The model structure", "Reproduction numbers", "Simulations and results", "Discussion", "Appendix: Stochastic model", "Algorithm for stochastic simulations", "Stochastic simulations", "Authors’ contributions", "Competing interests" ]
[ "Epidemics of infectious diseases have been observed throughout history, with substantial variability in their dynamical patterns. The 1918 influenza pandemic is a notorious case documented as the most devastating epidemic with over 50 million deaths and multiple outbreaks in many geographic areas worldwide [1,2]. Distinct pandemic infection waves were recorded with an 8 to 15 week interval; the latter were more severe than the first and were associated with the majority of deaths [2,3]. Although several factors may be involved, such as the effect of seasonal changes, demographics, and evolution of the virus, the true mechanism by which subsequent waves occur is not fully understood. Nor is it clearly understood how different control measures and strategies for deployment of limited health resources may interfere with disease dynamics and the occurrence of later infection waves.\nRecent epidemiological and modelling studies have attempted to provide explanatory theories for the mechanisms of multiple outbreaks of an infectious pathogen capable of establishing an epidemic [2,4-10]. Spontaneous behavioural changes (e.g., a change in the number of contacts due to modified behaviour of susceptible individuals) have been shown to affect the course of infection events and produce subsequent outbreaks in an epidemic episode [9]. This has been further investigated through modelling “concerned awareness” of individuals that may result in contagion dynamics of fear and disease [6], and the implementation of public health control measures (e.g., social distancing) that may interfere with the individuals’ contact patterns during the epidemic [5]. Co-infection has also been suggested as a possible explanation for multiple infection outbreaks as a result of increased transmissibility in co-infected individuals and non-synchronicity in the time course of the two co-circulating infections [8]. Other possible mechanisms include transient post-infection immunity and evolutionary changes that may occur in the characteristics of the infectious pathogens [2,4,10].\nIn this study, we consider the occurrence of multiple infection waves of a pathogen from a public health perspective, and develop mathematical models to investigate how intervention measures may affect the transmission dynamics in a population. Specifically, we are interested in exploring the impact of changes in policy-relevant parameters on the patterns of disease spread during the course of an epidemic. These parameters may reflect the effectiveness of intervention strategies (e.g., treatment or isolation of infected cases) in reducing disease transmission, or their epidemiological consequences (e.g., emergence of drug resistance), and may therefore play an important role in determining the outcome of disease control activities. The significance of this work thus relates to the process of public health decision-making, in particular when confronting the emergence of a novel infectious disease with substantial uncertainty regarding the epidemiological characteristics of the invading pathogen.\nFor the purpose of this investigation, we develop both mean-field and stochastic epidemiological models that describe the transmission dynamics of a disease in the population, and incorporate treatment and isolation of infected cases as control measures. We parameterize these models to simulate the spread of influenza as a case study, and determine the impact of control parameters on disease dynamics. 
We illustrate the occurrence of multiple infection waves associated with different treatment levels and the development of drug resistance in the population under the scenario of limited capacity for treatment and isolation of infectious individuals. We compare the results obtained by simulating the mean-field model with those observed in the stochastic model, and discuss our findings in the context of epidemiology and public health.", "[SUBTITLE] The model structure [SUBSECTION] To formulate the models for describing disease epidemic, we assume that the population is initially entirely susceptible to the infectious pathogen. It is assumed that the infection can be treated with drugs, but the pathogen may develop resistance during the course of treatment with potential for transmission. Since resistance emergence may impose fitness cost on pathogen replication and transmission [11], we assume that the drug-resistant pathogen is less transmissible than the drug-sensitive pathogen. Treatment is assumed to reduce transmissibility of the drug-sensitive infection, but remains ineffective against drug-resistant infection. We also assume that the recovery from infection confers immunity to re-infection with either drug-sensitive or resistant pathogens. Considering epidemics with relatively short time-courses, we ignore the effect of recruitment, natural death, and other demographic variables of the population.\nWith the assumption of homogeneous mixing, we divide the population into classes of susceptibles ( S ); individuals exposed (not yet infectious) to sensitive ( E ) and resistant ( Er ) infections; untreated individuals infected with sensitive ( I ) and resistant ( Ir ) infections; treated individuals infected with sensitive ( IT ) and resistant ( IT,r ) infections; isolated individuals infected with either sensitive or resistant infection ( J ); and recovered individuals ( R ). Figure 1 shows the movements of individuals between these classes during the course of an epidemic. With parameters described in Table 1, the dynamics of the mean-field model can be mathematically expressed by the following system of differential equations:(1)\nThe model. Model diagram for the movements of individuals between population compartments.\nParameter values.\nBaseline values of the parameters used for simulations of the models with sources from published literature. For a given value of R0, the baseline transmission rate β can be calculated using the expression R0 = βS0/γ.\nDetails of the model in its stochastic form are provided in the Appendix.\nTo formulate the models for describing disease epidemic, we assume that the population is initially entirely susceptible to the infectious pathogen. It is assumed that the infection can be treated with drugs, but the pathogen may develop resistance during the course of treatment with potential for transmission. Since resistance emergence may impose fitness cost on pathogen replication and transmission [11], we assume that the drug-resistant pathogen is less transmissible than the drug-sensitive pathogen. Treatment is assumed to reduce transmissibility of the drug-sensitive infection, but remains ineffective against drug-resistant infection. We also assume that the recovery from infection confers immunity to re-infection with either drug-sensitive or resistant pathogens. 
Considering epidemics with relatively short time-courses, we ignore the effect of recruitment, natural death, and other demographic variables of the population.\nWith the assumption of homogeneous mixing, we divide the population into classes of susceptibles ( S ); individuals exposed (not yet infectious) to sensitive ( E ) and resistant ( Er ) infections; untreated individuals infected with sensitive ( I ) and resistant ( Ir ) infections; treated individuals infected with sensitive ( IT ) and resistant ( IT,r ) infections; isolated individuals infected with either sensitive or resistant infection ( J ); and recovered individuals ( R ). Figure 1 shows the movements of individuals between these classes during the course of an epidemic. With parameters described in Table 1, the dynamics of the mean-field model can be mathematically expressed by the following system of differential equations:(1)\nThe model. Model diagram for the movements of individuals between population compartments.\nParameter values.\nBaseline values of the parameters used for simulations of the models with sources from published literature. For a given value of R0, the baseline transmission rate β can be calculated using the expression R0 = βS0/γ.\nDetails of the model in its stochastic form are provided in the Appendix.\n[SUBTITLE] Reproduction numbers [SUBSECTION] A key parameter in disease epidemiology is the basic reproduction number of the invading pathogen, commonly denoted by R0, which is the average number of new infections generated by a single infected case introduced into an entirely susceptible (non-immune) population [12]. The quantity R0 can be used to estimate the growth rate of an epidemic (during the initial phase) and the total number of infections (final size of the epidemic) [13]. When public health interventions are implemented, the reproduction number of disease is affected by parameters that determine the effectiveness of control measures; and we therefore introduce the control reproduction number ( Rc ) to evaluate the impact of such parameters on transmissibility of the pathogen and epidemic dynamics. Applying a previously established method [12,14], for model (1), we obtain , where and are respectively the reproduction numbers of the sensitive and resistant infections, expressed by(2)\nwhere S0 is the size of the susceptible population at the onset of the outbreak. In the absence of treatment and isolation, Rc reduces to the basic reproduction number of the sensitive pathogen, given by R0 = βS0/γ. Using the expression for in (2), one can easily calculate the critical value p* at which , and therefore the spread of the sensitive infection can be contained for p >p* . Rewriting in terms of R0 , the value p* is given by\nHowever, the spread of disease caused by the sensitive pathogen cannot be controlled if R0 exceeds the threshold R* = (γ + α)/δT qγ, which results in p* > 1. Since 0 ≤ q ≤ 1, for parameter values used in simulations (Table 1), disease control becomes infeasible if R0 > 2.5. Similarly, there is a critical value at which , and the spread of the resistant pathogen is contained if . 
Letting , the value of can be expressed as\nwhich highlights the importance of isolation for controlling the spread of resistant infection.\nA key parameter in disease epidemiology is the basic reproduction number of the invading pathogen, commonly denoted by R0, which is the average number of new infections generated by a single infected case introduced into an entirely susceptible (non-immune) population [12]. The quantity R0 can be used to estimate the growth rate of an epidemic (during the initial phase) and the total number of infections (final size of the epidemic) [13]. When public health interventions are implemented, the reproduction number of disease is affected by parameters that determine the effectiveness of control measures; and we therefore introduce the control reproduction number ( Rc ) to evaluate the impact of such parameters on transmissibility of the pathogen and epidemic dynamics. Applying a previously established method [12,14], for model (1), we obtain , where and are respectively the reproduction numbers of the sensitive and resistant infections, expressed by(2)\nwhere S0 is the size of the susceptible population at the onset of the outbreak. In the absence of treatment and isolation, Rc reduces to the basic reproduction number of the sensitive pathogen, given by R0 = βS0/γ. Using the expression for in (2), one can easily calculate the critical value p* at which , and therefore the spread of the sensitive infection can be contained for p >p* . Rewriting in terms of R0 , the value p* is given by\nHowever, the spread of disease caused by the sensitive pathogen cannot be controlled if R0 exceeds the threshold R* = (γ + α)/δT qγ, which results in p* > 1. Since 0 ≤ q ≤ 1, for parameter values used in simulations (Table 1), disease control becomes infeasible if R0 > 2.5. Similarly, there is a critical value at which , and the spread of the resistant pathogen is contained if . Letting , the value of can be expressed as\nwhich highlights the importance of isolation for controlling the spread of resistant infection.", "To formulate the models for describing disease epidemic, we assume that the population is initially entirely susceptible to the infectious pathogen. It is assumed that the infection can be treated with drugs, but the pathogen may develop resistance during the course of treatment with potential for transmission. Since resistance emergence may impose fitness cost on pathogen replication and transmission [11], we assume that the drug-resistant pathogen is less transmissible than the drug-sensitive pathogen. Treatment is assumed to reduce transmissibility of the drug-sensitive infection, but remains ineffective against drug-resistant infection. We also assume that the recovery from infection confers immunity to re-infection with either drug-sensitive or resistant pathogens. Considering epidemics with relatively short time-courses, we ignore the effect of recruitment, natural death, and other demographic variables of the population.\nWith the assumption of homogeneous mixing, we divide the population into classes of susceptibles ( S ); individuals exposed (not yet infectious) to sensitive ( E ) and resistant ( Er ) infections; untreated individuals infected with sensitive ( I ) and resistant ( Ir ) infections; treated individuals infected with sensitive ( IT ) and resistant ( IT,r ) infections; isolated individuals infected with either sensitive or resistant infection ( J ); and recovered individuals ( R ). 
Figure 1 shows the movements of individuals between these classes during the course of an epidemic. With parameters described in Table 1, the dynamics of the mean-field model can be mathematically expressed by the following system of differential equations:(1)\nThe model. Model diagram for the movements of individuals between population compartments.\nParameter values.\nBaseline values of the parameters used for simulations of the models with sources from published literature. For a given value of R0, the baseline transmission rate β can be calculated using the expression R0 = βS0/γ.\nDetails of the model in its stochastic form are provided in the Appendix.", "A key parameter in disease epidemiology is the basic reproduction number of the invading pathogen, commonly denoted by R0, which is the average number of new infections generated by a single infected case introduced into an entirely susceptible (non-immune) population [12]. The quantity R0 can be used to estimate the growth rate of an epidemic (during the initial phase) and the total number of infections (final size of the epidemic) [13]. When public health interventions are implemented, the reproduction number of disease is affected by parameters that determine the effectiveness of control measures; and we therefore introduce the control reproduction number ( Rc ) to evaluate the impact of such parameters on transmissibility of the pathogen and epidemic dynamics. Applying a previously established method [12,14], for model (1), we obtain , where and are respectively the reproduction numbers of the sensitive and resistant infections, expressed by(2)\nwhere S0 is the size of the susceptible population at the onset of the outbreak. In the absence of treatment and isolation, Rc reduces to the basic reproduction number of the sensitive pathogen, given by R0 = βS0/γ. Using the expression for in (2), one can easily calculate the critical value p* at which , and therefore the spread of the sensitive infection can be contained for p >p* . Rewriting in terms of R0 , the value p* is given by\nHowever, the spread of disease caused by the sensitive pathogen cannot be controlled if R0 exceeds the threshold R* = (γ + α)/δT qγ, which results in p* > 1. Since 0 ≤ q ≤ 1, for parameter values used in simulations (Table 1), disease control becomes infeasible if R0 > 2.5. Similarly, there is a critical value at which , and the spread of the resistant pathogen is contained if . Letting , the value of can be expressed as\nwhich highlights the importance of isolation for controlling the spread of resistant infection.", "To simulate the models, we considered influenza infection as a case study, for which emergence and spread of drug-resistance during an outbreak can result from treatment of infected individuals. We assumed that the epidemic is triggered by a drug-sensitive influenza virus, and investigated the role of several key model parameters in changing the epidemic patterns and generating multiple waves of infection. These parameters include the fractions of infected individuals identified for treatment or isolation, and the basic reproduction number of disease which varies within the estimated range published in the literature (Table 1). Since public health resources may be limited during an epidemic, we also defined a parameter (Tc ) as the capacity for treatment of infected individuals including those who are isolated (i.e., the percentage of the total population that can be treated). 
To illustrate various scenarios, we initially seeded a susceptible population of size S0 = 10,000 with E0 = 10 individuals exposed to the sensitive virus, and assumed that treatment can result in the emergence of resistance with the relative transmissibility δr = 0.8 during the outbreak. Other parameter values are given in Table 1.\nThe mean-field model was simulated for a number of scenarios to show the occurrence of multiple infection waves during an epidemic episode (Figure 2). These simulations indicate that variation in the transmissibility of the pathogen (determined by R0 ), as well as parameters that govern the effectiveness of control measures can significantly impact the epidemic profile, leading to sequences of infection waves with different magnitudes and time-courses. To explore the causes of these multiple outbreaks, we plotted time-courses of treated and untreated sensitive (black curves) and resistant infections (red curves), corresponding to epidemic profiles in Figures 2a-2d. As illustrated in Figures 2a-2b, a large scale use of treatment (combined with isolation) suppresses the spread of the sensitive infection quickly, but leads to the emergence and spread of resistance that causes the first wave of infection. Due to the limited capacity of treatment and isolation (run-out scenario), a second wave of infection follows as a result of wide-spread resistance (red curves), which declines once a sizable portion of the susceptible population is infected and the level of susceptibility reduces below a threshold that is sufficient to block the transmission of the resistant pathogen with reduced fitness. However, this level of susceptibility may still be above the threshold required for disease containment, and therefore the sensitive pathogen can cause the third wave of infection (black curves).\nMultiple infection waves during an epidemic episode. Simulations for the time-courses of epidemic using parameter values given in Table 1 with: (a) R0 = 1.6; p = 0.68; q = 0.8; Tc = 12% (three infection waves); (b) R0 =1.7; p = 0.66; q = 0.71; Tc = 11% (three infection waves); (c) R0 = 1.9; p = 0.47; q = 0.7; Tc = 18% (two infection waves); and (d) R0 = 2; p = 0.8; q = 0.64; Tc = 18% (two infection waves). Black and red curves correspond respectively to the total untreated and treated sensitive ( I + IT ) and resistance ( Ir + IT,r) infections without isolation.\nAs the reproduction number of the sensitive infection increases (Figures 2c-2d), higher treatment levels are required for the resistant infection to prevail and cause a significant outbreak [18]. For a reduced level of treatment and a higher transmissibility of the sensitive virus, corresponding to the epidemic profile in Figure 2c, we observed two infection waves, both of which are caused by the spread of the sensitive virus, with generation of very few cases of resistant infection. In this scenario, run-out occurs before epidemic is contained, and a second infection wave takes place. Similar dynamics can occur with two subsequent waves of resistant infections for a significantly higher treatment level (Figure 2d). However, the second wave that occurs after the treatment capacity is fully dispensed (run-out scenario) leads to a major reduction in susceptibility of the population; thereby ending the epidemic. 
These simulations indicate that multiple infection waves could occur due to limited resources for treatment/isolation of infected cases, the ways that such resources are deployed during the outbreak, the evolutionary responses of the pathogen to control measures (e.g., emergence of drug resistance), or a combination thereof. We performed further experiments with small changes in these parameters, and observed significant influences on the epidemic dynamics that can be associated with the elimination or creation of an infection wave. It is worth noting that the above scenarios can take place even for sufficient drug stockpiles for which run-out does not occur, if a policy for adaptation (e.g., reduction) of treatment at the population level is implemented due to wide-spread drug-resistance [15].\nFor comparison purposes, we simulated the stochastic version of the model using a Markov Chain Monte Carlo method and observed sequences of infection waves for different sets of parameter values (see Appendix). Consistent with previous observations [4], the stochastic model displays a later peak time of infection waves (with lower magnitudes) than the homogeneous mean-field model. This depends not only on the treatment level, but also on other parameters involved in the spread of sensitive and resistant infections, such as the reduction in the potentially infectious contacts and the fitness of resistance. Furthermore, stochastic effects can play a significant role in determining disease dynamics even during the outbreak well past the initial establishment phase of the epidemic. This is illustrated in Figure 4c of the Appendix that the epidemic dies out after the first outbreak in the stochastic model; whereas a second wave of infection takes place in the mean-field model with a larger magnitude compared to the first outbreak.\nIn addition to parameters pertaining to the nature of disease and effectiveness of interventions, the number of infected cases at the onset of an epidemic can greatly influence the dynamics of disease. Our simulations (Figure 3) indicate that small changes in the initial number of infections may result in different epidemic profiles exhibiting more than one infection wave. This suggests that the true dynamics of an emerging disease (with unknown initial number of infections) may not be predicted with certainty, even when reliable estimates of other pathogen-related and intervention parameters are available.\nThe effect of changes in the initial number of infections on epidemic profiles. Simulations for the time-courses of epidemic (total number of infections without isolation: I + IT + Ir + IT,r) using parameter values given in Table 1. Other parameters are: (a,b) R0 =1.9; p = 0.72; q = 0.71; Tc = 22.2% and initial infected cases of (a) E0 = 6 (three infection waves) and (b) E0 = 12 (single infection wave); (c,d) R0 = 2.5; p = 0.55; q = 0.41; Tc = 26% and initial infected cases of (c) E0 = 8 (single infection wave) and (d) E0 = 10 (two infection waves).", "Stellar advances in the prevention and management of infectious diseases have been achieved since the great influenza pandemic of 1918. Yet, emerging pathogens often inflict incalculable devastation to humanity. The global mobilization with rapid international transportation between populations makes the impact of such diseases even more dramatic with potential socioeconomic upheaval. 
This was recognized in 2003 with the appearance of severe acute respiratory syndrome (SARS) as the first major infectious disease threat of the 21st century [21], and was recently experienced with the worldwide spread of a swine-origin influenza A virus H1N1, that led the World Health Organization to declare this virus as the cause of an influenza pandemic on June 11, 2009 [22]. Public health responses to the emergence of new diseases often involve difficult decisions on optimal use of health resources over very short timelines. Such decisions are further confounded by substantial uncertainties regarding the epidemiological characteristics of the novel infectious pathogen, the effectiveness of public health intervention strategies, and the evolutionary responses of the pathogen under the pressure of control measures [23]. From a population health perspective, it is therefore imperative to look beyond short-term targets and account for long-term disease outcomes in strategy development and implementation. This is particularly important for preventing multiple infection outbreaks that may result from imprudent use of resources or unintended adverse consequences of disease containment strategies.\nGiven the historical evidence for the occurrence of multiple infection waves [2,3,7], several modelling studies have attempted to provide explanatory theories for these events in a single epidemic course [2,5,6,8-10]. In this study, we developed mean-field and stochastic models to investigate possible causes of sequential outbreaks from a public health perspective. Our results show that epidemic dynamics can be substantially affected by factors that influence policy design and implementation (e.g., treatment level or isolation of infected individuals), and parameters that determine the effectiveness and consequences of control measures (e.g., reduction in infectiousness due to treatment or emergence of drug-resistance). Furthermore, the initial number of infections can influence disease outcomes. While mean-field and stochastic models may exhibit similar epidemic behaviour, we also observed differences in their predictions in terms of the speed with which disease spreads through the population (with further delay in the peak time of outbreaks in the stochastic model); the magnitudes of infection outbreaks; and more importantly, the occurrence of infection waves (see Appendix). The latter is particularly influenced by stochastic effects, in addition to the structure of contact patterns and heterogeneity in population interactions [4]. Previous work [4,24] provides a solid foundation for extension of this study through the development of network dynamical models of disease transmission in which heterogeneous contacts between individuals are accounted for.\nIn this study, we simplified the models and included compartments corresponding to some possible stages of a disease; yet we understand that different pathogens may cause infections with different clinical manifestations and infectiousness periods. For example, influenza is known to have a short latent period of less than 2 days before becoming infectious [17], followed by a pre-symptomatic infection during which disease can be transmitted without showing clinical symptoms; however, the latent period of SARS is estimated to be longer and may be comparable to the duration of a complete course of influenza infection [17]. 
It is also well-documented that influenza can be transmitted in asymptomatic form without developing clinical symptoms [25]; while evidence for asymptomatic transmission of SARS is rather scant. These discrepancies in infection stages of human diseases, combined with the ability of the pathogens to overcome the pressures that are applied to limit their replication and spread, can profoundly impact not only the feasibility and effectiveness of control measures, but also the dynamics of disease over the course of an epidemic. Our study highlights these considerations for further investigation, while demonstrating possible mechanisms for the occurrence of multiple infection waves in a single epidemic. Future research in this direction should address some limitations of the present study, including a systematic exploration of parameter space to characterize which intervention parameter regimes are more likely to give rise to sequences of infection outbreaks, and to determine the sensitivity of model outputs (epidemic dynamics) on parameter changes.\nAlthough models considered here are simulated for influenza infection as a case study, understanding the interplay between intervention parameters, evolutionary responses of the pathogens, and epidemic dynamics remains a critical objective of public health for many diseases [26], including HIV, tuberculosis, malaria, and several bacterial infections. Such diseases often share common features, including the emergence and prevalence of drug resistant pathogens under the pressure of drug treatment. The initial rise of resistance is generally associated with fitness costs that make the resistant pathogen less capable of competing with the sensitive pathogen (as the dominant competitor) in a given host population [11]. However, evolutionary mechanisms (e.g., compensatory mutations [27]) may improve the fitness of resistant pathogens, and therefore intervention measures may result in further selection of resistance, as has been documented for the global spread of seasonal influenza drug resistance that appears to be associated with fitness enhancement processes [28]. This suggests that future modelling efforts should integrate factors that govern pathogen-host interactions with the mechanisms of disease epidemiology to guide public health in devising novel and effective means of infection control.", "With the same population compartments as defined in the mean-field model described in the main text, we develop a stochastic model for disease transmission dynamics to investigate the epidemic patterns with random effects. We consider time t as a continuous variable, and define the following random vector for t ∈ [0, ∞)\nwith that represents changes that occur to the random vector at Δt units of time. We define the transition probability as(3)\nwhere\nThe function Θ(·) describes the status of an individual in a subpopulation (i.e., Θ(·) = –1: an individual leaves the subpopulation; Θ(·) = 0: no changes occur to the individuals' status in the subpopulation; Θ(·) = 1: an individual enters the subpopulation). We assume that Δt is sufficiently small, so that at most one change of status can occur during the time interval Δt, which can be viewed as a Markov chain process. 
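To make the transition structure concrete, the sketch below shows one way the events of such a Markov chain could be encoded as a table of (rate, state change) pairs, with at most one event firing in a small time step. The compartment names and rate expressions are assumptions pieced together from the text and from the total event frequency η quoted in the algorithm description below, not the authors' code; the complete set of transitions corresponds to Table 2, which is not reproduced here.

```python
# Illustrative encoding of the transition structure as an event table.
# Each event is a (rate, state_change) pair; at most one event fires per step.
# Rates and destination compartments are assumptions reconstructed from the
# text; the remaining transitions of Table 2 follow the same pattern.

def some_events(x, beta, delta_T, delta_r, p, q, mu_E, gamma):
    return [
        # new sensitive infection caused by untreated and treated cases
        (beta * (x["I"] + delta_T * x["IT"]) * x["S"], {"S": -1, "E": +1}),
        # new resistant infection, with relative fitness delta_r
        (delta_r * beta * (x["Ir"] + x["ITr"]) * x["S"], {"S": -1, "Er": +1}),
        # progression out of the exposed class, split by treatment and emergence of resistance
        ((1 - p) * mu_E * x["E"],     {"E": -1, "I": +1}),
        (p * q * mu_E * x["E"],       {"E": -1, "IT": +1}),
        (p * (1 - q) * mu_E * x["E"], {"E": -1, "Ir": +1}),
        # recovery of untreated sensitive cases (analogous events exist for the other classes)
        (gamma * x["I"],              {"I": -1, "R": +1}),
    ]
```

Any transition listed in Table 2 can be appended to this table in the same way without changing the simulation loop that consumes it.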
The resulting stochastic model can be described as a continuous time Markov model, with the transition probabilities given in Table 2.\nPossible transitions between model compartments that can occur in Δt units of time\n[SUBTITLE] Algorithm for stochastic simulations [SUBSECTION] For simulating the stochastic model, we used the Markov Chain Monte Carlo method, with an initial E(0) = 10 exposed individuals to sensitive infection in a population of S0 = 10, 000 susceptibles. A key parameter in these simulations is the step-size of the Monte Carlo method. Using a fixed step-size requires a large number of steps to guarantee that the transitions between subpopulations take place and disease transmission can occur, which is computationally very demanding in terms of both timing and resources. To reduce such a computational load, we implemented an adaptive step-size method [29] to estimate the transition time to the next event (Δt) by calculating the sum of the frequencies of all possible events, given by η = β(I + δTIT)S(t) + δrβ(Ir + IT,r)S(t) + (1 – p)µE(E + Er) + pqµE(E + Er) + αIT + p(1 – q)µE(E + Er) + γ(I + Ir + IT,r + J + IT). Then, by choosing Δt = U1/η, where U1 is uniform distribution in the interval [0,1], we ordered all possible events as an increasing fraction of η and generated another uniform deviate (U2 ∈ [0,1]) to determine the nature of the next event. For the convergence of the results, we ran these simulations for 1000 samples, and considered the average of sample realizations of the stochastic process to generate infection curves.\nFor simulating the stochastic model, we used the Markov Chain Monte Carlo method, with an initial E(0) = 10 exposed individuals to sensitive infection in a population of S0 = 10, 000 susceptibles. A key parameter in these simulations is the step-size of the Monte Carlo method. Using a fixed step-size requires a large number of steps to guarantee that the transitions between subpopulations take place and disease transmission can occur, which is computationally very demanding in terms of both timing and resources. To reduce such a computational load, we implemented an adaptive step-size method [29] to estimate the transition time to the next event (Δt) by calculating the sum of the frequencies of all possible events, given by η = β(I + δTIT)S(t) + δrβ(Ir + IT,r)S(t) + (1 – p)µE(E + Er) + pqµE(E + Er) + αIT + p(1 – q)µE(E + Er) + γ(I + Ir + IT,r + J + IT). Then, by choosing Δt = U1/η, where U1 is uniform distribution in the interval [0,1], we ordered all possible events as an increasing fraction of η and generated another uniform deviate (U2 ∈ [0,1]) to determine the nature of the next event. For the convergence of the results, we ran these simulations for 1000 samples, and considered the average of sample realizations of the stochastic process to generate infection curves.\n[SUBTITLE] Stochastic simulations [SUBSECTION] We ran stochastic simulations with parameter values given in Table 1 to illustrate the possibility of multiple infection waves for different scenarios with variation in the basic reproduction number, fractions of treated and isolated ill individuals, and the capacity for treatment and isolation. Figure 4a shows that, since the transmission of the sensitive infection is largely blocked by a high treatment level, resistance emerges and causes the first infection wave of the outbreak. 
The second wave of resistant infections follows after the capacity of treatment and isolation (Tc) is exhausted, and declines when susceptibility of the population falls below a certain threshold that is sufficient to end the resistant outbreak (red curve). However, due to higher fitness of the sensitive infection, a third wave of outbreak occurs which results in depletion of the susceptible population to levels sufficient for ending the epidemic (black curve). We observed similar behaviour in the mean-field model, as illustrated by the blue curve in Figure 4a. When treatment level is reduced by a significant margin, generated resistant infection is out-competed by the sensitive infection which has a higher fitness advantage (Figure 4b), and only outbreaks of the sensitive infection occur; the second wave takes place after the capacity of treatment is fully dispensed (black curve). While, the mean-field model also produces similar results (blue curve), we observed differences in the behaviour of the stochastic model. A small reduction in the fraction of isolated individuals leads to the elimination of the second wave in the stochastic model, while mean-field model still produces a second wave with even a larger magnitude than the first wave of the outbreak (Figure 4c). This suggests that not only are stochastic effects important during the early stages of disease outset, but they also can play a critical role in shaping the epidemic well beyond the establishment phase of the disease.\nStochastic simulations. Stochastic simulations for the time-courses of epidemic (including sensitive and resistant infections without isolation) using parameter values given in Table 1 of the main text with: (a) R0 = 1.9, p = 0.65, q = 0.72, Tc = 19.5% (three infection waves); (b) R0 = 1.9, p = 0.5, q = 0.65, Tc = 15% (two infection waves); and (c) R0 = 1.9, p = 0.5, q = 0.66, Tc = 16% (one infection wave). Black and red curves correspond respectively to the sensitive (untreated and treated: I + Iτ) and resistant (untreated and treated: Ir + IT,r) infections. Blue curves illustrate the corresponding scenarios for the total number of infections (I + IT + Ir + IT,r) during epidemic simulated in the mean-field model. In all simulations, initial number of infected cases is E0 = 10\nWe ran stochastic simulations with parameter values given in Table 1 to illustrate the possibility of multiple infection waves for different scenarios with variation in the basic reproduction number, fractions of treated and isolated ill individuals, and the capacity for treatment and isolation. Figure 4a shows that, since the transmission of the sensitive infection is largely blocked by a high treatment level, resistance emerges and causes the first infection wave of the outbreak. The second wave of resistant infections follows after the capacity of treatment and isolation (Tc) is exhausted, and declines when susceptibility of the population falls below a certain threshold that is sufficient to end the resistant outbreak (red curve). However, due to higher fitness of the sensitive infection, a third wave of outbreak occurs which results in depletion of the susceptible population to levels sufficient for ending the epidemic (black curve). We observed similar behaviour in the mean-field model, as illustrated by the blue curve in Figure 4a. 
When treatment level is reduced by a significant margin, generated resistant infection is out-competed by the sensitive infection which has a higher fitness advantage (Figure 4b), and only outbreaks of the sensitive infection occur; the second wave takes place after the capacity of treatment is fully dispensed (black curve). While, the mean-field model also produces similar results (blue curve), we observed differences in the behaviour of the stochastic model. A small reduction in the fraction of isolated individuals leads to the elimination of the second wave in the stochastic model, while mean-field model still produces a second wave with even a larger magnitude than the first wave of the outbreak (Figure 4c). This suggests that not only are stochastic effects important during the early stages of disease outset, but they also can play a critical role in shaping the epidemic well beyond the establishment phase of the disease.\nStochastic simulations. Stochastic simulations for the time-courses of epidemic (including sensitive and resistant infections without isolation) using parameter values given in Table 1 of the main text with: (a) R0 = 1.9, p = 0.65, q = 0.72, Tc = 19.5% (three infection waves); (b) R0 = 1.9, p = 0.5, q = 0.65, Tc = 15% (two infection waves); and (c) R0 = 1.9, p = 0.5, q = 0.66, Tc = 16% (one infection wave). Black and red curves correspond respectively to the sensitive (untreated and treated: I + Iτ) and resistant (untreated and treated: Ir + IT,r) infections. Blue curves illustrate the corresponding scenarios for the total number of infections (I + IT + Ir + IT,r) during epidemic simulated in the mean-field model. In all simulations, initial number of infected cases is E0 = 10", "For simulating the stochastic model, we used the Markov Chain Monte Carlo method, with an initial E(0) = 10 exposed individuals to sensitive infection in a population of S0 = 10, 000 susceptibles. A key parameter in these simulations is the step-size of the Monte Carlo method. Using a fixed step-size requires a large number of steps to guarantee that the transitions between subpopulations take place and disease transmission can occur, which is computationally very demanding in terms of both timing and resources. To reduce such a computational load, we implemented an adaptive step-size method [29] to estimate the transition time to the next event (Δt) by calculating the sum of the frequencies of all possible events, given by η = β(I + δTIT)S(t) + δrβ(Ir + IT,r)S(t) + (1 – p)µE(E + Er) + pqµE(E + Er) + αIT + p(1 – q)µE(E + Er) + γ(I + Ir + IT,r + J + IT). Then, by choosing Δt = U1/η, where U1 is uniform distribution in the interval [0,1], we ordered all possible events as an increasing fraction of η and generated another uniform deviate (U2 ∈ [0,1]) to determine the nature of the next event. For the convergence of the results, we ran these simulations for 1000 samples, and considered the average of sample realizations of the stochastic process to generate infection curves.", "We ran stochastic simulations with parameter values given in Table 1 to illustrate the possibility of multiple infection waves for different scenarios with variation in the basic reproduction number, fractions of treated and isolated ill individuals, and the capacity for treatment and isolation. Figure 4a shows that, since the transmission of the sensitive infection is largely blocked by a high treatment level, resistance emerges and causes the first infection wave of the outbreak. 
The second wave of resistant infections follows after the capacity of treatment and isolation (Tc) is exhausted, and declines when susceptibility of the population falls below a certain threshold that is sufficient to end the resistant outbreak (red curve). However, due to higher fitness of the sensitive infection, a third wave of outbreak occurs which results in depletion of the susceptible population to levels sufficient for ending the epidemic (black curve). We observed similar behaviour in the mean-field model, as illustrated by the blue curve in Figure 4a. When treatment level is reduced by a significant margin, generated resistant infection is out-competed by the sensitive infection which has a higher fitness advantage (Figure 4b), and only outbreaks of the sensitive infection occur; the second wave takes place after the capacity of treatment is fully dispensed (black curve). While, the mean-field model also produces similar results (blue curve), we observed differences in the behaviour of the stochastic model. A small reduction in the fraction of isolated individuals leads to the elimination of the second wave in the stochastic model, while mean-field model still produces a second wave with even a larger magnitude than the first wave of the outbreak (Figure 4c). This suggests that not only are stochastic effects important during the early stages of disease outset, but they also can play a critical role in shaping the epidemic well beyond the establishment phase of the disease.\nStochastic simulations. Stochastic simulations for the time-courses of epidemic (including sensitive and resistant infections without isolation) using parameter values given in Table 1 of the main text with: (a) R0 = 1.9, p = 0.65, q = 0.72, Tc = 19.5% (three infection waves); (b) R0 = 1.9, p = 0.5, q = 0.65, Tc = 15% (two infection waves); and (c) R0 = 1.9, p = 0.5, q = 0.66, Tc = 16% (one infection wave). Black and red curves correspond respectively to the sensitive (untreated and treated: I + Iτ) and resistant (untreated and treated: Ir + IT,r) infections. Blue curves illustrate the corresponding scenarios for the total number of infections (I + IT + Ir + IT,r) during epidemic simulated in the mean-field model. In all simulations, initial number of infected cases is E0 = 10", "Developed mean-field model and performed simulations: LW, SM. Developed stochastic model and performed simulations: YH, SM. Designed the study and wrote the paper: JW, SM. All the authors have read the final version of the paper and approved it.", "The authors declare that they have no competing interests." ]
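A minimal sketch of the adaptive step-size sampling procedure described above is given below. The event table here is a deliberately small stand-in (a two-event susceptible–infectious–recovered example) rather than the full Table 2 set, and the step Δt = U1/η follows the text; the more common Gillespie draw, Δt = −ln(U1)/η, would be a drop-in replacement.

```python
import random

def gillespie_run(x0, events_fn, t_end, rng):
    """One sample path of the continuous-time Markov chain, using the adaptive
    step described in the text: dt = U1/eta, then a second uniform deviate
    selects the next event from the cumulative fractions of eta."""
    x, t, path = dict(x0), 0.0, [(0.0, dict(x0))]
    while t < t_end:
        table = events_fn(x)
        eta = sum(rate for rate, _ in table)
        if eta <= 0.0:                       # no further events: outbreak is over
            break
        t += rng.random() / eta              # dt = U1 / eta
        u2, acc = rng.random() * eta, 0.0
        for rate, change in table:
            acc += rate
            if u2 < acc:                     # event chosen by the second deviate
                for comp, d in change.items():
                    x[comp] += d
                break
        path.append((t, dict(x)))
    return path

def sir_events(x, beta=0.3 / 10000, gamma=0.25):
    """Stand-in event table; the full model would list every transition of Table 2."""
    return [(beta * x["S"] * x["I"], {"S": -1, "I": +1}),
            (gamma * x["I"],         {"I": -1, "R": +1})]

def mean_curve(x0, events_fn, t_end, n_runs=1000, n_grid=200, key="I"):
    """Average many sample realizations on a fixed time grid, as in the text."""
    grid = [t_end * i / (n_grid - 1) for i in range(n_grid)]
    totals = [0.0] * n_grid
    for s in range(n_runs):
        path = gillespie_run(x0, events_fn, t_end, random.Random(s))
        j = 0
        for i, tg in enumerate(grid):
            while j + 1 < len(path) and path[j + 1][0] <= tg:
                j += 1
            totals[i] += path[j][1][key]
    return grid, [v / n_runs for v in totals]

grid, mean_infectious = mean_curve({"S": 9990, "I": 10, "R": 0}, sir_events, t_end=300.0)
```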
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Optimal H1N1 vaccination strategies based on self-interest versus group interest.
21356133
Influenza vaccination is vital for reducing H1N1 infection-mediated morbidity and mortality. To reduce transmission and achieve herd immunity during the initial 2009-2010 pandemic season, the US Centers for Disease Control and Prevention (CDC) recommended that initial priority for H1N1 vaccines be given to individuals under age 25, as these individuals are more likely to spread influenza than older adults. However, due to significant delay in vaccine delivery for the H1N1 influenza pandemic, a large fraction of population was exposed to the H1N1 virus and thereby obtained immunity prior to the wide availability of vaccines. This exposure affects the spread of the disease and needs to be considered when prioritizing vaccine distribution.
BACKGROUND
To determine optimal H1N1 vaccine distributions based on individual self-interest versus population interest, we constructed a game theoretical age-structured model of influenza transmission and considered the impact of delayed vaccination.
METHODS
Our results indicate that if individuals decide to vaccinate according to self-interest, the resulting optimal vaccination strategy would prioritize adults of age 25 to 49 followed by either preschool-age children before the pandemic peak or older adults (age 50-64) at the pandemic peak. In contrast, the vaccine allocation strategy that is optimal for the population as a whole would prioritize individuals of ages 5 to 64 to curb a growing pandemic regardless of the timing of the vaccination program.
RESULTS
Our results indicate that for a delayed vaccine distribution, the priorities that are optimal at a population level do not align with those that are optimal according to individual self-interest. Moreover, the discordance between the optimal vaccine distributions based on individual self-interest and those based on population interest is even more pronounced when vaccine availability is delayed. To determine optimal vaccine allocation for pandemic influenza, public health agencies need to consider both the changes in infection risks among age groups and expected patterns of adherence.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Child", "Child, Preschool", "Cost-Benefit Analysis", "Disease Outbreaks", "Female", "Humans", "Influenza A Virus, H1N1 Subtype", "Influenza Vaccines", "Influenza, Human", "Male", "Mass Vaccination", "Middle Aged", "Models, Statistical", "Monte Carlo Method", "Population Surveillance", "United States", "Vaccination" ]
3152777
null
null
Methods
To model the transmission of the 2009 H1N1 influenza and vaccination, we developed the age-structured model incorporating six epidemiological compartments (i.e. susceptible, vaccinated, latent, asymptomatic and infectious, symptomatic and infectious, and recovered). Each epidemiological compartment is then subdivided into two depending on an individual’s vaccination decision. The asymptotic dynamics of this model are then used to calculate the probability for individuals to become infected based on their vaccination decision. The expected cost of infection and vaccination associated with vaccine acceptance and refusal are calculated using these infection probabilities. Since the payoff of vaccination depends on both the individual’s decision and the population's average behaviour, we formulate our model as a population game. Monte Carlo methods are employed to determine the optimal vaccination levels driven by self-interest versus the population interest. [SUBTITLE] Mathematical model for disease transmission and vaccination [SUBSECTION] To model H1N1 influenza transmission in the United States, we divide the population into the five age groups (0-4, 5-24, 25–50, 50–64, and 65+), according to the age classes used in US CDC case reports [14]. The numbers of people in each age group were set to values estimated for the US 2008 population (Additional File) [15]. In our model, individuals in each age class k are subdivided based on epidemiological status. The dynamics of influenza infection, illness, and infectiousness reflect our current understanding of the natural history of influenza. Here, subscripts U and V represent an unvaccinated and vaccinated population, respectively. We assume that SU,k (t), LU,k (t), AU,k (t), IU,k (t), and RU,k (t) represents the respective number of unvaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals in age groups k at time t (k = 1, 2, . . . , 5). Similarly, we define SV,k (t), LV,k (t), AV,k (t), IV,k (t), and RV,k (t) as the respective number of vaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals (k = 1, 2, . . . , 5). We assume that the vaccine provides partial protection, resulting in vaccinated individuals being less susceptible than unvaccinated ones. Vaccinated individuals become infected at a fraction (1-σk) of the rate at which unvaccinated susceptible individuals become infected, where σk is the efficacy of the vaccine against infection for individuals of age group k (Additional File). We consider the three vaccination scenarios where vaccines become available before, at, or after the peak of an influenza pandemic. Thus, when vaccines become available, we assume that a proportion, ψk, of susceptible individuals in age group k is vaccinated. We also assume that the same proportion, ψk, of individuals in age group k who have been infected asymptomatically still may get vaccinated, because they were not aware of exposure to novel influenza A (H1N1) viruses. However, vaccine doses given to those who were already exposed to H1N1 viruses are assumed to be wasted, because these individuals already gained immunity to the H1N1 strain. Recovered individuals are assumed to be fully protected against further influenza infection for the remainder of the outbreak. Upon infection, individuals enter a latency period, 1/δ. 
Latently infected individuals proceed to become infectious, and a proportion, p, of infected individuals becomes symptomatic. Infectious individuals recover after an average period of 1/γ. The inf luenza-induced death rates are αU,k and αV,k for unvaccinated and vaccinated individuals, respectively, for people in age group k. Age-specific influenza-related death rates are based on estimates of excess pneumonia and on influenza deaths from the H1N1 influenza [16]. The transmission dynamics are thus described by the following differential equations: for k=1,…, 5. We used a standard-incidence form for the force of infection λk: where N is the total population size. Thus, it follows that where Nk is the number of people of age group k, i.e. Here ϕkm is the number of contacts per day between a person in age group k with people in age group m, and β is the probability of infection for a susceptible person who has contact with an infectious person. As both epidemiological and serological data are suggestive of residual immunity to H1N1 among adults and seniors, we assume that a proportion (ξk) of individuals in age group k is immune to H1N1 viruses [4]. The residual immunity incorporates the fact that younger people are more susceptible to the current H1N1 strain than older people due to lack of exposure to a similar virus in the past [4,17]. The demographic effects of aging, birth, and death by causes unrelated to influenza are not included because we only model one influenza season, where these demographic effects are negligible. The epidemic is initiated with a proportion of each age group assumed to be immune to infection, with one person of each age group assumed infectious, and with the remaining population assumed susceptible. That is, We assume that an influenza pandemic approaches its peak at time t=ω, and a proportion of the population is vaccinated at time τ=ω±θ where θ=0 or 21 days. We assume that vaccination instantaneously protects people, so that the state variables change discontinuously at t=τ: with the other state variables remaining the same. We further assume that the basic reproductive number (R0), defined as the number of secondary cases caused by a single infective case in a completely susceptible population, was 1.4, as estimated for the novel swine-origin H1N1 influenza outbreak [18]. For sensitivity analysis, the basic reproductive ratio was increased from 1.4 to 1.6 (Figure 1). We parameterized age-specific contact rates, ϕkm, using data from a large-scale survey of daily contacts [19]. These contact data show strong mixing between people of similar ages and moderately high mixing between children and people of their parents’ ages [20]. Given the contact data and US population size, we reconstructed the contact matrix to match our five age groups [19,20]. Using the relative size of the age group m (Nm / N) and the number of contacts per person in age group k with people in age group m, ckm, we define the elements of the contact matrix by . To ensure that the number of contacts between age groups is symmetric, Nmckm = Nkcmk, i.e., ϕkm = ϕmk, we made further adjustment, , and used ϕkm to be the contact matrix. (a) Nash and (b) utilitarian strategies when basic reproductive ratio is 1.6. Vaccination is assumed to be offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza. 
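The reciprocity adjustment of the contact matrix described above can be made concrete with a short sketch. The exact formulas did not survive the extraction, so the correction used here, averaging the total contacts reported in both directions so that Nk·ϕkm = Nm·ϕmk holds exactly, is one standard choice and an assumption rather than necessarily the authors' adjustment; the survey numbers and group sizes are placeholders.

```python
import numpy as np

# c[k, m]: reported daily contacts of one person in age group k with group m
# (placeholder values standing in for the survey data of ref. [19]).
c = np.array([[1.9, 2.1, 3.1, 0.8, 0.3],
              [1.3, 9.4, 4.0, 1.2, 0.4],
              [1.0, 2.2, 5.5, 1.9, 0.6],
              [0.5, 1.3, 3.0, 2.6, 0.9],
              [0.3, 0.6, 1.4, 1.3, 1.5]])
N = np.array([21e6, 84e6, 104e6, 56e6, 39e6])   # approximate US 2008 group sizes

# Survey-reported totals from k to m and from m to k generally disagree;
# average them and convert back to per-capita rates so that total contacts balance.
total = 0.5 * (c * N[:, None] + (c * N[:, None]).T)   # symmetric total-contact matrix
phi = total / N[:, None]                              # per-capita, reciprocal contact rates

assert np.allclose(N[:, None] * phi, (N[:, None] * phi).T)
```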
To model H1N1 influenza transmission in the United States, we divide the population into the five age groups (0-4, 5-24, 25–50, 50–64, and 65+), according to the age classes used in US CDC case reports [14]. The numbers of people in each age group were set to values estimated for the US 2008 population (Additional File) [15]. In our model, individuals in each age class k are subdivided based on epidemiological status. The dynamics of influenza infection, illness, and infectiousness reflect our current understanding of the natural history of influenza. Here, subscripts U and V represent an unvaccinated and vaccinated population, respectively. We assume that SU,k (t), LU,k (t), AU,k (t), IU,k (t), and RU,k (t) represents the respective number of unvaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals in age groups k at time t (k = 1, 2, . . . , 5). Similarly, we define SV,k (t), LV,k (t), AV,k (t), IV,k (t), and RV,k (t) as the respective number of vaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals (k = 1, 2, . . . , 5). We assume that the vaccine provides partial protection, resulting in vaccinated individuals being less susceptible than unvaccinated ones. Vaccinated individuals become infected at a fraction (1-σk) of the rate at which unvaccinated susceptible individuals become infected, where σk is the efficacy of the vaccine against infection for individuals of age group k (Additional File). We consider the three vaccination scenarios where vaccines become available before, at, or after the peak of an influenza pandemic. Thus, when vaccines become available, we assume that a proportion, ψk, of susceptible individuals in age group k is vaccinated. We also assume that the same proportion, ψk, of individuals in age group k who have been infected asymptomatically still may get vaccinated, because they were not aware of exposure to novel influenza A (H1N1) viruses. However, vaccine doses given to those who were already exposed to H1N1 viruses are assumed to be wasted, because these individuals already gained immunity to the H1N1 strain. Recovered individuals are assumed to be fully protected against further influenza infection for the remainder of the outbreak. Upon infection, individuals enter a latency period, 1/δ. Latently infected individuals proceed to become infectious, and a proportion, p, of infected individuals becomes symptomatic. Infectious individuals recover after an average period of 1/γ. The inf luenza-induced death rates are αU,k and αV,k for unvaccinated and vaccinated individuals, respectively, for people in age group k. Age-specific influenza-related death rates are based on estimates of excess pneumonia and on influenza deaths from the H1N1 influenza [16]. The transmission dynamics are thus described by the following differential equations: for k=1,…, 5. We used a standard-incidence form for the force of infection λk: where N is the total population size. Thus, it follows that where Nk is the number of people of age group k, i.e. Here ϕkm is the number of contacts per day between a person in age group k with people in age group m, and β is the probability of infection for a susceptible person who has contact with an infectious person. As both epidemiological and serological data are suggestive of residual immunity to H1N1 among adults and seniors, we assume that a proportion (ξk) of individuals in age group k is immune to H1N1 viruses [4]. 
The residual immunity incorporates the fact that younger people are more susceptible to the current H1N1 strain than older people due to lack of exposure to a similar virus in the past [4,17]. The demographic effects of aging, birth, and death by causes unrelated to influenza are not included because we only model one influenza season, where these demographic effects are negligible. The epidemic is initiated with a proportion of each age group assumed to be immune to infection, with one person of each age group assumed infectious, and with the remaining population assumed susceptible. That is, We assume that an influenza pandemic approaches its peak at time t=ω, and a proportion of the population is vaccinated at time τ=ω±θ where θ=0 or 21 days. We assume that vaccination instantaneously protects people, so that the state variables change discontinuously at t=τ: with the other state variables remaining the same. We further assume that the basic reproductive number (R0), defined as the number of secondary cases caused by a single infective case in a completely susceptible population, was 1.4, as estimated for the novel swine-origin H1N1 influenza outbreak [18]. For sensitivity analysis, the basic reproductive ratio was increased from 1.4 to 1.6 (Figure 1). We parameterized age-specific contact rates, ϕkm, using data from a large-scale survey of daily contacts [19]. These contact data show strong mixing between people of similar ages and moderately high mixing between children and people of their parents’ ages [20]. Given the contact data and US population size, we reconstructed the contact matrix to match our five age groups [19,20]. Using the relative size of the age group m (Nm / N) and the number of contacts per person in age group k with people in age group m, ckm, we define the elements of the contact matrix by . To ensure that the number of contacts between age groups is symmetric, Nmckm = Nkcmk, i.e., ϕkm = ϕmk, we made further adjustment, , and used ϕkm to be the contact matrix. (a) Nash and (b) utilitarian strategies when basic reproductive ratio is 1.6. Vaccination is assumed to be offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza. [SUBTITLE] Cost parameterization [SUBSECTION] To calculate the average individual net payoff of vaccination strategy, we incorporated the costs associated with infection, vaccination, and the side effects of the vaccine (Table 1). We calculate the cost of infection using weighted average of the costs associated with possible infection outcomes such as mortality, hospitalization, outpatient visits, and cases without medical care. The cost of vaccination includes the value of an individual's time receiving it ($16), and travel cost ($4), resulting in the total cost of vaccination estimated at $20 [21]. The cost of administration is not included in baseline parameters because vaccine for the 2009 novel H1N1 influenza was provided free of charge in the US. However, for sensitivity analysis, we increase the cost of administration from $0 up to $20, in order to examine the elasticity of the Nash and utilitarian strategies to a range of vaccination cost (Figures 2 and 3). 
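As a concrete illustration of the weighted-average costing described above, the sketch below combines outcome probabilities with unit costs; the numbers are placeholders (the study's actual values are in Table 1, which is not reproduced here), so only the structure of the calculation should be read from it.

```python
# Expected cost of a symptomatic infection for one age group, built as the
# probability-weighted average over mutually exclusive outcomes.
# All probabilities and costs below are placeholders, not the Table 1 values.
outcomes = {                       # (probability of outcome, cost of outcome in USD)
    "death":            (0.0004, 1_045_278),   # value of life used in the paper
    "hospitalization":  (0.006,  20_000),
    "outpatient_visit": (0.40,   250),
    "no_medical_care":  (0.5936, 60),
}
assert abs(sum(p for p, _ in outcomes.values()) - 1.0) < 1e-9

cost_infection = sum(p * cost for p, cost in outcomes.values())

# Vaccination cost as composed in the text: time ($16) + travel ($4) + an
# administration fee ($0 in the base case, varied up to $20 in the sensitivity analysis).
def cost_vaccination(admin_fee=0.0):
    return 16.0 + 4.0 + admin_fee

print(round(cost_infection, 2), cost_vaccination(), cost_vaccination(10.0))
```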
Parameterization of infection and vaccination cost (a) Nash and (b) utilitarian strategies when vaccination administration costs $10 (a) Nash and (b) utilitarian strategies when vaccination administration costs $20 We calculate the cost for vaccine side effects based on the reduction in quality of life and the costs of treating individuals with severe side effects. Mild to moderate side effects are reported to occur at a probability of 5% and to reduce the quality of life by 0.05 for two days on average [21]. To calculate the cost of vaccine side effects, we use the conversion that a quality-adjusted life year (QALY) is monetarily equivalent to $100,000 [22-24]. Thus, the cost associated with mild to moderate vaccine side effects can be estimated at 0.05(0.05)(2/365)($100,000)=$1.37. In line with clinical data, severe vaccine side effects are assumed to occur at a probability of 0.001% and result in hospitalization for 7 days, and the cost of hospitalization in ICU to treat severe side effects is taken as $3,739.05 per day [21]. In addition, we assume that severe vaccine adverse effects result in death at 5% of probability [21]. We assume that all individuals value their life equally, irrespective of their age. Thus, the value of life is estimated at $1,045,278 using average expected future lifetime earnings for all ages [25,26]. Estimating the value of life at $1,045,278, the cost of severe side effects is calculated as 10-5($3739(7)+0.05($1045278))=$0.78. To calculate the average individual net payoff of vaccination strategy, we incorporated the costs associated with infection, vaccination, and the side effects of the vaccine (Table 1). We calculate the cost of infection using weighted average of the costs associated with possible infection outcomes such as mortality, hospitalization, outpatient visits, and cases without medical care. The cost of vaccination includes the value of an individual's time receiving it ($16), and travel cost ($4), resulting in the total cost of vaccination estimated at $20 [21]. The cost of administration is not included in baseline parameters because vaccine for the 2009 novel H1N1 influenza was provided free of charge in the US. However, for sensitivity analysis, we increase the cost of administration from $0 up to $20, in order to examine the elasticity of the Nash and utilitarian strategies to a range of vaccination cost (Figures 2 and 3). Parameterization of infection and vaccination cost (a) Nash and (b) utilitarian strategies when vaccination administration costs $10 (a) Nash and (b) utilitarian strategies when vaccination administration costs $20 We calculate the cost for vaccine side effects based on the reduction in quality of life and the costs of treating individuals with severe side effects. Mild to moderate side effects are reported to occur at a probability of 5% and to reduce the quality of life by 0.05 for two days on average [21]. To calculate the cost of vaccine side effects, we use the conversion that a quality-adjusted life year (QALY) is monetarily equivalent to $100,000 [22-24]. Thus, the cost associated with mild to moderate vaccine side effects can be estimated at 0.05(0.05)(2/365)($100,000)=$1.37. In line with clinical data, severe vaccine side effects are assumed to occur at a probability of 0.001% and result in hospitalization for 7 days, and the cost of hospitalization in ICU to treat severe side effects is taken as $3,739.05 per day [21]. 
In addition, we assume that severe vaccine adverse effects result in death at 5% of probability [21]. We assume that all individuals value their life equally, irrespective of their age. Thus, the value of life is estimated at $1,045,278 using average expected future lifetime earnings for all ages [25,26]. Estimating the value of life at $1,045,278, the cost of severe side effects is calculated as 10-5($3739(7)+0.05($1045278))=$0.78. [SUBTITLE] Payoff to vaccination strategy [SUBSECTION] In our vaccination game, the payoff to an individual choosing a particular strategy depends on the average behavior of the population. We considered the two basic strategies, “vaccinator” (obtain vaccination) and “non-vaccinator” (decline vaccination). For both strategies, the payoff to an individual is measured in terms of a monetary cost due to infection and/or vaccination, based on the probability of infections and vaccine risks (Table 1 and Additional File). We also parameterized the payoff calculations with age-specific distributions of vaccine efficacy in reducing influenza morbidity and mortality (Additional File). The net payoff to vaccinator strategy then is where xk is the probability of infection among vaccinators, zM is the probability of mild to moderate side effects, and zS is the probability of severe side effects. CIV,k denotes the cost of infection among vaccinators in age group k, CV denotes the cost of vaccination, and CM and CS denote the cost of mild and severe side effects associated with vaccination, respectively. As the vaccine efficacy is imperfect, the vaccinator may still be infected with reduced probability of infection (xk), which depends on both vaccination probability of age group k (ψk) and on vaccination probability across all age groups . If infected, vaccinated individuals incur lower infection cost (CIV,k) than unvaccinated ones. The probability of symptomatic infection among vaccinators in age group k who are not yet infected before vaccination is given by. Here represents the number of cumulative symptomatic infections until time t = τ. People who had have been symptomatically infected would be aware that they gained immunity against H1N1, thus would not get vaccinated, and therefore the expression, , represents the maximum number of vaccinating people in age group k. The net payoff to a non-vaccinator is where CIN,k denotes the cost of infection among non-vaccinators of age group k, and yk is the probability of symptomatic infection among non-vaccinators, given by. Here, describes the cumulative number of symptomatic infections among unvaccinated individuals in age group k after vaccination is implemented at time t = τ. In our vaccination game, the payoff to an individual choosing a particular strategy depends on the average behavior of the population. We considered the two basic strategies, “vaccinator” (obtain vaccination) and “non-vaccinator” (decline vaccination). For both strategies, the payoff to an individual is measured in terms of a monetary cost due to infection and/or vaccination, based on the probability of infections and vaccine risks (Table 1 and Additional File). We also parameterized the payoff calculations with age-specific distributions of vaccine efficacy in reducing influenza morbidity and mortality (Additional File). The net payoff to vaccinator strategy then is where xk is the probability of infection among vaccinators, zM is the probability of mild to moderate side effects, and zS is the probability of severe side effects. 
CIV,k denotes the cost of infection among vaccinators in age group k, CV denotes the cost of vaccination, and CM and CS denote the cost of mild and severe side effects associated with vaccination, respectively. As the vaccine efficacy is imperfect, the vaccinator may still be infected with reduced probability of infection (xk), which depends on both vaccination probability of age group k (ψk) and on vaccination probability across all age groups . If infected, vaccinated individuals incur lower infection cost (CIV,k) than unvaccinated ones. The probability of symptomatic infection among vaccinators in age group k who are not yet infected before vaccination is given by. Here represents the number of cumulative symptomatic infections until time t = τ. People who had have been symptomatically infected would be aware that they gained immunity against H1N1, thus would not get vaccinated, and therefore the expression, , represents the maximum number of vaccinating people in age group k. The net payoff to a non-vaccinator is where CIN,k denotes the cost of infection among non-vaccinators of age group k, and yk is the probability of symptomatic infection among non-vaccinators, given by. Here, describes the cumulative number of symptomatic infections among unvaccinated individuals in age group k after vaccination is implemented at time t = τ. [SUBTITLE] Defining the Nash strategy [SUBSECTION] For individuals driven by self-interest, game-theoretic decisions are assumed to settle to a Nash equilibrium at which it is impossible for a few individuals to increase their payoffs by switching to a different strategy [27]. We define these individual decisions at the Nash equilibrium as the Nash strategy. A pure vaccinator strategy cannot be the Nash equilibrium, because when the population vaccine coverage is 100%, an individual who chooses a non-vaccinator strategy reaps the benefits of herd immunity without paying for vaccination and without experiencing possible vaccine side effects. By comparison, a non-vaccinator can result in an individual optimum under certain conditions, such as when the infection risk is sufficiently low when vaccines become available. In our age-structured model, it might be best for some people in an age group to be vaccinated and for others in the same group to choose not to get vaccinated. To allow this scenario, we consider mixed strategies whereby individuals in age group k choose the vaccinator strategy with probability ψk (0 <ψk < 1) and the non-vaccinator strategy otherwise. If all individuals play the mixed strategy ψk, then a proportion ψk of the population in age group k is vaccinated. The individual optimum can be found by solving for ψk,ind in the equation Uvac,k = Unonvac,k (k=1…5). The individual optimum (ψk,ind) predicted by this game-theoretical analysis corresponds to the level of coverage ψk,ind expected under a voluntary program where individuals act in a rational way to maximize their payoffs. For individuals driven by self-interest, game-theoretic decisions are assumed to settle to a Nash equilibrium at which it is impossible for a few individuals to increase their payoffs by switching to a different strategy [27]. We define these individual decisions at the Nash equilibrium as the Nash strategy. 
A pure vaccinator strategy cannot be the Nash equilibrium, because when the population vaccine coverage is 100%, an individual who chooses a non-vaccinator strategy reaps the benefits of herd immunity without paying for vaccination and without experiencing possible vaccine side effects. By comparison, a non-vaccinator can result in an individual optimum under certain conditions, such as when the infection risk is sufficiently low when vaccines become available. In our age-structured model, it might be best for some people in an age group to be vaccinated and for others in the same group to choose not to get vaccinated. To allow this scenario, we consider mixed strategies whereby individuals in age group k choose the vaccinator strategy with probability ψk (0 <ψk < 1) and the non-vaccinator strategy otherwise. If all individuals play the mixed strategy ψk, then a proportion ψk of the population in age group k is vaccinated. The individual optimum can be found by solving for ψk,ind in the equation Uvac,k = Unonvac,k (k=1…5). The individual optimum (ψk,ind) predicted by this game-theoretical analysis corresponds to the level of coverage ψk,ind expected under a voluntary program where individuals act in a rational way to maximize their payoffs. [SUBTITLE] Defining utilitarian strategy [SUBSECTION] From the perspective of group interest, the objective is to maximize the total payoff of vaccinators and non-vaccinators. If ψk is the proportion of the population in age group k that is vaccinated, we can express the expected payoff due to vaccination and an influenza pandemic as We now maximize T(ψ1, ψ2, ψ3, ψ4, ψ5) on the parameter space {(ψ1, ψ2, ψ3, ψ4, ψ5) | 0 ≤ ψk ≤1} to determine the utilitarian strategy (ψ1*, ψ2*, ψ3*, ψ4*, ψ5*), which is the coverage level that would maximize the total payoff. From the perspective of group interest, the objective is to maximize the total payoff of vaccinators and non-vaccinators. If ψk is the proportion of the population in age group k that is vaccinated, we can express the expected payoff due to vaccination and an influenza pandemic as We now maximize T(ψ1, ψ2, ψ3, ψ4, ψ5) on the parameter space {(ψ1, ψ2, ψ3, ψ4, ψ5) | 0 ≤ ψk ≤1} to determine the utilitarian strategy (ψ1*, ψ2*, ψ3*, ψ4*, ψ5*), which is the coverage level that would maximize the total payoff.
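The two optimisation problems just described can be sketched in code. The epidemic model is stubbed out below as a function mapping a coverage vector to age-specific infection probabilities, and the search methods, best-response dynamics for the Nash condition Uvac,k = Unonvac,k and a crude grid search for the utilitarian maximum, are illustrative assumptions rather than the authors' actual Monte Carlo procedure; all numerical values are placeholders.

```python
import itertools

GROUPS = 5
N_K = (21e6, 84e6, 104e6, 56e6, 39e6)        # approximate group sizes (placeholders)
C_INF = (520.0, 480.0, 610.0, 700.0, 790.0)  # hypothetical age-specific infection costs
C_VAC, C_SIDE = 20.0, 1.37 + 0.78            # vaccination cost and expected side-effect cost

def infection_probs(psi, vaccinated):
    """Stub standing in for the transmission model: maps a coverage vector psi
    to age-specific infection probabilities. A real implementation would
    integrate the age-structured ODE system; here risk simply declines with
    average coverage so the optimisation logic can be demonstrated."""
    base = (0.28, 0.57, 0.43, 0.25, 0.14)
    sigma = 0.8                               # placeholder vaccine efficacy
    mean_cov = sum(psi) / GROUPS
    risk = [b * (1.0 - 0.6 * mean_cov) for b in base]
    return [r * (1.0 - sigma) for r in risk] if vaccinated else risk

def payoffs(psi):
    x = infection_probs(psi, True)
    y = infection_probs(psi, False)
    u_vac = [-(xk * ck + C_VAC + C_SIDE) for xk, ck in zip(x, C_INF)]
    u_non = [-(yk * ck) for yk, ck in zip(y, C_INF)]
    return u_vac, u_non

def nash_coverage(iters=400, step=0.01):
    """Best-response dynamics: nudge each group's coverage towards the point
    where U_vac,k = U_nonvac,k (an approximation of the Nash condition)."""
    psi = [0.5] * GROUPS
    for _ in range(iters):
        u_vac, u_non = payoffs(psi)
        psi = [min(1.0, max(0.0, pk + step * (1.0 if uv > un else -1.0)))
               for pk, uv, un in zip(psi, u_vac, u_non)]
    return psi

def utilitarian_coverage(grid=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Grid search for the coverage vector maximising the total payoff
    T(psi) = sum_k N_k [psi_k * U_vac,k + (1 - psi_k) * U_nonvac,k]."""
    best, best_total = None, float("-inf")
    for psi in itertools.product(grid, repeat=GROUPS):
        u_vac, u_non = payoffs(psi)
        total = sum(nk * (pk * uv + (1.0 - pk) * un)
                    for nk, pk, uv, un in zip(N_K, psi, u_vac, u_non))
        if total > best_total:
            best, best_total = psi, total
    return best

print(nash_coverage(), utilitarian_coverage())
```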
null
null
null
null
[ "Background", "Mathematical model for disease transmission and vaccination", "Cost parameterization", "\nPayoff to vaccination strategy\n", "Defining the Nash strategy", "Defining utilitarian strategy", "Results", "Epidemiological impact of the 2009 H1N1 influenza pandemic", "Optimal H1N1 vaccine distribution based on individual self-interest", "Optimal H1N1 vaccine distribution based on population interest", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "In response to the rapid spread of a pandemic strain of H1N1 influenza A, the World Health Organization (WHO) raised the pandemic alert to its highest phase on June 11, 2009 [1]. The H1N1 pandemic was the first influenza pandemic in over 40 years. Although most H1N1 cases in individuals were mild and the case fatality rate was lower than that of previous influenza pandemics, severe cases frequently occurred in previously healthy, young adults [2].\nVaccines hold considerable promise for reducing the spread of H1N1 influenza A. However, the H1N1 vaccine was not readily available until late October, 2009 [3]. This delayed the US vaccination program until after a large proportion of the population had already been exposed to H1N1.\nThere is evidence that a substantial proportion of the elderly was protected by cross-immunity from prior infection, resulting in the lowest infection rate in this age group [4]. The 2009 H1N1 influenza disproportionately affected younger patients [5,6]. The median age of hospitalized H1N1 patients was 27 years, which is much lower than the median age of hospitalized seasonal-influenza cases (between 75 and 79 years) [7,8]. Yet, H1N1 was least likely to turn fatal in patients under age 17 [8]. Such differences in age-specific susceptibility and case fatality for 2009 H1N1 strain posed a challenge to public health agencies that sought to determine optimal vaccine distribution and expected public adherence.\nDetermining an optimal vaccination policy can be quite challenging. An individual's risk of infection depends not only on his or her decision to be vaccinated, but also on the decisions of others [9,10]. In addition, overwhelming majority of infected people are either asymptomatic or recover without medical attention. Such cases may be unaware that they have been exposed to the virus and still seek vaccination [11]. To calculate the payoff of vaccination to an individual and to the population as a whole, it is important to incorporate the cost of vaccination as well as the benefits of vaccination such as both direct and indirect protection due to herd immunity [10,12,13].\nHere, we use game theory to investigate age-dependent optimal vaccine distribution against H1N1 influenza A in the US, from both individual and population perspectives. We first model the evolving age distribution of H1N1 cases as the pandemic unfolds, and examine the optimal control strategy assuming that the vaccine becomes available before, at, or after the peak of the influenza pandemic. Then, we find the expected age-specific H1N1 vaccine allocation strategy that would emerge if individuals pursue their own interest, i.e. the Nash strategy, and compare it to a strategy that is optimal to the population as a whole, known as the utilitarian strategy. The personal payoff of vaccination varies among age groups and changes over the course of an outbreak, and we recognize that individuals may not adhere to the utilitarian strategy when acting according to self-interest.\nOur game theoretical analyses of the vaccination program for an influenza A (H1N1) pandemic in the United States show that the utilitarian strategy prioritizes aggressive control among individuals of age 5 and 64 regardless of the timing of vaccination. In contrast, the Nash strategy dictates vaccination of adults, ages 25-49, as the first priority group. 
If the vaccination program is implemented before the peak of the pandemic wave, then the second priority group to be vaccinated based on the Nash strategy is preschool-age children; however, if vaccination is delayed until the peak of the pandemic wave, then the second priority group is older adults (ages 50 to 64).", "To model H1N1 influenza transmission in the United States, we divide the population into the five age groups (0-4, 5-24, 25-49, 50-64, and 65+), according to the age classes used in US CDC case reports [14]. The numbers of people in each age group were set to values estimated for the US 2008 population (Additional File) [15]. In our model, individuals in each age class k are subdivided based on epidemiological status. The dynamics of influenza infection, illness, and infectiousness reflect our current understanding of the natural history of influenza. Here, subscripts U and V represent an unvaccinated and vaccinated population, respectively. We assume that SU,k (t), LU,k (t), AU,k (t), IU,k (t), and RU,k (t) represent the respective number of unvaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals in age group k at time t (k = 1, 2, . . . , 5). Similarly, we define SV,k (t), LV,k (t), AV,k (t), IV,k (t), and RV,k (t) as the respective number of vaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals (k = 1, 2, . . . , 5).\nWe assume that the vaccine provides partial protection, resulting in vaccinated individuals being less susceptible than unvaccinated ones. Vaccinated individuals become infected at a fraction (1-σk) of the rate at which unvaccinated susceptible individuals become infected, where σk is the efficacy of the vaccine against infection for individuals of age group k (Additional File). We consider the three vaccination scenarios where vaccines become available before, at, or after the peak of an influenza pandemic. Thus, when vaccines become available, we assume that a proportion, ψk, of susceptible individuals in age group k is vaccinated. We also assume that the same proportion, ψk, of individuals in age group k who have been infected asymptomatically still may get vaccinated, because they were not aware of exposure to novel influenza A (H1N1) viruses. However, vaccine doses given to those who were already exposed to H1N1 viruses are assumed to be wasted, because these individuals already gained immunity to the H1N1 strain. Recovered individuals are assumed to be fully protected against further influenza infection for the remainder of the outbreak.\nUpon infection, individuals enter a latency period, 1/δ. Latently infected individuals proceed to become infectious, and a proportion, p, of infected individuals becomes symptomatic. Infectious individuals recover after an average period of 1/γ. The influenza-induced death rates are αU,k and αV,k for unvaccinated and vaccinated individuals, respectively, for people in age group k. Age-specific influenza-related death rates are based on estimates of excess pneumonia and on influenza deaths from the H1N1 influenza [16]. The transmission dynamics are thus described by the following differential equations:\nfor k=1,…, 5.\nWe used a standard-incidence form for the force of infection λk:\nwhere N is the total population size. Thus, it follows that\n where Nk is the number of people of age group k, i.e. 
\nHere ϕkm is the number of contacts per day between a person in age group k with people in age group m, and β is the probability of infection for a susceptible person who has contact with an infectious person.\nAs both epidemiological and serological data are suggestive of residual immunity to H1N1 among adults and seniors, we assume that a proportion (ξk) of individuals in age group k is immune to H1N1 viruses [4]. The residual immunity incorporates the fact that younger people are more susceptible to the current H1N1 strain than older people due to lack of exposure to a similar virus in the past [4,17]. The demographic effects of aging, birth, and death by causes unrelated to influenza are not included because we only model one influenza season, where these demographic effects are negligible.\nThe epidemic is initiated with a proportion of each age group assumed to be immune to infection, with one person of each age group assumed infectious, and with the remaining population assumed susceptible. That is,\nWe assume that an influenza pandemic approaches its peak at time t=ω, and a proportion of the population is vaccinated at time τ=ω±θ where θ=0 or 21 days. We assume that vaccination instantaneously protects people, so that the state variables change discontinuously at t=τ:\nwith the other state variables remaining the same.\nWe further assume that the basic reproductive number (R0), defined as the number of secondary cases caused by a single infective case in a completely susceptible population, was 1.4, as estimated for the novel swine-origin H1N1 influenza outbreak [18]. For sensitivity analysis, the basic reproductive ratio was increased from 1.4 to 1.6 (Figure 1). We parameterized age-specific contact rates, ϕkm, using data from a large-scale survey of daily contacts [19]. These contact data show strong mixing between people of similar ages and moderately high mixing between children and people of their parents’ ages [20]. Given the contact data and US population size, we reconstructed the contact matrix to match our five age groups [19,20]. Using the relative size of the age group m (Nm / N) and the number of contacts per person in age group k with people in age group m, ckm, we define the elements of the contact matrix by . To ensure that the number of contacts between age groups is symmetric, Nmckm = Nkcmk, i.e., ϕkm = ϕmk, we made further adjustment, , and used ϕkm to be the contact matrix.\n(a) Nash and (b) utilitarian strategies when basic reproductive ratio is 1.6. Vaccination is assumed to be offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza.", "To calculate the average individual net payoff of vaccination strategy, we incorporated the costs associated with infection, vaccination, and the side effects of the vaccine (Table 1). We calculate the cost of infection using weighted average of the costs associated with possible infection outcomes such as mortality, hospitalization, outpatient visits, and cases without medical care. The cost of vaccination includes the value of an individual's time receiving it ($16), and travel cost ($4), resulting in the total cost of vaccination estimated at $20 [21]. The cost of administration is not included in baseline parameters because vaccine for the 2009 novel H1N1 influenza was provided free of charge in the US. 
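The differential equations themselves did not survive the extraction, so the sketch below gives one plausible reading of the system described in the text: unvaccinated and vaccinated susceptibles, a latent stage of mean length 1/δ, a p : (1 − p) split into symptomatic and asymptomatic infectious classes, recovery at rate γ, influenza deaths at rates αU,k and αV,k, and a standard-incidence force of infection λk built from the contact matrix ϕ. It is an assumption-laden reconstruction for illustration, not the authors' equations.

```python
import numpy as np

def rhs(state, params):
    """One plausible reading of the age-structured system (5 age groups).
    state: dict of length-5 arrays S_U, L_U, A_U, I_U, R_U and vaccinated analogues.
    Reconstruction for illustration only."""
    p   = params["p"]            # fraction of infections that become symptomatic
    dlt = params["delta"]        # 1 / latent period
    gam = params["gamma"]        # 1 / infectious period
    aU, aV = params["alpha_U"], params["alpha_V"]   # influenza death rates by age
    sig = params["sigma"]        # vaccine efficacy against infection, by age
    phi = params["phi"]          # symmetrised contact matrix
    beta, N = params["beta"], params["N_total"]

    infectious = state["A_U"] + state["I_U"] + state["A_V"] + state["I_V"]
    lam = beta * phi.dot(infectious) / N            # standard-incidence force of infection

    d = {}
    d["S_U"] = -lam * state["S_U"]
    d["L_U"] =  lam * state["S_U"] - dlt * state["L_U"]
    d["A_U"] = (1 - p) * dlt * state["L_U"] - gam * state["A_U"]
    d["I_U"] =  p * dlt * state["L_U"] - (gam + aU) * state["I_U"]
    d["R_U"] =  gam * (state["A_U"] + state["I_U"])

    d["S_V"] = -(1 - sig) * lam * state["S_V"]
    d["L_V"] =  (1 - sig) * lam * state["S_V"] - dlt * state["L_V"]
    d["A_V"] = (1 - p) * dlt * state["L_V"] - gam * state["A_V"]
    d["I_V"] =  p * dlt * state["L_V"] - (gam + aV) * state["I_V"]
    d["R_V"] =  gam * (state["A_V"] + state["I_V"])
    return d
```

Integrating this with any standard ODE solver and applying the discontinuous vaccination update at t = τ (moving a proportion ψk of eligible individuals from the unvaccinated to the vaccinated classes) reproduces the kind of before-, at-, and after-peak scenarios compared in Figure 1.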
However, for sensitivity analysis, we increase the cost of administration from $0 up to $20, in order to examine the elasticity of the Nash and utilitarian strategies to a range of vaccination cost (Figures 2 and 3).\nParameterization of infection and vaccination cost\n(a) Nash and (b) utilitarian strategies when vaccination administration costs $10\n(a) Nash and (b) utilitarian strategies when vaccination administration costs $20\nWe calculate the cost for vaccine side effects based on the reduction in quality of life and the costs of treating individuals with severe side effects. Mild to moderate side effects are reported to occur at a probability of 5% and to reduce the quality of life by 0.05 for two days on average [21]. To calculate the cost of vaccine side effects, we use the conversion that a quality-adjusted life year (QALY) is monetarily equivalent to $100,000 [22-24]. Thus, the cost associated with mild to moderate vaccine side effects can be estimated at\n0.05(0.05)(2/365)($100,000)=$1.37.\nIn line with clinical data, severe vaccine side effects are assumed to occur at a probability of 0.001% and result in hospitalization for 7 days, and the cost of hospitalization in ICU to treat severe side effects is taken as $3,739.05 per day [21]. In addition, we assume that severe vaccine adverse effects result in death at 5% of probability [21]. We assume that all individuals value their life equally, irrespective of their age. Thus, the value of life is estimated at $1,045,278 using average expected future lifetime earnings for all ages [25,26]. Estimating the value of life at $1,045,278, the cost of severe side effects is calculated as\n\n10-5($3739(7)+0.05($1045278))=$0.78.\n", "In our vaccination game, the payoff to an individual choosing a particular strategy depends on the average behavior of the population. We considered the two basic strategies, “vaccinator” (obtain vaccination) and “non-vaccinator” (decline vaccination). For both strategies, the payoff to an individual is measured in terms of a monetary cost due to infection and/or vaccination, based on the probability of infections and vaccine risks (Table 1 and Additional File). We also parameterized the payoff calculations with age-specific distributions of vaccine efficacy in reducing influenza morbidity and mortality (Additional File).\nThe net payoff to vaccinator strategy then is\nwhere xk is the probability of infection among vaccinators, zM is the probability of mild to moderate side effects, and zS is the probability of severe side effects. CIV,k denotes the cost of infection among vaccinators in age group k, CV denotes the cost of vaccination, and CM and CS denote the cost of mild and severe side effects associated with vaccination, respectively.\nAs the vaccine efficacy is imperfect, the vaccinator may still be infected with reduced probability of infection (xk), which depends on both vaccination probability of age group k (ψk) and on vaccination probability across all age groups . If infected, vaccinated individuals incur lower infection cost (CIV,k) than unvaccinated ones. The probability of symptomatic infection among vaccinators in age group k who are not yet infected before vaccination is given by.\nHere represents the number of cumulative symptomatic infections until time t = τ. 
People who have been symptomatically infected would be aware that they have gained immunity against H1N1 and thus would not get vaccinated; the corresponding expression therefore represents the maximum number of people in age group k who may be vaccinated.\nThe net payoff to a non-vaccinator is\nwhere CIN,k denotes the cost of infection among non-vaccinators of age group k, and yk is the probability of symptomatic infection among non-vaccinators, given by.\nHere, describes the cumulative number of symptomatic infections among unvaccinated individuals in age group k after vaccination is implemented at time t = τ.", "For individuals driven by self-interest, game-theoretic decisions are assumed to settle to a Nash equilibrium, at which it is impossible for a small number of individuals to increase their payoffs by switching to a different strategy [27]. We define these individual decisions at the Nash equilibrium as the Nash strategy. A pure vaccinator strategy cannot be the Nash equilibrium, because when the population vaccine coverage is 100%, an individual who chooses a non-vaccinator strategy reaps the benefits of herd immunity without paying for vaccination and without experiencing possible vaccine side effects. By comparison, the non-vaccinator strategy can be an individual optimum under certain conditions, such as when the infection risk is sufficiently low by the time vaccines become available. In our age-structured model, it might be best for some people in an age group to be vaccinated and for others in the same group to choose not to get vaccinated. To allow this scenario, we consider mixed strategies whereby individuals in age group k choose the vaccinator strategy with probability ψk (0 < ψk < 1) and the non-vaccinator strategy otherwise. If all individuals play the mixed strategy ψk, then a proportion ψk of the population in age group k is vaccinated. The individual optimum can be found by solving for ψk,ind in the equation Uvac,k = Unonvac,k (k=1…5). The individual optimum (ψk,ind) predicted by this game-theoretical analysis corresponds to the coverage level expected under a voluntary program in which individuals act rationally to maximize their payoffs.", "From the perspective of group interest, the objective is to maximize the total payoff of vaccinators and non-vaccinators. If ψk is the proportion of the population in age group k that is vaccinated, we can express the expected total payoff due to vaccination and an influenza pandemic as a function T(ψ1, ψ2, ψ3, ψ4, ψ5) of the age-specific coverage levels. We now maximize T(ψ1, ψ2, ψ3, ψ4, ψ5) on the parameter space {(ψ1, ψ2, ψ3, ψ4, ψ5) | 0 ≤ ψk ≤ 1} to determine the utilitarian strategy (ψ1*, ψ2*, ψ3*, ψ4*, ψ5*), which is the coverage level that maximizes the total payoff.", "[SUBTITLE] Epidemiological impact of the 2009 H1N1 influenza pandemic [SUBSECTION] Our age-structured model of influenza transmission predicts that 41% of the US population will be infected with pandemic H1N1 influenza in the absence of interventions (Figures 4 and 5). Based on our assumption that on average 33% of infected people become symptomatic after a three-day incubation period [28], we estimate that 13% of the population will be symptomatically infected during the current influenza pandemic, consistent with the estimates of previous modeling studies [29-31]. However, the age-specific attack rates are predicted to vary considerably between age groups because of age-dependent activity patterns and immune profiles. 
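Before turning to the age-specific results, the two decision criteria defined above — indifference between vaccinating and not vaccinating (Uvac,k = Unonvac,k) for the Nash strategy, and maximization of the total payoff T for the utilitarian strategy — can be illustrated with a deliberately oversimplified, single-group sketch. All payoffs, costs, and efficacy values below are hypothetical placeholders rather than the paper's calibrated five-age-group model; the sketch only shows the two searches and the characteristic outcome that self-interested coverage falls short of the population optimum.

```python
import numpy as np

# Hypothetical single-group payoffs (placeholder values only).
BASE_RISK = 0.4         # infection probability if nobody vaccinates
HERD_EFFECT = 0.8       # how strongly coverage reduces infection risk
COST_INFECTION = 160.0  # expected cost of an infection, $
COST_VACCINATION = 20.0
EFFICACY = 0.8          # vaccine efficacy against infection

def infection_prob(psi):
    # Toy stand-in for the transmission model: risk falls with coverage psi.
    return BASE_RISK * (1.0 - HERD_EFFECT * psi)

def payoff_vaccinator(psi):
    return -((1 - EFFICACY) * infection_prob(psi) * COST_INFECTION + COST_VACCINATION)

def payoff_nonvaccinator(psi):
    return -(infection_prob(psi) * COST_INFECTION)

# Nash coverage: the point of indifference between the two strategies,
# found here by bisection on the coverage level.
if payoff_vaccinator(0.0) <= payoff_nonvaccinator(0.0):
    psi_nash = 0.0  # vaccinating never pays, even at zero coverage
else:
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if payoff_vaccinator(mid) > payoff_nonvaccinator(mid):
            lo = mid   # vaccinating still pays: indifference lies at higher coverage
        else:
            hi = mid
    psi_nash = 0.5 * (lo + hi)

# Utilitarian coverage: maximize the population-average payoff over a grid.
grid = np.linspace(0.0, 1.0, 1001)
total = (grid * np.array([payoff_vaccinator(p) for p in grid])
         + (1 - grid) * np.array([payoff_nonvaccinator(p) for p in grid]))
psi_util = grid[np.argmax(total)]

print(f"Nash coverage ~ {psi_nash:.2f}, utilitarian coverage ~ {psi_util:.2f}")
```

In the full model the infection probabilities come from the age-structured transmission model and the costs from Table 1, and the utilitarian search is over five age-specific coverage levels rather than one.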
The highest incidence is predicted to occur in individuals of age 5-24, followed by the adult population of age 25-49, with symptomatic plus asymptomatic attack rates of 57% and 43%, respectively (Figure 4b). The lowest attack rate (14%) is predicted to occur in the oldest age group (age 65 and older).\nOutcomes of influenza A/H1N1 infection. (a) Simulated age-stratified daily influenza A/H1N1 infection incidence per 100,000 individuals in the absence of vaccination or other interventions. (b) Simulated age-specific attack rates in the absence of vaccination or other interventions. Both symptomatic and asymptomatic cases are shown.\nCumulative incidence of influenza A/H1N1 when vaccination is guided by the Nash or utilitarian strategies. Vaccination is offered free of charge and implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza. Solid lines show the cumulative attack rate when vaccination follows the utilitarian strategy, whereas dotted lines show the cumulative attack rate when vaccination follows the Nash strategy. For comparison, the cumulative incidence without vaccination is also shown.\nOur results also suggest that individuals in each age group reach their highest incidence at different times (Figure 4a). That is, school-age children and young adults (age 5-24) reach their maximum incidence first, followed by adults (age 25-49) and preschool-age children under five years of age. In contrast, the oldest group is the last one to reach maximum incidence.\nIn the absence of vaccination or other interventions, our model predicts that the 2009 H1N1 influenza pandemic would result in 271 hospitalizations and 13 deaths per 100,000 individuals. Adults (age 25-49), who have the highest case fatality and case hospitalization ratios of all age groups, bear the largest share of this burden (7 of the 13 deaths and 133 of the 271 hospitalizations per 100,000), followed by school-age children and young adults of age 5-24 (Figures 4, 6, 7).\nHospitalizations per 100,000 when vaccination follows the Nash and utilitarian strategies. The number of hospitalizations per 100,000 is shown by age group with and without vaccination. (a) Vaccination is implemented according to the Nash strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak of a pandemic influenza. (b) Vaccination is implemented according to the utilitarian strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak of a pandemic influenza.\nDeaths per 100,000 when vaccination follows the Nash and utilitarian strategies. The number of deaths per 100,000 is shown by age group with and without vaccination. (a) Vaccination is implemented according to the Nash strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak of a pandemic influenza. (b) Vaccination is implemented according to the utilitarian strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak of a pandemic influenza.
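These aggregate figures can be cross-checked with a few lines of arithmetic, using only the values quoted above.

```python
# Consistency checks on the aggregate figures quoted above.
overall_attack = 0.41         # infected fraction without interventions
symptomatic_fraction = 0.33   # assumed fraction of infections that are symptomatic
print(f"symptomatic attack rate ~ {overall_attack * symptomatic_fraction:.1%}")  # ~13%

hosp_total, deaths_total = 271, 13      # per 100,000, no intervention
hosp_adults, deaths_adults = 133, 7     # share borne by adults aged 25-49
print(f"adults 25-49 account for {hosp_adults / hosp_total:.0%} of hospitalizations "
      f"and {deaths_adults / deaths_total:.0%} of deaths")
```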
[SUBTITLE] Optimal H1N1 vaccine distribution based on individual self-interest [SUBSECTION] To determine optimal H1N1 vaccine distributions based on individual self-interest versus population interest, we constructed a game-theoretical age-structured model of influenza transmission assuming delayed vaccination. Our calculations show that when vaccination occurs three weeks prior to the peak of a pandemic wave, the Nash (individual-based) strategy prioritizes vaccinating adults (age 25-49) and preschool-age children, followed by school-age children and young adults (age 5-24) and then older adults (age 50-64) (Figure 8a). The Nash vaccination strategy among the senior population of age 65 and older would be to refuse vaccination. With such a strategy, the vaccination program is predicted to reduce the overall attack rate from 41% to 15%, averting 8,514 clinical infections, 179 hospitalizations and 9 deaths per 100,000 individuals.\nNash (a) and utilitarian (b) strategies when vaccination is offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza.\nThe Nash strategy, however, was found to be highly dependent on the timing of vaccine implementation (Figure 8a). If vaccine production is delayed, then the payoff to vaccinators is diminished because of the reduced risk of future infection. For example, if vaccination is implemented at the peak of a pandemic, the Nash vaccination strategy no longer includes preschool-age children. Instead, the Nash strategy is to vaccinate 91% of adults (age 25-49), 87% of older adults (age 50-64), and 63% of school-age children and young adults. This change in vaccination strategy occurs because preschool-age children have a relatively early pandemic peak and moderate morbidity compared to other age groups. Thus, when vaccination is significantly delayed, the relative infection risk of this group is low.\nThe only age groups that are included in the Nash vaccination strategy regardless of vaccine delay are adults (age 25-49) and school-age children/young adults (age 5-24). In general, the demand for vaccine among adults (age 25-49) is the most inelastic to vaccine delay. The Nash level of vaccine coverage for school-age children and young adults falls rapidly with vaccine delay (Figure 8). For instance, when vaccination is delayed until three weeks after the pandemic peak, the Nash strategy is to vaccinate 88% of adults (age 25-49) and 23% of school-age children/young adults. This vaccination allocation would result in an overall attack rate of 36%, with 229 hospitalizations and 11 deaths per 100,000 individuals (Figures 5, 6, 7).\nOur results also demonstrate the dependence of the Nash strategy on the basic reproductive ratio of pandemic influenza. At higher transmissibility (R0=1.6), the Nash levels of vaccination are 100% among adults of age 25-64 regardless of the timing of vaccine implementation. This is, in part, because the case fatality ratio and case hospitalization ratio are highest in these age groups, yielding a high payoff of vaccination (Figure 1). 
School-age children/younger adults (age 5-24), on the other hand, are expected to seek vaccination only if vaccination is offered on or before the pandemic peak (Figures 9 and 10).\nNash strategy when vaccine is available at the beginning of an influenza pandemic and when vaccination is offered free of charge\nUtilitarian strategy when vaccine is available at the beginning of an influenza pandemic and when vaccination is offered free of charge\nFinally, we considered the rising cost of vaccination and its impact on the Nash strategy of each age group. We show that the Nash vaccination level among adults (age 25-49) is the most inelastic to changes in the vaccination cost (Figures 2 and 3). This inelasticity arises because, in this age group, the infection risk and case-fatality rate of H1N1 are high and residual immunity is low. In contrast, the Nash vaccination level among older adults (age 50-64) is the most elastic to changes in the vaccination cost, demonstrating the trade-off between the cost of vaccination and its reduced benefit when delivery is delayed. This elasticity arises because the infection risk in this age group is relatively low, resulting from their residual immunity against H1N1 and their low contact rates. Seniors of age 65 or older are unlikely to seek vaccination under a voluntary program across a wide range of vaccination costs, because their risk of infection (and thus their vaccination payoff) is the lowest among all age groups.
\n[SUBTITLE] Optimal H1N1 vaccine distribution based on population interest [SUBSECTION] The average vaccination level across all age groups for the utilitarian strategy is higher than that for the Nash strategy (Figure 8). For example, if vaccines become available three weeks before the pandemic peak, the overall Nash and utilitarian vaccine coverage levels are 76% and 82%, respectively. When 93% of young individuals (age under 24), 96% of adults (age 25-64) and 1% of seniors (age 65 and older) are vaccinated according to the utilitarian strategy three weeks prior to the peak of the influenza pandemic, 25,767 clinical infections, 182 hospitalizations and 9 deaths would be averted per 100,000 individuals (Figures 5, 6, 7).\nThe utilitarian strategy is, however, much less effective if vaccination is delayed. 
For instance, if vaccination is delayed until the peak of the influenza pandemic, it is estimated that 15,683 infections, 112 hospitalizations and 6 deaths would be averted per 100,000 individuals, considerably fewer than when vaccination is implemented before the pandemic peak (Figures 5 and 8).\nWe find that the utilitarian vaccine coverage levels are more inelastic than those under the Nash strategy. For instance, if vaccination is delayed until three weeks after the pandemic peak, the resulting vaccine coverage level according to the Nash strategy falls to 37%, whereas the utilitarian strategy is still to vaccinate 73% of the population. Thus, the resulting disease incidence and the number of disease-related deaths are lower under the utilitarian strategy than under the Nash strategy. The utilitarian strategy includes vaccinating 90% of older adults (age 50-64), 88% of adults (age 25-49), 83% of school-age children/young adults (age 5-24) and 43% of preschool-age children. At a higher basic reproductive ratio of 1.6, the utilitarian strategy also includes the vaccination of preschool-age children (age 0-4), because the risk of infection increases with the transmissibility of the influenza virus, increasing the payoff of vaccination (Figure 1). However, the Nash vaccination strategy does not include preschool-age children or older adults, and thus the utilitarian coverage levels may be unachievable under voluntary vaccination.",
"For pandemic H1N1, we find that the individual-based (Nash) vaccination strategies differ significantly from the utilitarian vaccination strategies. Without vaccination delay, the primary priority group under the utilitarian strategy is school-age children and young adults (age 5-24) because of their important role in transmitting disease (Figures 9 and 10). The case hospitalization ratio and the case fatality ratio are the highest, and thus vaccinating these individuals yields high individual and population payoffs. Indeed, regardless of the length of the delay and of whether vaccination is guided by the Nash or the utilitarian strategy, younger adults (age 25-49) are among the highest priority groups for vaccination.\nHowever, the second priority group changes dramatically under the Nash strategy. If vaccination occurs before the pandemic peak, the second Nash priority group is preschool-age children. If vaccination is delayed, the second Nash priority group shifts to older adults (age 50-64) or to school-age children/younger adults (age 5-24). The peak incidence among preschool-age children is relatively early compared to other age groups, thus lowering the benefit of vaccination to these children with time. Because the case fatality ratio is the highest among older adults, and H1N1 morbidity is the highest among school-age children/younger adults, the benefit of vaccination for these groups is relatively inelastic over the course of a pandemic. Therefore, the demand for vaccines among these age groups is high even if vaccination is delayed in a pandemic.\nThe discordance between the Nash and utilitarian strategies is even more pronounced when vaccine availability is delayed. If vaccination is delayed but implemented near the pandemic peak, the utilitarian vaccination strategy includes individuals of age up to 64, in contrast to the Nash strategy, which excludes preschool-age children and older adults (age 50-64) (Figure 8). If vaccination is further delayed, the Nash strategy would also exclude adults (age 25-49), preschool-age children and older adults (age 50-64), whereas the utilitarian vaccination strategy still includes individuals of age up to 64. Therefore, the average vaccination level across all age groups under the utilitarian strategy was found to be higher than that under the Nash strategy.\nOverall, our results indicate that vaccination levels under a voluntary immunization program may not be optimal for the population, regardless of vaccine delay. Such discordance between the Nash and utilitarian strategies is predicted to be robust to an increase in the basic reproductive ratio for pandemic influenza (Figure 1). This finding is consistent with those of previous studies, which demonstrated that, in the context of vaccination against smallpox and seasonal influenza, vaccination levels driven by self-interest are likely to be lower than those that are optimal from the population perspective [32-35].\nThere are three primary reasons for the discrepancy between the individual-based and utilitarian age-specific vaccination levels for pandemic H1N1. 
First, different age groups have different incentives to vaccinate. In particular, an earlier pandemic peak among young individuals results in a relatively low infection risk later in the pandemic compared to that for older adults. Therefore, the young are predicted to under-vaccinate under the Nash strategy relative to the utilitarian strategy when vaccination is delayed. Second, the positive externalities of indirect protection by herd immunity also contribute to the differences between utilitarian and Nash vaccination strategies. The benefits of herd immunity contribute to the utilitarian strategy, but also create an incentive for individuals to free ride on the vaccination of others. Consequently, the overall level of population vaccination is lower for the Nash strategy than for the utilitarian strategy. Third, because vaccine delivery was delayed for the H1N1 pandemic, our model predicts that people will be less inclined to vaccinate than if the vaccine had been available at the beginning of the pandemic. As a consequence, achieving vaccination rates high enough to realize the utilitarian strategy may be difficult, and the discordance between the Nash and utilitarian strategies is found to increase with vaccine delay.\nThe guidelines for vaccinating against the 2009-2010 pandemic H1N1 influenza proposed by the CDC’s Advisory Committee on Immunization Practices (ACIP) prioritize young people aged 6 months to 25 years, who are the most efficient at transmitting influenza viruses [36]. These guidelines also reflect the reduced susceptibility among the elderly due to their residual immunity from past exposure [37]. If large stockpiles of vaccines had been available prior to the pandemic, the optimal vaccine distribution strategy would have been to vaccinate children in order to reduce transmission and achieve herd immunity [38,39]. However, our analysis suggests that the success of such vaccination strategies depends heavily on the timing of a vaccine’s availability. Nevertheless, our analysis might be limited by the difficulty of knowing the state of the pandemic at the time vaccines become available. In addition, our outcome measure (i.e. the cost of infection and vaccination) may oversimplify vaccination decisions or be incongruous with the considerations of the ACIP.\nWe found that, for both the Nash and utilitarian strategies, the optimal vaccination strategies with vaccine delay should prioritize individuals of age 25 to 49. Our results also suggest that a utilitarian vaccine strategy should include individuals from a wide range of ages, from 5 months to 65 years, and that for longer delays in vaccination, priority should increasingly be given to older individuals. Our results further suggest that age-specific demands for vaccination depend on the risk of infection at the time of vaccine delivery and on the severity of the disease. When vaccination is delayed, voluntary adherence to vaccination recommendations might become lower among young individuals. This suggests that influenza pandemic response plans should include efforts to encourage the vaccination of young individuals if vaccine delivery is delayed.", "H1N1: (H: hemagglutinin) (N: neuraminidase); WHO: World Health Organization; CDC: Centers for Disease Control and Prevention", "The authors declare that they have no competing interests.", "ES developed and analyzed the model, and carried out numerical simulations. ES produced all figures, interpreted results, and wrote the manuscript. 
AG suggested some of the simulations and helped write the manuscript. LM edited the manuscript. All authors read and approved the final manuscript." ]
[ "Background", "Methods", "Mathematical model for disease transmission and vaccination", "Cost parameterization", "\nPayoff to vaccination strategy\n", "Defining the Nash strategy", "Defining utilitarian strategy", "Results", "Epidemiological impact of the 2009 H1N1 influenza pandemic", "Optimal H1N1 vaccine distribution based on individual self-interest", "Optimal H1N1 vaccine distribution based on population interest", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "In response to the rapid spread of a pandemic strain of H1N1 influenza A, the World Health Organization (WHO) raised the pandemic alert to its highest phase on June 11, 2009 [1]. The H1N1 pandemic was the first influenza pandemic in over 40 years. Although most H1N1 cases in individuals were mild and the case fatality rate was lower than that of previous influenza pandemics, severe cases frequently occurred in previously healthy, young adults [2].\nVaccines hold considerable promise for reducing the spread of H1N1 influenza A. However, the H1N1 vaccine was not readily available until late October, 2009 [3]. This delayed the US vaccination program until after a large proportion of the population had already been exposed to H1N1.\nThere is evidence that a substantial proportion of the elderly was protected by cross-immunity from prior infection, resulting in the lowest infection rate in this age group [4]. The 2009 H1N1 influenza disproportionately affected younger patients [5,6]. The median age of hospitalized H1N1 patients was 27 years, which is much lower than the median age of hospitalized seasonal-influenza cases (between 75 and 79 years) [7,8]. Yet, H1N1 was least likely to turn fatal in patients under age 17 [8]. Such differences in age-specific susceptibility and case fatality for 2009 H1N1 strain posed a challenge to public health agencies that sought to determine optimal vaccine distribution and expected public adherence.\nDetermining an optimal vaccination policy can be quite challenging. An individual's risk of infection depends not only on his or her decision to be vaccinated, but also on the decisions of others [9,10]. In addition, overwhelming majority of infected people are either asymptomatic or recover without medical attention. Such cases may be unaware that they have been exposed to the virus and still seek vaccination [11]. To calculate the payoff of vaccination to an individual and to the population as a whole, it is important to incorporate the cost of vaccination as well as the benefits of vaccination such as both direct and indirect protection due to herd immunity [10,12,13].\nHere, we use game theory to investigate age-dependent optimal vaccine distribution against H1N1 influenza A in the US, from both individual and population perspectives. We first model the evolving age distribution of H1N1 cases as the pandemic unfolds, and examine the optimal control strategy assuming that the vaccine becomes available before, at, or after the peak of the influenza pandemic. Then, we find the expected age-specific H1N1 vaccine allocation strategy that would emerge if individuals pursue their own interest, i.e. the Nash strategy, and compare it to a strategy that is optimal to the population as a whole, known as the utilitarian strategy. The personal payoff of vaccination varies among age groups and changes over the course of an outbreak, and we recognize that individuals may not adhere to the utilitarian strategy when acting according to self-interest.\nOur game theoretical analyses of the vaccination program for an influenza A (H1N1) pandemic in the United States show that the utilitarian strategy prioritizes aggressive control among individuals of age 5 and 64 regardless of the timing of vaccination. In contrast, the Nash strategy dictates vaccination of adults, ages 25-49, as the first priority group. 
If the vaccination program is implemented before the peak of the pandemic wave, then the second priority group to be vaccinated under the Nash strategy is preschool-age children; however, if vaccination is delayed until the peak of the pandemic wave, then the second priority group is older adults (ages 50 to 64).", "To model the transmission of the 2009 H1N1 influenza and vaccination, we developed an age-structured model incorporating six epidemiological compartments (i.e. susceptible, vaccinated, latent, asymptomatic and infectious, symptomatic and infectious, and recovered). Each epidemiological compartment is then subdivided into two depending on an individual’s vaccination decision. The asymptotic dynamics of this model are then used to calculate the probability for individuals to become infected based on their vaccination decision. The expected costs of infection and vaccination associated with vaccine acceptance and refusal are calculated using these infection probabilities. Since the payoff of vaccination depends on both the individual’s decision and the population's average behavior, we formulate our model as a population game. Monte Carlo methods are employed to determine the optimal vaccination levels driven by self-interest versus the population interest.\n[SUBTITLE] Mathematical model for disease transmission and vaccination [SUBSECTION] To model H1N1 influenza transmission in the United States, we divide the population into five age groups (0-4, 5-24, 25-49, 50-64, and 65+), according to the age classes used in US CDC case reports [14]. The numbers of people in each age group were set to values estimated for the US 2008 population (Additional File) [15]. In our model, individuals in each age class k are subdivided based on epidemiological status. The dynamics of influenza infection, illness, and infectiousness reflect our current understanding of the natural history of influenza. Here, subscripts U and V represent the unvaccinated and vaccinated populations, respectively. We assume that SU,k (t), LU,k (t), AU,k (t), IU,k (t), and RU,k (t) represent the respective numbers of unvaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals in age group k at time t (k = 1, 2, . . . , 5). Similarly, we define SV,k (t), LV,k (t), AV,k (t), IV,k (t), and RV,k (t) as the respective numbers of vaccinated susceptible, latent, asymptomatic and infectious, symptomatic and infectious, and recovered individuals (k = 1, 2, . . . , 5).\nWe assume that the vaccine provides partial protection, resulting in vaccinated individuals being less susceptible than unvaccinated ones. Vaccinated individuals become infected at a fraction (1-σk) of the rate at which unvaccinated susceptible individuals become infected, where σk is the efficacy of the vaccine against infection for individuals of age group k (Additional File). We consider three vaccination scenarios in which vaccines become available before, at, or after the peak of an influenza pandemic. Thus, when vaccines become available, we assume that a proportion, ψk, of susceptible individuals in age group k is vaccinated. We also assume that the same proportion, ψk, of individuals in age group k who have been infected asymptomatically may still get vaccinated, because they are not aware of their exposure to the novel influenza A (H1N1) virus. 
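A minimal numerical sketch of the compartment structure described above — susceptible, latent, asymptomatic/symptomatic infectious, and recovered classes in unvaccinated and vaccinated strata, with an instantaneous vaccination pulse — is given below. It uses only two age groups and illustrative parameter values (contact rates, efficacy, timing), so it is an assumption-laden toy rather than the paper's calibrated five-group model; influenza mortality and residual immunity are omitted for brevity.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters for a two-group toy model (not the paper's estimates).
beta   = 0.05                                   # per-contact transmission probability
phi    = np.array([[12.0, 4.0], [4.0, 8.0]])    # daily contacts between groups
delta  = 1 / 3                                  # 1 / latent period (days^-1)
gamma  = 1 / 4                                  # 1 / infectious period (days^-1)
p_symp = 0.33                                   # fraction of infections symptomatic
sigma  = np.array([0.8, 0.6])                   # vaccine efficacy by group
N      = np.array([60e6, 40e6])                 # group sizes

def rhs(t, y):
    # y packs [S, L, A, I, R] for the unvaccinated then vaccinated strata, per group.
    y = y.reshape(2, 10)
    S_U, L_U, A_U, I_U, R_U, S_V, L_V, A_V, I_V, R_V = y.T
    lam = beta * phi @ ((A_U + I_U + A_V + I_V) / N)   # force of infection per group
    dS_U = -lam * S_U
    dL_U = lam * S_U - delta * L_U
    dA_U = (1 - p_symp) * delta * L_U - gamma * A_U
    dI_U = p_symp * delta * L_U - gamma * I_U
    dR_U = gamma * (A_U + I_U)
    dS_V = -(1 - sigma) * lam * S_V                    # vaccinees partially protected
    dL_V = (1 - sigma) * lam * S_V - delta * L_V
    dA_V = (1 - p_symp) * delta * L_V - gamma * A_V
    dI_V = p_symp * delta * L_V - gamma * I_V
    dR_V = gamma * (A_V + I_V)
    return np.column_stack([dS_U, dL_U, dA_U, dI_U, dR_U,
                            dS_V, dL_V, dA_V, dI_V, dR_V]).ravel()

# Seed one infectious person per group, run to the vaccination day, then move a
# proportion psi of remaining susceptibles into the (instantly protected) V stratum.
y0 = np.zeros((2, 10)); y0[:, 0] = N - 1; y0[:, 3] = 1
t_vacc, psi = 60.0, np.array([0.5, 0.5])
sol1 = solve_ivp(rhs, (0.0, t_vacc), y0.ravel(), rtol=1e-6)
y = sol1.y[:, -1].reshape(2, 10)
moved = psi * y[:, 0]
y[:, 0] -= moved; y[:, 5] += moved
sol2 = solve_ivp(rhs, (t_vacc, 300.0), y.ravel(), rtol=1e-6)
final = sol2.y[:, -1].reshape(2, 10)
attack = 1 - (final[:, 0] + final[:, 5]) / N            # ever-infected fraction
print("approximate attack rates by group:", attack.round(3))
```

The full model adds influenza mortality, residual immunity, five age groups and the survey-based contact matrix, but the pulse-vaccination bookkeeping is the same.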
However, vaccine doses given to those who were already exposed to H1N1 viruses are assumed to be wasted, because these individuals have already gained immunity to the H1N1 strain. Recovered individuals are assumed to be fully protected against further influenza infection for the remainder of the outbreak.\nUpon infection, individuals enter a latency period of average duration 1/δ. Latently infected individuals proceed to become infectious, and a proportion, p, of infected individuals becomes symptomatic. Infectious individuals recover after an average period of 1/γ. The influenza-induced death rates are αU,k and αV,k for unvaccinated and vaccinated individuals, respectively, in age group k. Age-specific influenza-related death rates are based on estimates of excess pneumonia and influenza deaths from the H1N1 influenza [16]. The transmission dynamics are thus described by the following differential equations:\nfor k=1,…, 5.\nWe used a standard-incidence form for the force of infection λk:\nwhere N is the total population size. Thus, it follows that\n where Nk is the number of people of age group k, i.e. \nHere ϕkm is the number of contacts per day between a person in age group k and people in age group m, and β is the probability of infection for a susceptible person who has contact with an infectious person.\nAs both epidemiological and serological data are suggestive of residual immunity to H1N1 among adults and seniors, we assume that a proportion (ξk) of individuals in age group k is immune to H1N1 viruses [4]. The residual immunity incorporates the fact that younger people are more susceptible to the current H1N1 strain than older people due to lack of exposure to a similar virus in the past [4,17]. The demographic effects of aging, birth, and death by causes unrelated to influenza are not included because we only model one influenza season, over which these demographic effects are negligible.\nThe epidemic is initiated with a proportion of each age group assumed to be immune to infection, with one person of each age group assumed infectious, and with the remaining population assumed susceptible. That is,\nWe assume that an influenza pandemic approaches its peak at time t=ω, and a proportion of the population is vaccinated at time τ=ω±θ where θ=0 or 21 days. We assume that vaccination instantaneously protects people, so that the state variables change discontinuously at t=τ:\nwith the other state variables remaining the same.\nWe further assume that the basic reproductive number (R0), defined as the number of secondary cases caused by a single infective case in a completely susceptible population, was 1.4, as estimated for the novel swine-origin H1N1 influenza outbreak [18]. For sensitivity analysis, the basic reproductive ratio was increased from 1.4 to 1.6 (Figure 1). We parameterized age-specific contact rates, ϕkm, using data from a large-scale survey of daily contacts [19]. These contact data show strong mixing between people of similar ages and moderately high mixing between children and people of their parents’ ages [20]. Given the contact data and US population size, we reconstructed the contact matrix to match our five age groups [19,20]. Using the relative size of age group m (Nm / N) and the number of contacts per person in age group k with people in age group m, ckm, we defined the elements of the contact matrix from these quantities. 
To ensure that the number of contacts between age groups is symmetric, Nmckm = Nkcmk, i.e., ϕkm = ϕmk, we made a further symmetrizing adjustment and used the adjusted ϕkm as the contact matrix.\n(a) Nash and (b) utilitarian strategies when the basic reproductive ratio is 1.6. Vaccination is assumed to be offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of a pandemic influenza.
\n[SUBTITLE] Cost parameterization [SUBSECTION] To calculate the average individual net payoff of a vaccination strategy, we incorporated the costs associated with infection, vaccination, and the side effects of the vaccine (Table 1). We calculate the cost of infection using a weighted average of the costs associated with possible infection outcomes such as mortality, hospitalization, outpatient visits, and cases without medical care. The cost of vaccination includes the value of an individual's time spent receiving it ($16) and travel costs ($4), giving an estimated total vaccination cost of $20 [21]. The cost of administration is not included in the baseline parameters because the vaccine for the 2009 novel H1N1 influenza was provided free of charge in the US. 
However, for sensitivity analysis we increased the cost of administration from $0 up to $20 in order to examine the elasticity of the Nash and utilitarian strategies over a range of vaccination costs (Figures 2 and 3).

Table 1. Parameterization of infection and vaccination cost.

Figure 2. (a) Nash and (b) utilitarian strategies when vaccine administration costs $10.

Figure 3. (a) Nash and (b) utilitarian strategies when vaccine administration costs $20.

We calculate the cost of vaccine side effects from the reduction in quality of life and the cost of treating individuals with severe side effects. Mild to moderate side effects are reported to occur with a probability of 5% and to reduce quality of life by 0.05 for two days on average [21]. To monetize vaccine side effects, we use the conversion that a quality-adjusted life year (QALY) is equivalent to $100,000 [22-24]. Thus, the cost associated with mild to moderate vaccine side effects can be estimated as

0.05 × 0.05 × (2/365) × $100,000 = $1.37.

In line with clinical data, severe vaccine side effects are assumed to occur with a probability of 0.001% and to result in hospitalization for 7 days, with the cost of ICU hospitalization to treat severe side effects taken as $3,739.05 per day [21]. In addition, we assume that severe vaccine adverse effects result in death with a probability of 5% [21]. We assume that all individuals value their life equally, irrespective of their age; the value of life is therefore estimated at $1,045,278 using the average expected future lifetime earnings across all ages [25,26]. The cost of severe side effects is then calculated as

10^-5 × ($3,739.05 × 7 + 0.05 × $1,045,278) = $0.78.
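The side-effect arithmetic above is easy to check programmatically. The short Python sketch below reproduces the two figures; the helper for the infection cost is left generic because the outcome-specific probabilities and costs live in Table 1, which is not reproduced here, and all function names are ours.

```python
QALY_VALUE = 100_000        # dollars per quality-adjusted life year [22-24]
VALUE_OF_LIFE = 1_045_278   # average expected future lifetime earnings, all ages [25,26]

def expected_infection_cost(outcome_probs, outcome_costs):
    """Weighted average over infection outcomes (death, hospitalization,
    outpatient visit, no medical care), as described for Table 1."""
    return sum(p * c for p, c in zip(outcome_probs, outcome_costs))

def mild_side_effect_cost(prob=0.05, qaly_loss=0.05, days=2):
    """5% chance of a mild/moderate reaction costing 0.05 QALY weight for 2 days."""
    return prob * qaly_loss * (days / 365) * QALY_VALUE              # ~= $1.37

def severe_side_effect_cost(prob=1e-5, icu_per_day=3739.05, days=7, death_prob=0.05):
    """0.001% chance of a severe reaction: 7 ICU days plus a 5% risk of death."""
    return prob * (icu_per_day * days + death_prob * VALUE_OF_LIFE)  # ~= $0.78

print(round(mild_side_effect_cost(), 2), round(severe_side_effect_cost(), 2))  # 1.37 0.78
```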
Payoff to vaccination strategy

In our vaccination game, the payoff to an individual choosing a particular strategy depends on the average behavior of the population. We consider two basic strategies: "vaccinator" (obtain vaccination) and "non-vaccinator" (decline vaccination). For both strategies, the payoff to an individual is measured as a monetary cost arising from infection and/or vaccination, based on the probability of infection and on vaccine risks (Table 1 and Additional File). We also parameterized the payoff calculations with age-specific distributions of vaccine efficacy in reducing influenza morbidity and mortality (Additional File).

The net payoff to the vaccinator strategy is

Uvac,k = -(xk CIV,k + CV + zM CM + zS CS),

where xk is the probability of infection among vaccinators, zM is the probability of mild to moderate side effects, and zS is the probability of severe side effects. CIV,k denotes the cost of infection among vaccinators in age group k, CV denotes the cost of vaccination, and CM and CS denote the costs of mild and severe side effects associated with vaccination, respectively.

Because vaccine efficacy is imperfect, a vaccinator may still become infected, with a reduced probability of infection xk that depends both on the vaccination probability of age group k (ψk) and on the vaccination probabilities across all age groups. If infected, vaccinated individuals incur a lower infection cost (CIV,k) than unvaccinated ones. The probability of symptomatic infection among vaccinators in age group k who were not yet infected before vaccination is obtained from the model as the cumulative number of symptomatic infections among vaccinated individuals after vaccination at time t = τ, divided by the maximum number of people in age group k who could be vaccinated; people who have already been symptomatically infected by time τ know that they have gained immunity against H1N1 and therefore do not get vaccinated, so they are excluded from that maximum.

The net payoff to a non-vaccinator is

Unonvac,k = -yk CIN,k,

where CIN,k denotes the cost of infection among non-vaccinators of age group k, and yk is the probability of symptomatic infection among non-vaccinators, computed from the cumulative number of symptomatic infections among unvaccinated individuals in age group k after vaccination is implemented at time t = τ.
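As with the costs, the two strategy payoffs can be written down directly once the infection probabilities xk and yk have been obtained from the epidemic model at a given coverage vector ψ. The sketch below is our reading of the definitions above, since the displayed equations themselves did not survive extraction; variable names follow the notation of this section.

```python
def payoff_vaccinator(x_k, c_inf_vac_k, c_vac, z_mild, c_mild, z_severe, c_severe):
    """Net payoff (negative expected cost) of vaccinating, for age group k:
    residual infection risk, the vaccination cost itself, and expected
    side-effect costs."""
    return -(x_k * c_inf_vac_k + c_vac + z_mild * c_mild + z_severe * c_severe)

def payoff_nonvaccinator(y_k, c_inf_nonvac_k):
    """Net payoff (negative expected cost) of declining vaccination, for age
    group k: only the expected cost of infection."""
    return -(y_k * c_inf_nonvac_k)
```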
Defining the Nash strategy

For individuals driven by self-interest, game-theoretic decisions are assumed to settle at a Nash equilibrium, at which it is impossible for a few individuals to increase their payoffs by switching to a different strategy [27]. We define the individual decisions at this Nash equilibrium as the Nash strategy. A pure vaccinator strategy cannot be a Nash equilibrium, because when population vaccine coverage is 100%, an individual who chooses the non-vaccinator strategy reaps the benefits of herd immunity without paying for vaccination and without risking possible vaccine side effects. By comparison, the non-vaccinator strategy can be an individual optimum under certain conditions, for example when the infection risk is already sufficiently low by the time vaccines become available. In our age-structured model, it may be best for some people in an age group to be vaccinated and for others in the same group not to be. To allow this scenario, we consider mixed strategies whereby individuals in age group k choose the vaccinator strategy with probability ψk (0 < ψk < 1) and the non-vaccinator strategy otherwise. If all individuals play the mixed strategy ψk, then a proportion ψk of the population in age group k is vaccinated. The individual optimum can be found by solving for ψk,ind in the equation Uvac,k = Unonvac,k (k = 1, ..., 5). The individual optimum ψk,ind predicted by this game-theoretical analysis corresponds to the level of coverage expected under a voluntary program in which individuals act rationally to maximize their own payoffs.
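Numerically, the Nash coverage for a group can be located by searching for the coverage level at which the two payoffs are equal, falling back to the corners 0 or 1 when no interior indifference point exists. The sketch below assumes a user-supplied payoff_gap function (our name) that re-simulates the epidemic at the candidate coverage and returns Uvac,k - Unonvac,k; iterating this group by group until the coverage vector stops changing is one way, not necessarily the authors' procedure, of approximating the joint Nash equilibrium.

```python
from scipy.optimize import brentq

def nash_coverage_for_group(payoff_gap, lo=0.0, hi=1.0):
    """Mixed-strategy Nash coverage for one age group, holding the other groups
    fixed: the psi at which vaccinating and not vaccinating pay equally, or a
    corner solution when indifference is never reached."""
    if payoff_gap(lo) <= 0:     # vaccinating does not pay even at zero coverage
        return 0.0
    if payoff_gap(hi) >= 0:     # vaccinating still pays at full coverage
        return 1.0
    return brentq(payoff_gap, lo, hi)   # interior indifference point
```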
Defining the utilitarian strategy

From the perspective of group interest, the objective is to maximize the total payoff of vaccinators and non-vaccinators. If ψk is the proportion of the population in age group k that is vaccinated, the expected total payoff from vaccination over the course of the influenza pandemic can be written as

T(ψ1, ψ2, ψ3, ψ4, ψ5) = Σk Nk [ψk Uvac,k + (1 - ψk) Unonvac,k].

We then maximize T(ψ1, ψ2, ψ3, ψ4, ψ5) over the parameter space {(ψ1, ψ2, ψ3, ψ4, ψ5) | 0 ≤ ψk ≤ 1} to determine the utilitarian strategy (ψ1*, ψ2*, ψ3*, ψ4*, ψ5*), the coverage levels that maximize the total payoff.
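Because the utilitarian objective is a box-constrained maximization, it can be handled with an off-the-shelf optimizer. The sketch below assumes a user-supplied total_payoff function (our name) that re-runs the epidemic model at the coverage vector ψ and returns T(ψ); random restarts are used because T need not be concave.

```python
import numpy as np
from scipy.optimize import minimize

def utilitarian_coverage(total_payoff, n_groups=5, restarts=5, seed=0):
    """Coverage vector in [0, 1]^n_groups that maximizes the total payoff T."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(restarts):
        psi0 = rng.uniform(0.0, 1.0, n_groups)
        res = minimize(lambda psi: -total_payoff(psi), psi0,
                       bounds=[(0.0, 1.0)] * n_groups, method="L-BFGS-B")
        if best is None or res.fun < best.fun:
            best = res
    return best.x, -best.fun
```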
Epidemiological impact of the 2009 H1N1 influenza pandemic

Our age-structured model of influenza transmission predicts that 41% of the US population would be infected with pandemic H1N1 influenza in the absence of interventions (Figures 4 and 5). Based on our assumption that on average 33% of infected people become symptomatic after a three-day incubation period [28], we estimate that 13% of the population would be symptomatically infected during the current influenza pandemic, consistent with the estimates of previous modeling studies [29-31]. However, the age-specific attack rates are predicted to vary considerably between age groups because of age-dependent activity patterns and immune profiles. The highest incidence is predicted to occur in individuals of age 5-24, followed by the adult population of age 25-49, with symptomatic plus asymptomatic attack rates of 57% and 43%, respectively (Figure 4b). The lowest attack rate (14%) is predicted to occur in the oldest age group (age 65 and older).

Figure 4. Outcomes of influenza A/H1N1 infection. (a) Simulated age-stratified daily influenza A/H1N1 infection incidence per 100,000 individuals in the absence of vaccination or other interventions. (b) Simulated age-specific attack rates in the absence of vaccination or other interventions. Both symptomatic and asymptomatic cases are shown.

Figure 5. Cumulative incidence of influenza A/H1N1 when vaccination is guided by the Nash or utilitarian strategies. Vaccination is offered free of charge and implemented three weeks before, exactly at, or three weeks after the peak of the influenza pandemic.
Solid lines show the cumulative attack rate when vaccination follows the utilitarian strategy and is offered free of charge, whereas dotted lines show the cumulative attack rate when vaccination follows the Nash strategy. For comparison, the cumulative incidence without vaccination is also shown.

Our results also suggest that individuals in each age group reach their highest incidence at different times (Figure 4a). School-age children and young adults (age 5-24) reach their maximum incidence first, followed by adults (age 25-49) and preschool-age children under five years of age. In contrast, the oldest group is the last to reach its maximum incidence.

In the absence of vaccination or other interventions, our model predicts that the 2009 H1N1 influenza pandemic would result in 271 hospitalizations and 13 deaths per 100,000 individuals. Adults (age 25-49), who have the highest case fatality and case hospitalization ratios of all age groups, bear the largest burden of deaths (7 of the 13 per 100,000) and hospitalizations (133 of the 271 per 100,000), followed by school-age children and young adults of age 5-24 (Figures 4, 6, 7).

Figure 6. Hospitalizations per 100,000 when vaccination follows the Nash and utilitarian strategies. The number of hospitalizations per 100,000 is shown by age group with and without vaccination. (a) Vaccination implemented according to the Nash strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak. (b) Vaccination implemented according to the utilitarian strategy at the same three time points.

Figure 7. Deaths per 100,000 when vaccination follows the Nash and utilitarian strategies. The number of deaths per 100,000 is shown by age group with and without vaccination. (a) Vaccination implemented according to the Nash strategy at different times in the pandemic wave, i.e. three weeks before, exactly at, or three weeks after the peak. (b) Vaccination implemented according to the utilitarian strategy at the same three time points.
Optimal H1N1 vaccine distribution based on individual self-interest

To determine optimal H1N1 vaccine distributions based on individual self-interest versus population interest, we constructed a game-theoretical, age-structured model of influenza transmission that allows for delayed vaccination. Our calculations show that when vaccination occurs three weeks prior to the peak of the pandemic wave, the Nash (individual-based) strategy prioritizes vaccinating adults (age 25-49) and preschool-age children, followed by school-age children and young adults (age 5-24) and then older adults (age 50-64) (Figure 8a).
The Nash vaccination strategy for the senior population of age 65 and older would be to refuse vaccination. With this strategy, the vaccination program is predicted to reduce the overall attack rate from 41% to 15%, averting 8,514 clinical infections, 179 hospitalizations and 9 deaths per 100,000 individuals.

Figure 8. Nash (a) and utilitarian (b) strategies when vaccination is offered free of charge. Vaccination is implemented three weeks before, exactly at, or three weeks after the peak of the influenza pandemic.

The Nash strategy, however, was found to be highly dependent on the timing of vaccine implementation (Figure 8a). If vaccine production is delayed, the payoff to vaccinators is diminished because the remaining risk of infection is lower. For example, if vaccination is implemented at the peak of the pandemic, the Nash vaccination strategy no longer includes preschool-age children. Instead, the Nash strategy is to vaccinate 91% of adults (age 25-49), 87% of older adults (age 50-64), and 63% of school-age children and young adults. This change occurs because preschool-age children have a relatively early pandemic peak and moderate morbidity compared with other age groups, so when vaccination is significantly delayed the relative infection risk of this group is low.

The only age groups included in the Nash vaccination strategy regardless of vaccine delay are adults (age 25-49) and school-age children/young adults (age 5-24). In general, the demand for vaccine among adults (age 25-49) is the most inelastic to vaccine delay, whereas the Nash level of vaccine coverage for school-age children and young adults falls rapidly with vaccine delay (Figure 8). For instance, when vaccination is delayed until three weeks after the pandemic peak, the Nash strategy is to vaccinate 88% of adults (age 25-49) and 23% of school-age children/young adults. This allocation would result in an overall attack rate of 36%, with 229 hospitalizations and 11 deaths per 100,000 individuals (Figures 5, 6, 7).

Our results also demonstrate the dependence of the Nash strategy on the basic reproductive ratio of pandemic influenza. At higher transmissibility (R0 = 1.6), the Nash levels of vaccination are 100% among adults of age 25-64 regardless of the timing of vaccine implementation. This is, in part, because the case fatality and case hospitalization ratios are highest in these age groups, yielding a high payoff from vaccination (Figure 1). School-age children and young adults (age 5-24), on the other hand, are expected to seek vaccination only if it is offered on or before the pandemic peak (Figures 9 and 10).

Figure 9. Nash strategy when vaccine is available at the beginning of the influenza pandemic and vaccination is offered free of charge.

Figure 10. Utilitarian strategy when vaccine is available at the beginning of the influenza pandemic and vaccination is offered free of charge.

Finally, we considered the effect of a rising cost of vaccination on the Nash strategy of each age group. The Nash vaccination level among adults (age 25-49) is the most inelastic to changes in vaccination cost (Figures 2 and 3). This inelasticity arises because, in this age group, the infection risk and case fatality rate of H1N1 are high and residual immunity is low.
In contrast, the Nash vaccination level among older adults (age 50-64) is the most elastic to changes in vaccination cost, demonstrating the trade-off between the cost of the vaccine and its reduced benefit when delivery is delayed. This elasticity arises because the infection risk in this age group is relatively low, a result of residual immunity against H1N1 and low contact rates. Seniors of age 65 or older are unlikely to seek voluntary vaccination over a wide range of vaccination costs because their risk of infection, and thus their payoff from vaccination, is the lowest of all age groups.
Optimal H1N1 vaccine distribution based on population interest

The average vaccination level across all age groups under the utilitarian strategy is higher than under the Nash strategy (Figure 8). For example, if vaccines become available three weeks before the pandemic peak, the overall Nash and utilitarian vaccine coverages are 76% and 82%, respectively. When 93% of young individuals (age under 24), 96% of adults (age 25-64) and 1% of seniors (age 65 and older) are vaccinated according to the utilitarian strategy three weeks prior to the peak of the influenza pandemic, 25,767 clinical infections, 182 hospitalizations and 9 deaths would be averted per 100,000 individuals (Figures 5, 6, 7).

The utilitarian strategy is, however, much less effective if vaccination is delayed. For instance, if vaccination is delayed until the peak of the influenza pandemic, an estimated 15,683 infections, 112 hospitalizations and 6 deaths would be averted per 100,000 individuals, considerably fewer than when vaccination is implemented before the pandemic peak (Figures 5 and 8).

We also find that the utilitarian vaccine coverage levels are more inelastic than those under the Nash strategy. For instance, if vaccination is delayed until three weeks after the pandemic peak, the vaccine coverage level under the Nash strategy falls to 37%, whereas the utilitarian strategy still vaccinates 73% of the population. The resulting disease incidence and number of disease-related deaths are therefore lower under the utilitarian strategy than under the Nash strategy. In this case the utilitarian strategy includes vaccinating 90% of older adults (age 50-64), 88% of adults (age 25-49), 83% of school-age children/young adults (age 5-24) and 43% of preschool-age children.
At a higher basic reproductive ratio of 1.6, the utilitarian strategy also includes the vaccination of preschool-age children (age 0-4), because the risk of infection increases with the transmissibility of the influenza virus, increasing the payoff of vaccination (Figure 1). However, the Nash vaccination strategy does not include preschool-age children or older adults, and the utilitarian coverage levels may therefore be unachievable under voluntary vaccination.
Discussion

For pandemic H1N1, we find that the individual-based (Nash) vaccination strategies differ significantly from the utilitarian vaccination strategies. Without vaccination delay, the primary priority group under the utilitarian strategy is school-age children and young adults (age 5-24) because of their important role in transmitting disease (Figures 9 and 10). Among adults of age 25-49, the case hospitalization ratio and the case fatality ratio are the highest, so vaccinating these individuals yields high individual and population payoffs.
Indeed, regardless of the length of the delay and whether vaccination is guided by the Nash or the utilitarian strategy, younger adults (age 25-49) are among the highest priority groups for vaccination.\nHowever, the second priority group changes dramatically under the Nash strategy. If vaccination occurs before the pandemic peak, the second Nash priority group is preschool-age children. If vaccination is delayed, the second Nash priority group is shifted to older adults (age 50-64) or to school-age children/younger adults (age 5-24). The peak incidence among preschool-age children is relatively early compared to other age groups, thus lowering the benefit of vaccination to these children with time. Because the case fatality ratio is the highest among older adults, and H1N1 morbidity is the highest among school-age children/younger adults, the benefit of vaccination for these groups is relatively inelastic over the course of a pandemic. Therefore, the demand for vaccines among these age groups is high even if vaccination is delayed in a pandemic.\nThe discordance between the Nash and utilitarian strategies is even more pronounced when vaccine availability is delayed. If vaccination is delayed but implemented near the pandemic peak, the utilitarian vaccination strategy includes individuals of age up to 64, in contrast to the Nash strategy which excludes preschool-age children and older adults (age 50-64) (Figure 8). If vaccination is further delayed, the Nash strategy would also exclude adults (age 25-49), preschool-age children and older adults (age 50-64), whereas the utilitarian vaccination strategy still includes individuals of age up to 64. Therefore, the average vaccination level across all age groups under the utilitarian strategy was found to be higher than that under the Nash strategy.\nOverall, our results indicate that vaccination levels under a voluntary immunization program may not be optimal for the population, regardless of vaccine delay. Such discordance between the Nash and utilitarian strategies is predicted to be robust to an increase in the basic reproductive ratio for pandemic influenza (Figure 1). This finding is consistent with those of previous studies, which demonstrated that, in the context of vaccination against smallpox and seasonal influenza, the vaccination levels driven by self-interest are likely to be lower than those that are optimal from the population perspective [32-35].\nThere are three primary reasons for the discrepancy between the individual-based and utilitarian age-specific vaccination levels for pandemic H1N1. First, different age groups have different incentives to vaccinate. In particular, an earlier pandemic peak among young individuals results in a relatively low infection risk later in the pandemic compared to that for older adults. Therefore, the young are predicted to under-vaccinate under the Nash strategy relative to the utilitarian strategy when vaccination is delayed. Second, the positive externalities of indirect protection by herd immunity also contribute to the differences between utilitarian and Nash vaccination strategies. The benefits of herd immunity contribute to the utilitarian strategy, but also create an incentive for individuals to free ride on the vaccination of others. Consequently, the overall level of population vaccination is lower for the Nash strategy than for the utilitarian strategy. 
Third, because vaccine delivery was delayed for the H1N1 pandemic, our model predicts that people will be less inclined to vaccinate than if the vaccine were available at the beginning of the pandemic. As a consequence, achieving vaccination rates high enough to realize the utilitarian strategy may be difficult, and the discordance between the Nash and utilitarian strategies is found to increase with vaccine delay.\nThe guidelines for vaccinating against the 2009-2010 pandemic H1N1 influenza proposed by the CDC’s Advisory Committee on Immunization Practices (ACIP) prioritize young people aged 6 months to 25 years, who are the most efficient at transmitting influenza viruses [36]. This guideline also reflects the reduced susceptibility among the elderly due to their residual immunity from past exposure [37]. If large stockpiles of vaccines had been available prior to the pandemic, the optimal vaccine distribution strategy would be to vaccinate children in order to reduce transmission and achieve herd immunity [38,39]. However, our analysis suggests that the success of such vaccination strategies depends heavily on the timing of a vaccine’s availability. Nevertheless, our analysis might be limited by the difficulties of knowing the state of the pandemic at the time vaccines become available. In addition, our outcome measure (i.e., the costs of infection and vaccination) may oversimplify the vaccination decisions or be incongruous with the considerations of the ACIP.\nWe found that, for both the Nash and utilitarian strategies, the optimal vaccination strategies with vaccine delay should prioritize individuals of age 25 to 49. Our results also suggest that a utilitarian vaccine strategy should include individuals from a wide range of ages, from 5 months to 65 years; and for longer delays in vaccination, vaccination priority should increasingly be given to older individuals. Our results further suggest that age-specific demands for vaccination depend on the risk of infection at the time of vaccine delivery and the severity of the disease. When vaccination is delayed, voluntary adherence to vaccine recommendations might become lower among young individuals. This suggests that influenza pandemic response plans should include efforts to encourage the vaccination of young individuals if vaccine delivery is delayed.", "H1N1: (H: Hemagglutinin) (N: Neuraminidase); WHO: World Health Organization; CDC: Centers for Disease Control and Prevention", "The authors declare that they have no competing interests.", "ES developed and analyzed the model, and carried out numerical simulations. ES produced all figures, interpreted results, and wrote the manuscript. AG suggested some of the simulations and helped write the manuscript. LM edited the manuscript. All authors read and approved the final manuscript." ]
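The record above reasons about Nash (self-interested) versus utilitarian vaccination coverage as a function of vaccine timing, residual infection risk, and the relative costs of infection and vaccination. As a rough illustration of that logic only, the sketch below iterates best responses toward a Nash coverage profile in a toy two-group SIR model with vaccination delivered on a fixed day. The contact matrix, group definitions, costs, and timing are invented placeholders, not the study's age groups, parameters, or model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative two-group toy parameters (assumptions, not the study's values).
GROUPS   = ["young", "adult"]
N        = np.array([0.4, 0.6])          # group sizes (fractions of the population)
BETA     = np.array([[0.8, 0.3],         # transmission rates between groups, per day
                     [0.3, 0.5]])
GAMMA    = 1.0 / 3.0                     # recovery rate (about a 3-day infectious period)
COST_INF = np.array([1.0, 3.0])          # relative cost of infection by group
COST_VAX = 0.1                           # relative cost of vaccination
T_VAX, T_END = 30.0, 365.0               # day the vaccine arrives, end of horizon
I0       = 1e-4                          # initial infectious fraction in each group

def rhs(t, y):
    """Two-group SIR plus each group's cumulative infection hazard."""
    s, i = y[:2], y[2:4]
    foi = BETA @ i                       # force of infection on each group
    return np.concatenate([-foi * s, foi * s - GAMMA * i, foi])

# Epidemic state on the day vaccination becomes available (shared by all evaluations).
y0  = np.concatenate([N - I0, np.full(2, I0), np.zeros(2)])
PRE = solve_ivp(rhs, (0.0, T_VAX), y0, rtol=1e-6, atol=1e-9).y[:, -1]

def residual_risk(coverage):
    """P(infection after T_VAX) for an unvaccinated susceptible in each group."""
    s = PRE[:2] * (1.0 - np.asarray(coverage))   # vaccinees leave the susceptible pool
    y = np.concatenate([s, PRE[2:4], np.zeros(2)])
    hazard = solve_ivp(rhs, (T_VAX, T_END), y, rtol=1e-6, atol=1e-9).y[4:6, -1]
    return 1.0 - np.exp(-hazard)

def best_response(coverage, g, grid=np.linspace(0.0, 1.0, 21)):
    """Coverage at which members of group g are indifferent (or a corner solution)."""
    cost = np.array([residual_risk(np.where(np.arange(2) == g, x, coverage))[g]
                     for x in grid]) * COST_INF[g]
    if cost[0] <= COST_VAX:
        return 0.0                       # not worth vaccinating even with zero coverage
    if cost[-1] >= COST_VAX:
        return 1.0                       # worth vaccinating even under near-full coverage
    return float(np.interp(COST_VAX, cost[::-1], grid[::-1]))   # indifference point

coverage = np.array([0.5, 0.5])
for _ in range(15):                      # best-response dynamics toward a Nash profile
    new = np.array([best_response(coverage, g) for g in range(2)])
    if np.abs(new - coverage).max() < 1e-3:
        break
    coverage = new
print("approximate Nash coverage:", dict(zip(GROUPS, np.round(coverage, 2))))
```

The utilitarian counterpart would instead search for the coverage vector that minimizes the population-wide sum of infection and vaccination costs, which is why its coverage can exceed the Nash level when the herd-immunity externality is large.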
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Emergence and dynamics of influenza super-strains.
21356135
Influenza super-strains can emerge through recombination of strains from birds, pigs, and humans. However, once a new recombinant strain emerges, it is not clear whether the strain is capable of sustaining an outbreak. In certain cases, such strains have caused major influenza pandemics.
BACKGROUND
Here we develop a multi-host (i.e., birds, pigs, and humans) and multi-strain model of influenza to analyze the outcome of emergent strains. In the model, pigs act as "mixing vessels" for avian and human strains and can produce super-strains from genetic recombination.
METHODS
We find that epidemiological outcomes are predicted by three factors: (i) contact between pigs and humans, (ii) transmissibility of the super-strain in humans, and (iii) transmissibility from pigs to humans. Specifically, outbreaks will reoccur when super-strain infections between humans are relatively infrequent (e.g., R0=1.4) but pig-to-human infections are frequent, and a large-scale outbreak followed by successively damping outbreaks will occur when human transmissibility is high (e.g., R0=2.3). The average time between the initial outbreak and the first resurgence varies from 41 to 82 years. We determine that the largest outbreak will occur when 2.3 < R0 < 3.8 and that the highest cumulative number of infections occurs when 0 < R0 < 3.0 and, for lower R0 values (0 < R0 < 1.9), depends on the frequency of pig-to-human infections.
RESULTS
Our results provide insights into the effect of species interactions on the dynamics of influenza super-strains. Counterintuitively, epidemics may occur in humans even if the transmissibility of a super-strain is low. Surprisingly, our modeling shows that strains that have generated past epidemics (e.g., H1N1) could resurge decades after they have apparently disappeared.
CONCLUSIONS
[ "Animals", "Birds", "Communicable Diseases, Emerging", "Disease Outbreaks", "Humans", "Influenza A Virus, H1N1 Subtype", "Influenza A virus", "Influenza in Birds", "Influenza, Human", "Orthomyxoviridae Infections", "Recombination, Genetic", "Swine" ]
3317579
null
null
Methods
To study the emergence of super-strains from interspecies sources, we have constructed a novel multi-strain/multi-host (MSMH) model [21]. This enables the transmission of influenza to be tracked within and between species. We characterize a super-strain in humans as a strain that is highly virulent and capable of directly infecting all humans (i.e., both non-infected and seasonally infected humans) once the strain emerges. Furthermore, when a human with the seasonal strain acquires the super-strain, the super-strain is dominant and there is no coinfection. We use the model to analyze the consequences of a super-strain that forms from a highly virulent (but perhaps non-transmissible) avian strain (e.g., H5N1) and a seasonal human strain. A flow diagram for this model is shown in Fig. 1. The avian model (signified by subscript b) consists of susceptibles (Sb) and infectives (Ib); the human model (signified by subscript h) consists of susceptibles (Sh), seasonal infectives (Ih), and super-strain infectives (Jh); and the swine model (signified by subscript p) consists of susceptibles (Sp), seasonal human and avian strain infectives (Ip,1 and Ip,2, respectively), coinfected pigs (Ip,12), and infectives that carry a super-strain from recombination during coinfection (Jp). The three host species are coupled as an interacting species system, where the couplings enable avian and human strains to infect pigs (i.e., pigs act as intermediate hosts) and super-strains (developed in pigs) to infect humans. A pig coinfected with both avian and human strains is a “mixing vessel” and can produce a super-strain capable of infecting humans. The ten equations that specify the model, as well as a more detailed description, are given in Section 1 of the Appendix.\nSchematic diagram illustrating the MSMH model. Susceptibles are denoted by S, infectious classes by I, and super-strain infectious classes by J. Each species’ classes and parameters are subscripted by b (birds), h (humans), and p (pigs). All host species have an SIS structure, where recruitment of the entire population is logistic and enters the susceptible subclass, and individuals leave the infected subclasses either by recovery into the susceptible subclass or by disease-induced death. The birds have a basic structure, humans have a super-infection structure, and pigs have a coinfection structure with recombination. Solid arrows represent the transfer due to infection within a single host population. Dashed arrows represent the direction of interspecies infectivity from human-to-pig with the endemic human strain, bird-to-pig with the endemic avian strain, and pig-to-human with the super-strain. A description of the variables and parameters is given in Sections 1 and 3 of the Appendix.\nWe calibrated the MSMH model using data on humans and swine in Southeast Asia. The two human infectious classes are parameterized to represent individuals infected with seasonal strains (e.g., H1N1 and H3N2) and a transmissible super-strain with disease virulence similar to the H5N1 strain (i.e., 68 percent disease-induced mortality rate) [8]. We chose reasonable parameter values for human influenza transmissibility in Thailand [8,22] and set similar values to sustain prevalent levels of avian and human reassortants in pigs [21]; see Section 3 of the Appendix. Population growth parameters and initial population densities for humans and pigs were determined from census data in Thailand for the twentieth century; see Section 3 of the Appendix. 
We assume the rate at which a coinfected pig produces a super-strain, ψp, is 0.1. We define the pig-to-human contact transmissibility γp as the probability per unit time that a super-strain-infected pig will come into contact with and infect a human. That is, γp is a product of two factors: (i) the rate at which a human will come into contact with a pig and (ii) the rate at which a pig will successfully infect a human with the super-strain. Similarly, the parameters βh,1 and βh,2 are the transmissibilities of the seasonal and super-strains, respectively, to susceptible humans (Sh). The transmissibility of the super-strain to seasonally infected humans (or super-transmissibility) is δh, which we will assume is equal to βh,2 (i.e., super-strain transmission is the same for susceptibles (Sh) and seasonal infectives (Ih,1)). The basic reproductive number R0 for each strain is the average number of secondary infections that one infectious individual can generate in an entirely susceptible population [23]. The R0 values for endemic strains (i.e., excluding the super-strain in humans) in each species range between 1 and 2.5. We examine R0 values from 0 to 9.5 for a super-strain in humans [18] together with pig-to-human contact transmissibility ranging from 0 to 100 percent, which allows us to consider all epidemiologically relevant cases for past influenza pandemics.
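To make the model structure described above concrete, here is a minimal numerical sketch of a simplified MSMH-style system: three SIS-type host populations with logistic recruitment, avian and human strains seeding pigs, coinfected pigs producing a recombinant super-strain at rate ψp, and super-infectious pigs seeding humans at rate γp. The equations are an assumed reading of the verbal description (the authors' ten equations live in their Appendix), and every numerical value is illustrative rather than the calibrated Thailand parameterization.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameter values (time unit: years); NOT the paper's calibration.
p = dict(
    rb=0.05,  Kb=10.0, beta_b=0.05,  alpha_b=0.9,   v_b=0.05,       # birds, avian strain
    rh=0.038, Kh=87.3, beta_h1=0.025, alpha_h1=0.998, v_h1=0.002,   # humans, seasonal strain
    beta_h2=0.02, delta_h=0.02, alpha_h2=0.32, v_h2=0.68,           # humans, super-strain
    rp=0.093, Kp=9.16, beta_p1=0.05, beta_p2=0.05, alpha_p=0.9, v_p=0.05,  # pigs
    g_b=0.01, g_h=0.01,   # bird-to-pig and human-to-pig seeding rates (assumed)
    psi_p=0.1,            # recombination rate in coinfected pigs
    gamma_p=0.02,         # pig-to-human transmission of the super-strain (assumed mass action)
)

def msmh_rhs(t, y, p):
    Sb, Ib, Sh, Ih, Jh, Sp, Ip1, Ip2, Ip12, Jp = y
    Nb, Nh, Np = Sb + Ib, Sh + Ih + Jh, Sp + Ip1 + Ip2 + Ip12 + Jp
    # Birds: single avian strain, SIS with logistic recruitment into S.
    dSb = p["rb"] * Nb * (1 - Nb / p["Kb"]) - p["beta_b"] * Sb * Ib + p["alpha_b"] * Ib
    dIb = p["beta_b"] * Sb * Ib - (p["alpha_b"] + p["v_b"]) * Ib
    # Humans: seasonal strain Ih, super-strain Jh (seeded by super-infectious pigs).
    inf1 = p["beta_h1"] * Sh * Ih
    inf2 = (p["beta_h2"] * Jh + p["gamma_p"] * Jp) * Sh
    sup  = p["delta_h"] * Ih * Jh            # super-infection of seasonal cases
    dSh = p["rh"] * Nh * (1 - Nh / p["Kh"]) - inf1 - inf2 + p["alpha_h1"] * Ih + p["alpha_h2"] * Jh
    dIh = inf1 - sup - (p["alpha_h1"] + p["v_h1"]) * Ih
    dJh = inf2 + sup - (p["alpha_h2"] + p["v_h2"]) * Jh
    # Pigs: human strain (1), avian strain (2), coinfection, recombination into Jp.
    foi1 = p["beta_p1"] * Ip1 + p["g_h"] * Ih
    foi2 = p["beta_p2"] * Ip2 + p["g_b"] * Ib
    rem  = p["alpha_p"] + p["v_p"]
    dSp   = p["rp"] * Np * (1 - Np / p["Kp"]) - (foi1 + foi2) * Sp \
            + p["alpha_p"] * (Ip1 + Ip2 + Ip12 + Jp)
    dIp1  = foi1 * Sp - foi2 * Ip1 - rem * Ip1
    dIp2  = foi2 * Sp - foi1 * Ip2 - rem * Ip2
    dIp12 = foi2 * Ip1 + foi1 * Ip2 - p["psi_p"] * Ip12 - rem * Ip12
    dJp   = p["psi_p"] * Ip12 - rem * Jp
    return [dSb, dIb, dSh, dIh, dJh, dSp, dIp1, dIp2, dIp12, dJp]

y0 = [9.0, 1.0, 60.0, 0.5, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0]   # illustrative densities
sol = solve_ivp(msmh_rhs, (0.0, 200.0), y0, args=(p,), max_step=0.1, rtol=1e-6)
print("peak super-strain density in humans: %.4f" % sol.y[4].max())
print("year of peak: %.1f" % sol.t[sol.y[4].argmax()])
```

Under assumptions like these, an isolated strain's reproductive number takes the standard SIS-with-virulence form R0 ≈ βK/(α + v), and the full system's R0 is the spectral radius of its next-generation matrix; both are textbook forms offered for orientation here, not the paper's own expressions.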
null
null
null
null
[ "Background", "Results", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Appendix", "Multi-strain/multi-host model", "Basic reproductive numbers", "Parameter values" ]
[ "Until very recently there has been limited attention focused on the role of pigs in the spread of influenza and even less on the possibility of recombination of avian and human viruses in pigs [1-4]. The emergence of the novel strain of influenza A (H1N1) in Mexico [5,6] in April 2009 has shown birds, pigs and humans can all play a role in the formation of new strains and can be transmitted from pigs to humans [7]. Prior to the outbreak, much of the surveillance against pandemic influenza has been focused on avian strains (e.g., H5N1).\nInfluenza pandemics occur when a transmissible strain emerges into the human population in which humans have little or no immunity. One way this can occur is when a strain emerges in humans from a reservoir host species (e.g., birds or pigs). In fact, the three previous influenza pandemics (1918 H1N1, 1957 H2N2, and 1968 H3N2) emerged to humans from a non-human reservoir and were subtypes that originated in avian hosts [7]. In fact, strong evidence suggests all influenza subtypes trace back to avian origin, implying the avian virus emerged and established itself in mammals from birds. Avian strains such as the H5N1 have proven to be the most threatening particularly to humans upon infection, killing approximately 50 to 70 percent of those infected [8]. An H5N1 pandemic appears unlikely in that the strain has been shown to transmit poorly between humans. Furthermore, a strain with such high virulence would also need to be highly transmissible in order for a pandemic to occur.\nGenetic changes in influenza provide new opportunities for pathogen emergence in humans, particularly through recombination. When coinfection occurs, two strains may interact and recombine to form a new reassortant strain. Intermediate hosts, such as pigs and domestic poultry, play an important role in influenza infection between birds and human. Pigs are also capable of infection by human influenza viruses [9]. It has been reported that pigs are capable of infecting humans [4,10,11] and are likely candidates as intermediate hosts for coinfection of interspecies strains [12].\nSeveral empirical studies indicate that not only avian and human influenza strains are likely to establish infection in pigs but also that pigs are capable of passing newly reassortant strains to humans. Strikingly, half of Java’s pigs in 2005 were reported to be infected with avian influenza H5N1 [13]. In 1999, a strain of avian origin (H4N6) was isolated from pigs in Canadian swine farms [14]. Furthermore, the previously known H1N1 swine virus emerging in Europe in 1979 was antigenically similar to avian strains [4]. The avian virus is more likely to establish infection in swine [15,16]. Evidence for human strains being transmitted to swine was documented when a swine flu H1N2 outbreak in Japan’s pigs was determined to be a result of reassortment between swine H3N2 and human H1N1 [17].\nMathematical models have been used to predict epidemics and develop intervention strategies [18-20]; however, these models have not been developed to include both recombination and cross-species transmission. Here we present a theoretical model that tracks influenza transmission dynamics within and among three species (i.e., birds, pigs, and humans). The model describes how new “super-strains” (i.e., strains that are highly virulent to humans) can arise if pigs act as “mixing vessels” for the recombination of species-specific strains. 
We use the model to determine the epidemiological outcomes of a reassortant super-strain by varying the factors that influence the strain’s incidence in humans (i.e., transmissibility within humans, pig-to-human transmissibility, and pig-to-human contact). Furthermore, we determine the state(s) (or parameter ranges) of a super-strain that will result in (i) the greatest single epidemic and (ii) the highest cumulative prevalence.", "We simulate the MSMH model over different time periods (i.e., 200-1000 years) to understand both the initial impact and long-term consequences of super-strains. We assume that initially there are neither infected pigs nor super-infected humans. In other words, an initial pre-epidemic period occurs before the super-strain emerges in humans, during which the avian and human strains can cocirculate in pigs for many years or even decades before a super-strain emerges in humans (Figs. 2A-D). The initial outbreak is abrupt once the super-strain emerges in humans and typically has the greatest magnitude. After a super-strain has emerged in humans, there are three possible epidemiological outcomes: periodic outbreaks (Figs. 2A-B for approx. γp > 0.15); a super-strain that sustains at low levels without causing a significant outbreak (Figs. 2A-B for approx. γp < 0.15); or a strong initial epidemic followed by weaker epidemics (Figs. 2C-D), where the super-strain replaces a previously circulating strain. The outcome is determined by transmissibility of the super-strain and the contact transmissibility from super-infectious pigs to humans. The time frame between the first two outbreaks ranges from 41 to 82 years, depending on the level of transmissibility (Fig. 3).\nTime-series figures for the super-infectious humans at varying levels of interaction(A) The simulation illustrates that for low super-transmissibility (R0=1.4) and varying levels of interaction (γp), the system will exhibit reoccurring outbreaks. (B) Bird’s-eye view of the Fig. 2A shows as the level of interaction increases, the outbreaks become larger but further apart. (C) The simulation illustrates that for high super-incidence (R0=2.3) and varying levels of pig-to-human contact transmissibility (γp), the system will exhibit an initial large-scale outbreak that is followed by successively damping outbreaks. (D) Bird’s-eye view of Fig. 2C shows how varying level of interaction has no effect on the dynamics of the outbreak.\nTime between the first and second outbreaks at varying degrees of pig-to-human interaction The transmission of the super-strain in humans was set at five specific values. Time frames vary for βh,2=0.015 from 41 to 81 years (blue), βh,2=0.0215 from 56 to 82 years (red), βh,2=0.024 from 61 to 70 years (green), for βh,2=0.04 from 58 to 59 years (black), for βh,2=0.084 from 52 to 54 years (magenta). Overall times range from 41 to 82 years.\nWhen the super-strain is not highly transmissible (i.e., less transmissible than seasonal flu) in humans, the super-strain can cause periodic epidemics in humans (Figs. 2A-B). For R0= 1.4, periodic outbreaks occur at higher levels of pig-to-human contact transmissibility (approx. γp > 0.15). In this case, we observe periodic epidemics in all population classes in both humans (Fig. 4A), and pigs (Fig. 4B). In addition, recombinant strains emerge in cycles (cyan: Fig. 4B). The oscillations in the number of coinfected pigs correlate with those of pigs infected with each separate strain (Fig. 4B). 
Super-infectious pigs attain a maximum number when pigs infected with the human strain are at a peak (cyan: Fig. 4B). Smaller outbreaks are more frequent with lower levels of interaction between pigs and humans, and larger but less frequent outbreaks occur at higher levels of interaction (Fig 2B). The periodicity of each strain will eventually stabilize, and outbreaks of both strains will occur abruptly after interepidemic periods of virtually no disease prevalence (red: Figs. 4C-D. The time between the first two outbreaks ranges from approximately 55 to 81 years and is dependent on the transmissibility of the super-strain and pig-to-human contact transmissibility (Fig. 3: blue). These simulations show the reoccurring nature of pandemics (e.g., the three major outbreaks of the twentieth century: 1918, 1957, and 1968). It is worth noting that the 1918 virus spread between pigs and humans [24]; however, it is unclear whether the virus initially spread from pig-to-human or human-to-pig.\nSimulation of the MSMH model for population densities versus time with high pig-to-human contact transmissibility and low super-strain transmissibility(A) The simulation for the humans. Colors represent densities over time of the susceptible (blue), primary strain infectious (green), and super-strain infectious (red) classes. (B) The simulation for pigs. Colors represent population densities over time of the susceptible (blue), human strain infectious (red), avian strain infectious (green), coinfected (cyan), and super-strain infectious (magenta) classes. (C) Phase portrait for the humans. The figure shows human populations (blue) will reach a positive oscillating state (red). (D) A different view of the humans’ phase portrait showing the attracting region in 3D space.\nAt lower levels of transmission, we found that super-strains were continuously produced in the pig population, but only sustained at a low level (Figs. 2A-B) so they did not pose a threat to the human population. This example correlates to cases when the virus infects a few but is unable to establish itself after the initial outbreak, e.g., the “New Jersey” Incident in the USA in 1976 [25] and perhaps other outbreaks involving a few individuals as described in [4].\nWhen the super-strain is highly transmissible (i.e., approximately as transmissible as 1918 flu, R0=2.3), the strain will slowly emerge and cause an “epidemic spike” in humans (Figs. 2C-D). In this case, the resulting outbreak dynamics are influenced predominately by the high transmissibility of the super-strain. A series of dampening outbreaks follow the initial spike with periods of virtually no disease prevalence in humans between epidemic outbreaks. The super-strain outcompetes the preexisting endemic strain, causing the endemic strain to become extinct in humans and hence disappear from the pigs, eliminating the reoccurrence of coinfection and recombination. The time frame between the first two outbreaks ranges from approximately 61 to 69 years (Fig. 3: green). These simulations show features present in the pandemics of 1918, 1957, and 1968 in that after the initial pandemic the new super-strain displaces the previously dominant strain [26], continues to circulate, and becomes endemic but with reduced epidemiological impact.\nWe compare both the epidemic magnitude and cumulative density for varying levels of pig-to-human contact transmissibility and super-strain transmission in humans. 
By analyzing the largest outbreak for approximately 75 years after the strain has emerged, we determine that a large scale epidemic will occur when 1.9 <R0 < 6.6; however, the greatest epidemic will occur when 2.3 <R0 < 3.8 (Fig. 5A) and not when R0 > 3.8. The contact transmissibility does not influence the magnitude of the largest outbreak but is a necessary component for the super-strain to emerge in humans. We determine that a significant outbreak is unlikely for R0 < 1.9 and a moderate outbreak will occur when R0 > 6.6. Furthermore, the most significant outbreaks (i.e., 2.3 < R0 < 3.8) occur when the super-strain is highly transmissible, and dampened outbreaks follow the large-scale epidemic (Figs. 2C-D). These results for the different values of R0 remain consistent for much longer time frames (e.g., 500 and 1000 years). Our results show that a super-strain that emerges from pigs to humans typically has the greatest potential for human mortalities during its initial outbreak.\nContour maps of the pigs-to-human contact transmissibility (γp) versus the super-transmissibility in humans (βh,2)(A) The figure shows maximum number of super-strain over 200 years. The largest outbreaks occur when 2.3 <R0 < 3.8 (0.24 <βh,2 < 0.4). (B) The figure shows the sum of yearly average super-strain cases over 200 years. The two maximal states occur when 2.5 <R0 < 3.0 (0.26 <βh,2 < 0.32) and 4.0 <R0 < 4.5 (0.42 <βh,2 < 0.48). (C) The figure shows the sum of yearly average super-strain cases over 500 years. The long-term cumulative incidence of a super-strain is highest when either: 1.9 <R0 < 3.0, or 0 <R0 < 1.9 and a specified range of interaction for the given R0. (D) The figure shows the sum of yearly average super-strain cases over 1000 years. The long-term cumulative incidence of a super-strain is highest when either: 1.9 <R0 < 3.0, or 0 <R0 < 1.9 and a specified range of interaction for the given R0.\nA similar analysis of cumulative (averaged annually) infections over different time frames shows higher levels of disease prevalence at lower levels of transmissibility. For 200 years, highest cumulative number occurs when 0 <R0 < 7.6 (Fig. 5B: red-orange), of which there are two maximal states when 2.5 <R0 < 3.0 and 4.0 <R0 < 4.5 (Fig. 4B: dark red). In this case, the cumulative number of infectives can be high even when R0 < 1.9 but depends on the pig-to-human contact transmissibility. For much longer time frames of 500 years (Fig. 5C: dark red) and 1000 years (Fig. 5D: dark red), the maximal prevalence occurs when: 1.9 <R0 < 3.0 for all levels of interaction, and R0 < 1.9 with a sufficient level of pig-to-human contact transmissibility. In fact, long-term prevalence is high even when R0 < 1 (βh,2 < 0.011) and the pig-to-human contact transmission is low (γp < 0.4, where the lower bound of γp is dependent on the value of R0). Furthermore, the epidemiological outcome can either be a large-scale outbreak or milder reoccurring outbreaks to sustain maximal prevalence of a super-strain.", "Cross-species interaction and transmission of influenza creates a reservoir of super-strains in pigs; therefore, the continuous introduction of super-strains into humans is almost inevitable. Our modeling is consistent with this notion in that it shows how a super-strain is likely to emerge in humans from an intermediate host, such as pigs. 
However, we have found that, once a super-strain has emerged, three different outcomes are possible: (i) large scale outbreak, (ii) milder reoccurring outbreaks, and (iii) no outbreak. Examples of each types of outbreak have been well observed in nature. Our results show there may be little to no infectious activity during interepidemic periods; hence, epidemic forecasting may be difficult until the early stages of an outbreak. This is particularly important to ensure vaccine development in that detection may not be possible until an outbreak is significant. The more recent 2009 H1N1 strain was a reassortant of avian, human, and swine strains, and its gene segments have been circulating undetected for an extended period [7]. Our results also show that the greatest outbreak occurs when 2.3 <R0 < 3.8, which may be a consequence of strain competition and interspecies infection. That is, pig-to-human infection provides a competitive advantage to the super-strain over the seasonal. Higher R0 values for the super-strain may result in competitive exclusion of the seasonal strain, thus suppressing seasonal infections and limiting the opportunity for external infections.\nWe have demonstrated how genetic recombination and species interactions significantly affect the emergence and dynamics of influenza strains. The notion of reoccurring epidemics is particularly important in that it is specifically driven by species interaction and may explain the disappearance and re-emergence of a subtype many years after an epidemic. We recommend, that in order to obtain further insights into the emergence of influenza super-strains, strain subtypes should be frequently monitored in the major host species (particularly, birds, humans, and pigs). In addition, we recommend comparing various reassortants to identify strains that are more likely to recombine, particularly those that are infectious to humans.\nA reassortant of the avian strain such as the H5N1 that can spread from human-to-human poses a great threat, even when transmissibility is low (as shown in our simulation). Although the chance of such an event seems rare, situations like one described in the report [13] on Java’s pigs suggest that not only is a super-strain able to emerge but that such a strain is likely to emerge. Our results suggest more careful surveillance of influenza in pigs with a specific focus on detecting rare but potentially dangerous recombination events. Unless a global monitoring system is put in place, we will be unable to recognize lurking novel strains that are capable of the next pandemic. Most importantly, our modeling shows the necessity of a global monitoring system to track strain subtypes within different host species to adequately forecast the next pandemic.", "FAO, Food and Agriculture Organization of the United Nations; MSMH, multi-strain/multi-host; SIS, susceptible-infective-susceptible, WHO, World Health Organization.", "The authors declare that they have no competing interests.", "BJC, CC, and SR developed the concept and study design, analyzed and interpreted the data and drafted the manuscript. BJC conducted mathematical analyses and simulations.", "[SUBTITLE] Multi-strain/multi-host model [SUBSECTION] We use the multi-strain/multi-host (MSMH) model to simulate the spread of influenza in bird, pigs, and humans. Each species has a set of susceptible-infectious-susceptible (SIS) type differential equations that govern the spread of certain strains in that population. 
Moreover, some host species can infect other species with specific strains, but this ability is generally not symmetric. For example, birds can infect pigs with an avian strain, but the pigs cannot pass the avian strain back into the avian population. All infections follow a “mass action” structure, and recruitment is into the susceptible subclass, so the birth term is logistic and based on the entire population. The three host species are coupled by external inputs of avian and human strains from the respective hosts to the pigs and a super-strain external input from the pigs to the humans.\nThe subscripts b, p, and h denote birds, pigs, and humans, respectively.\nSi(t) is the density of susceptibles for species i at time t; Ii,j(t) is the density of jth infectious individuals for species i at time t; and Ji(t) is the density of super-infectious individuals for species i at time t. β is the per capita incidence rate, α is the recovery rate, and v is the disease-induced mortality rate (virulence). r is the intrinsic growth rate, K is the carrying capacity, and N is the total population. More specifically, Nb=Sb+Ib, Np=Sp+Ip,1+Ip,2+Ip,12+Jp, and Nh=Sh+Ih+Jh. δh is the super-transmission rate at which super-infectious humans Jh infect individuals in the group Ih. Ψp is the rate at which co-infected pigs become super-infected. gb and gh are the transmission rates at which birds and humans, respectively, infect pigs. γp is the transmission rate at which super-infectious pigs infect humans.\n[SUBTITLE] Basic reproductive numbers [SUBSECTION] In this section, we calculate the basic reproductive numbers R0 for the MSMH model. In general, R0 for an independent strain (i.e., when the MSMH model is reduced to a species’ susceptibles and specified infectious individual) is\nFor example, the basic reproductive number for the independent super-strain in humans is\nWe calculate R0 for the MSMH model by using the methods for compartmental models provided in [27]. The calculation is carried out about the disease-free equilibrium point (Kb,0,Kp,0,0,0,0,Kh,0,0). The next-generation matrix (FV-1) is\nThe basic reproductive number (i.e., the spectral radius of FV-1) for the MSMH model is\nwhich is the maximum basic reproductive number of the strains in the three host populations.\n[SUBTITLE] Parameter values [SUBSECTION] To calibrate the human parameters, we set the parameter values for the two strains in humans to reflect a seasonal strain and a transmissible avian strain with the virulence of H5N1. We use data from Thailand because of the surveillance of strains and the prevalence of the avian strain H5N1. To determine the logistic growth parameters in humans, we use data for the total population of Thailand [28,29]. We best-fit the data to the solution of the logistic equation given by\nusing the method of least squares (Fig. S1). The parameters were determined using iterative methods: Kh=87.3, Sh(0)=3.983, and rh=0.038. In a study by the National Institute of Health in Thailand [22], case specimens of influenza were taken from patients in 2004 and 2005. It was determined that the numbers of cases of influenza-like illness for 2004 and 2005 were 21,176 and 21,351 per 100,000 people, respectively. The study also sampled specimens for influenza type and subtype. We determined that the incidence βh,1 for the endemic strain of influenza is 0.0253 and 0.0273 for the years 2004 and 2005, respectively. The incidence βh,2 of the avian strain H5N1 for 2004 was determined to be 0.00084. A summary report on influenza in Asian countries from 1999 cites the annual mortality due to pneumonia as 176 per 100,000 people [30], so we let the disease-induced mortality rate be vh,1=0.00176 and the recovery rate αh,1=0.99824. The World Health Organization (WHO) reported in June 2008 that the number of confirmed cases due to H5N1 in Thailand was 25 and the number of deaths was 17 [8], so we let the disease-induced mortality rate of the super-strain be vh,2=0.68 and the recovery rate αh,2=0.32.\nTo calibrate the pig parameters, we used data from the Food and Agriculture Organization of the United Nations (FAO) on pig populations [31,32] and fit the data using the same methodology as in the humans (see above). The parameters for the logistic growth term were determined to be Kp=9.16, Sp(0)=0.002, and rp=0.093. Due to the limited data on influenza in pigs, we assumed reasonable values (i.e., similar to the values for humans) for the remaining parameters. We assumed the birds have the same parameters as humans; that is, the single strain will have the same values as the endemic strain in humans.", "We use the multi-strain/multi-host (MSMH) model to simulate the spread of influenza in birds, pigs, and humans. Each species has a set of susceptible-infectious-susceptible (SIS) type differential equations that govern the spread of certain strains in that population. Moreover, some host species can infect other species with specific strains, but this ability is generally not symmetric. For example, birds can infect pigs with an avian strain, but the pigs cannot pass the avian strain back into the avian population. All infections follow a “mass action” structure, and recruitment is into the susceptible subclass, so the birth term is logistic and based on the entire population. 
The three host species are coupled by external inputs of avian and human strains from the respective hosts to the pigs and a super-strain external input from the pigs to the humans.\nThe subscripts b, p, and h denote birds, pigs, and humans, respectively.\nSi(t) is the density of susceptibles for species i at time t; Ii,j(t) is the density of jth infectious individuals for species i at time t; and Ji(t) is the density of super-infectious individuals for species i at time t. β is the per capita incidence rate, α is the recovery rate, v is the disease-induced mortality rate (virulence). r is the intrinsic growth, K is the carrying capacity, and N is the total population. More specifically, Nb=Sb+Ib, Np=Sp+Ip,1+Ip,2+Ip,12+Jp, and Nh=Sh+Ih+Jh. δh is the super-transmission rate in which super-infectious humans Jh infect individuals in the group Ih. Ψp is the rate at which co-infected pigs become super-infected. gb and gh are the transmission rates in which birds and humans infect pigs, respectively. γp the transmission rate super-infectious pigs infect humans.", "In this section, we calculate the basic reproductive numbers R0 for the MSMH model. In general, R0 for an independent strain (i.e., when the MSMH model is reduced to a species’ susceptibles and specified infectious individual) is\nFor example, the basic reproductive number for the independent super-strain in humans is\nWe calculate R0 for the MSMH model by using the methods for compartmental models provided in [27]. We calculate about the disease-free equilibrium point (Kb,0,Kp,0,0,0,0,Kh,0,0). The next-generation matrix (FV-1) is\nThe basic reproductive number (i.e., the spectral radius of FV-1) for the MSMH model is\nwhich is the maximum basic reproductive number of the strains in the three host populations.", "To calibrate the human parameters, we set the parameter values for the two strains in humans to reflect a seasonal strain and a transmissible avian strain with the virulence of H5N1. We use data from Thailand because of the surveillance of strains and the prevalence of the avian strain H5N1. To determine the logistic growth parameters in humans, we use data for the total population of Thailand [28,29]. We best-fit the data to the solution of the logistic equation given by\nusing the method of least squares (Fig. S1). The parameters were determined using iterative methods: Kh=87.3, Sh(0)=3.983, and rh=0.038. In a study by the National Institute of Health in Thailand [22], case specimens of influenza were taken from patients in 2004 and 2005. It was determined that the numbers of cases of influenza-like illness for 2004 and 2005 were 21,176 and 21,351 per 100,000 people, respectively. The study also sampled specimens for influenza type and subtype. We determined that the incidence βh,1 for the endemic strain of influenza is 0.0253 and 0.0273 for the years 2004 and 2005, respectively. The incidence βh,2 of the avian strain H5N1 for 2004 was determined to be 0.00084. A summary report on influenza in Asian countries from 1999 cites the annual mortality due to pneumonia as 176 per 100,000 people [30], so we let the disease-induced mortality rate be vh,1=0.00176 and the recovery rate αh,1=0.99824. 
The World Health Organization (WHO) reported on June 2008 that the number of confirmed cases due to H5N1 in Thailand was 25 and the number of deaths was 17 [8], so let the disease-induced mortality of the super-strain to be vh,2=0.68 and recovery rate αh,2=0.32.\nTo calibrate the pig parameters, we used data from the Food and Agriculture Organization of the United Nations (FAO) on pig populations [31,32] and fit the data using the same methodology as in the humans (see above). The parameters for the logistic growth term were determined to be Kp=9.16, Sp(0)=0.002, and rp=0.093. Due to the limited data on influenza in pigs, we assumed reasonable values (i.e., similar to the values for humans) for the remaining parameters. We assumed the birds have the same parameters as humans; that is, the single strain will have the same values as the endemics strain in humans." ]
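The parameter-calibration passage above fits the closed-form solution of the logistic growth equation to census and FAO population series by least squares. Below is a small sketch of that procedure using scipy's curve_fit; the observation points and starting guesses are placeholders standing in for the Thailand and FAO series, not the data actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, S0):
    """Closed-form solution of dS/dt = r*S*(1 - S/K) with S(0) = S0."""
    return K / (1.0 + (K - S0) / S0 * np.exp(-r * t))

# Placeholder population series: (years since 1900, millions). Illustrative only.
t_obs = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
y_obs = np.array([8.0, 13.0, 22.0, 34.0, 54.0, 63.0])

# Least-squares fit of (K, r, S0), starting from a rough initial guess.
popt, pcov = curve_fit(logistic, t_obs, y_obs, p0=[90.0, 0.04, 8.0], maxfev=10000)
K_hat, r_hat, S0_hat = popt
print(f"K = {K_hat:.1f}, r = {r_hat:.3f}, S(0) = {S0_hat:.2f}")
```

With the real census data, the fitted triple (K, r, S(0)) would play the role of the reported values such as Kh=87.3, rh=0.038, and Sh(0)=3.983.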
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Conclusions", "Abbreviations", "Competing interests", "Authors’ contributions", "Appendix", "Multi-strain/multi-host model", "Basic reproductive numbers", "Parameter values" ]
[ "Until very recently there has been limited attention focused on the role of pigs in the spread of influenza and even less on the possibility of recombination of avian and human viruses in pigs [1-4]. The emergence of the novel strain of influenza A (H1N1) in Mexico [5,6] in April 2009 has shown birds, pigs and humans can all play a role in the formation of new strains and can be transmitted from pigs to humans [7]. Prior to the outbreak, much of the surveillance against pandemic influenza has been focused on avian strains (e.g., H5N1).\nInfluenza pandemics occur when a transmissible strain emerges into the human population in which humans have little or no immunity. One way this can occur is when a strain emerges in humans from a reservoir host species (e.g., birds or pigs). In fact, the three previous influenza pandemics (1918 H1N1, 1957 H2N2, and 1968 H3N2) emerged to humans from a non-human reservoir and were subtypes that originated in avian hosts [7]. In fact, strong evidence suggests all influenza subtypes trace back to avian origin, implying the avian virus emerged and established itself in mammals from birds. Avian strains such as the H5N1 have proven to be the most threatening particularly to humans upon infection, killing approximately 50 to 70 percent of those infected [8]. An H5N1 pandemic appears unlikely in that the strain has been shown to transmit poorly between humans. Furthermore, a strain with such high virulence would also need to be highly transmissible in order for a pandemic to occur.\nGenetic changes in influenza provide new opportunities for pathogen emergence in humans, particularly through recombination. When coinfection occurs, two strains may interact and recombine to form a new reassortant strain. Intermediate hosts, such as pigs and domestic poultry, play an important role in influenza infection between birds and human. Pigs are also capable of infection by human influenza viruses [9]. It has been reported that pigs are capable of infecting humans [4,10,11] and are likely candidates as intermediate hosts for coinfection of interspecies strains [12].\nSeveral empirical studies indicate that not only avian and human influenza strains are likely to establish infection in pigs but also that pigs are capable of passing newly reassortant strains to humans. Strikingly, half of Java’s pigs in 2005 were reported to be infected with avian influenza H5N1 [13]. In 1999, a strain of avian origin (H4N6) was isolated from pigs in Canadian swine farms [14]. Furthermore, the previously known H1N1 swine virus emerging in Europe in 1979 was antigenically similar to avian strains [4]. The avian virus is more likely to establish infection in swine [15,16]. Evidence for human strains being transmitted to swine was documented when a swine flu H1N2 outbreak in Japan’s pigs was determined to be a result of reassortment between swine H3N2 and human H1N1 [17].\nMathematical models have been used to predict epidemics and develop intervention strategies [18-20]; however, these models have not been developed to include both recombination and cross-species transmission. Here we present a theoretical model that tracks influenza transmission dynamics within and among three species (i.e., birds, pigs, and humans). The model describes how new “super-strains” (i.e., strains that are highly virulent to humans) can arise if pigs act as “mixing vessels” for the recombination of species-specific strains. 
We use the model to determine the epidemiological outcomes of a reassortant super-strain by varying the factors that influence the strain’s incidence in humans (i.e., transmissibility within humans, pig-to-human transmissibility, and pig-to-human contact). Furthermore, we determine the state(s) (or parameter ranges) of a super-strain that will result in (i) the greatest single epidemic and (ii) the highest cumulative prevalence.", "To study the emergence of super-strains from interspecies sources, we have constructed a novel multi-strain/multi-host (MSMH) model [21]. This enables the transmission of influenza to be tracked within and between species. We characterize a super-strain in humans as a strain that is highly virulent and capable of directly infecting all humans (i.e., non- and seasonal infected humans) once the strain emerges. Furthermore, when a human with the seasonal strain acquires the super-strain, the super-strain is dominant and there is no coinfection. We use the model to analyze the consequences of a super-strain that forms from a highly virulent (but perhaps non-transmissible) avian strain (e.g., H5N1) and seasonal human strain. A flow diagram for this model is shown in Fig. 1. The avian model (signified by subscript b) consists of susceptibles (Sb) and infectives (Ib); the human model (signified by subscript h) consists of susceptibles (Sh), seasonal infectives (Ih), and super-strain infectives (Jh); and the swine model (signified by subscript p) consists of susceptibles (Sp), seasonal human and avian strain infectives (Ip,1 and Ip,2, respectively), coinfected pigs (Ip,12), and infectives that carry a super-strain from recombination during coinfection (Jp). The three host species are coupled as an interacting species system, where the couplings enable avian and human strains to infect pigs (i.e., pigs act as intermediate hosts) and super-strains (developed in pigs) to infect humans. A pig coinfected with both avian and human strains is a “mixing vessel” and can produce a super-strain capable of infecting humans. The ten equations that specify the model, as well as a more detailed description, are given in Section 1 of the Appendix.\nSchematic diagram illustrating the MSMH model. Susceptibles are denoted by S, infectious classes by I, and super-strain infectious classes by J. Each species’ classes and parameters for are subscripted by b (birds), h (humans), and p (pigs). All host species have a SIS structure, where recruitment is logistic of the entire population into the susceptible subclass and transfer from infected subclasses by recovery into the susceptible subclass or disease-induced death. The birds have a basic structure, humans have a super-infection structure, and pigs have a coinfection structure with recombination. Solid arrows represent the transfer due to infection within a single host population. Dashed arrows represent the direction of interspecies infectivity from human-to-pig with the endemic human strain, bird-to-pig with the endemic avian strain, and pig-to-human with the super-strain. A description of the variables and parameters is given in Sections 1 and 3 of the Appendix.\nWe calibrated the MSMH model using data on humans and swine in Southeast Asia. The two human infectious classes are parameterized to represent individuals infected with a seasonal strains (e.g., H1N1 and H3N2) and a transmissible super-strain with disease virulence similar to the H5N1 strain (i.e., 68 percent disease-induced mortality rate) [8]. 
We chose reasonable parameter values for human influenza transmissibility in Thailand [8,22] and set similar values to sustain prevalent levels of avian and human reassortants in pigs [21]; see Section 3 of the Appendix. Population growth parameters and initial population densities for humans and pigs were determined from census data in Thailand for the twentieth century; see Section 3 of the Appendix. We assume the rate in which a coinfected pig produces a super-strain ψp is 0.1. We define the pig-to-human contact transmissibility γp as the probability per unit time that a super-strain infected pig will come into contact with and infected human. That is, γp is a product of two factors: (i) the rate at which a human will come into contact with a pig and (ii) the rate at which a pig will successfully infect a human with the super-strain. Similarly, the parameters βh,1 and βh,2 are the transmissibility of the seasonal and super- strains, respectively, to susceptible humans (Sh). The transmissibility of the super-strain to seasonal infected humans (or super-transmissibility) is δh, which we will assume is equal to βh,2 (i.e., super-strain transmission is the same for susceptible (Sh) and seasonal infectives (Ih,1)). The basic reproductive number R0 for each strain is the average number of secondary infections that one infectious individual can generate in an entirely susceptible population [23]. The R0's for endemic strains (i.e., excluding the super-strain in humans) in each species range between 1 and 2.5. We examine R0 values from 0 to 9.5 for a super-strain in humans [18] together with pig-to-human contact transmissibility ranging from 0 to 100 percent, which allows us to consider all epidemiologically relevant cases for past influenza pandemics.", "We simulate the MSMH model over different time periods (i.e., 200-1000 years) to understand both the initial impact and long-term consequences of super-strains. We assume that initially there are neither infected pigs nor super-infected humans. In other words, an initial pre-epidemic period occurs before the super-strain emerges in humans, during which the avian and human strains can cocirculate in pigs for many years or even decades before a super-strain emerges in humans (Figs. 2A-D). The initial outbreak is abrupt once the super-strain emerges in humans and typically has the greatest magnitude. After a super-strain has emerged in humans, there are three possible epidemiological outcomes: periodic outbreaks (Figs. 2A-B for approx. γp > 0.15); a super-strain that sustains at low levels without causing a significant outbreak (Figs. 2A-B for approx. γp < 0.15); or a strong initial epidemic followed by weaker epidemics (Figs. 2C-D), where the super-strain replaces a previously circulating strain. The outcome is determined by transmissibility of the super-strain and the contact transmissibility from super-infectious pigs to humans. The time frame between the first two outbreaks ranges from 41 to 82 years, depending on the level of transmissibility (Fig. 3).\nTime-series figures for the super-infectious humans at varying levels of interaction(A) The simulation illustrates that for low super-transmissibility (R0=1.4) and varying levels of interaction (γp), the system will exhibit reoccurring outbreaks. (B) Bird’s-eye view of the Fig. 2A shows as the level of interaction increases, the outbreaks become larger but further apart. 
(C) The simulation illustrates that for high super-incidence (R0=2.3) and varying levels of pig-to-human contact transmissibility (γp), the system will exhibit an initial large-scale outbreak that is followed by successively damping outbreaks. (D) Bird’s-eye view of Fig. 2C shows how varying level of interaction has no effect on the dynamics of the outbreak.\nTime between the first and second outbreaks at varying degrees of pig-to-human interaction The transmission of the super-strain in humans was set at five specific values. Time frames vary for βh,2=0.015 from 41 to 81 years (blue), βh,2=0.0215 from 56 to 82 years (red), βh,2=0.024 from 61 to 70 years (green), for βh,2=0.04 from 58 to 59 years (black), for βh,2=0.084 from 52 to 54 years (magenta). Overall times range from 41 to 82 years.\nWhen the super-strain is not highly transmissible (i.e., less transmissible than seasonal flu) in humans, the super-strain can cause periodic epidemics in humans (Figs. 2A-B). For R0= 1.4, periodic outbreaks occur at higher levels of pig-to-human contact transmissibility (approx. γp > 0.15). In this case, we observe periodic epidemics in all population classes in both humans (Fig. 4A), and pigs (Fig. 4B). In addition, recombinant strains emerge in cycles (cyan: Fig. 4B). The oscillations in the number of coinfected pigs correlate with those of pigs infected with each separate strain (Fig. 4B). Super-infectious pigs attain a maximum number when pigs infected with the human strain are at a peak (cyan: Fig. 4B). Smaller outbreaks are more frequent with lower levels of interaction between pigs and humans, and larger but less frequent outbreaks occur at higher levels of interaction (Fig 2B). The periodicity of each strain will eventually stabilize, and outbreaks of both strains will occur abruptly after interepidemic periods of virtually no disease prevalence (red: Figs. 4C-D. The time between the first two outbreaks ranges from approximately 55 to 81 years and is dependent on the transmissibility of the super-strain and pig-to-human contact transmissibility (Fig. 3: blue). These simulations show the reoccurring nature of pandemics (e.g., the three major outbreaks of the twentieth century: 1918, 1957, and 1968). It is worth noting that the 1918 virus spread between pigs and humans [24]; however, it is unclear whether the virus initially spread from pig-to-human or human-to-pig.\nSimulation of the MSMH model for population densities versus time with high pig-to-human contact transmissibility and low super-strain transmissibility(A) The simulation for the humans. Colors represent densities over time of the susceptible (blue), primary strain infectious (green), and super-strain infectious (red) classes. (B) The simulation for pigs. Colors represent population densities over time of the susceptible (blue), human strain infectious (red), avian strain infectious (green), coinfected (cyan), and super-strain infectious (magenta) classes. (C) Phase portrait for the humans. The figure shows human populations (blue) will reach a positive oscillating state (red). (D) A different view of the humans’ phase portrait showing the attracting region in 3D space.\nAt lower levels of transmission, we found that super-strains were continuously produced in the pig population, but only sustained at a low level (Figs. 2A-B) so they did not pose a threat to the human population. 
This example corresponds to cases in which the virus infects a few individuals but is unable to establish itself after the initial outbreak, e.g., the “New Jersey” incident in the USA in 1976 [25] and perhaps other outbreaks involving a few individuals as described in [4].\nWhen the super-strain is highly transmissible (i.e., approximately as transmissible as the 1918 flu, R0=2.3), the strain will slowly emerge and cause an “epidemic spike” in humans (Figs. 2C-D). In this case, the resulting outbreak dynamics are influenced predominantly by the high transmissibility of the super-strain. A series of damped outbreaks follows the initial spike, with periods of virtually no disease prevalence in humans between epidemic outbreaks. The super-strain outcompetes the preexisting endemic strain, causing the endemic strain to become extinct in humans and hence disappear from the pigs, eliminating the reoccurrence of coinfection and recombination. The time frame between the first two outbreaks ranges from approximately 61 to 69 years (Fig. 3: green). These simulations show features present in the pandemics of 1918, 1957, and 1968 in that, after the initial pandemic, the new super-strain displaces the previously dominant strain [26], continues to circulate, and becomes endemic but with reduced epidemiological impact.\nWe compare both the epidemic magnitude and cumulative density for varying levels of pig-to-human contact transmissibility and super-strain transmission in humans. By analyzing the largest outbreak for approximately 75 years after the strain has emerged, we determine that a large-scale epidemic will occur when 1.9 < R0 < 6.6; however, the greatest epidemic will occur when 2.3 < R0 < 3.8 (Fig. 5A) and not when R0 > 3.8. The contact transmissibility does not influence the magnitude of the largest outbreak but is a necessary component for the super-strain to emerge in humans. We determine that a significant outbreak is unlikely for R0 < 1.9 and a moderate outbreak will occur when R0 > 6.6. Furthermore, the most significant outbreaks (i.e., 2.3 < R0 < 3.8) occur when the super-strain is highly transmissible, and damped outbreaks follow the large-scale epidemic (Figs. 2C-D). These results for the different values of R0 remain consistent for much longer time frames (e.g., 500 and 1000 years). Our results show that a super-strain that emerges from pigs to humans typically has the greatest potential for human mortality during its initial outbreak.\nContour maps of the pig-to-human contact transmissibility (γp) versus the super-transmissibility in humans (βh,2). (A) The figure shows the maximum number of super-strain infections over 200 years. The largest outbreaks occur when 2.3 < R0 < 3.8 (0.24 < βh,2 < 0.4). (B) The figure shows the sum of yearly average super-strain cases over 200 years. The two maximal states occur when 2.5 < R0 < 3.0 (0.26 < βh,2 < 0.32) and 4.0 < R0 < 4.5 (0.42 < βh,2 < 0.48). (C) The figure shows the sum of yearly average super-strain cases over 500 years. The long-term cumulative incidence of a super-strain is highest when either 1.9 < R0 < 3.0, or 0 < R0 < 1.9 with a specified range of interaction for the given R0. (D) The figure shows the sum of yearly average super-strain cases over 1000 years. 
The long-term cumulative incidence of a super-strain is highest when either 1.9 < R0 < 3.0, or 0 < R0 < 1.9 with a specified range of interaction for the given R0.\nA similar analysis of cumulative (averaged annually) infections over different time frames shows higher levels of disease prevalence at lower levels of transmissibility. For 200 years, the highest cumulative number occurs when 0 < R0 < 7.6 (Fig. 5B: red-orange), within which there are two maximal states when 2.5 < R0 < 3.0 and 4.0 < R0 < 4.5 (Fig. 5B: dark red). In this case, the cumulative number of infectives can be high even when R0 < 1.9 but depends on the pig-to-human contact transmissibility. For much longer time frames of 500 years (Fig. 5C: dark red) and 1000 years (Fig. 5D: dark red), the maximal prevalence occurs either when 1.9 < R0 < 3.0 for all levels of interaction, or when R0 < 1.9 with a sufficient level of pig-to-human contact transmissibility. In fact, long-term prevalence is high even when R0 < 1 (βh,2 < 0.011) and the pig-to-human contact transmission is low (γp < 0.4, where the lower bound of γp is dependent on the value of R0). Furthermore, the epidemiological outcome that sustains maximal prevalence of a super-strain can be either a large-scale outbreak or milder reoccurring outbreaks.", "Cross-species interaction and transmission of influenza create a reservoir of super-strains in pigs; therefore, the continuous introduction of super-strains into humans is almost inevitable. Our modeling is consistent with this notion in that it shows how a super-strain is likely to emerge in humans from an intermediate host, such as pigs. However, we have found that, once a super-strain has emerged, three different outcomes are possible: (i) a large-scale outbreak, (ii) milder reoccurring outbreaks, and (iii) no outbreak. Examples of each type of outcome have been observed in nature. Our results show there may be little to no infectious activity during interepidemic periods; hence, epidemic forecasting may be difficult until the early stages of an outbreak. This is particularly important for vaccine development, in that detection may not be possible until an outbreak is significant. The more recent 2009 H1N1 strain was a reassortant of avian, human, and swine strains, and its gene segments had been circulating undetected for an extended period [7]. Our results also show that the greatest outbreak occurs when 2.3 < R0 < 3.8, which may be a consequence of strain competition and interspecies infection. That is, pig-to-human infection provides a competitive advantage to the super-strain over the seasonal strain. Higher R0 values for the super-strain may result in competitive exclusion of the seasonal strain, thus suppressing seasonal infections and limiting the opportunity for external infections.\nWe have demonstrated how genetic recombination and species interactions significantly affect the emergence and dynamics of influenza strains. The notion of reoccurring epidemics is particularly important in that it is specifically driven by species interaction and may explain the disappearance and re-emergence of a subtype many years after an epidemic. We recommend that, in order to obtain further insights into the emergence of influenza super-strains, strain subtypes be frequently monitored in the major host species (particularly birds, humans, and pigs). 
In addition, we recommend comparing various reassortants to identify strains that are more likely to recombine, particularly those that are infectious to humans.\nA reassortant of the avian strain such as the H5N1 that can spread from human-to-human poses a great threat, even when transmissibility is low (as shown in our simulation). Although the chance of such an event seems rare, situations like one described in the report [13] on Java’s pigs suggest that not only is a super-strain able to emerge but that such a strain is likely to emerge. Our results suggest more careful surveillance of influenza in pigs with a specific focus on detecting rare but potentially dangerous recombination events. Unless a global monitoring system is put in place, we will be unable to recognize lurking novel strains that are capable of the next pandemic. Most importantly, our modeling shows the necessity of a global monitoring system to track strain subtypes within different host species to adequately forecast the next pandemic.", "FAO, Food and Agriculture Organization of the United Nations; MSMH, multi-strain/multi-host; SIS, susceptible-infective-susceptible, WHO, World Health Organization.", "The authors declare that they have no competing interests.", "BJC, CC, and SR developed the concept and study design, analyzed and interpreted the data and drafted the manuscript. BJC conducted mathematical analyses and simulations.", "[SUBTITLE] Multi-strain/multi-host model [SUBSECTION] We use the multi-strain/multi-host (MSMH) model to simulate the spread of influenza in bird, pigs, and humans. Each species has a set of susceptible-infectious-susceptible (SIS) type differential equations that govern the spread of certain strains in that population. Moreover, some host species can infect other species with specific strains, but this ability is generally not symmetric. For example, birds can infect pigs with an avian strain, but the pigs cannot pass the avian strain back into the avian population. All infections follow a “mass action” structure, and recruitment is into the susceptible subclass, so the birth term is logistic and based on the entire population. The three host species are coupled by external inputs of avian and human strains from the respective hosts to the pigs and a super-strain external input from the pigs to the humans.\nThe subscripts b, p, and h denote birds, pigs, and humans, respectively.\nSi(t) is the density of susceptibles for species i at time t; Ii,j(t) is the density of jth infectious individuals for species i at time t; and Ji(t) is the density of super-infectious individuals for species i at time t. β is the per capita incidence rate, α is the recovery rate, v is the disease-induced mortality rate (virulence). r is the intrinsic growth, K is the carrying capacity, and N is the total population. More specifically, Nb=Sb+Ib, Np=Sp+Ip,1+Ip,2+Ip,12+Jp, and Nh=Sh+Ih+Jh. δh is the super-transmission rate in which super-infectious humans Jh infect individuals in the group Ih. Ψp is the rate at which co-infected pigs become super-infected. gb and gh are the transmission rates in which birds and humans infect pigs, respectively. γp the transmission rate super-infectious pigs infect humans.\nWe use the multi-strain/multi-host (MSMH) model to simulate the spread of influenza in bird, pigs, and humans. Each species has a set of susceptible-infectious-susceptible (SIS) type differential equations that govern the spread of certain strains in that population. 
Moreover, some host species can infect other species with specific strains, but this ability is generally not symmetric. For example, birds can infect pigs with an avian strain, but the pigs cannot pass the avian strain back into the avian population. All infections follow a “mass action” structure, and recruitment is into the susceptible subclass, so the birth term is logistic and based on the entire population. The three host species are coupled by external inputs of avian and human strains from the respective hosts to the pigs and a super-strain external input from the pigs to the humans.\nThe subscripts b, p, and h denote birds, pigs, and humans, respectively.\nSi(t) is the density of susceptibles for species i at time t; Ii,j(t) is the density of jth infectious individuals for species i at time t; and Ji(t) is the density of super-infectious individuals for species i at time t. β is the per capita incidence rate, α is the recovery rate, v is the disease-induced mortality rate (virulence). r is the intrinsic growth, K is the carrying capacity, and N is the total population. More specifically, Nb=Sb+Ib, Np=Sp+Ip,1+Ip,2+Ip,12+Jp, and Nh=Sh+Ih+Jh. δh is the super-transmission rate in which super-infectious humans Jh infect individuals in the group Ih. Ψp is the rate at which co-infected pigs become super-infected. gb and gh are the transmission rates in which birds and humans infect pigs, respectively. γp the transmission rate super-infectious pigs infect humans.\n[SUBTITLE] Basic reproductive numbers [SUBSECTION] In this section, we calculate the basic reproductive numbers R0 for the MSMH model. In general, R0 for an independent strain (i.e., when the MSMH model is reduced to a species’ susceptibles and specified infectious individual) is\nFor example, the basic reproductive number for the independent super-strain in humans is\nWe calculate R0 for the MSMH model by using the methods for compartmental models provided in [27]. We calculate about the disease-free equilibrium point (Kb,0,Kp,0,0,0,0,Kh,0,0). The next-generation matrix (FV-1) is\nThe basic reproductive number (i.e., the spectral radius of FV-1) for the MSMH model is\nwhich is the maximum basic reproductive number of the strains in the three host populations.\nIn this section, we calculate the basic reproductive numbers R0 for the MSMH model. In general, R0 for an independent strain (i.e., when the MSMH model is reduced to a species’ susceptibles and specified infectious individual) is\nFor example, the basic reproductive number for the independent super-strain in humans is\nWe calculate R0 for the MSMH model by using the methods for compartmental models provided in [27]. We calculate about the disease-free equilibrium point (Kb,0,Kp,0,0,0,0,Kh,0,0). The next-generation matrix (FV-1) is\nThe basic reproductive number (i.e., the spectral radius of FV-1) for the MSMH model is\nwhich is the maximum basic reproductive number of the strains in the three host populations.\n[SUBTITLE] Parameter values [SUBSECTION] To calibrate the human parameters, we set the parameter values for the two strains in humans to reflect a seasonal strain and a transmissible avian strain with the virulence of H5N1. We use data from Thailand because of the surveillance of strains and the prevalence of the avian strain H5N1. To determine the logistic growth parameters in humans, we use data for the total population of Thailand [28,29]. We best-fit the data to the solution of the logistic equation given by\nusing the method of least squares (Fig. S1). 
The parameters were determined using iterative methods: Kh=87.3, Sh(0)=3.983, and rh=0.038. In a study by the National Institute of Health in Thailand [22], case specimens of influenza were taken from patients in 2004 and 2005. It was determined that the numbers of cases of influenza-like illness for 2004 and 2005 were 21,176 and 21,351 per 100,000 people, respectively. The study also sampled specimens for influenza type and subtype. We determined that the incidence βh,1 for the endemic strain of influenza is 0.0253 and 0.0273 for the years 2004 and 2005, respectively. The incidence βh,2 of the avian strain H5N1 for 2004 was determined to be 0.00084. A summary report on influenza in Asian countries from 1999 cites the annual mortality due to pneumonia as 176 per 100,000 people [30], so we let the disease-induced mortality rate be vh,1=0.00176 and the recovery rate αh,1=0.99824. The World Health Organization (WHO) reported on June 2008 that the number of confirmed cases due to H5N1 in Thailand was 25 and the number of deaths was 17 [8], so let the disease-induced mortality of the super-strain to be vh,2=0.68 and recovery rate αh,2=0.32.\nTo calibrate the pig parameters, we used data from the Food and Agriculture Organization of the United Nations (FAO) on pig populations [31,32] and fit the data using the same methodology as in the humans (see above). The parameters for the logistic growth term were determined to be Kp=9.16, Sp(0)=0.002, and rp=0.093. Due to the limited data on influenza in pigs, we assumed reasonable values (i.e., similar to the values for humans) for the remaining parameters. We assumed the birds have the same parameters as humans; that is, the single strain will have the same values as the endemics strain in humans.\nTo calibrate the human parameters, we set the parameter values for the two strains in humans to reflect a seasonal strain and a transmissible avian strain with the virulence of H5N1. We use data from Thailand because of the surveillance of strains and the prevalence of the avian strain H5N1. To determine the logistic growth parameters in humans, we use data for the total population of Thailand [28,29]. We best-fit the data to the solution of the logistic equation given by\nusing the method of least squares (Fig. S1). The parameters were determined using iterative methods: Kh=87.3, Sh(0)=3.983, and rh=0.038. In a study by the National Institute of Health in Thailand [22], case specimens of influenza were taken from patients in 2004 and 2005. It was determined that the numbers of cases of influenza-like illness for 2004 and 2005 were 21,176 and 21,351 per 100,000 people, respectively. The study also sampled specimens for influenza type and subtype. We determined that the incidence βh,1 for the endemic strain of influenza is 0.0253 and 0.0273 for the years 2004 and 2005, respectively. The incidence βh,2 of the avian strain H5N1 for 2004 was determined to be 0.00084. A summary report on influenza in Asian countries from 1999 cites the annual mortality due to pneumonia as 176 per 100,000 people [30], so we let the disease-induced mortality rate be vh,1=0.00176 and the recovery rate αh,1=0.99824. 
The World Health Organization (WHO) reported on June 2008 that the number of confirmed cases due to H5N1 in Thailand was 25 and the number of deaths was 17 [8], so let the disease-induced mortality of the super-strain to be vh,2=0.68 and recovery rate αh,2=0.32.\nTo calibrate the pig parameters, we used data from the Food and Agriculture Organization of the United Nations (FAO) on pig populations [31,32] and fit the data using the same methodology as in the humans (see above). The parameters for the logistic growth term were determined to be Kp=9.16, Sp(0)=0.002, and rp=0.093. Due to the limited data on influenza in pigs, we assumed reasonable values (i.e., similar to the values for humans) for the remaining parameters. We assumed the birds have the same parameters as humans; that is, the single strain will have the same values as the endemics strain in humans.", "We use the multi-strain/multi-host (MSMH) model to simulate the spread of influenza in bird, pigs, and humans. Each species has a set of susceptible-infectious-susceptible (SIS) type differential equations that govern the spread of certain strains in that population. Moreover, some host species can infect other species with specific strains, but this ability is generally not symmetric. For example, birds can infect pigs with an avian strain, but the pigs cannot pass the avian strain back into the avian population. All infections follow a “mass action” structure, and recruitment is into the susceptible subclass, so the birth term is logistic and based on the entire population. The three host species are coupled by external inputs of avian and human strains from the respective hosts to the pigs and a super-strain external input from the pigs to the humans.\nThe subscripts b, p, and h denote birds, pigs, and humans, respectively.\nSi(t) is the density of susceptibles for species i at time t; Ii,j(t) is the density of jth infectious individuals for species i at time t; and Ji(t) is the density of super-infectious individuals for species i at time t. β is the per capita incidence rate, α is the recovery rate, v is the disease-induced mortality rate (virulence). r is the intrinsic growth, K is the carrying capacity, and N is the total population. More specifically, Nb=Sb+Ib, Np=Sp+Ip,1+Ip,2+Ip,12+Jp, and Nh=Sh+Ih+Jh. δh is the super-transmission rate in which super-infectious humans Jh infect individuals in the group Ih. Ψp is the rate at which co-infected pigs become super-infected. gb and gh are the transmission rates in which birds and humans infect pigs, respectively. γp the transmission rate super-infectious pigs infect humans.", "In this section, we calculate the basic reproductive numbers R0 for the MSMH model. In general, R0 for an independent strain (i.e., when the MSMH model is reduced to a species’ susceptibles and specified infectious individual) is\nFor example, the basic reproductive number for the independent super-strain in humans is\nWe calculate R0 for the MSMH model by using the methods for compartmental models provided in [27]. We calculate about the disease-free equilibrium point (Kb,0,Kp,0,0,0,0,Kh,0,0). The next-generation matrix (FV-1) is\nThe basic reproductive number (i.e., the spectral radius of FV-1) for the MSMH model is\nwhich is the maximum basic reproductive number of the strains in the three host populations.", "To calibrate the human parameters, we set the parameter values for the two strains in humans to reflect a seasonal strain and a transmissible avian strain with the virulence of H5N1. 
We use data from Thailand because of the surveillance of strains and the prevalence of the avian strain H5N1. To determine the logistic growth parameters in humans, we use data for the total population of Thailand [28,29]. We best-fit the data to the solution of the logistic equation given by\nusing the method of least squares (Fig. S1). The parameters were determined using iterative methods: Kh=87.3, Sh(0)=3.983, and rh=0.038. In a study by the National Institute of Health in Thailand [22], case specimens of influenza were taken from patients in 2004 and 2005. It was determined that the numbers of cases of influenza-like illness for 2004 and 2005 were 21,176 and 21,351 per 100,000 people, respectively. The study also sampled specimens for influenza type and subtype. We determined that the incidence βh,1 for the endemic strain of influenza is 0.0253 and 0.0273 for the years 2004 and 2005, respectively. The incidence βh,2 of the avian strain H5N1 for 2004 was determined to be 0.00084. A summary report on influenza in Asian countries from 1999 cites the annual mortality due to pneumonia as 176 per 100,000 people [30], so we let the disease-induced mortality rate be vh,1=0.00176 and the recovery rate αh,1=0.99824. The World Health Organization (WHO) reported on June 2008 that the number of confirmed cases due to H5N1 in Thailand was 25 and the number of deaths was 17 [8], so let the disease-induced mortality of the super-strain to be vh,2=0.68 and recovery rate αh,2=0.32.\nTo calibrate the pig parameters, we used data from the Food and Agriculture Organization of the United Nations (FAO) on pig populations [31,32] and fit the data using the same methodology as in the humans (see above). The parameters for the logistic growth term were determined to be Kp=9.16, Sp(0)=0.002, and rp=0.093. Due to the limited data on influenza in pigs, we assumed reasonable values (i.e., similar to the values for humans) for the remaining parameters. We assumed the birds have the same parameters as humans; that is, the single strain will have the same values as the endemics strain in humans." ]
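As a concrete illustration of the calibration step described above, the short Python sketch below fits the closed-form solution of the logistic equation to a population time series by least squares, in the spirit of the fits reported for Thailand (Kh = 87.3, Sh(0) = 3.983, rh = 0.038). The census values used here are placeholders rather than the authors' data, and the closed-form expression is the standard logistic solution, assumed here because the equation itself does not survive in the extracted text.

# Hedged sketch of the least-squares logistic calibration; the population
# values below are placeholders, not the census data used by the authors.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, S0, r):
    # Assumed closed-form solution of dS/dt = r*S*(1 - S/K) with S(0) = S0.
    return K / (1.0 + (K / S0 - 1.0) * np.exp(-r * t))

years = np.array([0, 20, 40, 60, 80, 100], dtype=float)  # years since 1900
pop = np.array([8.0, 14.0, 23.0, 37.0, 56.0, 72.0])      # millions (placeholder)

(K_h, S0_h, r_h), _ = curve_fit(logistic, years, pop, p0=(90.0, 5.0, 0.04))
print(f"K_h = {K_h:.1f}, S_h(0) = {S0_h:.3f}, r_h = {r_h:.3f}")
# For comparison, the paper reports K_h = 87.3, S_h(0) = 3.983, r_h = 0.038.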
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
Effects of vaccination and population structure on influenza epidemic spread in the presence of two circulating strains.
21356137
Human influenza is characterized by seasonal epidemics, caused by rapid viral adaptation to population immunity. Vaccination against influenza must be updated annually, following surveillance of newly appearing viral strains. During an influenza season, several strains may be co-circulating, which will influence their individual evolution; furthermore, selective forces acting on the strains will be mediated by the transmission dynamics in the population. Clearly, viral evolution and public health policy are strongly interconnected. Understanding population-level dynamics of coexisting viral influenza infections, would be of great benefit in designing vaccination strategies.
BACKGROUND
We use a Markov network to extend a previous homogeneous model of two co-circulating influenza viral strains by including vaccination (either prior to or during an outbreak), age structure, and heterogeneity of the contact network. We explore the effects of changes in vaccination rate, cross-immunity, and delay in appearance of the second strain, on the size and timing of infection peaks, attack rates, and disease-induced mortality rate; and compare the outcomes of the network and corresponding homogeneous models.
METHODS
Pre-vaccination is more effective than vaccination during an outbreak, resulting in lower attack rates for the first strain but higher attack rates for the second strain, until a "threshold" vaccination level of ~30-40% is reached, after which attack rates due to both strains sharply dropped. A small increase in mortality was found for increasing pre-vaccination coverage below about 40%, due to increasing numbers of strain 2 infections. The amount of cross-immunity present determines whether a second wave of infection will occur. Some significant differences were found between the homogeneous and network models, including timing and height of peak infection(s).
RESULTS
Contact and age structure significantly influence the propagation of disease in the population. The present model explores only qualitative behaviour, based on parameters derived for homogeneous influenza models, but may be used for realistic populations through statistical estimates of inter-age contact patterns. This could have significant implications for vaccination strategies in realistic models of populations in which more than one strain is circulating.
CONCLUSIONS
[ "Disease Outbreaks", "Humans", "Influenza Vaccines", "Influenza, Human", "Markov Chains", "Orthomyxoviridae", "Population Dynamics" ]
3317581
null
null
Methods
The state flow diagram of the model is given in Figure 1, which represents either the population counts in various compartments in the homogeneous model or the state of any given node (labelled by infection state, degree- and age-class) in the network model. The model describes the evolution of two concurrent strains of influenza infection, over a duration short compared to the natural lifespan of an individual in the population, and for this reason birth and natural death processes are ignored, and furthermore the number of individuals in each age class remains constant. Since we consider a static contact network, the degree class of each individual is also fixed. Therefore, each individual in the population belongs to a unique class (k,a), where k denotes the number of contacts, and a the age class, and the total number of individuals in each (k,a) class is constant. S denotes the susceptibles and V denotes individuals receiving vaccination either prior to the onset of the first infection or after this onset. State flow diagram for the two-strain influenza model. S denotes the susceptible state, without prior vaccination. Other susceptible individuals may receive vaccination prior to the onset of infection, or after infection has appeared, and are denoted by V. Despite vaccination, some individuals become infected and follow a similar sequence of infection states to that of the susceptibles. States (and parameters) originating from vaccination are denoted by a subscript ‘V’. States representing symptomatic infection by strain j are denoted by Ij, and correspondingly those infected asymptomatically by Aj. Double-subscripted states indicate that the individual was previously infected with one strain, and is now progressing through infected states of the other strain. Pj denotes individuals partially recovered from infection by strain j, but still susceptible to infection by the other strain. R denotes the class recovered from successive infection by both strains. In addition, infected individuals may die (at rates d or dA) and transfer (via the dashed arrows) to a disease-induced death class D (not shown). See text for details and explanation of the parameters. Vaccination prior to the onset of infection is specified by the fraction of susceptibles in each age class receiving vaccination. For vaccination occurring during an outbreak, the following model is used: for individuals in any given (k,a) class, the rate of vaccination at any given time is (i) proportional to the current number of susceptibles in the class; (ii) an increasing function of the total current (symptomatic) infection in the population as a whole, saturating at a prescribed rate. This was done to attempt to model the social response to an outbreak in the population, in which the greater the number of infected individuals the more likely that susceptible individuals would avail themselves of existing vaccination opportunities. The precise mathematical specification of this response is given in the Appendix. The baseline transmission rate of infection between a susceptible-infected pair of individuals is denoted by τ. The actual rate will depend on the age-classes that these individuals belong to, and whether the susceptible individual of the pair is seeing infection (by either strain) for the first or second time. 
These various possibilities are accounted for by expressing the actual transmission rate as τ times a factor, which depends on age classes involved, whether this is the first or second infection, and whether the individual has received prior vaccination. Details are given in the Appendix and in Table 1. Model parameters and their values [6]. The values used in the simulations reported here correspond to those reported in [23,24] for influenza. See Section 2 and Appendix for descriptions of parameters. Although data pertain to mean field (homogeneous) models, a correspondence between these and network models of this paper - see Appendix - enables one to apply these parameters to the latter type of model. In particular, a relationship between the transmission coefficient of the mean field model, β, and the baseline transmission rate τ between individuals in contact, enables one to assign a value to τ via the mean field expression for the basic reproductive number R0, as described in the Appendix. For the simulations, with pre-vaccination rate V0 = 0.2, a value R0 = 1.9 was assumed, representative of seasonal influenza and close to the value R0 = 1.8 used in [25]. Without vaccination, the corresponding value of R0 would be 2.34 (see Table 5). States labelled with I denote symptomatic infection, and those labelled with A denote asymptomatic infection. The P states describe immunity to one strain but not the other: Pj is the state with immunity to strain j (j = 1, 2), and R the state with immunity to both strains. In this model, we exclude co-infection: at any given time, an individual may be infected with at most one strain. State Ij denotes infection with strain j; and Ijk denotes previous infection with (and subsequent recovery from) strain j and current infection with strain k (where k ≠ j). A similar notation applies to the A-classes. The efficacy of the vaccine against strain j is denoted by σj. Subscript ‘V’ denotes states of infection (or partial recovery) arising from failure of the vaccine; and as before, labels states with infection due to, or partial recovery from, one of the strains. Following vaccination, infection due to strain j occurs with probability (1-σj). In general, for seasonal influenza, the vaccine is targeted against the earlier-occurring strain 1 virus; its efficacy against the later-occurring strain 2 (mutated) virus is expected to be less, i.e., σ2 < σ1. As in [6], the delay T* in appearance of strain 2 in the population is a parameter of the model. In Figure 1, the diverging pairs of directed edges are labelled with branching ratios for each strain of infection, with two pairs of such edges emanating from S and V classes. For example, if S is infected with one of the strains, it has a probability p of being symptomatically infected, and 1-p of being asymptomatically infected. (We assume that p is the same for both strains). Since S may be infected with either strain, there are two pairs of branches emanating from S in Figure 1. Similarly, there are two branch pairs for V, representing infection due to failure of the vaccine. After recovery from one strain of infection, an individual is still, in general, susceptible to infection by the other strain: individuals in state Pj (i.e., recovered from infection with strain j), can become infected with strain k (≠ j) but with diminished probability δjk. The probability of such infection being symptomatic is denoted by pjk. 
Similarly, for individuals who have received prior vaccination but still become infected by strain j, the probability of strain k infection is denoted by pVjk. Finally, the model allows for the possibility of disease-induced death, denoted by the state D. The rates at which these occur are assumed to be d or dA for symptomatic and asymptomatic infections, respectively, regardless of which of the disease states precede death; furthermore, the death rates - as with other parameters of the model – may depend on the age group in which the death occurs. The converging directed edges in this Figure are labelled with the recovery rates from infection: either µ (symptomatic infection) or µA (asymptomatic infection), where we assume that these rates are the same for both strains, regardless of whether this is the first or second infection for that individual. The parameter values used in the simulations are given in Table 1. For the homogeneous model, we may apply the technique of the next-generation matrix [17] to derive the basic reproductive number R0. In general, the second strain appears after infection due to the first strain has begun, so that R0 can be calculated using a one-strain sub-model. With this assumption, we find where S0, V0 denote the initial numbers of susceptible and vaccinated individuals, respectively, in the population, and β is the transmission coefficient. To establish a relationship between β and the baseline transmission rate τ between individuals in contact, we construct a (single-age class) network in which the ‘edge probability’ of randomly choosing an edge, one of whose vertices has degree k, is uniform. By relating this to the mean field model we derive (see Appendix) where k1 = vertex degree of population sub-class into which the Strain-1 infection is introduced at time t = 0, and kmax = maximum vertex degree in the finite network (kmax = 20 in the simulations). If we choose for V0 = 0.2, a conservative value R0 = 1.9 for influenza [10], then using the above expressions for β and R0 we derive τ = 3.5 d-1 for the transmission rate to be used in the simulations. The value of R0 corresponding to this τ in the absence of vaccination is R0 = 2.34. In keeping with the definition of the two age class model (see Appendix), the estimates of death rates [18,19] arising from symptomatic or asymptomatic infection (d, dA, respectively) for the two age-class model correspond to the general population above and below the median of the age distribution Pa which, for the city of Vancouver, is about 38 years [20]. We assume that the death rates due to natural causes are negligible, and choose nominal values for the disease-induced rates: d(a1 ) = d(a2 ) = 0.002 d-1 (Ref.[10]). These rates vary with the particular circulating influenza strains. Furthermore, we set d = dA in this illustrative study. In the model described above, the total number of individuals Nk,a in each (k,a) class is fixed, and hence the total population N (summed over all (k,a) classes) is constant. Therefore, by dividing the number of individuals in class (k,a) in state X at any given time by N, we may express the model in terms of the probability Xk,a(t) that a randomly chosen individual is in state X, and belongs to class (k,a), at time t. The resulting set of ordinary differential equations describing this deterministic model is given in the Appendix.
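To make the saturating "social response" vaccination model concrete, the sketch below implements one plausible reading of the rate described above and specified in the Appendix (half-saturation α = 0.4, steepness n = 2). The Hill-type form is an assumption consistent with the stated criteria, since the equation itself is not reproduced in this text.

# Assumed Hill-type form of the vaccination flux phi from S(k,a) to V(k,a);
# the exact equation is described but not shown in the extracted Appendix text.
def vaccination_flux(S_ka, I_tot, omega0=1.0, alpha=0.4, n=2):
    # phi = 0 when I_tot = 0, is proportional to S_ka, and saturates at
    # omega0*S_ka, with half-saturation when the infected fraction equals alpha.
    return omega0 * S_ka * I_tot ** n / (alpha ** n + I_tot ** n)

# Example: a (k,a) class holding 30% of the population as susceptibles,
# with 10% of the whole population symptomatically infected:
print(vaccination_flux(0.30, 0.10))  # about 0.018 vaccinations per day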
null
null
null
null
[ "Background", "Results", "Conclusions", "Appendix: Effects of vaccination and population structure on influenza epidemic spread in the presence of two circulating strains", "Comparison with mean field model", "Initial conditions for the mean field model", "Authors’ contributions", "Competing interests" ]
[ "Human influenza infection is characterized by seasonal epidemics. This occurs because influenza A is able to maintain its presence in human populations by evolutionary adaptations to population-wide immunity, resulting in mutations that gradually change viral antigens allowing the virus to evade immune detection, a process known as “antigenic drift”. On account of these rapid mutations, vaccination for influenza must be updated annually on a global basis, following surveillance to monitor the appearance of new strains [1]. Antigenic drift also diminishes vaccine efficacy for mutant strains, but may still confer partial immunity to these strains. Therefore, understanding the short-term evolution of influenza virus is crucial to developing seasonal vaccines. Conversely, vaccination of a population may influence the short-term evolution of the virus, for example by decreasing the number of hosts in which the virus may replicate.\nIn general, during a single influenza season, more than one viral strain is circulating. It is known [2,3] that when suitable invasion conditions are satisfied, stable coexistence of two different strains is possible. The coexistence of two or more strains in a population will influence their individual evolution; and furthermore, the selective forces acting on the strains will be mediated by the transmission dynamics in the population. For example, the infection of hosts by one strain will reduce susceptibility to other strains, thereby limiting their spread in the population [4]. In addition, the time lag in emergence of a second strain following onset of an epidemic by a first strain will be influenced by the strategy and timing of vaccination [5]. It is clear that viral evolution and public health policy are strongly interconnected, and understanding the population-level dynamics of coexisting viral influenza infections, when vaccination of the population is to be undertaken, would be of great benefit in designing such vaccination strategies [6].\nIn [6], a homogeneous model of two viral strains was developed, incorporating cross-immunity and delay in emergence of the second strain. It was found that for small delay and large cross-immunity, infections with both strains appeared as a single epidemic wave; on the other hand, with sufficient delay, a second epidemic wave is possible. Further, for sufficient delay and high cross-immunity, the population of susceptible hosts may become so depleted as to prevent a second wave. These findings, together with possible impact of vaccination on antigenic drift, suggest that vaccination would be an important factor to include [6].\nIn large populations, contacts between individuals are not uniform, as assumed in the homogeneous model [6]. Typically, the number of contacts per day per individual is much smaller than the population size, and the structure of the corresponding ‘contact matrix’ plays an important role in the development of the pattern of the disease [7]. The effects of spatial correlations [8], such as occur when community structures are present [9], were illustrated in the spread of drug resistance in a network with mild clustering [10]: the spread of the resistant strain occurred more rapidly, and at significantly lower treatment levels, than was predicted by the homogeneous model.\nThe present paper extends the model in [6] in a number of ways. The model includes either pre-vaccination or vaccination during the epidemic, of a predetermined part of the population. 
The contact structure is modelled as a Markov network [11], in which the distribution of degrees of the nodes (i.e., number of contacts for individuals in the population) is specified. In addition, the model allows a distribution of ages in the population by incorporating a prescribed number of age classes. The Markov assumption for the contact network allows the specification of structural parameters such as assortativity [12] and clustering [13-15] that are important characteristics of social groupings. These generalizations enable vaccination to be targeted according to age group and ‘contact number’ (degree of node), which in general respond to the vaccine in different ways. The model inevitably contains many parameters and allows a wide range of network structures to be specified; in addition, initial conditions can be specified in many different ways. Therefore, in this paper, only a simplified network model will be investigated. The structure of the network is comprised of uncorrelated nodes, with degree distribution specified as a truncated scale-free form [7]. Furthermore, for simplicity only one or two age-classes are considered, where, for the latter, the median age is chosen to separate the two classes. While a detailed age distribution, characteristic of a real population, could be specified, the present results are intended to be illustrative only and to allow comparison with the corresponding homogeneous model. The network model can potentially be useful in describing specific populations, such as a small or large city, in which case the network structure and age distribution would need to be determined from statistical analysis of demographic and census data [16].\nSection 2 describes the model in broad terms, and lists some of the parameter values used; technical details are given in the Appendix. Section 3 presents the results of simulations, in which the cross-immunity and delay in appearance of the second strain infection are varied. These results are also compared with those produced by the corresponding homogeneous model, to ascertain the importance of structure in the network for determining the time-course and final extent (“total attack rate”) of the disease. Finally, Section 4 discusses these results, some possible extensions of the model, and implications for vaccination strategies in more realistic models based on specific demographic data.", "The initial state was specified as follows. For pre-vaccination, a prescribed fraction V0\n(a) of individuals in each age class a receive vaccination. Infection by strain 1 is introduced into fraction ε1 of the remaining susceptibles residing in a single class (k1,a1). After the strain 1 infection has spread through the population for a time T*, a strain 2 infection is introduced into a fraction ε2 of class (k2,a2) individuals. In the simulations, we use ε1 = ε2 = 0.5; k1 = 5, k2 = 10, and for the two age class model, a1 = 1, a2 = 2. As previously mentioned, it is assumed that no individual may be infected with both strains simultaneously. The simulations were performed using three models: (1) network model with two age classes; (2) network model with one age class; and (3) the homogeneously-mixing 'mean field' model. For (1) and (2), the structure of the network was chosen to have a scale-free form [7], with the number of individuals (nodes of the contact network) with k contacts being proportional to k-2.5[21], and 1 ≤ k ≤ kmax = 20. 
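For readers wishing to reproduce this contact structure, the following sketch (a minimal illustration, not the authors' code) generates the truncated scale-free degree distribution and the corresponding edge distribution Pe(k) proportional to kP(k) for an uncorrelated network. The exponent is left as a parameter because the text quotes k^-2.5 here and γ = 3.5 in the Appendix.

# Sketch of the truncated scale-free degree distribution P(k) ~ k**(-gamma),
# 1 <= k <= kmax, and the edge distribution Pe(k) proportional to k*P(k)
# (appropriate for an uncorrelated network). The exponent gamma is a parameter
# because the Results quote 2.5 while the Appendix quotes 3.5.
import numpy as np

def degree_distributions(gamma=2.5, kmax=20):
    k = np.arange(1, kmax + 1)
    P = k.astype(float) ** (-gamma)
    P /= P.sum()      # vertex (degree) distribution P(k)
    Pe = k * P
    Pe /= Pe.sum()    # edge distribution Pe(k) = k*P(k)/<k>
    return k, P, Pe

k, P, Pe = degree_distributions()
print(f"mean degree <k> = {(k * P).sum():.2f}")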
Furthermore, the degrees of the nodes of the network were assumed to be uncorrelated: although real networks show significant correlation structure – e.g., clustering and associativity [12-15] - the purpose of the present simulations is to illustrate the general effects of departure from the homogeneous mixing assumption. A value R0 = 1.9 was fixed for the mean field model with V0\n = 0.2.\nFigure 2 compares the results from the network model using one age class (top row), two age classes (middle row), and the mean field model (bottom row), for delays of T* = 10 days (left column) and 60 days (right column) for introducing strain-2 infection into a prescribed sub-population. Solid curves correspond to a large (60%) cross-immunity (with δ ≡ δ12 =δ21 = 0.4) between strains, and dashed curves to a low (10%) cross-immunity (δ = 0.9). These results show that the level of cross-immunity has a significant effect on whether a second wave appears: if it is too high (60%, δ = 0.4), then the second wave does not appear in any of the three models. For the 2-age class model (middle row), the (first) peak infection occurs at a slightly lower value for the larger cross-immunity: 0.03 vs. 0.033 fraction of the population; however, the timing of these peaks (~75 days after initial infection) is not sensitive to the level of cross-immunity. Similar conclusions apply to the one age class network model and the mean field model, though the timing of these peaks is different for different models.\nComparison of models: Varying cross-immunity and delay for introducing strain. 2 Comparison of results from network model for one age class (top row), two age classes (middle row), and mean field (bottom row) models. Solid curves are for δ = 0.4 (60% cross-immunity) and dashed curves for δ = 0.9 (10% cross-immunity). Plots in the left column are for the appearance of the second strain 10 days after strain-1 infection starts; right column plots for the appearance of the second strain 60 days after initial strain-1 infection. For both network models, strain 1 was introduced into class k = 5 and strain 2 into class k = 10. For the two age class model, infections were started in different age classes. Note that a second wave appears only when the cross-immunity is small (dashed curves), apart from a small, delayed outbreak in the one-age class model.\nAs expected, if the second strain is introduced after the strain 1 infection has been largely cleared from the population (as is the case when T* = 60 days: right column), then the first and second waves behave as distinct, non-interacting one-strain epidemics. However, when T* is only 10 days (left column), there is still a significant presence of strain 1 infection in the population: the infections in the two age-class model merge into a single broad peak, whereas the other two models show two distinct peaks, with the second peak occurring in both models ~50 days after initial infection.\nIt is therefore apparent that the two age class network model exhibits a larger delay in peak infection – for both first and second waves – compared to the one age class and mean-field models. This can be accounted for by the reduced transmissibility between classes compared to within one class, as well as reduced transmissibility within the second age class (see the M matrix in the Appendix). (Recall that strain 1 and 2 infections are introduced into different age classes in the two age class network model). 
Such differences in delays between mean-field and structured models have been observed elsewhere [10], and underline the importance of spatial structure in determining the course of an epidemic.\nFigure 3 shows the effect of different levels of pre-vaccination (V0 = 0.2 [dashed curves] or 0.4 [solid curves]) on the infection profiles, for each of the three models, assuming a delay T* = 10 days in appearance of strain 2 infection. For all models, the peak infection is reduced by about 50%, and occurs slightly later, when increasing from 20% to 40% coverage. The peak infections in the two age class model, however, occur at significantly lower values than in the other two models: by a factor of 5 for the mean field model, and a factor of 10 for the one age class network model. Again, this may be accounted for by the reduced transmissibility between different age classes. Notice also that, as in Figure 2, the two age class model exhibits a single peak of infection for both levels of vaccination.\nComparison of models: Prior-vaccination coverage. Comparison of mean-field (a), one- and two age class network models ((b) and (c), respectively), for different prior vaccination coverage of 40% (solid curves) or 20% (dashed curves). For the two age class model, strain-1 and strain-2 infections occur in different age classes. The parameter δ = 0.9 (10% cross-immunity), and second strain infection begins at T* = 10 days after first-strain infection.\nIn Figure 4, the two age class network model is used to explore the effects of vaccination during an epidemic outbreak, with no vaccination prior to the initial appearance of strain 1 infection, where vaccination rates are determined according to the “social response” to total infection in the population, as described in Section 2 and the Appendix. The resulting infection profiles are remarkably insensitive to (i) level of cross-immunity, (ii) delay T*, and (iii) the rate of vaccination ω0. Apart from a temporary decrease in the rate of disappearance of the infection when T* = 10 days and δ = 0.4 (top left in Figure 4), there is only one peak of infection which occurs consistently at very similar times (50 days after strain 1 infection) and with peak magnitudes between 0.07 and 0.08 of the fraction of population. Comparing these results to the two age class model with pre-vaccination (middle row of Figure 2), it can be seen that the peak infection occurs about 20-30 days earlier for vaccination during the epidemic than when pre-vaccination is carried out.\nComparison of models: Vaccination during outbreak. Comparison of total clinical infection resulting when vaccination is introduced during a disease outbreak, for different vaccination rates. Left column is for T* = 10 days, right column for T* = 60 days. Top row: δ = 0.4; bottom row: δ = 0.9. The underlying model is a two age class network model, and vaccination rates in each (k,a) class are proportional to the number of susceptible individuals in that class. These rates are determined by the total infection in the general population, and saturate at a value of ω0 per individual per day (see Appendix), where ω0 = 1 (solid curve), 0.5 (dashed curve) or 0 (dotted curve).\nTables 2, 3, 4 compare the attack rates when individuals are infected by each strain separately (represented by the final values of the P1\n and P2\n compartments in Figure 1) and by both strains in succession (represented by the R compartment), for each of the three models. 
It is seen that in general, for given T* and δ, the attack rates predicted by the network models are quite different between the model types. However for pre-vaccination, within each model, strain 1 attack rates increase, while strain 2 attack rates decrease, with the delay T*. For vaccination during the epidemic, these trends are also seen though for strain 1 infection are less pronounced. This is reasonable, because for longer T*, the strain 2 infection draws upon a smaller pool of susceptibles (and individuals with failed vaccination) which are depleted by strain 1 infection for a longer time interval before strain 2 appears. Also, as expected, strain 2 attack rates are sensitive to the level of cross-immunity of the strains, decreasing sharply as cross-immunity increases from 10% to 60%. Variations between models are manifestations of the importance of heterogeneity of contact structure. The dependence on age distribution between network models in Tables 2, 3 is a consequence of our assumption (see matrix M in the Appendix) that transmissibility within age class 1 is greater than that within age class 2 or between age classes.\nTotal attack rates for 2-age class network model\n\nFor the 2-age class network model, the initial strain-1 infection occurs in a fraction ε1 = 0.5 of individuals in class (k1,a1) = (5,1), and initial strain-2 infection, occurs at time T* = 10 or 60 days later, in a fraction ε2 = 0.5 of individuals in class (k2,a2) = (10,2). For these conditions, and the given degree- and age-distributions, the initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nTotal attack rates for 1-age class network model\n\nFor the 1-age class network model, fraction ε1 = 0.5 of degree class k1 = 5 is infected with strain-1 at time t = 0, and fraction ε2 = 0.5 of degree class k2 = 10 with strain-2 at time t = T* = 10 days or 60 days. The initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nTotal attack rates for mean field model\n\nFor the mean field model, the initial conditions were chosen to be compatible with those of the 1-age class model - see Appendix and Table 3. The initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nA question of importance to public health is the dependence of both the attack rate and death rate due to infection. The attack rates are shown in Table 5 for the two age class network model with T* = 60 days, and cross-immunity of 10% and 60%. For pre-vaccination, as expected, strain 1 attack rates decrease with increasing level of vaccination, at first slowly then dropping off sharply between V0 = 0.3 and 0.4 regardless of the level of cross-immunity. Interestingly, the maximum attack rate due to strain 2 infections is reached in this range, and drops off sharply thereafter. Thus, for the parameter values used, it appears that vaccination is most effective around these values, with diminishing returns for higher V0. Tables 6, 7 show that death rates due to pre-vaccination are lower than predicted for the entire range of vaccination rates during an epidemic; and in both cases the death rates decrease with increasing levels of vaccination. 
Analogous to the attack rates, there is a small increase in mortality for vaccination coverage around 20%, due to increased numbers of strain 2 infections at the expense of reduced numbers of strain 1 infections; however, the mortality rate drops sharply once the vaccination coverage exceeds about 40%.\nEffects of varying pre-vaccination fraction on total attack rates for 2-age class network model\n\nIn all cases, no vaccination occurs during the epidemic, i.e., ω0 = 0, and the delay in appearance of second strain infection is T* = 60 days.\nEffect on death rate of pre-vaccination\n\nAll results are for 2 age class network model, with delay in introduction of strain-2 of T* = 60 days, and cross-immunity of 60% (δ = 0.4) and 10% (δ = 0.9). The case of pre-vaccination only (ω0 = 0) is considered below.\nEffect on death rate of vaccination during epidemic\n\nAll results are for 2 age class network model, with delay in introduction of strain-2 of T* = 60 days, and cross-immunity of 60% (δ = 0.4) and 10% (δ = 0.9). The case of vaccination only during epidemic (V0 = 0) is considered.\n", "We have considered extensions of the two viral strain mean field (homogenous) model introduced in [6], to explore the effects of both local network structure and the division of the population into different age classes. The present study used model parameter values (in particular, R0) originally estimated for mean field models; and in order to translate these to the network models derived in this paper, a correspondence was established between the mean field model and a limiting case of the network model (see Appendix). The two age class model assumed the age boundary was located at the median age (about 38 years for Vancouver), with vaccine efficacy of 80% in the lower age group and 40% in the upper age group.\nSeveral notable features were observed when comparing the network models to the corresponding mean field case. Firstly, the amount of cross-immunity present is significant in determining whether a second wave of infection occurs. Due to a lower transmission rate between age classes and within the second age class, compared to within the first age class, infection levels were found to be significantly less for the two age class model than for either the one age class or mean field models. The infections occurred as either a single wave or as two successive waves. A second wave is more likely to occur the longer the delay in introduction of the second strain, since when this delay is short (~10 days) infections due to both strains merge into a single, broad peak. When a second wave does occur, the shapes of the two waves depend on when the second strain infection is introduced. If it occurs well after the first infection has run its course, then the two waves behave as distinct, non-interacting infections. The second infection peak is delayed, and its amplitude reduced, in the network model, compared to the mean field case. This behaviour reflects a longer propagation time in the network model, and has been qualitatively observed in other models, reinforcing the importance of including local network structure in realistic models.\nAs expected, the amount of cross-immunity between the two strains is important in determining the size of the second-strain outbreak. It was found that its size decreased sharply with increasing cross-immunity. 
As the level of vaccination increases, strain 1 attack rates decrease, with a sharp drop occurring around 30-40% pre-vaccination coverage; at the same time, strain 2 infections increase with increasing vaccination coverage, reaching their maximum somewhere in this range, and drop off sharply for higher coverage levels. This phenomenon is reminiscent of the development of drug resistance, where there is an optimal level of drug treatment (compare: vaccination coverage) that minimizes the overall infection [10]. This could have significant implications for vaccination strategies in realistic models of populations in which more than one strain is circulating.\nIt was found that increasing either pre-vaccination or vaccination during an outbreak, reduces the disease-induced mortality. Furthermore, pre-vaccination appears to be more effective than vaccination during an outbreak in reducing overall mortality, though this needs further investigation as it may depend critically on how the latter is implemented. This study considered only a simple model in which at any given time vaccination rates during an outbreak were governed by the total infection in the population at that time, and considers only vaccination of the susceptible class S, neglecting vaccination of other classes (e.g., P1 and P2 and asymptomatic cases).\nAs mentioned earlier, the particular form of the terms included in the model to incorporate local network structure and the effects of age classes was chosen for illustrative purposes. This approach, though, can be used on a specific population if sufficient data are available to determine realistic estimates of the age classes and network structure present and of the parameters of the model. The main difficulty is in determining the form of the two-point correlations between vertices of the contact network for a realistic particular population, and this must be derived indirectly from estimates of network structure extracted from the data [16]. An intermediate approach is to explore the effects of a few network structure parameters – e.g., clustering, associativity, betweenness, and centrality [7,16], obtaining expressions for the two-point probabilities defining the Markov network directly in terms of these parameters. 
This is currently under investigation.", "The various parameters in the model (Figure 1 of main text) are defined below:\nτ = baseline transmission rate between a susceptible-infected pair\np = probability of developing symptomatic infection with no prior exposure\npV1, pV2 = probabilities of pre-vaccinated individuals developing symptomatic infection from strains 1 and 2, respectively, with no prior exposure\nσ1, σ2 = effectiveness of vaccine to strains 1 and 2, respectively\nδV1, δV2 = reduction in transmissibility of strains 1 and 2, respectively, for vaccinated individuals\np12 = probability of developing symptomatic infection with prior exposure to strain 1\npV12 = probability of pre-vaccinated individuals developing symptomatic infection with prior exposure to strain 1\np21 = probability of developing symptomatic infection with prior exposure to strain 2\npV21 = probability of pre-vaccinated individuals developing symptomatic infection with prior exposure to strain 2\nδA = reduction in infectiousness due to asymptomatic infection\nµ, µA = recovery rates from symptomatic and asymptomatic infections\nδ12, δ21 = level of cross-immunity induced by previous exposure to strain 1 and strain 2, respectively\nd, dA = disease-induced death rates, assumed to be age-dependent but the same for each type of infection.\nDefine age classes 1 ≤ a ≤ amax, and network degree classes 1 ≤ k ≤ kmax. Let\nfor U ∈ {A1, AV1,I1, IV1,A21, AV21,I21, IV21, A2, AV2, I2, IV2, A12, AV12, I12, IV12}, denote the force of infection for age-class a and degree-class k. Here, M(a,a′) denotes the relative transmission coefficient between age-groups, so that τM(a,a′) = transmission coefficient between a susceptible individual of age-class a in contact with an infected individual of age-class a′. Also, P(k′,a′|k,a) is the probability that an individual (node) of age-class a and degree-class k has a neighbour (adjacent node) of age-class a′ and degree-class k′.\nIn the special case that the contact network is the same for all age-classes, P(k′,a′|k,a) = Pa(a′|a)P(k′|k), where Pa ( a′|a) denotes the probability that an age-class a individual has an age-class a′ neighbour. The two conditional distributions obey the conditions\nIn the general case, . If the node-degrees are uncorrelated, then P(k′|k) = Pe(k′), where Pe(k) is the edge distribution [22], defined as the probability of randomly drawing an edge connected to a vertex of degree k. It is related to P(k), the vertex distribution, by Similarly, if the age distributions are uncorrelated, then Pa(a′|a) = Pa(a′). Thus, for uncorrelated age-structured networks, which are considered in this paper, P(k′,a′|k,a) = Pa(a′)Pe(k′). In the present study, the degree distribution follows a scale-free form [7]P(k) ~ k-γ, where we have chosen γ = 3.5. In this paper, we consider only one or two age classes (amax = 1, 2); more realistic models would incorporate demographic data on several ages classes (typically, amax ≥ 4), but as discussed in the main text, the model simulations are only intended to illustrate the effects of heterogeneous contact- and age-structure, in comparison with homogeneous models. For the simulations reported in the main paper, we have for the one-age class model: Pa(a) = 1, and for the two age class model we chose Pa(a1) = Pa(a2) = 0.5, and\nWe extend the homogeneous (mean-field) model in [6] to a Markov network model with age structure, and include vaccination of strain 1 prior to onset of influenza outbreak. 
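To make the distribution bookkeeping of this Appendix concrete, the short sketch below constructs the truncated scale-free vertex distribution P(k) ~ k^-γ on 1 ≤ k ≤ kmax, an edge distribution Pe(k), and the joint neighbour probabilities Pa(a′)Pe(k′) for the uncorrelated two-age-class case. The relation between Pe and P quoted from [22] did not survive extraction, so the standard degree-biased form Pe(k) = k P(k)/⟨k⟩ is assumed here; the values γ = 3.5, kmax = 20 and Pa(a1) = Pa(a2) = 0.5 are those stated above, and the function names are illustrative only.

```python
import numpy as np

def vertex_distribution(gamma=3.5, kmax=20):
    """Truncated scale-free vertex distribution P(k) ~ k^-gamma, k = 1..kmax."""
    k = np.arange(1, kmax + 1)
    P = k.astype(float) ** (-gamma)
    return k, P / P.sum()

def edge_distribution(k, P):
    """Edge distribution; the standard degree-biased form Pe(k) = k P(k) / <k> is assumed."""
    Pe = k * P
    return Pe / Pe.sum()

k, P = vertex_distribution()
Pe = edge_distribution(k, P)
Pa = np.array([0.5, 0.5])            # uniform two-age-class distribution Pa(a)

# Uncorrelated age-structured network: P(k', a' | k, a) = Pa(a') * Pe(k'),
# independent of the focal node's own class (k, a).
neighbour_probs = np.outer(Pa, Pe)   # neighbour_probs[a'-1, k'-1]
assert np.isclose(neighbour_probs.sum(), 1.0)
print(f"mean degree <k> = {(k * P).sum():.2f}")
```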
Furthermore, we extend this model to allow vaccination during an epidemic, by introducing a flux φ of susceptibles from the S classes to their corresponding V classes, according to the prescription:\n(i) φ is a function of the total (symptomatic) infection in the population, Itot (summed over all k and a), and φ = 0 when Itot = 0;\n(ii) φ is proportional to the population in class S(k,a,t);\n(iii) φ eventually saturates at a maximum value as Itot increases.\nA functional form that satisfies these criteria is\nwhere ω0 is the (age-class dependent) saturation rate of vaccination, α is the value of Itot at half-saturation, and n > 0 governs the steepness of the response curve. In the simulations, α = 0.4 (i.e., half-saturation occurs when 40% of population is infected), and n = 2.\nIn order to incorporate death due to infection, we add a set of classes {D(k,a,t)} to the model, and (similar to the recovery rates) assume that death rates are either d (for all symptomatic infections) or dA (for all asymptomatic infections).\nBased on Figure 1 (main text), this gives rise to the following set of 24×(amax×kmax) equations:\nwhere S ≡ S(k,a,t), etc., and the force of infection ΘU may be expressed as\nwhere U ∈ {A1, AV1,I1, IV1,\n A21, AV21,I21, IV21,A2, AV2,I2, IV2,A12,\nAV12,I12,IV12}, and is the connectivity matrix, defined by the contact structure of the population. Also,\nwhere\nThis shows that the various Θ(k,a,t)’s describe the connectivity of a vertex of degree k and age-class a to all the infected adjacent vertices.", "In order to make this comparison, we need to obtain a “limiting” form of the network model that approximates the mean field model. This will enable us to obtain a relationship between the mean field model transmission rate β and the transmission rate τ between a susceptible-infected pair in the network. We consider a simplified network model consisting of a single age class, and an uncorrelated network (so that P(k’|k)=Pe(k’)), and further that Pe is a uniform distribution: Pe(k) = 1/kmax, where kmax is the maximum degree in the network. With these assumptions, (A3) simplifies to\nTherefore, the equation for S(k,t) becomes\nwhere the term in parentheses is independent of k. Assume that S(k,t) = P(k)S(t), etc.; then this becomes\nSumming over k from 1 to kmax, we obtain\nwhereis the mean degree of nodes in the network.\nComparing this approximation with the Mean Field expression for dS/dt, suggests we make the following correspondence:\nMore realistically, since we introduce Strain-1 infections into the sub-population defined by (k,a) = (k1,a1), and taking account of the fact that R0\n is defined in terms of the first generation of infection, it would be more accurate to replace by k1, so that\nUsing this approximate relationship enables us to compare the simulation of the behaviours of the network and mean-field models, by relating numerical values of the parameters β and τ through the simplified limiting case of a network in which the probability of drawing an edge at random from the network is uniform.", "In what follows, it is assumed that the total population (including deaths) is normalized to unity, which is permissible since for this model it is constant. The initial conditions for the mean field model must be consistent with those of the network model. 
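Stepping back to the vaccination flux φ introduced at the start of this Appendix, the displayed functional form was lost in extraction; the sketch below therefore uses a Hill-type saturating response consistent with criteria (i)-(iii), the stated half-saturation α = 0.4 and steepness n = 2. The specific form should be read as a reconstruction, not as the authors' exact expression.

```python
def vaccination_flux(S_ka, I_tot, omega0=1.0, alpha=0.4, n=2):
    """
    Flux of susceptibles in class (k, a) into the corresponding vaccinated class.

    Reconstructed form: phi = omega0 * S_ka * I_tot**n / (alpha**n + I_tot**n).
    It vanishes when the total symptomatic infection I_tot is zero, is proportional
    to S(k, a, t), and saturates at omega0 * S_ka as I_tot grows, as required by
    criteria (i)-(iii).
    """
    if I_tot <= 0.0:
        return 0.0
    return omega0 * S_ka * I_tot ** n / (alpha ** n + I_tot ** n)

# At the half-saturation point (40% of the population symptomatically infected),
# susceptibles are vaccinated at half the saturation rate omega0:
print(vaccination_flux(S_ka=0.3, I_tot=0.4))   # 0.15 = 0.5 * 1.0 * 0.3
```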
The analysis that follows applies to an arbitrary number of age classes and degree distributions.\nIn the network model, at t = 0 a fraction ε1 of Strain-1 infection is introduced into the sub-population in class (k1,a1). Therefore, we specify the initial conditions at t = 0 according to\nwhere Pka1 ≡ P(k1\n )Pa(a1\n ) represents the fraction of the total population in class (k1,a1). Similarly, at t = T* when strain 2 infections are introduced to a fraction ε2 of the sub-population in class (k2, a2), the corresponding mean field conditions are modified as follows:\nwhereare the changes in the susceptible and vaccinated sub-populations, respectively, and Pka2 ≡P(k2\n )Pa(a2\n ) is the fraction of the total population in class (k2,a2).\nIn order to allow comparison between Mean Field and network models, all age-dependent parameters δA, δV1, δV2, σ1, σ2, µA, µ, dA, d, etc., in the network model are replaced by their age-distributed averages:, etc., where without risk of ambiguity we may drop the ‘MF’ superscript.\nFor the network model, for all age classes we set ε1 = ε2 = 0.5, p = 0.6, V0\n = 0.2, σ1 = 0.8, σ2 = 0.4. For the two age-class model, we chose (k1,a1) = (5,1), (k2,a2\n) = (10,2); and for the one-age class model k1 = 5, k2\n = 10. The (truncated) scale-free distribution P(k) ~ k-3.5 with kmax = 20 yields Pka1 = 0.0067, Pka2 = 0.0012 (so that, in a population of N = 10,000, the number of infections is 67 and 12, respectively), where we are assuming Pa to be uniformly distributed in the 2-age-class model: Pa\n (a1) = Pa(a2\n) = 0.5.\nSubstituting these values into the expression for R0 (Section 2 in main paper), and using R0 = 1.9, V0 = 0.2, kmax = 20, and k1 = 5, yields the values β = 0.8765, and τ = 3.5 d-1. For V0 = 0.4, using the same value τ = 3.5 d-1, the corresponding value of R0 is 2.34.", "MEA wrote the bulk of the manuscript, and designed, implemented and simulated the network model from a homogeneous one constructed with assistance from Dr S. Moghadas. RK implemented and simulated the homogeneous model, and wrote the Conclusions. MEA wrote the Appendix.", "The authors declare that they have no competing interests" ]
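As a numerical cross-check of the values quoted in this Appendix, the snippet below reproduces them under two explicit assumptions: the mean-field correspondence (whose displayed equation was lost) is taken to be β = τ k1/kmax, i.e. the "⟨k⟩ replaced by k1" form described above, and the class fractions Pka are computed from a truncated scale-free distribution with exponent 2.5, which reproduces the quoted 0.0067 and 0.0012 (the Appendix text states γ = 3.5, so the exponent used here is an assumption).

```python
import numpy as np

kmax, k1, k2 = 20, 5, 10
beta, tau_quoted = 0.8765, 3.5

# Assumed correspondence beta = tau * k1 / kmax, hence tau = beta * kmax / k1
print(f"tau from beta: {beta * kmax / k1:.3f} d^-1 (quoted {tau_quoted} d^-1)")

# Class fractions P_ka = P(k) * Pa(a) for a truncated scale-free P(k) ~ k^-2.5
k = np.arange(1, kmax + 1)
P = k.astype(float) ** -2.5
P /= P.sum()
Pa = 0.5                                  # uniform two-age-class distribution
Pka1, Pka2 = P[k1 - 1] * Pa, P[k2 - 1] * Pa
print(f"Pka1 = {Pka1:.4f} (quoted 0.0067), Pka2 = {Pka2:.4f} (quoted 0.0012)")

# Corresponding class sizes in a population of N = 10,000
N = 10_000
print(f"class sizes: {Pka1 * N:.0f} and {Pka2 * N:.0f} (quoted 67 and 12)")
```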
[ null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Results", "Conclusions", "Appendix: Effects of vaccination and population structure on influenza epidemic spread in the presence of two circulating strains", "Comparison with mean field model", "Initial conditions for the mean field model", "Authors’ contributions", "Competing interests" ]
[ "Human influenza infection is characterized by seasonal epidemics. This occurs because influenza A is able to maintain its presence in human populations by evolutionary adaptations to population-wide immunity, resulting in mutations that gradually change viral antigens allowing the virus to evade immune detection, a process known as “antigenic drift”. On account of these rapid mutations, vaccination for influenza must be updated annually on a global basis, following surveillance to monitor the appearance of new strains [1]. Antigenic drift also diminishes vaccine efficacy for mutant strains, but may still confer partial immunity to these strains. Therefore, understanding the short-term evolution of influenza virus is crucial to developing seasonal vaccines. Conversely, vaccination of a population may influence the short-term evolution of the virus, for example by decreasing the number of hosts in which the virus may replicate.\nIn general, during a single influenza season, more than one viral strain is circulating. It is known [2,3] that when suitable invasion conditions are satisfied, stable coexistence of two different strains is possible. The coexistence of two or more strains in a population will influence their individual evolution; and furthermore, the selective forces acting on the strains will be mediated by the transmission dynamics in the population. For example, the infection of hosts by one strain will reduce susceptibility to other strains, thereby limiting their spread in the population [4]. In addition, the time lag in emergence of a second strain following onset of an epidemic by a first strain will be influenced by the strategy and timing of vaccination [5]. It is clear that viral evolution and public health policy are strongly interconnected, and understanding the population-level dynamics of coexisting viral influenza infections, when vaccination of the population is to be undertaken, would be of great benefit in designing such vaccination strategies [6].\nIn [6], a homogeneous model of two viral strains was developed, incorporating cross-immunity and delay in emergence of the second strain. It was found that for small delay and large cross-immunity, infections with both strains appeared as a single epidemic wave; on the other hand, with sufficient delay, a second epidemic wave is possible. Further, for sufficient delay and high cross-immunity, the population of susceptible hosts may become so depleted as to prevent a second wave. These findings, together with possible impact of vaccination on antigenic drift, suggest that vaccination would be an important factor to include [6].\nIn large populations, contacts between individuals are not uniform, as assumed in the homogeneous model [6]. Typically, the number of contacts per day per individual is much smaller than the population size, and the structure of the corresponding ‘contact matrix’ plays an important role in the development of the pattern of the disease [7]. The effects of spatial correlations [8], such as occur when community structures are present [9], were illustrated in the spread of drug resistance in a network with mild clustering [10]: the spread of the resistant strain occurred more rapidly, and at significantly lower treatment levels, than was predicted by the homogeneous model.\nThe present paper extends the model in [6] in a number of ways. The model includes either pre-vaccination or vaccination during the epidemic, of a predetermined part of the population. 
The contact structure is modelled as a Markov network [11], in which the distribution of degrees of the nodes (i.e., number of contacts for individuals in the population) is specified. In addition, the model allows a distribution of ages in the population by incorporating a prescribed number of age classes. The Markov assumption for the contact network allows the specification of structural parameters such as assortativity [12] and clustering [13-15] that are important characteristics of social groupings. These generalizations enable vaccination to be targeted according to age group and ‘contact number’ (degree of node), which in general respond to the vaccine in different ways. The model inevitably contains many parameters and allows a wide range of network structures to be specified; in addition, initial conditions can be specified in many different ways. Therefore, in this paper, only a simplified network model will be investigated. The structure of the network is comprised of uncorrelated nodes, with degree distribution specified as a truncated scale-free form [7]. Furthermore, for simplicity only one or two age-classes are considered, where, for the latter, the median age is chosen to separate the two classes. While a detailed age distribution, characteristic of a real population, could be specified, the present results are intended to be illustrative only and to allow comparison with the corresponding homogeneous model. The network model can potentially be useful in describing specific populations, such as a small or large city, in which case the network structure and age distribution would need to be determined from statistical analysis of demographic and census data [16].\nSection 2 describes the model in broad terms, and lists some of the parameter values used; technical details are given in the Appendix. Section 3 presents the results of simulations, in which the cross-immunity and delay in appearance of the second strain infection are varied. These results are also compared with those produced by the corresponding homogeneous model, to ascertain the importance of structure in the network for determining the time-course and final extent (“total attack rate”) of the disease. Finally, Section 4 discusses these results, some possible extensions of the model, and implications for vaccination strategies in more realistic models based on specific demographic data.", "The state flow diagram of the model is given in Figure 1, which represents either the population counts in various compartments in the homogeneous model or the state of any given node (labelled by infection state, degree- and age-class) in the network model. The model describes the evolution of two concurrent strains of influenza infection, over a duration short compared to the natural lifespan of an individual in the population, and for this reason birth and natural death processes are ignored, and furthermore the number of individuals in each age class remains constant. Since we consider a static contact network, the degree class of each individual is also fixed. Therefore, each individual in the population belongs to a unique class (k,a), where k denotes the number of contacts, and a the age class, and the total number of individuals in each (k,a) class is constant. S denotes the susceptibles and V denotes individuals receiving vaccination either prior to the onset of the first infection or after this onset.\nState flow diagram for the two-strain influenza model. 
S denotes the susceptible state, without prior vaccination. Other susceptible individuals may receive vaccination prior to the onset of infection, or after infection has appeared, and are denoted by V. Despite vaccination, some individuals become infected and follow a similar sequence of infection states to that of the susceptibles. States (and parameters) originating from vaccination are denoted by a subscript ‘V’. States representing symptomatic infection by strain j are denoted by Ij, and correspondingly those infected asymptomatically by Aj. Double-subscripted states indicate that the individual was previously infected with one strain, and is now progressing through infected states of the other strain. Pj denotes individuals partially recovered from infection by strain j, but still susceptible to infection by the other strain. R denotes the class recovered from successive infection by both strains. In addition, infected individuals may die (at rates d or dA) and transfer (via the dashed arrows) to a disease-induced death class D (not shown). See text for details and explanation of the parameters.\nVaccination prior to the onset of infection is specified by the fraction of susceptibles in each age class receiving vaccination. For vaccination occurring during an outbreak, the following model is used: for individuals in any given (k,a) class, the rate of vaccination at any given time is (i) proportional to the current number of susceptibles in the class; (ii) an increasing function of the total current (symptomatic) infection in the population as a whole, saturating at a prescribed rate. This was done to attempt to model the social response to an outbreak in the population, in which the greater the number of infected individuals the more likely that susceptible individuals would avail themselves of existing vaccination opportunities. The precise mathematical specification of this response is given in the Appendix.\nThe baseline transmission rate of infection between a susceptible-infected pair of individuals is denoted by τ. The actual rate will depend on the age-classes that these individuals belong to, and whether the susceptible individual of the pair is seeing infection (by either strain) for the first or second time. These various possibilities are accounted for by expressing the actual transmission rate as τ times a factor, which depends on age classes involved, whether this is the first or second infection, and whether the individual has received prior vaccination. Details are given in the Appendix and in Table 1.\nModel parameters and their values [6].\n\nThe values used in the simulations reported here correspond to those reported in [23,24] for influenza. See Section 2 and Appendix for descriptions of parameters. Although data pertain to mean field (homogeneous) models, a correspondence between these and network models of this paper - see Appendix - enables one to apply these parameters to the latter type of model. In particular, a relationship between the transmission coefficient of the mean field model, β, and the baseline transmission rate τ between individuals in contact, enables one to assign a value to τ via the mean field expression for the basic reproductive number R0, as described in the Appendix. For the simulations, with pre-vaccination rate V0 = 0.2, a value R0 = 1.9 was assumed, representative of seasonal influenza and close to the value R0 = 1.8 used in [25]. 
Without vaccination, the corresponding value of R0 would be 2.34 (see Table 5).\n\nStates labelled with I denote symptomatic infection, and those labelled with A denote asymptomatic infection. The P states describe immunity to one strain but not the other: Pj is the state with immunity to strain j (j = 1, 2), and R the state with immunity to both strains. In this model, we exclude co-infection: at any given time, an individual may be infected with at most one strain. State Ij denotes infection with strain j; and Ijk denotes previous infection with (and subsequent recovery from) strain j and current infection with strain k (where k ≠ j). A similar notation applies to the A-classes. The efficacy of the vaccine against strain j is denoted by σj.\nSubscript ‘V’ denotes states of infection (or partial recovery) arising from failure of the vaccine; and as before, labels states with infection due to, or partial recovery from, one of the strains. Following vaccination, infection due to strain j occurs with probability (1-σj). In general, for seasonal influenza, the vaccine is targeted against the earlier-occurring strain 1 virus; its efficacy against the later-occurring strain 2 (mutated) virus is expected to be less, i.e., σ2 < σ1. As in [6], the delay T* in appearance of strain 2 in the population is a parameter of the model.\nIn Figure 1, the diverging pairs of directed edges are labelled with branching ratios for each strain of infection, with two pairs of such edges emanating from S and V classes. For example, if S is infected with one of the strains, it has a probability p of being symptomatically infected, and 1-p of being asymptomatically infected. (We assume that p is the same for both strains). Since S may be infected with either strain, there are two pairs of branches emanating from S in Figure 1. Similarly, there are two branch pairs for V, representing infection due to failure of the vaccine.\nAfter recovery from one strain of infection, an individual is still, in general, susceptible to infection by the other strain: individuals in state Pj (i.e., recovered from infection with strain j), can become infected with strain k (≠ j) but with diminished probability δjk. The probability of such infection being symptomatic is denoted by pjk. Similarly, for individuals who have received prior vaccination but still become infected by strain j, the probability of strain k infection is denoted by pVjk. Finally, the model allows for the possibility of disease-induced death, denoted by the state D. The rates at which these occur are assumed to be d or dA for symptomatic and asymptomatic infections, respectively, regardless of which of the disease states precede death; furthermore, the death rates - as with other parameters of the model – may depend on the age group in which the death occurs.\nThe converging directed edges in this Figure are labelled with the recovery rates from infection: either µ (symptomatic infection) or µA (asymptomatic infection), where we assume that these rates are the same for both strains, regardless of whether this is the first or second infection for that individual. The parameter values used in the simulations are given in Table 1.\nFor the homogeneous model, we may apply the technique of the next-generation matrix [17] to derive the basic reproductive number R0. In general, the second strain appears after infection due to the first strain has begun, so that R0 can be calculated using a one-strain sub-model. 
With this assumption, we find\nwhere S0, V0\n denote the initial numbers of susceptible and vaccinated individuals, respectively, in the population, and β is the transmission coefficient. To establish a relationship between β and the baseline transmission rate τ between individuals in contact, we construct a (single-age class) network in which the ‘edge probability’ of randomly choosing an edge, one of whose vertices has degree k, is uniform. By relating this to the mean field model we derive (see Appendix)\nwhere k1\n = vertex degree of population sub-class into which the Strain-1 infection is introduced at time t = 0, and kmax = maximum vertex degree in the finite network (kmax = 20 in the simulations). If we choose for V0\n = 0.2, a conservative value R0 = 1.9 for influenza [10], then using the above expressions for β and R0 we derive τ = 3.5 d-1 for the transmission rate to be used in the simulations. The value of R0 corresponding to this τ in the absence of vaccination is R0 = 2.34.\nIn keeping with the definition of the two age class model (see Appendix), the estimates of death rates [18,19] arising from symptomatic or asymptomatic infection (d, dA, respectively) for the two age-class model correspond to the general population above and below the median of the age distribution Pa which, for the city of Vancouver, is about 38 years [20]. We assume that the death rates due to natural causes are negligible, and choose nominal values for the disease-induced rates: d(a1\n) = d(a2\n) = 0.002 d-1 (Ref.[10]). These rates vary with the particular circulating influenza strains. Furthermore, we set d = dA in this illustrative study.\nIn the model described above, the total number of individuals Nk,a in each (k,a) class is fixed, and hence the total population N (summed over all (k,a) classes) is constant. Therefore, by dividing the number of individuals in class (k,a) in state X at any given time by N, we may express the model in terms of the probability Xk,a(t) that a randomly chosen individual is in state X, and belongs to class (k,a), at time t. The resulting set of ordinary differential equations describing this deterministic model is given in the Appendix.", "The initial state was specified as follows. For pre-vaccination, a prescribed fraction V0\n(a) of individuals in each age class a receive vaccination. Infection by strain 1 is introduced into fraction ε1 of the remaining susceptibles residing in a single class (k1,a1). After the strain 1 infection has spread through the population for a time T*, a strain 2 infection is introduced into a fraction ε2 of class (k2,a2) individuals. In the simulations, we use ε1 = ε2 = 0.5; k1 = 5, k2 = 10, and for the two age class model, a1 = 1, a2 = 2. As previously mentioned, it is assumed that no individual may be infected with both strains simultaneously. The simulations were performed using three models: (1) network model with two age classes; (2) network model with one age class; and (3) the homogeneously-mixing 'mean field' model. For (1) and (2), the structure of the network was chosen to have a scale-free form [7], with the number of individuals (nodes of the contact network) with k contacts being proportional to k-2.5[21], and 1 ≤ k ≤ kmax = 20. 
Furthermore, the degrees of the nodes of the network were assumed to be uncorrelated: although real networks show significant correlation structure – e.g., clustering and associativity [12-15] - the purpose of the present simulations is to illustrate the general effects of departure from the homogeneous mixing assumption. A value R0 = 1.9 was fixed for the mean field model with V0\n = 0.2.\nFigure 2 compares the results from the network model using one age class (top row), two age classes (middle row), and the mean field model (bottom row), for delays of T* = 10 days (left column) and 60 days (right column) for introducing strain-2 infection into a prescribed sub-population. Solid curves correspond to a large (60%) cross-immunity (with δ ≡ δ12 =δ21 = 0.4) between strains, and dashed curves to a low (10%) cross-immunity (δ = 0.9). These results show that the level of cross-immunity has a significant effect on whether a second wave appears: if it is too high (60%, δ = 0.4), then the second wave does not appear in any of the three models. For the 2-age class model (middle row), the (first) peak infection occurs at a slightly lower value for the larger cross-immunity: 0.03 vs. 0.033 fraction of the population; however, the timing of these peaks (~75 days after initial infection) is not sensitive to the level of cross-immunity. Similar conclusions apply to the one age class network model and the mean field model, though the timing of these peaks is different for different models.\nComparison of models: Varying cross-immunity and delay for introducing strain. 2 Comparison of results from network model for one age class (top row), two age classes (middle row), and mean field (bottom row) models. Solid curves are for δ = 0.4 (60% cross-immunity) and dashed curves for δ = 0.9 (10% cross-immunity). Plots in the left column are for the appearance of the second strain 10 days after strain-1 infection starts; right column plots for the appearance of the second strain 60 days after initial strain-1 infection. For both network models, strain 1 was introduced into class k = 5 and strain 2 into class k = 10. For the two age class model, infections were started in different age classes. Note that a second wave appears only when the cross-immunity is small (dashed curves), apart from a small, delayed outbreak in the one-age class model.\nAs expected, if the second strain is introduced after the strain 1 infection has been largely cleared from the population (as is the case when T* = 60 days: right column), then the first and second waves behave as distinct, non-interacting one-strain epidemics. However, when T* is only 10 days (left column), there is still a significant presence of strain 1 infection in the population: the infections in the two age-class model merge into a single broad peak, whereas the other two models show two distinct peaks, with the second peak occurring in both models ~50 days after initial infection.\nIt is therefore apparent that the two age class network model exhibits a larger delay in peak infection – for both first and second waves – compared to the one age class and mean-field models. This can be accounted for by the reduced transmissibility between classes compared to within one class, as well as reduced transmissibility within the second age class (see the M matrix in the Appendix). (Recall that strain 1 and 2 infections are introduced into different age classes in the two age class network model). 
Such differences in delays between mean-field and structured models have been observed elsewhere [10], and underline the importance of spatial structure in determining the course of an epidemic.\nFigure 3 shows the effect of different levels of pre-vaccination (V0 = 0.2 [dashed curves] or 0.4 [solid curves]) on the infection profiles, for each of the three models, assuming a delay T* = 10 days in appearance of strain 2 infection. For all models, the peak infection is reduced by about 50%, and occurs slightly later, when increasing from 20% to 40% coverage. The peak infections in the two age class model, however, occur at significantly lower values than in the other two models: by a factor of 5 for the mean field model, and a factor of 10 for the one age class network model. Again, this may be accounted for by the reduced transmissibility between different age classes. Notice also that, as in Figure 2, the two age class model exhibits a single peak of infection for both levels of vaccination.\nComparison of models: Prior-vaccination coverage. Comparison of mean-field (a), one- and two age class network models ((b) and (c), respectively), for different prior vaccination coverage of 40% (solid curves) or 20% (dashed curves). For the two age class model, strain-1 and strain-2 infections occur in different age classes. The parameter δ = 0.9 (10% cross-immunity), and second strain infection begins at T* = 10 days after first-strain infection.\nIn Figure 4, the two age class network model is used to explore the effects of vaccination during an epidemic outbreak, with no vaccination prior to the initial appearance of strain 1 infection, where vaccination rates are determined according to the “social response” to total infection in the population, as described in Section 2 and the Appendix. The resulting infection profiles are remarkably insensitive to (i) level of cross-immunity, (ii) delay T*, and (iii) the rate of vaccination ω0. Apart from a temporary decrease in the rate of disappearance of the infection when T* = 10 days and δ = 0.4 (top left in Figure 4), there is only one peak of infection which occurs consistently at very similar times (50 days after strain 1 infection) and with peak magnitudes between 0.07 and 0.08 of the fraction of population. Comparing these results to the two age class model with pre-vaccination (middle row of Figure 2), it can be seen that the peak infection occurs about 20-30 days earlier for vaccination during the epidemic than when pre-vaccination is carried out.\nComparison of models: Vaccination during outbreak. Comparison of total clinical infection resulting when vaccination is introduced during a disease outbreak, for different vaccination rates. Left column is for T* = 10 days, right column for T* = 60 days. Top row: δ = 0.4; bottom row: δ = 0.9. The underlying model is a two age class network model, and vaccination rates in each (k,a) class are proportional to the number of susceptible individuals in that class. These rates are determined by the total infection in the general population, and saturate at a value of ω0 per individual per day (see Appendix), where ω0 = 1 (solid curve), 0.5 (dashed curve) or 0 (dotted curve).\nTables 2, 3, 4 compare the attack rates when individuals are infected by each strain separately (represented by the final values of the P1\n and P2\n compartments in Figure 1) and by both strains in succession (represented by the R compartment), for each of the three models. 
It is seen that in general, for given T* and δ, the attack rates predicted by the network models are quite different between the model types. However for pre-vaccination, within each model, strain 1 attack rates increase, while strain 2 attack rates decrease, with the delay T*. For vaccination during the epidemic, these trends are also seen though for strain 1 infection are less pronounced. This is reasonable, because for longer T*, the strain 2 infection draws upon a smaller pool of susceptibles (and individuals with failed vaccination) which are depleted by strain 1 infection for a longer time interval before strain 2 appears. Also, as expected, strain 2 attack rates are sensitive to the level of cross-immunity of the strains, decreasing sharply as cross-immunity increases from 10% to 60%. Variations between models are manifestations of the importance of heterogeneity of contact structure. The dependence on age distribution between network models in Tables 2, 3 is a consequence of our assumption (see matrix M in the Appendix) that transmissibility within age class 1 is greater than that within age class 2 or between age classes.\nTotal attack rates for 2-age class network model\n\nFor the 2-age class network model, the initial strain-1 infection occurs in a fraction ε1 = 0.5 of individuals in class (k1,a1) = (5,1), and initial strain-2 infection, occurs at time T* = 10 or 60 days later, in a fraction ε2 = 0.5 of individuals in class (k2,a2) = (10,2). For these conditions, and the given degree- and age-distributions, the initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nTotal attack rates for 1-age class network model\n\nFor the 1-age class network model, fraction ε1 = 0.5 of degree class k1 = 5 is infected with strain-1 at time t = 0, and fraction ε2 = 0.5 of degree class k2 = 10 with strain-2 at time t = T* = 10 days or 60 days. The initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nTotal attack rates for mean field model\n\nFor the mean field model, the initial conditions were chosen to be compatible with those of the 1-age class model - see Appendix and Table 3. The initial number of strain-1 and strain-2 infections (introduced at times t = 0 and T*, respectively) in a population of size N = 10,000, is 67 and 12, respectively.\n\nA question of importance to public health is the dependence of both the attack rate and death rate due to infection. The attack rates are shown in Table 5 for the two age class network model with T* = 60 days, and cross-immunity of 10% and 60%. For pre-vaccination, as expected, strain 1 attack rates decrease with increasing level of vaccination, at first slowly then dropping off sharply between V0 = 0.3 and 0.4 regardless of the level of cross-immunity. Interestingly, the maximum attack rate due to strain 2 infections is reached in this range, and drops off sharply thereafter. Thus, for the parameter values used, it appears that vaccination is most effective around these values, with diminishing returns for higher V0. Tables 6, 7 show that death rates due to pre-vaccination are lower than predicted for the entire range of vaccination rates during an epidemic; and in both cases the death rates decrease with increasing levels of vaccination. 
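The attack-rate bookkeeping used in Tables 2-4 and 5-7 can be made explicit with a few lines; the sketch below is an illustrative summary rather than the authors' own post-processing. It assumes the convention stated above (final P1 and P2 hold individuals infected by exactly one strain, R those infected by both in succession), treats vaccinated and unvaccinated recovery classes as already summed, and does not attribute deaths to a particular strain; the combined "total" entries are derived quantities added here.

```python
def outbreak_summary(P1, P2, R, D):
    """Final-state attack rates and mortality, as fractions of the total population."""
    return {
        "strain 1 only": P1,
        "strain 2 only": P2,
        "both strains": R,
        "strain 1 total (derived)": P1 + R,
        "strain 2 total (derived)": P2 + R,
        "mortality": D,
    }

# Illustrative values only, not taken from the paper's tables:
print(outbreak_summary(P1=0.20, P2=0.05, R=0.02, D=0.003))
```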
" ]
[ null, "methods", null, null, null, null, null, null, null ]
[]
Bronchoalveolar CD4+ T cell responses to respiratory antigens are impaired in HIV-infected adults.
21357587
HIV-infected adults are at an increased risk of lower respiratory tract infections. HIV infection impairs systemic acquired immunity, but there is limited information in humans on HIV-related cell-mediated immune defects in the lung.
RATIONALE
We obtained BAL fluid and blood from HIV-infected individuals (n=21) and HIV-uninfected adults (n=24). We determined the proportion of T cell subsets including naive, memory and regulatory T cells using flow cytometry, and used intracellular cytokine staining to identify CD4(+) T cells recognising influenza virus-, S pneumoniae- and M tuberculosis-antigens.
METHODS
CD4(+) T cells in BAL were predominantly of effector memory phenotype compared to blood, irrespective of HIV status (p<0.001). There was immune compartmentalisation with a higher frequency of antigen-specific CD4(+) T cells against influenza virus, S pneumoniae and M tuberculosis retained in BAL compared to blood in HIV-uninfected adults (p<0.001 in each case). Influenza virus- and M tuberculosis-specific CD4(+) T cell responses in BAL were impaired in HIV-infected individuals: proportions of total antigen-specific CD4(+) T cells and of polyfunctional IFN-γ and TNF-α-secreting cells were lower in HIV-infected individuals than in HIV-uninfected adults (p<0.05 in each case).
MAIN RESULTS
BAL antigen-specific CD4(+) T cell responses against important viral and bacterial respiratory pathogens are impaired in HIV-infected adults. This might contribute to the susceptibility of HIV-infected adults to lower respiratory tract infections such as pneumonia and tuberculosis.
CONCLUSIONS
[ "Adult", "Antigens, Bacterial", "Antigens, Viral", "Bronchoalveolar Lavage Fluid", "CD4-Positive T-Lymphocytes", "Cytokines", "Female", "HIV Infections", "HIV-1", "Humans", "Immunologic Memory", "Immunophenotyping", "Male", "Middle Aged", "Mycobacterium tuberculosis", "Orthomyxoviridae", "Pneumonia, Staphylococcal", "T-Lymphocyte Subsets", "Young Adult" ]
3088469
Introduction
Respiratory infections are a leading cause of death in lower income countries, accounting for approximately three million deaths a year.1 Infants, the elderly and immunocompromised individuals are particularly susceptible to these infections. In particular, HIV-infected individuals are 30 times more likely than uninfected adults to suffer from bacterial pneumonia or active tuberculosis.2 3 Southern Africa is the region with the world's highest burden of HIV infection, with over nine countries, including Malawi, having an estimated HIV prevalence that is greater than 10%.4 Defence against respiratory infection involves mucosal and systemic immunity.5 Antigen-specific CD4+ T cells are important as they protect against respiratory infections.6–8 Cytokines secreted by CD4+ T cells, such as IFN-γ, TNF-α, IL-2, IL-17 and IL-22,9–12 are critical to the activation of macrophages9 and the recruitment of neutrophils,10 and enhance the magnitude and quality of CD8+ T cell responses.12 Immune protection against common viral and bacterial respiratory pathogens depends on the integrity of these effector responses. There is limited information about the phenotype and function of these CD4+ T cells within the human lung. Studies of the human lung suggest that mechanisms of local intrapulmonary immunity may differ from those mediating systemic immunity. Influenza virus antigen-specific memory CD4+ T cells from lung tissue were present at higher frequencies and produced more IFN-γ than those from peripheral blood in patients undergoing lobectomy for a localised solitary peripheral lung carcinoma who had no symptoms of upper respiratory infection.13 In patients with tuberculosis, IFN-γ and TNF-α responses to Purified Protein Derivative (PPD) were stronger by CD4+ T cells from BAL fluid than by CD4+ T cells from peripheral blood.14 The decline in immunity caused by HIV is not equally distributed among immunological sites. In particular, the depletion of CD4+ T cells primarily occurs at sites of ‘persistent inflammation’ such as the mucosa, which may leave individuals vulnerable to acute infections. Brenchley et al showed that there was a rapid depletion of mucosal gut T cells during early HIV infection while pulmonary CD4+ T cell depletion was less acute.15 Clinical evidence indicates that there is a high burden of pneumococcal pneumonia and tuberculosis early on in HIV infection when peripheral CD4+ T cell counts are relatively stable.16 17 We hypothesised that HIV infection preferentially depletes antigen-specific T cells against common respiratory pathogens within the lung compartment, which predisposes individuals to respiratory infections. The authors have compared baseline T cell phenotypes and antigen-specific CD4+ T cells in BAL and peripheral blood, between HIV-infected individuals and HIV-uninfected adults. The aim of the authors was to compare T cell phenotypes in BAL and peripheral blood between the two groups of subjects; to assess antigen-specific CD4+ T cell responses to common respiratory antigens; and to investigate whether HIV infection differentially affects the lung and peripheral blood compartments.
Methods
[SUBTITLE] Participants [SUBSECTION] Adult volunteers with no recent history of severe respiratory disease and a normal chest x-ray were recruited by advertisement at the Queen Elizabeth Central Hospital, Blantyre, Malawi. All participants gave written informed consent to HIV testing, venesection and bronchoscopy. The authors enrolled HIV-uninfected adults and asymptomatic anti-retroviral therapy naive HIV-infected individuals (WHO stage 1) into the study. The exclusion criteria were: the presence of other immunocompromising illnesses such as diabetes and cancer, the use of immunosuppressive drugs, cigarette smoking, moderate or severe anaemia (Hb<8 g/dl), and known or possible pregnancy. This study complies with local institutional guidelines and was approved by the College of Medicine Research Ethics Committee of the University of Malawi (COMREC P.01/09/717) and the Liverpool School of Tropical Medicine Research Ethics Committee (LSTM REC 08.61). [SUBTITLE] Sample collection and processing [SUBSECTION] Peripheral blood samples were collected from all subjects. Peripheral blood mononuclear cells (PBMCs) were isolated from blood by density centrifugation using Lymphoprep (Axis-Shield, Norway) according to the manufacturer's instructions. Bronchoscopy and BAL collection were carried out as previously described.18 The BAL samples were filtered and spun to obtain a cell pellet. The cells were counted and re-suspended in complete cell culture media (RPMI containing L-glutamine, penicillin/streptomycin and HEPES (all from Sigma-Aldrich, UK) with 2% (vol/vol) heat-inactivated human AB serum (National Blood Services, Blantyre)).
[SUBTITLE] Phenotyping of T cell subsets [SUBSECTION] PBMCs and BAL cells were stained with fluorochrome-conjugated antibodies when cell numbers were sufficient. Anti-CD3 fluorescein isothiocyanate (FITC), anti-CD4 Pacific blue, anti-CD8 allophycocyanin-H7 (APC-H7), anti-CD45RA phycoerythrin (PE) and anti-CCR7 allophycocyanin (APC) antibodies (all from BD Bioscience, UK) were used to characterise naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector T cells (CD45RA+CCR7−).19 Anti-CD4 Pacific blue, anti-CD25 FITC (both BD Bioscience, UK) and anti-FoxP3 PE (eBioscience, UK) antibodies were used to characterise regulatory T cells (CD4+CD25hiFoxP3+).20 The samples were acquired on a CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA) and analysed using FlowJo (TreeStar, USA).
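As an illustration only, the following minimal Python sketch shows how the CD45RA/CCR7 definitions above translate into a per-cell classification rule and per-sample subset proportions. The gating in this study was performed in FlowJo; the column names, Boolean thresholds and toy events below are hypothetical and are not part of the authors' pipeline.

# Minimal sketch: classify gated CD4+ (or CD8+) events into naive/memory subsets
# from CD45RA and CCR7 positivity, mirroring the definitions given above.
# Column names and the toy events are hypothetical; actual gating used FlowJo.
import pandas as pd

def classify_subset(cd45ra_pos: bool, ccr7_pos: bool) -> str:
    """Return the memory subset implied by CD45RA/CCR7 co-expression."""
    if cd45ra_pos and ccr7_pos:
        return "naive"             # CD45RA+CCR7+
    if not cd45ra_pos and ccr7_pos:
        return "central memory"    # CD45RA-CCR7+
    if not cd45ra_pos and not ccr7_pos:
        return "effector memory"   # CD45RA-CCR7-
    return "terminal effector"     # CD45RA+CCR7-

# Toy per-cell table of gated events with marker positivity already thresholded.
cells = pd.DataFrame({
    "CD45RA_pos": [True, False, False, True, False],
    "CCR7_pos":   [True, True, False, False, False],
})
cells["subset"] = [classify_subset(a, c) for a, c in zip(cells["CD45RA_pos"], cells["CCR7_pos"])]

# Per-sample subset proportions, analogous to the percentages reported in the Results.
print((cells["subset"].value_counts(normalize=True) * 100).round(1))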
[SUBTITLE] Intracellular cytokine staining [SUBSECTION] PBMCs and BAL cells re-suspended in complete cell culture media were cultured in a volume of 200 μl in a 96-well plate and stimulated with influenza vaccine (0.45 μg/ml), pneumococcal cell culture supernatant (8 μg/ml) or Purified Protein Derivative (PPD, 10 μg/ml) for 2 h. Brefeldin A (1 μl) (BD Bioscience, UK) was added at 2 h and the cells were cultured for a further 16 h. Cells were harvested and stained with Violet Viability dye (LIVE/DEAD® Fixable Dead Cell Stain kit, Invitrogen, UK) as per the manufacturer's instructions. Cells were then surface stained with anti-CD4 FITC and anti-CD8 PerCP (both BD Bioscience, UK). Next, cells were permeabilised and fixed using Cytofix/Cytoperm (BD Bioscience, UK) as per the manufacturer's instructions. The cells were then stained with anti-interferon-gamma (IFN-γ) APC and anti-tumour necrosis factor-alpha (TNF-α) Alexa Fluor 488 antibodies (both BD Bioscience, UK) to detect intracellular cytokines. Lastly, cells were washed with 1× Perm Wash (BD Bioscience, UK), re-suspended in FACS flow and acquired on a CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA). Flow cytometry analysis was done using FlowJo (TreeStar, USA). [SUBTITLE] Statistical analysis [SUBSECTION] Statistical analyses and graphical presentation were carried out using GraphPad Prism 5 (GraphPad Software, USA). Student t tests were used for the volunteer demographic data, with the exception of gender, where a χ2 test was used instead. For the experimental data, the Mann–Whitney U test was used for unpaired data and the Wilcoxon signed-rank test for paired data. Results are given as means with ranges or medians with IQRs. Differences were considered statistically significant if p values were less than 0.05.
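For readers who wish to see the comparisons in code form, the sketch below reproduces the two test choices described above (Mann–Whitney U for unpaired group comparisons, Wilcoxon signed-rank for paired BAL versus blood comparisons) using scipy; the input values are invented placeholders rather than study data.

# Hedged sketch of the statistical comparisons described above, using scipy.
# The numbers below are invented placeholders, not study data.
from scipy.stats import mannwhitneyu, wilcoxon

# Unpaired comparison (e.g. HIV-infected vs HIV-uninfected): Mann-Whitney U test.
hiv_infected   = [3.1, 2.8, 4.0, 3.5, 2.9]
hiv_uninfected = [1.4, 1.6, 1.2, 1.9, 1.5]
u_stat, p_unpaired = mannwhitneyu(hiv_infected, hiv_uninfected, alternative="two-sided")

# Paired comparison (e.g. BAL vs peripheral blood from the same subjects):
# Wilcoxon signed-rank test on the paired values.
bal   = [0.51, 0.62, 0.48, 0.70, 0.55]
blood = [0.13, 0.20, 0.11, 0.25, 0.09]
w_stat, p_paired = wilcoxon(bal, blood)

print(f"Mann-Whitney U: p={p_unpaired:.4f}; Wilcoxon signed-rank: p={p_paired:.4f}")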
Results
[SUBTITLE] Demographic characteristics of study population [SUBSECTION] Basic demographics are shown in table 1. HIV-uninfected Malawian adults (n=24, 11 female; mean age 38 years) and asymptomatic HIV-infected adults (n=21, 11 female; mean age 40 years) participated in the study. The mean CD4 count for HIV-infected individuals was 375 cells/μl. All participants were asymptomatic and had no recent history of respiratory infection or tuberculosis. The mean BAL cell concentration was comparable between HIV-infected individuals and HIV-uninfected adults (16.2×10⁶ cells/100 ml vs 20.5×10⁶ cells/100 ml, respectively; p=0.1134), but the proportion of lymphocytes in the BAL cells was higher in HIV-infected individuals than in HIV-uninfected adults (17.8% vs 9.0%; p=0.0106). Table 1: Characteristics of subjects enrolled in the study. BAL, bronchoalveolar lavage; BALF, BAL fluid; N/A, not applicable. [SUBTITLE] Proportions of naive, memory and regulatory T cells in BAL and peripheral blood [SUBSECTION] [SUBTITLE] CD4 and CD8 T cells [SUBSECTION] There was a lower proportion of CD4+ T cells in the total CD3+ T cell population in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 35.0% vs 65.5%, p<0.001) and peripheral blood (median 38.5% vs 61.2%, p<0.001). The proportion of CD8+ T cells in the total CD3+ T cell population was higher in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 59.7% vs 34.5%, p<0.05) and peripheral blood (median 61.5% vs 38.8%, p<0.05) (figure 1A,B). Figure 1: Lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome-conjugated antibodies. (A) The data shows a lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using the Mann–Whitney U test. Black horizontal bars represent medians and IQRs.
[SUBTITLE] Naive and memory T cells [SUBSECTION] To examine whether the proportions of T cell subsets were similar between compartments and whether they were altered by HIV infection, the authors measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells, as described in the materials and methods section. BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%), whereas peripheral blood CD4+ and CD8+ T cells were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+CD45RA−CCR7−, median 37%; CD8+CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1%, p=0.03; CD8, median 96.3% vs 90.3%, p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42%, p=0.003; CD8, median 0.44% vs 1.88%, p=0.007) (figure 2A,B). In peripheral blood, there was no difference in CD4+ T cell subsets between the HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D). Figure 2: The proportions of naive and memory T cell subsets are different between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome-conjugated antibodies. The proportions of naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) cells were defined. (A, B, C, D) The data shows that BAL T cells (upper) were predominantly of effector memory phenotype, whereas peripheral blood T cells (lower) were distributed among naive, central memory, effector memory and terminal effector phenotypes. (A, B) The data shows a higher proportion of effector memory and a lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) The data shows no difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals and HIV-uninfected adults (left), but a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector T cells in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent medians and IQRs. Statistical significance was analysed by the Mann–Whitney U test. A p value <0.05 was used to determine statistical significance.
[SUBTITLE] Regulatory T cells (Tregs) [SUBSECTION] We investigated the hypothesis that in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+, as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL compared to peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in HIV-uninfected adults (median 3% vs 1.5%, p<0.001). However, in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). Absolute counts of Tregs in peripheral blood, however, were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C). Figure 3: Higher frequency of regulatory T cells in BAL compared to peripheral blood, but altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-FoxP3 PE fluorochrome-conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3+. (A) A representative flow cytometry dot plot showing Tregs in BAL and peripheral blood from a healthy control. (B) The data shows a higher frequency of Tregs in BAL than in peripheral blood in HIV-uninfected adults. It also shows a higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows no difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent medians and IQRs. Statistical significance was analysed by the Mann–Whitney U test for the HIV-uninfected versus HIV-infected comparison, and the Wilcoxon signed-rank test for the BAL versus peripheral blood comparison. A p value <0.05 was used to determine statistical significance.
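To make the CD4+CD25hiFoxP3+ definition above concrete, here is a small illustrative sketch of how such a gate could be expressed on a per-cell intensity table; the column names, cut-offs and simulated intensities are hypothetical, and the actual analysis was done in FlowJo.

# Illustrative sketch of a CD4+CD25hi FoxP3+ regulatory T cell gate.
# Column names, cut-offs and simulated intensities are hypothetical examples.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
cd4_cells = pd.DataFrame({
    "CD25":  rng.lognormal(mean=1.0, sigma=0.8, size=1000),  # toy fluorescence intensities
    "FoxP3": rng.lognormal(mean=0.5, sigma=0.9, size=1000),
})

cd25_hi_cutoff   = np.percentile(cd4_cells["CD25"], 95)  # hypothetical 'CD25hi' gate
foxp3_pos_cutoff = 2.0                                   # hypothetical FoxP3 positivity threshold

is_treg = (cd4_cells["CD25"] >= cd25_hi_cutoff) & (cd4_cells["FoxP3"] >= foxp3_pos_cutoff)
print(f"Tregs as a percentage of CD4+ T cells: {100 * is_treg.mean():.2f}%")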
[SUBTITLE] Antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood [SUBSECTION] To investigate respiratory pathogen antigen-specific T cell responses in BAL and peripheral blood, we measured the quality and magnitude of the CD4+ T cell response to influenza virus, S pneumoniae and M tuberculosis antigens using intracellular cytokine staining. We detected antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood (figure 4).
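The antigen-specific percentage frequencies reported in the following subsections are cytokine-positive fractions of CD4+ T cells in stimulated wells with the unstimulated background subtracted. A minimal worked example of that arithmetic, using invented event counts, is sketched below.

# Sketch of the background-subtraction arithmetic behind the antigen-specific
# CD4+ T cell frequencies reported below. Event counts are invented examples.
def antigen_specific_freq(stim_cytokine_pos, stim_cd4_total,
                          unstim_cytokine_pos, unstim_cd4_total):
    """Percentage of cytokine+ CD4+ T cells after subtracting the unstimulated background."""
    stimulated = 100.0 * stim_cytokine_pos / stim_cd4_total
    background = 100.0 * unstim_cytokine_pos / unstim_cd4_total
    return max(stimulated - background, 0.0)  # negative differences are clipped to zero here

# e.g. a PPD-stimulated BAL sample versus its unstimulated control well
freq = antigen_specific_freq(stim_cytokine_pos=580, stim_cd4_total=10_000,
                             unstim_cytokine_pos=30, unstim_cd4_total=10_000)
print(f"Background-subtracted antigen-specific frequency: {freq:.2f}%")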
Figure 4: Representative flow cytometry dot plots from an HIV-uninfected adult showing multiple subsets of antigen-specific CD4+ T cells in BAL and peripheral blood. BAL and peripheral blood lymphocytes were stimulated with antigens and T cell responses were measured by intracellular cytokine staining. The dot plots show interferon-γ (IFN-γ) and tumour necrosis factor-α (TNF-α) responses in BAL (top) and peripheral blood (bottom) cells, in an unstimulated negative control and in cells stimulated with influenza virus, S pneumoniae and M tuberculosis antigens. [SUBTITLE] Differences between BAL and peripheral blood compartments [SUBSECTION] There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportions of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing IFN-γ alone, TNF-α alone or both cytokines also differed between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses. Figure 5: Lower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent medians and IQRs after background responses were subtracted from all the antigen-specific CD4+ T cell responses. Statistical significance was analysed by the Mann–Whitney U test for the HIV-uninfected versus HIV-infected comparison, and the Wilcoxon signed-rank test for the BAL versus peripheral blood comparison. A p value <0.05 was used to determine statistical significance. Figure 6: Lower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportions of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population were defined. (A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than in HIV-uninfected adults (left). It also shows that the proportions of the subsets of antigen-specific CD4+ T cells against influenza virus, including IFN-γ single producers (blue), TNF-α single producers (red) and IFN-γ/TNF-α double producers (green), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportions of the subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than in HIV-uninfected adults (left). It also shows that the proportions of the subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers and double producers) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.
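Figure 6 partitions the responding cells into IFN-γ single producers, TNF-α single producers and IFN-γ/TNF-α double producers; the short sketch below shows one way those proportions could be derived from Boolean cytokine gates. The event counts are illustrative only and do not correspond to the study data.

# Sketch: proportions of single and double cytokine producers within the
# antigen-specific CD4+ T cell population. Event counts are illustrative only.
def producer_proportions(ifng_only, tnfa_only, double_pos):
    """Express each Boolean cytokine subset as a percentage of all responding cells."""
    total = ifng_only + tnfa_only + double_pos
    return {
        "IFN-gamma single producers": 100.0 * ifng_only / total,
        "TNF-alpha single producers": 100.0 * tnfa_only / total,
        "IFN-gamma/TNF-alpha double producers (polyfunctional)": 100.0 * double_pos / total,
    }

for label, pct in producer_proportions(ifng_only=120, tnfa_only=90, double_pos=310).items():
    print(f"{label}: {pct:.1f}%")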
[SUBTITLE] Differences between HIV-infected and HIV-uninfected adults [SUBSECTION] The percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens was lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood, where the percentage frequencies were comparable (influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19%, p>0.05) (figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae was similar in HIV-infected individuals compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected individuals compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses. Further, in both BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).
[ "Participants", "Sample collection and processing", "Phenotyping of T cell subsets", "Intracellular cytokine staining", "Statistical analysis", "Demographic characteristics of study population", "Proportions of naive, memory and regulatory T cells in BAL and peripheral blood", "CD4 and CD8 T cells", "Naive and memory T cells", "Regulatory T cells (Tregs)", "Antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood", "Differences between BAL and peripheral blood compartments", "Differences between HIV-infected and HIV-uninfected adults" ]
[ "Adult volunteers with no recent history of severe respiratory diseases and a normal chest x-ray were recruited by advertisement in the Queen Elizabeth Central Hospital, Blantyre, Malawi. All participants gave written-informed-consent to HIV testing, venesection and bronchoscopy. The authors enrolled HIV-uninfected adults and asymptomatic anti-retroviral therapy naive HIV-infected individuals (WHO stage 1) into the study. The exclusion criteria for the study were as follows: the presence of other immunocompromising illnesses such as diabetes and cancer, the use of immunosuppressive drugs, cigarette smoking, moderate or severe anaemia (HB<8 g/dl), and known or possible pregnancy. This study complies with local institutional guidelines and was approved by the College of Medicine Research Ethics Committee of the University of Malawi (COMREC P.01/09/717) and the Liverpool School of Tropical Medicine Research Ethics Committee (LSTM REC 08.61).", "Peripheral blood samples were collected on all subjects. Peripheral blood mononuclear cells (PBMCs) were isolated from blood by density centrifugation using Lymphoprep (Axis-shield, Norway) according to the manufacturer's instructions. Bronchoscopy and BAL collection was carried out as previously described.18 The BAL samples were filtered and spun to obtain a cell pellet. The cells were counted and re-suspended in complete cell culture media (containing RPMI, L-glutamine, penicillin/streptomycin and HEPES (all from Sigma-Aldrich, UK) with 2% (vol/vol) heat-inactivated human AB serum (National Blood Services, Blantyre)).", "PBMCs and BAL cells were stained with fluorochrome conjugate antibodies when cell numbers were sufficient. Anti-CD3 fluorescein isothiocyanate (FITC), anti-CD4 Pacific blue, anti-CD8 allophycocyanin-H7 (APC-H7), anti-CD45RA phycoerythrin (PE), and anti-CCR7 allophycocyanin (APC) (all antibodies from BD Bioscience, UK) were used to characterise: naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector T cells (CD45RA+CCR7−).19 Anti-CD4 pacific blue, anti-CD25 FITC (all antibodies from BD Bioscience, UK) and anti-FoxP3 PE (eBioscience, UK) were used to characterise regulatory T cells (CD4+CD25hiFoxP3+).20 The samples were acquired on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA) and analysed using FlowJo (TreeStar, USA).", "PBMCs and BAL cells re-suspended in complete cell culture media were cultured in a volume of 200 μl in a 96 well plate and stimulated with influenza vaccine (0.45 μg/ml), pneumococcal cell culture supernatant (8 μg/ml) or Purified Protein Derivative (PPD, 10 μg/ml) for 2 h. Brefeldin A (1 μl) (BD Bioscience, UK) was added at 2 h and the cells were cultured for a further 16 h. Cells were harvested and stained with Violet Viability dye (LIVE/DEAD® Fixable Dead Cell Stain kit, Invitrogen, UK) as per manufacturer's instructions. Cells were then surface stained with anti-CD4 FITC and CD8 PerCP (all BD Bioscience, UK). Next, cells were permeabilised and fixed using Cytofix/Cytoperm (BD Bioscience, UK) as per manufacturer's instructions. The cells were then stained with anti-interferon-gamma (IFN-γ) APC and anti-tumour necrosis factor-alpha (TNF-α) Alexa flour 488 antibodies (all BD Bioscience, UK) to detect intracellular cytokines. Lastly, cells were washed with 1x Perm Wash (BD Bioscience, UK), re-suspended in FACS flow and acquired on a flow on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA). 
Flow cytometry analysis was done using FlowJo (TreeStar, USA).", "Statistical analyses and graphical presentation were carried out using Graphpad Prism 5 (GraphPad Software, USA). Student t tests were used for the volunteer demographic data with the exception of gender, where a χ2 test was used instead. For the experimental data, Mann–Whitney U test was used for non-paired data and Wilcoxon sign ranked test for paired data. Results are given as mean with ranges or medians with IQRs. Differences were considered statistically significant if p values were less than 0.05.", "Basic demographics are shown in table 1. HIV-uninfected Malawian adults ((n=24, females 11) mean age 38 years)) and asymptomatic HIV-infected adults ((n=21, females 11) mean age 40 years)) participated in the study. The mean CD4 count for HIV-infected individuals was 375 cells/μl. All participants were asymptomatic and had no recent history of respiratory infection or tuberculosis. The mean BAL cell concentration was comparable between HIV-infected individuals and HIV-uninfected adults (16.2×106cells/100 ml vs 20.5×106cells/100 ml respectively; p=0.1134), but the proportion of lymphocytes in the BAL cells was higher in HIV-infected individuals than HIV-uninfected adults (17.8% vs 9.0%; p=0.0106).\nCharacteristics of subjects enrolled in the study\nBAL, bronchoalveolar lavage; BALF, BAL fluid; N/A, not applicable.", "[SUBTITLE] CD4 and CD8 T cells [SUBSECTION] There was a lower proportion of CD4+ T cells in the total CD3+ T cell population in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 35.0% vs 65.5%, p<0.001) and peripheral blood (median 38.5% vs 61.2%, p<0.001). The proportion of CD8+ T cells in the total CD3+ T cell population was higher in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 59.7% vs 34.5%, p<0.05) and peripheral blood (median 61.5% vs 38.8%, p<0.05) (figure 1A).\nLower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome conjugated antibodies. (A) The data shows a lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using Mann–Whitney U test. Black horizontal bars represent median and IQRs.\nThere was a lower proportion of CD4+ T cells in the total CD3+ T cell population in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 35.0% vs 65.5%, p<0.001) and peripheral blood (median 38.5% vs 61.2%, p<0.001). The proportion of CD8+ T cells in the total CD3+ T cell population was higher in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 59.7% vs 34.5%, p<0.05) and peripheral blood (median 61.5% vs 38.8%, p<0.05) (figure 1A).\nLower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome conjugated antibodies. (A) The data shows a lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. 
(B) The data shows a higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using Mann–Whitney U test. Black horizontal bars represent median and IQRs.\n[SUBTITLE] Naive and memory T cells [SUBSECTION] To examine whether the proportion of T cell subsets was similar between compartments and whether they were altered by HIV infection, the authors measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells—as described in the materials and methods section. BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%) compared to peripheral blood CD4+ and CD8+ T cells which were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+ CD45RA−CCR7−, median 37%; CD8+ CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+ CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1% p=0.03; CD8, median 96.3% vs 90.3% p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42% p=0.003; CD8, median 0.44% vs 1.88% p=0.007) (figure 2A,B). In peripheral blood, there was no difference in peripheral blood CD4+ T cell subsets between HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D).\nThe proportions of naive and memory T cell subsets are different between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome conjugated antibodies. The proportion of naïve (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) were defined. (A, B, C, D) The data shows that BAL T cells (upper) were predominantly of effector memory phenotype compared to peripheral blood (lower), in which T cells were distributed among naive, central memory, effector memory and terminal effector phenotypes. (A, B) The data shows a higher proportion of effector memory and lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) The data shows no difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals compared to HIV-uninfected adults (left), but there was a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test. p value <0.05 was used to determine statistical significance.\nTo examine whether the proportion of T cell subsets was similar between compartments and whether they were altered by HIV infection, the authors measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells—as described in the materials and methods section. 
BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%) compared to peripheral blood CD4+ and CD8+ T cells which were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+ CD45RA−CCR7−, median 37%; CD8+ CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+ CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1% p=0.03; CD8, median 96.3% vs 90.3% p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42% p=0.003; CD8, median 0.44% vs 1.88% p=0.007) (figure 2A,B). In peripheral blood, there was no difference in peripheral blood CD4+ T cell subsets between HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D).\nThe proportions of naive and memory T cell subsets are different between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome conjugated antibodies. The proportion of naïve (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) were defined. (A, B, C, D) The data shows that BAL T cells (upper) were predominantly of effector memory phenotype compared to peripheral blood (lower), in which T cells were distributed among naive, central memory, effector memory and terminal effector phenotypes. (A, B) The data shows a higher proportion of effector memory and lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) The data shows no difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals compared to HIV-uninfected adults (left), but there was a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test. p value <0.05 was used to determine statistical significance.\n[SUBTITLE] Regulatory T cells (Tregs) [SUBSECTION] We investigated the hypothesis that in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+ as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL compared to peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in the HIV-uninfected adults (median 3% vs 1.5%, p<0.001). However, in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). 
Absolute counts, however, of Tregs in peripheral blood were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C).\nHigher frequency of regulatory T cells in BAL compared to peripheral blood, but altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-Foxp3 PE fluorochrome conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3+. (A) A flow cytometry representative dot plot showing Tregs in BAL and peripheral blood from a healthy control. (B) The data shows a higher frequency of Tregs in BAL than peripheral blood in HIV-uninfected adults. It also shows a higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows no difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nWe investigated the hypothesis that in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+ as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL compared to peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in the HIV-uninfected adults (median 3% vs 1.5%, p<0.001). However, in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). Absolute counts, however, of Tregs in peripheral blood were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C).\nHigher frequency of regulatory T cells in BAL compared to peripheral blood, but altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-Foxp3 PE fluorochrome conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3+. (A) A flow cytometry representative dot plot showing Tregs in BAL and peripheral blood from a healthy control. (B) The data shows a higher frequency of Tregs in BAL than peripheral blood in HIV-uninfected adults. It also shows a higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows no difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. 
p value <0.05 was used to determine statistical significance.", "There was a lower proportion of CD4+ T cells in the total CD3+ T cell population in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 35.0% vs 65.5%, p<0.001) and peripheral blood (median 38.5% vs 61.2%, p<0.001). The proportion of CD8+ T cells in the total CD3+ T cell population was higher in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 59.7% vs 34.5%, p<0.05) and peripheral blood (median 61.5% vs 38.8%, p<0.05) (figure 1A).\nLower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome conjugated antibodies. (A) The data shows a lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using Mann–Whitney U test. Black horizontal bars represent median and IQRs.", "To examine whether the proportion of T cell subsets was similar between compartments and whether they were altered by HIV infection, the authors measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells—as described in the materials and methods section. BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%) compared to peripheral blood CD4+ and CD8+ T cells which were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+ CD45RA−CCR7−, median 37%; CD8+ CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+ CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1% p=0.03; CD8, median 96.3% vs 90.3% p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42% p=0.003; CD8, median 0.44% vs 1.88% p=0.007) (figure 2A,B). In peripheral blood, there was no difference in peripheral blood CD4+ T cell subsets between HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D).\nThe proportions of naive and memory T cell subsets are different between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome conjugated antibodies. The proportion of naïve (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) were defined. (A, B, C, D) The data shows that BAL T cells (upper) were predominantly of effector memory phenotype compared to peripheral blood (lower), in which T cells were distributed among naive, central memory, effector memory and terminal effector phenotypes. 
(A, B) The data shows a higher proportion of effector memory and lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) The data shows no difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals compared to HIV-uninfected adults (left), but there was a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test. p value <0.05 was used to determine statistical significance.", "We investigated the hypothesis that in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+ as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL compared to peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in the HIV-uninfected adults (median 3% vs 1.5%, p<0.001). However, in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). Absolute counts, however, of Tregs in peripheral blood were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C).\nHigher frequency of regulatory T cells in BAL compared to peripheral blood, but altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-Foxp3 PE fluorochrome conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3+. (A) A flow cytometry representative dot plot showing Tregs in BAL and peripheral blood from a healthy control. (B) The data shows a higher frequency of Tregs in BAL than peripheral blood in HIV-uninfected adults. It also shows a higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows no difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.", "To investigate respiratory pathogen antigen-specific T cell responses in BAL and peripheral blood, we measured the quality and magnitude of the CD4+ T cell response to influenza virus, S pneumoniae and M tuberculosis antigens using intracellular cytokine staining. We detected antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood (figure 4).\nRepresentative flow cytometry dot flow from an HIV-uninfected adult showing multiple subsets of antigen-specific CD4+ T cells in BAL and peripheral blood. BAL and peripheral blood lymphocytes were stimulated with antigens and T cell responses were measured by intracellular cytokine staining. 
Representative flow cytometry dot plots from an HIV-uninfected adult showing interferon-γ (IFN-γ) and TNF-alpha (TNF-α) responses in BAL (top) and peripheral blood (bottom) cells, in an unstimulated negative control and cells stimulated with influenza virus, S. pneumoniae and M tuberculosis antigens.\n[SUBTITLE] Differences between BAL and peripheral blood compartments [SUBSECTION] There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. (A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). 
It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.\nThere was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. 
p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. (A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.\n[SUBTITLE] Differences between HIV-infected and HIV-uninfected adults [SUBSECTION] The percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).\nThe percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). 
However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).", "There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. 
BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. (A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.", "The percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C)." ]
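The single-producer versus double-producer ("polyfunctional") proportions reported above can, in principle, be derived from per-cell Boolean cytokine gates exported from the flow cytometry software. The sketch below is a hypothetical illustration of that step only; it is not the original FlowJo-based workflow, and the gate arrays and counts are invented for the example.

```python
# Hypothetical sketch: proportion of IFN-g-only, TNF-a-only and double-producing
# cells among antigen-specific (cytokine-positive) CD4+ T cells.
import numpy as np

def cytokine_subsets(ifng_pos, tnfa_pos):
    """Inputs are per-cell Boolean gate calls for the responding CD4+ T cells;
    returns the fraction of responders in each cytokine subset."""
    ifng_pos = np.asarray(ifng_pos, dtype=bool)
    tnfa_pos = np.asarray(tnfa_pos, dtype=bool)
    responders = (ifng_pos | tnfa_pos).sum()
    if responders == 0:
        return {"IFNg_only": 0.0, "TNFa_only": 0.0, "double": 0.0}
    return {
        "IFNg_only": float((ifng_pos & ~tnfa_pos).sum() / responders),
        "TNFa_only": float((~ifng_pos & tnfa_pos).sum() / responders),
        "double":    float((ifng_pos & tnfa_pos).sum() / responders),
    }

# Invented example: 10 responding cells, 6 of which secrete both cytokines.
ifng = [1, 1, 1, 1, 1, 1, 1, 0, 0, 1]
tnfa = [1, 1, 1, 1, 1, 1, 0, 1, 1, 0]
print(cytokine_subsets(ifng, tnfa))  # {'IFNg_only': 0.2, 'TNFa_only': 0.2, 'double': 0.6}
```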
[ "Introduction", "Methods", "Participants", "Sample collection and processing", "Phenotyping of T cell subsets", "Intracellular cytokine staining", "Statistical analysis", "Results", "Demographic characteristics of study population", "Proportions of naive, memory and regulatory T cells in BAL and peripheral blood", "CD4 and CD8 T cells", "Naive and memory T cells", "Regulatory T cells (Tregs)", "Antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood", "Differences between BAL and peripheral blood compartments", "Differences between HIV-infected and HIV-uninfected adults", "Discussion" ]
[ "Respiratory infections are a leading cause of death in lower income countries, accounting for approximately three million deaths a year.1 Infants, the elderly and immunocompromised individuals are particularly susceptible to these infections. In particular, HIV-infected individuals are 30 times more likely than uninfected adults to suffer from bacterial pneumonia or active tuberculosis.2 3 Southern Africa is the region with the world's highest burden of HIV infection, with over nine countries, including Malawi, having an estimated HIV prevalence that is greater than 10%.4\nDefence against respiratory infection involves mucosal and systemic immunity.5 Antigen-specific CD4+ T cells are important as they protect against respiratory infections.6–8 Cytokines secreted by CD4+ T cells, such as IFN-γ, TNF-α, IL-2, IL-17 and IL-22,9–12 are critical to the activation of macrophages9 and the recruitment of neutrophils,10 and enhance the magnitude and quality of CD8+ T cell responses.12 Immune protection against common viral and bacterial respiratory pathogens depends on the integrity of these effector responses. There is limited information about the phenotype and function of these CD4+ T cells within the human lung.\nStudies of the human lung suggest that mechanisms of local intrapulmonary immunity may differ from those mediating systemic immunity. Influenza virus antigen-specific memory CD4+ T cells from lung tissue were present at higher frequencies and produced more IFN-γ than those from peripheral blood in patients undergoing lobectomy for a localised solitary peripheral lung carcinoma who had no symptoms of upper respiratory infection.13 In patients with tuberculosis, IFN-γ and TNF-α responses to Purified Protein Derivative (PPD) were stronger by CD4+ T cells from BAL fluid than by CD4+ T cells from peripheral blood.14\nThe decline in immunity caused by HIV is not equally distributed among immunological sites. In particular, the depletion of CD4+ T cells primarily occurs at sites of ‘persistent inflammation’ such as the mucosa, which may leave individuals vulnerable to acute infections. Brenchley et al showed that there was a rapid depletion of mucosal gut T cells during early HIV infection while pulmonary CD4+ T cell depletion was less acute.15 Clinical evidence indicates that there is a high burden of pneumococcal pneumonia and tuberculosis early on in HIV infection when peripheral CD4+ T cell counts are relatively stable.16 17 We hypothesised that HIV infection preferentially depletes antigen-specific T cells against common respiratory pathogens within the lung compartment, which predisposes individuals to respiratory infections.\nThe authors have compared baseline T cell phenotypes and antigen-specific CD4+ T cells in BAL and peripheral blood, between HIV-infected individuals and HIV-uninfected adults. The aim of the authors was to compare T cell phenotypes in BAL and peripheral blood between the two groups of subjects; to assess antigen-specific CD4+ T cell responses to common respiratory antigens; and to investigate whether HIV infection differentially affects the lung and peripheral blood compartments.", "[SUBTITLE] Participants [SUBSECTION] Adult volunteers with no recent history of severe respiratory diseases and a normal chest x-ray were recruited by advertisement in the Queen Elizabeth Central Hospital, Blantyre, Malawi. All participants gave written-informed-consent to HIV testing, venesection and bronchoscopy. 
The authors enrolled HIV-uninfected adults and asymptomatic anti-retroviral therapy naive HIV-infected individuals (WHO stage 1) into the study. The exclusion criteria for the study were as follows: the presence of other immunocompromising illnesses such as diabetes and cancer, the use of immunosuppressive drugs, cigarette smoking, moderate or severe anaemia (HB<8 g/dl), and known or possible pregnancy. This study complies with local institutional guidelines and was approved by the College of Medicine Research Ethics Committee of the University of Malawi (COMREC P.01/09/717) and the Liverpool School of Tropical Medicine Research Ethics Committee (LSTM REC 08.61).\nAdult volunteers with no recent history of severe respiratory diseases and a normal chest x-ray were recruited by advertisement in the Queen Elizabeth Central Hospital, Blantyre, Malawi. All participants gave written-informed-consent to HIV testing, venesection and bronchoscopy. The authors enrolled HIV-uninfected adults and asymptomatic anti-retroviral therapy naive HIV-infected individuals (WHO stage 1) into the study. The exclusion criteria for the study were as follows: the presence of other immunocompromising illnesses such as diabetes and cancer, the use of immunosuppressive drugs, cigarette smoking, moderate or severe anaemia (HB<8 g/dl), and known or possible pregnancy. This study complies with local institutional guidelines and was approved by the College of Medicine Research Ethics Committee of the University of Malawi (COMREC P.01/09/717) and the Liverpool School of Tropical Medicine Research Ethics Committee (LSTM REC 08.61).\n[SUBTITLE] Sample collection and processing [SUBSECTION] Peripheral blood samples were collected on all subjects. Peripheral blood mononuclear cells (PBMCs) were isolated from blood by density centrifugation using Lymphoprep (Axis-shield, Norway) according to the manufacturer's instructions. Bronchoscopy and BAL collection was carried out as previously described.18 The BAL samples were filtered and spun to obtain a cell pellet. The cells were counted and re-suspended in complete cell culture media (containing RPMI, L-glutamine, penicillin/streptomycin and HEPES (all from Sigma-Aldrich, UK) with 2% (vol/vol) heat-inactivated human AB serum (National Blood Services, Blantyre)).\nPeripheral blood samples were collected on all subjects. Peripheral blood mononuclear cells (PBMCs) were isolated from blood by density centrifugation using Lymphoprep (Axis-shield, Norway) according to the manufacturer's instructions. Bronchoscopy and BAL collection was carried out as previously described.18 The BAL samples were filtered and spun to obtain a cell pellet. The cells were counted and re-suspended in complete cell culture media (containing RPMI, L-glutamine, penicillin/streptomycin and HEPES (all from Sigma-Aldrich, UK) with 2% (vol/vol) heat-inactivated human AB serum (National Blood Services, Blantyre)).\n[SUBTITLE] Phenotyping of T cell subsets [SUBSECTION] PBMCs and BAL cells were stained with fluorochrome conjugate antibodies when cell numbers were sufficient. 
Anti-CD3 fluorescein isothiocyanate (FITC), anti-CD4 Pacific blue, anti-CD8 allophycocyanin-H7 (APC-H7), anti-CD45RA phycoerythrin (PE), and anti-CCR7 allophycocyanin (APC) (all antibodies from BD Bioscience, UK) were used to characterise: naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector T cells (CD45RA+CCR7−).19 Anti-CD4 pacific blue, anti-CD25 FITC (all antibodies from BD Bioscience, UK) and anti-FoxP3 PE (eBioscience, UK) were used to characterise regulatory T cells (CD4+CD25hiFoxP3+).20 The samples were acquired on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA) and analysed using FlowJo (TreeStar, USA).\nPBMCs and BAL cells were stained with fluorochrome conjugate antibodies when cell numbers were sufficient. Anti-CD3 fluorescein isothiocyanate (FITC), anti-CD4 Pacific blue, anti-CD8 allophycocyanin-H7 (APC-H7), anti-CD45RA phycoerythrin (PE), and anti-CCR7 allophycocyanin (APC) (all antibodies from BD Bioscience, UK) were used to characterise: naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector T cells (CD45RA+CCR7−).19 Anti-CD4 pacific blue, anti-CD25 FITC (all antibodies from BD Bioscience, UK) and anti-FoxP3 PE (eBioscience, UK) were used to characterise regulatory T cells (CD4+CD25hiFoxP3+).20 The samples were acquired on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA) and analysed using FlowJo (TreeStar, USA).\n[SUBTITLE] Intracellular cytokine staining [SUBSECTION] PBMCs and BAL cells re-suspended in complete cell culture media were cultured in a volume of 200 μl in a 96 well plate and stimulated with influenza vaccine (0.45 μg/ml), pneumococcal cell culture supernatant (8 μg/ml) or Purified Protein Derivative (PPD, 10 μg/ml) for 2 h. Brefeldin A (1 μl) (BD Bioscience, UK) was added at 2 h and the cells were cultured for a further 16 h. Cells were harvested and stained with Violet Viability dye (LIVE/DEAD® Fixable Dead Cell Stain kit, Invitrogen, UK) as per manufacturer's instructions. Cells were then surface stained with anti-CD4 FITC and CD8 PerCP (all BD Bioscience, UK). Next, cells were permeabilised and fixed using Cytofix/Cytoperm (BD Bioscience, UK) as per manufacturer's instructions. The cells were then stained with anti-interferon-gamma (IFN-γ) APC and anti-tumour necrosis factor-alpha (TNF-α) Alexa flour 488 antibodies (all BD Bioscience, UK) to detect intracellular cytokines. Lastly, cells were washed with 1x Perm Wash (BD Bioscience, UK), re-suspended in FACS flow and acquired on a flow on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA). Flow cytometry analysis was done using FlowJo (TreeStar, USA).\nPBMCs and BAL cells re-suspended in complete cell culture media were cultured in a volume of 200 μl in a 96 well plate and stimulated with influenza vaccine (0.45 μg/ml), pneumococcal cell culture supernatant (8 μg/ml) or Purified Protein Derivative (PPD, 10 μg/ml) for 2 h. Brefeldin A (1 μl) (BD Bioscience, UK) was added at 2 h and the cells were cultured for a further 16 h. Cells were harvested and stained with Violet Viability dye (LIVE/DEAD® Fixable Dead Cell Stain kit, Invitrogen, UK) as per manufacturer's instructions. Cells were then surface stained with anti-CD4 FITC and CD8 PerCP (all BD Bioscience, UK). Next, cells were permeabilised and fixed using Cytofix/Cytoperm (BD Bioscience, UK) as per manufacturer's instructions. 
The cells were then stained with anti-interferon-gamma (IFN-γ) APC and anti-tumour necrosis factor-alpha (TNF-α) Alexa flour 488 antibodies (all BD Bioscience, UK) to detect intracellular cytokines. Lastly, cells were washed with 1x Perm Wash (BD Bioscience, UK), re-suspended in FACS flow and acquired on a flow on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA). Flow cytometry analysis was done using FlowJo (TreeStar, USA).\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analyses and graphical presentation were carried out using Graphpad Prism 5 (GraphPad Software, USA). Student t tests were used for the volunteer demographic data with the exception of gender, where a χ2 test was used instead. For the experimental data, Mann–Whitney U test was used for non-paired data and Wilcoxon sign ranked test for paired data. Results are given as mean with ranges or medians with IQRs. Differences were considered statistically significant if p values were less than 0.05.\nStatistical analyses and graphical presentation were carried out using Graphpad Prism 5 (GraphPad Software, USA). Student t tests were used for the volunteer demographic data with the exception of gender, where a χ2 test was used instead. For the experimental data, Mann–Whitney U test was used for non-paired data and Wilcoxon sign ranked test for paired data. Results are given as mean with ranges or medians with IQRs. Differences were considered statistically significant if p values were less than 0.05.", "Adult volunteers with no recent history of severe respiratory diseases and a normal chest x-ray were recruited by advertisement in the Queen Elizabeth Central Hospital, Blantyre, Malawi. All participants gave written-informed-consent to HIV testing, venesection and bronchoscopy. The authors enrolled HIV-uninfected adults and asymptomatic anti-retroviral therapy naive HIV-infected individuals (WHO stage 1) into the study. The exclusion criteria for the study were as follows: the presence of other immunocompromising illnesses such as diabetes and cancer, the use of immunosuppressive drugs, cigarette smoking, moderate or severe anaemia (HB<8 g/dl), and known or possible pregnancy. This study complies with local institutional guidelines and was approved by the College of Medicine Research Ethics Committee of the University of Malawi (COMREC P.01/09/717) and the Liverpool School of Tropical Medicine Research Ethics Committee (LSTM REC 08.61).", "Peripheral blood samples were collected on all subjects. Peripheral blood mononuclear cells (PBMCs) were isolated from blood by density centrifugation using Lymphoprep (Axis-shield, Norway) according to the manufacturer's instructions. Bronchoscopy and BAL collection was carried out as previously described.18 The BAL samples were filtered and spun to obtain a cell pellet. The cells were counted and re-suspended in complete cell culture media (containing RPMI, L-glutamine, penicillin/streptomycin and HEPES (all from Sigma-Aldrich, UK) with 2% (vol/vol) heat-inactivated human AB serum (National Blood Services, Blantyre)).", "PBMCs and BAL cells were stained with fluorochrome conjugate antibodies when cell numbers were sufficient. 
Anti-CD3 fluorescein isothiocyanate (FITC), anti-CD4 Pacific blue, anti-CD8 allophycocyanin-H7 (APC-H7), anti-CD45RA phycoerythrin (PE), and anti-CCR7 allophycocyanin (APC) (all antibodies from BD Bioscience, UK) were used to characterise: naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector T cells (CD45RA+CCR7−).19 Anti-CD4 pacific blue, anti-CD25 FITC (all antibodies from BD Bioscience, UK) and anti-FoxP3 PE (eBioscience, UK) were used to characterise regulatory T cells (CD4+CD25hiFoxP3+).20 The samples were acquired on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA) and analysed using FlowJo (TreeStar, USA).", "PBMCs and BAL cells re-suspended in complete cell culture media were cultured in a volume of 200 μl in a 96 well plate and stimulated with influenza vaccine (0.45 μg/ml), pneumococcal cell culture supernatant (8 μg/ml) or Purified Protein Derivative (PPD, 10 μg/ml) for 2 h. Brefeldin A (1 μl) (BD Bioscience, UK) was added at 2 h and the cells were cultured for a further 16 h. Cells were harvested and stained with Violet Viability dye (LIVE/DEAD® Fixable Dead Cell Stain kit, Invitrogen, UK) as per manufacturer's instructions. Cells were then surface stained with anti-CD4 FITC and CD8 PerCP (all BD Bioscience, UK). Next, cells were permeabilised and fixed using Cytofix/Cytoperm (BD Bioscience, UK) as per manufacturer's instructions. The cells were then stained with anti-interferon-gamma (IFN-γ) APC and anti-tumour necrosis factor-alpha (TNF-α) Alexa flour 488 antibodies (all BD Bioscience, UK) to detect intracellular cytokines. Lastly, cells were washed with 1x Perm Wash (BD Bioscience, UK), re-suspended in FACS flow and acquired on a flow on CyAn ADP 9 Colour flow cytometer (Beckman Coulter, USA). Flow cytometry analysis was done using FlowJo (TreeStar, USA).", "Statistical analyses and graphical presentation were carried out using Graphpad Prism 5 (GraphPad Software, USA). Student t tests were used for the volunteer demographic data with the exception of gender, where a χ2 test was used instead. For the experimental data, Mann–Whitney U test was used for non-paired data and Wilcoxon sign ranked test for paired data. Results are given as mean with ranges or medians with IQRs. Differences were considered statistically significant if p values were less than 0.05.", "[SUBTITLE] Demographic characteristics of study population [SUBSECTION] Basic demographics are shown in table 1. HIV-uninfected Malawian adults ((n=24, females 11) mean age 38 years)) and asymptomatic HIV-infected adults ((n=21, females 11) mean age 40 years)) participated in the study. The mean CD4 count for HIV-infected individuals was 375 cells/μl. All participants were asymptomatic and had no recent history of respiratory infection or tuberculosis. The mean BAL cell concentration was comparable between HIV-infected individuals and HIV-uninfected adults (16.2×106cells/100 ml vs 20.5×106cells/100 ml respectively; p=0.1134), but the proportion of lymphocytes in the BAL cells was higher in HIV-infected individuals than HIV-uninfected adults (17.8% vs 9.0%; p=0.0106).\nCharacteristics of subjects enrolled in the study\nBAL, bronchoalveolar lavage; BALF, BAL fluid; N/A, not applicable.\nBasic demographics are shown in table 1. HIV-uninfected Malawian adults ((n=24, females 11) mean age 38 years)) and asymptomatic HIV-infected adults ((n=21, females 11) mean age 40 years)) participated in the study. 
Proportions of naive, memory and regulatory T cells in BAL and peripheral blood

CD4 and CD8 T cells

There was a lower proportion of CD4+ T cells in the total CD3+ T cell population in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 35.0% vs 65.5%, p<0.001) and peripheral blood (median 38.5% vs 61.2%, p<0.001). The proportion of CD8+ T cells in the total CD3+ T cell population was higher in HIV-infected individuals compared to HIV-uninfected adults in both BAL (median 59.7% vs 34.5%, p<0.05) and peripheral blood (median 61.5% vs 38.8%, p<0.05) (figure 1A,B).

Figure 1. Lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome-conjugated antibodies. (A) Lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. (B) Higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using the Mann–Whitney U test. Black horizontal bars represent medians and IQRs.
Naive and memory T cells

To examine whether the proportions of T cell subsets were similar between compartments and whether they were altered by HIV infection, we measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells, as described in the materials and methods section. BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%), whereas peripheral blood CD4+ and CD8+ T cells were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+CD45RA−CCR7−, median 37%; CD8+CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1%, p=0.03; CD8, median 96.3% vs 90.3%, p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42%, p=0.003; CD8, median 0.44% vs 1.88%, p=0.007) (figure 2A,B). In peripheral blood, there was no difference in CD4+ T cell subsets between the HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D).

Figure 2. The proportions of naive and memory T cell subsets differ between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome-conjugated antibodies. Naive (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) subsets were defined. (A–D) BAL T cells (upper) were predominantly of effector memory phenotype, whereas peripheral blood T cells (lower) were distributed among naive, central memory, effector memory and terminal effector phenotypes. (A, B) Higher proportion of effector memory and lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) No difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals and HIV-uninfected adults (left), but a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector cells in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent medians and IQRs. Statistical significance was analysed by the Mann–Whitney U test; p values <0.05 were considered significant.
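Because each subset above is defined solely by the combination of CD45RA and CCR7 positivity, the label applied to every gated CD4+ or CD8+ event reduces to a simple lookup. The following is an illustrative sketch (a hypothetical helper written for this text, not the FlowJo gating actually used in the study), assuming boolean positivity flags per event:

```python
# Illustrative classification of memory subsets from CD45RA/CCR7 positivity,
# following the definitions given above (hypothetical helper, not the FlowJo
# analysis used in the study).
from collections import Counter

def memory_subset(cd45ra_pos: bool, ccr7_pos: bool) -> str:
    if cd45ra_pos and ccr7_pos:
        return "naive"              # CD45RA+CCR7+
    if not cd45ra_pos and ccr7_pos:
        return "central memory"     # CD45RA-CCR7+
    if not cd45ra_pos and not ccr7_pos:
        return "effector memory"    # CD45RA-CCR7-
    return "terminal effector"      # CD45RA+CCR7-

# Hypothetical list of (CD45RA+, CCR7+) flags for gated CD4+ events:
events = [(False, False), (False, False), (False, False), (True, True), (False, True)]
counts = Counter(memory_subset(ra, ccr7) for ra, ccr7 in events)
total = sum(counts.values())
for subset, n in counts.items():
    print(f"{subset}: {100 * n / total:.0f}%")
```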
Regulatory T cells (Tregs)

We investigated the hypothesis that, in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+ as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL than in peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in HIV-uninfected adults (median 3% vs 1.5%, p<0.001), whereas in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). However, absolute counts of Tregs in peripheral blood were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C).

Figure 3. Higher frequency of regulatory T cells in BAL compared to peripheral blood, altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-FoxP3 PE fluorochrome-conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3. (A) Representative flow cytometry dot plots showing Tregs in BAL and peripheral blood from a healthy control. (B) Higher frequency of Tregs in BAL than peripheral blood in HIV-uninfected adults, and higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) No difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent medians and IQRs. Statistical significance was analysed by the Mann–Whitney U test for the HIV-uninfected versus HIV-infected comparison, and the Wilcoxon signed-rank test for the BAL versus peripheral blood comparison; p values <0.05 were considered significant.
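The contrast between proportions and absolute counts reflects simple arithmetic: the absolute Treg count is the Treg percentage applied to the absolute CD4 count, so a higher percentage of a smaller CD4 pool can yield a similar number of cells per microlitre. A worked sketch, using the median percentages reported above and, for the HIV-uninfected group, an assumed CD4 count chosen purely for illustration:

```python
# Worked illustration of Treg proportion vs absolute count. Percentages are
# the medians reported above; the HIV-uninfected CD4 count (700 cells/ul) is
# an assumed value used only for illustration.
def absolute_tregs(treg_percent: float, cd4_per_ul: float) -> float:
    """Absolute Tregs per microlitre = Treg fraction of CD4 x CD4 count."""
    return treg_percent / 100.0 * cd4_per_ul

hiv_infected = absolute_tregs(treg_percent=3.0, cd4_per_ul=375)    # ~11 cells/ul
hiv_uninfected = absolute_tregs(treg_percent=1.5, cd4_per_ul=700)  # ~11 cells/ul
print(round(hiv_infected, 1), round(hiv_uninfected, 1))
```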
Antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood

To investigate respiratory pathogen antigen-specific T cell responses in BAL and peripheral blood, we measured the quality and magnitude of the CD4+ T cell response to influenza virus, S pneumoniae and M tuberculosis antigens using intracellular cytokine staining. We detected antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood (figure 4).

Figure 4. Representative flow cytometry dot plots from an HIV-uninfected adult showing multiple subsets of antigen-specific CD4+ T cells in BAL and peripheral blood. BAL and peripheral blood lymphocytes were stimulated with antigens and T cell responses were measured by intracellular cytokine staining. Plots show interferon-gamma (IFN-γ) and tumour necrosis factor-alpha (TNF-α) responses in BAL (top) and peripheral blood (bottom) cells, in an unstimulated negative control and in cells stimulated with influenza virus, S pneumoniae and M tuberculosis antigens.
Differences between BAL and peripheral blood compartments

There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportions of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing IFN-γ alone, TNF-α alone or both cytokines also differed between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all antigen-specific CD4+ T cell responses.

Figure 5. Lower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) Higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults, and lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) Higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults, and higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) Higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults, and lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent medians and IQRs after background responses were subtracted from all antigen-specific CD4+ T cell responses. Statistical significance was analysed by the Mann–Whitney U test for the HIV-uninfected versus HIV-infected comparison, and the Wilcoxon signed-rank test for the BAL versus peripheral blood comparison; p values <0.05 were considered significant.

Figure 6. Lower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportions of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population were defined. (A) In both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than in HIV-uninfected adults (left); the proportions of IFN-γ single producers (blue), TNF-α single producers (red) and IFN-γ/TNF-α double producers (green) also differed between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The proportions of single- and double-producing S pneumoniae antigen-specific CD4+ T cells differed between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) In BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the M tuberculosis antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than in HIV-uninfected adults (left); the proportions of single- and double-producing M tuberculosis antigen-specific CD4+ T cells also differed between BAL and peripheral blood in HIV-uninfected adults.
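As noted in the legend above, percentage frequencies were derived by summing all cytokine-positive CD4+ T cells and subtracting the matched unstimulated background. A minimal sketch of that calculation (the event counts are hypothetical, not study data):

```python
# Minimal sketch of the antigen-specific frequency calculation described
# above: total cytokine-positive CD4+ T cells as a percentage of CD4+ T
# cells, minus the matched unstimulated (negative control) background.
# Event counts below are hypothetical.
def cytokine_positive_percent(ifng_only: int, tnfa_only: int,
                              double_pos: int, total_cd4: int) -> float:
    return 100.0 * (ifng_only + tnfa_only + double_pos) / total_cd4

stimulated = cytokine_positive_percent(ifng_only=120, tnfa_only=90,
                                       double_pos=210, total_cd4=60000)
unstimulated = cytokine_positive_percent(ifng_only=10, tnfa_only=12,
                                         double_pos=3, total_cd4=58000)

antigen_specific = max(stimulated - unstimulated, 0.0)  # background-subtracted
print(round(antigen_specific, 2))  # % of CD4+ T cells
```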
Differences between HIV-infected and HIV-uninfected adults

The percentage frequencies of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood, where the percentage frequencies were comparable (influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19%, p>0.05) (figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae was similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05), but in peripheral blood it was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all antigen-specific CD4+ T cell responses.

Further, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).
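Note that the polyfunctionality figures quoted here are proportions within the responding (cytokine-positive) population rather than frequencies of total CD4+ T cells. A brief sketch of that breakdown (the counts are hypothetical):

```python
# Sketch of the polyfunctionality breakdown used above: each responding
# CD4+ T cell is classed as IFN-g only, TNF-a only, or double positive,
# and proportions are expressed within the responding population.
# Counts are hypothetical.
responders = {"IFN-g only": 60, "TNF-a only": 85, "IFN-g + TNF-a": 155}
total = sum(responders.values())
for subset, n in responders.items():
    print(f"{subset}: {100 * n / total:.0f}% of antigen-specific CD4+ T cells")
```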
T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue and anti-CD8 APC-H7 fluorochrome conjugated antibodies. (A) The data shows a lower proportion of CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher proportion of CD8+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. Significance was assessed using Mann–Whitney U test. Black horizontal bars represent median and IQRs.", "To examine whether the proportion of T cell subsets was similar between compartments and whether they were altered by HIV infection, the authors measured expression of CD45RA and CCR7 on CD4+ and CD8+ T cells—as described in the materials and methods section. BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype (CD4+CD45RA−CCR7−, median 98%; CD8+CD45RA−CCR7−, median 90%) compared to peripheral blood CD4+ and CD8+ T cells which were distributed among naive (CD4+CD45RA+CCR7+, median 25%; CD8+CD45RA+CCR7+, median 30%), central memory (CD4+CD45RA−CCR7+, median 37%; CD8+CD45RA−CCR7+, median 5%), effector memory (CD4+ CD45RA−CCR7−, median 37%; CD8+ CD45RA−CCR7−, median 20%) and terminal effector phenotypes (CD4+CD45RA+CCR7−, median 6%; CD8+ CD45RA+CCR7−, median 45%) (figure 2). In BAL, there was a higher proportion of effector memory cells in HIV-infected individuals compared to HIV-uninfected adults (CD4, median 98.9% vs 98.1% p=0.03; CD8, median 96.3% vs 90.3% p=0.03) and a lower proportion of central memory T cells (CD4, median 0.37% vs 1.42% p=0.003; CD8, median 0.44% vs 1.88% p=0.007) (figure 2A,B). In peripheral blood, there was no difference in peripheral blood CD4+ T cell subsets between HIV-infected and HIV-uninfected groups. However, the HIV-infected group had a higher proportion of CD8+ effector memory T cells (median 29.6% vs 17.8%, p=0.007) and a lower proportion of CD8+ terminal effector T cells (median 22.2% vs 45.0%, p=0.01) than the HIV-uninfected group (figure 2C,D).\nThe proportions of naive and memory T cell subsets are different between BAL and peripheral blood, and are altered during HIV infection. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD3 FITC, anti-CD4 Pacific blue, anti-CD8 APC-H7, anti-CD45RA PE and anti-CCR7 APC fluorochrome conjugated antibodies. The proportion of naïve (CD45RA+CCR7+), central memory (CD45RA−CCR7+), effector memory (CD45RA−CCR7−) and terminal effector (CD45RA+CCR7−) were defined. (A, B, C, D) The data shows that BAL T cells (upper) were predominantly of effector memory phenotype compared to peripheral blood (lower), in which T cells were distributed among naive, central memory, effector memory and terminal effector phenotypes. (A, B) The data shows a higher proportion of effector memory and lower proportion of central memory BAL CD4+ (left) and CD8+ (right) T cells in HIV-infected individuals compared to HIV-uninfected adults. (C, D) The data shows no difference in peripheral blood CD4+ T cell subsets between HIV-infected individuals compared to HIV-uninfected adults (left), but there was a higher proportion of CD8+ effector memory and a lower proportion of CD8+ terminal effector in HIV-infected individuals compared to HIV-uninfected adults (right). Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test. 
p value <0.05 was used to determine statistical significance.", "We investigated the hypothesis that in HIV-infected individuals, persistent immune activation by HIV results in a higher frequency of regulatory T cells. Blood and BAL Tregs were defined as CD4+CD25hiFoxP3+ as described in the materials and methods (figure 3A). The proportions of Tregs in HIV-infected individuals were similar in BAL and peripheral blood (median 3.7% vs 3%, p>0.01), but were higher in BAL compared to peripheral blood in HIV-uninfected adults (median 4.3% vs 1.5%, p<0.001) (figure 3B). There was a higher proportion of Tregs in the peripheral blood of HIV-infected individuals than in the HIV-uninfected adults (median 3% vs 1.5%, p<0.001). However, in BAL the proportions were similar between the groups (median 3.7% vs 4.3%, p>0.01) (figure 3B). Absolute counts, however, of Tregs in peripheral blood were similar between HIV-uninfected adults and HIV-infected individuals (median 11 cells/μl vs 9 cells/μl, p>0.01) (figure 3C).\nHigher frequency of regulatory T cells in BAL compared to peripheral blood, but altered in HIV-infected individuals. T lymphocytes obtained from BAL and peripheral blood were stained with anti-CD4 Pacific blue, anti-CD25 FITC and anti-Foxp3 PE fluorochrome conjugated antibodies. Regulatory T cells (Tregs) were defined as CD4+ T cells expressing CD25hi and FoxP3+. (A) A flow cytometry representative dot plot showing Tregs in BAL and peripheral blood from a healthy control. (B) The data shows a higher frequency of Tregs in BAL than peripheral blood in HIV-uninfected adults. It also shows a higher frequency of peripheral blood Tregs in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows no difference in the absolute counts of Tregs in peripheral blood between HIV-infected individuals and HIV-uninfected adults. Black horizontal bars represent median and IQRs. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.", "To investigate respiratory pathogen antigen-specific T cell responses in BAL and peripheral blood, we measured the quality and magnitude of the CD4+ T cell response to influenza virus, S pneumoniae and M tuberculosis antigens using intracellular cytokine staining. We detected antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL and peripheral blood (figure 4).\nRepresentative flow cytometry dot flow from an HIV-uninfected adult showing multiple subsets of antigen-specific CD4+ T cells in BAL and peripheral blood. BAL and peripheral blood lymphocytes were stimulated with antigens and T cell responses were measured by intracellular cytokine staining. Representative flow cytometry dot plots from an HIV-uninfected adult showing interferon-γ (IFN-γ) and TNF-alpha (TNF-α) responses in BAL (top) and peripheral blood (bottom) cells, in an unstimulated negative control and cells stimulated with influenza virus, S. pneumoniae and M tuberculosis antigens.\n[SUBTITLE] Differences between BAL and peripheral blood compartments [SUBSECTION] There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). 
The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. (A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. 
(C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.\nThere was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. 
(A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.\n[SUBTITLE] Differences between HIV-infected and HIV-uninfected adults [SUBSECTION] The percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).\nThe percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). 
Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).", "There was a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.51% vs 0.13%, p<0.001), S pneumoniae (median 0.59% vs 0.11%, p<0.001) and M tuberculosis (median 5.53% vs 0.20%, p<0.001) in BAL compared to peripheral blood (figure 5A–C). The proportion of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis producing either IFN-γ alone, TNF-α alone or both cytokines were different between BAL and peripheral blood (figure 6A–C). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nLower frequency of antigen-specific BAL CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the magnitude of the antigen-specific T cell response was measured by intracellular cytokine staining. The total of all cytokine-secreting CD4+ T cells was used to represent the percentage frequency of antigen-specific cells. (A) The data shows a higher percentage frequency of influenza virus antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL influenza virus antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (B) The data shows a higher percentage frequency of S pneumoniae antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a higher percentage frequency of S pneumoniae antigen-specific peripheral blood CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. (C) The data shows a higher percentage frequency of M tuberculosis antigen-specific CD4+ T cells in BAL compared to peripheral blood in HIV-uninfected adults. It also shows a lower percentage frequency of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals compared to HIV-uninfected adults. Black horizontal bars represent median and IQRs after background responses were subtracted from all the antigen-specific CD4 T cell responses. Statistical significance was analysed by the Mann–Whitney U test in the HIV-uninfected versus HIV-infected comparison, and Wilcoxon Signed rank test in the BAL versus peripheral blood comparison. p value <0.05 was used to determine statistical significance.\nLower proportion of polyfunctional antigen-specific CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. BAL and peripheral blood lymphocytes were stimulated with antigens and the quality of the antigen-specific T cell response was measured by intracellular cytokine staining. The proportion of single producers (IFN-γ alone or TNF-α alone) and double producers (IFN-γ and TNF-α) in the antigen-specific CD4+ T cell population was defined. 
(A) The data shows that in both BAL (upper) and peripheral blood (lower), the proportion of double producers (green) in the influenza virus antigen-specific CD4+ T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against influenza virus including IFN-γ single producers (blue), TNF-α single producers (red), and IFN-γ/TNF-α double producers (green) were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (B) The data shows that the proportion of subsets of antigen-specific CD4+ T cells against S pneumoniae (including single producers and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults. (C) The data shows that in BAL (upper) and peripheral blood (lower) the proportion of double producers (green) in the M tuberculosis antigen-specific CD4 T cell population was lower in HIV-infected individuals (right) than HIV-uninfected adults (left). It also shows that the proportion of subsets of antigen-specific CD4+ T cells against M tuberculosis (including single producers, and double producers), were different between BAL (upper) and peripheral blood (lower) in HIV-uninfected adults.", "The percentage frequency of antigen-specific CD4+ T cells against influenza virus (median 0.15% vs 0.51%, p<0.05) and M tuberculosis (median 0.5% vs 5.53%, p<0.05) antigens were lower in HIV-infected individuals compared to HIV-uninfected adults in BAL (figure 5A,C). However, this was not the case in peripheral blood where the percentage frequencies were comparable (Influenza virus, median 0.3% vs 0.13%, p>0.05; M tuberculosis, median 0.12% vs 0.19% p>0.05)(figure 5A,C). The percentage frequency of antigen-specific CD4+ T cells against S pneumoniae were similar in HIV-infected compared to HIV-uninfected adults in BAL (median 0.84% vs 0.59%, p>0.05). However, in peripheral blood the percentage frequency was higher in HIV-infected compared to HIV-uninfected adults (median 0.2% vs 0.1%, p=0.02) (figure 5B). Background responses were subtracted from all the antigen-specific CD4+ T cell responses.\nFurther, in BAL and peripheral blood, there was a lower proportion of multiple cytokine-producing (polyfunctional) antigen-specific CD4+ T cells against influenza virus (BAL, 59% vs 27%; Blood, 41% vs 34%) and M tuberculosis (BAL, median 77% vs 41%; Blood, median 57% vs 26%) in HIV-infected individuals compared to HIV-uninfected adults (figure 6A,C).", "The authors have demonstrated compartment differences between T cell immunity in the bronchoalveolar space and the periphery. These include a predominant presence of effector memory T cells and regulatory CD4+ T cells in BAL, and a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL compared to peripheral blood. Our data has also demonstrated that HIV-infected individuals have impaired pulmonary CD4+ T cell immunity, which is characterised by lower proportions of total CD4+ T cells and impaired antigen-specific BAL CD4+ T cell response to influenza virus and M tuberculosis antigens.\nConsistent with previous observations,21 we noticed that BAL CD4+ and CD8+ T cells were predominantly of effector memory phenotype irrespective of HIV status, while peripheral blood T cell phenotypes were distributed among naive, central memory, effector memory and terminal effector. 
Effector memory T cells migrate to the lung following antigen presentation by antigen presenting cells in the local draining lymph nodes.22 In the lung, effector memory T cells may be involved in mediating host defence against pathogens through macrophage activation and neutrophil recruitment.9 10 Formulation of vaccines which can induce effector memory T cells in the lung, mimicking the natural route of exposure, might improve the efficacy of future vaccines against pulmonary mucosal infections such as pneumococcal pneumonia.\nThe authors observed a higher proportion of regulatory T cells in BAL fluid compared to peripheral blood in HIV-uninfected adults. Immunity in the lung is well regulated. A high degree of regulation is necessary because an excessive response may lead to destructive immunopathology, while a weak response may lead to prolonged infection. The higher proportion of Tregs in BAL, compared to peripheral blood in HIV-uninfected adults, may reflect the capacity for highly regulated immunity in the lung. While beneficial to the host, this sophisticated level of pulmonary immunocompetence poses a challenge to the potential success of mucosally-administered vaccines against respiratory pathogens.\nIn addition to the overall phenotypic differences between BAL and peripheral blood T cells seen in this study, the authors observed a higher percentage frequency of antigen-specific CD4+ T cells against influenza virus, S pneumoniae and M tuberculosis in BAL, when compared to the peripheral blood of HIV-uninfected adults. The authors investigated a variety of respiratory antigens in this study and demonstrated that each pathogen has its unique cytokine secreting profile. The influenza virus antigen-specific CD4+ T cells were likely induced through previous infection or exposure. The pneumococcal antigen-specific CD4+ T cells were probably induced through previous exposure following contact, colonisation in the nasopharynx or disease. The M tuberculosis antigen-specific CD4+ T cells may have been induced by previous BCG vaccination, latent infection or exposure. Despite differences in kinetics of exposure, our data suggest that antigen-specific CD4+ T cell responses against common respiratory pathogens are compartmentalised in the lung, where they may confer protection at the site of antigen entry. Therefore, the induction of pulmonary antigen-specific CD4+ T cells through vaccination may provide a useful strategy for preventing respiratory infections in the lung.\nIn the context of HIV, first, the authors observed a higher proportion of BAL and peripheral blood effector memory CD4+ and CD8+ T cells in HIV-infected individuals than in HIV-uninfected adults. This may be attributed to the high level of hyper-activation associated with HIV infection,23 whereby naive T cells are driven along their differentiation pathway to become effector memory T cells.\nSecond, the authors observed a higher proportion of Tregs in the blood of HIV-infected individuals compared to HIV-uninfected adults. This observation is consistent with other studies on Tregs in peripheral blood during HIV infection.24 Expressing Tregs as absolute counts, however, showed that there was no difference between HIV-uninfected adults and HIV-infected individuals. This suggests that, during HIV infection, non-Tregs are preferentially depleted compared to Tregs. In contrast, the authors did not observe a difference in proportion of BAL Tregs between HIV-infected individuals and HIV-uninfected adults. 
However, the proportion of BAL CD8+ T cells was higher in HIV-infected compared to HIV-uninfected adults, suggesting that there is either a depletion of CD4+ T cells or an infiltration of CD8+ T cells. Taking these observations together, there is likely to be an altered ratio of Tregs to effector CD4+ and CD8+ T cells in BAL. This factor may either tip the balance to over-regulation of CD4 responses or lack of control of CD8 infiltrates in HIV-infected adults.\nThird, consistent with the earlier observation on altered proportions of pulmonary T cell subsets in HIV-infected individuals, the percentage frequencies of influenza virus and M tuberculosis antigen-specific BAL CD4+ T cells were lower in HIV-infected individuals compared to HIV-uninfected adults. This is in line with a recent observation that there was depletion of BAL M tuberculosis antigen-specific CD4+ T cells in HIV-infected individuals.21 Recent work from other investigators has shown a preferential depletion of peripheral blood M tuberculosis antigen-specific but not cytomegalovirus (CMV) antigen-specific CD4+ T cells in HIV-infected adults.25 They concluded that M tuberculosis antigen-specific adaptive immunity is particularly vulnerable to HIV-associated immune damage. This study has shown that adaptive immunity to other respiratory antigens such influenza virus and not only M tuberculosis is also impaired in HIV-infected individuals. The impaired antigen-specific BAL CD4+ T cell response observed in this study might help to explain the increased susceptibility of HIV-infected individuals to influenza virus and M tuberculosis infections.17 26 The impaired BAL CD4+ T cell immunity to influenza virus in HIV-infected adults may result in increased risk to secondary bacterial infection such as pneumococcal pneumonia. The authors observation that there were detectable antigen-specific CD4+ T cell defects in BAL, but not in peripheral blood, has implications for clinical practice and for vaccine development. Most available assays, being based on peripheral blood, do not detect local responses in the lung and may therefore give misleading results when used for diagnosis or for estimations of vaccine efficacy.\nLastly, the authors demonstrated that there was a lower proportion of polyfunctional CD4+ T cells in BAL and peripheral blood of HIV-infected individuals compared to HIV-uninfected adults. 
This observation is consistent with others that have showed impaired polyfunctionality in M tuberculosis antigen-specific CD4+ T cells in BAL from HIV-infected adults.21 Recently, it has been shown that during HIV infection, M tuberculosis antigen-specific T cells in blood change from polyfunctional CD4+ T cells to a more predominant monofunctional CD8+ T cell phenotype.27 The authors speculate that owing to hyperactivation induced by HIV,23 T cells are driven towards the terminal differentiation stages of their life cycle, where there is loss of the ability to secrete more than one cytokine.28 It has been reported that polyfunctional T cells may be functionally superior to monofunctional T cells and may provide a correlate of protection against parasitic,29 bacterial30 and viral pathogens.31 This finding provides supporting evidence to the suggestion that an impaired repertoire of multifunctionality, as well as lower absolute numbers of respiratory antigen-specific CD4+ T cell immunity, may contribute to the high burden of respiratory diseases in HIV-infected individuals.\nIn conclusion, the authors have demonstrated that HIV infection is associated with impaired pulmonary antigen-specific CD4+ T cell immunity against viral and bacterial respiratory pathogens. These defects in pulmonary immunity may help to explain the observed increased risk of acute lower respiratory tract infections in HIV-infected adults at any point during the fall in their systemic CD4 count. A greater understanding of antigen-specific T cell responses that occur in the lungs of HIV-infected individuals will potentially enable better planning of clinical care for these individuals – in terms of prophylactic treatments against common infectious pathogens and the timing of anti-retroviral medication. In addition, it will also provide appropriate markers of efficacy in future vaccine trials for common respiratory pathogens such as S pneumoniae, M tuberculosis and influenza virus." ]
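The group comparisons reported in the results above lend themselves to a short computational illustration. The sketch below uses entirely hypothetical frequencies, not the study data, to show the analysis pattern described: background (unstimulated) responses are subtracted from antigen-stimulated CD4+ T cell frequencies, paired BAL versus peripheral blood values are compared with the Wilcoxon signed-rank test, and HIV-infected versus HIV-uninfected groups are compared with the Mann-Whitney U test. All variable names and numbers are invented for illustration.

```python
# Illustrative sketch only: hypothetical numbers, not data from the study above.
# It mirrors the analysis pattern described in the Results: background
# subtraction of antigen-specific CD4+ T cell frequencies, a paired Wilcoxon
# signed-rank test for BAL vs peripheral blood, and an unpaired Mann-Whitney U
# test for HIV-infected vs HIV-uninfected groups.
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical % cytokine-positive CD4+ T cells per subject (same 6 subjects
# sampled in both compartments), stimulated and unstimulated (background) tubes.
stim_bal     = np.array([0.70, 0.55, 0.62, 0.48, 0.80, 0.66])
unstim_bal   = np.array([0.10, 0.08, 0.12, 0.05, 0.09, 0.07])
stim_blood   = np.array([0.20, 0.15, 0.18, 0.11, 0.22, 0.17])
unstim_blood = np.array([0.05, 0.04, 0.06, 0.03, 0.05, 0.04])

# Background responses subtracted from all antigen-specific responses
bal_specific   = np.clip(stim_bal - unstim_bal, 0.0, None)
blood_specific = np.clip(stim_blood - unstim_blood, 0.0, None)

# Paired compartment comparison (same subjects): Wilcoxon signed-rank test
_, p_paired = wilcoxon(bal_specific, blood_specific)
print(f"BAL vs blood: median {np.median(bal_specific):.2f}% vs "
      f"{np.median(blood_specific):.2f}%, p={p_paired:.3f}")

# Unpaired group comparison: Mann-Whitney U test on BAL frequencies
hiv_neg = bal_specific                                    # hypothetical HIV-uninfected group
hiv_pos = np.array([0.15, 0.20, 0.10, 0.25, 0.18, 0.12])  # hypothetical HIV-infected group
_, p_group = mannwhitneyu(hiv_pos, hiv_neg, alternative="two-sided")
print(f"HIV-infected vs HIV-uninfected BAL frequencies: p={p_group:.3f}")
```

An analysis of this kind would be run separately per antigen and per cytokine combination, keeping the same pairing structure used in the figures above.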
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, "discussion" ]
[ "HIV", "human", "lung", "T cells", "bronchoalveolar lavage", "mucosa", "bronchoscopy", "immunodeficiency", "lymphocyte biology", "pneumonia", "respiratory infection" ]
Prevalence of bipolar disorder in children and adolescents with attention-deficit hyperactivity disorder.
21357877
Some research suggests that children with attention-deficit hyperactivity disorder (ADHD) have a higher than expected risk of bipolar affective disorder. No study has examined the prevalence of bipolar disorder in a UK sample of children with ADHD.
BACKGROUND
Psychopathology symptoms and diagnoses of bipolar disorder were assessed in 200 young people with ADHD (170 male, 30 female; age 6-18 years, mean 11.15, s.d. = 2.95). Rates of current bipolar disorder symptoms and diagnoses are reported. A family history of bipolar disorder in parents and siblings was also recorded.
METHOD
Only one child, a 9-year-old boy, met diagnostic criteria for both ICD-10 hypomania and DSM-IV bipolar disorder not otherwise specified.
RESULTS
In a UK sample of children with ADHD a current diagnosis of bipolar disorder was uncommon.
CONCLUSIONS
[ "Adolescent", "Attention Deficit Disorder with Hyperactivity", "Bipolar Disorder", "Child", "Comorbidity", "Diagnostic and Statistical Manual of Mental Disorders", "Family Health", "Female", "Humans", "International Classification of Diseases", "Interview, Psychological", "Male", "Prevalence", "United Kingdom" ]
3046179
null
null
Method
[SUBTITLE] Participants [SUBSECTION] The sample consisted of the first 200 participants in a larger ongoing genetic study of ADHD. Participants were consecutively recruited from community child and adolescent psychiatry and paediatric out-patient clinics in South Wales and other parts of the UK. Each child had a clinical diagnosis of ADHD or was undergoing assessment for a diagnosis. No pre-selection strategy was used apart from the exclusion criteria listed below and the willingness of families to participate. Psychopathology was assessed using the Child and Adolescent Psychiatric Assessment (CAPA) research diagnostic interview,5 which was used to confirm that all participants met DSM–III–R or DSM–IV criteria for ADHD or ICD–10 criteria for hyperkinetic disorder.6–8 To meet study inclusion criteria the participant had to be living with at least one biological parent, be British, White and have a full-scale IQ of 70 or above, assessed using the Wechsler Intelligence Scale for Children version IV.9 Exclusion criteria comprised any major neurological or genetic condition such as epilepsy or fragile-X syndrome, psychosis (but not mood disorder), pervasive developmental disorder and Tourette syndrome (although those with other tic disorders were not excluded). Ethical approval for the study was obtained from the Wales Multicentre Research Ethics Committee and written informed consent and assent were obtained from participating parents and children. [SUBTITLE] Assessments [SUBSECTION] Interviews were conducted by trained and supervised graduate and postdoctoral psychologists. The ‘parent’ version of the CAPA, a reliable, well-established semi-structured research diagnostic interview that assesses current symptom presence, was used to assess clinical symptoms of ADHD, oppositional defiant disorder, conduct disorder, anxiety disorder and mood disorders, including bipolar disorder. The interview section on hypomania/mania covers mood changes (elation and irritability) that have a duration of at least 1 h. If there is no mood change, criterion B hypomania/mania items are not assessed. 
The child version of the CAPA includes the same questions as the parent interview, although it does not assess self-reported ADHD symptoms.10 This measure was additionally used to interview those aged 12 years and over. Comorbidities (including bipolar disorder) were considered present if reported by either parent or child.11 To assess the ICD–10 and DSM–IV criteria of ADHD pervasiveness in more than one setting, reports of symptoms in school were obtained using teacher reports on the Child ADHD Teacher Telephone Interview or the Conners Teacher Rating Scale.12,13 Symptoms and diagnoses according to DSM–IV and ICD–10 criteria were generated using information from the CAPA. All interviews were audiotaped, and interviewers were supervised weekly by an experienced clinician (A.T.). Reports of family psychiatric history were obtained for parents and biological siblings, by asking the parent about each parent and sibling in turn. The participating children had a total of 409 siblings; the number of siblings for each individual ranged from 0 to 6 (mean 2). Parents also completed questionnaires concerning demographic and family information.
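As a minimal illustration of the "either informant" rule described in the Assessments text above, the sketch below uses a hypothetical per-participant record of CAPA outcomes; the data structure, field names and example values are invented for illustration and are not part of the study.

```python
# Minimal sketch, assuming a hypothetical per-participant record of CAPA
# interview outcomes. It illustrates the rule described above: a comorbid
# diagnosis counts as present if reported by the parent interview or, for
# participants aged 12 years and over, by the child interview.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class CapaRecord:
    age: int
    parent_dx: Dict[str, bool] = field(default_factory=dict)  # parent-reported diagnoses
    child_dx: Dict[str, bool] = field(default_factory=dict)   # child-reported (used only if age >= 12)

def diagnosis_present(record: CapaRecord, diagnosis: str) -> bool:
    """True if either informant reports the diagnosis (either-informant rule)."""
    parent = record.parent_dx.get(diagnosis, False)
    child = record.child_dx.get(diagnosis, False) if record.age >= 12 else False
    return parent or child

# Example: a 13-year-old whose child interview, but not parent interview,
# meets criteria for an anxiety disorder.
example = CapaRecord(age=13,
                     parent_dx={"bipolar_disorder": False, "anxiety": False},
                     child_dx={"bipolar_disorder": False, "anxiety": True})
print(diagnosis_present(example, "anxiety"))           # True
print(diagnosis_present(example, "bipolar_disorder"))  # False
```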
null
null
null
null
[ "Participants", "Assessments", "Results", "ADHD", "Comorbidities", "Bipolar disorder", "Mania and hypomania symptoms", "Family history of bipolar disorder", "Discussion", "Abnormal irritable mood", "Limitations", "Funding" ]
[ "The sample consisted of the first 200 participants in a larger ongoing\n genetic study of ADHD. Participants were consecutively recruited from\n community child and adolescent psychiatry and paediatric out-patient clinics\n in South Wales and other parts of the UK. Each child had a clinical diagnosis\n of ADHD or was undergoing assessment for a diagnosis. No pre-selection\n strategy was used apart from the exclusion criteria listed below and the\n willingness of families to participate. Psychopathology was assessed using the\n Child and Adolescent Psychiatric Assessment (CAPA) research diagnostic\n interview,5 which\n was used to confirm that all participants met DSM–III–R or\n DSM–IV criteria for ADHD or ICD–10 criteria for hyperkinetic\n disorder.6–8\n To meet study inclusion criteria the participant had to be living with at\n least one biological parent, be British, White and have a full-scale IQ of 70\n or above, assessed using the Wechsler Intelligence Scale for Children version\n IV.9 Exclusion\n criteria comprised any major neurological or genetic condition such as\n epilepsy or fragile-X syndrome, psychosis (but not mood disorder), pervasive\n developmental disorder and Tourette syndrome (although those with other tic\n disorders were not excluded). Ethical approval for the study was obtained from\n the Wales Multicentre Research Ethics Committee and written informed consent\n and assent were obtained from participating parents and children.", "Interviews were conducted by trained and supervised graduate and\n postdoctoral psychologists. The ‘parent’ version of the CAPA, a\n reliable, well-established semi-structured research diagnostic interview that\n assesses current symptom presence, was used to assess clinical symptoms of\n ADHD, oppositional defiant disorder, conduct disorder, anxiety disorder and\n mood disorders, including bipolar disorder. The interview section on\n hypomania/mania covers mood changes (elation and irritability) that have a\n duration of at least 1 h. If there is no mood change, criterion B\n hypomania/mania items are not assessed. The child version of the CAPA includes\n the same questions as the parent interview, although it does not assess\n self-reported ADHD\n symptoms.10 This\n measure was additionally used to interview those aged 12 years and over.\n Comorbidities (including bipolar disorder) were considered present if reported\n by either parent or\n child.11 To assess\n the ICD–10 and DSM–IV criteria of ADHD pervasiveness in more than\n one setting, reports of symptoms in school were obtained using teacher reports\n on the Child ADHD Teacher Telephone Interview or the Conners Teacher Rating\n Scale.12,13\nSymptoms and diagnoses according to DSM–IV and ICD–10 criteria\n were generated using information from the CAPA. All interviews were\n audiotaped, and interviewers were supervised weekly by an experienced\n clinician (A.T.). Reports of family psychiatric history were obtained for\n parents and biological siblings, by asking the parent about each parent and\n sibling in turn. The participating children had a total of 409 siblings; the\n number of siblings for each individual ranged from 0 to 6 (mean 2). Parents\n also completed questionnaires concerning demographic and family\n information.", "The 200 participants, 170 (85%) of whom were male and 30 (15%) female, were\n aged 6–18 years (mean 11.15, s.d. = 2.95; median 11.00, s.d. 
= 2.95).\nThe social class of 176 families (24 families had missing data) was determined by classifying the occupation of the main wage-earner in the family using the UK Standard Occupation Classification 2000 (www.ons.gov.uk). The families were then split into three social class categories: high social class (12.5%, n = 22), comprising families from professional and managerial jobs; medium social class (36.4%, n = 64), comprising those with skilled occupations (manual, non-manual) and partially skilled workers; and low social class (51.1%, n = 90), comprising those with unskilled jobs and unemployed or unclassified individuals.\n[SUBTITLE] ADHD [SUBSECTION] The mean number of ADHD symptoms (from a possible 18) was 15.97 (s.d. = 1.98). All participants met criteria for at least one of the following: a DSM–III–R diagnosis of ADHD (99.0%), an ICD–10 diagnosis of hyperkinetic disorder (27.5% hyperkinetic disorder, 34.5% hyperkinetic conduct disorder) or a DSM–IV diagnosis of ADHD (78.0% ADHD combined type, 10.5% hyperactive/impulsive type, 6.5% inattentive type).\n[SUBTITLE] Comorbidities [SUBSECTION] Oppositional defiant disorder according to DSM–IV criteria was diagnosed in 42.0% of the sample and conduct disorder in 14.5%. Four participants (2.0%) received a diagnosis of generalised anxiety disorder. One (0.5%) participant had social anxiety disorder and 2 (1.0%) met criteria for separation anxiety disorder. Three children (1.5%) met criteria for major depressive episode. Twenty (10.0%) had a tic disorder. The comorbidity rates of oppositional defiant and conduct disorders were similar to those found in other European ADHD studies,14,15 but rates of anxiety and depression were lower than in some US clinic-based studies.16\n[SUBTITLE] Bipolar disorder [SUBSECTION] When both DSM–IV and ICD–10 research diagnostic criteria were applied, only one child, a 9-year-old boy, met ICD–10 criteria for hypomania and DSM–IV criteria for bipolar disorder not otherwise specified (NOS). He had expansive mood (a criterion A symptom) that lasted for 2 weeks and three criterion B symptoms (talkativeness, decreased need for sleep and distractibility during the mood disturbance) needed to diagnose an episode of hypomania and bipolar disorder. The CAPA requires that the criterion B symptoms represent a change from usual (i.e. it does not allow simply double-coding of ADHD items). 
For confirmation of the diagnoses, three clinicians (one masked to the possible diagnosis) reviewed the audiotaped interview. All gave a clinical diagnosis of bipolar disorder NOS, because hypomanic episodes (not mania) with no depressive episode were reported. The child had a family psychiatric history of mood disorder, as his mother had bipolar disorder. No mental health problem was reported for the father. The affected child met DSM–IV criteria for combined type ADHD and ICD–10 criteria for hyperkinetic conduct disorder. There was no case of rapid cycling.\n[SUBTITLE] Mania and hypomania symptoms [SUBSECTION] Symptoms of mania and hypomania, including mood disturbance of less than 4 days, were found in only one child, who met criteria for bipolar disorder, and in 19 individuals who reported persistent irritability (Table 1).\nTable 1. Distribution of DSM–IV and ICD–10 symptoms of mania and hypomania (n = 200)\nSymptom: n (%)\nExpansive mood: 1 (0.5)\nIrritable mood: 19 (9.5)\nMore talkative(a): 1 (0.5)\nFlight of ideas(a): 0 (0.0)\nPressure of speech(a): 0 (0.0)\nIncreased goal-directed activity (motor pressure)(a): 0 (0.0)\nPsychomotor agitation(a): 0 (0.0)\nDecreased need for sleep(a): 1 (0.5)\nDistractibility(a): 1 (0.5)\nGrandiosity(a): 0 (0.0)\nReckless behaviour(a): 0 (0)\nIncrease in adaptive activity(a): 0 (0)\na. Symptom presence rated if there is mood disturbance lasting ≥1 h, not necessarily occurring exclusively during the mood episode.\n[SUBTITLE] Family history of bipolar disorder [SUBSECTION] A family history of bipolar disorder was absent in fathers and siblings, but was reported by three mothers (Table 2).\nTable 2. Reported psychiatric history in parents and siblings\nDiagnosis(a): Mothers (n = 200), n (%)(b); Fathers (n = 200), n (%)(b); Siblings (n = 409), n (%)(b)\nADHD: 2 (1.0); 5 (2.5); 46 (11.2)\nDepression: 49 (24.5); 18 (9.0); 2 (0.5)\nBipolar disorder: 3 (1.5); 0 (0.0); 0 (0.0)\nSchizophrenia: 1 (0.5); 1 (0.5); 1 (0.2)\nADHD, attention-deficit hyperactivity disorder. a. Diagnoses are not mutually exclusive as many participants had more than one diagnosis. b. Percentage of whole sample.", "The mean number of ADHD symptoms (from a possible 18) was 15.97 (s.d. = 1.98). 
[SUBTITLE] Discussion [SUBSECTION] This study is, to our knowledge, the first to examine the current prevalence of bipolar affective disorder and hypomania/mania symptoms in a UK sample of children and adolescents with a diagnosis of ADHD. In this study the prevalence of bipolar disorder or hypomania was low (0.5% of the sample). A recent epidemiological study of 5326 UK children aged 8–19 years that used a different diagnostic interview, the Development and Well-Being Assessment, found that only 0.1% met DSM–IV criteria for bipolar disorder.17 A similar rate of bipolar disorder (0.1%) was reported from the USA in the Great Smoky Mountains epidemiological study of children aged 9–13 years, which used the CAPA.18 Thus our estimate suggests that there is no greatly elevated level of unidentified bipolar disorder in children with ADHD who are currently referred to district child psychiatry and paediatric out-patient settings. The overall rate of a family history of bipolar disorder was also within population prevalence estimates of this disorder (0.5–1.5% in adults).19

Our low rates of bipolar disorder differ substantially from some studies in the USA, although reported rates vary widely. These studies have been reviewed extensively elsewhere.20 There are a number of possible reasons for this variation, including sample selection (specialist clinic v. routine out-patient services), the age group studied, whether or not researchers started with index cases of bipolar disorder or ADHD, and differences in diagnostic practice. Our sample consisted of routine cases from child and adolescent out-patient psychiatry and paediatric services rather than specialist clinics, so we might expect lower rates of bipolar disorder than in studies of samples from specialist centres.2 The variation in ascertainment across different studies is likely to have contributed to the varying prevalence rates of bipolar disorder in ADHD. Another possibility is that our sample had not yet passed through the age of risk (the median age in our sample was 11 years). For example, Biederman et al found a slightly higher rate of bipolar disorder over time in a cohort of 140 boys with ADHD after 4 years – 11% at baseline (mean age 10.7 years) and an additional 12% of new cases at follow-up (mean age 14.6 years).2 We have to consider that the Biederman study was a longitudinal study and ours was cross-sectional, although the baseline prevalence rate in the former sample was still much higher than that found in our UK sample.

One key issue is that the reported relationship between ADHD and bipolar disorder also appears to vary depending on the diagnosis of the index cases. Thus, in general, rates of ADHD in those with bipolar disorder (especially early-onset disorder) appear to be much higher than rates of bipolar disorder in those with ADHD.21,22 There is reasonably consistent evidence suggesting high rates of ADHD in samples of bipolar disorder. The Course and Outcome of Bipolar Illness in Youth study, a cohort study involving children and adolescents with bipolar-spectrum disorders, found the rate of ADHD to be 58.6%.22 Another study showed an increased rate of ADHD in youths with bipolar disorder (32%) compared with adults with bipolar disorder (3%).23 One possibility is that there is a subgroup of individuals with early-onset bipolar disorder accompanied by comorbid ADHD. Alternatively, ADHD could be a herald of later bipolar disorder in a subgroup of genetically susceptible individuals, indexed by a strong family history of bipolar disorder. However, the evidence to date suggests that the majority of individuals with ADHD do not develop bipolar disorder or show a relationship with bipolar disorder, given that family studies of ADHD do not seem to show elevated rates of bipolar disorder in relatives.24

Another issue is variation in diagnostic practice. A recent study into the age at onset of bipolar disorder in the USA and Europe (in children without ADHD) showed that the rate of childhood-onset bipolar disorder in the USA is double that in Europe.25 An investigation of national trends in the USA has also shown a recent rapid increase in the rate of diagnosis of bipolar disorder.23 This has raised the possibility that child psychiatrists in the UK are missing bipolar disorder. An alternative explanation is that there are transatlantic differences in diagnosing bipolar disorder, and one study does indeed suggest that UK clinicians interpret hypomania symptoms in children differently from their US counterparts.26 However, our study suggests that when a standardised research diagnostic interview and ICD–10 or DSM–IV criteria for bipolar disorder are used in the UK, the rate of bipolar disorder in out-patients with ADHD is low. Rates of bipolar disorder in the USA that have also been based on standardised interviews and diagnostic criteria have varied widely.2,20 It is not known whether the exact type of diagnostic instrument used is important, because many current diagnostic instruments, including the CAPA, have yet to be evaluated or compared with regard to sensitivity and specificity specifically for detecting bipolar disorder. Another possibility is that referral patterns for ADHD vary in different countries. For example, rates of comorbid depression and ADHD inattentive type appear to be lower in our sample and in a UK community study (Ford et al27) than in US studies (e.g. that by Elia et al16). Many studies of bipolar disorder have not reported the rates of ADHD subtypes, and this needs to be considered in the future because it is possible that the relationship between ADHD and bipolar disorder varies for the different ADHD subtypes. A final explanation of the observed variation in prevalence rates of bipolar disorder is that there is genuine geographical variation for unknown reasons.

[SUBTITLE] Abnormal irritable mood [SUBSECTION] Some researchers put much emphasis on the importance of irritability when diagnosing bipolar disorder in children and adolescents, arguing that the presentation and course are different from adult bipolar disorder.28 Most clinicians and researchers suggest that irritability needs to be episodic in nature. However, other researchers have suggested that it is the severity of irritability that distinguishes bipolar disorder, rather than its episodic nature.29 Although in our sample 19 individuals reported persistent irritable mood, none met the level of severity required by Mick et al,29 and similarly none of these children had any criterion B symptom; given that many of these symptoms (e.g. distractibility, increased activity, talkativeness) are almost inevitably present in ADHD, we only included these symptoms if they were reported as increased during the episodes of the irritable mood (i.e. episodic). Other investigators have suggested that these overlapping symptoms are not of diagnostic importance as they failed to differentiate between ADHD and bipolar disorder in one study, and suggest that elated mood and grandiosity are of greater diagnostic importance to bipolar disorder than is irritability, as they found the latter to be a shared symptom of ADHD and bipolar disorder, and of higher prevalence in people with ADHD but no mood disorder.30

[SUBTITLE] Limitations [SUBSECTION] Like all studies, ours has limitations that must be considered, including ascertainment issues. First, excluding cases with psychosis would have an impact on the prevalence of bipolar disorder in our sample, as mania may present with psychotic symptoms. However, the contentious research and clinical issue at present is not to do with psychosis, which is likely to be picked up, but rather with the overlap of bipolar disorder and ADHD. We also cannot rule out the possibility that clinicians pre-selected ADHD cases without mood disturbance, although this was not an exclusion criterion and informal enquiry suggested that this approach had not been adopted by clinics. Another limitation is that the CAPA assesses current symptoms. Also, some have suggested that as most child and adolescent research diagnostic interviews are based on stringent interpretations of DSM and ICD criteria, current instruments might underdetect bipolar disorder.31 As all participants were White, we cannot generalise our findings to other ethnic groups; our sample size was relatively small, and participants might not have been old enough for us to detect increased rates of bipolar disorder. However, the sample ages are representative of current UK child and adolescent mental health service attenders with ADHD. Our findings might have been different in older adolescents and young adults. In line with other ADHD studies our sample was predominantly male (85%), so further investigations might be needed to generalise to the female population. Finally, the family psychiatric history was reported by family members and not formally obtained from medical notes or by direct interview with each family member, and reported rates of disorder may therefore be lower than the true estimate.

In conclusion, in a UK sample of children with ADHD, current bipolar affective disorder and bipolar symptoms were uncommon.

[SUBTITLE] Funding [SUBSECTION] The study was supported by a grant from the Wellcome Trust on the genetics of attention-deficit hyperactivity disorder to A.T., M. O’Donovan, M. Owen, P. Holmans, M.B.M. van den Bree (Cardiff University) and L. Kent (St Andrew’s University).
[ "Method", "Participants", "Assessments", "Results", "ADHD", "Comorbidities", "Bipolar disorder", "Mania and hypomania symptoms", "Family history of bipolar disorder", "Discussion", "Abnormal irritable mood", "Limitations", "Funding" ]
[ "[SUBTITLE] Participants [SUBSECTION] The sample consisted of the first 200 participants in a larger ongoing\n genetic study of ADHD. Participants were consecutively recruited from\n community child and adolescent psychiatry and paediatric out-patient clinics\n in South Wales and other parts of the UK. Each child had a clinical diagnosis\n of ADHD or was undergoing assessment for a diagnosis. No pre-selection\n strategy was used apart from the exclusion criteria listed below and the\n willingness of families to participate. Psychopathology was assessed using the\n Child and Adolescent Psychiatric Assessment (CAPA) research diagnostic\n interview,5 which\n was used to confirm that all participants met DSM–III–R or\n DSM–IV criteria for ADHD or ICD–10 criteria for hyperkinetic\n disorder.6–8\n To meet study inclusion criteria the participant had to be living with at\n least one biological parent, be British, White and have a full-scale IQ of 70\n or above, assessed using the Wechsler Intelligence Scale for Children version\n IV.9 Exclusion\n criteria comprised any major neurological or genetic condition such as\n epilepsy or fragile-X syndrome, psychosis (but not mood disorder), pervasive\n developmental disorder and Tourette syndrome (although those with other tic\n disorders were not excluded). Ethical approval for the study was obtained from\n the Wales Multicentre Research Ethics Committee and written informed consent\n and assent were obtained from participating parents and children.\nThe sample consisted of the first 200 participants in a larger ongoing\n genetic study of ADHD. Participants were consecutively recruited from\n community child and adolescent psychiatry and paediatric out-patient clinics\n in South Wales and other parts of the UK. Each child had a clinical diagnosis\n of ADHD or was undergoing assessment for a diagnosis. No pre-selection\n strategy was used apart from the exclusion criteria listed below and the\n willingness of families to participate. Psychopathology was assessed using the\n Child and Adolescent Psychiatric Assessment (CAPA) research diagnostic\n interview,5 which\n was used to confirm that all participants met DSM–III–R or\n DSM–IV criteria for ADHD or ICD–10 criteria for hyperkinetic\n disorder.6–8\n To meet study inclusion criteria the participant had to be living with at\n least one biological parent, be British, White and have a full-scale IQ of 70\n or above, assessed using the Wechsler Intelligence Scale for Children version\n IV.9 Exclusion\n criteria comprised any major neurological or genetic condition such as\n epilepsy or fragile-X syndrome, psychosis (but not mood disorder), pervasive\n developmental disorder and Tourette syndrome (although those with other tic\n disorders were not excluded). Ethical approval for the study was obtained from\n the Wales Multicentre Research Ethics Committee and written informed consent\n and assent were obtained from participating parents and children.\n[SUBTITLE] Assessments [SUBSECTION] Interviews were conducted by trained and supervised graduate and\n postdoctoral psychologists. The ‘parent’ version of the CAPA, a\n reliable, well-established semi-structured research diagnostic interview that\n assesses current symptom presence, was used to assess clinical symptoms of\n ADHD, oppositional defiant disorder, conduct disorder, anxiety disorder and\n mood disorders, including bipolar disorder. 
The interview section on\n hypomania/mania covers mood changes (elation and irritability) that have a\n duration of at least 1 h. If there is no mood change, criterion B\n hypomania/mania items are not assessed. The child version of the CAPA includes\n the same questions as the parent interview, although it does not assess\n self-reported ADHD\n symptoms.10 This\n measure was additionally used to interview those aged 12 years and over.\n Comorbidities (including bipolar disorder) were considered present if reported\n by either parent or\n child.11 To assess\n the ICD–10 and DSM–IV criteria of ADHD pervasiveness in more than\n one setting, reports of symptoms in school were obtained using teacher reports\n on the Child ADHD Teacher Telephone Interview or the Conners Teacher Rating\n Scale.12,13\nSymptoms and diagnoses according to DSM–IV and ICD–10 criteria\n were generated using information from the CAPA. All interviews were\n audiotaped, and interviewers were supervised weekly by an experienced\n clinician (A.T.). Reports of family psychiatric history were obtained for\n parents and biological siblings, by asking the parent about each parent and\n sibling in turn. The participating children had a total of 409 siblings; the\n number of siblings for each individual ranged from 0 to 6 (mean 2). Parents\n also completed questionnaires concerning demographic and family\n information.\nInterviews were conducted by trained and supervised graduate and\n postdoctoral psychologists. The ‘parent’ version of the CAPA, a\n reliable, well-established semi-structured research diagnostic interview that\n assesses current symptom presence, was used to assess clinical symptoms of\n ADHD, oppositional defiant disorder, conduct disorder, anxiety disorder and\n mood disorders, including bipolar disorder. The interview section on\n hypomania/mania covers mood changes (elation and irritability) that have a\n duration of at least 1 h. If there is no mood change, criterion B\n hypomania/mania items are not assessed. The child version of the CAPA includes\n the same questions as the parent interview, although it does not assess\n self-reported ADHD\n symptoms.10 This\n measure was additionally used to interview those aged 12 years and over.\n Comorbidities (including bipolar disorder) were considered present if reported\n by either parent or\n child.11 To assess\n the ICD–10 and DSM–IV criteria of ADHD pervasiveness in more than\n one setting, reports of symptoms in school were obtained using teacher reports\n on the Child ADHD Teacher Telephone Interview or the Conners Teacher Rating\n Scale.12,13\nSymptoms and diagnoses according to DSM–IV and ICD–10 criteria\n were generated using information from the CAPA. All interviews were\n audiotaped, and interviewers were supervised weekly by an experienced\n clinician (A.T.). Reports of family psychiatric history were obtained for\n parents and biological siblings, by asking the parent about each parent and\n sibling in turn. The participating children had a total of 409 siblings; the\n number of siblings for each individual ranged from 0 to 6 (mean 2). Parents\n also completed questionnaires concerning demographic and family\n information.", "The sample consisted of the first 200 participants in a larger ongoing\n genetic study of ADHD. Participants were consecutively recruited from\n community child and adolescent psychiatry and paediatric out-patient clinics\n in South Wales and other parts of the UK. 
Each child had a clinical diagnosis\n of ADHD or was undergoing assessment for a diagnosis. No pre-selection\n strategy was used apart from the exclusion criteria listed below and the\n willingness of families to participate. Psychopathology was assessed using the\n Child and Adolescent Psychiatric Assessment (CAPA) research diagnostic\n interview,5 which\n was used to confirm that all participants met DSM–III–R or\n DSM–IV criteria for ADHD or ICD–10 criteria for hyperkinetic\n disorder.6–8\n To meet study inclusion criteria the participant had to be living with at\n least one biological parent, be British, White and have a full-scale IQ of 70\n or above, assessed using the Wechsler Intelligence Scale for Children version\n IV.9 Exclusion\n criteria comprised any major neurological or genetic condition such as\n epilepsy or fragile-X syndrome, psychosis (but not mood disorder), pervasive\n developmental disorder and Tourette syndrome (although those with other tic\n disorders were not excluded). Ethical approval for the study was obtained from\n the Wales Multicentre Research Ethics Committee and written informed consent\n and assent were obtained from participating parents and children.", "Interviews were conducted by trained and supervised graduate and\n postdoctoral psychologists. The ‘parent’ version of the CAPA, a\n reliable, well-established semi-structured research diagnostic interview that\n assesses current symptom presence, was used to assess clinical symptoms of\n ADHD, oppositional defiant disorder, conduct disorder, anxiety disorder and\n mood disorders, including bipolar disorder. The interview section on\n hypomania/mania covers mood changes (elation and irritability) that have a\n duration of at least 1 h. If there is no mood change, criterion B\n hypomania/mania items are not assessed. The child version of the CAPA includes\n the same questions as the parent interview, although it does not assess\n self-reported ADHD\n symptoms.10 This\n measure was additionally used to interview those aged 12 years and over.\n Comorbidities (including bipolar disorder) were considered present if reported\n by either parent or\n child.11 To assess\n the ICD–10 and DSM–IV criteria of ADHD pervasiveness in more than\n one setting, reports of symptoms in school were obtained using teacher reports\n on the Child ADHD Teacher Telephone Interview or the Conners Teacher Rating\n Scale.12,13\nSymptoms and diagnoses according to DSM–IV and ICD–10 criteria\n were generated using information from the CAPA. All interviews were\n audiotaped, and interviewers were supervised weekly by an experienced\n clinician (A.T.). Reports of family psychiatric history were obtained for\n parents and biological siblings, by asking the parent about each parent and\n sibling in turn. The participating children had a total of 409 siblings; the\n number of siblings for each individual ranged from 0 to 6 (mean 2). Parents\n also completed questionnaires concerning demographic and family\n information.", "The 200 participants, 170 (85%) of whom were male and 30 (15%) female, were\n aged 6–18 years (mean 11.15, s.d. = 2.95; median 11.00, s.d. 
= 2.95).\n The social class of 176 families (24 families had missing data) was determined\n by classifying the occupation of the main wage-earner in the family using the\n UK Standard Occupation Classification 2000\n (www.ons.gov.uk).\n The families were then split into three social class categories: high social\n class (12.5%, n = 22), comprising families from professional and\n managerial jobs; medium social class (36.4%, n = 64), comprising\n those with skilled occupations (manual, non-manual) and partially skilled\n workers; and low social class (51.1%, n = 90), comprising those with\n unskilled jobs and unemployed or unclassified individuals.\n[SUBTITLE] ADHD [SUBSECTION] The mean number of ADHD symptoms (from a possible 18) was 15.97 (s.d. =\n 1.98). All participants met criteria for at least one of the following: a\n DSM–III–R diagnosis of ADHD (99.0%), an ICD–10 diagnosis of\n hyperkinetic disorder (27.5% hyperkinetic disorder, 34.5% hyperkinetic conduct\n disorder) or a DSM–IV diagnosis of ADHD (78.0% ADHD combined type, 10.5%\n hyperactive/impulsive type, 6.5% inattentive type).\nThe mean number of ADHD symptoms (from a possible 18) was 15.97 (s.d. =\n 1.98). All participants met criteria for at least one of the following: a\n DSM–III–R diagnosis of ADHD (99.0%), an ICD–10 diagnosis of\n hyperkinetic disorder (27.5% hyperkinetic disorder, 34.5% hyperkinetic conduct\n disorder) or a DSM–IV diagnosis of ADHD (78.0% ADHD combined type, 10.5%\n hyperactive/impulsive type, 6.5% inattentive type).\n[SUBTITLE] Comorbidities [SUBSECTION] Oppositional defiant disorder according to DSM–IV criteria was\n diagnosed in 42.0% of the sample and conduct disorder in 14.5%. Four\n participants (2.0%) received a diagnosis of generalised anxiety disorder. One\n (0.5%) participant had social anxiety disorder and 2 (1.0%) met criteria for\n separation anxiety disorder. Three children (1.5%) met criteria for major\n depressive episode. Twenty (10.0%) had a tic disorder. The comorbidity rates\n of oppositional defiant and conduct disorders were similar to those found in\n other European ADHD\n studies,14,15\n but rates of anxiety and depression were lower than in some US clinic-based\n studies.16\nOppositional defiant disorder according to DSM–IV criteria was\n diagnosed in 42.0% of the sample and conduct disorder in 14.5%. Four\n participants (2.0%) received a diagnosis of generalised anxiety disorder. One\n (0.5%) participant had social anxiety disorder and 2 (1.0%) met criteria for\n separation anxiety disorder. Three children (1.5%) met criteria for major\n depressive episode. Twenty (10.0%) had a tic disorder. The comorbidity rates\n of oppositional defiant and conduct disorders were similar to those found in\n other European ADHD\n studies,14,15\n but rates of anxiety and depression were lower than in some US clinic-based\n studies.16\n[SUBTITLE] Bipolar disorder [SUBSECTION] When both DSM–IV and ICD–10 research diagnostic criteria were\n applied, only one child, a 9-year-old boy, met ICD–10 criteria for\n hypomania and DSM–IV criteria for bipolar disorder not otherwise\n specified (NOS). He had expansive mood (a criterion A symptom) that lasted for\n 2 weeks and three criterion B symptoms (talkativeness, decreased need for\n sleep and distractibility during the mood disturbance) needed to diagnose an\n episode of hypomania and bipolar disorder. The CAPA requires that the\n criterion B symptoms represent a change from usual (i.e. it does not allow\n simply double-coding of ADHD items). 
For confirmation of the diagnoses, three\n clinicians (one masked to the possible diagnosis) reviewed the audiotaped\n interview. All gave a clinical diagnosis of bipolar disorder NOS, because\n hypomanic episodes (not mania) with no depressive episode were reported. The\n child had a family psychiatric history of mood disorder, as his mother had\n bipolar disorder. No mental health problem was reported for the father. The\n affected child met DSM–IV criteria for combined type ADHD and\n ICD–10 criteria for hyperkinetic conduct disorder. There was no case of\n rapid cycling.\nWhen both DSM–IV and ICD–10 research diagnostic criteria were\n applied, only one child, a 9-year-old boy, met ICD–10 criteria for\n hypomania and DSM–IV criteria for bipolar disorder not otherwise\n specified (NOS). He had expansive mood (a criterion A symptom) that lasted for\n 2 weeks and three criterion B symptoms (talkativeness, decreased need for\n sleep and distractibility during the mood disturbance) needed to diagnose an\n episode of hypomania and bipolar disorder. The CAPA requires that the\n criterion B symptoms represent a change from usual (i.e. it does not allow\n simply double-coding of ADHD items). For confirmation of the diagnoses, three\n clinicians (one masked to the possible diagnosis) reviewed the audiotaped\n interview. All gave a clinical diagnosis of bipolar disorder NOS, because\n hypomanic episodes (not mania) with no depressive episode were reported. The\n child had a family psychiatric history of mood disorder, as his mother had\n bipolar disorder. No mental health problem was reported for the father. The\n affected child met DSM–IV criteria for combined type ADHD and\n ICD–10 criteria for hyperkinetic conduct disorder. There was no case of\n rapid cycling.\n[SUBTITLE] Mania and hypomania symptoms [SUBSECTION] Symptoms of mania and hypomania, including mood disturbance of less than 4\n days, were found in only one child, who met criteria for bipolar disorder, and\n in 19 individuals who reported persistent irritability\n (Table 1).\n\n\nTable 1\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n\n\n\n\n Symptom\n \nn (%)\n \n\n\n\n\n Expansive mood\n \n 1 (0.5)\n \n\n\n Irritable mood\n \n 19 (9.5)\n \n\n\n More talkativea\n 1 (0.5)\n \n\n\n Flight of ideasa\n 0 (0.0)\n \n\n\n Pressure of speecha\n 0 (0.0)\n \n\n\n Increased goal-directed activity (motor pressure)a\n 0 (0.0)\n \n\n\n Psychomotor agitationa\n 0 (0.0)\n \n\n\n Decreased need for sleepa\n 1 (0.5)\n \n\n\n Distractibilitya\n 1 (0.5)\n \n\n\n Grandiositya\n 0 (0.0)\n \n\n\n Reckless behavioura\n 0 (0)\n \n\n\n Increase in adaptive activitya\n 0 (0)\n \n\n\n\n\n a. Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n \n\n\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n a. 
Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n \nSymptoms of mania and hypomania, including mood disturbance of less than 4\n days, were found in only one child, who met criteria for bipolar disorder, and\n in 19 individuals who reported persistent irritability\n (Table 1).\n\n\nTable 1\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n\n\n\n\n Symptom\n \nn (%)\n \n\n\n\n\n Expansive mood\n \n 1 (0.5)\n \n\n\n Irritable mood\n \n 19 (9.5)\n \n\n\n More talkativea\n 1 (0.5)\n \n\n\n Flight of ideasa\n 0 (0.0)\n \n\n\n Pressure of speecha\n 0 (0.0)\n \n\n\n Increased goal-directed activity (motor pressure)a\n 0 (0.0)\n \n\n\n Psychomotor agitationa\n 0 (0.0)\n \n\n\n Decreased need for sleepa\n 1 (0.5)\n \n\n\n Distractibilitya\n 1 (0.5)\n \n\n\n Grandiositya\n 0 (0.0)\n \n\n\n Reckless behavioura\n 0 (0)\n \n\n\n Increase in adaptive activitya\n 0 (0)\n \n\n\n\n\n a. Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n \n\n\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n a. Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n \n[SUBTITLE] Family history of bipolar disorder [SUBSECTION] A family history of bipolar disorder was absent in fathers and siblings,\n but was reported by three mothers (Table\n 2).\n\n\nTable 2\n\nReported psychiatric history in parents and siblings\n\n\n\n\n\n Mothers\n \n Fathers\n \n Siblings\n \n\n\n\nn = 200\n \nn = 200\n \nn = 409\n \n\n\n Diagnosisa\nn (%)b\nn (%)b\nn (%)b\n\n\n\n\n ADHD\n \n 2 (1.0)\n \n 5 (2.5)\n \n 46 (11.2)\n \n\n\n Depression\n \n 49 (24.5)\n \n 18 (9.0)\n \n 2 (0.5)\n \n\n\n Bipolar disorder\n \n 3 (1.5)\n \n 0 (0.0)\n \n 0 (0.0)\n \n\n\n Schizophrenia\n \n 1 (0.5)\n \n 1 (0.5)\n \n 1 (0.2)\n \n\n\n\n\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n \n\n\n\nReported psychiatric history in parents and siblings\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n \nA family history of bipolar disorder was absent in fathers and siblings,\n but was reported by three mothers (Table\n 2).\n\n\nTable 2\n\nReported psychiatric history in parents and siblings\n\n\n\n\n\n Mothers\n \n Fathers\n \n Siblings\n \n\n\n\nn = 200\n \nn = 200\n \nn = 409\n \n\n\n Diagnosisa\nn (%)b\nn (%)b\nn (%)b\n\n\n\n\n ADHD\n \n 2 (1.0)\n \n 5 (2.5)\n \n 46 (11.2)\n \n\n\n Depression\n \n 49 (24.5)\n \n 18 (9.0)\n \n 2 (0.5)\n \n\n\n Bipolar disorder\n \n 3 (1.5)\n \n 0 (0.0)\n \n 0 (0.0)\n \n\n\n Schizophrenia\n \n 1 (0.5)\n \n 1 (0.5)\n \n 1 (0.2)\n \n\n\n\n\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n \n\n\n\nReported psychiatric history in parents and siblings\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n ", "The mean number of ADHD symptoms (from a possible 18) was 15.97 (s.d. =\n 1.98). 
All participants met criteria for at least one of the following: a\n DSM–III–R diagnosis of ADHD (99.0%), an ICD–10 diagnosis of\n hyperkinetic disorder (27.5% hyperkinetic disorder, 34.5% hyperkinetic conduct\n disorder) or a DSM–IV diagnosis of ADHD (78.0% ADHD combined type, 10.5%\n hyperactive/impulsive type, 6.5% inattentive type).", "Oppositional defiant disorder according to DSM–IV criteria was\n diagnosed in 42.0% of the sample and conduct disorder in 14.5%. Four\n participants (2.0%) received a diagnosis of generalised anxiety disorder. One\n (0.5%) participant had social anxiety disorder and 2 (1.0%) met criteria for\n separation anxiety disorder. Three children (1.5%) met criteria for major\n depressive episode. Twenty (10.0%) had a tic disorder. The comorbidity rates\n of oppositional defiant and conduct disorders were similar to those found in\n other European ADHD\n studies,14,15\n but rates of anxiety and depression were lower than in some US clinic-based\n studies.16", "When both DSM–IV and ICD–10 research diagnostic criteria were\n applied, only one child, a 9-year-old boy, met ICD–10 criteria for\n hypomania and DSM–IV criteria for bipolar disorder not otherwise\n specified (NOS). He had expansive mood (a criterion A symptom) that lasted for\n 2 weeks and three criterion B symptoms (talkativeness, decreased need for\n sleep and distractibility during the mood disturbance) needed to diagnose an\n episode of hypomania and bipolar disorder. The CAPA requires that the\n criterion B symptoms represent a change from usual (i.e. it does not allow\n simply double-coding of ADHD items). For confirmation of the diagnoses, three\n clinicians (one masked to the possible diagnosis) reviewed the audiotaped\n interview. All gave a clinical diagnosis of bipolar disorder NOS, because\n hypomanic episodes (not mania) with no depressive episode were reported. The\n child had a family psychiatric history of mood disorder, as his mother had\n bipolar disorder. No mental health problem was reported for the father. The\n affected child met DSM–IV criteria for combined type ADHD and\n ICD–10 criteria for hyperkinetic conduct disorder. There was no case of\n rapid cycling.", "Symptoms of mania and hypomania, including mood disturbance of less than 4\n days, were found in only one child, who met criteria for bipolar disorder, and\n in 19 individuals who reported persistent irritability\n (Table 1).\n\n\nTable 1\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n\n\n\n\n Symptom\n \nn (%)\n \n\n\n\n\n Expansive mood\n \n 1 (0.5)\n \n\n\n Irritable mood\n \n 19 (9.5)\n \n\n\n More talkativea\n 1 (0.5)\n \n\n\n Flight of ideasa\n 0 (0.0)\n \n\n\n Pressure of speecha\n 0 (0.0)\n \n\n\n Increased goal-directed activity (motor pressure)a\n 0 (0.0)\n \n\n\n Psychomotor agitationa\n 0 (0.0)\n \n\n\n Decreased need for sleepa\n 1 (0.5)\n \n\n\n Distractibilitya\n 1 (0.5)\n \n\n\n Grandiositya\n 0 (0.0)\n \n\n\n Reckless behavioura\n 0 (0)\n \n\n\n Increase in adaptive activitya\n 0 (0)\n \n\n\n\n\n a. Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n \n\n\n\nDistribution of DSM–IV and ICD–10 symptoms of mania and\n hypomania (n = 200)\n a. 
Symptom presence rated if there is mood disturbance lasting ≥1 h, not\n necessarily occurring exclusively during the mood episode.\n ", "A family history of bipolar disorder was absent in fathers and siblings,\n but was reported by three mothers (Table\n 2).\n\n\nTable 2\n\nReported psychiatric history in parents and siblings\n\n\n\n\n\n Mothers\n \n Fathers\n \n Siblings\n \n\n\n\nn = 200\n \nn = 200\n \nn = 409\n \n\n\n Diagnosisa\nn (%)b\nn (%)b\nn (%)b\n\n\n\n\n ADHD\n \n 2 (1.0)\n \n 5 (2.5)\n \n 46 (11.2)\n \n\n\n Depression\n \n 49 (24.5)\n \n 18 (9.0)\n \n 2 (0.5)\n \n\n\n Bipolar disorder\n \n 3 (1.5)\n \n 0 (0.0)\n \n 0 (0.0)\n \n\n\n Schizophrenia\n \n 1 (0.5)\n \n 1 (0.5)\n \n 1 (0.2)\n \n\n\n\n\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n \n\n\n\nReported psychiatric history in parents and siblings\n ADHD, attention-deficit hyperactivity disorder.\n \n a. Diagnoses are not mutually exclusive as many participants had more than one\n diagnosis.\n \n b. Percentage of whole sample.\n ", "This study is, to our knowledge, the first to examine the current\n prevalence of bipolar affective disorder and hypomania/mania symptoms in a UK\n sample of children and adolescents with a diagnosis of ADHD. In this study the\n prevalence of bipolar disorder or hypomania was low (0.5% of the sample). A\n recent epidemiological study of 5326 UK children aged 8–19 years that\n used a different diagnostic interview, the Development and Well-Being\n Assessment, found that only 0.1% met DSM–IV criteria for bipolar\n disorder.17 A\n similar rate of bipolar disorder (0.1%) was reported from the USA in the Great\n Smoky Mountains epidemiological study of children aged 9–13 years, which\n used the CAPA.18\n Thus our estimate suggests that there is no greatly elevated level of\n unidentified bipolar disorder in children with ADHD who are currently referred\n to district child psychiatry and paediatric out-patient settings. The overall\n rate of a family history of bipolar disorder was also within population\n prevalence estimates of this disorder (0.5–1.5% in\n adults).19\nOur low rates of bipolar disorder differ substantially from some studies in\n the USA, although reported rates vary widely. These studies have been reviewed\n extensively\n elsewhere.20 There\n are a number of possible reasons for this variation, including sample\n selection (specialist clinic v. routine out-patient services), the\n age group studied, whether or not researchers started with index cases of\n bipolar disorder or ADHD, and differences in diagnostic practice. Our sample\n consisted of routine cases from child and adolescent out-patient psychiatry\n and paediatric services rather than specialist clinics, so we might expect\n lower rates of bipolar disorder than in studies of samples from specialist\n centres.2 The\n variation in ascertainment across different studies is likely to have\n contributed to the varying prevalence rates of bipolar disorder in ADHD.\n Another possibility is that our sample had not yet passed through the age of\n risk (the median age in our sample was 11 years). 
For example, Biederman\n et al found a slightly higher rate of bipolar disorder over time in a\n cohort of 140 boys with ADHD after 4 years – 11% at baseline (mean age\n 10.7 years) and an additional 12% of new cases at follow-up (mean age 14.6\n years).2 We have to\n consider that the Biederman study was a longitudinal study and ours was\n cross-sectional, although the baseline prevalence rate in the former sample\n was still much higher than that found in our UK sample.\nOne key issue is that the reported relationship between ADHD and bipolar\n disorder also appears to vary depending on the diagnosis of the index cases.\n Thus, in general, rates of ADHD in those with bipolar disorder (especially\n early-onset disorder) appear to be much higher than rates of bipolar disorder\n in those with\n ADHD.21,22\n There is reasonably consistent evidence suggesting high rates of ADHD in\n samples of bipolar disorder. The Course and Outcome of Bipolar Illness in\n Youth study, a cohort study involving children and adolescents with\n bipolar-spectrum disorders, found the rate of ADHD to be\n 58.6%.22 Another\n study showed an increased rate of ADHD in youths with bipolar disorder (32%)\n compared with adults with bipolar disorder\n (3%).23 One\n possibility is that there is a subgroup of individuals with early-onset\n bipolar disorder accompanied by comorbid ADHD. Alternatively, ADHD could be a\n herald of later bipolar disorder in a subgroup of genetically susceptible\n individuals, indexed by a strong family history of bipolar disorder. However,\n the evidence to date suggests that the majority of individuals with ADHD do\n not develop bipolar disorder or show a relationship with bipolar disorder,\n given that family studies of ADHD do not seem to show elevated rates of\n bipolar disorder in\n relatives.24\nAnother issue is variation in diagnostic practice. A recent study into the\n age at onset of bipolar disorder in the USA and Europe (in children without\n ADHD) showed that the rate of childhood-onset bipolar disorder in the USA is\n double that in\n Europe.25 An\n investigation of national trends in the USA has also shown a recent rapid\n increase in the rate of diagnosis of bipolar\n disorder.23 This\n has raised the possibility that child psychiatrists in the UK are missing\n bipolar disorder. An alternative explanation is that there are transatlantic\n differences in diagnosing bipolar disorder and one study does indeed suggest\n that UK clinicians interpret hypomania symptoms in children differently from\n their US\n counterparts.26\n However, our study suggests that when a standardised research diagnostic\n interview and ICD–10 or DSM–IV criteria for bipolar disorder are\n used in the UK, the rate of bipolar disorder in out-patients with ADHD is low.\n Rates of bipolar disorder in the USA that have also been based on standardised\n interviews and diagnostic criteria have varied\n widely.2,20\n It is not known whether the exact type of diagnostic instrument used is\n important because many current diagnostic instruments, including the CAPA,\n have yet to be evaluated or compared with regard to sensitivity and\n specificity specifically for detecting bipolar disorder. Another possibility\n is that referral patterns for ADHD vary in different countries. For example,\n rates of comorbid depression and ADHD inattentive type appear to be lower in\n our sample and in a UK community study (Ford et\n al27) than in\n US studies (e.g. 
that by Elia et\n al.16) Many\n studies of bipolar disorder have not reported the rates of ADHD subtypes, and\n this needs to be considered in the future because it is possible that the\n relationship between ADHD and bipolar disorder varies for the different ADHD\n subtypes. A final explanation of the observed variation in prevalence rates of\n bipolar disorder is that there is genuine geographical variation for unknown\n reasons.\n[SUBTITLE] Abnormal irritable mood [SUBSECTION] Some researchers put much emphasis on the importance of irritability when\n diagnosing bipolar disorder in children and adolescents, arguing that the\n presentation and course are different from adult bipolar\n disorder.28 Most\n clinicians and researchers suggest that irritability needs to be episodic in\n nature. However, other researchers have suggested that it is the severity of\n irritability that distinguishes bipolar disorder, rather than its episodic\n nature.29 Although\n in our sample 19 individuals reported persistent irritable mood none met the\n level of severity required by Mick et\n al,29 and\n similarly none of these children had any criterion B symptom, although given\n that many of these symptoms (e.g. distractibility, increased activity,\n talkativeness) are almost inevitably present in ADHD, we only included these\n symptoms if they were reported as increased during the episodes of the\n irritable mood (i.e. episodic). Other investigators have suggested that these\n overlapping symptoms are not of diagnostic importance as they failed to\n differentiate between ADHD and bipolar disorder in one study, and suggest that\n elated mood and grandiosity are of greater diagnostic importance to bipolar\n disorder than is irritability, as they found the latter to be a shared symptom\n of ADHD and bipolar disorder, and of higher prevalence in people with ADHD but\n no mood\n disorder.30\nSome researchers put much emphasis on the importance of irritability when\n diagnosing bipolar disorder in children and adolescents, arguing that the\n presentation and course are different from adult bipolar\n disorder.28 Most\n clinicians and researchers suggest that irritability needs to be episodic in\n nature. However, other researchers have suggested that it is the severity of\n irritability that distinguishes bipolar disorder, rather than its episodic\n nature.29 Although\n in our sample 19 individuals reported persistent irritable mood none met the\n level of severity required by Mick et\n al,29 and\n similarly none of these children had any criterion B symptom, although given\n that many of these symptoms (e.g. distractibility, increased activity,\n talkativeness) are almost inevitably present in ADHD, we only included these\n symptoms if they were reported as increased during the episodes of the\n irritable mood (i.e. episodic). Other investigators have suggested that these\n overlapping symptoms are not of diagnostic importance as they failed to\n differentiate between ADHD and bipolar disorder in one study, and suggest that\n elated mood and grandiosity are of greater diagnostic importance to bipolar\n disorder than is irritability, as they found the latter to be a shared symptom\n of ADHD and bipolar disorder, and of higher prevalence in people with ADHD but\n no mood\n disorder.30\n[SUBTITLE] Limitations [SUBSECTION] Like all studies, ours has limitations that must be considered, including\n ascertainment issues. 
First, excluding cases with psychosis would have an impact on the prevalence of bipolar disorder in our sample, as mania may present with psychotic symptoms. However, the contentious research and clinical issue at present is not to do with psychosis that is likely to be picked up but rather with the overlap of bipolar disorder and ADHD. We also cannot rule out the possibility that clinicians pre-selected ADHD cases without mood disturbance, although this was not an exclusion criterion and informal enquiry suggested that this approach had not been adopted by clinics. Another limitation is that the CAPA assesses current symptoms. Also, some have suggested that as most child and adolescent research diagnostic interviews are based on stringent interpretations of DSM and ICD criteria, current instruments might underdetect bipolar disorder.31 As all participants were White, we cannot generalise our findings to other ethnic groups; our sample size was relatively small, and participants might not have been old enough for us to detect increased rates of bipolar disorder. However, the sample ages are representative of current UK child and adolescent mental health service attenders with ADHD. Our findings might have been different in older adolescents and young adults. In line with other ADHD studies our sample was predominately male (85%), so further investigations might be needed to generalise to the female population. Finally, the family psychiatric history was reported by family members and not formally obtained from medical notes or by direct interview with each family member, and reported rates of disorder may therefore be lower than the true estimate.
In conclusion, in a UK sample of children with ADHD, current bipolar affective disorder and bipolar symptoms were uncommon.",
"Some researchers put much emphasis on the importance of irritability when diagnosing bipolar disorder in children and adolescents, arguing that the presentation and course are different from adult bipolar disorder.28 Most clinicians and researchers suggest that irritability needs to be episodic in nature. However, other researchers have suggested that it is the severity of irritability that distinguishes bipolar disorder, rather than its episodic nature.29 Although in our sample 19 individuals reported persistent irritable mood none met the level of severity required by Mick et al,29 and similarly none of these children had any criterion B symptom, although given that many of these symptoms (e.g. distractibility, increased activity, talkativeness) are almost inevitably present in ADHD, we only included these symptoms if they were reported as increased during the episodes of the irritable mood (i.e. episodic). Other investigators have suggested that these overlapping symptoms are not of diagnostic importance as they failed to differentiate between ADHD and bipolar disorder in one study, and suggest that elated mood and grandiosity are of greater diagnostic importance to bipolar disorder than is irritability, as they found the latter to be a shared symptom of ADHD and bipolar disorder, and of higher prevalence in people with ADHD but no mood disorder.30", "Like all studies, ours has limitations that must be considered, including ascertainment issues. First, excluding cases with psychosis would have an impact on the prevalence of bipolar disorder in our sample, as mania may present with psychotic symptoms. However, the contentious research and clinical issue at present is not to do with psychosis that is likely to be picked up but rather with the overlap of bipolar disorder and ADHD. We also cannot rule out the possibility that clinicians pre-selected ADHD cases without mood disturbance, although this was not an exclusion criterion and informal enquiry suggested that this approach had not been adopted by clinics. Another limitation is that the CAPA assesses current symptoms. Also, some have suggested that as most child and adolescent research diagnostic interviews are based on stringent interpretations of DSM and ICD criteria, current instruments might underdetect bipolar disorder.31 As all participants were White, we cannot generalise our findings to other ethnic groups; our sample size was relatively small, and participants might not have been old enough for us to detect increased rates of bipolar disorder. However, the sample ages are representative of current UK child and adolescent mental health service attenders with ADHD. Our findings might have been different in older adolescents and young adults. In line with other ADHD studies our sample was predominately male (85%), so further investigations might be needed to generalise to the female population. 
Finally, the family psychiatric history was reported by family members and not formally obtained from medical notes or by direct interview with each family member, and reported rates of disorder may therefore be lower than the true estimate.
In conclusion, in a UK sample of children with ADHD, current bipolar affective disorder and bipolar symptoms were uncommon.", "The study was supported by a grant from the Wellcome Trust on the genetics of attention-deficit hyperactivity disorder to A.T., M. O’Donovan, M. Owen, P. Holmans, M.B.M. van den Bree (Cardiff University) and L. Kent (St Andrew’s University)." ]
[ "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Differential changes in expression of intestinal antimicrobial peptide genes during Ascaris lumbricoides infection in Zambian adults do not respond to helminth eradication.
21357944
Intestinal helminthiasis modulates immune responses to vaccines and environmental allergens. To explore the impact on intestinal host defense, we assessed expression of antimicrobial peptide genes, together with T cell subset markers and cytokines, in patients with ascariasis before and after treatment.
BACKGROUND
Case patients (n = 27) and control subjects (n = 44) underwent enteroscopy for collection of jejunal biopsy specimens, which were used in quantitative, real-time reverse-transcription polymerase chain reaction for a range of host defense genes; blood samples were also analyzed simultaneously.
METHODS
The level of gene expression (mRNA) of HD5, hBD1, and LL-37 was lower in case patients than in control subjects, and the level of expression of HD6 was increased. However, after successful eradication, there was no trend to values seen in control subjects. Helminthiasis was associated with increased intestinal expression of the Th1 genes T-bet and interferon-γ. In peripheral blood mononuclear cells (PBMCs), a mixed profile of T cell markers and cytokines was increased. Ascaris-induced down-regulation of HD5 was observed in individuals with higher RORγt expression in PBMCs, but we found no evidence that this was mediated by circulating interleukin-22.
RESULTS
Human ascariasis was associated with changes in antimicrobial peptide gene expression and immunological markers. Such changes may have implications for susceptibility to infectious disease and responsiveness to oral vaccines in tropical populations.
CONCLUSIONS
[ "Adult", "Animals", "Anthelmintics", "Antimicrobial Cationic Peptides", "Ascariasis", "Ascaris lumbricoides", "Biomarkers", "Case-Control Studies", "Cytokines", "Female", "Gene Expression Regulation", "Humans", "Interferon-gamma", "Intestinal Mucosa", "Leukocytes, Mononuclear", "Male", "Middle Aged", "Nuclear Receptor Subfamily 1, Group F, Member 3", "T-Box Domain Proteins", "T-Lymphocyte Subsets", "Young Adult", "Zambia" ]
3080889
null
null
null
null
RESULTS
[SUBTITLE] Study Population in Misisi, Lusaka [SUBSECTION] We recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients. Characteristics of the Study Population and Helminthiasis Diagnosis NOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation. P value determined by the Fisher exact test. P value determined by the Kruskal-Wallis test. We recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients. Characteristics of the Study Population and Helminthiasis Diagnosis NOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation. P value determined by the Fisher exact test. P value determined by the Kruskal-Wallis test. [SUBTITLE] Gastrointestinal Expression of AMPs Differed in Case Patients and Control Subjects [SUBSECTION] We characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). 
hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown). Ascariasis was Associated With Attenuated Expression of hBD1 and LL-37 NOTE. hBD, human β−defensin; NE, no detectable mRNA expressed. P = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups. Differential expression of α-defensins in the intestinal mucosa during ascariasis. mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range) We characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown). Ascariasis was Associated With Attenuated Expression of hBD1 and LL-37 NOTE. hBD, human β−defensin; NE, no detectable mRNA expressed. P = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups. Differential expression of α-defensins in the intestinal mucosa during ascariasis. 
mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range) [SUBTITLE] Gastrointestinal Expression of Th1 Phenotype Increased During Helminth Infection [SUBSECTION] We measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown). Association between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. mRNA expression is shown relative to GAPDH We measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown). Association between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. 
mRNA expression is shown relative to GAPDH [SUBTITLE] Helminth Infection Was Associated with Up-Regulated Peripheral Blood Markers of All Major T cell Subsets and Some Cytokines [SUBSECTION] To compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects. Association between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range. To compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects. Association between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range. [SUBTITLE] Correlations Between Peripheral Blood T cell Subset Markers and Cytokines [SUBSECTION] To explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. 
Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). To explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). [SUBTITLE] HD5 Suppression During Helminth Infection Was Associated With Peripheral Blood RORγt Expression [SUBSECTION] To explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status. Association between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. *P = .01. To explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). 
The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status. Association between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. *P = .01. [SUBTITLE] Serum Cytokines Representative of Major T cell Subsets Were All Down-Regulated in Helminth Infection and Did Not Explain HD5 Suppression [SUBSECTION] To explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β. Serum cytokine concentrations in helminth infection. Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001. To explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). 
IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β. Serum cytokine concentrations in helminth infection. Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001.
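The comparisons summarized in this results text rest on two simple computations: a per-patient fold change in transcript counts after eradication (used to dichotomize subjects into those with up- or down-regulation of a gene during infection) and a paired nonparametric comparison of pre- versus posttreatment values. The study itself used Stata and GraphPad Prism, so the short Python sketch below is only an illustration of the same logic; every number and variable name in it is invented, not taken from the study data.

```python
# Illustrative sketch only (the study used Stata/GraphPad Prism, not Python).
# All transcript counts below are invented; they are not the study data.
import numpy as np
from scipy import stats

# transcripts/ug total RNA for one gene (e.g. HD5) in the same patients,
# measured during infection (pre) and after helminth eradication (post)
pre  = np.array([1200.0, 450.0, 8800.0, 300.0, 5100.0, 950.0])
post = np.array([ 900.0, 700.0, 9400.0, 150.0, 2600.0, 1100.0])

# fold change relative to the pretreatment value: >1 means expression rose
# once the worms were cleared, interpreted in the text as suppression
# of that gene during infection
fold_change = post / pre
suppressed_during_infection = fold_change > 1
print("fold changes:", np.round(fold_change, 2))
print("proportion suppressed during infection:",
      suppressed_during_infection.mean())

# paired nonparametric comparison of pre- vs posttreatment values
w_stat, p_value = stats.wilcoxon(pre, post)
print(f"Wilcoxon matched-pairs: W={w_stat:.1f}, P={p_value:.3f}")
```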
null
null
[ "Study Setting", "Determination of Helminth Infection and Treatment", "Biopsy Sample Collection and PBMC Isolation", "RNA Extraction and cDNA Preparation", "Quantitative Real-time RT- PCR", "Cytokine ELISAs", "Data Analysis", "Study Population in Misisi, Lusaka", "Gastrointestinal Expression of AMPs Differed in Case Patients and Control Subjects", "Gastrointestinal Expression of Th1 Phenotype Increased During Helminth Infection", "Helminth Infection Was Associated with Up-Regulated Peripheral Blood Markers of All Major T cell Subsets and Some Cytokines", "Correlations Between Peripheral Blood T cell Subset Markers and Cytokines", "HD5 Suppression During Helminth Infection Was Associated With Peripheral Blood RORγt Expression", "Serum Cytokines Representative of Major T cell Subsets Were All Down-Regulated in Helminth Infection and Did Not Explain HD5 Suppression", "Supplementary Data", "Funding" ]
[ "Study participants were recruited from Misisi township in Lusaka, Zambia. This is an unplanned settlement south of Lusaka with poor sanitation and inadequate hygiene facilities. Adults (age, ≥18 years) were recruited for the study if they were resident in a defined sector of Misisi and gave informed consent. Potential participants were excluded if they were pregnant or lactating, had experienced diarrhea ≤1 month before planned participation, or had taken antibiotics or nonsteroidal anti-inflammatory drugs in the same period. Prior work in this population revealed that 17.4% of human immunodeficiency virus (HIV)–negative adults were infected with A. lumbricoides, compared with 13.1% of HIV-positive individuals [25]. Ethics approval was obtained from the University of Zambia Research Ethics Committee (007-10-07).", "All participants submitted 3 stool samples over 3–5-day periods that were screened for the presence of eggs using the Kato-Katz technique. If helminth infection was found, the persons were designated as case patients, and they were designated as control subjects if not. Depending on the parasite species detected, infected participants were treated with albendazole (Zentel; GSK; 400 mg twice daily for 3 days), praziquantel (Biltricide; Bayer Pharmaceuticals Corporation; 400 mg/kg in 3 divided doses on 1 day), and/or ivermectin (Stromectol; MSD; 9 mg [∼200 μg/kg] as a single dose). Stool samples were collected after treatment and checked for infection to confirm eradication of worms. Ten mililiters of blood was collected prior to endoscopy to perform a complete blood cell count (including eosinophil count), to determine HIV status, and to perform a CD4 cell count if the subject was HIV seropositive, and serum was separated and stored at -80°C for cytokine enzyme-linked immunosorbent assay (ELISA).", "Participants underwent enteroscopy with a fiberoptic enteroscope (Olympus SIF-10) under sedation to collect biopsy samples of the jejunum, followed immediately by treatment described above. An additional set of biopsies was performed either 7 or 14 days after initiation of treatment. Samples were immediately suspended in Tri Reagent (Sigma) and frozen at -80°C for RNA extraction. An additional 20 mL of venous blood was collected prior to endoscopy in EDTA tubes (BD Biosciences) before and after treatment. PBMCs were separated using Ficoll-Paque Premium (GE Healthcare) and washed once in phosphate-buffered saline (Sigma), and 1 × 107 cells were resuspended in Tri Reagent (Sigma) and stored at -20°C ready for RNA extraction.", "RNA was extracted using the phenol-chloroform extraction method, as described elsewhere [26]. In brief, all gut biopsy samples were broken up using a mini-homogenizer and sterilized pestles and stored in fresh sterile molecular biology grade (MBG) water (Sigma). RNA was subjected to DNase treatment using RQ1 DNase (Promega, UK) in the presence of RNA ribonuclease inhibitor (Promega). DNase-treated RNA was then extracted using phenol-chloroform-isoamyl alcohol (ratio, 25:24:1; Sigma) and quantified by spectroscopy. cDNA was prepared using standard protocols.", "Primers for the CD4+ T cell transcription factors RORγt, T-bet, GATA-3, and Foxp3 and the cytokines IFN-γ, transforming growth factor (TGF)–β, tumor necrosis factor (TNF)–α, IL-4, IL-5, and IL-10 were generated using Primer 3 software (http://frodo.wi.mit.edu/primer3/input.htm) and synthesized by Sigma. (For sequences, see http://www.icms.qmul.ac.uk/Profiles/Digestive%20Diseases/Kelly%20Paul.htm.) 
GAPDH was used as a reference standard for all PBMC cDNA, whereast CK-19 and GAPDH were used for gut cDNA. Quantitative real-time PCR was performed using a Corbett Rotor Gene thermal cycler and SYBR Green (Qiagen) detection over 45 cycles of 95°C, 60°C, and 72°C. cDNA from IL-1β–treated Caco-2 cells was used as a positive control for all the gut RT-PCRs, whereas MBG water was used as negative control (at least 3 in each run). HD5 and HD6 were expressed as transcripts/μg total RNA using plasmid standards, as previously described [20], but all other results were expressed relative to GAPDH and/or CK-19.", "Aliquots of serum from case patients and control subjects that had been stored at -80°C for ≤6 months were used for measurement of IFN-γ, IL-13, IL-22, and TGF-β by ELISA (R&D Systems) in accordance with the manufacturer's instructions.", "Data analysis was performed using Stata software, version 10.1 (StataCorp), and GraphPad Prism, version 5.01 (GraphPad Software). Nonparametric statistical tests were used because of the nonnormal distribution of the data, which are summarized below as median and interquartile range (IQR). The Fisher exact test was used for proportions, the Kruskal-Wallis test was used to compare distributions of unpaired data (case patients and control subjects), and the Wilcoxon matched-pair rank sum test was used to compare pre- and posttreatment values for the same individual. Spearman's rank correlation was used to determine the correlation of peripheral blood and intestinal cytokine and T cell subset markers.", "We recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients.\nCharacteristics of the Study Population and Helminthiasis Diagnosis\nNOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation.\nP value determined by the Fisher exact test.\nP value determined by the Kruskal-Wallis test.", "We characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). 
Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown).\nAscariasis was Associated With Attenuated Expression of hBD1 and LL-37\nNOTE. hBD, human β−defensin; NE, no detectable mRNA expressed.\nP = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups.\nDifferential expression of α-defensins in the intestinal mucosa during ascariasis. mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range)", "We measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown).\nAssociation between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. mRNA expression is shown relative to GAPDH", "To compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. 
There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects.\nAssociation between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range.", "To explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI).", "To explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status.\nAssociation between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. 
*P = .01.", "To explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β.\nSerum cytokine concentrations in helminth infection. Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001.", "Supplementary data are available at http://jid.oxfordjournals.org online.", "The Wellcome Trust (067948)." ]
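The quantitative RT-PCR methods above report HD5 and HD6 as absolute transcripts per microgram of total RNA (calibrated against plasmid standards) and the remaining targets as expression relative to GAPDH. The exact calculations are not given in the text, so the sketch below shows only one conventional way to obtain both read-outs: a log-linear standard curve for absolute copy number and a 2^-ΔCt ratio for relative expression (which assumes near-perfect amplification efficiency). All Ct values and quantities here are made up for illustration.

```python
# Hypothetical illustration only: the paper does not give its calculations,
# and every Ct value and quantity below is invented.
import numpy as np

# --- absolute quantification against a plasmid standard dilution series ---
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])    # plasmid copies/reaction
std_ct     = np.array([13.1, 16.5, 19.9, 23.4, 26.8])
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

def copies_from_ct(ct):
    """Invert the standard curve Ct = slope*log10(copies) + intercept."""
    return 10 ** ((ct - intercept) / slope)

hd5_ct = 21.7
rna_input_ug = 0.05                                   # total RNA per reaction
print("HD5 transcripts/ug RNA:", round(copies_from_ct(hd5_ct) / rna_input_ug))

# --- relative expression versus GAPDH (assumes ~100% PCR efficiency) ---
ct_target, ct_gapdh = 27.3, 18.9
relative_expression = 2.0 ** -(ct_target - ct_gapdh)  # classic 2^-deltaCt
print("target/GAPDH ratio:", relative_expression)
```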
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "MATERIALS AND METHODS", "Study Setting", "Determination of Helminth Infection and Treatment", "Biopsy Sample Collection and PBMC Isolation", "RNA Extraction and cDNA Preparation", "Quantitative Real-time RT- PCR", "Cytokine ELISAs", "Data Analysis", "RESULTS", "Study Population in Misisi, Lusaka", "Gastrointestinal Expression of AMPs Differed in Case Patients and Control Subjects", "Gastrointestinal Expression of Th1 Phenotype Increased During Helminth Infection", "Helminth Infection Was Associated with Up-Regulated Peripheral Blood Markers of All Major T cell Subsets and Some Cytokines", "Correlations Between Peripheral Blood T cell Subset Markers and Cytokines", "HD5 Suppression During Helminth Infection Was Associated With Peripheral Blood RORγt Expression", "Serum Cytokines Representative of Major T cell Subsets Were All Down-Regulated in Helminth Infection and Did Not Explain HD5 Suppression", "DISCUSSION", "Supplementary Data", "Funding" ]
[ "[SUBTITLE] Study Setting [SUBSECTION] Study participants were recruited from Misisi township in Lusaka, Zambia. This is an unplanned settlement south of Lusaka with poor sanitation and inadequate hygiene facilities. Adults (age, ≥18 years) were recruited for the study if they were resident in a defined sector of Misisi and gave informed consent. Potential participants were excluded if they were pregnant or lactating, had experienced diarrhea ≤1 month before planned participation, or had taken antibiotics or nonsteroidal anti-inflammatory drugs in the same period. Prior work in this population revealed that 17.4% of human immunodeficiency virus (HIV)–negative adults were infected with A. lumbricoides, compared with 13.1% of HIV-positive individuals [25]. Ethics approval was obtained from the University of Zambia Research Ethics Committee (007-10-07).\nStudy participants were recruited from Misisi township in Lusaka, Zambia. This is an unplanned settlement south of Lusaka with poor sanitation and inadequate hygiene facilities. Adults (age, ≥18 years) were recruited for the study if they were resident in a defined sector of Misisi and gave informed consent. Potential participants were excluded if they were pregnant or lactating, had experienced diarrhea ≤1 month before planned participation, or had taken antibiotics or nonsteroidal anti-inflammatory drugs in the same period. Prior work in this population revealed that 17.4% of human immunodeficiency virus (HIV)–negative adults were infected with A. lumbricoides, compared with 13.1% of HIV-positive individuals [25]. Ethics approval was obtained from the University of Zambia Research Ethics Committee (007-10-07).\n[SUBTITLE] Determination of Helminth Infection and Treatment [SUBSECTION] All participants submitted 3 stool samples over 3–5-day periods that were screened for the presence of eggs using the Kato-Katz technique. If helminth infection was found, the persons were designated as case patients, and they were designated as control subjects if not. Depending on the parasite species detected, infected participants were treated with albendazole (Zentel; GSK; 400 mg twice daily for 3 days), praziquantel (Biltricide; Bayer Pharmaceuticals Corporation; 400 mg/kg in 3 divided doses on 1 day), and/or ivermectin (Stromectol; MSD; 9 mg [∼200 μg/kg] as a single dose). Stool samples were collected after treatment and checked for infection to confirm eradication of worms. Ten mililiters of blood was collected prior to endoscopy to perform a complete blood cell count (including eosinophil count), to determine HIV status, and to perform a CD4 cell count if the subject was HIV seropositive, and serum was separated and stored at -80°C for cytokine enzyme-linked immunosorbent assay (ELISA).\nAll participants submitted 3 stool samples over 3–5-day periods that were screened for the presence of eggs using the Kato-Katz technique. If helminth infection was found, the persons were designated as case patients, and they were designated as control subjects if not. Depending on the parasite species detected, infected participants were treated with albendazole (Zentel; GSK; 400 mg twice daily for 3 days), praziquantel (Biltricide; Bayer Pharmaceuticals Corporation; 400 mg/kg in 3 divided doses on 1 day), and/or ivermectin (Stromectol; MSD; 9 mg [∼200 μg/kg] as a single dose). Stool samples were collected after treatment and checked for infection to confirm eradication of worms. 
Ten milliliters of blood was collected prior to endoscopy to perform a complete blood cell count (including eosinophil count), to determine HIV status, and to perform a CD4 cell count if the subject was HIV seropositive, and serum was separated and stored at -80°C for cytokine enzyme-linked immunosorbent assay (ELISA).
[SUBTITLE] Biopsy Sample Collection and PBMC Isolation [SUBSECTION] Participants underwent enteroscopy with a fiberoptic enteroscope (Olympus SIF-10) under sedation to collect biopsy samples of the jejunum, followed immediately by treatment described above. An additional set of biopsies was performed either 7 or 14 days after initiation of treatment. Samples were immediately suspended in Tri Reagent (Sigma) and frozen at -80°C for RNA extraction. An additional 20 mL of venous blood was collected prior to endoscopy in EDTA tubes (BD Biosciences) before and after treatment. PBMCs were separated using Ficoll-Paque Premium (GE Healthcare) and washed once in phosphate-buffered saline (Sigma), and 1 × 10^7 cells were resuspended in Tri Reagent (Sigma) and stored at -20°C ready for RNA extraction.
[SUBTITLE] RNA Extraction and cDNA Preparation [SUBSECTION] RNA was extracted using the phenol-chloroform extraction method, as described elsewhere [26]. In brief, all gut biopsy samples were broken up using a mini-homogenizer and sterilized pestles and stored in fresh sterile molecular biology grade (MBG) water (Sigma). RNA was subjected to DNase treatment using RQ1 DNase (Promega, UK) in the presence of RNA ribonuclease inhibitor (Promega). DNase-treated RNA was then extracted using phenol-chloroform-isoamyl alcohol (ratio, 25:24:1; Sigma) and quantified by spectroscopy. cDNA was prepared using standard protocols.
[SUBTITLE] Quantitative Real-time RT-PCR [SUBSECTION] Primers for the CD4+ T cell transcription factors RORγt, T-bet, GATA-3, and Foxp3 and the cytokines IFN-γ, transforming growth factor (TGF)–β, tumor necrosis factor (TNF)–α, IL-4, IL-5, and IL-10 were generated using Primer 3 software (http://frodo.wi.mit.edu/primer3/input.htm) and synthesized by Sigma. (For sequences, see http://www.icms.qmul.ac.uk/Profiles/Digestive%20Diseases/Kelly%20Paul.htm.) 
GAPDH was used as a reference standard for all PBMC cDNA, whereas CK-19 and GAPDH were used for gut cDNA. Quantitative real-time PCR was performed using a Corbett Rotor Gene thermal cycler and SYBR Green (Qiagen) detection over 45 cycles of 95°C, 60°C, and 72°C. cDNA from IL-1β–treated Caco-2 cells was used as a positive control for all the gut RT-PCRs, whereas MBG water was used as negative control (at least 3 in each run). HD5 and HD6 were expressed as transcripts/μg total RNA using plasmid standards, as previously described [20], but all other results were expressed relative to GAPDH and/or CK-19.
[SUBTITLE] Cytokine ELISAs [SUBSECTION] Aliquots of serum from case patients and control subjects that had been stored at -80°C for ≤6 months were used for measurement of IFN-γ, IL-13, IL-22, and TGF-β by ELISA (R&D Systems) in accordance with the manufacturer's instructions.
[SUBTITLE] Data Analysis [SUBSECTION] Data analysis was performed using Stata software, version 10.1 (StataCorp), and GraphPad Prism, version 5.01 (GraphPad Software). Nonparametric statistical tests were used because of the nonnormal distribution of the data, which are summarized below as median and interquartile range (IQR). The Fisher exact test was used for proportions, the Kruskal-Wallis test was used to compare distributions of unpaired data (case patients and control subjects), and the Wilcoxon matched-pair rank sum test was used to compare pre- and posttreatment values for the same individual. 
Spearman's rank correlation was used to determine the correlation of peripheral blood and intestinal cytokine and T cell subset markers.", "Study participants were recruited from Misisi township in Lusaka, Zambia. This is an unplanned settlement south of Lusaka with poor sanitation and inadequate hygiene facilities. Adults (age, ≥18 years) were recruited for the study if they were resident in a defined sector of Misisi and gave informed consent. Potential participants were excluded if they were pregnant or lactating, had experienced diarrhea ≤1 month before planned participation, or had taken antibiotics or nonsteroidal anti-inflammatory drugs in the same period. Prior work in this population revealed that 17.4% of human immunodeficiency virus (HIV)–negative adults were infected with A. lumbricoides, compared with 13.1% of HIV-positive individuals [25]. Ethics approval was obtained from the University of Zambia Research Ethics Committee (007-10-07).", "All participants submitted 3 stool samples over 3–5-day periods that were screened for the presence of eggs using the Kato-Katz technique. If helminth infection was found, the persons were designated as case patients, and they were designated as control subjects if not. Depending on the parasite species detected, infected participants were treated with albendazole (Zentel; GSK; 400 mg twice daily for 3 days), praziquantel (Biltricide; Bayer Pharmaceuticals Corporation; 400 mg/kg in 3 divided doses on 1 day), and/or ivermectin (Stromectol; MSD; 9 mg [∼200 μg/kg] as a single dose). Stool samples were collected after treatment and checked for infection to confirm eradication of worms. Ten mililiters of blood was collected prior to endoscopy to perform a complete blood cell count (including eosinophil count), to determine HIV status, and to perform a CD4 cell count if the subject was HIV seropositive, and serum was separated and stored at -80°C for cytokine enzyme-linked immunosorbent assay (ELISA).", "Participants underwent enteroscopy with a fiberoptic enteroscope (Olympus SIF-10) under sedation to collect biopsy samples of the jejunum, followed immediately by treatment described above. An additional set of biopsies was performed either 7 or 14 days after initiation of treatment. Samples were immediately suspended in Tri Reagent (Sigma) and frozen at -80°C for RNA extraction. An additional 20 mL of venous blood was collected prior to endoscopy in EDTA tubes (BD Biosciences) before and after treatment. PBMCs were separated using Ficoll-Paque Premium (GE Healthcare) and washed once in phosphate-buffered saline (Sigma), and 1 × 107 cells were resuspended in Tri Reagent (Sigma) and stored at -20°C ready for RNA extraction.", "RNA was extracted using the phenol-chloroform extraction method, as described elsewhere [26]. In brief, all gut biopsy samples were broken up using a mini-homogenizer and sterilized pestles and stored in fresh sterile molecular biology grade (MBG) water (Sigma). RNA was subjected to DNase treatment using RQ1 DNase (Promega, UK) in the presence of RNA ribonuclease inhibitor (Promega). DNase-treated RNA was then extracted using phenol-chloroform-isoamyl alcohol (ratio, 25:24:1; Sigma) and quantified by spectroscopy. 
cDNA was prepared using standard protocols.", "Primers for the CD4+ T cell transcription factors RORγt, T-bet, GATA-3, and Foxp3 and the cytokines IFN-γ, transforming growth factor (TGF)–β, tumor necrosis factor (TNF)–α, IL-4, IL-5, and IL-10 were generated using Primer 3 software (http://frodo.wi.mit.edu/primer3/input.htm) and synthesized by Sigma. (For sequences, see http://www.icms.qmul.ac.uk/Profiles/Digestive%20Diseases/Kelly%20Paul.htm.) GAPDH was used as a reference standard for all PBMC cDNA, whereast CK-19 and GAPDH were used for gut cDNA. Quantitative real-time PCR was performed using a Corbett Rotor Gene thermal cycler and SYBR Green (Qiagen) detection over 45 cycles of 95°C, 60°C, and 72°C. cDNA from IL-1β–treated Caco-2 cells was used as a positive control for all the gut RT-PCRs, whereas MBG water was used as negative control (at least 3 in each run). HD5 and HD6 were expressed as transcripts/μg total RNA using plasmid standards, as previously described [20], but all other results were expressed relative to GAPDH and/or CK-19.", "Aliquots of serum from case patients and control subjects that had been stored at -80°C for ≤6 months were used for measurement of IFN-γ, IL-13, IL-22, and TGF-β by ELISA (R&D Systems) in accordance with the manufacturer's instructions.", "Data analysis was performed using Stata software, version 10.1 (StataCorp), and GraphPad Prism, version 5.01 (GraphPad Software). Nonparametric statistical tests were used because of the nonnormal distribution of the data, which are summarized below as median and interquartile range (IQR). The Fisher exact test was used for proportions, the Kruskal-Wallis test was used to compare distributions of unpaired data (case patients and control subjects), and the Wilcoxon matched-pair rank sum test was used to compare pre- and posttreatment values for the same individual. Spearman's rank correlation was used to determine the correlation of peripheral blood and intestinal cytokine and T cell subset markers.", "[SUBTITLE] Study Population in Misisi, Lusaka [SUBSECTION] We recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients.\nCharacteristics of the Study Population and Helminthiasis Diagnosis\nNOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation.\nP value determined by the Fisher exact test.\nP value determined by the Kruskal-Wallis test.\nWe recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). 
Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients.\nCharacteristics of the Study Population and Helminthiasis Diagnosis\nNOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation.\nP value determined by the Fisher exact test.\nP value determined by the Kruskal-Wallis test.\n[SUBTITLE] Gastrointestinal Expression of AMPs Differed in Case Patients and Control Subjects [SUBSECTION] We characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown).\nAscariasis was Associated With Attenuated Expression of hBD1 and LL-37\nNOTE. hBD, human β−defensin; NE, no detectable mRNA expressed.\nP = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups.\nDifferential expression of α-defensins in the intestinal mucosa during ascariasis. mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. 
E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range)\nWe characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown).\nAscariasis was Associated With Attenuated Expression of hBD1 and LL-37\nNOTE. hBD, human β−defensin; NE, no detectable mRNA expressed.\nP = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups.\nDifferential expression of α-defensins in the intestinal mucosa during ascariasis. mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range)\n[SUBTITLE] Gastrointestinal Expression of Th1 Phenotype Increased During Helminth Infection [SUBSECTION] We measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. 
In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown).\nAssociation between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. mRNA expression is shown relative to GAPDH\nWe measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown).\nAssociation between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. mRNA expression is shown relative to GAPDH\n[SUBTITLE] Helminth Infection Was Associated with Up-Regulated Peripheral Blood Markers of All Major T cell Subsets and Some Cytokines [SUBSECTION] To compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects.\nAssociation between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. 
Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range.\nTo compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects.\nAssociation between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range.\n[SUBTITLE] Correlations Between Peripheral Blood T cell Subset Markers and Cytokines [SUBSECTION] To explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI).\nTo explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. 
Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI).\n[SUBTITLE] HD5 Suppression During Helminth Infection Was Associated With Peripheral Blood RORγt Expression [SUBSECTION] To explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status.\nAssociation between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. *P = .01.\nTo explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status.\nAssociation between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. 
*P = .01.\n[SUBTITLE] Serum Cytokines Representative of Major T cell Subsets Were All Down-Regulated in Helminth Infection and Did Not Explain HD5 Suppression [SUBSECTION] To explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β.\nSerum cytokine concentrations in helminth infection. Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001.\nTo explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β.\nSerum cytokine concentrations in helminth infection. 
Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001.", "We recruited 71 consenting individuals (age, 21–60 years). Case patients were defined as those participants found to have A. lumbricoides infection; coinfections with Strongyloides stercoralis, S. mansoni, and Taenia saginata were also found (Table 1). Case patients (n = 27) and control subjects (n = 44) were similar with respect to age and HIV status, although more women were found (P = .002) with helminthiasis (Table 1), and ownership of the house of residence showed clear evidence that case patients were of lower economic status (P = .03) (Table 1). Case patients also had eosinophilia (P = .01) and lower hemoglobin levels (P = .02). Stool samples examined after treatment showed successful helminth eradication in all case patients.\nCharacteristics of the Study Population and Helminthiasis Diagnosis\nNOTE. HIV, human immunodeficiency virus; NS, not significant; SD, standard deviation.\nP value determined by the Fisher exact test.\nP value determined by the Kruskal-Wallis test.", "We characterized the gastrointestinal expression profiles in jejunal biopsy samples from case patients and control subjects using RT-PCR for mRNA of the α−defensins (HD5 and HD6), the β−defensins hBD1 and hBD2, and the cathelicidin LL-37. Biopsy samples were taken before treatment and at 7 (n = 6) or 14 (n = 21) days after treatment; there was no difference in the changes in mRNA of HD5 and HD6 at 7 or 14 days, so these data are considered together (Figures 1A and 1B). After treatment, HD5 and HD6 expression did not change overall (Figure 1C and 1D), but some variation was evident between individuals: down-regulation of HD5 was observed in 60% of helminth-infected individuals during infection, compared with posttreatment values (Figure 1C), and in 48% of these subjects, we observed down-regulation of HD6 expression (Figure 1D). Expression of HD5 was lower in biopsy samples taken from case patients before (median, 1234; IQR, 286.6 - 6444) and after (median, 1030; IQR, 276.1 - 14,045) treatment than in control subjects (median, 8721; IQR, 646.7 - 56,384) (Figure 1E), but expression of HD6 before (median, 43,353; IQR, 3048-405,461) and after (median, 73,946; IQR, 8267-360,127) was higher than in control subjects (median, 9210; IQR, 690.4 - 27,750) (Figure 1F). hBD1 and LL-37 expression was lower in case patients, and there was no detectable mRNA of hBD2 in any biopsy specimen obtained in this study (Table 2), even though mRNA from IL-1β-treated Caco-2 cells gave a clear positive amplification signal for hBD2 (data not shown). Expression of these antimicrobial peptide genes was completely unaffected by HIV status (data not shown).\nAscariasis was Associated With Attenuated Expression of hBD1 and LL-37\nNOTE. hBD, human β−defensin; NE, no detectable mRNA expressed.\nP = .0001, determined by the Kruskal-Wallis test of differences between control and pretreatment groups.\nDifferential expression of α-defensins in the intestinal mucosa during ascariasis. 
mRNA for HD5 and HD6 was quantified in jejunal biopsy samples obtained from control subjects and from case patients before and after helminth eradication and are expressed as transcripts/μg of total RNA extracted. A and B, expression of individual samples in multiple biopsy specimens taken after 7 or 14 days after treatment. C and D, changes following treatment expressed as –fold change, relative to pretreatment values, such that values >1 represent an increase in expression and <1 as a decrease during infection. E and F, absolute mRNA quantities shown in case patients versus control subjects (represented as box whisker plots of median, interquartile range, and range)", "We measured mRNA expression of CD4+ T cell subset signature transcription factors and cytokines in biopsy samples from the infected group before and after treatment but from none of the control subjects, because mRNA from intestinal tissue was limited in quantity. There was a decrease in Th1 markers after treatment, as reflected in IFN-γ mRNA expression (observed in 72% of case patients; P = .009) and T-bet (in 88% of case patients; P = .01), suggesting that they were up-regulated during infection. In quantitative terms, IFN-γ mRNA relative to GAPDH decreased following treatment (P = .009) and T-bet mRNA decreased (P = .01, using the Wilcoxon rank sum test) after treatment (Figure 2 A and 2B respectively). There was no change in TNF-α or RORγt expression (Figure 2 C and 2D, respectively) nor in expression of GATA-3, Foxp3, TGF-β, IL-4, IL-10, or IL-5 (data not shown).\nAssociation between helminth infection and increased expression of gastrointestinal CD4+ Th1 phenotype markers. Jejunal biopsy samples obtained before and after treatment were used to quantify, by real-time reverse-transcription polymerase chain reaction, the expression of interferon (IFN)–γ (A), T-bet (B), tumor necrosis factor (TNF)–α (C), and RORγt (D) represented as box whisker plots of median, interquartile, and range. mRNA expression is shown relative to GAPDH", "To compare the changes in the intestinal mucosa with the systemic response, we looked at peripheral T cell signature transcription factor mRNA expression in PBMCs from case patients before and after treatment and from control subjects. There was up-regulation of Th1 (P < .0001), Th2 (P = .04), Th17 (P < .0001), and Treg (P = .0002) markers in pretreatment case patients, compared with control subjects (Figure 3A), and altered expression was observed in a range of cytokines: IFN-γ (P = .03) and IL-10 (P = .002) were down-regulated, whereas IL-5 (P < .0001) and TGF-β (P < .0001) were up-regulated (Figure 3B). Neither TNF-α nor IL-4 showed any changes (data not shown). There was no change in subset markers after treatment (Figure 3A) or in any of the cytokines measured (IFN-γ, IL-5, TGFβ, and IL-10 [Figure 3B]; data not shown for TNF-α and IL-4). Expression of these genes did not differ between HIV-infected and HIV-uninfected case patients or control subjects.\nAssociation between helminth infection and altered peripheral expression of CD4+ T subset markers and cytokines. 
Peripheral blood mononuclear cells were isolated from blood from control subjects and case patients before and after treatment, and real time reverse-transcription polymerase chain reaction was performed; mRNA expression is shown relative to GAPDH for CD4+ T subset markers (Figure 3A) and cytokines (Figure 3B) and represented as box whisker plots of median, interquartile range, and range.", "To explore the impact of ascariasis on these T cell markers and cytokines, we looked for correlations between the markers measured. Although the number of case patients and control subjects is modest, some strong and significant positive correlations were observed. Several correlations were seen only in helminth infection and not in control subjects, some of which are expected (such that between GATA-3 and Foxp3) and some of which link apparently antagonistic subsets (such as T-bet and IL-10 or IFN-γ and IL-4) (see Supplementary Tables 1 and 2 at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI). Helminth infection appears to abolish the correlation between GATA-3 and TNF-α or and between Foxp3 and IL-4. None of the correlations were seen exclusively as a consequence of HIV infection (see Supplementary Tables 3 and 4, also at https://docs.google.com/document/pub?id=1k0m0bLjbJZGpplEhHNSldwXLMSoXLAM4OtLKsI7YjWI).", "To explore the modulation of HD5 and HD6 expression during infection, we dichotomized the cases into those with up-regulation during expression (as inferred from changes following eradication) and those with suppression, and we correlated this change with mRNA of T cell subset markers in peripheral blood. We found that HD5 suppression was significantly associated with increased expression of RORγt in PBMCs (Figure 4). The expression of Th1, Th2, and Treg markers was not associated with this suppression, and changes in HD6 were not significantly associated with any of the markers used (Figure 4). There was no significant association with any cytokine mRNA and HD5 or HD6 suppression, nor was it associated with HIV status.\nAssociation between HD5 suppression and increased expression of RORγt during infection. The study group was dichotomized into those with up-regulation of HD5 or HD6 during helminth infection and those with down-regulation of HD5 or HD6. mRNA of CD4+ T cell subset markers, mean expression ratio to GAPDH, in peripheral blood mononuclear cells is shown in relation to the changes in gastrointestinal expression of HD5 and HD6 expression. Asterisks represent statistical significance. *P = .01.", "To explore further the peripheral blood T cell responses, we measured concentrations of 4 cytokines in serum that had been chosen to be representative of the 4 major T cell subsets: IFN-γ, IL-13, IL-22, and TGF-β. IFN-γ was not detected in most serum specimens (data not shown). IL-13 and IL-22 concentrations were lower in case patients than in control subjects and did not change after helminth eradication. IL-13 was detected in all samples measured (in 35 control subjects and in 25 case patients) but was lower in case patients (median, 91.4 pg/mL; IQR, 82-142 pg/mL) than in control subjects (median, 220.3 pg/mL; IQR, 146.7-289.2 pg/mL; P < .0001) (Figure 5). IL-22 was detected in 16 control subjects (out of 30 samples) and 12 case patients (out of 27 samples) and was also lower (median, 0.04 pg/mL; IQR, 0.0-18.4 pg/mL) in case patients than in control subjects (median, 22.4 pg/mL; IQR, 0.0-36.5 pg/mL; P = .001). 
TGF-β was detected in 15 control subjects (out of 35 samples) and in 13 case patients (out of 25 samples), but the concentration did not differ between case patients (median, 0.0 pg/mL; IQR, 0.0-8.1 pg/mL) and control subjects (median, 0.0 pg/mL; IQR, 0.0-62.8 pg/mL) (Figure 5). There was no increase after treatment and no computed association between HD5 suppression and concentrations of IL-13, IL-22, or TGF-β.\nSerum cytokine concentrations in helminth infection. Serum concentrations of 4 cytokines, selected to represent major T cell subsets (interferon [IFN]–γ, interleukin [IL]–13, IL-22, and transforming growth factor [TGF]–β), were measured by enzyme-linked immunosorbent assay in samples from control subjects and in pretreatment or posttreatment samples from case patients (represented as box whisker plots of median, interquartile range, and range). Asterisks represent statistical significance for Kruskal-Wallis test of differences. ***P < .0001; **P = .001.", "We have previously shown that helminth infection (particularly ascariasis) is common in this population [25], but the effects of helminth infection on innate defense mechanisms in the intestinal mucosa are, to our knowledge, unknown. Our finding that helminthiasis is associated with lower expression of HD5, hBD1, and LL-37 might explain some of the previously noted reduction in HD5 expression in this population [26] but not the reduction in HD6. However, this would only be true if exposure to helminths causes the change in HD5 expression and not if low HD5 expression predisposes to helminthiasis. To our knowledge, there are no data on the effect of human antimicrobial peptides on nematode development. In general, the lower antimicrobial peptide expression in case patients (higher expression in the case of HD6) did not change significantly after treatment, which could be explained either if lower defensin expression predisposes to helminthiasis or if 14 days is insufficient for such changes to occur. We did not examine antimicrobial peptide expression after 14 days, because intercurrent intestinal infections, to which adults in this community are exposed [25], might complicate interpretation of immunological changes during convalescence. Therefore, we cannot distinguish between these possibilities, and we propose that the only way to resolve the issue of causation is to study experimental helminth infection in naive volunteers.\nOur data indicate that the dominant T helper phenotype in control subjects was Th2 but that, in case patients, a Th1 subset was more prominent during infection. In animal studies, a dominant Th2 response is associated with clearance of helminth infection, but a dominant Th1 response is associated with persistent infection [11]. Our data are therefore consistent with the hypothesis that persistent gut helminthiasis in humans is at least partly a failure of the Th2 response. This also correlates with observations that a predominant Th2 phenotype (with its associated cytokines) in humans confers immunity to reinfection and/or immunity [7, 27, 28]. Proinflammatory cytokines can suppress defensin expression [29], which would fit with Th1- or Th17-mediated defensin suppression in helminthiasis when the Th2 response is attenuated.\nRecent work done by Salzman et al [21] suggests that HD5 expression regulates IL-17A production by Th17 cells in the small intestine. Helminth infection has been shown not to induce the expression of IL-22 in the liver [30]. 
However, IL-22 has been reported to be involved in the immune response to other infections in the gut mucosa, so it is interesting to speculate that helminth-mediated immunomodulation might alter host susceptibility to those bacterial infections against which IL-22 is protective [31]. Helminth infection was associated with reduced concentrations of IL-22 in serum, so clearly more work needs to be done to understand the effect of helminth infections on Th17 responses. These interactions are also complicated by the fact that RORγt expression in the intestine is not entirely restricted to the CD4+ Th17 phenotype and that there are other immune cells that also express RORγt and possibly secrete IL-22 [32, 33].\nIt is not known how the presence of luminal helminths is sensed by the immune system. Ascarid worms occupy a purely luminal niche, and there is no attachment or invasion within the mucosa, unlike adult hookworms or Strongyloides species, which have a more obvious point of contact with host immune sensory elements. However, the jejunum is the primary site for larval localisation [34]. It is also likely that soluble antigens shed by the worms could be sensed by inductive sites in Peyer's patches if there are receptors that can distinguish them from food- or microbiota-derived antigens. Helminth-derived glycans are involved in the activation of various antigen presenting cells (reviewed in [35]).The dissociation between posttreatment CD4+ T cell subset phenotypes and cytokine expression in gut and peripheral blood might suggest the presence of long-lasting circulating antigens that could possibly circulate for long periods of time [36].\nWe were surprised about the predominance of women in the case patients. We have previously noted an under-representation of younger men in the Misisi cohort but we would expect this to apply to case patients and control subjects equally. The rate of house ownership, one measure of economic status, was significantly lower among case patients. We found no influence of HIV infection on antimicrobial peptide expression, which is entirely consistent with our previous observations in this population [20, 37]. We were somewhat surprised that we could detect no effect of HIV on mRNA of T cell subsets or the cytokines we tested. In the case of mucosal expression of these genes, we did not have sufficient mRNA to quantify the mRNA in control subjects, so it may be that helminth infection overrides any effect of HIV. In PBMCs, there was clearly no effect of HIV in either case patients or control subjects, but the number of control subjects was small.\nWe believe that our data showing modulation of AMP gene expression are the first data of this kind in humans. There were surprisingly few changes in AMP expression following eradication, at least within the time scale of our study. The effect of Th17 cells that are implied by our observations about RORγt are intriguing and deserve future exploration. It would be useful in future studies to characterize T cell subsets by flow cytometry, but even very detailed characterization will not answer the fundamental question that our study raises, which is whether the changes we observed are a consequence of helminth infection or in some way predispose to it. This question can probably only be answered by challenge studies in human volunteers.", "Supplementary data are available at http://jid.oxfordjournals.org online.", "The Wellcome Trust (067948)." ]
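The Data Analysis subsection of this record names four nonparametric procedures: the Fisher exact test for proportions, the Kruskal-Wallis test for unpaired case versus control comparisons, the Wilcoxon matched-pair test for pre- versus posttreatment values, and Spearman's rank correlation. The study ran these in Stata 10.1 and GraphPad Prism 5.01; the sketch below only re-expresses the same analysis plan in Python with SciPy, and every array in it is an invented placeholder rather than study data.

```python
# Illustrative only: the study used Stata 10.1 and GraphPad Prism, not SciPy.
# All numbers below are hypothetical placeholders standing in for study variables.
import numpy as np
from scipy import stats

# Fisher exact test for a 2x2 proportion (e.g., sex distribution in cases vs controls)
contingency = np.array([[20, 7],    # hypothetical: women, men among case patients
                        [17, 27]])  # hypothetical: women, men among control subjects
odds_ratio, p_fisher = stats.fisher_exact(contingency)

# Kruskal-Wallis test for an unpaired case/control comparison (e.g., a PBMC marker)
cases_expr    = np.array([0.8, 1.9, 2.4, 3.1, 0.6])      # hypothetical expression ratios
controls_expr = np.array([0.2, 0.5, 0.9, 0.4, 0.3, 0.7])
h_stat, p_kw = stats.kruskal(cases_expr, controls_expr)

# Wilcoxon matched-pair test for pre- vs posttreatment values in the same individuals
pre  = np.array([5.1, 3.8, 6.2, 4.4, 2.9])   # hypothetical pretreatment values
post = np.array([2.0, 3.5, 4.1, 4.0, 1.8])   # hypothetical posttreatment values
w_stat, p_wilcoxon = stats.wilcoxon(pre, post)

# Spearman's rank correlation between a peripheral blood and an intestinal marker
blood_marker = np.array([0.1, 0.4, 0.3, 0.9, 0.7])
gut_marker   = np.array([0.2, 0.5, 0.2, 1.1, 0.6])
rho, p_spearman = stats.spearmanr(blood_marker, gut_marker)

print(f"Fisher exact p={p_fisher:.3f}, Kruskal-Wallis p={p_kw:.3f}, "
      f"Wilcoxon p={p_wilcoxon:.3f}, Spearman rho={rho:.2f} (p={p_spearman:.3f})")
```

Medians and interquartile ranges of the kind reported in the text can be obtained in the same setting with numpy.median and numpy.percentile(x, [25, 75]).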
[ "materials|methods", null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", null, null ]
[]
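The same record states that HD5 and HD6 were expressed as transcripts/μg total RNA using plasmid standards, while the other targets were expressed relative to GAPDH and/or CK-19. As a hedged illustration of how an absolute plasmid standard curve is conventionally used, the sketch below fits Ct against log10 copy number and back-calculates copies for one sample; the dilution series, Ct values, and RNA input are invented for the example and do not come from the study.

```python
# Hypothetical illustration of absolute quantification against a plasmid standard curve.
# None of these numbers come from the study; they only show the shape of the calculation.
import numpy as np

# Standard curve: known plasmid copy numbers and the Ct values they produced (hypothetical)
std_copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3])
std_ct     = np.array([14.2, 17.6, 21.1, 24.5, 27.9])

# Linear fit of Ct versus log10(copies): Ct = slope * log10(copies) + intercept
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)

# Amplification efficiency implied by the slope (E = 10^(-1/slope) - 1; ~1.0 means 100%)
efficiency = 10 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Invert the standard curve to estimate template copies for a measured Ct."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical biopsy sample: measured Ct and micrograms of total RNA in the RT reaction
sample_ct, rna_input_ug = 22.3, 0.5
transcripts_per_ug = copies_from_ct(sample_ct) / rna_input_ug

print(f"slope={slope:.2f}, efficiency={efficiency:.2f}, "
      f"estimated {transcripts_per_ug:.2e} transcripts/ug total RNA")
```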
Transforming growth factor-beta increases the expression of vascular smooth muscle cell markers in human multi-lineage progenitor cells.
21358594
Vascular smooth muscle cell (SMC) differentiation is an essential component of vascular repair and tissue engineering. However, currently used cell models for the study of SMC differentiation have several limitations. Multi-lineage progenitor cells (MLPCs) originate from human umbilical cord blood and are cloned from a single cell. The object of this study was to investigate whether MLPCs could differentiate into SMCs in vitro with induction by transforming growth factor beta1 (TGF-beta1).
BACKGROUND
MLPCs were treated without or with TGF-beta1 (1 and 5 ng/mL) in mesenchymal stem cell media plus 1% FBS for 7 days. Total RNA was isolated from the MLPCs, and semi-quantitative real-time PCR was performed to test the following mRNA levels: early and late phase SMC-specific markers, two endothelial cell (EC)-specific markers, endothelial progenitor cell (EPC) marker CD34, TGF-beta1 accessory protein CD105, and adhesion molecule CD146.
MATERIAL/METHODS
TGF-beta1 (1 ng/mL) significantly increased the mRNA levels of SMC-specific markers SM22α, calponin-1, SM α-actin, caldesmon, tropomyosin and MLCK as well as adhesion molecule CD146. The mRNA levels of EC-specific markers VE-cadherin and VEGFR-2, EPC marker CD34 and TGF-beta1 accessory protein CD105 were decreased significantly, after MLPC were treated with TGF-beta1 (1 ng/mL). TGF-beta1 at 5 ng/mL showed similar effect on the expression of these genes.
RESULTS
This study demonstrates that in the presence of TGF-beta1, MLPCs undergo SMC lineage differentiation indicating that MLPCs are a promising cell model for SMC lineage differentiation studies, which may contribute to advances in vascular repair and tissue engineering.
CONCLUSIONS
[ "Biomarkers", "CD146 Antigen", "Cell Lineage", "Endothelial Cells", "Gene Expression Regulation", "Humans", "Muscle, Smooth, Vascular", "Myocytes, Smooth Muscle", "RNA, Messenger", "Stem Cells", "Transforming Growth Factor beta" ]
3276078
null
null
Statistical analysis
Data from the control and TGF-β1-treated groups were analyzed using a paired, one-tailed Student's t test (Minitab software; Sigma Breakthrough Technologies, Inc., San Marcos, TX). Results are reported as mean ± standard deviation (SD). A P value <0.05 was considered statistically significant.
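The statistical analysis above amounts to a paired, one-tailed Student's t test with results summarized as mean ± SD and a significance threshold of P <0.05, run in Minitab. Purely as an illustration, the snippet below performs the same kind of comparison in Python on invented control and TGF-β1-treated expression values; the `alternative` keyword requires SciPy 1.6 or later, and nothing here reproduces the study's actual data.

```python
# Hypothetical sketch of the paired, one-tailed t test described above (the study used Minitab).
# The expression values are placeholders, not data from the paper.
import numpy as np
from scipy import stats

# Paired GAPDH-normalized expression values for the same cultures, control vs TGF-beta1
control = np.array([0.010, 0.012, 0.009])   # hypothetical 2^(Ct_GAPDH - Ct_gene) values
treated = np.array([0.118, 0.131, 0.104])   # hypothetical values after TGF-beta1 treatment

# One-tailed paired t test: is expression greater after treatment than in paired controls?
t_stat, p_one_tailed = stats.ttest_rel(treated, control, alternative="greater")

# Summary statistics reported as mean +/- SD, as in the paper
print(f"control: {control.mean():.3f} +/- {control.std(ddof=1):.3f}")
print(f"treated: {treated.mean():.3f} +/- {treated.std(ddof=1):.3f}")
print(f"paired one-tailed t test: t={t_stat:.2f}, p={p_one_tailed:.4f}, "
      f"significant={p_one_tailed < 0.05}")
```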
Results
[SUBTITLE] TGF-β1 increased the expression of SMC markers in MLPCs [SUBSECTION] After 7 days of exposure to TGF-β1, several SMC-specific markers were dramatically increased. The addition of TGF-β1 (1 ng/mL) to mesenchymal stem cell medium significantly increased the mRNA levels of SM22α, calponin-1, SM α-actin, caldesmon, tropomyosin and myosin light chain kinase (MLCK) to 1215.5%, 1974.6%, 567%, 429.7%, 567% and 162.8%, respectively, when compared to controls (medium only) (P<0.05, Figure 1A–F). TGF-β1 (5 ng/mL) also increased the mRNA levels of these markers significantly (P<0.05), but the effect was less than that of the TGF-β1 treatment at 1 ng/mL (Figure 1A–F). In addition, we investigated the expression level of CD105 (a TGF-β accessory receptor) in MLPCs. We found that CD105 is expressed at a high level in MLPCs under control conditions (relative mRNA level is 0.156, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL led to a significant downregulation of CD105 mRNA levels to 61.7% and 70.8% of that of controls, respectively (P<0.05, Figure 2). [SUBTITLE] TGF-β1 decreased the expression of endothelial cell-specific markers in MLPCs [SUBSECTION] Two EC-specific markers, VE-cadherin and VEGFR-2, were detected in MLPCs under control conditions. After 7 days of exposure to TGF-β1 at 1 ng/mL, there was a significant reduction in VE-cadherin and VEGFR-2 mRNA levels to 15.4% and 70.8% of control levels, respectively. TGF-β1 at 5 ng/mL also significantly decreased VE-cadherin and VEGFR-2; mRNA levels were 21.8% and 66.1% of controls, respectively (P<0.05, Figure 3A and B). [SUBTITLE] TGF-β1 decreased the expression of endothelial progenitor cell (EPC) marker CD34 in MLPCs [SUBSECTION] We investigated the expression of CD34, a marker of endothelial progenitor cells, in MLPCs. CD34 was expressed at a low level in MLPCs under control conditions (relative mRNA level is 2.34E-05, data not shown). After 7 days, the addition of TGF-β1 at 1 ng/mL and 5 ng/mL to the mesenchymal stem cell culture media significantly decreased the mRNA level of CD34 to 28.7% and 25.1% of control levels, respectively (P<0.05, Figure 4). [SUBTITLE] TGF-β1 increased the expression of adhesion molecule CD146 in MLPCs [SUBSECTION] We also tested the expression of CD146 in MLPCs. The mRNA level of CD146 was low in MLPCs under control conditions (relative mRNA level is 3.8E-05, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL significantly increased the mRNA level of CD146 to 2430% and 2605% of that of the controls, respectively (P<0.05, Figure 5).
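The results above report treated-group expression as a percentage of control (for example, calponin-1 rising to 1974.6% of the control level). The short sketch below shows, with invented numbers, how such a figure is derived: the ratio of the treated mean to the control mean of normalized expression, multiplied by 100.

```python
# Hypothetical illustration of expressing treated-group means as a percentage of control.
# Values are placeholders, not measurements from the study.
import numpy as np

control_expr = np.array([0.0100, 0.0115, 0.0094])   # hypothetical normalized expression, control
treated_expr = np.array([0.0570, 0.0610, 0.0525])   # hypothetical normalized expression, TGF-beta1

percent_of_control = 100.0 * treated_expr.mean() / control_expr.mean()
print(f"treated expression is {percent_of_control:.1f}% of the control mean")
```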
Conclusions
In conclusion, a variety of SMC-specific markers, including early and late phase markers, were dramatically increased in MLPCs treated with TGF-β1. Meanwhile, two EC-specific markers as well as the EPC marker CD34 were significantly decreased. These data strongly indicate that MLPCs differentiate into SMC lineage cells in the presence of TGF-β1. MLPCs are karyotypically normal, non-transformed, non-immortalized cells that are obtained from post-partum human umbilical cord blood. Because they have been expanded from a single cell and have the capacity to differentiate into multiple lineages, they are highly pure and proliferative. MLPCs therefore offer significant advantages for the study of SMC differentiation over other currently used cell models, such as 10T1/2 cells, the neural crest stem cell line Monc-1 and SMC progenitor cells from human peripheral blood. This study demonstrates a novel cell model for SMC lineage differentiation analysis, which may increase our understanding of SMC differentiation and contribute to the field of vascular repair and tissue engineering.
[ "Background", "Chemicals and reagents", "Cell culture", "Real-time PCR", "TGF-β1 increased the expression of SMC markers in MLPCs", "TGF-β1 decreased the expression of endothelial cell-specific markers in MLPCs", "L2 TGF-β1 decreased the expression of endothelial progenitor cell (EPC) marker CD34 in MLPCs", "TGF-β1 increased the expression of adhesion molecule CD146 in MLPCs" ]
[ "Vascular smooth muscle cell (SMC) differentiation is an essential component of vascular development. A variety of human vascular diseases can be traced to a defect in smooth muscle development or proliferation [1–3]. In mice with defective smooth muscle development, embryonic lethality occurs [4]. Strategies that facilitate SMC differentiation should contribute to vascular repair and tissue engineering.\nThere are several cell models generally used in SMC differentiation studies. They include mouse neural crest stem cell line Monc-1 [5], mouse embryonic fibroblast 10T1/2 [6,7] and mouse embryonic stem cells [8]. Under certain culture conditions, these cells express SMC-specific markers that indicate SMC differentiation, which include contractile apparatus-associated proteins such as calponin-1 and smooth muscle α-actin (SM α-actin). However, several problems arise when using these cell models for the study of SMC differentiation. First, because most of the cells used in SMC differentiation studies are of mouse origin, there may be important interspecies differences in the differentiation environment, intracellular molecules involved in the differentiation process and underlying signaling mechanisms promoting SMC differentiation. This may significantly impair the application of data from mouse to human tissue engineering. Second, some cell lines are not naive cells but immortalized cell lines derived from a primary culture. The regulating mechanisms involved in SMC differentiation may be changed in the immortalized cell lines. As an alternative, Simper et al. [9] have described SMC progenitor cells in human peripheral blood mononuclear cells. However, based on the experience of our lab and several other labs, progenitor cells isolated from human peripheral blood can hardly proliferate under in vitro culture condition and usually deteriorate after 3 weeks, and as a result, it is impossible to maintain the cell line for serial analysis and meaningful comparisons. Also, the cell samples isolated from donors can only be used one time due to the lack of proliferative properties. The variation in blood samples from donors of different ethnicities, ages, genders and medical backgrounds makes the analysis complicated and inaccurate.\nMulti-lineage progenitor cell lines (MLPCs) are karyotypically normal multi-potent progenitor cells obtained from post-partum human umbilical cord blood. They have been expanded from a single cell and are clonal. MLPCs are normal, non-transformed, non-immortalized cells. It is possible to continue to expand these cells, and they can differentiate down specific lineages beyond 20 passages. Their high purity and proliferative features make MLPC an ideal tool for the study of SMC differentiation. Transforming growth factor β1 (TGF-β1) is thought to play a key role in SMC differentiation and is known to coordinately upregulate a variety of SMC differentiation markers in cultured SMC from mature blood vessels [10,11] as well as pluri-potential stem cells [5,7,8]. In this study, we investigated the differentiation of MLPC into SMC lineage cells, which was induced by TGF-β1. We found that the mRNA levels of a variety of SMC-specific markers were increased during this process, whereas endothelial cell-specific markers and EPC marker CD34 were decreased in TGF-β1-treated MLPC. This convincingly indicated the cells’ differentiation into SMC lineage.", "Recombinant human TGF-β1 was obtained from R&D systems (Minneapolis, MN, USA). MLPCs were purchased from BioE company (St. 
Paul, MN, USA). The mesenchymal stem cell medium Bulletkit was obtained from Cambrex-walker (Walkersville, MD, USA). The RNAqueous-4PCR kit was purchased from Ambion (Austin, TX, USA). The iQ SYBR Green supermix kit was obtained from Bio-Rad (Hercules, CA, USA). All of the primers were synthesized by Sigma Genosys (The Woodlands, TX, USA).", "MLPCs were cultured in mesenchymal stem cell basic medium to maintain their undifferentiated status and were then subcultured by using trypsin-EDTA reagent in the regular way. MLPCs at passage 4 to passage 6 were used in the experiment. To induce cell differentiation, MLPCs were seeded at 1.5×10^5 cells in each well of a 6-well plate containing mesenchymal stem cell basic medium plus 1% FBS with or without TGF-β1 (1 ng/mL or 5 ng/mL). MLPCs were cultured for 7 days, and cell RNA was then harvested for PCR analysis.", "Total cellular RNA was isolated using the RNAqueous-4PCR kit. The genomic DNA contamination in RNA preparation was removed by using the DNA-free kit (Ambion, Austin, TX), and a lack of detectable genomic DNA was confirmed by PCR. Total RNA (0.5 μg) was reverse-transcribed into cDNA using the iScript cDNA synthesis kit (Bio-Rad) following the manufacturer's instructions. Primers for all tested genes were designed using the Beacon Designer 2.1 software (Bio-Rad). The sequences of primers are shown in Table 1. The quality of individual pairs of primers was confirmed by running conventional PCR before real-time PCR to verify that no detectable primer dimers or non-specific products were generated. The real-time PCR reaction mixture included the following: 250 nM primers, 50 ng cDNA, and iQ SYBR Green supermix (0.2 mM of each dNTP, 25 U/mL iTaq DNA polymerase, SYBR Green I, 10 nM fluorescein, 3 mM MgCl2, 50 mM KCl, and 20 mM Tris-HCl). Using the iCycler iQ Real-time PCR detection system (Bio-Rad), PCR cycling conditions were set as follows: 95°C for 90 seconds, then 40 cycles at 95°C for 20 seconds and 60°C for 1 minute. Melting curve analysis was performed on the iCycler over the range 55–95°C by monitoring iQ SYBR green fluorescence with increasing temperature (0.5°C increments at 10-second intervals). Specific products were identified as clear single peaks in their melting curves. All sample measurements were performed in triplicate. Sample cycle threshold (Ct) values were determined from plots of relative fluorescence units (RFU) versus PCR cycle number during exponential amplification so that sample measurements could be compared. Standard curves for all primer amplifications were generated by plotting average Ct values against the logarithm of the starting quantity of target template molecules (serial dilution of cDNA template: 50, 10, 2, 0.4, and 0.08 ng), followed by least-squares regression analysis. The correlation coefficients and PCR efficiencies of all primers were above 90%. The gene expression in each sample was normalized to GAPDH expression as 2^(Ct_GAPDH − Ct_gene) (see the worked example below).", "After 7 days of exposure to TGF-β1, several SMC-specific markers were dramatically increased. The addition of TGF-β1 (1 ng/mL) to mesenchymal stem cell medium significantly increased the mRNA levels of SM22α, calponin-1, SM α-actin, caldesmon, tropomyosin and myosin light chain kinase (MLCK) to 1215.5%, 1974.6%, 567%, 429.7%, 567% and 162.8%, respectively, when compared to controls (medium only) (P<0.05, Figure 1A–F).
TGF-β1 (5 ng/mL) also increased the mRNA levels of these markers significantly (P<0.05), but the effect was less than that of the TGF-β1 treatment at 1 ng/mL (Figure 1A–F). In addition, we investigated the expression level of CD105 (a TGF-β accessory receptor) in MLPCs. We found that CD105 is expressed at a high level in MLPCs under control conditions (relative mRNA level is 0.156, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL led to a significant downregulation of CD105 mRNA levels to 61.7% and 70.8% of that of controls, respectively (P<0.05, Figure 2).", "Two EC-specific markers, VE-cadherin and VEGFR-2, were detected in MLPCs under control conditions. After 7 days of exposure to TGF-β1 at 1 ng/mL, there was a significant reduction in VE-cadherin and VEGFR-2 mRNA levels to 15.4% and 70.8% of control levels, respectively. TGF-β1 at 5 ng/mL also significantly decreased VE-cadherin and VEGFR-2; mRNA levels were 21.8% and 66.1% of controls, respectively (P<0.05, Figure 3A and B).", "We investigated the expression of CD34, a marker of endothelial progenitor cells, in MLPCs. CD34 was expressed at a low level in MLPCs under control conditions (relative mRNA level is 2.34E-05, data not shown). After 7 days, the addition of TGF-β1 at 1 ng/mL and 5 ng/mL to the mesenchymal stem cell culture media significantly decreased the mRNA level of CD34 to 28.7% and 25.1% of control levels, respectively (P<0.05, Figure 4).", "We also tested the expression of CD146 in MLPCs. The mRNA level of CD146 was low in MLPCs under control conditions (relative mRNA level is 3.8E-05, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL significantly increased the mRNA level of CD146 to 2430% and 2605% of that of the controls, respectively (P<0.05, Figure 5)." ]
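The Real-time PCR passage above normalizes each gene to GAPDH as 2^(Ct_GAPDH − Ct_gene), and the Results then express treated samples as a percentage of control. The following is a minimal sketch of that arithmetic, not the authors' code: the triplicate Ct values, function names and sample labels are illustrative assumptions only.

# Minimal sketch (assumptions, not the study's script) of the 2^(Ct_GAPDH - Ct_gene)
# normalization described in the Real-time PCR methods, plus a percent-of-control
# value like those reported in the Results. Ct triplicates below are placeholders.

from statistics import mean

def relative_expression(ct_gene_triplicate, ct_gapdh_triplicate):
    """Return expression normalized to GAPDH as 2^(Ct_GAPDH - Ct_gene)."""
    delta = mean(ct_gapdh_triplicate) - mean(ct_gene_triplicate)
    return 2.0 ** delta

def percent_of_control(rel_treated, rel_control):
    """Express a treated sample relative to control (control = 100%)."""
    return 100.0 * rel_treated / rel_control

if __name__ == "__main__":
    # Hypothetical triplicates for one SMC marker and GAPDH, control vs. TGF-beta1 wells.
    control = relative_expression([28.1, 28.3, 28.2], [17.9, 18.0, 18.1])
    treated = relative_expression([24.6, 24.5, 24.7], [18.0, 18.1, 17.9])
    print(f"control relative expression: {control:.4g}")
    print(f"TGF-beta1 relative expression: {treated:.4g}")
    print(f"percent of control: {percent_of_control(treated, control):.1f}%")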
[ null, null, null, null, null, null, null, null ]
[ "Background", "Material and Methods", "Chemicals and reagents", "Cell culture", "Real-time PCR", "Statistical analysis", "Results", "TGF-β1 increased the expression of SMC markers in MLPCs", "TGF-β1 decreased the expression of endothelial cell-specific markers in MLPCs", "L2 TGF-β1 decreased the expression of endothelial progenitor cell (EPC) marker CD34 in MLPCs", "TGF-β1 increased the expression of adhesion molecule CD146 in MLPCs", "Discussion", "Conclusions" ]
[ "Vascular smooth muscle cell (SMC) differentiation is an essential component of vascular development. A variety of human vascular diseases can be traced to a defect in smooth muscle development or proliferation [1–3]. In mice with defective smooth muscle development, embryonic lethality occurs [4]. Strategies that facilitate SMC differentiation should contribute to vascular repair and tissue engineering.\nThere are several cell models generally used in SMC differentiation studies. They include mouse neural crest stem cell line Monc-1 [5], mouse embryonic fibroblast 10T1/2 [6,7] and mouse embryonic stem cells [8]. Under certain culture conditions, these cells express SMC-specific markers that indicate SMC differentiation, which include contractile apparatus-associated proteins such as calponin-1 and smooth muscle α-actin (SM α-actin). However, several problems arise when using these cell models for the study of SMC differentiation. First, because most of the cells used in SMC differentiation studies are of mouse origin, there may be important interspecies differences in the differentiation environment, intracellular molecules involved in the differentiation process and underlying signaling mechanisms promoting SMC differentiation. This may significantly impair the application of data from mouse to human tissue engineering. Second, some cell lines are not naive cells but immortalized cell lines derived from a primary culture. The regulating mechanisms involved in SMC differentiation may be changed in the immortalized cell lines. As an alternative, Simper et al. [9] have described SMC progenitor cells in human peripheral blood mononuclear cells. However, based on the experience of our lab and several other labs, progenitor cells isolated from human peripheral blood can hardly proliferate under in vitro culture condition and usually deteriorate after 3 weeks, and as a result, it is impossible to maintain the cell line for serial analysis and meaningful comparisons. Also, the cell samples isolated from donors can only be used one time due to the lack of proliferative properties. The variation in blood samples from donors of different ethnicities, ages, genders and medical backgrounds makes the analysis complicated and inaccurate.\nMulti-lineage progenitor cell lines (MLPCs) are karyotypically normal multi-potent progenitor cells obtained from post-partum human umbilical cord blood. They have been expanded from a single cell and are clonal. MLPCs are normal, non-transformed, non-immortalized cells. It is possible to continue to expand these cells, and they can differentiate down specific lineages beyond 20 passages. Their high purity and proliferative features make MLPC an ideal tool for the study of SMC differentiation. Transforming growth factor β1 (TGF-β1) is thought to play a key role in SMC differentiation and is known to coordinately upregulate a variety of SMC differentiation markers in cultured SMC from mature blood vessels [10,11] as well as pluri-potential stem cells [5,7,8]. In this study, we investigated the differentiation of MLPC into SMC lineage cells, which was induced by TGF-β1. We found that the mRNA levels of a variety of SMC-specific markers were increased during this process, whereas endothelial cell-specific markers and EPC marker CD34 were decreased in TGF-β1-treated MLPC. This convincingly indicated the cells’ differentiation into SMC lineage.", "[SUBTITLE] Chemicals and reagents [SUBSECTION] Recombinant human TGF-β1 was obtained from R&D systems (Minneapolis, MN, USA). 
MLPCs were purchased from BioE company (St. Paul, MN, USA). The mesenchymal stem cell medium Bulletkit was obtained from Cambrex-walker (Walkersville, MD, USA). RNAqueous-4PCR kit was purchased from Ambion (Austin, TX, USA). IQ SYBR Green super-mix kit was obtained from Bio-Rad (Hercules, CA, USA). All of the primers were synthesized by Sigma Genosys (The Woodlands, TX, USA).\n[SUBTITLE] Cell culture [SUBSECTION] MLPCs were cultured in mesenchymal stem cell basic medium to maintain their undifferentiated status and were then subcultured by using trypsin-EDTA reagent in the regular way. MLPCs at passage 4 to passage 6 were used in the experiment. To induce cell differentiation, MLPCs were seeded at 1.5×10^5 in each well on a 6-well plate containing mesenchymal stem cell basic medium plus 1% FBS with or without TGF-β1 (1 ng/mL or 5 ng/mL). MLPCs were cultured for 7 days, and cell RNA was then harvested for PCR analysis.\n[SUBTITLE] Real-time PCR [SUBSECTION] Total cellular RNA was isolated using the RNAqueous-4PCR kit. The genomic DNA contamination in RNA preparation was removed by using the DNA-free kit (Ambion, Austin, TX), and a lack of detectable genomic DNA was confirmed by PCR. Total RNA (0.5 μg) was reverse-transcribed into cDNA using the iScript cDNA synthesis kit (Bio-Rad) following the manufacturer’s instructions. Primers for all tested genes were designed using the Beacon Designer 2.1 software (Bio-Rad). The sequences of primers are shown in Table 1. The quality of individual pairs of primers was confirmed by running conventional PCR before real-time PCR to make sure there were no detectable primer dimers or non-specific products yielded. The real-time PCR reaction mixture included the following: 250 nM primers, 50 ng cDNA, and iQ SYBR Green supermix (0.2 mM of each dNTP, 25 U/mL iTaq DNA polymerase, SYBR Green I, 10 nM fluorescein, 3 mM MgCl2, 50 mM KCl, and 20 mM Tris-HCl). Using the iCycler iQ Real-time PCR detection system (Bio-Rad), PCR cycling conditions were set as follows: 95°C for 90 seconds, 40 cycles at 95°C for 20 seconds, and 60°C for 1 minute. Melting curve analysis was performed on the iCycler over the range 55–95°C by monitoring iQ SYBR green fluorescence with increasing temperature (0.5°C increments at 10-second intervals). Specific products were identified as clear single peaks in their melting curves. All sample measurements were performed in triplicate. Sample cycle threshold (Ct) values were determined from plots of relative fluorescence units (RFU) versus PCR cycle number during exponential amplification so that sample measurement comparison was possible. Standard curves for all primer amplifications were generated by plotting average Ct values against the logarithm of the starting quantity of target template molecules (serial dilution of cDNA template: 50, 10, 2, 0.4, and 0.08 ng), followed by a least-squares regression analysis. The correlation coefficients and PCR efficiencies of all primers were above 90%. The gene expression in each sample was normalized to GAPDH expression as [2^(CtGAPDH−Ctgene)].\n[SUBTITLE] Statistical analysis [SUBSECTION] Data from the control and TGF-β1-treated groups were analyzed using a paired Student’s t test (one tail, Minitab software, Sigma Breakthrough Technologies, Inc., San Marcos, TX). Statistics are reported as mean ± the standard deviation (SD). P value <0.05 was considered statistically significant.", "Recombinant human TGF-β1 was obtained from R&D systems (Minneapolis, MN, USA).
MLPCs were purchased from BioE company (St. Paul, MN, USA). The mesenchymal stem cell medium Bulletkit was obtained from Cambrex-walker (Walkersville, MD, USA). RNAquous-4PCR kit was purchased from Ambion (Austin, TX, USA). IQ SYBR Green super-mix kit was obtained from Bio-Rad (Hercules, CA, USA). All of the primers were synthesized by Sigma Genosys (The Woodlands, TX, USA).", "MLPCs were cultured in mesenchymal stem cell basic medium to maintain their undifferentiated status and were then subcultured by using trypsin-EDTA reagent in the regular way. MLPCs at passage 4 to passage 6 were used in the experiment. To induce cell differentiation, MLPCs were seeded at 1.5×105 in each well on a 6 well plate containing mesenchymal stem cell basic medium plus 1% FBS with or without TGF-β1 (1 ng/mL or 5 ng/mL). MLPCs were cultured for 7 days, and cell RNA was then harvested for PCR analysis.", "Total cellular RNA was isolated using the RNAquous-4PCR kit. The genomic DNA contamination in RNA preparation was removed by using the DNA-free kit (Ambion, Austin, TX), and a lack of detectable genomic DNA was confirmed by PCR. Total RNA (0.5 μg) was reverse-transcribed into cDNA using the iScipt cDNA synthesis kit (Bio-Rad) following the manufacturer’s instruction. Primers for all tested genes were designed using the Beacon Designer 2.1 software (Bio-Rad). The sequences of primers are shown in Table 1. The quality of individual pairs of primers was confirmed by running conventional PCR before real-time PCR to make sure there were no detectable primer dimers or non-specific products yielded. The real-time PCR reaction mixture included the following: 250 nM primers, 50 ng cDNA, and iQ SYBR Green supermix (0.2 mM of each dNTP, 25 U/mL iTaq DNA polymerase, SYBR Green I, 10 nM fluorescein, 3 mM MgCl2, 50 mM KCl, and 20 mM Tris-HCl). Using the iCycler iQ Real-time PCR detection system (Bio-Rad), PCR cycling conditions were set as follows: 95°C for 90 seconds, 40 cycles at 95°C for 20 seconds, and 60°C for 1 minunte. Melting curve analysis was performed on the iCycler over the range 55–95°C by monitoring iQ SYBR green fluorescence with increasing temperature (0.5°C increment changes at 10 seconds intervals). Specific products were determined as clear single peaks at their melting curves. All sample measurements were performed in triplicate. Sample cycle threshold (Ct) values were determined from plots of relative fluorescence units (RFU) versus PCR cycle number during exponential amplification so that sample measurement comparison was possible. Standard curves for all primer amplifications were generated by plotting average Ct values against the logarithm starting quantity of target template molecules (series dilution of cDNA template: 50, 10, 2, 0.4, and 0.08 ng), followed by a sum of least squares regression analysis. The correlation coefficiency and PCR efficiency of all primers were above 90%, respectively. The gene expression in each sample was normalized to GAPDH expression as [2^(CtGAPDH−Ctgene)].", "Data from the control and TGF-β1-treated groups were analyzed using a paired Student’s t test (one tail, Minitab software, Sigma Breakthrough Technologies, Inc., San Marcos, TX). Statistics are reported as mean ± the standard deviation (SD). P value <0.05 was considered statistically significant.", "[SUBTITLE] TGF-β1 increased the expression of SMC markers in MLPCs [SUBSECTION] After 7 days of exposure to TGF-β1, several SMC-specific markers were dramatically increased. 
The addition of TGF-β1 (1 ng/mL) to mesenchymal stem cell medium significantly increased the mRNA levels of SM22α, calponin-1, SM α-actin, caldesmon, tropomyosin and myosin light chain kinase (MLCK) to 1215.5%, 1974.6%, 567%, 429.7%, 567% and 162.8%, respectively, when compared to controls (medium only) (P<0.05, Figure 1A–F). TGF-β1 (5 ng/mL) also increased the mRNA levels of these markers significantly (P<0.05), but the effect was less than that of the TGF-β1 treatment at 1 ng/mL (Figure 1A–F). In addition, we investigated the expression level of CD105 (a TGF-β accessory receptor) in MLPCs. We found that CD105 is expressed at a high level in MLPCs under control conditions (relative mRNA level is 0.156, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL led to a significant downregulation of CD105 mRNA levels to 61.7% and 70.8% of that of controls, respectively (P<0.05, Figure 2).\n[SUBTITLE] TGF-β1 decreased the expression of endothelial cell-specific markers in MLPCs [SUBSECTION] Two EC-specific markers, VE-cadherin and VEGFR-2, were detected in MLPCs under control conditions. After 7 days of exposure to TGF-β1 at 1 ng/mL, there was a significant reduction in VE-cadherin and VEGFR-2 mRNA levels to 15.4% and 70.8% of control levels, respectively. TGF-β1 at 5 ng/mL also significantly decreased VE-cadherin and VEGFR-2; mRNA levels were 21.8% and 66.1% of controls, respectively (P<0.05, Figure 3A and B).\n[SUBTITLE] TGF-β1 decreased the expression of endothelial progenitor cell (EPC) marker CD34 in MLPCs [SUBSECTION] We investigated the expression of CD34, a marker of endothelial progenitor cells, in MLPCs. CD34 was expressed at a low level in MLPCs under control conditions (relative mRNA level is 2.34E-05, data not shown). After 7 days, the addition of TGF-β1 at 1 ng/mL and 5 ng/mL to the mesenchymal stem cell culture media significantly decreased the mRNA level of CD34 to 28.7% and 25.1% of control levels, respectively (P<0.05, Figure 4).\nWe investigated the expression of CD34, a marker of endothelial progenitor cells, in MLPCs.
CD34 was expressed at a low level in MLPCs under control conditions (relative mRNA level is 2.34E-05, data not shown). After 7 days, the addition of TGF-β1 at 1 ng/mL and 5 ng/mL to the mesenchymal stem cell culture media significantly decreased the mRNA level of CD34 to 28.7% and 25.1% of control levels, respectively (P<0.05, Figure 4).\n[SUBTITLE] TGF-β1 increased the expression of adhesion molecule CD146 in MLPCs [SUBSECTION] We also tested the expression of CD146 in MLPCs. The mRNA level of CD146 was low in MLPCs under control conditions (relative mRNA level is 3.8E-05, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL significantly increased the mRNA level of CD146 to 2430% and 2605% of that of the controls, respectively (P<0.05, Figure 5).\nWe also tested the expression of CD146 in MLPCs. The mRNA level of CD146 was low in MLPCs under control conditions (relative mRNA level is 3.8E-05, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL significantly increased the mRNA level of CD146 to 2430% and 2605% of that of the controls, respectively (P<0.05, Figure 5).", "After 7 days of exposure to TGF-β1, several SMC-specific markers were dramatically increased. The addition of TGF-β1 (1 ng/mL) to mesenchymal stem cell medium significantly increased the mRNA levels of SM22α, calponin-1, SM α-actin, caldesmon, tropomyosin and myosin light chain kinase (MLCK) to 1215.5%, 1974.6%, 567%, 429.7%, 567% and 162.8%, respectively, when compared to controls (medium only) (P<0.05, Figure 1A–F). TGF-β1 (5 ng/mL) also increased the mRNA levels of these markers significantly (P<0.05), but the effect was less than that of the TGF-β1 treatment at 1 ng/mL (Figure 1A–F). In addition, we investigated the expression level of CD105 (a TGF-β accessory receptor) in MLPCs. We found that CD105 is expressed at a high level in MLPCs under control conditions (relative mRNA level is 0.156, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL led to a significant downregulation of CD105 mRNA levels to 61.7% and 70.8% of that of controls, respectively (P<0.05, Figure 2).", "Two EC-specific markers, VE-cadherin and VEGFR-2, were detected in MLPCs under control conditions. After 7 days of exposure to TGF-β1 at 1 ng/mL, there was a significant reduction in VE-cadherin and VEGFR-2 mRNA levels to 15.4% and 70.8% of control levels, respectively. TGF-β1 at 5 ng/mL also significantly decreased VE-cadherin and VEGFR-2; mRNA levels were 21.8% and 66.1% of controls, respectively (P<0.05, Figure 3A and B).", "We investigated the expression of CD34, a marker of endothelial progenitor cells, in MLPCs. CD34 was expressed at a low level in MLPCs under control conditions (relative mRNA level is 2.34E-05, data not shown). After 7 days, the addition of TGF-β1 at 1 ng/mL and 5 ng/mL to the mesenchymal stem cell culture media significantly decreased the mRNA level of CD34 to 28.7% and 25.1% of control levels, respectively (P<0.05, Figure 4).", "We also tested the expression of CD146 in MLPCs. The mRNA level of CD146 was low in MLPCs under control conditions (relative mRNA level is 3.8E-05, data not shown). After 7 days, TGF-β1 at 1 ng/mL and 5 ng/mL significantly increased the mRNA level of CD146 to 2430% and 2605% of that of the controls, respectively (P<0.05, Figure 5).", "MLPCs originate from human umbilical cord blood, are karyotypically normal, demonstrate proliferative features, and are highly pure. 
In the present study, we observed the differentiation of MLPCs to SMC lineage in the presence of TGF-β1 as evidenced by an upregulation of the mRNA levels of both early and late phase SMC-specific markers and adhesion molecule CD146. Two EC markers, as well as EPC marker CD34 and TGF-β1 cell surface accessory protein CD105, were decreased in TGF-β1 treated MLPCs. As such, these data suggest that MLPCs represent a promising tool for the study of SMC differentiation.\nQuiescent vascular SMCs exhibit a phenotype characterized by the expression of several contractile apparatus-associated proteins; these are used as markers to identify SMC lineage differentiation. We checked both early phase markers (SM α-actin, SM22α) and late phase markers (MLCK, calponin-1, caldesmon, and tropomyosin) for SMC differentiation in MLPCs. We found that treatment with TGF-β1 led to a dramatic increase in the mRNA levels of all the tested markers in MLPCs after 7 days of culture; this strongly indicates the differentiation of MLPCs into SMC lineage in the presence of TGF-β1. Chen S et al. [5] reported that TGF-β1 (5 ng/mL) increases the expression of SM α-actin, SM22α, calponin-1 and myosin in the neural crest stem cell line Monc-1 in vitro. Lien SC et al. [7] also found that TGF-β1 (2 ng/mL) induces SM α-actin, SM22α and smooth muscle myosin heavy chain (SMMHC) in 10T1/2 mesenchymal cells in vitro. In our study, the mRNA levels of all the tested SMC markers in MLPCs were upregulated to a comparable level with TGF-β1 treatment when compared to those found in neural crest stem cell line (Monc-1) or 10T1/2 mesenchymal cells.\nThe signaling pathway underlying TGF-β1-induced SMC-specific marker expression in MLPCs was not investigated in this study. Sinha et al. [8] investigated differential TGF-β-Smad signaling for early versus late SMC marker expression; SM α-actin promoter activity was found to be dependent on both Smad 2 and Smad 3 whereas smooth muscle myosin heavy chain (SMMHC) activity is Smad2 dependent in mouse embryonic stem cell. Chen S et al. [5] also found that TGF-β increased SM α-actin and SM22α in the neural crest stem cell line through activation of Smad2 and Smad3. In addition, RhoA was reported to be essential in Smad signaling during TGF-β-induced SM a-actin, SM22α and calponin expression in the neural crest stem cell line Monc-1 [12]. Meanwhile, the PI3 kinase/Akt pathway was found to be involved in TGF-β1-induced SM α-actin, SM22α and SMMHC expression in 10T1/2 mesenchymal cells [7].\nWe determined the expression of CD105 (endoglin) in the current study. CD105 is a transmembrane accessory receptor for TGF-β. By forming a heteromeric complex with distinct TGF-β receptors, it modulates the access of TGF-β to the signaling complex and the postponed cellular responses to TGF-β [13,14]. We found that CD105 is expressed at a high level in MLPCs in the control condition (relative mRNA level is 0.156), which may facilitate TGF-β1/TGF-β receptor signaling transduction. Over a long duration of TGF-β1 treatment (7 days), expression of CD105 was decreased, which may be due to a negative feedback mechanism. The specific signaling mechanism involved in TGF-β1-induced differentiation of MLPCs into SMC will need to be further elucidated.\nProgenitor cells are primitive cells that possess the capacity to differentiate into multiple lineages under different conditions [15,16]. 
Evidence has shown that progenitor cells from human peripheral blood can differentiate into EC when stimulated by shear stress [17] and into SMCs in conditions of cyclic strain [18]. Hence, we investigated the expression of two EC-specific markers to confirm that the potential for EC lineage differentiation by the MLPCs was suppressed under the designated conditions of this study. We found that the mRNA levels of VE-cadherin and VEGFR-2 were significantly decreased by TGF-β1 on day 7 of culture. VEGFR-2 is the pivotal receptor mediating the mitogenic action of VEGF; it plays an essential role in angiogenesis, neovascularization and EPC differentiation [19]. VE-cadherin is another EC-specific marker which has been shown to play an important role in vasculogenesis and angiogenesis. The downregulated VE-cadherin and VEGFR-2 observed in the MLPCs provide further confirmation that TGF-β1 induced SMC lineage differentiation in the MLPCs. Similarly, Chen S et al. [5] found that TGF-β1 reduced epithelial markers while inducing SMC-specific markers in the neural crest stem cell line Monc-1. The signaling pathways underlying TGF-β1-decreased EC markers remain unclear. Watabe et al. [20] reported that TGF-β inhibited proliferation and sheet formation of embryonic stem cell (ESC)-derived ECs. Stimulation of ESC-derived ECs with TGF-β resulted in phosphorylation of both Smad2 and Smad1/5. The specific signaling mechanism underlying the TGF-β1-induced reduction in EC markers in MLPCs needs to be further studied.\nIn addition to EC markers, we tested the EPC marker CD34 in MLPCs. CD34 has been used as a hematopoietic stem cell marker; endothelial progenitor cells are characterized by the expression of CD34, CD133 and VEGFR-2 [21]. In this study, CD34 was expressed at a low level in MLPCs under the control conditions, and expression was further reduced by TGF-β1 treatment; this suggests that the tendency for EC lineage differentiation by the MLPCs was suppressed under the designated conditions. This is in accordance with a previous finding that CD34 expression was downregulated during TGF-β1 induced myofibroblast differentiation [22]. Our data provide further confirmation for SMC lineage differentiation in MLPCs. The specific mechanisms whereby TGF-β1 mediates this phenomenon remain unclear.\nCD146, melanoma cell adhesion molecule, is also expressed in SMC and plays an important role in vasculogenesis and embryo development [23]. CD146, when activated, induces association of p59fyn with CD146, resulting in the phosphorylation of p125FAK and its binding with paxillin; this finding suggests that CD146 is involved in outside-in signaling and may contribute to focal adhesion assembly, reorganization of the cytoskeleton, intercellular interaction, maintenance of cell shape, and control of cell migration and proliferation [24]. In this study, CD146 was significantly increased by TGF-β1 treatment in MLPCs, which may facilitate signal transduction by TGF-β1 and contribute to the differentiation of MLPCs into mature SMC.", "In conclusion, a variety of SMC-specific markers, including early and late phase markers, were dramatically increased in MLPCs treated with TGF-β1. Meanwhile, two EC-specific markers as well as the EPC marker CD34 were significantly decreased. These data strongly indicate that MLPCs differentiate into SMC lineage cells in the presence of TGF-β1. MLPCs are karyotypically normal, non-transformed, non-immortalized cells that are obtained from post-partum human umbilical cord blood.
Because they have been expanded from a single cell and have the capacity to differentiate into multiple lineages, they are highly pure and proliferative. MLPCs offer significant advantages over other currently used cell models, such as 10T1/2 cells, the neural crest stem cell line Monc-1 and SMC progenitor cells from human peripheral blood for the study of SMC differentiation. This study demonstrates a novel cell model for SMC lineage differentiation analysis, which may increase our understanding of SMC differentiation and help contribute to the field of vascular repair and tissue engineering." ]
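The methods recorded above validate each primer pair with a standard curve built from serial cDNA dilutions (50, 10, 2, 0.4 and 0.08 ng), fitted by least-squares regression, and report correlation coefficients and PCR efficiencies above 90%. The short sketch below reproduces that calculation under stated assumptions; it is not the authors' code, the Ct values are placeholders rather than measured data, and the helper name fit_standard_curve is ours.

# Minimal sketch (assumptions, not the study's script): fit Ct against log10(input cDNA)
# for a qPCR standard curve and derive slope, R^2 and amplification efficiency,
# where efficiency E = 10^(-1/slope) - 1 (1.0 corresponds to 100%).

import math

def fit_standard_curve(input_ng, ct_values):
    x = [math.log10(q) for q in input_ng]
    y = ct_values
    n = len(x)
    x_mean, y_mean = sum(x) / n, sum(y) / n
    sxx = sum((xi - x_mean) ** 2 for xi in x)
    sxy = sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
    syy = sum((yi - y_mean) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = y_mean - slope * x_mean
    r_squared = (sxy ** 2) / (sxx * syy)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, r_squared, efficiency

if __name__ == "__main__":
    dilutions_ng = [50, 10, 2, 0.4, 0.08]          # serial dilutions named in the methods
    example_ct = [18.2, 20.6, 22.9, 25.3, 27.6]     # hypothetical mean Ct values
    slope, intercept, r2, eff = fit_standard_curve(dilutions_ng, example_ct)
    print(f"slope={slope:.3f}, intercept={intercept:.2f}, R^2={r2:.3f}, efficiency={eff:.1%}")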
[ null, "materials|methods", null, null, null, "methods", "results", null, null, null, null, "discussion", "conclusions" ]
[ "multi-lineage progenitor cell", "transforming growth factor β1", "smooth muscle cell", "differentiation" ]
Sexual dimorphism in the effect of concomitant progesterone administration on changes caused by long-term estrogen treatment in pituitary hormone immunoreactivities of rats.
21358595
Since in clinical practice long-term estrogen (E) treatment is frequently applied, our aim was to study the effect of concomitant progesterone (P) administration on changes caused by long-term estrogen treatment in the secretion of LH, FSH, PRL and GH.
BACKGROUND
Diethylstilbestrol (DES), P or both in silastic capsules were implanted under the skin of prepubertal Sprague-Dawley male and female rats. Animals survived for two or five months. We have also studied whether the changed hormone secretion caused by DES can return to normal level 1 or 2 months after removing DES capsule.
MATERIAL/METHODS
1.) Males responded more rapidly than females with decreasing basal LH release upon treatment. The basal FSH release was decreased only in males. The effect of DES persisted in males; however, in females basal LH and FSH levels were upregulated after removal of the DES capsule. 2.) The basal GH levels were low in each group. Body weight and length were depressed by DES in both genders, and P only slightly blunted this effect. Body weight and length in males remained low after removal of the DES capsule, whereas in females they were nearly similar to those of intact rats. 3.) There was no sexual dimorphism in the effect of steroids on PRL secretion. In both genders DES markedly enhanced PRL levels, and P prevented this effect of DES. PRL levels returned to intact values after removal of the DES influence. 4.) Removal of the DES capsule reversed the changes in the immunohistochemical appearance of hormone immunoreactivities.
RESULTS
There was sexual dimorphism in the change of basal gonadotropic hormone and GH secretion but not of PRL upon DES and DES+P treatments. P was basically protective and this role may be mediated by P receptors locally in the pituitary gland.
CONCLUSIONS
[ "Animals", "Biometry", "Body Weight", "Estrogens", "Female", "Follicle Stimulating Hormone", "Growth Hormone", "Immunohistochemistry", "Luteinizing Hormone", "Male", "Organ Size", "Pituitary Gland", "Pituitary Hormones", "Progesterone", "Prolactin", "Rats", "Rats, Sprague-Dawley", "Sex Characteristics", "Time Factors", "Vaginal Smears" ]
3524720
null
null
New data are as follows
There is sexual dimorphism in the change of basal LH and FSH release upon DES and combined DES+P treatments. The male rats more rapidly responded than female rats with decreasing basal LH release upon E and E+P treatments. The effect of P was ambiguous. It was protective in five-month treated females but not in males. The basal FSH release was decreased only in male rats and later it was recovered. Neither LH, FSH nor the weight of testes and seminal vesicles restored in male rats after the removal of DES capsule. However, the LH level and the weight of seminal vesicles were not significantly lower than in intact rats. It means that the effect of DES persisted in male. However, in females both basal LH and FSH levels were rather upregulated and the low ovarian weight returned to the control level. There was no sexual dimorphism in the effect of long-term steroid treatments on PRL secretion. In both female and male rats the PRL release similarly enhanced and it was partially prevented after two month and completely after 5-month survival by concomitant P treatment. Removal of DES influence completely restored the PRL release by the end of further 2-month survival. There was a well definied parallel change between the PRL levels and weight of anterior pituitary that was extremly enhanced in DES treated rats and then the enlargement of pituitaries was completely prevented by concomitant P treatment in the case of 5-month survival. A mild sexual dimorphism was also observed in GH level. It was significantly lower only in DES and DES+P treated female, not in male rats with two-month survival. The BW and B+TL were very much depressed by DES treatment in both female and male rats and P little blunted but not prevented this effect. The BW and B+TL in male rats remained low after the removal of DES capsule; however, in females it was nearly similar to intact rats. P alone had no significant effect on the parameters studied; however, the concomitant P treatment with DES was usually protective. Removal of DES capsule reversed the changes in the immunohistochemical appearance of LH, FSH, PRL and GH immunoreactivities. It is well known that GnRH and LH are released in a cyclic pattern in females but not in male rats. The neural LH release apparatus was demonstrated at the first time by Everett and Sawyer [45,46]. A sexual dimorphism of prepubertal rats in GnRH and LH responses to steroid treatment was also demonstrated in acute experiments. Dluzen and Ramirez [47] showed that E treated prebubertal females displayed a significant change in GnRH concentrations in the preoptic-suprachiasmatic area upon P treatment; however, males did not show any change. Sexual dimorphism was also observed in the gonadotropin α-subunit promoter activity to GnRH [48]. In the present study we have found sexual dimorphism in the response of basal LH level to long-term DES and P treatments as well. In our model we did not find any sexual dimorphism in the effect of long-term steroid treatments on PRL secretion and on the weight of pituitaries; however, in other models some sexual dimorphism in the basal level of PRL was observed. It was published earlier that around the time of puberty females had higher PRL mRNA level than males [49], and the weight and PRL level of anterior pituitary in prepubertal females is significantly higher in females than in males [50]. 
In ovariectomized rats P implanted in silicone tubes subcutaneously induced nocturnal PRL surge, similar P sensitive central mechanism was not observed in adult male rats [51]. A sexual dimorphism was found in the plasma GH pattern and regulation as well. In the above-mentioned experiment it was also demonstrated that around the puberty the males had higher level of GH mRNA than females [49]. Neonatal masculinization happens in neonatal males which determines the secretory pattern of GH. The masculine type (high infrequent GH pulses with low plasma GH levels in between) promotes growth more effectively than the feminine type (an intermediate, rather constant level of plasma GH) [52]. In our model long-term DES treatment similarly depressed the body growth in both male and female rats indicating that the effect of DES is not mediated through GH. There are data which show that chondrocytes in the epiphyseal plate express ERs [53]. And in this relation the effect of E does not depend on a central mechanism. P alone slightly depressed the body weight of both sexes in the case of five-month treatment. There were no other significant change. In the literature we did not find as long combined treatment as ours. That is why difficult to draw paralell to others and our experiments. More data are available about the protective role of P when it is administered with E. The models are very different from what we used. Genazzani and his co-workers [24] studied the role of P and various synthetic progestins in the modulation of the effect of E on hypothalamic GnRH and on pituitary LH and PRL concentrations in ovariectomized rats. And they used only two-week steroid treatment. It was found that P and norethisterone enanthate (NET, synthetic progestin) reversed EB induced hypothalamic GnRH depression, that is elevated its level. Both P and NET blocked the EB induced increase of pituitary LH, and plasma LH levels remained low. Progestins alone did not influence the PRL levels but reversed the EB induced increase. Cho and his co-workers [54] also demonstrated that P suppressed E-enhanced PRL mRNA level. In our previous experiment it was found that the DES treatment changed not only the number and distribution of tropic hormone producing cells, but the distribution of S-100 immunoreactive folliculostellate cells (FSCs) as well. Inside the prolactinomas FSCs were scattered but they surrounded the prolactinomas forming a demarcation line around them [55]. Inside the prolactinomas there were only a few GH cells. When DES was implanted with P the changes, characteristic for DES treatment, could not develop. Concomitant P influence prevented the morphological changes in the anterior pituitary. The distribution of FSCs and GH cells was very similar to that of controls. In this present experiment in male rats DES permanently injured the gonadotropic functions. After the removal of DES capsule LH and FSH levels, the weight of testis and seminal vesicle remained lower than in controls. Interestingly, in females there was no permanent depression in basal LH and FSH levels and ovarian weight. Rather, both LH and FSH were upregulated. In the literature there is no animal experiment in which so long steroid treatment was applied and plasma hormone levels were measured. In human medication sexual steroids are used for months to syncronize the ovarian cyclicity, functional dysmenorrhea, premenstrual syndrome, endometriosis and to limit the adult height of girls [56,57]. 
The question arises as to how E induces, and P prevents, the changes caused by long-term E treatment. As mentioned in the introduction, ERs and PRs are present not only in the hypothalamus but in the anterior pituitary as well. Steroids can act locally through these receptors on pituitary cells. In a recent publication it was found that ERα is localized in the largest number in lactotropes, and in decreasing numbers in somatotropes, thyrotropes and gonadotropes. The number of ERβ was much lower and it was identified in somatotropes, lactotropes and gonadotropes [4]. PRA immunoreactivity was seen in the nuclei of gonadotropes [7]. It is also possible that both E and P act through the hypothalamus as well. The distribution of these receptors was recently demonstrated by several authors. Merchenthaler and his co-workers [11] very precisely mapped the distribution of both ERα and ERβ in the hypothalamus. It was found that the density of ERα is very high in the arcuate nucleus and periventricular area, and the density of ERβ is high in the paraventricular nucleus and the medial preoptic area, where ERβ immunoreactivity colocalizes with LHRH immunoreactivity [58]. In the arcuate nucleus, colocalization between ERα and GHRH immunoreactivity, but not between ERα and somatostatin, was published by Shimizu et al. [59]. PR-containing neurons were demonstrated in the ventrolateral and in the medial part of the arcuate nucleus [10]. The mechanism by which E stimulates cell proliferation in the anterior pituitary has been thoroughly studied [60,61]. It was found that E, through ERα, induces production of TGF-β3 from lactotropes, and this stimulates the bFGF production of FSCs, which has a proliferative effect on lactotropes. The mechanism by which concomitant P treatment prevents the E-induced proliferation is not fully explored. Calderon and his co-workers [62] used ovariectomized immature rats to analyze the action of P on the response of the anterior pituitary and hypothalamus to estradiol exposure. E induced the nuclear accumulation of ERs and the appearance of PRs, reaching a peak at 12 hours, which then declined to a plateau near the control level. If P was administered to the animals at the peak of PRs, subsequent nuclear accumulation of ER caused by estradiol injection was suppressed. This was observed only in the anterior pituitary, not in the hypothalamus. They concluded that P affected the response to estradiol in the pituitary gland with a well-defined temporal pattern, whereas with the same protocol it had no effect in the hypothalamus.
Results
[SUBTITLE] Vaginal smears [SUBSECTION] In intact rats the vaginal membrane opened on 31.36±0.80 day of postnatal life. In all animals implanted with capsule (independently of their hormone content) the vaginal membrane opened earlier. In DES treated rats it happened much earlier than in intact animals (27.50±0.31, p<0.0001), although in animals implanted with ecap, DES+P or P alone the opening was also significantly shifted earlier than in intact rats, 28.36±0.47 (p<0.003) 28.06±0.34 (p<0.0005) and 29.09±0.51 (p<0.03) day of postnatal life, respectively. Upon DES treatment in the vaginal smear persistent estrus was observed. DES+P did not interrupt the cyclicity but it was irregular and metestrus predominated, ecap and P alone had no effect. In intact rats the vaginal membrane opened on 31.36±0.80 day of postnatal life. In all animals implanted with capsule (independently of their hormone content) the vaginal membrane opened earlier. In DES treated rats it happened much earlier than in intact animals (27.50±0.31, p<0.0001), although in animals implanted with ecap, DES+P or P alone the opening was also significantly shifted earlier than in intact rats, 28.36±0.47 (p<0.003) 28.06±0.34 (p<0.0005) and 29.09±0.51 (p<0.03) day of postnatal life, respectively. Upon DES treatment in the vaginal smear persistent estrus was observed. DES+P did not interrupt the cyclicity but it was irregular and metestrus predominated, ecap and P alone had no effect. [SUBTITLE] Effect of treatments on the body weight and body length [SUBSECTION] Table 1 shows BW and B+TL in various groups of female and male rats 2 and 5 months after implantation. In DES and DES+P implanted animals BW and B+TL were significantly lower than in control rats 2 months after implantation; however, 5 months after implantation P moderated the effect of DES. P alone slightly decreased BW only in the case of 5-month treatment. When we removed the DES capsule BW and B+TL could not return to intact levels, but the difference was not significant in B+TL of females. Table 1 shows BW and B+TL in various groups of female and male rats 2 and 5 months after implantation. In DES and DES+P implanted animals BW and B+TL were significantly lower than in control rats 2 months after implantation; however, 5 months after implantation P moderated the effect of DES. P alone slightly decreased BW only in the case of 5-month treatment. When we removed the DES capsule BW and B+TL could not return to intact levels, but the difference was not significant in B+TL of females. [SUBTITLE] Effect of treatments on the weight of the endocrine organs [SUBSECTION] Table 2 shows the weight of the anterior pituitary and the ovaries in female rats. The weight of anterior pituitaries increased upon DES and DES+P treatments in the case of 2-month survival, but in the case of 5-month treatment P reversed the weight gain. When the DES capsule was removed the weight of pituitaries returned to control level. DES and DES+P extremly decreased the weight of ovaries after 2-month steroid treatment. In the case of 5-month survival P prevented the effect of DES and P alone slightly decreased the weight of ovaries. When DES capsule was removed the weight of ovaries returned to control levels. Table 3 shows the weight of the anterior pituitary, testes and seminal vesicles of male rats. Similarly to female rats the weight of anterior pituitaries increased upon DES and DES+P treatment in the case of 2-month survival time, but in the case of 5-month survival P reversed the weight gain. 
DES and DES+P treatments extremly decreased the weight of testes and seminal vesicles, P alone enhanced those in the animals having 2-, but not 5-month survival time. Removal of DES capsule partially, but not completely, restored the weight of testes and seminal vesicles. Table 2 shows the weight of the anterior pituitary and the ovaries in female rats. The weight of anterior pituitaries increased upon DES and DES+P treatments in the case of 2-month survival, but in the case of 5-month treatment P reversed the weight gain. When the DES capsule was removed the weight of pituitaries returned to control level. DES and DES+P extremly decreased the weight of ovaries after 2-month steroid treatment. In the case of 5-month survival P prevented the effect of DES and P alone slightly decreased the weight of ovaries. When DES capsule was removed the weight of ovaries returned to control levels. Table 3 shows the weight of the anterior pituitary, testes and seminal vesicles of male rats. Similarly to female rats the weight of anterior pituitaries increased upon DES and DES+P treatment in the case of 2-month survival time, but in the case of 5-month survival P reversed the weight gain. DES and DES+P treatments extremly decreased the weight of testes and seminal vesicles, P alone enhanced those in the animals having 2-, but not 5-month survival time. Removal of DES capsule partially, but not completely, restored the weight of testes and seminal vesicles. [SUBTITLE] Effect of treatments on the classical pituitary hormone immunoreactivities [SUBSECTION] [SUBTITLE] Radioimmunoassay [SUBSECTION] The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P treated rats (Figure 1E). After the removal of DES capsule the LH level remained lower than in controls for a month, and later it was similar to the control values (Figure 1F). The basal FSH level in female rats was not significantly influenced in groups of 2- or 5-month survival (Figure 2A,B); but 2 months after the removal of DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D) but five months after the implantation the FSH levels were similar in each group (Figure 2E), likewise the females. After the removal of DES capsule the FSH levels remained lower and did not return to control values (Figure 2F). The basal PRL levels showed very similar pattern in both female and male rats. DES extremly enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A, D) and five months after implantation P completely reversed the effect of DES (Figure 3B, E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). Removal of DES capsule gradually restored the PRL levels (Figure 3C,F). The basal GH levels were very contradictory (Figure 4A–F). 
In females DES and DES+P treatment decreased the level (Figure 4A,B); however, it was significant only after 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A) but it decreased after 5 months (Figure 4B). When the DES capsule was removed GH levels reached the control value in one month and by the end of 2-month survival after removal and it was even higher than in controls (this elevation was not statistically significant) (Figure 4C). In male rats we did not find significant difference in any groups (Figure 4D,F). The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P treated rats (Figure 1E). After the removal of DES capsule the LH level remained lower than in controls for a month, and later it was similar to the control values (Figure 1F). The basal FSH level in female rats was not significantly influenced in groups of 2- or 5-month survival (Figure 2A,B); but 2 months after the removal of DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D) but five months after the implantation the FSH levels were similar in each group (Figure 2E), likewise the females. After the removal of DES capsule the FSH levels remained lower and did not return to control values (Figure 2F). The basal PRL levels showed very similar pattern in both female and male rats. DES extremly enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A, D) and five months after implantation P completely reversed the effect of DES (Figure 3B, E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). Removal of DES capsule gradually restored the PRL levels (Figure 3C,F). The basal GH levels were very contradictory (Figure 4A–F). In females DES and DES+P treatment decreased the level (Figure 4A,B); however, it was significant only after 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A) but it decreased after 5 months (Figure 4B). When the DES capsule was removed GH levels reached the control value in one month and by the end of 2-month survival after removal and it was even higher than in controls (this elevation was not statistically significant) (Figure 4C). In male rats we did not find significant difference in any groups (Figure 4D,F). [SUBTITLE] Immunohistochemistry [SUBSECTION] The quantitative measurements obtained with RIA were confirmed with the qualitative observations provided by immunohistochemistry. The quantitative measurements obtained with RIA were confirmed with the qualitative observations provided by immunohistochemistry. [SUBTITLE] Distribution of immunoreactive cells in intact rats [SUBSECTION] In the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). 
The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone that is in the anterior pole of the anterior pituitary (Figure 5A–D). PRL is evenly distributed all over the anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent in the anterior pole of intact rats. Everywhere else they are evenly distributed (Figure 5G,H). In the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone that is in the anterior pole of the anterior pituitary (Figure 5A–D). PRL is evenly distributed all over the anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent in the anterior pole of intact rats. Everywhere else they are evenly distributed (Figure 5G,H). [SUBTITLE] Distribution of immunoreactive cells in steroid treated rats [SUBSECTION] Steroid treatments differently influenced the immunoreactivity of the four hormone producing cells. The number of LH and FSH cells decreased in both female and male rats. The effect of DES was more pronounced five than two months after DES implantation. Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES treated female rats after 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented by P (Figure 5K,L). It seemed that only the density of gonadotropes in the rostral pole of the anterior lobe was lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of both sexes (not shown). DES enhanced the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES treated rats the cells are large, ovoid or rounded, and their diameter is nearly double than that of intact rats already after two-month steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival). In these animals the size of the cells were only moderately enhanced compared to control rats and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months prolactinomas developed. Figure 5Q shows a well developed prolactinoma. P prevented the effect of DES on PRL. The immunostaining was similar to control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown). GH cells did not show a pronounced change either in male or female rats upon 2-month DES treatment but the distribution of these cells was very characteristic in animals with 5-month DES treatment. GH cells were completely missing in areas resembling prolactinomas (Figure 5S). Double labeling immunohistochemistry confirmed this observation (Figure 5T). In DES+P treated rats GH immunostaining was very similar to intact rats. The only difference was observed in the distribution of GH cells. They could be also observed in the anterior pole of the anterior pituitary gland (Figure 5U), this is the region where they are missing in intact rats. Implantation of ecap did not alter the immunostaining of pituitary tropic hormone producing cells. Steroid treatments differently influenced the immunoreactivity of the four hormone producing cells. The number of LH and FSH cells decreased in both female and male rats. 
The effect of DES was more pronounced five than two months after DES implantation. Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES treated female rats after 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented by P (Figure 5K,L). It seemed that only the density of gonadotropes in the rostral pole of the anterior lobe was lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of both sexes (not shown). DES enhanced the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES treated rats the cells are large, ovoid or rounded, and their diameter is nearly double than that of intact rats already after two-month steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival). In these animals the size of the cells were only moderately enhanced compared to control rats and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months prolactinomas developed. Figure 5Q shows a well developed prolactinoma. P prevented the effect of DES on PRL. The immunostaining was similar to control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown). GH cells did not show a pronounced change either in male or female rats upon 2-month DES treatment but the distribution of these cells was very characteristic in animals with 5-month DES treatment. GH cells were completely missing in areas resembling prolactinomas (Figure 5S). Double labeling immunohistochemistry confirmed this observation (Figure 5T). In DES+P treated rats GH immunostaining was very similar to intact rats. The only difference was observed in the distribution of GH cells. They could be also observed in the anterior pole of the anterior pituitary gland (Figure 5U), this is the region where they are missing in intact rats. Implantation of ecap did not alter the immunostaining of pituitary tropic hormone producing cells. [SUBTITLE] Effect of the removal of DES capsule on the distribution of immunoreactive cells [SUBSECTION] The removal of DES capsule from the animals restored the density and distribution of all the four hormon producing cells already in one month. One or 2 months after the removal of DES capsule the appearance of immunostaining was similar to those of intact rats (Figure 5V–X). The removal of DES capsule from the animals restored the density and distribution of all the four hormon producing cells already in one month. One or 2 months after the removal of DES capsule the appearance of immunostaining was similar to those of intact rats (Figure 5V–X). [SUBTITLE] Radioimmunoassay [SUBSECTION] The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). 
Conclusions
It seems that there is cross-talk between ERs and PRs. In our protocol the implanted silastic capsules ensured consistently higher blood levels of both DES and P than in controls for 5 months. How ER and PR levels changed during this long period is not known, and there are no data in the literature describing the levels of these receptors in such a long-term experiment. There are several possibilities to explain the protective effect of P: 1) PRα is present in gonadotropes, and P binding may stimulate these cells to influence E binding to lactotropes and to prevent the production of TGF-β3; 2) ERs in lactotropes may have P responsive elements, and P could bind directly to these receptors to prevent the production of TGF-β3; 3) an effect of P through the hypothalamus is not excluded, but the above-mentioned experiment indicates that P acts on the pituitary locally rather than through the hypothalamus.
[ "Background", "Animals", "Preparation of the hormone containing capsules", "Experimental protocol", "Radioimmunoassay (RIA)", "Immunohistochemistry", "Vaginal smears", "Effect of treatments on the body weight and body length", "Effect of treatments on the weight of the endocrine organs", "Effect of treatments on the classical pituitary hormone immunoreactivities", "Radioimmunoassay", "Immunohistochemistry", "Distribution of immunoreactive cells in intact rats", "Distribution of immunoreactive cells in steroid treated rats", "Effect of the removal of DES capsule on the distribution of immunoreactive cells" ]
[ "It is well known that sexual steroids, estrogen (E) and progesterone (P), among other factors regulate the secretion of pituitary tropic hormones directly acting on pituitary cells and through the central nervous system by a feed-back mechanism. It was demonstrated two decades ago that diethylstilbestrol (DES) induced appearance of prolactin (PRL)-secreting tumors [1].\nEstrogen and progesterone receptors (ERs and PRs) were identified in various pituitary hormone secreting cells and in various hypothalamic and extrahypothalamic structures. Both ER and PR have two isoforms. ERα was identified by Koike and his co-workers [2], ERβ was cloned by Kuiper and his coworkers [3]. In a recent publication it was found that ERα is localized in the largest number in lactotropes, and in decreasing number in somatotropes, thyrotropes, gonadotropes. The number of ERβ was much lower and it was identified in somatotropes, lactotropes and gonadotropes. Immunoreactivity of both receptors was present in the nucleus, rough endoplasmic reticulum, secretory granules and in the free cytosol. The intracellular localization was dependent on the stage of estrous cycle [4]. In another study it was shown that ERβ was expressed by folliculus stimulating hormone (FSH) cells [5]. A and B isoforms of PR were demonstrated in the anterior lobe using RT-PCR technique [6] and immunohistochemistry [7]. The predominant form is the A-isoform (PRA). In rat PR immunostaining was seen in the nuclei of gonadotropes. Both ERs and PRs were also shown in the central nervous system including hypothalamic structures [8–11].\nBesides classical hormones, the anterior pituitary also produces many other substances such as neurotransmitters, neuropeptides, and growth factors [12]. The classical pituitary hormones act far from the site of production and reach their target tissues through the general circulation. The small peptide molecules act in auto- and paracrine manners [13,14].\nThe effect of E and P on rat pituitary tropic hormon levels was investigated in in vivo and in vitro models [7,15–18]. In our previous experiment [19] long-term E treatment (implantation of DES capsule under the skin of neck) depressed the number of luteinizing hormone (LH) and thyroid stimulating hormone (TSH) cells, it enhanced the number of cells (the cells formed prolactinoma), it enhanced the size of growth hormone (GH) cells and it did not influence the number of adrenocorticotrop (ACTH) cells. DES extremly enhanced the number of vasoactive intestinal poly-peptide (VIP) cells and induced appearance of VIP-oma. It was also demonstrated that E enhanced the VIP expression in the anterior pituitary by six times as compared with intact rats [20].\nWith the use of haemolytic plaque assay it was demonstrated that a chronic E treatment increased the percentage of PRL secreting cells and the amount of secreted PRL; however, there was a decrease in both the percentage of GH secreting cells and the amount of secreted GH. Following the haemolytic plaque assay in situ hybridization revealed that estradiol β (E2) increased the PRL mRNA while it decreased the GH mRNA [16]. This research group did not find any colocalization between GH and PRL expression in control male rat pituitary cell culture; however, in E2 treated culture about 10% of PRL secreting cells contained GH mRNA as well. 
It was also demonstrated that the effect of E in the pituitary is mediated through ERα [21].\nThe data in the literature concerning the effect of P on pituitary hormone secretion is controversial and not so abundant. The protective effect of P on E inducing undesired changes was observed about twenty-five years ago. It was demonstrated that P was able to reduce the size of E induced mammary tumor [22]. The protective role of P is also used to prevent preterm birth when being applied as vaginal suppository [23]. Genazzani and his co-workers [24] analysed how P and various synthetic progestins modulate the effect of E on hypo-thalamic gonadotropin-releasing hormone (GnRH) and on pituitary LH and PRL concentrations in ovariectomized rats. Two-week steroid treatments were applied. It was found that P and norethisterone enanthate (NET, synthetic progestin) reversed the depression of hypothalamic GnRH induced by the estradiol benzoate (EB) and elevated its level. Both P and NET blocked the EB induced increase of pituitary LH, and the plasma LH levels remained low. Progestins alone did not influence the PRL levels but reversed the EB induced increase.\nIn clinical practice sexual steroid treatment is a very common medicament. Steroid hormone contraception was suggested about seventy years ago [25]. It is well known that administration of E, P or the combination of E and P prevents the LH surge and the ovulation [26–29]. Sexual steroids are also applied to synchronize the ovarian cyclicity, to maintain the endometrium during pregnancy [30,31]. E is even recently used as replacement therapy after surgical removal of ovaries or in menopause as well [32–34]. In men E is used to treat prostatic cancer [35].\nThe aim of our present work was to further clarify the way how the concomitant P administration affected the changes in the anterior pituitary caused by long-term estrogen treatment. Long-term steroid treatments in experimental animals, similar to those applied in the human practice was rarely found in the literature. We decided to imitate the long-term clinical treatments. Two- and five-month survivals after implantation of hormones were chosen, both are long time in the life of rats. The animals were three and six-month-old when they were sacrificed. Three-month-old rat is a relatively young adult and the six-month-old rats are old. The continuous hormone influence was provided by the implantation of DES and P containing capsules. At the end of experimental periods we have examined LH, FSH, PRL and GH immunoreactivities in the anterior pituitary gland and basal LH, FSH, PRL and GH plasma levels in rats of both sexes. We have also examined whether removal of DES capsule two months after implantation can reverse the DES induced changes one or two months following the removal.", "Sprague-Dawley female and male rats were used for our experiments. They were kept in a light (light on at 5 h and off at 19 h) and temperature (22±2°C) controlled vivarium and fed with standard lab chow and water ad libitum. The treatment of the animals was in accordance with the rules of the „European convention for the protection of vertebrate animals used for experimental and other scientific purposes”, Strasbourg, 1986. Our protocol was approved by the Department of Animal Health Care, Budapest. Permission number: 37/1999. 
When the animals were 25 day-old they received empty capsule (ecap), DES, DES + P or just P containing silastic capsule implanted under the skin of the neck in the subcutaneous tissue.", "Silastic capsules (id 1.55 mm, od 3.13 mm, length 10 mm) (Dow Corning Corporation, Midland, MI) were filled with DES (Sigma, St. Louis, Mo) or P (Sigma) and the ends of the capsules were sealed by Szilorfix (Finomvegyszer Szövetkezet, Budapest, Hungary), a silicon based glue. A day after drying, the capsules were used for implantation. The steroids can pass through the wall of the capsule and can exert its effect through the general circulation [36].", "The animals were grouped as follows.\nExperiment 1.\nintact (negative control) rats,\necap (sham operated rats receiving ecap sealed with glue),\nDES implanted rats,\nDES+P implanted rats,\nP implanted rats.\n85 female and 79 male rats were included in this experiment. The survival time after implantation was 2 months.\nExperiment 2.\nSimilar groups were included in this experiment (56 females and 56 males) as described in Experiment 1; however, the survival time was 5 months.\nExperiment 3.\nFrom 35 DES implanted animals (18 females and 17 males) the DES capsules were removed. Half of these animals was sacrificed 1 month after the removal and the other half was sacrificed 2 months after removal. Age matched controls were also included in all the groups (17 females and 16 males). The data obtained in these groups were compared with those of DES implanted rats and their age matched controls of Experiment 1.\nIn female rats the opening of the vaginal membrane was recorded. Then vaginal smear was taken daily for two weeks and for another two weeks before sacrificing. Females were handled in this way. The male rats were transferred from their cage to another and back every day for two weeks before sacrificing (handling) to blunt the stress effect.\nAt the end of the experimental period the body weight (BW) and body and tail length (B+TL) were measured and the animals were sacrificed by decapitation between 9 am and 11 am. The trunk blood was collected, its coagulation was prevented by EDTA (ethylene diamine tetraacetic acide sodium salt, Serva, Heidelberg, Germany). Plasma was used to determine hormone levels. Endocrine organs were weighted. Anterior pituitaries were immersed in Bouin solution (75% saturated picric acid, 25% formaldehyde, 5% acetic acid [Reanal, Budapest, Hungary]). Two days later tissue samples were washed in 0.1 M potassium-phosphate buffer (KPB) and put in ascending sucrose solution (10–20–30%) for cryoprotection. One day later the pituitaries were embedded in cryomatrix (Shandon, Pittsburg, PA) and 20 μm thick sections were cut on cryostat (Cryotome, Thermo Shandon, Pittsburg, PA) in 4 parallel series. The sections were stained for LH, FSH, GH and PRL immunoreactivities.", "The trunk blood was centrifugated at 4°C. Plasma was stored at −20°C until determination of pituitary hormones. Iodination was carried out by Chloramine-T method. Separation of the free and bound antigen was made by double antibody method. All hormone kits were obtained from the National Hormone and Peptide Program (NHPP), NIDDK, and Dr. Parlow. Hormone levels were expressed in ng/ml plasma. The mean of two parallel determinations for each animal was subjected to one way analysis of variance (ANOVA) followed by a Student’s t test.", "Three animals of each group were anesthetized by ket-amine-hydrochloride (75 mg/100g bw) (Sigma, St. 
Louis, MO) and perfused with 4% paraformaldehyde (Merck, Darmstadt, Germany) in potassium phosphate buffer (KPB) (pH 7.4, 0.1mol). The components were purchased from Sigma (St. Louis, MO). Pituitaries were removed and post-fixed overnight. All the pituitaries (fixed in Bouin solution or PFA) were used for immunohistochemistry. After washing in KPB, the pituitaries were immersed in ascending sucrose solution (10–20–30%), then frozen on dry ice. The pituitary sections were rinsed in potassium phosphate buffer (KPB), pH 7.4. The tropic hormones were visualized by indirect fluorescence technique. The following primary antisera were used in our experiment: LH, FSH, GH (dilution 1:500) were raised in guinea pig and obtained from the National Hormone and Peptide Program (NHPP), NIDDK, and Dr. Parlow. PRL antiserum (dilution 1:500) was raised in rabbit by Nagy and characterized by Demaria et al. [37]. Fluorescence labeled secondary antibodies were the following: rabbit anti-guinea pig IgG conjugated to FITC (DAKO A/S, Glostrup, Denmark), goat anti-rabbit IgG conjugated to Cy3 (DAKO A/S, Glostrup, Denmark).\nSelected sections were double stained for PRL and GH immunoreactivities using fluorescence technique. First PRL staining was carried out using a monoclonal antibody (raised and characterized by Scammell [38] and TRITC conjugated secondary antimouse antibody (DAKO A/S, Glostrup, Denmark). Then the GH staining was done.", "In intact rats the vaginal membrane opened on 31.36±0.80 day of postnatal life. In all animals implanted with capsule (independently of their hormone content) the vaginal membrane opened earlier. In DES treated rats it happened much earlier than in intact animals (27.50±0.31, p<0.0001), although in animals implanted with ecap, DES+P or P alone the opening was also significantly shifted earlier than in intact rats, 28.36±0.47 (p<0.003) 28.06±0.34 (p<0.0005) and 29.09±0.51 (p<0.03) day of postnatal life, respectively. Upon DES treatment in the vaginal smear persistent estrus was observed. DES+P did not interrupt the cyclicity but it was irregular and metestrus predominated, ecap and P alone had no effect.", "Table 1 shows BW and B+TL in various groups of female and male rats 2 and 5 months after implantation. In DES and DES+P implanted animals BW and B+TL were significantly lower than in control rats 2 months after implantation; however, 5 months after implantation P moderated the effect of DES. P alone slightly decreased BW only in the case of 5-month treatment. When we removed the DES capsule BW and B+TL could not return to intact levels, but the difference was not significant in B+TL of females.", "Table 2 shows the weight of the anterior pituitary and the ovaries in female rats. The weight of anterior pituitaries increased upon DES and DES+P treatments in the case of 2-month survival, but in the case of 5-month treatment P reversed the weight gain. When the DES capsule was removed the weight of pituitaries returned to control level. DES and DES+P extremly decreased the weight of ovaries after 2-month steroid treatment. In the case of 5-month survival P prevented the effect of DES and P alone slightly decreased the weight of ovaries. When DES capsule was removed the weight of ovaries returned to control levels.\nTable 3 shows the weight of the anterior pituitary, testes and seminal vesicles of male rats. 
Similarly to female rats the weight of anterior pituitaries increased upon DES and DES+P treatment in the case of 2-month survival time, but in the case of 5-month survival P reversed the weight gain. DES and DES+P treatments extremly decreased the weight of testes and seminal vesicles, P alone enhanced those in the animals having 2-, but not 5-month survival time. Removal of DES capsule partially, but not completely, restored the weight of testes and seminal vesicles.", "[SUBTITLE] Radioimmunoassay [SUBSECTION] The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P treated rats (Figure 1E). After the removal of DES capsule the LH level remained lower than in controls for a month, and later it was similar to the control values (Figure 1F).\nThe basal FSH level in female rats was not significantly influenced in groups of 2- or 5-month survival (Figure 2A,B); but 2 months after the removal of DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D) but five months after the implantation the FSH levels were similar in each group (Figure 2E), likewise the females. After the removal of DES capsule the FSH levels remained lower and did not return to control values (Figure 2F).\nThe basal PRL levels showed very similar pattern in both female and male rats. DES extremly enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A, D) and five months after implantation P completely reversed the effect of DES (Figure 3B, E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). Removal of DES capsule gradually restored the PRL levels (Figure 3C,F).\nThe basal GH levels were very contradictory (Figure 4A–F). In females DES and DES+P treatment decreased the level (Figure 4A,B); however, it was significant only after 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A) but it decreased after 5 months (Figure 4B). When the DES capsule was removed GH levels reached the control value in one month and by the end of 2-month survival after removal and it was even higher than in controls (this elevation was not statistically significant) (Figure 4C). In male rats we did not find significant difference in any groups (Figure 4D,F).\nThe basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P treated rats (Figure 1E). 
After the removal of DES capsule the LH level remained lower than in controls for a month, and later it was similar to the control values (Figure 1F).\nThe basal FSH level in female rats was not significantly influenced in groups of 2- or 5-month survival (Figure 2A,B); but 2 months after the removal of DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D) but five months after the implantation the FSH levels were similar in each group (Figure 2E), likewise the females. After the removal of DES capsule the FSH levels remained lower and did not return to control values (Figure 2F).\nThe basal PRL levels showed very similar pattern in both female and male rats. DES extremly enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A, D) and five months after implantation P completely reversed the effect of DES (Figure 3B, E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). Removal of DES capsule gradually restored the PRL levels (Figure 3C,F).\nThe basal GH levels were very contradictory (Figure 4A–F). In females DES and DES+P treatment decreased the level (Figure 4A,B); however, it was significant only after 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A) but it decreased after 5 months (Figure 4B). When the DES capsule was removed GH levels reached the control value in one month and by the end of 2-month survival after removal and it was even higher than in controls (this elevation was not statistically significant) (Figure 4C). In male rats we did not find significant difference in any groups (Figure 4D,F).\n[SUBTITLE] Immunohistochemistry [SUBSECTION] The quantitative measurements obtained with RIA were confirmed with the qualitative observations provided by immunohistochemistry.\nThe quantitative measurements obtained with RIA were confirmed with the qualitative observations provided by immunohistochemistry.\n[SUBTITLE] Distribution of immunoreactive cells in intact rats [SUBSECTION] In the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone that is in the anterior pole of the anterior pituitary (Figure 5A–D). PRL is evenly distributed all over the anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent in the anterior pole of intact rats. Everywhere else they are evenly distributed (Figure 5G,H).\nIn the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone that is in the anterior pole of the anterior pituitary (Figure 5A–D). PRL is evenly distributed all over the anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent in the anterior pole of intact rats. Everywhere else they are evenly distributed (Figure 5G,H).\n[SUBTITLE] Distribution of immunoreactive cells in steroid treated rats [SUBSECTION] Steroid treatments differently influenced the immunoreactivity of the four hormone producing cells.\nThe number of LH and FSH cells decreased in both female and male rats. The effect of DES was more pronounced five than two months after DES implantation. 
Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES treated female rats after 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented by P (Figure 5K,L). It seemed that only the density of gonadotropes in the rostral pole of the anterior lobe was lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of both sexes (not shown).\nDES enhanced the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES treated rats the cells are large, ovoid or rounded, and their diameter is nearly double than that of intact rats already after two-month steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival). In these animals the size of the cells were only moderately enhanced compared to control rats and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months prolactinomas developed. Figure 5Q shows a well developed prolactinoma. P prevented the effect of DES on PRL. The immunostaining was similar to control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown).\nGH cells did not show a pronounced change either in male or female rats upon 2-month DES treatment but the distribution of these cells was very characteristic in animals with 5-month DES treatment. GH cells were completely missing in areas resembling prolactinomas (Figure 5S). Double labeling immunohistochemistry confirmed this observation (Figure 5T). In DES+P treated rats GH immunostaining was very similar to intact rats. The only difference was observed in the distribution of GH cells. They could be also observed in the anterior pole of the anterior pituitary gland (Figure 5U), this is the region where they are missing in intact rats.\nImplantation of ecap did not alter the immunostaining of pituitary tropic hormone producing cells.\nSteroid treatments differently influenced the immunoreactivity of the four hormone producing cells.\nThe number of LH and FSH cells decreased in both female and male rats. The effect of DES was more pronounced five than two months after DES implantation. Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES treated female rats after 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented by P (Figure 5K,L). It seemed that only the density of gonadotropes in the rostral pole of the anterior lobe was lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of both sexes (not shown).\nDES enhanced the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES treated rats the cells are large, ovoid or rounded, and their diameter is nearly double than that of intact rats already after two-month steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival). 
In these animals the size of the cells were only moderately enhanced compared to control rats and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months prolactinomas developed. Figure 5Q shows a well developed prolactinoma. P prevented the effect of DES on PRL. The immunostaining was similar to control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown).\nGH cells did not show a pronounced change either in male or female rats upon 2-month DES treatment but the distribution of these cells was very characteristic in animals with 5-month DES treatment. GH cells were completely missing in areas resembling prolactinomas (Figure 5S). Double labeling immunohistochemistry confirmed this observation (Figure 5T). In DES+P treated rats GH immunostaining was very similar to intact rats. The only difference was observed in the distribution of GH cells. They could be also observed in the anterior pole of the anterior pituitary gland (Figure 5U), this is the region where they are missing in intact rats.\nImplantation of ecap did not alter the immunostaining of pituitary tropic hormone producing cells.\n[SUBTITLE] Effect of the removal of DES capsule on the distribution of immunoreactive cells [SUBSECTION] The removal of DES capsule from the animals restored the density and distribution of all the four hormon producing cells already in one month. One or 2 months after the removal of DES capsule the appearance of immunostaining was similar to those of intact rats (Figure 5V–X).\nThe removal of DES capsule from the animals restored the density and distribution of all the four hormon producing cells already in one month. One or 2 months after the removal of DES capsule the appearance of immunostaining was similar to those of intact rats (Figure 5V–X).", "The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but it become significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of DES capsule LH level further decreased and it was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P treated rats (Figure 1E). After the removal of DES capsule the LH level remained lower than in controls for a month, and later it was similar to the control values (Figure 1F).\nThe basal FSH level in female rats was not significantly influenced in groups of 2- or 5-month survival (Figure 2A,B); but 2 months after the removal of DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D) but five months after the implantation the FSH levels were similar in each group (Figure 2E), likewise the females. After the removal of DES capsule the FSH levels remained lower and did not return to control values (Figure 2F).\nThe basal PRL levels showed very similar pattern in both female and male rats. DES extremly enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A, D) and five months after implantation P completely reversed the effect of DES (Figure 3B, E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). 
Removal of DES capsule gradually restored the PRL levels (Figure 3C,F).\nThe basal GH levels were very contradictory (Figure 4A–F). In females DES and DES+P treatment decreased the level (Figure 4A,B); however, it was significant only after 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A) but it decreased after 5 months (Figure 4B). When the DES capsule was removed GH levels reached the control value in one month and by the end of 2-month survival after removal and it was even higher than in controls (this elevation was not statistically significant) (Figure 4C). In male rats we did not find significant difference in any groups (Figure 4D,F).", "The quantitative measurements obtained with RIA were confirmed with the qualitative observations provided by immunohistochemistry.", "In the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone that is in the anterior pole of the anterior pituitary (Figure 5A–D). PRL is evenly distributed all over the anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent in the anterior pole of intact rats. Everywhere else they are evenly distributed (Figure 5G,H).", "Steroid treatments differently influenced the immunoreactivity of the four hormone producing cells.\nThe number of LH and FSH cells decreased in both female and male rats. The effect of DES was more pronounced five than two months after DES implantation. Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES treated female rats after 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented by P (Figure 5K,L). It seemed that only the density of gonadotropes in the rostral pole of the anterior lobe was lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of both sexes (not shown).\nDES enhanced the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES treated rats the cells are large, ovoid or rounded, and their diameter is nearly double than that of intact rats already after two-month steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival). In these animals the size of the cells were only moderately enhanced compared to control rats and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months prolactinomas developed. Figure 5Q shows a well developed prolactinoma. P prevented the effect of DES on PRL. The immunostaining was similar to control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown).\nGH cells did not show a pronounced change either in male or female rats upon 2-month DES treatment but the distribution of these cells was very characteristic in animals with 5-month DES treatment. GH cells were completely missing in areas resembling prolactinomas (Figure 5S). Double labeling immunohistochemistry confirmed this observation (Figure 5T). In DES+P treated rats GH immunostaining was very similar to intact rats. The only difference was observed in the distribution of GH cells. 
They could be also observed in the anterior pole of the anterior pituitary gland (Figure 5U), this is the region where they are missing in intact rats.\nImplantation of ecap did not alter the immunostaining of pituitary tropic hormone producing cells.", "The removal of DES capsule from the animals restored the density and distribution of all the four hormon producing cells already in one month. One or 2 months after the removal of DES capsule the appearance of immunostaining was similar to those of intact rats (Figure 5V–X)." ]
[ "Background", "Material and Methods", "Animals", "Preparation of the hormone containing capsules", "Experimental protocol", "Radioimmunoassay (RIA)", "Immunohistochemistry", "Results", "Vaginal smears", "Effect of treatments on the body weight and body length", "Effect of treatments on the weight of the endocrine organs", "Effect of treatments on the classical pituitary hormone immunoreactivities", "Radioimmunoassay", "Immunohistochemistry", "Distribution of immunoreactive cells in intact rats", "Distribution of immunoreactive cells in steroid treated rats", "Effect of the removal of DES capsule on the distribution of immunoreactive cells", "Discussion", "New data are as follows", "Conclusions" ]
[ "It is well known that sexual steroids, estrogen (E) and progesterone (P), among other factors regulate the secretion of pituitary tropic hormones directly acting on pituitary cells and through the central nervous system by a feed-back mechanism. It was demonstrated two decades ago that diethylstilbestrol (DES) induced appearance of prolactin (PRL)-secreting tumors [1].\nEstrogen and progesterone receptors (ERs and PRs) were identified in various pituitary hormone secreting cells and in various hypothalamic and extrahypothalamic structures. Both ER and PR have two isoforms. ERα was identified by Koike and his co-workers [2], ERβ was cloned by Kuiper and his coworkers [3]. In a recent publication it was found that ERα is localized in the largest number in lactotropes, and in decreasing number in somatotropes, thyrotropes, gonadotropes. The number of ERβ was much lower and it was identified in somatotropes, lactotropes and gonadotropes. Immunoreactivity of both receptors was present in the nucleus, rough endoplasmic reticulum, secretory granules and in the free cytosol. The intracellular localization was dependent on the stage of estrous cycle [4]. In another study it was shown that ERβ was expressed by folliculus stimulating hormone (FSH) cells [5]. A and B isoforms of PR were demonstrated in the anterior lobe using RT-PCR technique [6] and immunohistochemistry [7]. The predominant form is the A-isoform (PRA). In rat PR immunostaining was seen in the nuclei of gonadotropes. Both ERs and PRs were also shown in the central nervous system including hypothalamic structures [8–11].\nBesides classical hormones, the anterior pituitary also produces many other substances such as neurotransmitters, neuropeptides, and growth factors [12]. The classical pituitary hormones act far from the site of production and reach their target tissues through the general circulation. The small peptide molecules act in auto- and paracrine manners [13,14].\nThe effect of E and P on rat pituitary tropic hormon levels was investigated in in vivo and in vitro models [7,15–18]. In our previous experiment [19] long-term E treatment (implantation of DES capsule under the skin of neck) depressed the number of luteinizing hormone (LH) and thyroid stimulating hormone (TSH) cells, it enhanced the number of cells (the cells formed prolactinoma), it enhanced the size of growth hormone (GH) cells and it did not influence the number of adrenocorticotrop (ACTH) cells. DES extremly enhanced the number of vasoactive intestinal poly-peptide (VIP) cells and induced appearance of VIP-oma. It was also demonstrated that E enhanced the VIP expression in the anterior pituitary by six times as compared with intact rats [20].\nWith the use of haemolytic plaque assay it was demonstrated that a chronic E treatment increased the percentage of PRL secreting cells and the amount of secreted PRL; however, there was a decrease in both the percentage of GH secreting cells and the amount of secreted GH. Following the haemolytic plaque assay in situ hybridization revealed that estradiol β (E2) increased the PRL mRNA while it decreased the GH mRNA [16]. This research group did not find any colocalization between GH and PRL expression in control male rat pituitary cell culture; however, in E2 treated culture about 10% of PRL secreting cells contained GH mRNA as well. 
It was also demonstrated that the effect of E in the pituitary is mediated through ERα [21].\nThe data in the literature concerning the effect of P on pituitary hormone secretion is controversial and not so abundant. The protective effect of P on E inducing undesired changes was observed about twenty-five years ago. It was demonstrated that P was able to reduce the size of E induced mammary tumor [22]. The protective role of P is also used to prevent preterm birth when being applied as vaginal suppository [23]. Genazzani and his co-workers [24] analysed how P and various synthetic progestins modulate the effect of E on hypo-thalamic gonadotropin-releasing hormone (GnRH) and on pituitary LH and PRL concentrations in ovariectomized rats. Two-week steroid treatments were applied. It was found that P and norethisterone enanthate (NET, synthetic progestin) reversed the depression of hypothalamic GnRH induced by the estradiol benzoate (EB) and elevated its level. Both P and NET blocked the EB induced increase of pituitary LH, and the plasma LH levels remained low. Progestins alone did not influence the PRL levels but reversed the EB induced increase.\nIn clinical practice sexual steroid treatment is a very common medicament. Steroid hormone contraception was suggested about seventy years ago [25]. It is well known that administration of E, P or the combination of E and P prevents the LH surge and the ovulation [26–29]. Sexual steroids are also applied to synchronize the ovarian cyclicity, to maintain the endometrium during pregnancy [30,31]. E is even recently used as replacement therapy after surgical removal of ovaries or in menopause as well [32–34]. In men E is used to treat prostatic cancer [35].\nThe aim of our present work was to further clarify the way how the concomitant P administration affected the changes in the anterior pituitary caused by long-term estrogen treatment. Long-term steroid treatments in experimental animals, similar to those applied in the human practice was rarely found in the literature. We decided to imitate the long-term clinical treatments. Two- and five-month survivals after implantation of hormones were chosen, both are long time in the life of rats. The animals were three and six-month-old when they were sacrificed. Three-month-old rat is a relatively young adult and the six-month-old rats are old. The continuous hormone influence was provided by the implantation of DES and P containing capsules. At the end of experimental periods we have examined LH, FSH, PRL and GH immunoreactivities in the anterior pituitary gland and basal LH, FSH, PRL and GH plasma levels in rats of both sexes. We have also examined whether removal of DES capsule two months after implantation can reverse the DES induced changes one or two months following the removal.", "[SUBTITLE] Animals [SUBSECTION] Sprague-Dawley female and male rats were used for our experiments. They were kept in a light (light on at 5 h and off at 19 h) and temperature (22±2°C) controlled vivarium and fed with standard lab chow and water ad libitum. The treatment of the animals was in accordance with the rules of the „European convention for the protection of vertebrate animals used for experimental and other scientific purposes”, Strasbourg, 1986. Our protocol was approved by the Department of Animal Health Care, Budapest. Permission number: 37/1999. 
When the animals were 25 day-old they received empty capsule (ecap), DES, DES + P or just P containing silastic capsule implanted under the skin of the neck in the subcutaneous tissue.\nSprague-Dawley female and male rats were used for our experiments. They were kept in a light (light on at 5 h and off at 19 h) and temperature (22±2°C) controlled vivarium and fed with standard lab chow and water ad libitum. The treatment of the animals was in accordance with the rules of the „European convention for the protection of vertebrate animals used for experimental and other scientific purposes”, Strasbourg, 1986. Our protocol was approved by the Department of Animal Health Care, Budapest. Permission number: 37/1999. When the animals were 25 day-old they received empty capsule (ecap), DES, DES + P or just P containing silastic capsule implanted under the skin of the neck in the subcutaneous tissue.\n[SUBTITLE] Preparation of the hormone containing capsules [SUBSECTION] Silastic capsules (id 1.55 mm, od 3.13 mm, length 10 mm) (Dow Corning Corporation, Midland, MI) were filled with DES (Sigma, St. Louis, Mo) or P (Sigma) and the ends of the capsules were sealed by Szilorfix (Finomvegyszer Szövetkezet, Budapest, Hungary), a silicon based glue. A day after drying, the capsules were used for implantation. The steroids can pass through the wall of the capsule and can exert its effect through the general circulation [36].\nSilastic capsules (id 1.55 mm, od 3.13 mm, length 10 mm) (Dow Corning Corporation, Midland, MI) were filled with DES (Sigma, St. Louis, Mo) or P (Sigma) and the ends of the capsules were sealed by Szilorfix (Finomvegyszer Szövetkezet, Budapest, Hungary), a silicon based glue. A day after drying, the capsules were used for implantation. The steroids can pass through the wall of the capsule and can exert its effect through the general circulation [36].\n[SUBTITLE] Experimental protocol [SUBSECTION] The animals were grouped as follows.\nExperiment 1.\nintact (negative control) rats,\necap (sham operated rats receiving ecap sealed with glue),\nDES implanted rats,\nDES+P implanted rats,\nP implanted rats.\n85 female and 79 male rats were included in this experiment. The survival time after implantation was 2 months.\nExperiment 2.\nSimilar groups were included in this experiment (56 females and 56 males) as described in Experiment 1; however, the survival time was 5 months.\nExperiment 3.\nFrom 35 DES implanted animals (18 females and 17 males) the DES capsules were removed. Half of these animals was sacrificed 1 month after the removal and the other half was sacrificed 2 months after removal. Age matched controls were also included in all the groups (17 females and 16 males). The data obtained in these groups were compared with those of DES implanted rats and their age matched controls of Experiment 1.\nIn female rats the opening of the vaginal membrane was recorded. Then vaginal smear was taken daily for two weeks and for another two weeks before sacrificing. Females were handled in this way. The male rats were transferred from their cage to another and back every day for two weeks before sacrificing (handling) to blunt the stress effect.\nAt the end of the experimental period the body weight (BW) and body and tail length (B+TL) were measured and the animals were sacrificed by decapitation between 9 am and 11 am. The trunk blood was collected, its coagulation was prevented by EDTA (ethylene diamine tetraacetic acide sodium salt, Serva, Heidelberg, Germany). 
Plasma was used to determine hormone levels. Endocrine organs were weighted. Anterior pituitaries were immersed in Bouin solution (75% saturated picric acid, 25% formaldehyde, 5% acetic acid [Reanal, Budapest, Hungary]). Two days later tissue samples were washed in 0.1 M potassium-phosphate buffer (KPB) and put in ascending sucrose solution (10–20–30%) for cryoprotection. One day later the pituitaries were embedded in cryomatrix (Shandon, Pittsburg, PA) and 20 μm thick sections were cut on cryostat (Cryotome, Thermo Shandon, Pittsburg, PA) in 4 parallel series. The sections were stained for LH, FSH, GH and PRL immunoreactivities.\nThe animals were grouped as follows.\nExperiment 1.\nintact (negative control) rats,\necap (sham operated rats receiving ecap sealed with glue),\nDES implanted rats,\nDES+P implanted rats,\nP implanted rats.\n85 female and 79 male rats were included in this experiment. The survival time after implantation was 2 months.\nExperiment 2.\nSimilar groups were included in this experiment (56 females and 56 males) as described in Experiment 1; however, the survival time was 5 months.\nExperiment 3.\nFrom 35 DES implanted animals (18 females and 17 males) the DES capsules were removed. Half of these animals was sacrificed 1 month after the removal and the other half was sacrificed 2 months after removal. Age matched controls were also included in all the groups (17 females and 16 males). The data obtained in these groups were compared with those of DES implanted rats and their age matched controls of Experiment 1.\nIn female rats the opening of the vaginal membrane was recorded. Then vaginal smear was taken daily for two weeks and for another two weeks before sacrificing. Females were handled in this way. The male rats were transferred from their cage to another and back every day for two weeks before sacrificing (handling) to blunt the stress effect.\nAt the end of the experimental period the body weight (BW) and body and tail length (B+TL) were measured and the animals were sacrificed by decapitation between 9 am and 11 am. The trunk blood was collected, its coagulation was prevented by EDTA (ethylene diamine tetraacetic acide sodium salt, Serva, Heidelberg, Germany). Plasma was used to determine hormone levels. Endocrine organs were weighted. Anterior pituitaries were immersed in Bouin solution (75% saturated picric acid, 25% formaldehyde, 5% acetic acid [Reanal, Budapest, Hungary]). Two days later tissue samples were washed in 0.1 M potassium-phosphate buffer (KPB) and put in ascending sucrose solution (10–20–30%) for cryoprotection. One day later the pituitaries were embedded in cryomatrix (Shandon, Pittsburg, PA) and 20 μm thick sections were cut on cryostat (Cryotome, Thermo Shandon, Pittsburg, PA) in 4 parallel series. The sections were stained for LH, FSH, GH and PRL immunoreactivities.\n[SUBTITLE] Radioimmunoassay (RIA) [SUBSECTION] The trunk blood was centrifugated at 4°C. Plasma was stored at −20°C until determination of pituitary hormones. Iodination was carried out by Chloramine-T method. Separation of the free and bound antigen was made by double antibody method. All hormone kits were obtained from the National Hormone and Peptide Program (NHPP), NIDDK, and Dr. Parlow. Hormone levels were expressed in ng/ml plasma. The mean of two parallel determinations for each animal was subjected to one way analysis of variance (ANOVA) followed by a Student’s t test.\nThe trunk blood was centrifugated at 4°C. 
Immunohistochemistry
Three animals of each group were anesthetized with ketamine hydrochloride (75 mg/100 g body weight) (Sigma, St. Louis, MO) and perfused with 4% paraformaldehyde (Merck, Darmstadt, Germany) in potassium phosphate buffer (KPB, pH 7.4, 0.1 M). The buffer components were purchased from Sigma (St. Louis, MO). Pituitaries were removed and post-fixed overnight. All pituitaries (fixed in Bouin solution or PFA) were used for immunohistochemistry. After washing in KPB, the pituitaries were immersed in an ascending sucrose series (10–20–30%) and then frozen on dry ice. The pituitary sections were rinsed in KPB, pH 7.4. The tropic hormones were visualized by an indirect fluorescence technique. The following primary antisera were used: the LH, FSH and GH antisera (dilution 1:500) were raised in guinea pig and obtained from the National Hormone and Peptide Program (NHPP), NIDDK, and Dr. Parlow; the PRL antiserum (dilution 1:500) was raised in rabbit by Nagy and characterized by Demaria et al. [37]. The fluorescence-labelled secondary antibodies were rabbit anti-guinea pig IgG conjugated to FITC (DAKO A/S, Glostrup, Denmark) and goat anti-rabbit IgG conjugated to Cy3 (DAKO A/S, Glostrup, Denmark).
Selected sections were double stained for PRL and GH immunoreactivities using the fluorescence technique. The PRL staining was carried out first, using a monoclonal antibody (raised and characterized by Scammell [38]) and a TRITC-conjugated secondary anti-mouse antibody (DAKO A/S, Glostrup, Denmark); the GH staining was then performed.
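The antibody combinations used for the single- and double-label stainings described above can be restated in a compact form. This is merely a reorganization of the information given in the text; the key names are ours, and where the text does not state a detail explicitly it is marked as an assumption.

# Restatement of the immunostaining panel described above; key names are ours.
single_label_panel = {
    "LH":  {"primary": "guinea pig anti-LH (NHPP/NIDDK/Parlow), 1:500",  "secondary": "rabbit anti-guinea pig IgG-FITC (DAKO)"},
    "FSH": {"primary": "guinea pig anti-FSH (NHPP/NIDDK/Parlow), 1:500", "secondary": "rabbit anti-guinea pig IgG-FITC (DAKO)"},
    "GH":  {"primary": "guinea pig anti-GH (NHPP/NIDDK/Parlow), 1:500",  "secondary": "rabbit anti-guinea pig IgG-FITC (DAKO)"},
    "PRL": {"primary": "rabbit anti-PRL (Nagy; Demaria et al. [37]), 1:500", "secondary": "goat anti-rabbit IgG-Cy3 (DAKO)"},
}
double_label_panel = [
    {"step": 1, "target": "PRL", "primary": "mouse monoclonal anti-PRL (Scammell [38])", "secondary": "anti-mouse IgG-TRITC (DAKO)"},
    # the secondary for the second (GH) step is not stated explicitly in the text;
    # the FITC-conjugated anti-guinea pig antibody used for single labelling is assumed
    {"step": 2, "target": "GH",  "primary": "guinea pig anti-GH (NHPP/NIDDK/Parlow), 1:500", "secondary": "anti-guinea pig IgG-FITC (assumed)"},
]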
Vaginal smears
In intact rats the vaginal membrane opened on day 31.36±0.80 of postnatal life. In all animals implanted with a capsule (independently of its hormone content) the vaginal membrane opened earlier. In DES-treated rats it opened much earlier than in intact animals (27.50±0.31, p<0.0001), although in animals implanted with ecap, DES+P or P alone the opening was also significantly shifted earlier than in intact rats, to day 28.36±0.47 (p<0.003), 28.06±0.34 (p<0.0005) and 29.09±0.51 (p<0.03) of postnatal life, respectively. Upon DES treatment persistent estrus was observed in the vaginal smears. DES+P did not interrupt cyclicity, but the cycles were irregular and metestrus predominated; ecap and P alone had no effect.

Effect of treatments on the body weight and body length
Table 1 shows BW and B+TL in the various groups of female and male rats 2 and 5 months after implantation. In DES- and DES+P-implanted animals BW and B+TL were significantly lower than in control rats 2 months after implantation; however, 5 months after implantation P moderated the effect of DES. P alone slightly decreased BW only in the case of the 5-month treatment. When the DES capsule was removed, BW and B+TL did not return to intact levels, although the difference in B+TL of the females was not significant.

Effect of treatments on the weight of the endocrine organs
Table 2 shows the weight of the anterior pituitary and the ovaries in female rats. The weight of the anterior pituitaries increased upon DES and DES+P treatment in the case of 2-month survival, but in the case of the 5-month treatment P reversed the weight gain. When the DES capsule was removed, the weight of the pituitaries returned to the control level. DES and DES+P extremely decreased the weight of the ovaries after the 2-month steroid treatment. In the case of 5-month survival P prevented the effect of DES, and P alone slightly decreased the weight of the ovaries. When the DES capsule was removed, the weight of the ovaries returned to control levels.
Table 3 shows the weight of the anterior pituitary, testes and seminal vesicles of male rats. As in female rats, the weight of the anterior pituitaries increased upon DES and DES+P treatment in the case of the 2-month survival time, but in the case of the 5-month survival P reversed the weight gain. DES and DES+P treatment extremely decreased the weight of the testes and seminal vesicles; P alone increased them in the animals with 2-month, but not 5-month, survival time. Removal of the DES capsule partially, but not completely, restored the weight of the testes and seminal vesicles.

Effect of treatments on the classical pituitary hormone immunoreactivities
Radioimmunoassay
The basal LH level in female rats was depressed by DES treatment (Figure 1A,B), but the decrease became significant only five months after implantation. Progesterone alone had no significant effect. One month after removal of the DES capsule the LH level had decreased further and was significantly lower than in controls; however, two months after removal it was even higher than in age-matched controls (Figure 1C). In male rats DES and DES+P significantly decreased the LH levels already two months after implantation (Figure 1D). The lower levels persisted five months after implantation in DES+P-treated rats (Figure 1E). After removal of the DES capsule the LH level remained lower than in controls for a month, and later it was similar to control values (Figure 1F).
The basal FSH level in female rats was not significantly influenced in the groups with 2- or 5-month survival (Figure 2A,B), but 2 months after removal of the DES capsule FSH was significantly higher than in the age-matched control rats (Figure 2C). In male rats both DES and DES+P treatment depressed the FSH levels two months after implantation (Figure 2D), but five months after implantation the FSH levels were similar in each group (Figure 2E), as in the females. After removal of the DES capsule the FSH levels remained lower and did not return to control values (Figure 2F).
The basal PRL levels showed a very similar pattern in female and male rats. DES extremely enhanced the PRL levels (Figure 3A,B and D,E). Two months after implantation P blunted the effect of DES (Figure 3A,D), and five months after implantation P completely reversed the effect of DES (Figure 3B,E). P alone had no effect on the PRL levels (Figure 3A,B and D,E). Removal of the DES capsule gradually restored the PRL levels (Figure 3C,F).
The basal GH levels showed an inconsistent pattern (Figure 4A–F). In females DES and DES+P treatment decreased the level (Figure 4A,B); however, the decrease was significant only after the 2-month treatment (Figure 4A). P alone did not influence the GH level 2 months after implantation (Figure 4A), but decreased it after 5 months (Figure 4B). When the DES capsule was removed, GH levels reached the control value within one month and, by the end of the 2-month survival after removal, were even higher than in controls, although this elevation was not statistically significant (Figure 4C). In male rats we did not find a significant difference in any group (Figure 4D,F).

Immunohistochemistry
The quantitative measurements obtained with RIA were confirmed by the qualitative observations provided by immunohistochemistry.

Distribution of immunoreactive cells in intact rats
In the anterior lobe of intact rats the distribution of tropic hormone immunoreactive cells is very characteristic (Figure 5A–H). The density of LH and FSH immunoreactive cells is much higher in the gonadotropic zone, that is, in the anterior pole of the anterior pituitary (Figure 5A–D). PRL cells are evenly distributed over the whole anterior lobe (Figure 5E,F). GH immunoreactive cells are almost absent from the anterior pole of intact rats; everywhere else they are evenly distributed (Figure 5G,H).

Distribution of immunoreactive cells in steroid-treated rats
The steroid treatments influenced the immunoreactivity of the four hormone-producing cell types differently.
The number of LH and FSH cells decreased in both female and male rats. The effect of DES was more pronounced five months than two months after implantation. Figure 5I and J show LH immunoreactive cells in the anterior pituitary of DES-treated female rats after the 2- and 5-month treatments, respectively. When P was implanted together with DES, the effect of DES was nearly prevented (Figure 5K,L); only the density of gonadotropes in the rostral pole of the anterior lobe appeared lower than in intact rats. The appearance of LH and FSH immunoreactivities in male rats was very similar to that of females (not shown). P alone did not influence the LH and FSH immunoreactivities in the anterior lobe of either sex (not shown).
DES increased the number and the size of PRL cells in both female and male rats (Figure 5M–O and Q, male). In intact rats these cells are cup-shaped (Figure 5N). In DES-treated rats the cells are large, ovoid or rounded, and their diameter is nearly double that of intact rats already after two months of steroid influence (Figure 5O). Figure 5P shows PRL cells from a male animal with DES+P treatment (2-month survival); in these animals the size of the cells was only moderately increased compared to control rats, and they lost their cup-shaped appearance. When the animals were treated with DES for 5 months, prolactinomas developed; Figure 5Q shows a well-developed prolactinoma. P prevented the effect of DES on PRL cells, and the immunostaining was similar to that of control rats (Figure 5R). P alone did not affect the PRL immunostaining (not shown).
GH cells did not show a pronounced change in either male or female rats upon the 2-month DES treatment, but the distribution of these cells was very characteristic in animals with the 5-month DES treatment: GH cells were completely missing from areas resembling prolactinomas (Figure 5S). Double-labelling immunohistochemistry confirmed this observation (Figure 5T). In DES+P-treated rats the GH immunostaining was very similar to that of intact rats. The only difference was in the distribution of the GH cells, which could also be observed in the anterior pole of the anterior pituitary gland (Figure 5U), the region where they are absent in intact rats.
Implantation of the empty capsule did not alter the immunostaining of the pituitary tropic hormone-producing cells.

Effect of the removal of the DES capsule on the distribution of immunoreactive cells
Removal of the DES capsule restored the density and distribution of all four hormone-producing cell types within one month. One or 2 months after removal of the DES capsule the appearance of the immunostaining was similar to that of intact rats (Figure 5V–X).

In the present experiment we tried to imitate long-term steroid treatments that are frequently used in clinical practice.
It is well known that continuous E or combined E + P administration prevents ovulation. Modern contraceptive medication is based on these observations [26,39–41]. Nowadays a high percentage of girls around puberty use contraceptive hormones [42], and in some cases the steroids are used at this early age as a medication for acne vulgaris [43]. In men, E preparations (DES) are usually used in adulthood to treat prostate cancer [35,44]. E is also used to ameliorate post-menopausal syndromes and osteoporosis [34].\nOur experiments provide new information concerning the effect of long-term E and concomitant P treatments on the changes caused by E, and the consequence of the removal of the E capsule after two months of its implantation, on pituitary LH, FSH, PRL and GH synthesis and release.\n[SUBTITLE] New data are as follows [SUBSECTION] There is sexual dimorphism in the change of basal LH and FSH release upon DES and combined DES+P treatments. The male rats responded more rapidly than female rats with decreasing basal LH release upon E and E+P treatments. The effect of P was ambiguous: it was protective in five-month treated females but not in males. The basal FSH release was decreased only in male rats and later recovered. Neither LH, FSH nor the weight of the testes and seminal vesicles was restored in male rats after the removal of the DES capsule; however, the LH level and the weight of the seminal vesicles were not significantly lower than in intact rats. This means that the effect of DES persisted in males, whereas in females both basal LH and FSH levels were rather upregulated and the low ovarian weight returned to the control level.\nThere was no sexual dimorphism in the effect of long-term steroid treatments on PRL secretion. In both female and male rats the PRL release was similarly enhanced, and this was partially prevented after two months and completely after 5-month survival by concomitant P treatment. Removal of the DES influence completely restored the PRL release by the end of a further 2-month survival. There was a well-defined parallel change between the PRL levels and the weight of the anterior pituitary, which was extremely enhanced in DES treated rats, and the enlargement of the pituitaries was completely prevented by concomitant P treatment in the case of 5-month survival.\nA mild sexual dimorphism was also observed in the GH level. It was significantly lower only in DES and DES+P treated females, not in male rats, with two-month survival. The BW and B+TL were markedly depressed by DES treatment in both female and male rats, and P slightly blunted but did not prevent this effect. The BW and B+TL in male rats remained low after the removal of the DES capsule; however, in females they were nearly similar to those of intact rats.\nP alone had no significant effect on the parameters studied; however, concomitant P treatment with DES was usually protective.\nRemoval of the DES capsule reversed the changes in the immunohistochemical appearance of LH, FSH, PRL and GH immunoreactivities.\nIt is well known that GnRH and LH are released in a cyclic pattern in female but not in male rats. The neural LH release apparatus was demonstrated for the first time by Everett and Sawyer [45,46]. A sexual dimorphism of prepubertal rats in GnRH and LH responses to steroid treatment was also demonstrated in acute experiments. Dluzen and Ramirez [47] showed that E treated prepubertal females displayed a significant change in GnRH concentrations in the preoptic-suprachiasmatic area upon P treatment; however, males did not show any change.
Sexual dimorphism was also observed in the gonadotropin α-subunit promoter activity in response to GnRH [48]. In the present study we have found sexual dimorphism in the response of the basal LH level to long-term DES and P treatments as well.\nIn our model we did not find any sexual dimorphism in the effect of long-term steroid treatments on PRL secretion or on the weight of the pituitaries; however, in other models some sexual dimorphism in the basal level of PRL was observed. It was published earlier that around the time of puberty females had a higher PRL mRNA level than males [49], and that the weight and PRL content of the anterior pituitary are significantly higher in prepubertal females than in males [50]. In ovariectomized rats, P implanted subcutaneously in silicone tubes induced a nocturnal PRL surge; a similar P-sensitive central mechanism was not observed in adult male rats [51].\nA sexual dimorphism was found in the plasma GH pattern and regulation as well. In the above-mentioned experiment it was also demonstrated that around puberty the males had a higher level of GH mRNA than females [49]. Neonatal masculinization occurs in male rats and determines the secretory pattern of GH. The masculine type (high, infrequent GH pulses with low plasma GH levels in between) promotes growth more effectively than the feminine type (an intermediate, rather constant level of plasma GH) [52]. In our model long-term DES treatment similarly depressed body growth in both male and female rats, indicating that the effect of DES is not mediated through GH. There are data showing that chondrocytes in the epiphyseal plate express ERs [53], and in this regard the effect of E does not depend on a central mechanism.\nP alone slightly depressed the body weight of both sexes in the case of five-month treatment; there were no other significant changes. In the literature we did not find a combined treatment as long as ours, which is why it is difficult to draw parallels between other experiments and ours.\nMore data are available about the protective role of P when it is administered with E, although the models are very different from the one we used. Genazzani and his co-workers [24] studied the role of P and various synthetic progestins in the modulation of the effect of E on hypothalamic GnRH and on pituitary LH and PRL concentrations in ovariectomized rats, and they used only a two-week steroid treatment. It was found that P and norethisterone enanthate (NET, a synthetic progestin) reversed the EB induced hypothalamic GnRH depression, that is, elevated its level. Both P and NET blocked the EB induced increase of pituitary LH, and plasma LH levels remained low. Progestins alone did not influence the PRL levels but reversed the EB induced increase. Cho and his co-workers [54] also demonstrated that P suppressed the E-enhanced PRL mRNA level.\nIn our previous experiment it was found that DES treatment changed not only the number and distribution of tropic hormone producing cells, but also the distribution of S-100 immunoreactive folliculostellate cells (FSCs). Inside the prolactinomas FSCs were scattered, but they surrounded the prolactinomas, forming a demarcation line around them [55]. Inside the prolactinomas there were only a few GH cells. When DES was implanted with P, the changes characteristic of DES treatment could not develop: concomitant P influence prevented the morphological changes in the anterior pituitary.
The distribution of FSCs and GH cells was very similar to that of controls.\nIn the present experiment DES permanently injured the gonadotropic functions in male rats. After the removal of the DES capsule, the LH and FSH levels and the weights of the testes and seminal vesicles remained lower than in controls. Interestingly, in females there was no permanent depression in the basal LH and FSH levels or in ovarian weight; rather, both LH and FSH were upregulated. In the literature there is no animal experiment in which such a long steroid treatment was applied and plasma hormone levels were measured. In human medication sexual steroids are used for months to synchronize ovarian cyclicity, to treat functional dysmenorrhea, premenstrual syndrome and endometriosis, and to limit the adult height of girls [56,57].\nThe question arises as to how E induces, and P prevents, the changes caused by a long-term E treatment. As mentioned in the introduction, ERs and PRs are present not only in the hypothalamus but in the anterior pituitary as well. Steroids can act locally through these receptors on pituitary cells. In a recent publication it was found that ERα is localized in the largest number in lactotropes, and in decreasing numbers in somatotropes, thyrotropes and gonadotropes. The number of ERβ was much lower, and it was identified in somatotropes, lactotropes and gonadotropes [4]. PRA immunoreactivity was seen in the nuclei of gonadotropes [7]. It is also possible that both E and P act through the hypothalamus as well. The distribution of these receptors was recently demonstrated by several authors. Merchenthaler and his co-workers [11] very precisely mapped the distribution of both ERα and ERβ in the hypothalamus. It was found that the density of ERα is very high in the arcuate nucleus and periventricular area, and the density of ERβ is high in the paraventricular nucleus and the medial preoptic area, where ERβ immunoreactivity colocalizes with LHRH immunoreactivity [58]. In the arcuate nucleus, colocalization between ERα and GHRH immunoreactivity, but not between ERα and somatostatin, was published by Shimizu et al. [59]. PR-containing neurons were demonstrated in the ventrolateral and in the medial part of the arcuate nucleus [10].\nThe mechanism by which E stimulates cell proliferation in the anterior pituitary has been thoroughly studied [60,61]. It was found that E, through ERα, induces production of TGF-β3 from lactotropes, and this stimulates the bFGF production of FSCs, which has a proliferative effect on lactotropes. The mechanism by which concomitant P treatment prevents the E induced proliferation has not been fully explored. Calderon and his co-workers [62] used ovariectomized immature rats to analyse the action of P on the response of the anterior pituitary and hypothalamus to estradiol exposure. E induced the nuclear accumulation of ERs and the appearance of PRs, reaching a peak at 12 hours and then declining to a plateau near the control level. If P was administered to the animals at the peak of PRs, the subsequent nuclear accumulation of ER caused by estradiol injection was suppressed. This was observed only in the anterior pituitary, not in the hypothalamus. They concluded that P affected the response to estradiol in the pituitary gland with a well-defined temporal pattern, and that using the same protocol it had no effect in the hypothalamus.
", "It seems that there is a cross-talk between ERs and PRs. In our protocol we ensured a consistently higher blood level of both DES and P than in controls by means of the silastic capsules implanted for 5 months. How the ER and PR levels changed during this long period is not known, and there are no data in the literature describing the level of these receptors in such a long-term experiment. There are several possibilities to explain the protective effect of P. 1) PRα is present in gonadotropes. P binding may stimulate these cells to influence the E binding to lactotropes and to prevent the production of TGF-β3. 2) ERs in lactotropes may have P-responsive elements and P can bind directly to these receptors to prevent the production of TGF-β3. 3) The effect of P through the hypothalamus is not excluded, but the above-mentioned experiment indicates that P acts on the pituitary locally rather than through the hypothalamus." ]
[ null, "materials|methods", null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, "discussion", "methods", "conclusions" ]
[ "anterior pituitary", "immunohistochemistry", "RIA", "sexual dimorphism" ]
Relation between expression pattern of p53 and survivin in cutaneous basal cell carcinomas.
21358596
The tumor suppressor gene p53 is a key regulator of cell division and/or apoptosis. Survivin is a multifunctional member of the inhibitor of apoptosis family. Survivin and p53 represent diametrically opposed signals that influence the apoptotic pathway.
BACKGROUND
To determine the role of p53 and survivin in basal cell carcinoma (BCC), we evaluated the expression pattern of both proteins with regard to the percentage of positively immunostained tumor cells, the intensity of staining, and subcellular localization among 31 subjects with BCC.
MATERIAL/METHODS
Overexpression of p53 protein was found in 28 of 31 cases (90.3%), whereas survivin accumulation was seen in 27 (87.1%). For p53, moderate and/or strong immunoreactivity was seen in 20 of 28 cases (71.4%), and 26 of 28 cases (92.9%) showed more than 25% reactive tumor cells. Nuclear p53 staining was detected in 23 of 28 cases (82.1%), whereas combined nuclear and cytoplasmic localization was found in only 5 of 28 cases (17.9%). Survivin revealed mild intensity of immunoreaction in 22 of 27 cases (71%), and 25 of 27 cases (92.6%) showed less than 25% labeled tumor cells. Combined nuclear and cytoplasmic survivin localization was present in 26 of 27 cases (96.3%). Statistically significant differences were detected in the assessed expression parameters between those proteins.
RESULTS
Our results suggest that overexpression of wild type p53 protein may suppress the expression of survivin and its antiapoptotic activity in BCC cells.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Carcinoma, Basal Cell", "Cell Count", "Female", "Humans", "Inhibitor of Apoptosis Proteins", "Male", "Middle Aged", "Protein Transport", "Skin Neoplasms", "Subcellular Fractions", "Survivin", "Tumor Suppressor Protein p53" ]
3524735
null
null
null
null
Results
Both p53 and survivin were found in BCC cells either in the nucleus (N), the cytoplasm (C), or both the nucleus and cytoplasm (NC). The expression of antigens among the 31 subjects was scored semiquantitatively as follows: Intensity of p53 staining: (a) absent or barely detectable: 3/31 cases, 9.7%; (b) weak: 8/31 cases, 25.8%; (c) moderate: 18/31 cases, 58.1%; and (d) strong: 2/31 cases, 6.4%. Intensity of survivin staining: (a) absent or barely detectable: 4/31 cases, 12.9%; (b) weak: 22/31 cases, 71%; (c) moderate: 5/31 cases, 16.1%; and (d) strong: 0/31 cases, 0%. Number of p53 positively stained cells: (a) more than 25% per field of view: 26/28 cases, 92.9%; (b) less than 25% per field of view: 2/28 cases, 7.1%. Number of survivin positively stained cells: (a) more than 25% per field of view: 2/27 cases, 7.4%; (b) less than 25% per field of view: 25/27 cases, 92.6%. Subcellular localization of p53 staining: (a) N localization only: 23/28 cases, 82.1%; (b) C localization only: 0/28 cases, 0%; and (c) NC localization: 5/28 cases, 17.9%. Subcellular localization of survivin staining: (a) N localization only: 1/27 cases, 3.7%; (b) C localization only: 0/27 cases, 0%; (c) NC localization: 26/27 cases, 96.3%. The results of all expression profiles are listed in Table 1. Tables 2 and 3 summarize the number of p53 and survivin positively stained cells and their cellular localization, respectively. The relation between the expression intensity of p53 and survivin is shown in Table 4. Increased p53 expression was also present in adjacent cells of normal epidermis. The χ2 analysis confirmed a significant correlation between the intensity of survivin and p53 immunoreactivity (P<.05), a significant difference in the percentage of survivin- and p53-labeled cells (P<.001), and a significant difference in N and NC localization of survivin and p53 (P<.001).
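Editorial aside, for illustration only: the two P<.001 comparisons above can be re-derived from the published counts alone. The sketch below is a minimal re-computation and assumes Python with scipy, whereas the authors report using the Microsoft Excel software package for their χ2 analysis; it is not the authors' own code.

from scipy.stats import chi2_contingency

# Counts quoted in the Results above.
# Subcellular localization: nuclear-only vs combined nuclear-plus-cytoplasmic staining.
localization = [[23, 5],    # p53: 23 N-only, 5 NC (of 28 positive cases)
                [1, 26]]    # survivin: 1 N-only, 26 NC (of 27 positive cases)

# Percentage of positively stained tumor cells: >25% vs <25% per field of view.
labeled_cells = [[26, 2],   # p53
                 [2, 25]]   # survivin

for name, table in [("localization", localization), ("labeled cells", labeled_cells)]:
    chi2, p, dof, _ = chi2_contingency(table)
    print(f"{name}: chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2g}")

Both tables give p-values well below .001, consistent with the reported results; with expected counts this small, Fisher's exact test (scipy.stats.fisher_exact) would be a common alternative to the χ2 approximation.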
Conclusions
The results of our study suggest that overexpression of wild type p53 protein may suppress the expression of survivin and its antiapoptotic activity in BCC cells. In addition, de novo production or stabilization of p53 is fundamental to triggering apoptosis in BCC [55].
[ "Background" ]
[ "Basal cell carcinoma (BCC) is the most common form of human skin cancer. Its etiopathogenesis is still not fully understood. Generally, most common genetic alterations in human cutaneous cancers are found at the level of the p53 tumor suppressor gene [1,2]. The p53 tumor suppressor protein plays a critical role in both the induction of apoptosis and carcinogenesis [3].\nUnder normal conditions, the inactive form of p53 protein is usually present at low levels in the cytoplasm of cells. Cytoplasmic localization in normal cells is controlled by murine double minute (MDM2) oncogene [4]. Murine double minute is able to shuttle continually between the cytoplasm and nucleus because it contains both a nuclear export signal and a nuclear localization signal. In response to DNA damage, for example, by ultraviolet or ionizing radiation, p53 starts accumulating in the nucleus [1,5]. Deoxyribonucleic acid damage can cause activation of p53 by phosphorylation and, consequently, the interaction of p53 and MDM2 is disrupted [6,7]. Tumor protein 53 becomes temporarily free of MDM2 and then accumulates in the nucleus. The intracellular content of p53 depends on the DNA damage response.\nHigher content of p53 directs the cells toward apoptosis, whereas low and/or moderate p53 levels result in cell cycle arrest, providing time for DNA repair [8,9]. On one hand, the activated p53 can induce apoptosis or cell cycle arrest through up-regulation of several associated genes. On the other hand, there are genes with antiapoptotic activity that are repressed by p53 at the transcription or translation levels, and this negative regulation is important for the induction of apoptosis [2,3]. Survivin, a unique member of the inhibitor of apoptosis protein (IAP) family, is known to be repressed by the wild-type p53. This repression is one of the mechanisms inducing apoptosis by the activation of the mitochondrial pathway [2,10].\nProteins that belong to the IAP family play a key role in negative regulation of apoptosis. Eight human IAP family members have been identified: c-IAP1, c-IAP2, NAIP, ILP-2, XIAP, apollon, ML-IAP/livin, and survivin [11,12]. Multifunctional survivin possesses a number of distinct features not shared with other IAP members: (1) it is the shortest polypeptide consisting of 142 amino acid residues; (2) the expression of this protein is cell cycle-regulated and occurs in the G2/M phase; (3) it is undetectable in most normal differentiated adult tissues, but is frequently expressed in embryonic and fetal organs, as well as in developed human malignant tumors; (4) it can be found in different subcellular localizations; and (5) it functions in both inhibition of apoptosis and regulation of cell division in contrast to other IAP members regularly restricted to 1 of these 2 functions [13–15]. Survivin is currently undergoing intensive research as a potential tumor marker and prognostic factor [14,16–19,39].\nThe down-regulation of survivin by wild type p53 in cell cycle is generally well known [3,19], but data that describe the expression pattern and the interaction between survivin and p53 in BCC cells are scarce [20]. Recently, more research groups have been dealing with this topic, mainly in the framework of apoptosis regulation. A variety of non-malignant and malignant cells are being studied, including human melanocytic and keratolytic lesions. The majority of skin lesion studies have focused primarily on squamous cell carcinoma and malignant melanoma [19,21–23]. 
However, this is not the case with BCC. Review articles on the molecular etiology and pathogenesis of BCC explain the genetic aberrations in human skin cancers at the level of the p53 gene; however, data on survivin expression are limited [20]. Therefore, we evaluated the expression pattern of both proteins with respect to intensity, relative number of positively stained cells, and cellular localization as detected by immunohistochemical methods and the relationship between their expression modes in cutaneous basal cell carcinoma." ]
[ null ]
[ "Background", "Material and Methods", "Results", "Discussion", "Conclusions" ]
[ "Basal cell carcinoma (BCC) is the most common form of human skin cancer. Its etiopathogenesis is still not fully understood. Generally, most common genetic alterations in human cutaneous cancers are found at the level of the p53 tumor suppressor gene [1,2]. The p53 tumor suppressor protein plays a critical role in both the induction of apoptosis and carcinogenesis [3].\nUnder normal conditions, the inactive form of p53 protein is usually present at low levels in the cytoplasm of cells. Cytoplasmic localization in normal cells is controlled by murine double minute (MDM2) oncogene [4]. Murine double minute is able to shuttle continually between the cytoplasm and nucleus because it contains both a nuclear export signal and a nuclear localization signal. In response to DNA damage, for example, by ultraviolet or ionizing radiation, p53 starts accumulating in the nucleus [1,5]. Deoxyribonucleic acid damage can cause activation of p53 by phosphorylation and, consequently, the interaction of p53 and MDM2 is disrupted [6,7]. Tumor protein 53 becomes temporarily free of MDM2 and then accumulates in the nucleus. The intracellular content of p53 depends on the DNA damage response.\nHigher content of p53 directs the cells toward apoptosis, whereas low and/or moderate p53 levels result in cell cycle arrest, providing time for DNA repair [8,9]. On one hand, the activated p53 can induce apoptosis or cell cycle arrest through up-regulation of several associated genes. On the other hand, there are genes with antiapoptotic activity that are repressed by p53 at the transcription or translation levels, and this negative regulation is important for the induction of apoptosis [2,3]. Survivin, a unique member of the inhibitor of apoptosis protein (IAP) family, is known to be repressed by the wild-type p53. This repression is one of the mechanisms inducing apoptosis by the activation of the mitochondrial pathway [2,10].\nProteins that belong to the IAP family play a key role in negative regulation of apoptosis. Eight human IAP family members have been identified: c-IAP1, c-IAP2, NAIP, ILP-2, XIAP, apollon, ML-IAP/livin, and survivin [11,12]. Multifunctional survivin possesses a number of distinct features not shared with other IAP members: (1) it is the shortest polypeptide consisting of 142 amino acid residues; (2) the expression of this protein is cell cycle-regulated and occurs in the G2/M phase; (3) it is undetectable in most normal differentiated adult tissues, but is frequently expressed in embryonic and fetal organs, as well as in developed human malignant tumors; (4) it can be found in different subcellular localizations; and (5) it functions in both inhibition of apoptosis and regulation of cell division in contrast to other IAP members regularly restricted to 1 of these 2 functions [13–15]. Survivin is currently undergoing intensive research as a potential tumor marker and prognostic factor [14,16–19,39].\nThe down-regulation of survivin by wild type p53 in cell cycle is generally well known [3,19], but data that describe the expression pattern and the interaction between survivin and p53 in BCC cells are scarce [20]. Recently, more research groups have been dealing with this topic, mainly in the framework of apoptosis regulation. A variety of non-malignant and malignant cells are being studied, including human melanocytic and keratolytic lesions. The majority of skin lesion studies have focused primarily on squamous cell carcinoma and malignant melanoma [19,21–23]. 
However, this is not the case with BCC. Review articles on the molecular etiology and pathogenesis of BCC explain the genetic aberrations in human skin cancers at the level of the p53 gene; however, data on survivin expression are limited [20]. Therefore, we evaluated the expression pattern of both proteins with respect to intensity, relative number of positively stained cells, and cellular localization as detected by immunohistochemical methods and the relationship between their expression modes in cutaneous basal cell carcinoma.", "The sample included 31 subjects with cutaneous BCC. The hematoxylin and eosin-stained slides from each subject were independently reviewed by 2 pathologists to ascertain the diagnosis based on morphologic and immunohistochemical parameters and were correlated with clinical data. The standard diagnostic histomorphologic and immunohistochemical criteria were applied as previously described [24–26]. Immunohistochemical staining was performed using monoclonal mouse anti-p53 antibody (Dako, Carpinteria, CA) and monoclonal mouse antisurvivin antibody (Dako, Carpinteria, CA). Negative controls were obtained by simply omitting the primary antibodies.\nIn each case, the following features were assessed: (1) the intensity of staining, (2) the relative number of positively stained cells and (3), the subcellular localization of p53 and survivin antigens. Microsoft Excel software package was used to perform statistical analyses. Survivin and p53 expression and the correlation between them were analyzed by χ2 test. A P value less than.05 was considered to indicate statistical significance.", "Both p53 and survivin were found in BCC cells either in the nucleus (N), the cytoplasm (C), or both the nucleus and cytoplasm (NC). The expression of antigens among the 31 subjects was scored semiquantitatively as follows:\nIntensity of p53 staining: (a) absent or barely detectable; 3/31 cases, 9.7%; (b) weak; 8/31 cases, 25.8%; (c) moderate; 18/31 cases, 58.1%; and (d) strong; 2/31 cases, 6.4%.\nIntensity of survivin staining: (a) absent or barely detectable; 4/31 cases, 12.9%; (b) week: 22/31 cases, 71%; (c) moderate: 5/31 cases, 16.1%; and (d) strong: 0/31 cases, 0%.\nNumber of p53 positively stained cells: (a) more than 25% per field of view: 26/28 cases, 92.9%; (b) less than 25% per field of view: 2/28 cases, 7.1%.\nNumber of survivin positively stained cells: (a) more than 25% per field of view: 2/27 cases, 7.4%; (b) less than 25% per field of view: 25/27 cases (92.6%).\nSubcellular localization of p53 staining: (a) N localization only: 23/28 cases, 82.1%; (b) C localization only: 0/28 cases, 0%; and (c) NC localization: 5/28 cases, 17.9%.\nSubcellular localization of survivin staining: (a) N localization only: 1/27 cases, 3.7%; (b) C localization only: 0/27 cases, 0%; (c) NC localization: 26/27 cases, 96.3%.\nThe results of all expression profiles are listed in Table 1. Tables 2 and 3 summarize the number of p53 and survivin positively stained cells and their cellular localization, respectively. The relation between the expression intensity of p53 and survivin is shown in Table 4. Increased p53 expression was present also in adjacent cells of normal epidermis. 
The χ2 analysis confirmed a significant correlation between the intensity of survivin and p53 immunoreactivity (P<.05), a significant difference in the percentage of survivin- and p53-labeled cells (P<.001), and a significant difference in N and NC localization of survivin and p53 (P<.001).", "The results of the current study show significant differences in the intensity of immunoreactions and in the percentage of labeled cells between p53 and survivin expression. There were also significant differences in the subcellular compartmentalization of both proteins. Positive p53 immunoreactivity was observed in 28 of 31 cases (90.3%). In accord with previous papers [20,26–28], we noted a high frequency of p53 overexpression in BCC. Interestingly, basal layers of the epidermis adjacent to the tumor lesion expressed mild and/or moderate immunopositivity. All of our subjects showed cytoplasmic p53-positive reactions only. These findings may be related to early accumulation of p53 immunopositive clones as a consequence of sun exposure but without any pathological proliferation [29]. Calli-Demirkan and associates [30] found statistically significant differences in the immunoreactivity of p53 protein in normal epithelia adjacent to BCC in relation to the age of patients. Bäckvall and associates [31] pointed out that the foci of normal p53-positive keratinocytes are under the regulation of additional genetic events.\nGenerally, cells labeled by p53 display nuclear staining, but in some cases cytoplasmic positivity has also been described [32]. The antiapoptotic protein survivin is also known to be localized in both the cytoplasm and the nucleus [33,34]. Similar to p53, a nuclear export signal has also been identified in survivin [35]. It has been reported that survivin is highly expressed in many malignant tissues and is rarely detected in normal differentiated adult tissue [33]. In our study, survivin immunoreactivity was observed in 27 of 31 subjects (87.1%).\nFor p53, the number of positively stained cells in the majority of cases was more than 25% per field of view, whereas a greater number of subjects with positively stained survivin cells had less than 25% immunopositive cells (Table 2). Consistent with previous data, the cellular localization of both proteins showed measurable differences. Results of our study showed nuclear p53 expression in 23 of 28 cases (82.1%) and combined nuclear and cytoplasmic staining in 5 of 28 cases (17.9%; Table 3).\nA more complicated situation seems to occur with the compartmentalization of survivin. Conflicting data were reported with regard to survivin expression in various benign tissues and different types of tumors. Survivin exists in 5 different isoforms as a result of alternatively spliced transcripts: survivin, survivin-2B, survivin-Ex3, survivin-3B, and survivin-2a [36–38]. Survivin and survivin-2B are localized in the cytoplasm, particularly in mitochondria; survivin-Ex3 is primarily nuclear, survivin-2a is located in both the cytoplasm and nucleus, and the subcellular localization of survivin-3B remains to be determined [7,38,39].\nThese 5 survivin splicing variants and their unique subcellular compartmentalization create a delicate balance between induction and inhibition of the apoptotic process [36,38]. Discrepancies in the results regarding cytoplasmic versus nuclear survivin localization proceed from the fact that survivin basically exists in 2 subcellular compartments as a result of the properties of different polypeptides specified by alternatively spliced transcripts [40].
In general, malignant tumors are characterized mainly by nuclear localization [33,34,39]. In our study, nuclear or combined nuclear and cytoplasmic survivin immunoreactivity was observed in all of the 27 positive cases (Table 3). Thus, for the first time, we demonstrated a combined nuclear and cytoplasmic pattern of survivin expression in the majority of cutaneous BCC cases (26/27, 96.3%). As shown in our subjects, there were significant differences in the number of positively stained cells and cellular localization between p53 and survivin (P<.001).\nThere are several genes that play a role in the control of the G2/M phase of the cell cycle that are repressed following induction of wild type p53. Survivin is identified as a gene that is potently repressed at both the RNA and protein levels [3,30]. In the present study, we investigated this interplay at the protein level. Immunoreactivity of p53 was characterized by a predominance of moderate and strong intensity (20/31 cases, 64.5%) over mild intensity (8/31 cases. 25.8%). However, mild intensity of survivin staining dominated (22/31 cases; 71%). By comparison, moderate and strong p53 immunopositivity was accompanied by mild survivin immunopositivity in 16 cases (51.6%). Equal intensity of p53 and survivin was detected in 14 cases (45.2%). Only one case (3.2%) showed no p53 expression accompanied by mild intensity of survivin reaction (Table 4). Our immunohistochemical findings indicated that there are statistically significant differences between the expression intensity of both examined proteins (P<.05).\nThe basic function of p53 is associated with gene regulations that control apoptosis. Cell cycle arrest induced by p53 permits the repair of DNA damage or p53 can promote apoptosis by activation of the mitochondrial pathway. Wild type p53 leads to the release of mitochondrial cytochrome c, which is considered critical for the intrinsic apoptotic pathway [41–43]. Cytochrome c induces the formation of a large multimeric complex “apoptosome” within cytoplasm, which consists of cytochrome c, the adapter protein Apaf-1, adenosine triphosphate, and procaspase-9. The special 3-dimensional structure of this complex recruits and mediates the autoactivation of initiator caspase-9 with a subsequent caspase cascade [43]. Death receptor-mediated caspase activation may also induce p53-dependent apoptosis [44]. The precise mechanism for how survivin inhibits apoptosis is not fully understood. Some research groups have reported direct suppression of caspase-3 and caspase-7 [45]. Other investigators have suggested that survivin lacks structural linker sequence, which allows its binding to caspase-3 [46]. Marusawa and associates [47] found that survivin needs to use the cofactor, interacting with the hepatitis B virus coded X protein and binding to and inhibiting procaspase-9. It is known that survivin also may indirectly suppress apoptosis via intermediate proteins, for example, through binding and inactivating Smac/DIABLO [48,49].\nPrevious studies revealed that many genes involved in the G2/M phase of cell cycle, including a number of genes encoding microtubule components, are subject to negative regulation by wild type p53 [50,51]. Survivin is only expressed in the G2/M phase [52]. It is associated with mitotic spindle by interaction with tubulin during mitosis [50,53]. p53 interacts with the survivin promoter, which is demonstrated as the first promoter to confer p53-dependent repression. 
Survivin promoter is repressed by both direct (p53 binding) and indirect (induction of p21 protein) mechanisms. Each mechanism may depend on particular stresses, phases of the cell cycle, and cell types, as well [3]. Furthermore, p53 can interfere with bcl-2 proteins in mitochondria, resulting in cytochrome c release [54].", "The results of our study suggest that overexpression of wild type p53 protein may suppress the expression of survivin and its antiapoptotic activity in BCC cells. In addition, de novo production or stabilization of p53 is fundamental to triggering apoptosis in BCC [55]." ]
[ null, "materials|methods", "results", "discussion", "conclusions" ]
[ "p53", "survivin", "immunohistochemistry", "basal", "cell", "carcinoma" ]
Expression of DNA repair proteins MSH2, MLH1 and MGMT in human benign and malignant thyroid lesions: an immunohistochemical study.
21358597
DNA repair is a major defense mechanism, which contributes to the maintenance of genetic sequence, and minimizes cell death, mutation rates, replication errors, DNA damage persistence and genomic instability. Alterations in the expression levels of proteins participating in DNA repair mechanisms have been associated with several aspects of cancer biology. The present study aimed to evaluate the clinical significance of DNA repair proteins MSH2, MLH1 and MGMT in benign and malignant thyroid lesions.
BACKGROUND
MSH2, MLH1 and MGMT protein expression was assessed immunohistochemically on paraffin-embedded thyroid tissues from 90 patients with benign and malignant lesions.
MATERIAL/METHODS
The expression levels of MLH1 were significantly upregulated in cases with malignant compared to those with benign thyroid lesions (p = 0.038). The expression levels of MGMT were significantly downregulated in malignant compared to benign thyroid lesions (p = 0.001). Similar associations for both MLH1 and MGMT between cases with papillary carcinoma and hyperplastic nodules were also noted (p = 0.014 and p = 0.026, respectively). In the subgroup of malignant thyroid lesions, MSH2 downregulation was significantly associated with larger tumor size (p = 0.031), while MLH1 upregulation was significantly associated with the presence of lymphatic and vascular invasion (p = 0.006 and p = 0.002, respectively).
RESULTS
Alterations in the mismatch repair proteins MSH2 and MLH1 and the direct repair protein MGMT may result from tumor development and/or progression. Further studies are recommended to draw definite conclusions on the clinical significance of DNA repair proteins in thyroid neoplasia.
CONCLUSIONS
[ "Adaptor Proteins, Signal Transducing", "DNA Modification Methylases", "DNA Repair", "DNA Repair Enzymes", "Female", "Humans", "Immunohistochemistry", "Male", "Middle Aged", "MutL Protein Homolog 1", "MutS Homolog 2 Protein", "Nuclear Proteins", "Thyroid Neoplasms", "Tumor Suppressor Proteins" ]
3524721
null
null
Statistical analysis
Chi-square tests were used to assess the difference of MSH2, MLH1 and MGMT immunoreactivity between malignant and benign thyroid lesions, as well as between papillary carcinoma cases and hyperplastic nodules, which comprise the most numerous histopathological entities of malignant and benign cases, respectively. Chi-square tests were also used to assess the associations between MSH2, MLH1 and MGMT immunoreactivity and clinicopathological characteristics in the subgroup of patients with malignant thyroid lesions. A 2-tailed p<0.05 was considered statistically significant. Statistical analyses were performed using SPSS for Windows (version 11.0; SPSS Inc., Chicago, IL, USA).
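Editorial aside, for illustration only: the chi-square comparison described above can also be reproduced outside SPSS. The sketch below assumes Python with scipy (not the software used in the study) and takes the papillary carcinoma versus hyperplastic nodule MGMT counts quoted in the Results that follow; correction=False requests the plain Pearson statistic (SPSS reports both corrected and uncorrected forms), and on these counts the resulting two-tailed p-value is close to the reported p = 0.026.

from scipy.stats import chi2_contingency

# MGMT immunoreactivity dichotomized as negative/weak (IHC score 0-2) vs moderate/strong (IHC score >= 3).
mgmt = [[13, 40 - 13],   # papillary carcinoma: 13 of 40 negative/weak
        [3, 30 - 3]]     # hyperplastic nodule: 3 of 30 negative/weak

chi2, p, dof, expected = chi2_contingency(mgmt, correction=False)
print(f"Pearson chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")  # two-tailed p, comparable to the reported 0.026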
Results
MSH2 positivity (IHC score >0) was noted in 82 (91%) out of 90 cases with thyroid lesions. More than half (50/90, 56%) of the examined cases presented moderate/strong immunoreactivity for MSH2 protein (IHC score ≥3). The pattern of MSH2 distribution was predominantly nuclear, and occasionally cytoplasmic in cases with malignant thyroid lesions (Figure 1A, B). The pattern of MSH2 distribution was predominantly nuclear in cases with hyperplastic nodules, whereas Hashimoto thyroiditis cases showed both nuclear and cytoplasmic staining to an equivalent extent (Figure 1C, D). In cross-tables, female patients presented significantly higher incidence of moderate/strong MSH2 immunoreactivity compared to male patients (Table 2, p=0.019). MSH2 immunoreactivity was not significantly different between benign and malignant thyroid lesions (Table 2, p=0.665). Moderate/strong MSH2 immunoreactivity was more frequently observed in cases with papillary carcinoma (27/40, 68%) compared to those with hyperplastic nodules (16/30, 53%), without reaching statistical significance (Table 2, p=0.228). The vast majority of follicular (6/7, 85.71%) and medullary (4/5, 80%) carcinoma cases showed negative/weak MSH2 immunostaining (Table 3). Five (83%) out of 6 cases with Hashimoto thyroiditis and 1 (50%) out of 2 cases with anaplastic carcinoma showed moderate/strong MSH2 immunoreactivity (Table 3). In the subgroup of malignant thyroid lesions, moderate/strong MSH2 immunoreactivity was significantly associated with small tumor size and borderline with enhanced follicular cells’ proliferative capacity (Table 4, p=0.031 and p=0.067, respectively). MLH1 positivity (IHC score >0) was noted in 26 (29%) out of 90 cases with thyroid lesions. Twenty-three (26%) out of 90 thyroid lesions showed moderate/strong MLH1 immunoreactivity (IHC score ≥3). The pattern of MLH1 distribution was both nuclear and cytoplasmic in malignant thyroid lesions, except for papillary carcinoma cases, which showed perinuclear and cytoplasmic staining (Figure 2A, B). The pattern of MLH1 distribution was nuclear in cases with hyperplastic nodules, whereas those with Hashimoto thyroiditis showed predominantly cytoplasmic and occasionally nuclear patterns of staining (Figure 2C, D). MLH1 immunoreactivity was not significantly associated with patients’ sex (Table 2, p>0.05). Moderate/strong MLH1 immunoreactivity was significantly more frequently observed in cases with malignant compared to those with benign thyroid lesions (Table 2, p=0.038). A similar discrimination between cases with papillary carcinoma and hyperplastic nodules was also noted (Table 2, p=0.014). Sixteen (40.00%) out of 40 cases with papillary carcinoma presented moderate/strong MLH1 immunoreactivity, whereas only 4 (13%) out of 30 cases with hyperplastic nodules showed moderate/strong MLH1 immunoreactivity. Thyroid lesions with enhanced follicular cell proliferative capacity, reflected by Ki-67 labeling index, showed significantly increased incidence of moderate/strong MLH1 immunoreactivity (Table 2, p=0.038). The vast majority of follicular (6/7, 86%) and medullary (4/5, 80%) carcinomas showed negative/weak MLH1 immunostaining (Table 3). Five (83%) out of 6 cases with Hashimoto thyroiditis and all (2/2, 100%) anaplastic carcinomas showed negative/weak MLH1 immunoreactivity (Table 3). 
In the subgroup of malignant thyroid lesions, moderate/strong MLH1 immunoreactivity was significantly associated with the presence of lymphatic and vascular invasion (Table 4, p=0.006, p=0.002, respectively), whereas a trend of correlation with capsular invasion was also noted (Table 4, p=0.094). Malignant thyroid cases with enhanced follicular cell proliferative capacity showed an increased incidence of moderate/strong MLH1 immunoreactivity, without reaching statistical significance (Table 4, p=0.177). MGMT positivity (IHC score >0) was noted in 70 (78%) out of 90 cases with thyroid lesions. Sixty-six (73%) out of 90 thyroid lesions showed moderate/strong MGMT immunoreactivity (IHC score ≥3). The pattern of MGMT distribution was both nuclear and cytoplasmic in malignant thyroid lesions (Figure 3A, B). The pattern of MGMT distribution was predominantly nuclear in cases with hyperplastic nodules, whereas in Hashimoto thyroiditis perinuclear and cytoplasmic staining was noted (Figure 3C, D). MGMT immunoreactivity was not significantly associated with patients’ sex or follicular cell proliferative capacity (Table 2, p>0.05). Negative/weak MGMT immunoreactivity was significantly more frequently observed in malignant compared to benign thyroid lesions (Table 2, p=0.001). A similar discrimination between cases with papillary carcinoma and hyperplastic nodules was also noted (Table 2, p=0.026). Thirteen (32%) out of 40 cases with papillary carcinoma presented negative/weak MGMT immunoreactivity, whereas only 3 (10%) out of 30 cases with hyperplastic nodules showed analogous MGMT immunoreactivity (Table 3). Three (60%) out of 5 medullary, 2 (29%) out of 7 follicular, and 1 (50%) out of 2 anaplastic carcinomas presented moderate/strong MGMT immunoreactivity (Table 3). All (6/6, 100%) Hashimoto thyroiditis cases showed moderate/strong MGMT immunoreactivity (Table 3). In the subgroup of malignant thyroid lesions, MGMT immunoreactivity was not associated with clinicopathological parameters, except for a trend of correlation with tumor size, as larger tumors presented an increased frequency of negative/weak MGMT immunoreactivity (Table 4, p=0.079). Moderate/strong MGMT immunoreactivity was more frequently observed in malignant cases with enhanced follicular cell proliferative capacity, although not at a statistically significant level (Table 4, p=0.238).
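To make the categories used throughout these results concrete, the following is a minimal sketch (not the authors' code) of the semi-quantitative scoring described in the Methods: an extent score (0–3, based on the percentage of positive follicular cells) is added to an intensity score (0–3), and totals of 0–2 are reported as negative/weak while totals ≥3 are moderate/strong. The function names are hypothetical.

```python
def extent_score(percent_positive: float) -> int:
    """Extent score from the percentage of positive follicular cells."""
    if percent_positive < 5:
        return 0
    if percent_positive < 25:
        return 1
    if percent_positive < 50:
        return 2
    return 3

def classify_ihc(percent_positive: float, intensity: int) -> str:
    """Combine extent and intensity scores (each 0-3) into the two reported categories."""
    total = extent_score(percent_positive) + intensity
    return "moderate/strong" if total >= 3 else "negative/weak"

print(classify_ihc(40, 2))   # extent 2 + intensity 2 = 4 -> moderate/strong
print(classify_ihc(10, 1))   # extent 1 + intensity 1 = 2 -> negative/weak
```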
Conclusions
Alterations in the expression levels of MLH1 and MGMT proteins detected by immunohistochemistry were associated with thyroid malignancy. MSH2, MLH1 and MGMT also showed correlations with clinicopathological parameters crucial for patient management. In this context, the present study suggests that alterations in the expression levels of MMR and MGMT proteins may result from tumor development and/or progression of thyroid neoplasia. Larger cohort studies are recommended in order to draw more definite conclusions on the clinical significance of the DNA repair proteins, with the aim of improving diagnostic accuracy in thyroid neoplasia. Subset analysis of histological subtypes of papillary and follicular carcinoma, with the latter being the most difficult for pathologists to diagnose, is also recommended. Further research should also be conducted to elucidate the precise molecular mechanisms through which DNA repair proteins participate in thyroid cancer development and progression.
[ "Background", "Immunohistochemistry", "Evaluation of immunohistochemistry" ]
[ "Benign and malignant thyroid lesions constitute the most common malignancy of endocrine glands, with rates increasing during the last 2 decades [1,2]. The rapidly rising incidence of thyroid cancer has mainly been attributed to improved small papillary tumor detection [3]. However, increased diagnostic scrutiny is not the sole reason, as large and more advanced cancers, which are clinically apparent and associated with a less favourable prognosis, are increasing nearly as fast as small tumors [4]. Papillary thyroid carcinoma is the most common thyroid malignancy, accounting for more than 80% of all thyroid cancers. Together, papillary and follicular thyroid carcinomas are termed differentiated thyroid carcinomas, and represent approximately 90% of all thyroid cancers [5]. The rare forms of thyroid cancer mainly consist of medullary carcinoma arising from para-follicular C-cells, anaplastic carcinoma, and, less frequently, thyroid lymphoma, Hurthle cell and squamous cell carcinoma, and intrathyroid sarcoma [6]. Although thyroid cancer generally has a favorable outcome, a significant proportion of patients ultimately die from the disease due to local recurrences and/or distant metastases [7]. Thyroid-stimulating hormone levels, thyroid ultrasound and fine-needle aspiration biopsy are key clinical tests to guide patient management [8]. However, in many cases the pathologist is confronted by thyroid lesions in which the distinction between benign and malignant can be quite subtle, and the decision favouring one or another has clinical consequences and implies different treatment modalities [9]. In this respect, the identification of molecular markers, which contribute to the discrimination of benign from malignant thyroid tumors, represents a diagnostic challenge. Recently, there has been considerable progress in identifying biomarkers in thyroid tumors, improving the accuracy of fine-needle aspiration biopsy and contributing to the estimation of tumor aggressiveness or behavior [10–12].\nDNA repair is an important defense mechanism against DNA damage caused by normal metabolic activities and environmental factors [13]. It includes several distinct pathways: direct repair (DR), base and nucleotide excision repair (BER and NER), mismatch repair (MMR), double strand break repair (DSBR), and interstrand crosslink repair systems [14–16]. Inherited or acquired deficiencies in DNA repair proteins participating in the above mechanisms have generally been considered to contribute to the onset of carcinogenesis [14–16]. In the last few years, alterations in expression levels and polymorphisms of DNA repair genes have been associated with increased risk for developing thyroid cancer [17]. Among them, MMR proteins such as MSH2 (Mut-S-Homologin-2) and MLH1 (Mut-L-Homologon-1) have recently been implicated in the development, progression and metastatis of several types of head and neck neoplasia, including thyroid cancer [18–21]. MSH2 is located on chromosome 2p22-p21 and acts as a heterodimer with either MSH6 or MSH3 (MutSα and MutSβ complex, respectively) in order to bind to and recognize base-base mismatches and 1–10 nucleotides insertion/deletion loops [22]. MLH1 forms heterodimers with PMS2 and MLH3 (MutLα complex) to discriminate the old from the new DNA strand and to signal downstream repair factors, such as helicases and exonucleases [23]. 
Notably, germ line mutations in MMR genes, in which MSH2 and MLH1 alterations account for the vast majority of cases, have been implicated in hereditary non-polyposis colorectal cancer [24].\nBeyond MMR proteins, the methylation status of methyl guanine DNA methyltranferase (MGMT), a direct repair enzyme located on chromosome 10q26, has also been investigated in thyroid neoplasia [25,26]. MGMT protein acts through a self-destruction mechanism, removing abnormal adducts from the O6 position of guanine, providing protection from mutagenic agents and conferring resistance to alkylating chemotherapeutic drugs [27]. Loss of MGMT expression has been associated with aggressive tumor behaviour and progression in several types of neoplasia, including esophageal, hepatocellular, lung, gastric and breast carcinomas [28–32]. However, the available data evaluating the immunohistochemical expression of MSH2, MLH1 and MGMT in benign and malignant thyroid lesions thus far remains sparce. The present study aimed to evaluate the immunohistochemical expression of MSH2, MLH1 and MGMT in patients with benign and malignant thyroid lesions. The association of MSH2, MLH1 and MGMT protein expression with important clinicopathological characteristics, such as tumor size and lymph node metastases, capsular, lymphatic and vascular invasion, was also examined.", "Immunostainings for MSH2, MLH1 and MGMT were performed on formalin-fixed, paraffin-embedded thyroid tissue sections using mouse monoclonal anti-MSH2 (CM 219 BK, Biocare Medical, Walnut Creek, CA, USA), anti-MLH1 (CM 220 CK, Biocare Medical) and anti-MGMT (sc-56432, Santa Cruz Biotechnology, Santa Cruz, CA, USA), and IgG1 antibodies, respectively. Briefly, 4μm thick tissue sections were dewaxed in xylene and were brought to water through graded alcohols. Antigen retrieval was performed by microwaving slides in 10mM citrate buffer (pH 6.1) for 15 min at high power, according to the manufacturer’s instructions. To remove the endogenous peroxidase activity, sections were then treated with freshly prepared 0.3% hydrogen peroxide in methanol in the dark for 30 min at room temperature. Non-specific antibody binding was blocked using Sniper, a specific blocking reagent for mouse primary antibodies (Sniper, Biocare Medical) for 5 min. The sections were incubated for 1 h, at room temperature, with the primary antibodies against MSH2, MLH1 and MGMT diluted 1:100 in Van Gogh (Biocare Medical), Renoir Red (Biocare Medical) and phosphate buffered saline (PBS), respectively, according to the manufacturers’ instructions. After washing 3 times with PBS, sections were then incubated at room temperature with biotinylated linking reagent (Biocare Medical) for 10 min, followed by incubation with peroxidase-conjugated streptavidin label (Biocare Medical) for 10 min. The resultant immune peroxidase activity was developed using a DAB substrate kit (Vector Laboratories, USA) for 10 min. Sections were counterstained with Harris’ hematoxylin and mounted in Entellan (Merck, Darmstadt, Germany). Appropriate negative controls were performed by omitting the primary antibody and/or substituting it with an irrelevant anti-serum. As a positive control, colon cancer tissue sections with known increased MLH1, MSH2 and MGMT immunoreactivity were used [34]. 
Follicular cells’ proliferative capacity was assessed immunohistochemically, using a mouse anti-human Ki-67 antigen; IgG1k antibody (clone MIB-1, Dakopatts, Glostrup, Denmark), as previously described [35,36].", "The percentages of positively stained follicular cells were obtained by counting at least 1000 cells in each case by 2 independent observers (S.T. and P.A) blinded to the clinical data, with complete observer agreement (κ=0.959, SE: 0.024). Specimens were considered “positive” for MSH2, MLH1 and MGMT when more than 5% of the follicular cells were stained. A semi-quantitative scoring system was applied based on previously published reports of other immunohistochemical markers on thyroid tissue lesions [36–39]. The immunoreactivity of the follicular cells for MSH2, MLH1 and MGMT was scored according to the percentage of MSH2, MLH1 and MGMT positive follicular cells as 0: negative staining – 0–4% of follicular cells positive; 1: 5–24% of follicular cells positive; 2: 25–49% of follicular cells positive; 3: 50–100% of follicular cells positive, and its intensity as 0: negative staining, 1: mild staining; 2: intermediate staining; 3: intense staining. Finally, the immunoreactivity of MSH2, MLH1 and MGMT was classified as negative/weak if the total score was 0–2, and moderate/strong if the total score was ≥3. In this way we ensured that each group had a sufficient and more homogeneous number of cases in order to be comparable with the other groups [35,36]. Both nuclear and cytoplasmic immunostaining was taken into consideration for the immunohistochemical scoring. Ki-67 immunoreactivity was classified according to the percentage of positively stained follicular cells exceeded the mean percentage value into 2 categories (below and above mean value), as previously reported [35,36]." ]
[ null, null, null ]
[ "Background", "Material and Methods", "Patients", "Immunohistochemistry", "Evaluation of immunohistochemistry", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "Benign and malignant thyroid lesions constitute the most common malignancy of endocrine glands, with rates increasing during the last 2 decades [1,2]. The rapidly rising incidence of thyroid cancer has mainly been attributed to improved small papillary tumor detection [3]. However, increased diagnostic scrutiny is not the sole reason, as large and more advanced cancers, which are clinically apparent and associated with a less favourable prognosis, are increasing nearly as fast as small tumors [4]. Papillary thyroid carcinoma is the most common thyroid malignancy, accounting for more than 80% of all thyroid cancers. Together, papillary and follicular thyroid carcinomas are termed differentiated thyroid carcinomas, and represent approximately 90% of all thyroid cancers [5]. The rare forms of thyroid cancer mainly consist of medullary carcinoma arising from para-follicular C-cells, anaplastic carcinoma, and, less frequently, thyroid lymphoma, Hurthle cell and squamous cell carcinoma, and intrathyroid sarcoma [6]. Although thyroid cancer generally has a favorable outcome, a significant proportion of patients ultimately die from the disease due to local recurrences and/or distant metastases [7]. Thyroid-stimulating hormone levels, thyroid ultrasound and fine-needle aspiration biopsy are key clinical tests to guide patient management [8]. However, in many cases the pathologist is confronted by thyroid lesions in which the distinction between benign and malignant can be quite subtle, and the decision favouring one or another has clinical consequences and implies different treatment modalities [9]. In this respect, the identification of molecular markers, which contribute to the discrimination of benign from malignant thyroid tumors, represents a diagnostic challenge. Recently, there has been considerable progress in identifying biomarkers in thyroid tumors, improving the accuracy of fine-needle aspiration biopsy and contributing to the estimation of tumor aggressiveness or behavior [10–12].\nDNA repair is an important defense mechanism against DNA damage caused by normal metabolic activities and environmental factors [13]. It includes several distinct pathways: direct repair (DR), base and nucleotide excision repair (BER and NER), mismatch repair (MMR), double strand break repair (DSBR), and interstrand crosslink repair systems [14–16]. Inherited or acquired deficiencies in DNA repair proteins participating in the above mechanisms have generally been considered to contribute to the onset of carcinogenesis [14–16]. In the last few years, alterations in expression levels and polymorphisms of DNA repair genes have been associated with increased risk for developing thyroid cancer [17]. Among them, MMR proteins such as MSH2 (Mut-S-Homologin-2) and MLH1 (Mut-L-Homologon-1) have recently been implicated in the development, progression and metastatis of several types of head and neck neoplasia, including thyroid cancer [18–21]. MSH2 is located on chromosome 2p22-p21 and acts as a heterodimer with either MSH6 or MSH3 (MutSα and MutSβ complex, respectively) in order to bind to and recognize base-base mismatches and 1–10 nucleotides insertion/deletion loops [22]. MLH1 forms heterodimers with PMS2 and MLH3 (MutLα complex) to discriminate the old from the new DNA strand and to signal downstream repair factors, such as helicases and exonucleases [23]. 
Notably, germ line mutations in MMR genes, in which MSH2 and MLH1 alterations account for the vast majority of cases, have been implicated in hereditary non-polyposis colorectal cancer [24].\nBeyond MMR proteins, the methylation status of methyl guanine DNA methyltranferase (MGMT), a direct repair enzyme located on chromosome 10q26, has also been investigated in thyroid neoplasia [25,26]. MGMT protein acts through a self-destruction mechanism, removing abnormal adducts from the O6 position of guanine, providing protection from mutagenic agents and conferring resistance to alkylating chemotherapeutic drugs [27]. Loss of MGMT expression has been associated with aggressive tumor behaviour and progression in several types of neoplasia, including esophageal, hepatocellular, lung, gastric and breast carcinomas [28–32]. However, the available data evaluating the immunohistochemical expression of MSH2, MLH1 and MGMT in benign and malignant thyroid lesions thus far remains sparce. The present study aimed to evaluate the immunohistochemical expression of MSH2, MLH1 and MGMT in patients with benign and malignant thyroid lesions. The association of MSH2, MLH1 and MGMT protein expression with important clinicopathological characteristics, such as tumor size and lymph node metastases, capsular, lymphatic and vascular invasion, was also examined.", "[SUBTITLE] Patients [SUBSECTION] Ninety formalin-fixed, paraffin-embedded thyroid tissues from an equal number of patients who had undergone thyroid surgery for benign or malignant disease were included in this study. None of the patients received any kind of anti-cancer treatment prior to surgery. None of the patients had a history of head and neck irradiation or a history of other cancer types. Each case was classified according to the WHO histological classification of thyroid tumors [33]. The clinical material consisted of 36 benign (30 hyperplastic nodules and 6 Hashimoto thyroiditis) and 54 malignant (40 papillary, 5 medullary, 7 follicular and 2 anaplastic thyroid carcinomas) cases. The characteristics of the population under study classified as benign and malignant thyroid lesions are summarized in Table 1.\nNinety formalin-fixed, paraffin-embedded thyroid tissues from an equal number of patients who had undergone thyroid surgery for benign or malignant disease were included in this study. None of the patients received any kind of anti-cancer treatment prior to surgery. None of the patients had a history of head and neck irradiation or a history of other cancer types. Each case was classified according to the WHO histological classification of thyroid tumors [33]. The clinical material consisted of 36 benign (30 hyperplastic nodules and 6 Hashimoto thyroiditis) and 54 malignant (40 papillary, 5 medullary, 7 follicular and 2 anaplastic thyroid carcinomas) cases. The characteristics of the population under study classified as benign and malignant thyroid lesions are summarized in Table 1.\n[SUBTITLE] Immunohistochemistry [SUBSECTION] Immunostainings for MSH2, MLH1 and MGMT were performed on formalin-fixed, paraffin-embedded thyroid tissue sections using mouse monoclonal anti-MSH2 (CM 219 BK, Biocare Medical, Walnut Creek, CA, USA), anti-MLH1 (CM 220 CK, Biocare Medical) and anti-MGMT (sc-56432, Santa Cruz Biotechnology, Santa Cruz, CA, USA), and IgG1 antibodies, respectively. Briefly, 4μm thick tissue sections were dewaxed in xylene and were brought to water through graded alcohols. 
Antigen retrieval was performed by microwaving slides in 10mM citrate buffer (pH 6.1) for 15 min at high power, according to the manufacturer’s instructions. To remove the endogenous peroxidase activity, sections were then treated with freshly prepared 0.3% hydrogen peroxide in methanol in the dark for 30 min at room temperature. Non-specific antibody binding was blocked using Sniper, a specific blocking reagent for mouse primary antibodies (Sniper, Biocare Medical) for 5 min. The sections were incubated for 1 h, at room temperature, with the primary antibodies against MSH2, MLH1 and MGMT diluted 1:100 in Van Gogh (Biocare Medical), Renoir Red (Biocare Medical) and phosphate buffered saline (PBS), respectively, according to the manufacturers’ instructions. After washing 3 times with PBS, sections were then incubated at room temperature with biotinylated linking reagent (Biocare Medical) for 10 min, followed by incubation with peroxidase-conjugated streptavidin label (Biocare Medical) for 10 min. The resultant immune peroxidase activity was developed using a DAB substrate kit (Vector Laboratories, USA) for 10 min. Sections were counterstained with Harris’ hematoxylin and mounted in Entellan (Merck, Darmstadt, Germany). Appropriate negative controls were performed by omitting the primary antibody and/or substituting it with an irrelevant anti-serum. As a positive control, colon cancer tissue sections with known increased MLH1, MSH2 and MGMT immunoreactivity were used [34]. Follicular cells’ proliferative capacity was assessed immunohistochemically, using a mouse anti-human Ki-67 antigen; IgG1k antibody (clone MIB-1, Dakopatts, Glostrup, Denmark), as previously described [35,36].\nImmunostainings for MSH2, MLH1 and MGMT were performed on formalin-fixed, paraffin-embedded thyroid tissue sections using mouse monoclonal anti-MSH2 (CM 219 BK, Biocare Medical, Walnut Creek, CA, USA), anti-MLH1 (CM 220 CK, Biocare Medical) and anti-MGMT (sc-56432, Santa Cruz Biotechnology, Santa Cruz, CA, USA), and IgG1 antibodies, respectively. Briefly, 4μm thick tissue sections were dewaxed in xylene and were brought to water through graded alcohols. Antigen retrieval was performed by microwaving slides in 10mM citrate buffer (pH 6.1) for 15 min at high power, according to the manufacturer’s instructions. To remove the endogenous peroxidase activity, sections were then treated with freshly prepared 0.3% hydrogen peroxide in methanol in the dark for 30 min at room temperature. Non-specific antibody binding was blocked using Sniper, a specific blocking reagent for mouse primary antibodies (Sniper, Biocare Medical) for 5 min. The sections were incubated for 1 h, at room temperature, with the primary antibodies against MSH2, MLH1 and MGMT diluted 1:100 in Van Gogh (Biocare Medical), Renoir Red (Biocare Medical) and phosphate buffered saline (PBS), respectively, according to the manufacturers’ instructions. After washing 3 times with PBS, sections were then incubated at room temperature with biotinylated linking reagent (Biocare Medical) for 10 min, followed by incubation with peroxidase-conjugated streptavidin label (Biocare Medical) for 10 min. The resultant immune peroxidase activity was developed using a DAB substrate kit (Vector Laboratories, USA) for 10 min. Sections were counterstained with Harris’ hematoxylin and mounted in Entellan (Merck, Darmstadt, Germany). Appropriate negative controls were performed by omitting the primary antibody and/or substituting it with an irrelevant anti-serum. 
As a positive control, colon cancer tissue sections with known increased MLH1, MSH2 and MGMT immunoreactivity were used [34]. Follicular cells’ proliferative capacity was assessed immunohistochemically, using a mouse anti-human Ki-67 antigen; IgG1k antibody (clone MIB-1, Dakopatts, Glostrup, Denmark), as previously described [35,36].\n[SUBTITLE] Evaluation of immunohistochemistry [SUBSECTION] The percentages of positively stained follicular cells were obtained by counting at least 1000 cells in each case by 2 independent observers (S.T. and P.A) blinded to the clinical data, with complete observer agreement (κ=0.959, SE: 0.024). Specimens were considered “positive” for MSH2, MLH1 and MGMT when more than 5% of the follicular cells were stained. A semi-quantitative scoring system was applied based on previously published reports of other immunohistochemical markers on thyroid tissue lesions [36–39]. The immunoreactivity of the follicular cells for MSH2, MLH1 and MGMT was scored according to the percentage of MSH2, MLH1 and MGMT positive follicular cells as 0: negative staining – 0–4% of follicular cells positive; 1: 5–24% of follicular cells positive; 2: 25–49% of follicular cells positive; 3: 50–100% of follicular cells positive, and its intensity as 0: negative staining, 1: mild staining; 2: intermediate staining; 3: intense staining. Finally, the immunoreactivity of MSH2, MLH1 and MGMT was classified as negative/weak if the total score was 0–2, and moderate/strong if the total score was ≥3. In this way we ensured that each group had a sufficient and more homogeneous number of cases in order to be comparable with the other groups [35,36]. Both nuclear and cytoplasmic immunostaining was taken into consideration for the immunohistochemical scoring. Ki-67 immunoreactivity was classified according to the percentage of positively stained follicular cells exceeded the mean percentage value into 2 categories (below and above mean value), as previously reported [35,36].\nThe percentages of positively stained follicular cells were obtained by counting at least 1000 cells in each case by 2 independent observers (S.T. and P.A) blinded to the clinical data, with complete observer agreement (κ=0.959, SE: 0.024). Specimens were considered “positive” for MSH2, MLH1 and MGMT when more than 5% of the follicular cells were stained. A semi-quantitative scoring system was applied based on previously published reports of other immunohistochemical markers on thyroid tissue lesions [36–39]. The immunoreactivity of the follicular cells for MSH2, MLH1 and MGMT was scored according to the percentage of MSH2, MLH1 and MGMT positive follicular cells as 0: negative staining – 0–4% of follicular cells positive; 1: 5–24% of follicular cells positive; 2: 25–49% of follicular cells positive; 3: 50–100% of follicular cells positive, and its intensity as 0: negative staining, 1: mild staining; 2: intermediate staining; 3: intense staining. Finally, the immunoreactivity of MSH2, MLH1 and MGMT was classified as negative/weak if the total score was 0–2, and moderate/strong if the total score was ≥3. In this way we ensured that each group had a sufficient and more homogeneous number of cases in order to be comparable with the other groups [35,36]. Both nuclear and cytoplasmic immunostaining was taken into consideration for the immunohistochemical scoring. 
Ki-67 immunoreactivity was classified according to the percentage of positively stained follicular cells exceeded the mean percentage value into 2 categories (below and above mean value), as previously reported [35,36].\n[SUBTITLE] Statistical analysis [SUBSECTION] Chi-square tests were used to assess the difference of MSH2, MLH1 and MGMT immunoreactivity between malignant and benign thyroid lesions, as well as between papillary carcinoma cases and hyperplastic nodules, which comprise the most numerous histopathological entities of malignant and benign cases, respectively. Chi-square tests were also used to assess the associations between MSH2, MLH1 and MGMT immunoreactivity and clinicopathological characteristics in the subgroup of patients with malignant thyroid lesions. A 2-tailed p<0.05 was considered statistically significant. Statistical analyses were performed using SPSS for Windows (version 11.0; SPSS Inc., Chicago, IL, USA).\nChi-square tests were used to assess the difference of MSH2, MLH1 and MGMT immunoreactivity between malignant and benign thyroid lesions, as well as between papillary carcinoma cases and hyperplastic nodules, which comprise the most numerous histopathological entities of malignant and benign cases, respectively. Chi-square tests were also used to assess the associations between MSH2, MLH1 and MGMT immunoreactivity and clinicopathological characteristics in the subgroup of patients with malignant thyroid lesions. A 2-tailed p<0.05 was considered statistically significant. Statistical analyses were performed using SPSS for Windows (version 11.0; SPSS Inc., Chicago, IL, USA).", "Ninety formalin-fixed, paraffin-embedded thyroid tissues from an equal number of patients who had undergone thyroid surgery for benign or malignant disease were included in this study. None of the patients received any kind of anti-cancer treatment prior to surgery. None of the patients had a history of head and neck irradiation or a history of other cancer types. Each case was classified according to the WHO histological classification of thyroid tumors [33]. The clinical material consisted of 36 benign (30 hyperplastic nodules and 6 Hashimoto thyroiditis) and 54 malignant (40 papillary, 5 medullary, 7 follicular and 2 anaplastic thyroid carcinomas) cases. The characteristics of the population under study classified as benign and malignant thyroid lesions are summarized in Table 1.", "Immunostainings for MSH2, MLH1 and MGMT were performed on formalin-fixed, paraffin-embedded thyroid tissue sections using mouse monoclonal anti-MSH2 (CM 219 BK, Biocare Medical, Walnut Creek, CA, USA), anti-MLH1 (CM 220 CK, Biocare Medical) and anti-MGMT (sc-56432, Santa Cruz Biotechnology, Santa Cruz, CA, USA), and IgG1 antibodies, respectively. Briefly, 4μm thick tissue sections were dewaxed in xylene and were brought to water through graded alcohols. Antigen retrieval was performed by microwaving slides in 10mM citrate buffer (pH 6.1) for 15 min at high power, according to the manufacturer’s instructions. To remove the endogenous peroxidase activity, sections were then treated with freshly prepared 0.3% hydrogen peroxide in methanol in the dark for 30 min at room temperature. Non-specific antibody binding was blocked using Sniper, a specific blocking reagent for mouse primary antibodies (Sniper, Biocare Medical) for 5 min. 
The sections were incubated for 1 h, at room temperature, with the primary antibodies against MSH2, MLH1 and MGMT diluted 1:100 in Van Gogh (Biocare Medical), Renoir Red (Biocare Medical) and phosphate buffered saline (PBS), respectively, according to the manufacturers’ instructions. After washing 3 times with PBS, sections were then incubated at room temperature with biotinylated linking reagent (Biocare Medical) for 10 min, followed by incubation with peroxidase-conjugated streptavidin label (Biocare Medical) for 10 min. The resultant immune peroxidase activity was developed using a DAB substrate kit (Vector Laboratories, USA) for 10 min. Sections were counterstained with Harris’ hematoxylin and mounted in Entellan (Merck, Darmstadt, Germany). Appropriate negative controls were performed by omitting the primary antibody and/or substituting it with an irrelevant anti-serum. As a positive control, colon cancer tissue sections with known increased MLH1, MSH2 and MGMT immunoreactivity were used [34]. Follicular cells’ proliferative capacity was assessed immunohistochemically, using a mouse anti-human Ki-67 antigen; IgG1k antibody (clone MIB-1, Dakopatts, Glostrup, Denmark), as previously described [35,36].", "The percentages of positively stained follicular cells were obtained by counting at least 1000 cells in each case by 2 independent observers (S.T. and P.A) blinded to the clinical data, with complete observer agreement (κ=0.959, SE: 0.024). Specimens were considered “positive” for MSH2, MLH1 and MGMT when more than 5% of the follicular cells were stained. A semi-quantitative scoring system was applied based on previously published reports of other immunohistochemical markers on thyroid tissue lesions [36–39]. The immunoreactivity of the follicular cells for MSH2, MLH1 and MGMT was scored according to the percentage of MSH2, MLH1 and MGMT positive follicular cells as 0: negative staining – 0–4% of follicular cells positive; 1: 5–24% of follicular cells positive; 2: 25–49% of follicular cells positive; 3: 50–100% of follicular cells positive, and its intensity as 0: negative staining, 1: mild staining; 2: intermediate staining; 3: intense staining. Finally, the immunoreactivity of MSH2, MLH1 and MGMT was classified as negative/weak if the total score was 0–2, and moderate/strong if the total score was ≥3. In this way we ensured that each group had a sufficient and more homogeneous number of cases in order to be comparable with the other groups [35,36]. Both nuclear and cytoplasmic immunostaining was taken into consideration for the immunohistochemical scoring. Ki-67 immunoreactivity was classified according to the percentage of positively stained follicular cells exceeded the mean percentage value into 2 categories (below and above mean value), as previously reported [35,36].", "Chi-square tests were used to assess the difference of MSH2, MLH1 and MGMT immunoreactivity between malignant and benign thyroid lesions, as well as between papillary carcinoma cases and hyperplastic nodules, which comprise the most numerous histopathological entities of malignant and benign cases, respectively. Chi-square tests were also used to assess the associations between MSH2, MLH1 and MGMT immunoreactivity and clinicopathological characteristics in the subgroup of patients with malignant thyroid lesions. A 2-tailed p<0.05 was considered statistically significant. 
Statistical analyses were performed using SPSS for Windows (version 11.0; SPSS Inc., Chicago, IL, USA).", "MSH2 positivity (IHC score >0) was noted in 82 (91%) out of 90 cases with thyroid lesions. More than half (50/90, 56%) of the examined cases presented moderate/strong immunoreactivity for MSH2 protein (IHC score ≥3). The pattern of MSH2 distribution was predominantly nuclear, and occasionally cytoplasmic in cases with malignant thyroid lesions (Figure 1A, B). The pattern of MSH2 distribution was predominantly nuclear in cases with hyperplastic nodules, whereas Hashimoto thyroiditis cases showed both nuclear and cytoplasmic staining to an equivalent extent (Figure 1C, D). In cross-tables, female patients presented significantly higher incidence of moderate/strong MSH2 immunoreactivity compared to male patients (Table 2, p=0.019). MSH2 immunoreactivity was not significantly different between benign and malignant thyroid lesions (Table 2, p=0.665). Moderate/strong MSH2 immunoreactivity was more frequently observed in cases with papillary carcinoma (27/40, 68%) compared to those with hyperplastic nodules (16/30, 53%), without reaching statistical significance (Table 2, p=0.228). The vast majority of follicular (6/7, 85.71%) and medullary (4/5, 80%) carcinoma cases showed negative/weak MSH2 immunostaining (Table 3). Five (83%) out of 6 cases with Hashimoto thyroiditis and 1 (50%) out of 2 cases with anaplastic carcinoma showed moderate/strong MSH2 immunoreactivity (Table 3). In the subgroup of malignant thyroid lesions, moderate/strong MSH2 immunoreactivity was significantly associated with small tumor size and borderline with enhanced follicular cells’ proliferative capacity (Table 4, p=0.031 and p=0.067, respectively).\nMLH1 positivity (IHC score >0) was noted in 26 (29%) out of 90 cases with thyroid lesions. Twenty-three (26%) out of 90 thyroid lesions showed moderate/strong MLH1 immunoreactivity (IHC score ≥3). The pattern of MLH1 distribution was both nuclear and cytoplasmic in malignant thyroid lesions, except for papillary carcinoma cases, which showed perinuclear and cytoplasmic staining (Figure 2A, B). The pattern of MLH1 distribution was nuclear in cases with hyperplastic nodules, whereas those with Hashimoto thyroiditis showed predominantly cytoplasmic and occasionally nuclear patterns of staining (Figure 2C, D). MLH1 immunoreactivity was not significantly associated with patients’ sex (Table 2, p>0.05). Moderate/strong MLH1 immunoreactivity was significantly more frequently observed in cases with malignant compared to those with benign thyroid lesions (Table 2, p=0.038). A similar discrimination between cases with papillary carcinoma and hyperplastic nodules was also noted (Table 2, p=0.014). Sixteen (40.00%) out of 40 cases with papillary carcinoma presented moderate/strong MLH1 immunoreactivity, whereas only 4 (13%) out of 30 cases with hyperplastic nodules showed moderate/strong MLH1 immunoreactivity. Thyroid lesions with enhanced follicular cell proliferative capacity, reflected by Ki-67 labeling index, showed significantly increased incidence of moderate/strong MLH1 immunoreactivity (Table 2, p=0.038). The vast majority of follicular (6/7, 86%) and medullary (4/5, 80%) carcinomas showed negative/weak MLH1 immunostaining (Table 3). Five (83%) out of 6 cases with Hashimoto thyroiditis and all (2/2, 100%) anaplastic carcinomas showed negative/weak MLH1 immunoreactivity (Table 3). 
Discussion
It is well established that inherited or acquired deficiencies in DNA repair proteins may lead to deleterious mutation rates, genomic instability and cell death associated with the development, differentiation and progression of cancer [15,27]. Notably, alterations in the expression levels and polymorphisms of DNA repair genes have been considered responsible for the induction of thyroid carcinogenesis [17]. In this regard, the present study evaluated the immunohistochemical expression of MSH2 and MLH1 proteins, which participate in the MMR mechanism, as well as that of MGMT, a direct DNA repair protein, in human benign and malignant thyroid lesions.

Among the MMR proteins, the expression levels of MLH1 were upregulated in cases with malignant compared to those with benign thyroid lesions. This discrimination is mainly ascribed to the increased frequency of moderate/strong MLH1 expression in cases with papillary carcinoma compared to those with hyperplastic nodules.
On the other hand, MSH2 expression was not significantly different between malignant and benign thyroid lesions, while an increased frequency of moderate/strong MSH2 immunoreactivity was noted in cases of papillary carcinoma compared to those with hyperplastic nodules, although this did not reach statistical significance. Ruschenburg et al documented that the expression levels of 3 MMR proteins, MLH1, MSH2 and PMS1, were generally elevated in malignant compared to benign thyroid lesions [20]. That study included hyperplastic nodules and follicular adenomas as benign, and follicular carcinomas as malignant thyroid lesions, and did not detect point mutations in the MSH2 (exons 12, 13) and MLH1 (exons 15, 16) genes [20]. On the other hand, we found negative/weak MSH2 and MLH1 expression in the vast majority of follicular carcinoma cases (7/7 and 6/7, respectively), which needs to be confirmed by a larger cohort study in order for precise conclusions to be drawn. The vast majority of medullary carcinoma cases also showed negative/weak expression of both MSH2 and MLH1 proteins, which further suggests that the role of MMR proteins in thyroid neoplastic transformation may vary among the different thyroid carcinoma variants.

The present study also showed that MLH1 upregulation was positively associated with the presence of capsular, lymphatic and vascular invasion, while MSH2 downregulation was positively associated with larger tumor size. These differences in the expression levels of MSH2 and MLH1 may result from tumor development and/or progression of thyroid cancer. In this context, epigenetic alterations of MLH1 have been reported, as methylation of the MLH1 gene was associated with lymph node metastasis and the T1799A BRAF mutation in patients with papillary thyroid carcinoma [21]. Moreover, microsatellite instability (MSI), a form of genomic instability associated with MMR deficiency, has recently been implicated in the pathogenesis of thyroid cancer. In fact, Mitmaker et al showed an increased incidence of MSI in papillary (9/14) and follicular (10/16) carcinoma [40]. In addition, a significant difference in MSI frequency between follicular adenomas and follicular carcinomas was noted [40]. On the other hand, several studies reported higher MSI frequency in benign compared to malignant thyroid lesions [41–43]. However, a more recent study documented MSI in 70% of benign and 65% of malignant thyroid lesions, and stated that iodine deficiency may influence MSI by altering the molecular pathway, leading especially to follicular and anaplastic carcinoma [44]. Immunohistochemical analysis of MMR proteins has been considered a better first-line screening tool than MSI testing for detecting mutation rates of particular genes [45].

The present study further documented that moderate/strong MSH2 and MLH1 immunoreactivity was more frequently observed in cases with enhanced follicular cell proliferative capacity. This finding implies that MMR proteins may be involved in the cell proliferation state of thyroid neoplasia, as highly proliferating follicular cells are expected to have an increased need for DNA repair systems.
Such an association may enhance the diagnostic utility of DNA repair protein detection in the thyroid neoplasia decision-making process, as the proliferative capacity of tumor cells is a characteristic feature of growing tumors, while enhanced cellular proliferative activity and cell death have already been associated with thyroid malignancy [46–48].

Differences in the cellular localization of MMR proteins were also noted. Papillary carcinoma cases presented nuclear (or perinuclear) and cytoplasmic distribution of MSH2 and MLH1 proteins, in contrast to hyperplastic nodules, which showed only nuclear staining. This finding may suggest that destabilization of the MMR protein complexes could occur progressively from hyperplasia to malignancy, as MMR proteins may no longer be bound to the nuclear DNA. Such a distribution pattern may be ascribed to certain types of MMR gene mutation and may reflect protein deregulation, as MMR proteins could be incompletely synthesized in the cytoplasmic ribosomes due to gene mutations that prevent transport to the nucleus [45,49–52]. Hashimoto thyroiditis cases also showed both nuclear and cytoplasmic MSH2 and MLH1 protein distribution. It has been shown that there is an overlap in the morphological features, immunohistochemical staining pattern and, most importantly, molecular profile between papillary thyroid carcinoma and Hashimoto thyroiditis [53]. Although considered a benign condition, Hashimoto thyroiditis almost always harbours a genetic rearrangement strongly associated with and highly specific for papillary thyroid carcinoma [53]. The RET/PTC-RAS-BRAF cascade has been shown to be involved in the development of papillary thyroid carcinoma and oxyphil cell metaplasia in Hashimoto thyroiditis, raising the possibility of a molecular link between the 2 disorders [54].

We further showed that the expression levels of MGMT were downregulated in cases with malignant compared to those with benign thyroid lesions. Papillary carcinoma cases also showed an increased incidence of MGMT downregulation compared to hyperplastic nodules. Moreover, MGMT downregulation was associated with large tumor size in thyroid cancer cases. Research on the methylation status of MGMT showed an incidence of 15% methylation in papillary thyroid tumors [26]. Moreover, Schagdarsurengin et al reported that MGMT hypermethylation preferentially occurred in undifferentiated thyroid carcinomas compared to differentiated ones [25]. Accordingly, loss of MGMT expression in several cancer tissues, including hepatocellular, gastric, breast, esophageal and biliary tract carcinoma, has been correlated with clinicopathological parameters and poor prognosis [28,29,55]. Reduced MGMT expression was associated with advanced stage, lymph node positivity and poor prognosis in oral squamous cell carcinoma patients [31]. In precancerous oral lesions, significant loss of MGMT expression was noted from hyperplasia to dysplasia, supporting the assumption that MGMT deregulation may be an early event in oral tumorigenesis [31]. Low MGMT expression was also correlated with hepatic invasion and poor prognosis in biliary tract carcinomas [29]. Hepatocellular carcinoma patients with reduced MGMT presented with advanced disease stage and poor prognosis [30]. Low MGMT expression was also associated with serosal invasion, advanced disease stage, lymph node positivity, undifferentiated histopathological type and poor prognosis in gastric carcinoma patients [28].
Collectively, the reduced expression levels of MGMT protein in malignant thyroid lesions, in accordance with evidence from other types of cancer, reinforce the assumption that downregulation of MGMT expression may result from tumor development and/or progression of thyroid neoplasia. The co-localization of MGMT in the nucleus and cytoplasm of follicular cells in malignant thyroid lesions, in contrast to hyperplastic nodules, which showed predominantly nuclear distribution, may be ascribed to MGMT gene mutation, as has been suggested for other types of cancer [56]. A similar pattern of distribution was observed in cases with Hashimoto thyroiditis, which may be ascribed to the fact that it almost always harbours a genetic rearrangement that is strongly associated with and highly specific for papillary thyroid carcinoma, raising the possibility of a molecular link between the 2 disorders [53,54].
[ null, "materials|methods", "subjects", null, null, "methods", "results", "discussion", "conclusions" ]
[ "DNA repair proteins", "immunohistochemistry", "thyroid malignancy", "clinicopathological", "parameters", "diagnosis" ]
Serum nitric oxide metabolite as a biomarker of visceral fat accumulation: clinical significance of measurement for nitrate/nitrite.
21358598
A visceral fat area of more than 100 cm2, as measured by computed tomography (CT) at the umbilical level, has been included as a criterion for obesity in all the proposed criteria for metabolic syndrome. However, CT cannot be used frequently because of radiation exposure. We evaluated the usefulness of measuring the serum levels of nitric oxide (NO), instead of CT or the waist circumference, as a marker of abdominal visceral fat accumulation.
BACKGROUND
The study was carried out in 80 subjects. The serum levels of NO metabolites (nitrate/nitrite) were measured using the Griess reagent.
MATERIAL/METHODS
Simple and multiple regression analysis revealed that the serum levels of NO metabolites showed the greatest degree of correlation with the visceral fat area (r = 0.743, p<0.0001). The cut-off value of the serum NO metabolites corresponding to a visceral fat area of 100 cm2, as determined using the ROC curve, was 21.0 µmol/ml (sensitivity 88%, specificity 82%); this method was more sensitive than the waist circumference for the evaluation of visceral fat accumulation.
RESULTS
Measurement of the serum levels of NO metabolites may be a simple, safe, convenient and reliable method for the evaluation of visceral fat accumulation in clinical diagnostic screening.
CONCLUSIONS
[ "Adipocytes", "Biomarkers", "Circadian Rhythm", "Female", "Humans", "Intra-Abdominal Fat", "Male", "Middle Aged", "Nitrates", "Nitric Oxide", "Nitric Oxide Synthase Type II", "Nitrites", "Regression Analysis", "Waist Circumference" ]
3524723
Statistical analysis
All the data were expressed as means ±SD. The relationship between any 2 variables was analyzed by standard correlation analysis conducted using the StatView version 5.0 software (SAS, Cary, NC). ANOVA was applied for group comparisons, followed by Student’s t test for confirmation of the statistical associations. The relationship between the L/S ratio, VFA, or SFA and other relevant covariates was examined by multiple regression analysis and determination of the standardized correlation coefficients. Statistical significance was assumed when the P value was <0.05.
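As an illustration of the regression step described above, the following is a minimal sketch (not the authors' StatView analysis) of obtaining standardized (beta) coefficients from a multiple linear regression. NumPy is assumed to be available, and the example data are random placeholders rather than the study data.

```python
import numpy as np

def standardized_betas(X, y):
    """Standardized (beta) coefficients from a multiple linear regression.

    X: (n_subjects, n_predictors) covariates (e.g. BMI, WC, NO metabolites)
    y: (n_subjects,) outcome (e.g. visceral fat area in cm^2)
    """
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)  # z-score each predictor
    yz = (y - y.mean()) / y.std(ddof=1)                # z-score the outcome
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)    # no intercept needed for centred data
    return betas

# Hypothetical placeholder data standing in for the 80 study subjects
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 3))                           # e.g. columns: WC, hsCRP, NO metabolites
y = 100 + 20 * X[:, 2] + rng.normal(scale=10, size=80)
print(standardized_betas(X, y))
```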
Results
Characteristics of subjects
Characteristics of the study subjects by sex are presented in Table 1. Of the 80 subjects, 38 were men and 42 were women. The mean BMI was 24.2±1.0 kg/m2 in men and 25.0±1.1 kg/m2 in women. There was no significant difference between the sexes for BMI, L/S ratio, all serum markers, the existence of obesity-related disorders or total abdominal fat area. However, the mean subcutaneous fat area was higher in women (163.2±23.1 cm2) than in men (145.8±10.4 cm2) and the mean waist circumference was higher in men (88.2±3.6 cm) than in women (84.5±0.8 cm), while the mean visceral fat area was also higher in men (104.3±3.1 cm2) than in women (96.2±15.4 cm2) (p<0.05).

Stability of NO metabolites
As NO is an unstable vasodilator with a short half-life under physiological conditions, we measured serum NO metabolites (nitrate/nitrite) as the blood NO level [27]. To confirm the stability of NO metabolites after sample collection, we examined their stability at −80°C for 7 days and at room temperature for 24 hours. As shown in Supplemental Figures 1 and 2, NO metabolites were stable under both storage conditions.

Circadian rhythm of NO metabolite levels in humans
Since certain foods are known to be rich sources of NO metabolites, we measured NO metabolite levels 10 times a day to examine the circadian rhythm and the influence of eating a meal. Serum NO metabolite levels rose after meals, with the influence of a meal clearly recognizable 2 hours after eating in all 5 patients (Figure 1); levels subsequently waned, and fasting serum NO metabolite levels were quite stable (Figure 1).
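As a worked illustration of the group comparison reported above under Characteristics of subjects, the following sketch (not the authors' analysis) computes a two-sample Student's t test from the reported summary statistics for visceral fat area in men versus women; group sizes (38 and 42) and means ± SD are taken from the text, and SciPy is assumed to be available.

```python
from math import sqrt
from scipy.stats import t as t_dist

def pooled_t_from_summary(m1, s1, n1, m2, s2, n2):
    """Equal-variance (Student's) t test from group means, SDs and sizes."""
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
    t_stat = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))
    df = n1 + n2 - 2
    p = 2 * t_dist.sf(abs(t_stat), df)                           # two-sided p value
    return t_stat, df, p

# Visceral fat area: men 104.3 +/- 3.1 cm^2 (n = 38) vs. women 96.2 +/- 15.4 cm^2 (n = 42)
t_stat, df, p = pooled_t_from_summary(104.3, 3.1, 38, 96.2, 15.4, 42)
print(f"t = {t_stat:.2f}, df = {df}, p = {p:.4f}")  # p < 0.05, consistent with the text
```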
Comparison of serum NO metabolite levels between subjects with a visceral fat area under and over 100 cm2
We compared serum NO metabolite levels between the group with a visceral fat area under 100 cm2 and the group with a visceral fat area over 100 cm2. A significant elevation of serum NO metabolites (nitrate/nitrite) was observed in obese subjects (visceral fat area over 100 cm2) compared with non-obese subjects (visceral fat area under 100 cm2) (Figure 2).

Relationship between visceral fat accumulation and clinical parameters
The relationship between the visceral fat area and the other factors is shown in Figure 3. The visceral fat area was correlated with: BMI (r=0.652, p<0.0001; Figure 3A); body weight (r=0.677, p<0.0001; Figure 3B); WC (r=0.743, p<0.0001; Figure 3C); hsCRP (r=0.535, p<0.0001; Figure 3D); serum insulin concentration (r=0.485, p<0.0001; Figure 3E); and NO metabolites (r=0.743, p<0.0001; Figure 3F). WC and NO metabolites were the most closely correlated with the visceral fat area. Multiple regression analysis was performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [BMI, body weight, WC, hsCRP, serum insulin concentration, NO metabolites]) on the visceral fat area. The results shown in Table 2 indicate that WC and NO metabolites, but not the other factors, were significantly related to the visceral fat area.

Relationship between NO metabolites and clinical parameters
Next, we examined the relationship between serum NO metabolites and the other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K), and HOMA-IR (r=0.273, p=0.0264; Figure 3L). In particular, the visceral fat area was closely correlated with the NO metabolites. Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [VFA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites.
[SUBTITLE] Relationship between NO metabolites and clinical parameters [SUBSECTION] Next, we examined the relationship between serum NO metabolites and the other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K) and HOMA-IR (r=0.273, p=0.0264; Figure 3L); in particular, the visceral fat area was closely correlated with the NO metabolites. Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex and the 6 correlated factors [VFA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites. The results shown in Table 2 indicate that only the visceral fat area, and none of the other factors, was significantly related to the NO metabolites.
[SUBTITLE] Comparison of NO metabolites and waist circumference in the association with visceral fat accumulation [SUBSECTION] As our results indicated that visceral fat area was strongly correlated with both WC and NO metabolites, we compared NO metabolites and waist circumference with respect to their association with visceral fat area. As shown in Table 3, each of the 3 sets of metabolic syndrome criteria specifies a different WC cut-off, so the NO metabolite cut-off was compared with each of the 3 WC criteria. The cut-off value for NO metabolites corresponding to a VFA of 100 cm2, determined from the ROC curve, was 21.0 μmol/ml (sensitivity 88%, specificity 82%). Using the same method, the WC cut-off of the 2005 guidelines of the Japanese Society of Internal Medicine (85 cm for men, 90 cm for women) gave a sensitivity of 75% and a specificity of 90%, the WHO cut-off (37 inches for men and women) a sensitivity of 56% and a specificity of 95%, and the AHA cut-off (40 inches for men, 35 inches for women) a sensitivity of 50% and a specificity of 98% (Table 3).
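The ROC-based cut-off reported above can in principle be reproduced with a few lines of code. The sketch below derives the threshold that maximizes the Youden index (sensitivity + specificity - 1) for discriminating a visceral fat area of 100 cm2 or more; the data file and column names are hypothetical, and the 21.0 μmol/ml value is the one reported by the authors, not an output of this sketch.

    # Illustrative sketch: ROC analysis for the NO metabolite cut-off that best
    # identifies visceral obesity (VFA >= 100 cm2), analogous to Table 3.
    import pandas as pd
    from sklearn.metrics import roc_curve

    df = pd.read_csv("no_metabolite_cohort.csv")    # hypothetical file
    y_true = (df["vfa_cm2"] >= 100).astype(int)     # 1 = visceral obesity
    fpr, tpr, thresholds = roc_curve(y_true, df["no_metabolite"])

    youden = tpr - fpr                              # sensitivity + specificity - 1
    best = youden.argmax()
    print(f"cut-off = {thresholds[best]:.1f} umol/ml, "
          f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")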
[SUBTITLE] Detection of NO generation and iNOS expression in human visceral adipose cells [SUBSECTION] We confirmed that visceral adipose cells express NO synthase and can generate NO upon stimulation with obesity-associated hormones such as insulin, leptin and angiotensin II. As shown in Figure 4A, expression of iNOS protein was observed in cultured human visceral adipose cells after stimulation with these hormones. In addition, an increase in NO generation from cultured human visceral adipose cells was observed after the same stimulation (Figure 4B). These results indicate that the majority of the elevated serum NO was generated from visceral fat.
Conclusions
NO is a vasodilator with a short half-life that acts over only a short distance, and it is thus ideally suited to act as a tissue hormone [16,17]. In our preliminary experiments, production of NO metabolites was increased in visceral adipose tissue in rats [56]. In the present study, we showed that obesity-associated hormones such as insulin, leptin and angiotensin II regulate the production of NO in cultured human visceral adipose cells. These results indicate that visceral fat might be an important source of NO in humans, and we suggest that measurement of serum NO metabolites may be a good biomarker for the evaluation of visceral fat accumulation. We also clearly demonstrated a marked elevation of serum NO metabolites in obese subjects compared with non-obese subjects, and this elevation was closely correlated with visceral fat area. Therefore, monitoring of serum NO metabolite levels may be a more useful, safer and more cost-effective marker for the diagnosis of visceral fat accumulation than CT or WC. Since certain foods, especially processed meats, are rich sources of NO metabolites [57], a long-term diet rich in NO metabolites could affect the serum NO metabolite level, and care must be taken to avoid such foods before measurement. In conclusion, serum NO metabolites are significantly related to visceral fat accumulation, and monitoring of serum NO metabolite levels may be a simple, safe, convenient and reliable biomarker for the evaluation of visceral fat accumulation in clinical diagnostic screening.
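If the proposed biomarker were adopted for screening, the decision rule itself is trivial to encode. The function below is a purely illustrative sketch that applies the 21.0 μmol/ml cut-off reported in Table 3 to a fasting serum NO metabolite value; the threshold comes from this single cohort and would require independent validation before any clinical use.

    # Illustrative screening rule only; the cut-off is taken from this study's
    # ROC analysis and is not a validated clinical threshold.
    def suspected_visceral_obesity(no_metabolite_umol_per_ml: float,
                                   cutoff: float = 21.0) -> bool:
        """Return True if the fasting serum NO metabolite level meets the cut-off
        associated with a visceral fat area of 100 cm2 or more."""
        return no_metabolite_umol_per_ml >= cutoff

    print(suspected_visceral_obesity(24.3))  # True
    print(suspected_visceral_obesity(15.0))  # False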
[ "Background", "Subjects", "Measurement for various indices", "Circadian rhythm of NO metabolite levels in human", "Detection of NO generation and iNOS expression in human visceral adipose cells", "Anthropometry and abdominal fat distribution", "Definition of metabolic syndrome", "Assessment of hepatic fat content", "Stability of NO metabolites", "Circadian rhythm of NO metabolite levels in humans", "Comparison of serum NO metabolite levels under 100 cm2 and over 100 cm2 of visceral fat area", "Relationship between visceral fat accumulation and clinical parameters", "Relationship between NO metabolites and clinical parameters", "Comparison of NO metabolites and waist circumference in the association of visceral fat accumulation", "Detection of NO generation and iNOS expression in human visceral adipose cells" ]
[ "Recently, rapid socioeconomic growth has led to a more sedentary lifestyle and change in diet over the past several decades in developed nations. As a result of economic prosperity, obesity has become a serious health problem in several Western countries. Although the prevalence of obesity in Asian populations is lower than that in Westerners, it has been reported that the health risks associated with obesity in Asians are observed at lower body mass index (BMI) [1] as compared to Westerners.\nMany studies have demonstrated a close relationship between body fat accumulation and occurrence of metabolic syndrome [2,3]. Specifically, excessive accumulation of abdominal adipose tissue, especially intra-abdominal visceral fat, leads to obesity-related complications [4,5]. The metabolic syndrome, a cluster of glucose intolerance, hypertension, and dyslipidemia with visceral fat accumulation, is a common health problem in developed nations [6–8].\nA simple obesity index is needed to monitor the development of obesity because of the worldwide increase in obesity [9,10]. In the mid-1990s, waist circumference was used as an independent risk indicator to identify individuals who needed weight management [11–13]. According to the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI) criteria, waist circumference of 40 inches or more for men and 35 inches or more for women is a risk factor for development of metabolic syndrome. At the annual meeting of the Japanese Society of Internal Medicine in April 2005, one of the diagnostic criteria for metabolic syndrome is abdominal circumference of 85 cm (33.46 inches) for men and 90 cm (35.43 inches) for women, measured at the umbilical level [14]. However, subcutaneous and visceral fat cannot be differentially evaluated by measuring only abdominal circumference. Therefore, a criterion was set for measuring the area of visceral fat (more than 100 cm2) by computed tomography (CT) at the umbilical level [15]. However, CT cannot be used frequently because of the risk from radiation exposure.\nWe therefore considered serum nitric oxide (NO) level for use in the evaluation of abdominal visceral fat accumulation, instead of CT and waist circumference. NO is a vasodilator with a short half-life and acts within a small distance, and is thus ideally suited to act as a tissue hormone [16,17]. It is thought that NO is involved in adipose tissue biology by influencing adipogenesis, insulin-stimulated glucose uptake, and lipolysis [16]. The enzymes for NO generation in adipose cells are endothelial NO synthase (eNOS) and inducible NO synthase (iNOS) [18–20]. It has been reported that NO inhibits the proliferation, but stimulates the expression, of 2 adipogenic marker genes – peroxisome proliferator-activated receptor γ and uncoupling protein 1, in vitro in rat brown preadipocytes [21]. Lipid accumulation and lipogenic enzymes are also induced by NO in rat white preadipocytes [22]. In addition, it is reported that insulin-stimulated glucose uptake in rat white adipose tissue is dependent on NO synthesis in vivo [23]. Basal as well as catecholamine-stimulated lipolysis is inhibited by NO in human and rat subcutaneous adipose tissue depots [24–27].\nCytokine-dependent regulation of iNOS has also been reported in fat cells [28]. Based on these findings, NO appears to be an important mediator for adipocyte physiology. 
Thus, we investigated the relationship between abdominal visceral fat accumulation and serum NO level, and showed the clinical significance of measurement of NO metabolites in serum for the evaluation of fat accumulation.", "We evaluated the clinical indices in patients who were admitted to the Yokohama City University Hospital from 2006 to 2009. The protocol was reviewed and approved by the institutional ethics review committee. Informed consent was obtained from all subjects before examination. The study was carried out in 80 subjects admitted to our hospital, and was restricted to men and postmenopausal women to eliminate the influence of pregnancy and female hormone replacement therapy. The study was also restricted to persons over 20 years of age to reduce the possible confounding effect of growth and development. A total of 80 Japanese subjects (38 men and 42 postmenopausal women, 58.0±7.8 years, BMI 24.6±1.1 kg/m2) were evaluated in this study (Table 1). The proportions of subjects with history of diabetes mellitus, hyperlipidemia, and hypertension were 25.6%, 24.3%, and 22.7%, respectively.", "Venous blood samples were obtained after the patients had fasted overnight (12 hours), for measurement of the serum ALT, glucose, insulin, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, Fe, Ferritin, high-sensitive C-reactive protein (hsCRP), type IV collagen 7s domain, and hyaluronic acid. The serum insulin levels were measured by radioimmunoassay, while the other laboratory biochemical parameters were measured in a conventional automated analyzer.\nAs NO itself is unstable in in vivo physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as indicators of NO level in blood [29]. Plasma samples (50 μl) were deproteinized by incubation with 140 μl of deionized H2O and 10 μl of 30% ZnSO4 at room temperature for 15 min, and the samples were centrifuged at 2000 g for 10 min. Nitrate was converted to nitrite using cadmium beads, and total nitrite as measured spectrophotometrically using a Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI).\nInsulin resistance was calculated by the homeostasis model assessment of insulin resistance (HOMA-IR) using the following formula: [fasting serum insulin (μU/ml) × fasting plasma glucose (mg/dl)/405]. However, the HOMA-IR was performed in only 72 subjects for whom the fasting plasma glucose was under 170 mg/dl, because HOMA-IR has been reported to be a suitable method for evaluating the presence of insulin resistance in patients only when the fasting glucose levels are under 170 mg/dl [30].", "To examine circadian rhythm of NO metabolite levels in human, we measured NO metabolite levels 10 times a day (6:00 am, 8:00 am, 9:00 am, 12:00 pm, 2:00 pm, 3:00 pm, 6:00 pm, 8:00 pm, 9:00 pm, 6:00 am) for inpatients on whom liver biopsy was performed. Since certain foods are known to be rich sources of NO metabolites, all patients ate meals at the same times (7:00 am, 1:00 pm, 7:00 pm), and venous blood samples were obtained 1 hour before, 1 hour after, and 2 hours after eating a meal.", "Human adipose cells were purchased from Cell Garage Co., Ltd. (Ishikari, Hokkaido, Japan) and cultured in human primary mesenterium visceral adipose cells in a 37°C incubator room. Cells were stimulated with obesity-associated hormones such as insulin, leptin, and angiotensin II, and supernatant for measurement of NO metabolite, and samples for Western blot analysis were prepared. 
NO metabolite in supernatant was measured by Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI). The samples for Western blot analysis (25 μg of protein/lane) were separated by SDS-polyacrylamide gel electrophoresis (PAGE), and transferred to a nitrocellulose membrane using an electroblotting transfer apparatus. The nitrocellulose membranes were incubated in blocking buffer (10 mM Tris, 100 mM NaCl, 0.1% Tween 20, and 5% nonfat milk) overnight at 4°C. Thereafter, the membranes were incubated with rabbit polyclonal anti-iNOS antibody (Cayman Chemical, Ann Arbor, MI) for 90 min at room temperature. The membranes were washed 3 times for 10 min each in washing buffer, and incubated with the secondary antibody diluted in blocking buffer (anti-rabbit peroxidase-conjugated antibody) for 60 min at room temperature. The signals were then detected by ECL-plus Western blotting starter kit (GE Healthcare UK Ltd., Amersham Place, Little Chalfont, Buckinghamshire, England) as described by the manufacture.", "Anthropometric measurements (height, weight and waist circumference [WC]) were performed in a standing position. The weight and height of the patients were measured with a calibrated scale after the patients removed their shoes and any heavy items of clothing. Body mass index (BMI) was calculated as weight divided by the square of height (kg/m2). WC at the umbilical level was measured with a non-stretch-able tape in the late exhalation phase while standing [31]. Abdominal fat distribution was determined using CT (computed tomography) scanning while the subjects were the supine position, in accordance with a previously described procedure [32]. Ordinary CT parameters were used, specifically, 120 kV and 200 mA, as well as a slice thickness of 5 mm, a scanning time of 2 seconds, and a field of view of 400 mm. The subcutaneous fat area (SFA) and intra-abdominal visceral fat area (VFA) were measured at the level of the umbilicus and determined by a standardized method with CT numbers. Briefly, a region of interest of the subcutaneous fat layer was defined by tracing its contour on each scan, and the attenuation range of CT numbers (in Hounsfield units) for fat tissue was calculated. A histogram for fat tissue was computed on the basis of mean attenuation ±2SD. Total and intraperitoneal tissue with attenuation within the mean ±2SD were considered to be the total fat area (TFA) and VFA, and the SFA was defined by subtracting the VFA from the TFA.", "Metabolic syndrome was defined according to the 2005 guidelines of the Japanese Society of Internal Medicine or the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI) criteria [33,34], and World Health Organization (WHO). In the Japanese guideline, which is similar to the IDF criteria [35,36], subjects with metabolic syndrome must have: abdominal obesity (defined as WC more than 85 cm in men or more than 90 cm in women [37]), plus any 2 of the following 3 factors: (1) dyslipidemia – hypertriglyceridemia (serum triglyceride concentration more than 150 mg/dl {1.69 mmol/L}) and/or low HDL-cholesterol (serum concentration less than 40 mg/dl {1.04 mmol/L}); (2) hypertension – systolic blood pressure more than 130 mmHg and/or diastolic blood pressure more than 85 mmHg; (3) high fasting glucose – serum glucose concentration more than 110 mg/dl {6.1 mmol/L}). 
In the present study, abdominal obesity was defined as WC more than 85 cm in men or more than 90 cm in women as reported for the Japanese population [37].", "The degree of liver steatosis was measured by CT. Previous studies have shown a strong correlation between the CT attenuation values of the liver and the extent of fatty infiltration as measured by biopsy [38,39]. The ratio of the CT attenuation value of the liver to that of the spleen (L/S ratio) was used for quantitative estimation of the hepatic fat content, with an L/S ratio of <1 being considered to represent fatty liver [37].", "As NO is an unstable vasodilator with a short half-life in physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as NO level in blood [27]. To confirm the stability of NO metabolites after sample collection, we examined the stability of NO metabolites under the condition of −80°C for 7 days, and room temperature for 24 hours. As shown in Supplemental Figures 1 and 2, NO metabolites were stable under the storage condition of −80°C for 7 days and room temperature for 24 hours.", "Since certain foods are known to be rich sources of NO metabolites, we measured NO metabolite levels 10 times a day to examine circadian rhythm and the influence of eating a meal. Nevertheless, serum NO metabolite levels were increased and the influence of a meal was clearly recognized 2 hours after eating a meal in all 5 patients (Figure 1); serum NO metabolite levels were also waning 2 hours after eating a meal, and fasting serum NO metabolite levels were quite stable (Figure 1).", "We compared serum NO metabolite levels of the group with visceral fat area under 100 cm2 to the group that had visceral fat area over 100 cm2. Significant elevation of serum NO metabolites (nitrate/nitrite) in obese subjects (visceral fat area over 100 cm2) were observed in comparison to that in non-obese subjects (visceral fat area under 100 cm2) (Figure 2).", "The relationship between visceral fat area and the other factors is shown in Figure 3. The visceral fat area was correlated with: BMI (r=0.652, p<0.0001; Figure 3A); body weight (r=0.677, p<0.0001; Figure 3B); WC (r=0.743, p<0.0001; Figure 3C); hsCRP (r=0.535, p<0.0001; Figure 3D); serum insulin concentration (r=0.485, p<0.0001; Figure 3E); and NO metabolite (r=0.743, p<0.0001; Figure 3F). WC and NO metabolites were closely correlated with visceral fat area. Multiple regression analysis was performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [BMI, body weight, WC, hsCRP, serum insulin concentration, NO metabolites]) on the visceral fat area. The results shown in Table 2 indicate that the WC and NO metabolites, but not other factors, were significantly related to the visceral fat area.", "Next, we examined the relationship between NO metabolites in serum and other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K), and HOMA-IR (r=0.273, p=0.0264; Figure 3L). In particular, the visceral fat area was closely correlated with the NO metabolites. Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [FA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites. 
The results shown in Table 2 indicate that only the visceral fat area, but not other factors, were significantly related to the NO metabolites.", "As our results indicated that visceral fat area was strongly correlated with WC and NO metabolites, we compared the NO metabolites and waist circumference for the association with visceral fat area. As shown in Table 3, any of 3 criteria of metabolic syndrome respectively indicated another waist circumference. We compared NO metabolite with each of the 3 criteria of WC. The cut-off value for NO metabolite, which corresponds to VFA 100 cm2 using the ROC curve, was 21.0 μmol/ml (sensitivity 88%, specificity 82%). Using the same method, the cut-off value for WC of 2005 guidelines of the Japanese Society was 85 cm for men and 90 cm for women (sensitivity 75%, specificity 90%), WHO was 37 inches for men and women (sensitivity 56%, specificity 95%), and AHA was 40 inches for men and 35 inches for women (sensitivity 50%, specificity 98%) (Table 3).", "We confirmed that visceral adipose cells express NO synthase and can generate NO by stimulation with obesity-associated hormones such as insulin, leptin, and angiotensin II. As shown in Figure 4A, expression of iNOS protein was observed in cultured human visceral adipose cells by the stimulation of obesity-associated hormones. In addition, increase in NO generation from cultured human visceral adipose cells by the stimulation of obesity-associated hormones was also observed (Figure 4B). These results indicate that the majority of elevated serum NO was generated from visceral fat." ]
[ null, "subjects", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Material and Methods", "Subjects", "Measurement for various indices", "Circadian rhythm of NO metabolite levels in human", "Detection of NO generation and iNOS expression in human visceral adipose cells", "Anthropometry and abdominal fat distribution", "Definition of metabolic syndrome", "Assessment of hepatic fat content", "Statistical analysis", "Results", "Characteristic of subjects", "Stability of NO metabolites", "Circadian rhythm of NO metabolite levels in humans", "Comparison of serum NO metabolite levels under 100 cm2 and over 100 cm2 of visceral fat area", "Relationship between visceral fat accumulation and clinical parameters", "Relationship between NO metabolites and clinical parameters", "Comparison of NO metabolites and waist circumference in the association of visceral fat accumulation", "Detection of NO generation and iNOS expression in human visceral adipose cells", "Discussion", "Conclusions" ]
[ "Recently, rapid socioeconomic growth has led to a more sedentary lifestyle and change in diet over the past several decades in developed nations. As a result of economic prosperity, obesity has become a serious health problem in several Western countries. Although the prevalence of obesity in Asian populations is lower than that in Westerners, it has been reported that the health risks associated with obesity in Asians are observed at lower body mass index (BMI) [1] as compared to Westerners.\nMany studies have demonstrated a close relationship between body fat accumulation and occurrence of metabolic syndrome [2,3]. Specifically, excessive accumulation of abdominal adipose tissue, especially intra-abdominal visceral fat, leads to obesity-related complications [4,5]. The metabolic syndrome, a cluster of glucose intolerance, hypertension, and dyslipidemia with visceral fat accumulation, is a common health problem in developed nations [6–8].\nA simple obesity index is needed to monitor the development of obesity because of the worldwide increase in obesity [9,10]. In the mid-1990s, waist circumference was used as an independent risk indicator to identify individuals who needed weight management [11–13]. According to the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI) criteria, waist circumference of 40 inches or more for men and 35 inches or more for women is a risk factor for development of metabolic syndrome. At the annual meeting of the Japanese Society of Internal Medicine in April 2005, one of the diagnostic criteria for metabolic syndrome is abdominal circumference of 85 cm (33.46 inches) for men and 90 cm (35.43 inches) for women, measured at the umbilical level [14]. However, subcutaneous and visceral fat cannot be differentially evaluated by measuring only abdominal circumference. Therefore, a criterion was set for measuring the area of visceral fat (more than 100 cm2) by computed tomography (CT) at the umbilical level [15]. However, CT cannot be used frequently because of the risk from radiation exposure.\nWe therefore considered serum nitric oxide (NO) level for use in the evaluation of abdominal visceral fat accumulation, instead of CT and waist circumference. NO is a vasodilator with a short half-life and acts within a small distance, and is thus ideally suited to act as a tissue hormone [16,17]. It is thought that NO is involved in adipose tissue biology by influencing adipogenesis, insulin-stimulated glucose uptake, and lipolysis [16]. The enzymes for NO generation in adipose cells are endothelial NO synthase (eNOS) and inducible NO synthase (iNOS) [18–20]. It has been reported that NO inhibits the proliferation, but stimulates the expression, of 2 adipogenic marker genes – peroxisome proliferator-activated receptor γ and uncoupling protein 1, in vitro in rat brown preadipocytes [21]. Lipid accumulation and lipogenic enzymes are also induced by NO in rat white preadipocytes [22]. In addition, it is reported that insulin-stimulated glucose uptake in rat white adipose tissue is dependent on NO synthesis in vivo [23]. Basal as well as catecholamine-stimulated lipolysis is inhibited by NO in human and rat subcutaneous adipose tissue depots [24–27].\nCytokine-dependent regulation of iNOS has also been reported in fat cells [28]. Based on these findings, NO appears to be an important mediator for adipocyte physiology. 
Thus, we investigated the relationship between abdominal visceral fat accumulation and serum NO level, and showed the clinical significance of measurement of NO metabolites in serum for the evaluation of fat accumulation.", "[SUBTITLE] Subjects [SUBSECTION] We evaluated the clinical indices in patients who were admitted to the Yokohama City University Hospital from 2006 to 2009. The protocol was reviewed and approved by the institutional ethics review committee. Informed consent was obtained from all subjects before examination. The study was carried out in 80 subjects admitted to our hospital, and was restricted to men and postmenopausal women to eliminate the influence of pregnancy and female hormone replacement therapy. The study was also restricted to persons over 20 years of age to reduce the possible confounding effect of growth and development. A total of 80 Japanese subjects (38 men and 42 postmenopausal women, 58.0±7.8 years, BMI 24.6±1.1 kg/m2) were evaluated in this study (Table 1). The proportions of subjects with history of diabetes mellitus, hyperlipidemia, and hypertension were 25.6%, 24.3%, and 22.7%, respectively.\n[SUBTITLE] Measurement for various indices [SUBSECTION] Venous blood samples were obtained after the patients had fasted overnight (12 hours), for measurement of the serum ALT, glucose, insulin, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, Fe, Ferritin, high-sensitive C-reactive protein (hsCRP), type IV collagen 7s domain, and hyaluronic acid. The serum insulin levels were measured by radioimmunoassay, while the other laboratory biochemical parameters were measured in a conventional automated analyzer.\nAs NO itself is unstable in in vivo physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as indicators of NO level in blood [29]. Plasma samples (50 μl) were deproteinized by incubation with 140 μl of deionized H2O and 10 μl of 30% ZnSO4 at room temperature for 15 min, and the samples were centrifuged at 2000 g for 10 min. Nitrate was converted to nitrite using cadmium beads, and total nitrite as measured spectrophotometrically using a Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI).\nInsulin resistance was calculated by the homeostasis model assessment of insulin resistance (HOMA-IR) using the following formula: [fasting serum insulin (μU/ml) × fasting plasma glucose (mg/dl)/405]. However, the HOMA-IR was performed in only 72 subjects for whom the fasting plasma glucose was under 170 mg/dl, because HOMA-IR has been reported to be a suitable method for evaluating the presence of insulin resistance in patients only when the fasting glucose levels are under 170 mg/dl [30].\n[SUBTITLE] Circadian rhythm of NO metabolite levels in human [SUBSECTION] To examine circadian rhythm of NO metabolite levels in human, we measured NO metabolite levels 10 times a day (6:00 am, 8:00 am, 9:00 am, 12:00 pm, 2:00 pm, 3:00 pm, 6:00 pm, 8:00 pm, 9:00 pm, 6:00 am) for inpatients on whom liver biopsy was performed. Since certain foods are known to be rich sources of NO metabolites, all patients ate meals at the same times (7:00 am, 1:00 pm, 7:00 pm), and venous blood samples were obtained 1 hour before, 1 hour after, and 2 hours after eating a meal.\n[SUBTITLE] Detection of NO generation and iNOS expression in human visceral adipose cells [SUBSECTION] Human adipose cells were purchased from Cell Garage Co., Ltd. (Ishikari, Hokkaido, Japan) and cultured in human primary mesenterium visceral adipose cells in a 37°C incubator room. Cells were stimulated with obesity-associated hormones such as insulin, leptin, and angiotensin II, and supernatant for measurement of NO metabolite, and samples for Western blot analysis were prepared. NO metabolite in supernatant was measured by Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI). The samples for Western blot analysis (25 μg of protein/lane) were separated by SDS-polyacrylamide gel electrophoresis (PAGE), and transferred to a nitrocellulose membrane using an electroblotting transfer apparatus. The nitrocellulose membranes were incubated in blocking buffer (10 mM Tris, 100 mM NaCl, 0.1% Tween 20, and 5% nonfat milk) overnight at 4°C. Thereafter, the membranes were incubated with rabbit polyclonal anti-iNOS antibody (Cayman Chemical, Ann Arbor, MI) for 90 min at room temperature. The membranes were washed 3 times for 10 min each in washing buffer, and incubated with the secondary antibody diluted in blocking buffer (anti-rabbit peroxidase-conjugated antibody) for 60 min at room temperature. The signals were then detected by ECL-plus Western blotting starter kit (GE Healthcare UK Ltd., Amersham Place, Little Chalfont, Buckinghamshire, England) as described by the manufacture.\n[SUBTITLE] Anthropometry and abdominal fat distribution [SUBSECTION] Anthropometric measurements (height, weight and waist circumference [WC]) were performed in a standing position. The weight and height of the patients were measured with a calibrated scale after the patients removed their shoes and any heavy items of clothing. Body mass index (BMI) was calculated as weight divided by the square of height (kg/m2). WC at the umbilical level was measured with a non-stretch-able tape in the late exhalation phase while standing [31]. Abdominal fat distribution was determined using CT (computed tomography) scanning while the subjects were the supine position, in accordance with a previously described procedure [32]. Ordinary CT parameters were used, specifically, 120 kV and 200 mA, as well as a slice thickness of 5 mm, a scanning time of 2 seconds, and a field of view of 400 mm. The subcutaneous fat area (SFA) and intra-abdominal visceral fat area (VFA) were measured at the level of the umbilicus and determined by a standardized method with CT numbers. Briefly, a region of interest of the subcutaneous fat layer was defined by tracing its contour on each scan, and the attenuation range of CT numbers (in Hounsfield units) for fat tissue was calculated. A histogram for fat tissue was computed on the basis of mean attenuation ±2SD. Total and intraperitoneal tissue with attenuation within the mean ±2SD were considered to be the total fat area (TFA) and VFA, and the SFA was defined by subtracting the VFA from the TFA.\n[SUBTITLE] Definition of metabolic syndrome [SUBSECTION] Metabolic syndrome was defined according to the 2005 guidelines of the Japanese Society of Internal Medicine or the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI) criteria [33,34], and World Health Organization (WHO). In the Japanese guideline, which is similar to the IDF criteria [35,36], subjects with metabolic syndrome must have: abdominal obesity (defined as WC more than 85 cm in men or more than 90 cm in women [37]), plus any 2 of the following 3 factors: (1) dyslipidemia – hypertriglyceridemia (serum triglyceride concentration more than 150 mg/dl {1.69 mmol/L}) and/or low HDL-cholesterol (serum concentration less than 40 mg/dl {1.04 mmol/L}); (2) hypertension – systolic blood pressure more than 130 mmHg and/or diastolic blood pressure more than 85 mmHg; (3) high fasting glucose – serum glucose concentration more than 110 mg/dl {6.1 mmol/L}). In the present study, abdominal obesity was defined as WC more than 85 cm in men or more than 90 cm in women as reported for the Japanese population [37].\n[SUBTITLE] Assessment of hepatic fat content [SUBSECTION] The degree of liver steatosis was measured by CT. Previous studies have shown a strong correlation between the CT attenuation values of the liver and the extent of fatty infiltration as measured by biopsy [38,39]. The ratio of the CT attenuation value of the liver to that of the spleen (L/S ratio) was used for quantitative estimation of the hepatic fat content, with an L/S ratio of <1 being considered to represent fatty liver [37].\n[SUBTITLE] Statistical analysis [SUBSECTION] All the data were expressed as means ±SD. The relationship between any 2 variables was analyzed by standard correlation analysis conducted using the StatView version 5.0 software (SAS, Cary, NC). ANOVA was applied for group comparisons, followed by Student’s t test for confirmation of the statistical associations. The relationship between the L/S ratio, VFA, or SFA and other relevant covariates was examined by multiple regression analysis and determination of the standardized correlation coefficients. Statistical significance was assumed when the P value was <0.05.", "We evaluated the clinical indices in patients who were admitted to the Yokohama City University Hospital from 2006 to 2009. The protocol was reviewed and approved by the institutional ethics review committee. Informed consent was obtained from all subjects before examination. The study was carried out in 80 subjects admitted to our hospital, and was restricted to men and postmenopausal women to eliminate the influence of pregnancy and female hormone replacement therapy. The study was also restricted to persons over 20 years of age to reduce the possible confounding effect of growth and development.
A total of 80 Japanese subjects (38 men and 42 postmenopausal women, 58.0±7.8 years, BMI 24.6±1.1 kg/m2) were evaluated in this study (Table 1). The proportions of subjects with history of diabetes mellitus, hyperlipidemia, and hypertension were 25.6%, 24.3%, and 22.7%, respectively.", "Venous blood samples were obtained after the patients had fasted overnight (12 hours), for measurement of the serum ALT, glucose, insulin, total cholesterol, LDL-cholesterol, HDL-cholesterol, triglyceride, Fe, Ferritin, high-sensitive C-reactive protein (hsCRP), type IV collagen 7s domain, and hyaluronic acid. The serum insulin levels were measured by radioimmunoassay, while the other laboratory biochemical parameters were measured in a conventional automated analyzer.\nAs NO itself is unstable in in vivo physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as indicators of NO level in blood [29]. Plasma samples (50 μl) were deproteinized by incubation with 140 μl of deionized H2O and 10 μl of 30% ZnSO4 at room temperature for 15 min, and the samples were centrifuged at 2000 g for 10 min. Nitrate was converted to nitrite using cadmium beads, and total nitrite as measured spectrophotometrically using a Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI).\nInsulin resistance was calculated by the homeostasis model assessment of insulin resistance (HOMA-IR) using the following formula: [fasting serum insulin (μU/ml) × fasting plasma glucose (mg/dl)/405]. However, the HOMA-IR was performed in only 72 subjects for whom the fasting plasma glucose was under 170 mg/dl, because HOMA-IR has been reported to be a suitable method for evaluating the presence of insulin resistance in patients only when the fasting glucose levels are under 170 mg/dl [30].", "To examine circadian rhythm of NO metabolite levels in human, we measured NO metabolite levels 10 times a day (6:00 am, 8:00 am, 9:00 am, 12:00 pm, 2:00 pm, 3:00 pm, 6:00 pm, 8:00 pm, 9:00 pm, 6:00 am) for inpatients on whom liver biopsy was performed. Since certain foods are known to be rich sources of NO metabolites, all patients ate meals at the same times (7:00 am, 1:00 pm, 7:00 pm), and venous blood samples were obtained 1 hour before, 1 hour after, and 2 hours after eating a meal.", "Human adipose cells were purchased from Cell Garage Co., Ltd. (Ishikari, Hokkaido, Japan) and cultured in human primary mesenterium visceral adipose cells in a 37°C incubator room. Cells were stimulated with obesity-associated hormones such as insulin, leptin, and angiotensin II, and supernatant for measurement of NO metabolite, and samples for Western blot analysis were prepared. NO metabolite in supernatant was measured by Nitrate/Nitrite Colorimetric assay kit (Cayman Chemical, Ann Arbor, MI). The samples for Western blot analysis (25 μg of protein/lane) were separated by SDS-polyacrylamide gel electrophoresis (PAGE), and transferred to a nitrocellulose membrane using an electroblotting transfer apparatus. The nitrocellulose membranes were incubated in blocking buffer (10 mM Tris, 100 mM NaCl, 0.1% Tween 20, and 5% nonfat milk) overnight at 4°C. Thereafter, the membranes were incubated with rabbit polyclonal anti-iNOS antibody (Cayman Chemical, Ann Arbor, MI) for 90 min at room temperature. The membranes were washed 3 times for 10 min each in washing buffer, and incubated with the secondary antibody diluted in blocking buffer (anti-rabbit peroxidase-conjugated antibody) for 60 min at room temperature. 
The signals were then detected by ECL-plus Western blotting starter kit (GE Healthcare UK Ltd., Amersham Place, Little Chalfont, Buckinghamshire, England) as described by the manufacture.", "Anthropometric measurements (height, weight and waist circumference [WC]) were performed in a standing position. The weight and height of the patients were measured with a calibrated scale after the patients removed their shoes and any heavy items of clothing. Body mass index (BMI) was calculated as weight divided by the square of height (kg/m2). WC at the umbilical level was measured with a non-stretch-able tape in the late exhalation phase while standing [31]. Abdominal fat distribution was determined using CT (computed tomography) scanning while the subjects were the supine position, in accordance with a previously described procedure [32]. Ordinary CT parameters were used, specifically, 120 kV and 200 mA, as well as a slice thickness of 5 mm, a scanning time of 2 seconds, and a field of view of 400 mm. The subcutaneous fat area (SFA) and intra-abdominal visceral fat area (VFA) were measured at the level of the umbilicus and determined by a standardized method with CT numbers. Briefly, a region of interest of the subcutaneous fat layer was defined by tracing its contour on each scan, and the attenuation range of CT numbers (in Hounsfield units) for fat tissue was calculated. A histogram for fat tissue was computed on the basis of mean attenuation ±2SD. Total and intraperitoneal tissue with attenuation within the mean ±2SD were considered to be the total fat area (TFA) and VFA, and the SFA was defined by subtracting the VFA from the TFA.", "Metabolic syndrome was defined according to the 2005 guidelines of the Japanese Society of Internal Medicine or the American Heart Association/National Heart, Lung, and Blood Institute (AHA/NHLBI) criteria [33,34], and World Health Organization (WHO). In the Japanese guideline, which is similar to the IDF criteria [35,36], subjects with metabolic syndrome must have: abdominal obesity (defined as WC more than 85 cm in men or more than 90 cm in women [37]), plus any 2 of the following 3 factors: (1) dyslipidemia – hypertriglyceridemia (serum triglyceride concentration more than 150 mg/dl {1.69 mmol/L}) and/or low HDL-cholesterol (serum concentration less than 40 mg/dl {1.04 mmol/L}); (2) hypertension – systolic blood pressure more than 130 mmHg and/or diastolic blood pressure more than 85 mmHg; (3) high fasting glucose – serum glucose concentration more than 110 mg/dl {6.1 mmol/L}). In the present study, abdominal obesity was defined as WC more than 85 cm in men or more than 90 cm in women as reported for the Japanese population [37].", "The degree of liver steatosis was measured by CT. Previous studies have shown a strong correlation between the CT attenuation values of the liver and the extent of fatty infiltration as measured by biopsy [38,39]. The ratio of the CT attenuation value of the liver to that of the spleen (L/S ratio) was used for quantitative estimation of the hepatic fat content, with an L/S ratio of <1 being considered to represent fatty liver [37].", "All the data were expressed as means ±SD. The relationship between any 2 variables was analyzed by standard correlation analysis conducted using the StatView version 5.0 software (SAS, Cary, NC). ANOVA was applied for group comparisons, followed by Student’s t test for confirmation of the statistical associations. 
The relationship between the L/S ratio, VFA, or SFA and other relevant covariates was examined by multiple regression analysis and determination of the standardized correlation coefficients. Statistical significance was assumed when the P value was <0.05.", "[SUBTITLE] Characteristic of subjects [SUBSECTION] Characteristics of study subjects by sex are presented in Table 1. Of the 80 total subjects, 38 were men and 42 were women. The mean BMI was 24.2±1.0 kg/m2 in men, and 25.0±1.1 kg/m2 in women. There was no significant difference between sexes for BMI, L/S ratio, all serum markers, existence of obesity-related disorder and total abdominal fat area. However, the mean subcutaneous fat area and waist circumference were higher in women (163.2±23.1 cm2, 84.5±0.8 cm, respectively) than in men (145.8±10.4 cm2, 88.2±3.6 cm), although the mean visceral fat area in men was higher (104.3±3.1 cm2) than in women (96.2±15.4 cm2) (p<0.05).\nCharacteristics of study subjects by sex are presented in Table 1. Of the 80 total subjects, 38 were men and 42 were women. The mean BMI was 24.2±1.0 kg/m2 in men, and 25.0±1.1 kg/m2 in women. There was no significant difference between sexes for BMI, L/S ratio, all serum markers, existence of obesity-related disorder and total abdominal fat area. However, the mean subcutaneous fat area and waist circumference were higher in women (163.2±23.1 cm2, 84.5±0.8 cm, respectively) than in men (145.8±10.4 cm2, 88.2±3.6 cm), although the mean visceral fat area in men was higher (104.3±3.1 cm2) than in women (96.2±15.4 cm2) (p<0.05).\n[SUBTITLE] Stability of NO metabolites [SUBSECTION] As NO is an unstable vasodilator with a short half-life in physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as NO level in blood [27]. To confirm the stability of NO metabolites after sample collection, we examined the stability of NO metabolites under the condition of −80°C for 7 days, and room temperature for 24 hours. As shown in Supplemental Figures 1 and 2, NO metabolites were stable under the storage condition of −80°C for 7 days and room temperature for 24 hours.\nAs NO is an unstable vasodilator with a short half-life in physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as NO level in blood [27]. To confirm the stability of NO metabolites after sample collection, we examined the stability of NO metabolites under the condition of −80°C for 7 days, and room temperature for 24 hours. As shown in Supplemental Figures 1 and 2, NO metabolites were stable under the storage condition of −80°C for 7 days and room temperature for 24 hours.\n[SUBTITLE] Circadian rhythm of NO metabolite levels in humans [SUBSECTION] Since certain foods are known to be rich sources of NO metabolites, we measured NO metabolite levels 10 times a day to examine circadian rhythm and the influence of eating a meal. Nevertheless, serum NO metabolite levels were increased and the influence of a meal was clearly recognized 2 hours after eating a meal in all 5 patients (Figure 1); serum NO metabolite levels were also waning 2 hours after eating a meal, and fasting serum NO metabolite levels were quite stable (Figure 1).\nSince certain foods are known to be rich sources of NO metabolites, we measured NO metabolite levels 10 times a day to examine circadian rhythm and the influence of eating a meal. 
Nevertheless, serum NO metabolite levels were increased and the influence of a meal was clearly recognized 2 hours after eating a meal in all 5 patients (Figure 1); serum NO metabolite levels were also waning 2 hours after eating a meal, and fasting serum NO metabolite levels were quite stable (Figure 1).\n[SUBTITLE] Comparison of serum NO metabolite levels under 100 cm2 and over 100 cm2 of visceral fat area [SUBSECTION] We compared serum NO metabolite levels of the group with visceral fat area under 100 cm2 to the group that had visceral fat area over 100 cm2. Significant elevation of serum NO metabolites (nitrate/nitrite) in obese subjects (visceral fat area over 100 cm2) were observed in comparison to that in non-obese subjects (visceral fat area under 100 cm2) (Figure 2).\nWe compared serum NO metabolite levels of the group with visceral fat area under 100 cm2 to the group that had visceral fat area over 100 cm2. Significant elevation of serum NO metabolites (nitrate/nitrite) in obese subjects (visceral fat area over 100 cm2) were observed in comparison to that in non-obese subjects (visceral fat area under 100 cm2) (Figure 2).\n[SUBTITLE] Relationship between visceral fat accumulation and clinical parameters [SUBSECTION] The relationship between visceral fat area and the other factors is shown in Figure 3. The visceral fat area was correlated with: BMI (r=0.652, p<0.0001; Figure 3A); body weight (r=0.677, p<0.0001; Figure 3B); WC (r=0.743, p<0.0001; Figure 3C); hsCRP (r=0.535, p<0.0001; Figure 3D); serum insulin concentration (r=0.485, p<0.0001; Figure 3E); and NO metabolite (r=0.743, p<0.0001; Figure 3F). WC and NO metabolites were closely correlated with visceral fat area. Multiple regression analysis was performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [BMI, body weight, WC, hsCRP, serum insulin concentration, NO metabolites]) on the visceral fat area. The results shown in Table 2 indicate that the WC and NO metabolites, but not other factors, were significantly related to the visceral fat area.\nThe relationship between visceral fat area and the other factors is shown in Figure 3. The visceral fat area was correlated with: BMI (r=0.652, p<0.0001; Figure 3A); body weight (r=0.677, p<0.0001; Figure 3B); WC (r=0.743, p<0.0001; Figure 3C); hsCRP (r=0.535, p<0.0001; Figure 3D); serum insulin concentration (r=0.485, p<0.0001; Figure 3E); and NO metabolite (r=0.743, p<0.0001; Figure 3F). WC and NO metabolites were closely correlated with visceral fat area. Multiple regression analysis was performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [BMI, body weight, WC, hsCRP, serum insulin concentration, NO metabolites]) on the visceral fat area. The results shown in Table 2 indicate that the WC and NO metabolites, but not other factors, were significantly related to the visceral fat area.\n[SUBTITLE] Relationship between NO metabolites and clinical parameters [SUBSECTION] Next, we examined the relationship between NO metabolites in serum and other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K), and HOMA-IR (r=0.273, p=0.0264; Figure 3L). In particular, the visceral fat area was closely correlated with the NO metabolites. 
Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [FA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites. The results shown in Table 2 indicate that only the visceral fat area, but not other factors, were significantly related to the NO metabolites.\nNext, we examined the relationship between NO metabolites in serum and other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K), and HOMA-IR (r=0.273, p=0.0264; Figure 3L). In particular, the visceral fat area was closely correlated with the NO metabolites. Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [FA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites. The results shown in Table 2 indicate that only the visceral fat area, but not other factors, were significantly related to the NO metabolites.\n[SUBTITLE] Comparison of NO metabolites and waist circumference in the association of visceral fat accumulation [SUBSECTION] As our results indicated that visceral fat area was strongly correlated with WC and NO metabolites, we compared the NO metabolites and waist circumference for the association with visceral fat area. As shown in Table 3, any of 3 criteria of metabolic syndrome respectively indicated another waist circumference. We compared NO metabolite with each of the 3 criteria of WC. The cut-off value for NO metabolite, which corresponds to VFA 100 cm2 using the ROC curve, was 21.0 μmol/ml (sensitivity 88%, specificity 82%). Using the same method, the cut-off value for WC of 2005 guidelines of the Japanese Society was 85 cm for men and 90 cm for women (sensitivity 75%, specificity 90%), WHO was 37 inches for men and women (sensitivity 56%, specificity 95%), and AHA was 40 inches for men and 35 inches for women (sensitivity 50%, specificity 98%) (Table 3).\nAs our results indicated that visceral fat area was strongly correlated with WC and NO metabolites, we compared the NO metabolites and waist circumference for the association with visceral fat area. As shown in Table 3, any of 3 criteria of metabolic syndrome respectively indicated another waist circumference. We compared NO metabolite with each of the 3 criteria of WC. The cut-off value for NO metabolite, which corresponds to VFA 100 cm2 using the ROC curve, was 21.0 μmol/ml (sensitivity 88%, specificity 82%). Using the same method, the cut-off value for WC of 2005 guidelines of the Japanese Society was 85 cm for men and 90 cm for women (sensitivity 75%, specificity 90%), WHO was 37 inches for men and women (sensitivity 56%, specificity 95%), and AHA was 40 inches for men and 35 inches for women (sensitivity 50%, specificity 98%) (Table 3).\n[SUBTITLE] Detection of NO generation and iNOS expression in human visceral adipose cells [SUBSECTION] We confirmed that visceral adipose cells express NO synthase and can generate NO by stimulation with obesity-associated hormones such as insulin, leptin, and angiotensin II. As shown in Figure 4A, expression of iNOS protein was observed in cultured human visceral adipose cells by the stimulation of obesity-associated hormones. 
In addition, increase in NO generation from cultured human visceral adipose cells by the stimulation of obesity-associated hormones was also observed (Figure 4B). These results indicate that the majority of elevated serum NO was generated from visceral fat.\nWe confirmed that visceral adipose cells express NO synthase and can generate NO by stimulation with obesity-associated hormones such as insulin, leptin, and angiotensin II. As shown in Figure 4A, expression of iNOS protein was observed in cultured human visceral adipose cells by the stimulation of obesity-associated hormones. In addition, increase in NO generation from cultured human visceral adipose cells by the stimulation of obesity-associated hormones was also observed (Figure 4B). These results indicate that the majority of elevated serum NO was generated from visceral fat.", "Characteristics of study subjects by sex are presented in Table 1. Of the 80 total subjects, 38 were men and 42 were women. The mean BMI was 24.2±1.0 kg/m2 in men, and 25.0±1.1 kg/m2 in women. There was no significant difference between sexes for BMI, L/S ratio, all serum markers, existence of obesity-related disorder and total abdominal fat area. However, the mean subcutaneous fat area and waist circumference were higher in women (163.2±23.1 cm2, 84.5±0.8 cm, respectively) than in men (145.8±10.4 cm2, 88.2±3.6 cm), although the mean visceral fat area in men was higher (104.3±3.1 cm2) than in women (96.2±15.4 cm2) (p<0.05).", "As NO is an unstable vasodilator with a short half-life in physiological condition, we therefore measured serum NO metabolites (Nitrate/Nitrite) as NO level in blood [27]. To confirm the stability of NO metabolites after sample collection, we examined the stability of NO metabolites under the condition of −80°C for 7 days, and room temperature for 24 hours. As shown in Supplemental Figures 1 and 2, NO metabolites were stable under the storage condition of −80°C for 7 days and room temperature for 24 hours.", "Since certain foods are known to be rich sources of NO metabolites, we measured NO metabolite levels 10 times a day to examine circadian rhythm and the influence of eating a meal. Nevertheless, serum NO metabolite levels were increased and the influence of a meal was clearly recognized 2 hours after eating a meal in all 5 patients (Figure 1); serum NO metabolite levels were also waning 2 hours after eating a meal, and fasting serum NO metabolite levels were quite stable (Figure 1).", "We compared serum NO metabolite levels of the group with visceral fat area under 100 cm2 to the group that had visceral fat area over 100 cm2. Significant elevation of serum NO metabolites (nitrate/nitrite) in obese subjects (visceral fat area over 100 cm2) were observed in comparison to that in non-obese subjects (visceral fat area under 100 cm2) (Figure 2).", "The relationship between visceral fat area and the other factors is shown in Figure 3. The visceral fat area was correlated with: BMI (r=0.652, p<0.0001; Figure 3A); body weight (r=0.677, p<0.0001; Figure 3B); WC (r=0.743, p<0.0001; Figure 3C); hsCRP (r=0.535, p<0.0001; Figure 3D); serum insulin concentration (r=0.485, p<0.0001; Figure 3E); and NO metabolite (r=0.743, p<0.0001; Figure 3F). WC and NO metabolites were closely correlated with visceral fat area. Multiple regression analysis was performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [BMI, body weight, WC, hsCRP, serum insulin concentration, NO metabolites]) on the visceral fat area. 
The results shown in Table 2 indicate that the WC and NO metabolites, but not other factors, were significantly related to the visceral fat area.", "Next, we examined the relationship between NO metabolites in serum and other factors. As shown in Figure 3, the level of serum NO metabolites was correlated with the visceral fat area (r=0.743, p<0.0001; Figure 3G), WC (r=0.269, p=0.0228; Figure 3H), hsCRP (r=0.472, p=0.0002; Figure 3I), FBS (r=0.287, p=0.0190; Figure 3J), serum insulin concentration (r=0.262, p=0.0282; Figure 3K), and HOMA-IR (r=0.273, p=0.0264; Figure 3L). In particular, the visceral fat area was closely correlated with the NO metabolites. Multiple regression analysis was also performed to quantify the impact of the measured indices (age, sex, and 6 correlative factors [FA, WC, hsCRP, FBS, serum insulin concentration, HOMA-IR]) on the NO metabolites. The results shown in Table 2 indicate that only the visceral fat area, but not other factors, were significantly related to the NO metabolites.", "As our results indicated that visceral fat area was strongly correlated with WC and NO metabolites, we compared the NO metabolites and waist circumference for the association with visceral fat area. As shown in Table 3, any of 3 criteria of metabolic syndrome respectively indicated another waist circumference. We compared NO metabolite with each of the 3 criteria of WC. The cut-off value for NO metabolite, which corresponds to VFA 100 cm2 using the ROC curve, was 21.0 μmol/ml (sensitivity 88%, specificity 82%). Using the same method, the cut-off value for WC of 2005 guidelines of the Japanese Society was 85 cm for men and 90 cm for women (sensitivity 75%, specificity 90%), WHO was 37 inches for men and women (sensitivity 56%, specificity 95%), and AHA was 40 inches for men and 35 inches for women (sensitivity 50%, specificity 98%) (Table 3).", "We confirmed that visceral adipose cells express NO synthase and can generate NO by stimulation with obesity-associated hormones such as insulin, leptin, and angiotensin II. As shown in Figure 4A, expression of iNOS protein was observed in cultured human visceral adipose cells by the stimulation of obesity-associated hormones. In addition, increase in NO generation from cultured human visceral adipose cells by the stimulation of obesity-associated hormones was also observed (Figure 4B). These results indicate that the majority of elevated serum NO was generated from visceral fat.", "In the present study, we demonstrated marked elevation of serum NO metabolites (nitrate/nitrite) in obese subjects in comparison to that in non-obese subjects, even after adjustment for age, sex, waist circumference, hsCRP, BMI, subcutaneous fat area, and insulin resistance. We clearly showed that NO metabolites, but not other factors, were significantly related to the visceral fat area. This is the first report to demonstrate a strong association between visceral fat area and serum levels of NO metabolites.\nWC is currently widely used for estimation of intra-abdominal visceral fat area. The increase in WC has been demonstrated to be a good index of cardiovascular disease [40] and an important diagnostic marker for metabolic syndrome. Although CT scanning and magnetic resonance imaging are used for precise measurements of visceral fat [41,42], simple anthropometric measurements for estimating the amount of visceral fat are essential for simple screening of the general population. 
On the basis of prior studies, WC has been shown to directly reflect abdominal fat mass [43]. WHO initially proposed a definition for metabolic syndrome in 1998 [44], and AHA proposed useful criteria for the diagnosis of the metabolic syndrome in 2001 [45]. However, the measurements offered for cut-off criteria for abdominal circumference of more than 94 cm in men and women by WHO, and more than 102 cm in men and more than 88 cm in women by AHA for abdominal obesity, are based on Westerners’ criteria. Therefore, based on these criteria, the prevalence of metabolic syndrome in the Asian population is very low [46], because Asians have higher risk of diabetes and/or metabolic syndrome even though they have lower fat measurement. Asian populations have been shown to have a higher body fat deposition at a lower BMI than Westerners [47–49]. Abdominal obesity promotes insulin resistance and leads to metabolic syndrome [50–52]. Therefore, Asians have higher glucose intolerance and cardiovascular risk factors compared to Westerners at a relatively lower BMI [53]. In 2002, a study of 1193 Japanese subjects revealed that WC measurement that equated with a visceral fat area of 100 cm2 was a useful cut-off value for the prediction of patients with more than 1 clinical diagnosis (dyslipidemia, hyperglycemia and hypertension). This 100 cm2 value is observed in more than 70% of those Japanese patients identified with coronary heart disease. Therefore, Japanese guidelines have established a cutoff point for waist circumference to be 85 cm in men and 90 cm in women [54].\nBased on such reports, it is suggested that WC that corresponds to VFA 100 cm2 is extremely different among races and between the sexes. Moreover, visceral and subcutaneous fat obesity cannot be clearly distinguished, and the intraobserver and interobserver variability for waist circumference were higher than those for body mass index [55]. Abdominal visceral fat area can be evaluated by CT; however, CT cannot be used frequently because of the risk of radiation exposure. Thus, a simple, safe, convenient and reliable marker for the evaluation of visceral fat accumulation is required in clinical diagnostic screening. We therefore focused on serum NO level for the evaluation of abdominal visceral fat accumulation, instead of CT and waist circumference.", "NO is a vasodilator with a short half-life and acts within a short distance, and is thus ideally suited to act as a tissue hormone [16,17]. In our preliminary experiments, production of NO metabolites was increased in visceral adipose tissue in rats [56]. In the present study we clearly show that obesity-associated hormones such as insulin, leptin, and angiotensin II regulate the production of NO in human cultured visceral adipose cells. These results indicate that visceral fat might be an important source of NO in humans. We suggest that measurement of serum NO metabolites may be a good biomarker for the evaluation of visceral fat accumulation. In the present study, we clearly demonstrated the marked elevation of serum NO metabolites in obese subjects in comparison to non-obese subjects. Our data also indicate that the marked elevation in serum NO metabolites were closely correlated with visceral fat area. 
Therefore, the monitoring of serum NO metabolite levels may be a more useful, safer, and more cost-effective biomarker for the diagnosis of visceral fat accumulation than CT and WC.\nSince certain foods, especially processed meats, are known to be rich sources of NO metabolites [57], a long-term diet rich in NO metabolites could affect the serum NO metabolite level, and care must be taken to avoid those foods before measuring serum NO metabolite level.\nIn conclusion, serum NO metabolites are significantly related to visceral fat accumulation. The monitoring of serum NO metabolite levels may be a simple, safe, convenient and reliable biomarker for the evaluation of visceral fat accumulation in clinical diagnostic screening." ]
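Editor's note: the record above reports a serum NO-metabolite cut-off of 21.0 μmol/ml corresponding to a visceral fat area of 100 cm2, derived from an ROC curve with the stated sensitivity and specificity. The snippet below is a minimal sketch of how such a cut-off is commonly derived (maximizing the Youden index); it is not the authors' code, and all data, variable names, and the simulated relationship between VFA and NO metabolites are hypothetical.

```python
# Minimal sketch (synthetic data, not the authors' analysis): deriving an
# ROC-based cut-off for a continuous marker (e.g., serum NO metabolites)
# against a binary label (visceral fat area >= 100 cm2) via the Youden index.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 80
vfa = rng.normal(100, 30, n)                 # hypothetical visceral fat areas (cm2)
no_met = 0.15 * vfa + rng.normal(0, 3, n)    # hypothetical NO metabolite levels
label = (vfa >= 100).astype(int)             # "obese" defined as VFA >= 100 cm2

fpr, tpr, thresholds = roc_curve(label, no_met)
youden = tpr - fpr                           # Youden index J = sensitivity + specificity - 1
best = np.argmax(youden)
print(f"cut-off = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```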
[ null, "materials|methods", "subjects", null, null, null, null, null, null, "methods", "results", "subjects", null, null, null, null, null, null, null, "discussion", "conclusions" ]
[ "visceral fat", "obesity", "NO metabolite", "waist scale", "metabolic syndrome" ]
Excitatory repetitive transcranial magnetic stimulation induces improvements in chronic post-stroke aphasia.
21358599
Aphasia affects 1/3 of stroke patients with improvements noted only in some of them. The goal of this exploratory study was to provide preliminary evidence regarding safety and efficacy of fMRI-guided excitatory repetitive transcranial magnetic stimulation (rTMS) applied to the residual left-hemispheric Broca's area for chronic aphasia treatment.
BACKGROUND
We enrolled 8 patients with moderate or severe aphasia >1 year after LMCA stroke. Linguistic battery was administered pre-/post-rTMS; a semantic decision/tone decision (SDTD) fMRI task was used to localize left-hemispheric Broca's area. RTMS protocol consisted of 10 daily treatments of 200 seconds each using an excitatory stimulation protocol called intermittent theta burst stimulation (iTBS). Coil placement was targeted individually to the left Broca's.
MATERIAL/METHODS
6/8 patients showed significant pre-/post-rTMS improvements in semantic fluency (p=0.028); they were able to generate more appropriate words when prompted with a semantic category. Pre-/post-rTMS fMRI maps showed increases in left fronto-temporo-parietal language networks with a significant left-hemispheric shift in the left frontal (p=0.025), left temporo-parietal (p=0.038) regions and global language LI (p=0.018). Patients tended to report subjective improvement on Communicative Activities Log (mini-CAL; p=0.075). None of the subjects reported ill effects of rTMS.
RESULTS
FMRI-guided, excitatory rTMS applied to the affected Broca's area improved language skills in patients with chronic post-stroke aphasia; these improvements correlated with increased language lateralization to the left hemisphere. This rTMS protocol appears to be safe and should be further tested in blinded studies assessing its short- and long-term safety/efficacy for post-stroke aphasia rehabilitation.
CONCLUSIONS
[ "Aphasia", "Brain Mapping", "Chronic Disease", "Demography", "Female", "Humans", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Stroke", "Transcranial Magnetic Stimulation" ]
3057942
null
null
Data analyses
Data were initially characterized using descriptive statistics (means and standard deviations or frequencies and percentages as appropriate). Next, the pre-/post-rTMS aphasia testing and fMRI lateralization indices were compared using the Wilcoxon signed-rank test. All data management and analyses were performed using SPSS version 18.0 (SPSS, Chicago, IL, USA).
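Editor's note: the analysis above compares paired pre-/post-rTMS scores with the Wilcoxon signed-rank test in SPSS. The snippet below is an equivalent minimal sketch in Python, not the authors' analysis; the eight paired semantic-fluency scores are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data, not the authors' SPSS output):
# Wilcoxon signed-rank test on paired pre-/post-rTMS scores for 8 subjects.
from scipy.stats import wilcoxon

pre_sft  = [8, 11, 5, 9, 13, 7, 10, 6]    # hypothetical semantic fluency, pre-rTMS
post_sft = [11, 14, 6, 12, 15, 9, 13, 7]  # hypothetical semantic fluency, post-rTMS

stat, p = wilcoxon(pre_sft, post_sft)      # paired, nonparametric comparison
print(f"Wilcoxon W = {stat}, p = {p:.3f}")
```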
Results
Eight subjects fulfilled the inclusion criteria and completed all study procedures (Table 1); an additional 3 subjects signed the informed consent but were later excluded due to metallic artifact (2) and a history of a single seizure at the time of stroke (1). All subjects were ≥1 year since the aphasia-producing stroke and were not engaged in any specific aphasia rehabilitation. Aphasia testing showed trends towards improvement on all performed tests except for COWAT (Table 1); improvements in semantic fluency were significant (p=0.028) and mini-CAL showed a trend towards post-rTMS communication improvements (p=0.075). No subjects reported any seizures or serious adverse events. MR images of individual strokes are presented in Figure 1. On these images, the individual activations that served as targets for nerTMS are clearly noted (enclosed in circles). There does not appear to be any overlap between the areas of activation and the location of the stroke. Types of individual aphasia, based on clinical impression, are provided in the legend. Group pre- and post-rTMS fMRI results are shown in Figure 2. Pre-rTMS scans showed areas of increased activation for the tone decision baseline in the right hemisphere and low-level activations predominantly in the left hemisphere. Post-rTMS group images showed a clear overall increase in the BOLD signal changes in response to the intervention. Further, pre-/post-rTMS contrasts showed significant overall increases in the BOLD signal with apparent shifts to the left hemisphere (p=0.025 for Broca’s; p=0.036 for Wernicke’s; p=0.018 for the global ROI). While the pre- and post-rTMS locations of the BOLD signal changes are similar and include language areas typical for this task (Table 2), post-rTMS images showed shifts in activations predominantly to the left hemispheric head regions (Figure 3).
Conclusions
In this study, we provide preliminary evidence that fMRI-guided, excitatory rTMS applied to the affected Broca’s area improves language skills in patients with chronic post-stroke aphasia. We also show that these linguistic gains are associated with stronger language lateralization to the dominant left hemisphere. Finally, in addition to the objective gains, patients tended to report improvements in their language skills after the intervention. Therefore, the rTMS protocol used in this study appears to be safe and should be further tested in blinded studies assessing its short- and long-term safety and efficacy for post-stroke aphasia rehabilitation.
[ "Background", "Functional MRI task and scanning procedures", "Functional MRI data post-processing and analyses", "Neuropsychological testing of aphasia", "Neuronavigated excitatory repetitive transcranial magnetic stimulation (nerTMS)" ]
[ "Aphasia, impaired communication ability that usually occurs in patients with left middle cerebral artery (LMCA) stroke, is associated with high mortality, significant motor impairment, and severe limitations in social participation [1–5]. While 1/3 of new strokes have aphasia as one of its symptoms, the progress of aphasia rehabilitation has not matched the great strides in developing acute treatment strategies that, in some patients, decreased or eliminated the stroke-related deficits. Further, the development of post-stroke aphasia treatments lags behind the development of therapies that alleviate motor deficits in patients with history of stroke such as constraint-induced motor therapy [6,7], mental practice [8], electrical stimulation of the affected extremity [9] or neurostimulation [10,11]. The current aphasia therapy approaches are based largely on compensatory strategies or repetitive training of lost functions rather than function restoration. While these therapies may lead to improvements in some patients, many continue to be aphasic despite interventions [12]. Further, while language recovery >1 year after stroke is thought to be less likely, studies showed that improvements even in patients with chronic aphasia are possible when they are provided with an intervention [13,14]. These patients need additional restorative therapies that will enable them to return to society as productive members. The question raised in this study is whether some form of repetitive transcranial magnetic stimulation (rTMS) may be one of those interventions [15]. TMS is a noninvasive method of stimulating neurons by inducing weak electric currents by electromagnetic induction. When applied in repetitive paradigms, synaptic plasticity can be altered to transiently increase or decrease localized cortical activities. Changes in the neuronal networks induced by TMS can persist well beyond the actual period of stimulation with improved language functions observed in some excitatory rTMS studies [16]. But, the safety and efficacy of the excitatory rTMS applied directly to the stroke/peri-stroke areas is unclear. Further, early reports of rTMS-related seizures have led to the development of safety rules regarding stimulation frequency, intensity, train duration and intertrain interval [17].\nNeuroimaging studies have revealed that post-stroke reorganization of language functions may be related to increased cortical involvement of the non-dominant [18,19] or dominant cortical areas [20–22]. Unfortunately, these discoveries have thus far not contributed to the development of specific aphasia treatments. Therefore, the primary goals of this study were to determine whether excitatory repetitive transcranial magnetic stimulation with fMRI guidance (neuronavigated rTMS; nerTMS) applied directly to the language area identified by fMRI is safe and whether it might facilitate rehabilitation from post-stroke aphasia and improve patient outcomes. We hypothesized that nerTMS will have positive impact on language skills evaluated with aphasia battery.", "FMRI procedures at 4T Varian Unity INOVA Whole Body MRI/MRS scanner have been previously described in detail [20,23,24]. Briefly, participants were oriented to the equipment and engaged in explicit practice of the tasks; they were allowed to enter the scanner only after expressing complete understanding of all procedures. All fMRIs were performed following the same protocol. 
First, an alignment scan was performed for head position adjustments so that the AC-PC reference line was as close as possible to the vertical axis of the scanner. A high resolution, T1-weighted 3-D MDEFT (Modified Driven Equilibrium Fourier Transform) anatomical image was obtained using the following parameters: TR/TE=13.1/6 ms, FOV=25.6×19.2×19.2 cm, flip angle=22°, and voxel dimensions=1×1×1 mm. Finally, T2*-weighted functional images were obtained using the following parameters: TR/TE=3000/30 ms, FOV=25.6×25.6 cm, matrix=64×64 pixels, number of slices=30, slice thickness=4 mm, and flip angle=75°.\nFor language localization we used a highly reliable semantic decision/tone decision (SDTD) task [20,25–28]. Briefly, subjects performed two different alternating conditions, the control condition (tones recognition) and the active condition (semantic recognition/association), starting with the control condition. In the tone decision task, subjects heard brief sequences of four to seven low- (500 Hz) and high-pitch (750 Hz) tones every 3.75 seconds and responded with a non-dominant hand button press for any sequence containing two high-pitched tones. In the semantic task, subjects heard spoken English nouns designating animals every 3.75 seconds and responded with a non-dominant hand button press to stimuli that meet two criteria: “lives in the United States” and “is commonly used by humans” (e.g., “horse”). The contrast between the tone discrimination task and the semantic task results in an activation pattern that is inherent only to semantic, word-form, and phoneme processing, all of which are part of the language system. Each subject performed this task twice during each scanning session and the runs were concatenated for processing.", "The fMRI image post-processing was performed using software developed in the Imaging Research Center at the Cincinnati Children’s Hospital Medical Center in the IDL software environment (IDL 7.0; Research Systems Inc., Boulder, CO) [27,28]. As previously, Hamming-filtering preceded data reconstruction and the geometric distortion correction using the multi-echo reference method. Data were then co-registered to further reduce the effects of motion artifact using a previously developed pyramid co-registration algorithm. A general linear model (GLM) was used to process the data with a set of cosine basis functions used to minimize artifacts due to signal drift from aliased respiratory and cardiac effects. Finally, z-score maps were computed on a pixel-by-pixel basis and transformed into Talairach space for composite mapping. Data in native space were utilized to localize left-hemispheric language centers for subsequent rTMS administration.\nTo parameterize the fMRI data, we used previously generated regions of interest (ROIs) from 49 healthy controls, to calculate the laterality indices (LI) in the frontal and temporo-parietal brain regions (see Figure 1 in Szaflarski et. al., 2008) [28]. These primary ROIs include the lateral inferior and middle frontal region that corresponds to Broca’s area and the lateral/posterior temporal and parietal region that corresponds to Wernicke’s area; these ROIs were superimposed on the corresponding right hemispheric homologues (right hemispheric ROIs mirrored the left hemispheric ones). 
We also combined both ROIs into one “global language ROI.” Next, to avoid biases induced by arbitrary thresholding schemes, we determined the mean value of the t-statistics for all voxels within each ROIs; the number of voxels above the mean for each ROI was entered in the following formula to calculate the lateralization index: LI = (NL−NR)/(NL+NR), where NL and NR represent the number of voxels (left and right, respectively) above the mean t for each ROI [24]. This calculation method produces LI ranging from 1 (left lateralized) to −1 (right lateralized).", "All subjects completed a previously described aphasia testing protocol [14]. A battery of neuropsychological measures was obtained on the day of fMRI or within 1 week of initiating the nerTMS and again, within 1 week of completing the nerTMS protocol on the same day as the follow-up fMRI. The battery included 1) Boston Naming Test (BNT), 2) Controlled Oral Word Association Test (COWAT), 3) Semantic Fluency Test (SFT), 4) Complex Ideation subtest from the Boston Diagnostic Aphasia Examination (BDAE) and 5) Peabody Picture Vocabulary Test IV (PPVT IV). For COWAT, SFT, and PPVT IV, different forms of the tests were administered pre-/post-nerTMS. All subjects also completed Communicative Abilities Log (mini-CAL), a subjective, self-administered measure of communicative abilities [14].", "Single-pulse TMS was performed to establish resting motor threshold (RMT) and active motor threshold (AMT) with a Magstim 200® stimulator connected through a Bistim® module to a 70 mm figure-8 coil (Magstim Co., Wales, UK) [29]. Surface electromyography (EMG) leads were placed over the first dorsal interosseous (FDI) muscle of both hands. Subjects were seated comfortably, with both arms fully supported on a pillow. Full muscle relaxation was maintained through visual and online EMG monitoring. The coil was then placed over the primary motor cortex in the right and left hemisphere at the optimal site for obtaining a MEP in the FDI. After the RMT and AMT were determined, iTBS was performed using Magstim Rapid2® (Magstim Co., Wales, UK) with intensity set at 80% of AMT obtained from the right hemisphere. Motor threshold values from the left hemisphere were too high due to the LMCA stroke; therefore, these values were not used to set the rTMS intensity. The figure-8 coil was positioned tangentially to the skull, with the handle parallel to the sagittal axis pointing occipitally. ITBS consisted of bursts of three pulses at 50 Hz given every 200 milliseconds in two second trains, repeated every 10 seconds over 200 seconds for a total of 600 pulses [30]. BrainSight™2 (Rogue Research Inc., Montreal, Canada) was used for neuronavigation to guide rTMS stimulation to the fMRI-identified residual left hemispheric Broca’s area of each individual patient. This technology also enabled reliable three-dimensionally precise reapplication of rTMS throughout the study. Subjects received one session of iTBS each day for 10 consecutive days from Monday through Friday." ]
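Editor's note: the methods text above defines the laterality index as LI = (NL − NR)/(NL + NR), where NL and NR count voxels whose t-value exceeds the mean t within the left and right ROIs. The sketch below is one reasonable reading of that procedure (thresholding at the mean t over the combined left+right ROI); the authors' exact thresholding and ROI definitions may differ, and the t-map and masks here are synthetic.

```python
# Minimal sketch (synthetic data): laterality index LI = (NL - NR)/(NL + NR),
# counting voxels above the mean t within homologous left/right ROIs.
# One interpretation of the method described above; not the authors' pipeline.
import numpy as np

rng = np.random.default_rng(3)
t_map = rng.normal(0, 1, size=(64, 64, 30))       # synthetic t-statistic volume
left_roi = np.zeros_like(t_map, dtype=bool)
right_roi = np.zeros_like(t_map, dtype=bool)
left_roi[10:30, 20:40, 10:20] = True              # toy left "Broca's" ROI
right_roi[34:54, 20:40, 10:20] = True             # mirrored right homologue
t_map[left_roi] += 0.3                            # simulate left-lateralized signal

def laterality_index(t_map, left_roi, right_roi):
    roi_mean = t_map[left_roi | right_roi].mean() # threshold: mean t over both ROIs
    nl = np.count_nonzero(t_map[left_roi] > roi_mean)
    nr = np.count_nonzero(t_map[right_roi] > roi_mean)
    return (nl - nr) / (nl + nr)                  # +1 fully left, -1 fully right

print(f"LI = {laterality_index(t_map, left_roi, right_roi):+.2f}")
```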
[ null, "methods", "methods", null, null ]
[ "Background", "Material and Methods", "Subjects", "Functional MRI task and scanning procedures", "Functional MRI data post-processing and analyses", "Neuropsychological testing of aphasia", "Neuronavigated excitatory repetitive transcranial magnetic stimulation (nerTMS)", "Data analyses", "Results", "Discussion", "Conclusions" ]
[ "Aphasia, impaired communication ability that usually occurs in patients with left middle cerebral artery (LMCA) stroke, is associated with high mortality, significant motor impairment, and severe limitations in social participation [1–5]. While 1/3 of new strokes have aphasia as one of its symptoms, the progress of aphasia rehabilitation has not matched the great strides in developing acute treatment strategies that, in some patients, decreased or eliminated the stroke-related deficits. Further, the development of post-stroke aphasia treatments lags behind the development of therapies that alleviate motor deficits in patients with history of stroke such as constraint-induced motor therapy [6,7], mental practice [8], electrical stimulation of the affected extremity [9] or neurostimulation [10,11]. The current aphasia therapy approaches are based largely on compensatory strategies or repetitive training of lost functions rather than function restoration. While these therapies may lead to improvements in some patients, many continue to be aphasic despite interventions [12]. Further, while language recovery >1 year after stroke is thought to be less likely, studies showed that improvements even in patients with chronic aphasia are possible when they are provided with an intervention [13,14]. These patients need additional restorative therapies that will enable them to return to society as productive members. The question raised in this study is whether some form of repetitive transcranial magnetic stimulation (rTMS) may be one of those interventions [15]. TMS is a noninvasive method of stimulating neurons by inducing weak electric currents by electromagnetic induction. When applied in repetitive paradigms, synaptic plasticity can be altered to transiently increase or decrease localized cortical activities. Changes in the neuronal networks induced by TMS can persist well beyond the actual period of stimulation with improved language functions observed in some excitatory rTMS studies [16]. But, the safety and efficacy of the excitatory rTMS applied directly to the stroke/peri-stroke areas is unclear. Further, early reports of rTMS-related seizures have led to the development of safety rules regarding stimulation frequency, intensity, train duration and intertrain interval [17].\nNeuroimaging studies have revealed that post-stroke reorganization of language functions may be related to increased cortical involvement of the non-dominant [18,19] or dominant cortical areas [20–22]. Unfortunately, these discoveries have thus far not contributed to the development of specific aphasia treatments. Therefore, the primary goals of this study were to determine whether excitatory repetitive transcranial magnetic stimulation with fMRI guidance (neuronavigated rTMS; nerTMS) applied directly to the language area identified by fMRI is safe and whether it might facilitate rehabilitation from post-stroke aphasia and improve patient outcomes. We hypothesized that nerTMS will have positive impact on language skills evaluated with aphasia battery.", "[SUBTITLE] Subjects [SUBSECTION] This exploratory study was approved by the Institutional Review Board and all subjects provided written informed consent. The inclusion criteria were: 1. LMCA distribution stroke ≥12 months prior to study participation, 2. moderate aphasia at time of enrollment, 3. no contraindication(s) to fMRI at 4T, 4. no history of seizures. 
All subjects were right-handed prior to stroke, as determined by an Edinburgh Handedness Inventory (EHI) score ≥50.\nThis exploratory study was approved by the Institutional Review Board and all subjects provided written informed consent. The inclusion criteria were: 1. LMCA distribution stroke ≥12 months prior to study participation, 2. moderate aphasia at time of enrollment, 3. no contraindication(s) to fMRI at 4T, 4. no history of seizures. All subjects were right-handed prior to stroke, as determined by an Edinburgh Handedness Inventory (EHI) score ≥50.\n[SUBTITLE] Functional MRI task and scanning procedures [SUBSECTION] FMRI procedures at 4T Varian Unity INOVA Whole Body MRI/MRS scanner have been previously described in detail [20,23,24]. Briefly, participants were oriented to the equipment and engaged in explicit practice of the tasks; they were allowed to enter the scanner only after expressing complete understanding of all procedures. All fMRIs were performed following the same protocol. First, an alignment scan was performed for head position adjustments so that the AC-PC reference line was as close as possible to the vertical axis of the scanner. A high resolution, T1-weighted 3-D MDEFT (Modified Driven Equilibrium Fourier Transform) anatomical image was obtained using the following parameters: TR/TE=13.1/6 ms, FOV=25.6×19.2×19.2 cm, flip angle=22°, and voxel dimensions=1×1×1 mm. Finally, T2*-weighted functional images were obtained using the following parameters: TR/TE=3000/30 ms, FOV=25.6×25.6 cm, matrix=64×64 pixels, number of slices=30, slice thickness=4 mm, and flip angle=75°.\nFor language localization we used a highly reliable semantic decision/tone decision (SDTD) task [20,25–28]. Briefly, subjects performed two different alternating conditions, the control condition (tones recognition) and the active condition (semantic recognition/association), starting with the control condition. In the tone decision task, subjects heard brief sequences of four to seven low- (500 Hz) and high-pitch (750 Hz) tones every 3.75 seconds and responded with a non-dominant hand button press for any sequence containing two high-pitched tones. In the semantic task, subjects heard spoken English nouns designating animals every 3.75 seconds and responded with a non-dominant hand button press to stimuli that meet two criteria: “lives in the United States” and “is commonly used by humans” (e.g., “horse”). The contrast between the tone discrimination task and the semantic task results in an activation pattern that is inherent only to semantic, word-form, and phoneme processing, all of which are part of the language system. Each subject performed this task twice during each scanning session and the runs were concatenated for processing.\nFMRI procedures at 4T Varian Unity INOVA Whole Body MRI/MRS scanner have been previously described in detail [20,23,24]. Briefly, participants were oriented to the equipment and engaged in explicit practice of the tasks; they were allowed to enter the scanner only after expressing complete understanding of all procedures. All fMRIs were performed following the same protocol. First, an alignment scan was performed for head position adjustments so that the AC-PC reference line was as close as possible to the vertical axis of the scanner. 
A high resolution, T1-weighted 3-D MDEFT (Modified Driven Equilibrium Fourier Transform) anatomical image was obtained using the following parameters: TR/TE=13.1/6 ms, FOV=25.6×19.2×19.2 cm, flip angle=22°, and voxel dimensions=1×1×1 mm. Finally, T2*-weighted functional images were obtained using the following parameters: TR/TE=3000/30 ms, FOV=25.6×25.6 cm, matrix=64×64 pixels, number of slices=30, slice thickness=4 mm, and flip angle=75°.\nFor language localization we used a highly reliable semantic decision/tone decision (SDTD) task [20,25–28]. Briefly, subjects performed two different alternating conditions, the control condition (tones recognition) and the active condition (semantic recognition/association), starting with the control condition. In the tone decision task, subjects heard brief sequences of four to seven low- (500 Hz) and high-pitch (750 Hz) tones every 3.75 seconds and responded with a non-dominant hand button press for any sequence containing two high-pitched tones. In the semantic task, subjects heard spoken English nouns designating animals every 3.75 seconds and responded with a non-dominant hand button press to stimuli that meet two criteria: “lives in the United States” and “is commonly used by humans” (e.g., “horse”). The contrast between the tone discrimination task and the semantic task results in an activation pattern that is inherent only to semantic, word-form, and phoneme processing, all of which are part of the language system. Each subject performed this task twice during each scanning session and the runs were concatenated for processing.\n[SUBTITLE] Functional MRI data post-processing and analyses [SUBSECTION] The fMRI image post-processing was performed using software developed in the Imaging Research Center at the Cincinnati Children’s Hospital Medical Center in the IDL software environment (IDL 7.0; Research Systems Inc., Boulder, CO) [27,28]. As previously, Hamming-filtering preceded data reconstruction and the geometric distortion correction using the multi-echo reference method. Data were then co-registered to further reduce the effects of motion artifact using a previously developed pyramid co-registration algorithm. A general linear model (GLM) was used to process the data with a set of cosine basis functions used to minimize artifacts due to signal drift from aliased respiratory and cardiac effects. Finally, z-score maps were computed on a pixel-by-pixel basis and transformed into Talairach space for composite mapping. Data in native space were utilized to localize left-hemispheric language centers for subsequent rTMS administration.\nTo parameterize the fMRI data, we used previously generated regions of interest (ROIs) from 49 healthy controls, to calculate the laterality indices (LI) in the frontal and temporo-parietal brain regions (see Figure 1 in Szaflarski et. al., 2008) [28]. These primary ROIs include the lateral inferior and middle frontal region that corresponds to Broca’s area and the lateral/posterior temporal and parietal region that corresponds to Wernicke’s area; these ROIs were superimposed on the corresponding right hemispheric homologues (right hemispheric ROIs mirrored the left hemispheric ones). 
We also combined both ROIs into one “global language ROI.” Next, to avoid biases induced by arbitrary thresholding schemes, we determined the mean value of the t-statistics for all voxels within each ROIs; the number of voxels above the mean for each ROI was entered in the following formula to calculate the lateralization index: LI = (NL−NR)/(NL+NR), where NL and NR represent the number of voxels (left and right, respectively) above the mean t for each ROI [24]. This calculation method produces LI ranging from 1 (left lateralized) to −1 (right lateralized).\nThe fMRI image post-processing was performed using software developed in the Imaging Research Center at the Cincinnati Children’s Hospital Medical Center in the IDL software environment (IDL 7.0; Research Systems Inc., Boulder, CO) [27,28]. As previously, Hamming-filtering preceded data reconstruction and the geometric distortion correction using the multi-echo reference method. Data were then co-registered to further reduce the effects of motion artifact using a previously developed pyramid co-registration algorithm. A general linear model (GLM) was used to process the data with a set of cosine basis functions used to minimize artifacts due to signal drift from aliased respiratory and cardiac effects. Finally, z-score maps were computed on a pixel-by-pixel basis and transformed into Talairach space for composite mapping. Data in native space were utilized to localize left-hemispheric language centers for subsequent rTMS administration.\nTo parameterize the fMRI data, we used previously generated regions of interest (ROIs) from 49 healthy controls, to calculate the laterality indices (LI) in the frontal and temporo-parietal brain regions (see Figure 1 in Szaflarski et. al., 2008) [28]. These primary ROIs include the lateral inferior and middle frontal region that corresponds to Broca’s area and the lateral/posterior temporal and parietal region that corresponds to Wernicke’s area; these ROIs were superimposed on the corresponding right hemispheric homologues (right hemispheric ROIs mirrored the left hemispheric ones). We also combined both ROIs into one “global language ROI.” Next, to avoid biases induced by arbitrary thresholding schemes, we determined the mean value of the t-statistics for all voxels within each ROIs; the number of voxels above the mean for each ROI was entered in the following formula to calculate the lateralization index: LI = (NL−NR)/(NL+NR), where NL and NR represent the number of voxels (left and right, respectively) above the mean t for each ROI [24]. This calculation method produces LI ranging from 1 (left lateralized) to −1 (right lateralized).\n[SUBTITLE] Neuropsychological testing of aphasia [SUBSECTION] All subjects completed a previously described aphasia testing protocol [14]. A battery of neuropsychological measures was obtained on the day of fMRI or within 1 week of initiating the nerTMS and again, within 1 week of completing the nerTMS protocol on the same day as the follow-up fMRI. The battery included 1) Boston Naming Test (BNT), 2) Controlled Oral Word Association Test (COWAT), 3) Semantic Fluency Test (SFT), 4) Complex Ideation subtest from the Boston Diagnostic Aphasia Examination (BDAE) and 5) Peabody Picture Vocabulary Test IV (PPVT IV). For COWAT, SFT, and PPVT IV, different forms of the tests were administered pre-/post-nerTMS. 
All subjects also completed Communicative Abilities Log (mini-CAL), a subjective, self-administered measure of communicative abilities [14].\nAll subjects completed a previously described aphasia testing protocol [14]. A battery of neuropsychological measures was obtained on the day of fMRI or within 1 week of initiating the nerTMS and again, within 1 week of completing the nerTMS protocol on the same day as the follow-up fMRI. The battery included 1) Boston Naming Test (BNT), 2) Controlled Oral Word Association Test (COWAT), 3) Semantic Fluency Test (SFT), 4) Complex Ideation subtest from the Boston Diagnostic Aphasia Examination (BDAE) and 5) Peabody Picture Vocabulary Test IV (PPVT IV). For COWAT, SFT, and PPVT IV, different forms of the tests were administered pre-/post-nerTMS. All subjects also completed Communicative Abilities Log (mini-CAL), a subjective, self-administered measure of communicative abilities [14].\n[SUBTITLE] Neuronavigated excitatory repetitive transcranial magnetic stimulation (nerTMS) [SUBSECTION] Single-pulse TMS was performed to establish resting motor threshold (RMT) and active motor threshold (AMT) with a Magstim 200® stimulator connected through a Bistim® module to a 70 mm figure-8 coil (Magstim Co., Wales, UK) [29]. Surface electromyography (EMG) leads were placed over the first dorsal interosseous (FDI) muscle of both hands. Subjects were seated comfortably, with both arms fully supported on a pillow. Full muscle relaxation was maintained through visual and online EMG monitoring. The coil was then placed over the primary motor cortex in the right and left hemisphere at the optimal site for obtaining a MEP in the FDI. After the RMT and AMT were determined, iTBS was performed using Magstim Rapid2® (Magstim Co., Wales, UK) with intensity set at 80% of AMT obtained from the right hemisphere. Motor threshold values from the left hemisphere were too high due to the LMCA stroke; therefore, these values were not used to set the rTMS intensity. The figure-8 coil was positioned tangentially to the skull, with the handle parallel to the sagittal axis pointing occipitally. ITBS consisted of bursts of three pulses at 50 Hz given every 200 milliseconds in two second trains, repeated every 10 seconds over 200 seconds for a total of 600 pulses [30]. BrainSight™2 (Rogue Research Inc., Montreal, Canada) was used for neuronavigation to guide rTMS stimulation to the fMRI-identified residual left hemispheric Broca’s area of each individual patient. This technology also enabled reliable three-dimensionally precise reapplication of rTMS throughout the study. Subjects received one session of iTBS each day for 10 consecutive days from Monday through Friday.\nSingle-pulse TMS was performed to establish resting motor threshold (RMT) and active motor threshold (AMT) with a Magstim 200® stimulator connected through a Bistim® module to a 70 mm figure-8 coil (Magstim Co., Wales, UK) [29]. Surface electromyography (EMG) leads were placed over the first dorsal interosseous (FDI) muscle of both hands. Subjects were seated comfortably, with both arms fully supported on a pillow. Full muscle relaxation was maintained through visual and online EMG monitoring. The coil was then placed over the primary motor cortex in the right and left hemisphere at the optimal site for obtaining a MEP in the FDI. After the RMT and AMT were determined, iTBS was performed using Magstim Rapid2® (Magstim Co., Wales, UK) with intensity set at 80% of AMT obtained from the right hemisphere. 
Motor threshold values from the left hemisphere were too high due to the LMCA stroke; therefore, these values were not used to set the rTMS intensity. The figure-8 coil was positioned tangentially to the skull, with the handle parallel to the sagittal axis pointing occipitally. ITBS consisted of bursts of three pulses at 50 Hz given every 200 milliseconds in two second trains, repeated every 10 seconds over 200 seconds for a total of 600 pulses [30]. BrainSight™2 (Rogue Research Inc., Montreal, Canada) was used for neuronavigation to guide rTMS stimulation to the fMRI-identified residual left hemispheric Broca’s area of each individual patient. This technology also enabled reliable three-dimensionally precise reapplication of rTMS throughout the study. Subjects received one session of iTBS each day for 10 consecutive days from Monday through Friday.\n[SUBTITLE] Data analyses [SUBSECTION] Data were initially characterized using descriptive statistics (means and standard deviations or frequencies and percentages as appropriate). Next, the pre-/post-rTMS aphasia testing and fMRI lateralization indices were compared using the Wilcoxon signed-rank test. All data management and analyses were performed using SPSS version 18.0 (SPSS, Chicago, IL, USA).\nData were initially characterized using descriptive statistics (means and standard deviations or frequencies and percentages as appropriate). Next, the pre-/post-rTMS aphasia testing and fMRI lateralization indices were compared using the Wilcoxon signed-rank test. All data management and analyses were performed using SPSS version 18.0 (SPSS, Chicago, IL, USA).", "This exploratory study was approved by the Institutional Review Board and all subjects provided written informed consent. The inclusion criteria were: 1. LMCA distribution stroke ≥12 months prior to study participation, 2. moderate aphasia at time of enrollment, 3. no contraindication(s) to fMRI at 4T, 4. no history of seizures. All subjects were right-handed prior to stroke, as determined by an Edinburgh Handedness Inventory (EHI) score ≥50.", "FMRI procedures at 4T Varian Unity INOVA Whole Body MRI/MRS scanner have been previously described in detail [20,23,24]. Briefly, participants were oriented to the equipment and engaged in explicit practice of the tasks; they were allowed to enter the scanner only after expressing complete understanding of all procedures. All fMRIs were performed following the same protocol. First, an alignment scan was performed for head position adjustments so that the AC-PC reference line was as close as possible to the vertical axis of the scanner. A high resolution, T1-weighted 3-D MDEFT (Modified Driven Equilibrium Fourier Transform) anatomical image was obtained using the following parameters: TR/TE=13.1/6 ms, FOV=25.6×19.2×19.2 cm, flip angle=22°, and voxel dimensions=1×1×1 mm. Finally, T2*-weighted functional images were obtained using the following parameters: TR/TE=3000/30 ms, FOV=25.6×25.6 cm, matrix=64×64 pixels, number of slices=30, slice thickness=4 mm, and flip angle=75°.\nFor language localization we used a highly reliable semantic decision/tone decision (SDTD) task [20,25–28]. Briefly, subjects performed two different alternating conditions, the control condition (tones recognition) and the active condition (semantic recognition/association), starting with the control condition. 
In the tone decision task, subjects heard brief sequences of four to seven low- (500 Hz) and high-pitch (750 Hz) tones every 3.75 seconds and responded with a non-dominant hand button press for any sequence containing two high-pitched tones. In the semantic task, subjects heard spoken English nouns designating animals every 3.75 seconds and responded with a non-dominant hand button press to stimuli that meet two criteria: “lives in the United States” and “is commonly used by humans” (e.g., “horse”). The contrast between the tone discrimination task and the semantic task results in an activation pattern that is inherent only to semantic, word-form, and phoneme processing, all of which are part of the language system. Each subject performed this task twice during each scanning session and the runs were concatenated for processing.", "The fMRI image post-processing was performed using software developed in the Imaging Research Center at the Cincinnati Children’s Hospital Medical Center in the IDL software environment (IDL 7.0; Research Systems Inc., Boulder, CO) [27,28]. As previously, Hamming-filtering preceded data reconstruction and the geometric distortion correction using the multi-echo reference method. Data were then co-registered to further reduce the effects of motion artifact using a previously developed pyramid co-registration algorithm. A general linear model (GLM) was used to process the data with a set of cosine basis functions used to minimize artifacts due to signal drift from aliased respiratory and cardiac effects. Finally, z-score maps were computed on a pixel-by-pixel basis and transformed into Talairach space for composite mapping. Data in native space were utilized to localize left-hemispheric language centers for subsequent rTMS administration.\nTo parameterize the fMRI data, we used previously generated regions of interest (ROIs) from 49 healthy controls, to calculate the laterality indices (LI) in the frontal and temporo-parietal brain regions (see Figure 1 in Szaflarski et. al., 2008) [28]. These primary ROIs include the lateral inferior and middle frontal region that corresponds to Broca’s area and the lateral/posterior temporal and parietal region that corresponds to Wernicke’s area; these ROIs were superimposed on the corresponding right hemispheric homologues (right hemispheric ROIs mirrored the left hemispheric ones). We also combined both ROIs into one “global language ROI.” Next, to avoid biases induced by arbitrary thresholding schemes, we determined the mean value of the t-statistics for all voxels within each ROIs; the number of voxels above the mean for each ROI was entered in the following formula to calculate the lateralization index: LI = (NL−NR)/(NL+NR), where NL and NR represent the number of voxels (left and right, respectively) above the mean t for each ROI [24]. This calculation method produces LI ranging from 1 (left lateralized) to −1 (right lateralized).", "All subjects completed a previously described aphasia testing protocol [14]. A battery of neuropsychological measures was obtained on the day of fMRI or within 1 week of initiating the nerTMS and again, within 1 week of completing the nerTMS protocol on the same day as the follow-up fMRI. The battery included 1) Boston Naming Test (BNT), 2) Controlled Oral Word Association Test (COWAT), 3) Semantic Fluency Test (SFT), 4) Complex Ideation subtest from the Boston Diagnostic Aphasia Examination (BDAE) and 5) Peabody Picture Vocabulary Test IV (PPVT IV). 
For COWAT, SFT, and PPVT IV, different forms of the tests were administered pre-/post-nerTMS. All subjects also completed Communicative Abilities Log (mini-CAL), a subjective, self-administered measure of communicative abilities [14].", "Single-pulse TMS was performed to establish resting motor threshold (RMT) and active motor threshold (AMT) with a Magstim 200® stimulator connected through a Bistim® module to a 70 mm figure-8 coil (Magstim Co., Wales, UK) [29]. Surface electromyography (EMG) leads were placed over the first dorsal interosseous (FDI) muscle of both hands. Subjects were seated comfortably, with both arms fully supported on a pillow. Full muscle relaxation was maintained through visual and online EMG monitoring. The coil was then placed over the primary motor cortex in the right and left hemisphere at the optimal site for obtaining a MEP in the FDI. After the RMT and AMT were determined, iTBS was performed using Magstim Rapid2® (Magstim Co., Wales, UK) with intensity set at 80% of AMT obtained from the right hemisphere. Motor threshold values from the left hemisphere were too high due to the LMCA stroke; therefore, these values were not used to set the rTMS intensity. The figure-8 coil was positioned tangentially to the skull, with the handle parallel to the sagittal axis pointing occipitally. ITBS consisted of bursts of three pulses at 50 Hz given every 200 milliseconds in two second trains, repeated every 10 seconds over 200 seconds for a total of 600 pulses [30]. BrainSight™2 (Rogue Research Inc., Montreal, Canada) was used for neuronavigation to guide rTMS stimulation to the fMRI-identified residual left hemispheric Broca’s area of each individual patient. This technology also enabled reliable three-dimensionally precise reapplication of rTMS throughout the study. Subjects received one session of iTBS each day for 10 consecutive days from Monday through Friday.", "Data were initially characterized using descriptive statistics (means and standard deviations or frequencies and percentages as appropriate). Next, the pre-/post-rTMS aphasia testing and fMRI lateralization indices were compared using the Wilcoxon signed-rank test. All data management and analyses were performed using SPSS version 18.0 (SPSS, Chicago, IL, USA).", "Eight subjects fulfilled the inclusion criteria and completed all study procedures (Table 1); additional 3 subjects signed the informed consent but were later excluded due to metallic artifact (2) and history of a single seizure at the time of stroke (1). All subjects were ≥1 year since the aphasia-producing stroke and were not engaged in any specific aphasia rehabilitation. Aphasia testing showed trends towards improvement with all performed tests except for COWAT (Table 1); improvements in semantic fluency were significant (p=0.028) and mini-CAL showed trend towards post-rTMS communication improvements (p=0.075). No subjects reported any seizures or serious adverse events.\nMR images of individual strokes are presented in Figure 1. On these images, clearly noted are individual activations that served as a target for nerTMS (enclosed in circles). There does not appear to be any overlap between the areas of activation and the location of the stroke. Types of individual aphasias, based on clinical impression are provided in the legend. Group pre- and post-rTMS fMRI results are shown in Figure 2. 
Pre-rTMS scans showed areas of increased activation for the tone decision baseline in the right hemisphere and low-level activations predominantly in the left hemisphere. Post-rTMS group images showed clear overall increase in the BOLD signal changes in response to the intervention. Further, pre-/post-rTMS contrasts showed significant overall increases in the BOLD signal with apparent shifts to the left hemisphere (p=0.025 for Broca’s; p=0.036 for Wernicke’s; p=0.018 for the global ROI). While the pre- and post-rTMS locations of the BOLD signal changes are similar and include language areas typical for this task (Table 2), post-rTMS images showed shifts in activations predominantly to the left hemispheric head regions (Figure 3).", "In this exploratory study we evaluated the safety and efficacy of the neuronavigated excitatory rTMS (nerTMS) applied to the identified by fMRI frontal language areas as a tool for post-stroke aphasia rehabilitation. Despite a relatively small sample size (n=8), we were able to show significant improvements in language function after 2 weeks of stimulation that were associated with significant shifts of fMRI signal to the affected hemisphere; patients also tended to report subjective improvements in language skills after therapy completion (p=0.075). These results are very encouraging and support further testing of nerTMS as a post-stroke aphasia rehabilitation tool.\nIn general, rTMS involves repeated brain stimulation at regular frequencies; it is a noninvasive method of exciting or inhibiting neurons through repeated induction of weak electric currents in the brain via rapidly changing magnetic fields. If used with appropriate intensity, brain activity can be modulated using rTMS without discomfort or ill effects [31]. The cortical plasticity induced by rTMS and its therapeutic effects are thought to be primarily related to synaptic plasticity; short- and long-term modifications in neural communication mediated by post-synaptic NMDA receptors in which rTMS releases Mg++ blockade allowing Ca++ entering the post-synaptic cell and inducing long-term potentiation [16]. The rTMS stimulation area is usually very focal [32] and the changes induced by rTMS can persist well beyond the duration of the stimulation [30,33]. We speculate that the efficacy of the nerTMS observed in this study is related to the excitatory effect on the peri-stroke area and to the induction of long-term potentiation in this area. Further, we are encouraged by the fact that nerT-MS induced changes in the entire semantic decision network [26] (Figure 2) and not only in the left frontal (stimulated) area. This finding further supports the notion that the effect of nerTMS on a single area, if associated with improvement(s), may correspond to both local and remote changes in the relevant network, and, therefore, improvements in all language-related skills. While a further discussion about the localization of the pre-/post-rTMS BOLD signal changes is beyond the scope of this manuscript, of importance is that the post-rTMS increases are seen in brain regions typically active during this fMRI task – left inferior frontal and left temporo-parietal regions [25,26]. Certainly, the left frontal BOLD signal increase may be related to the direct stimulation of that area in the course of therapy; this increase is in agreement with the improvements on an expressive language measure (SFT) and subjective verbal output reported by the patients (CAL) [14]. 
The BOLD signal increase in the left temporo-parietal region (the area that is typically responsible for lexical retrieval and word meaning) was observed even though this region was not directly stimulated with rTMS, and is also consistent with the observed improvements on the semantic fluency test (SFT).\nIn healthy controls, inhibitory rTMS may decrease motor and cognitive performance in the stimulated area [16]. In patients with post-stroke aphasia caused by LMCA stroke, inhibitory rTMS applied to the unaffected, non-dominant right-hemispheric language homologues improved the left-hemispheric language functions [34]. RTMS-evoked inhibition of redundant language circuits in the non-dominant hemisphere may release the damaged, dominant language networks allowing them to participate in post-stroke recovery. These findings support the hypothesis that the residual left-hemispheric network(s) may be more important or effective for post-stroke language processing when not negatively influenced by the redundant non-dominant (less effective) circuitry.\nIn contrast to the inhibitory rTMS, excitatory rTMS has been shown to facilitate language processing via increased short- and long-term cortical excitability; rTMS applied to the left-hemispheric speech area facilitated naming in healthy controls [35] or patients with Alzheimer’s disease [36] or Primary Progressive Aphasia [37]. In general, as a treatment, rTMS has been shown in numerous pilot studies to be efficacious in various neurological and psychiatric conditions [38] and is now FDA-cleared for the treatment of depression. The findings from the above studies combined with our findings suggest that the application of nerTMS to the damaged by stroke residual language circuits in the dominant-hemisphere may indeed lead to improved language skill as observed in our preliminary study.\nOur original hypothesis that stimulating the left hemispheric language areas with nerTMS would produce aphasia improvements is further supported by the finding of increased post-nerTMS BOLD signal activation (when compared to the pre-rTMS fMRI) in the dominant-hemisphere language circuits and decreased involvement of the non-dominant for language hemisphere after the course of therapy as observed with fMRI. Post-stroke neuroimaging studies that have evaluated recovery from aphasia in adults with unilateral lesions show evidence of cortical reorganization and migration of language functions to the non-dominant hemisphere after a dominant hemisphere insult [20,23,39]. Interestingly, one study that showed such a redistribution pattern using PET and fMRI language tasks also found negative association between increased non-dominant inferior frontal gyrus activation and recovery after an ischemic stroke [40]. The best recovery was observed in patients with peri-infarct activation on the fMRI and PET studies. Winhuisen et al. (2007) applied inhibitory rTMS to the non-dominant language circuits and found improved post-stroke aphasia recovery in patients with preserved left-hemispheric language centers; recovery was independent of the recruitment of the non-dominant homologues [19]. A recent study showed that the fMRI activation may be dependent on the phase of post-stroke recovery with early right-hemispheric upregulation of the fMRI signal changes correlating with language recovery and late consolidation of the activation in the left-hemispheric language centers [41]. 
This and other studies support the presence of preexisting language pathways in both the dominant and non-dominant hemispheres and suggest that in health the circuitry in the non-dominant hemisphere may be inhibited by the active circuitry in the dominant hemisphere; when the preferred pathway is interrupted (as in a stroke), the non-dominant circuitry is uninhibited, hence activated. These studies suggest that increased reliance on language circuits in the dominant hemisphere supports higher levels of recovery from aphasia after stroke [20,23]. As hypothesized, in this study nerTMS applied to the fMRI-identified language areas facilitated increased reliance on these areas for post-stroke aphasia recovery.\nEarly reports [42] of possible seizures associated with TMS led, over 10 years ago, to the development of safety rules regarding stimulation frequency, intensity, train duration and intertrain interval [17,43]. Multiple subsequent studies of rTMS have demonstrated this to be a safe technique for use in studies of stroke [44] and even in epilepsy [45]. In healthy adult studies, TBS has been well tolerated. However, there is one report of a seizure in a healthy subject caused by continuous TBS stimulation with intensity set at approximately 100% of the resting motor threshold, an intensity far exceeding that used in our protocol [46]. Our results also demonstrate the safety of the nerTMS protocol; no adverse reactions were observed in our pilot group of stroke patients despite 10 daily 200-second nerTMS (iTBS) sessions; subjects reported no discomfort, and the subjective assessment of pre-/post-rTMS language skills (mini-CAL) indicated the patients felt the treatments they received were beneficial. Therefore, we are confident in proposing nerTMS as a safe and potentially beneficial intervention in patients with chronic aphasia after stroke.\nThe shortcomings of this study include the small sample size, lack of sham treatment, unblinded application of rTMS and only short-term follow-up. Nevertheless, we were able to show improved language skills and shifts of fMRI activations to the left hemisphere in response to the neuronavigated rTMS. Future studies should focus on addressing these limitations.", "In this study, we provide preliminary evidence that fMRI-guided, excitatory rTMS applied to the affected Broca’s area improves language skills in patients with chronic post-stroke aphasia. We also show that these linguistic gains are associated with stronger language lateralization to the dominant left hemisphere. Finally, in addition to the objective gains, patients tended to report improvements in their language skills after the intervention. Therefore, the rTMS protocol used in this study appears to be safe and should be further tested in blinded studies assessing its short- and long-term safety and efficacy for post-stroke aphasia rehabilitation." ]
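The laterality-index formula used in the fMRI analysis above, LI = (NL−NR)/(NL+NR), can be restated as a short computational sketch. This is not the authors' IDL pipeline: the voxel t-value arrays below are hypothetical, and counting voxels against the mean t of the combined left/right ROI is only one plausible reading of the thresholding step described in the methods.

    # Illustrative Python sketch of the laterality-index calculation (hypothetical data).
    import numpy as np

    def laterality_index(t_left, t_right):
        # LI = (NL - NR) / (NL + NR); NL, NR = numbers of voxels with t above the mean t
        # (here the mean is taken over the combined left+right ROI, which is an assumption)
        thr = np.concatenate([t_left, t_right]).mean()
        n_left = int(np.sum(t_left > thr))
        n_right = int(np.sum(t_right > thr))
        return (n_left - n_right) / float(n_left + n_right)

    rng = np.random.default_rng(0)
    t_left = rng.normal(2.0, 1.0, 500)    # e.g., left inferior/middle frontal ROI
    t_right = rng.normal(1.0, 1.0, 500)   # right-hemispheric mirror ROI
    print(round(laterality_index(t_left, t_right), 2))  # positive values indicate left lateralization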
[ null, "materials|methods", "subjects", "methods", "methods", null, null, "methods", "results", "discussion", "conclusions" ]
[ "aphasia", "language", "fMRI", "rTMS", "rehabilitation", "stroke" ]
Significance of 99mTc-sestamibi myocardial scintigraphy after percutaneous coronary intervention in patients with acute myocardial infarction.
21358600
This study was designed to clarify the relationship between the washout rate (WR) determined from 99mTc-sestamibi myocardial scintigraphic images and the levels of cardiac enzymes in patients with acute myocardial infarction (AMI) after percutaneous coronary intervention (PCI).
BACKGROUND
A total of 56 consecutive patients with AMI (mean age 65.8 ± 8.5 years), who underwent PCI on admission, were included. The MB isoenzyme of creatine kinase (CK-MB) was measured every 3 h after admission. Two weeks after the onset of AMI, 99mTc-sestamibi myocardial scintigraphy was performed in the early (30 min) and delayed (4 h) phases after tracer injection. The heart-to-mediastinum ratio (H/M) and WR were calculated from the planar images.
MATERIAL/METHODS
PCI was performed at 9.4 ± 6.0 h after the onset of AMI. In 26 patients the culprit lesion was located in the right coronary artery and in 24 patients it was located in the left anterior descending coronary artery. The peak CK-MB was 274.1 ± 169.4 IU/L (13.5 ± 3.9 h). The early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74 ± 0.58, 3.00 ± 0.70, and 58.8 ± 10.0%, respectively. The delayed H/M was significantly correlated with the peak CK-MB (r = -0.37, p = 0.005). The WR of 99mTc-sestamibi was also significantly correlated with the peak CK-MB (r = -0.34, p = 0.012).
RESULTS
These results suggest that the WR determined from 99mTc-sestamibi myocardial scintigraphic images reflects the extent of myocardial damage in AMI patients.
CONCLUSIONS
[ "Aged", "Angioplasty, Balloon, Coronary", "Female", "Humans", "Male", "Myocardial Infarction", "Myocardial Perfusion Imaging", "Myocardium", "Technetium Tc 99m Sestamibi" ]
3524719
null
null
Study limitation
In the present study we did not evaluate the regional WR of 99mTc-sestamibi in relation to the culprit region. Furthermore, this study was not a controlled trial; AMI patients who did not receive PCI should be included as controls in future studies. In a study of non-ischemic patients, it was reported that the WR of 99mTc-sestamibi might provide prognostic information; however, the prognostic value of H/M and WR in patients with ischemic cardiac disease remains unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as an incremental prognostic indicator.
Results
[SUBTITLE] Patients’ characteristics and laboratory findings [SUBSECTION] PCI was performed 9.4±6.0 h after the actual onset of AMI. All patients received an appropriate PCI; in 51 (91%) patients, aspiration catheters successfully penetrated the occluding lesions and were followed by conventional PCI. The remaining 5 patients received PCI without aspiration catheters. Thrombolysis in myocardial infarction (TIMI) [15] grade 3 flow was observed in all patients after PCI. There were no differences in the age, BMI, cardiac enzymes on admission, time from the onset to revascularization, and peak cardiac enzymes, even though the patients had culprit lesions located in different arteries (Table 1). The CK and CK-MB levels on admission were 410.6±1318.0 IU/L and 39.8±198.1 IU/L, and the peak CK and CK-MB levels were 2689.6±1167.4 IU/L (15.3±4.6 h) and 274.1±169.4 IU/L (13.5±3.9 h), respectively. After administration of 200 mg of acetylsalicylic acid and 300 mg of clopidogrel sulfate as a loading dose, all patients received 100 mg of acetylsalicylic acid and 75 mg of clopidogrel sulfate after PCI as a maintenance dose. Most of the patients were treated with an angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, a β-blocker, or a statin in order to prevent secondary cardiac events and deterioration of cardiac function. Seven patients who received a statin were not able to continue the administration due to muscle ache. [SUBTITLE] Scintigraphic study [SUBSECTION] The early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/Ms were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCx culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01). The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2). The left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) were 97.9±27.2 ml, 51.9±22.0 ml, and 48.8±10.0%, respectively (Table 2). No significant difference was observed among the patients with regard to the occluded artery. [SUBTITLE] Parameters from 99mTc-sestamibi images and peak cardiac enzymes [SUBSECTION] Figure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=−0.32, p=0.015) and peak CK-MB (r=−0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=−0.32, p=0.017) and peak CK-MB (r=−0.34, p=0.012).
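The correlations reported above between the scintigraphic parameters and the peak cardiac enzymes were obtained by linear regression, as stated in the statistical methods. A minimal sketch of that analysis follows; the per-patient values are hypothetical stand-ins, not the study data.

    # Minimal sketch of the Figure 1-type analysis: linear regression of a planar-image
    # parameter (e.g., WR of 99mTc-sestamibi) against peak CK-MB. Hypothetical values only.
    from scipy import stats

    wr = [52.1, 63.4, 58.0, 70.2, 49.5, 61.7, 66.3, 55.8]    # washout rate, %
    peak_ckmb = [150, 310, 240, 420, 120, 280, 390, 200]     # peak CK-MB, IU/L
    res = stats.linregress(wr, peak_ckmb)
    print(f"r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")     # compare with the r and p values reported above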
Conclusions
These results suggest that the WR determined from 99mTc-sestamibi myocardial scintigraphic images could reflect the extent of myocardial damage in AMI patients after PCI. This study also demonstrates the value of acquiring 99mTc-sestamibi myocardial scintigraphic images at two different time points. Further studies are necessary to confirm these results.
[ "Background", "Radionuclide studies", "Data analysis", "Statistical analysis", "Scintigraphic study", "Parameters from 99mTc-sestamibi images and peak cardiac enzymes", "Discussion" ]
[ "Creatinine kinase (CK) and cardiac troponin is used for diagnostic evaluation of myocardial damage in patients with acute myocardial infarction (AMI). CK is an established noninvasive measure of myocardial infarct size and severity [1] and has been accepted as a reliable prognostic marker in AMI patients undergoing primary percutaneous coronary intervention (PCI) [2]. Although early reperfusion phenomena strongly influence CK release [3], the peak values of cardiac biomarkers, including CK and the MB isoenzyme of CK (CK-MB), predict the prognosis of AMI patients after PCI [2,4]. In general, contrast-enhanced magnetic resonance imaging [5,6] and nuclear cardiology techniques [1,7,8] evaluate the association between cardiac biomarkers and myocardial damage.\nSince 1990, single-photon emission computed tomography (SPECT) with technetium-99m hexakis 2-methoxy-isobutyl-isonitrile (99mTc-sestamibi) has been widely used to assess myocardial damage at rest after the onset of AMI [9]. 99mTc-sestamibi is known to exhibit the phenomenon of reverse redistribution, so-called washout, in patients with AMI after PCI [10–12]. Myocardial scintigraphy with 123I-metaiodobenzylguanidine is a standard method of rendering early and delayed images; nuclide tracer washout is the gold standard for evaluating cardiac function [13]. However, washout of 99mTc-sestamibi is still a matter of debate; the significance of washout has not been fully elucidated in AMI patients after primary PCI. No previous study has demonstrated the significance of cardiac biomarkers and washout of 99mTc-sestamibi for the evaluation of cardiac damage in AMI patients. Accordingly, the present study was designed to clarify the association between the washout rate (WR) of 99mTc-sestamibi determined from myocardial scintigraphic images and cardiac enzymes in AMI patients after PCI.", "All study patients underwent SPECT 2 weeks after the onset of AMI. 99mTc-sestamibi (740 MBq; FUJI FILM RI Pharma Co. Ltd., Tokyo, Japan) was injected into the left antecubital vein, and thereafter SPECT was performed twice-initially at 30 min after injection (early 99mTc-sestamibi uptake) and subsequently at 4 h after injection (delayed uptake).\nBefore performing SPECT, anterior and lateral planar images were acquired for 300 s using a gamma camera equipped with a low-/medium-energy general-purpose collimator and a 512×512 matrix. 99mTc-sestamibi images were obtained using a double-headed gamma camera (Symbia E; Siemens-Asahi Medical Technologies Ltd., Tokyo, Japan) equipped with a low-/medium-energy general-purpose collimator. Two detectors (2×180°) were used to acquire 64 views for 25 s in 5.6° steps using a 64×64 matrix. The energy window of 99mTc was centered at 140 keV ±15%.\nRaw imaging data were reconstructed using Butterworth-filtered (order, 8; cut-off frequency, 0.20 cycles/pixel) back-projection. Transaxial slices were reconstructed and reoriented to represent coronal slices, and then horizontal long- and short-axis slices were produced by axis shift.\nStandard electrocardiographically gated images were acquired in 64 steps at 19 s per step, using the step acquisition mode in RR interval, and divided into 16 frames. Tracer uptake was assessed by non-gated early images created from the sums of all of the gated images obtained in standard acquisition mode.", "Regions of interest (ROIs) were drawn over the entire heart and upper mediastinum depicted in the planar images. 
The H/M ratio and global WR of 99mTc-sestamibi were calculated from the pixel counts in the ROIs using the following equations: H/M = mean pixel count of cardiac ROI/mean pixel count of mediastinal ROI; and WR (%) = [(mean early cardiac pixel count – mean delayed cardiac pixel count)/mean early cardiac pixel count] × 100. Backgrounds and time-decay corrections were not applied to the calculation of WR.\nOnce a SPECT image was acquired and reconstructed from an early image, quantitative gated SPECT (QGS) software (Cedars-Sinai Medical Center, Los Angeles, CA) was used to calculate the ventricular edges and evaluate the left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) [14].", "The results are expressed as the means ±SD. The significance of differences among different coronary territories was assessed by one-way analysis of variance (ANOVA). Parameters in the early and delayed phases in the same patient were compared using a paired t test. Liner regression analysis was used to evaluate the significance of peak values of cardiac enzymes and the values obtained from 99mTc-sestamibi myocardial scintigraphic images. P value of less than 0.05 was considered to indicate statistical significance.", "The early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/Ms were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCx culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01). The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2).\nThe Left Ventricular End Diastolic Volume (LVEDV), Left Ventricular End Systolic Volume (LVESV), and Left Ventricular Ejection Fraction (LVEF) were 97.9±27.2 ml, 51.9±22.0 ml, and 48.8±10.0%, respectively (Table 2). No significant difference was observed among the patients with regard to the occluded artery.", "Figure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=−0.32, p=0.015) and peak CK-MB (r=−0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=−0.32, p=0.017) and peak CK-MB (r=−0.34, p=0.012).", "The present study demonstrated several significant aspects of 99mTc-sestamibi planar imaging for the assessment of cardiac damage in AMI patients after primary PCI. Firstly, the delayed uptake was negatively correlated with the peak values of CK and CK-MB. Secondly, the WR was positively correlated with the peak values of these cardiac enzymes. These results suggest that 99mTc-sestamibi imaging reflects injured myocardium. Since 99mTc-sestamibi WR indicates injured but viable myocardium, 99mTc-sestamibi imaging in the subacute phase of AMI may provide additional clinical information.\nThe association among peak CK, infarct size, and mortality was demonstrated in the 1970s [16,17]. 
After the importance of measuring CK in AMI patients was recognized, various studies were conducted to confirm the efficacy of peak CK. One study reported that CK-MB elevation without concomitant CK elevation is associated with a worse prognosis [18]. Although it has been suggested that CK-MB overestimates infarct size after reperfusion [19], a recent study has reported that peak CK and CK-MB are still related to mortality and infarct size in AMI patients with TIMI grade 3 flow after PCI. According to the guidelines described by Alpert et al [20], cardiac troponin is considered the sensitive marker of choice, and is more sensitive than CK and CK-MB. However, Tzivoni et al. [8] have demonstrated that peak troponin T is as accurate as peak CK, and CK-MB are as accurate as troponin T in estimating infarct size. Thus, in the present study, we determined peak CK and CK-MB and conducted a serological study for detecting cardiac damage.\n99mTc-sestamibi is widely used as a myocardial perfusion tracer. The myocardial uptake mechanism of 99mTc-sestamibi depends on the passive distribution across the plasma and mitochondrial membranes in response to a transmembrane electrochemical gradient [21,22]; approximately 90% of its activity in vivo is associated with mitochondria [23]. Fundamental studies have reported a close relationship between mitochondrial function and retention of 99mTc-sestamibi in the myocardium. Crane et al. [24] have demonstrated that loss of mitochondrial metabolic function is related to 99mTc-sestamibi release from the myocardium. Another study reported that 99mTc-sestamibi uptake and retention are inhibited in a cultured chick myocyte model when mitochondrial membrane potential is depolarized [25]. Under ischemic conditions and during reperfusion, reactive oxygen species produced by endothelial cells induce the release of phagocytes in the myocardium, which leads to mitochondrial dysfunction [26]. Mitochondrial dysfunction may alter the mitochondrial membrane potential and impair myocardial retention of 99mTc-sestamibi.\nIn the present study, the delayed H/M and WR were correlated with the peak CK and CK-MB. A 99mTc-sestamibi kinetics study [27] has demonstrated a significant correlation between 99mTc-sestamibi activity after reperfusion and peak CK release in ischemic-reperfused rat heart models, which is consistent with the results of our study. Since peak CK represents the extent of myocardial injury, 99mTc-sestamibi-delayed H/M and WR are probably good markers of ischemic-damaged myocardium. In contrast, the early H/M of 99mTc-sestamibi was uncorrelated with peak CK and peak CK-MB. Weinstein et al. [28] reported that the initial uptake of 99mTc-sestamibi predominantly reflects coronary blood flow in a rabbit heart model of coronary occlusion. The early H/M of 99mTc-sestamibi reflects reperfused myocardial perfusion caused by primary PCI, which suggests that the early H/M of 99mTc-sestamibi is uncorrelated with peak CK and CK-MB. In studies conducted on ischemic patients, Takeishi et al. [10] reported that the WR of 99mTc-sestamibi after direct percutaneous transluminal coronary angioplasty was associated with infarct-related area and preserved left ventricular function. Fujiwara et al. [12] compared the WR of 99mTc-sestamibi with contractile reserve wall motion evaluated by low-dose dobutamine echocardiography. 
They concluded that the enhancement of 99mTc-sestamibi WR was related to the reversible functional abnormalities indicated by the dobutamine-responsive contractile reserve. These results suggest that the WR of 99mTc-sestamibi is associated with an ischemic-damaged but viable myocardium. In non-ischemic patients, Kumita et al. [29] reported that the WR of 99mTc-sestamibi, which was related to left ventricular function, was higher in patients with chronic heart failure than in controls. Matsuo et al. [30] analyzed left ventricular systolic and diastolic function in patients with dilated cardiomyopathy and demonstrated a positive correlation between the WR of 99mTc-sestamibi and the plasma BNP level. They also suggested that the WR of 99mTc-sestamibi might provide prognostic information in chronic heart failure patients because the incidence of cardiac events was higher in such patients with higher 99mTc-sestamibi WR. These non-ischemic heart disease studies also suggest that the WR of 99mTc-sestamibi might be a reliable marker of myocardial damage.\n[SUBTITLE] Study limitation [SUBSECTION] In the present study we did not evaluate regional WR of 99mTc-sestamibi in association with culprit region. Furthermore, this study was not a controlled trial. AMI patients who did not receive PCI should be included as controls in future studies. In the study with non-ischemic patients, it was reported that WR of 99mTc-sestamibi might provide prognostic information. However, the prognostic values of H/M and WR in patients with ischemic cardiac disease remain unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as a prognostic incremental indicator.\nIn the present study we did not evaluate regional WR of 99mTc-sestamibi in association with culprit region. Furthermore, this study was not a controlled trial. AMI patients who did not receive PCI should be included as controls in future studies. In the study with non-ischemic patients, it was reported that WR of 99mTc-sestamibi might provide prognostic information. However, the prognostic values of H/M and WR in patients with ischemic cardiac disease remain unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as a prognostic incremental indicator.\n[SUBTITLE] Clinical implications [SUBSECTION] 99mTc-sestamibi imaging is an objective tool for assessing myocardial damage and viability. The present study showed that 99mTc-sestamibi planar imaging may be useful for the assessment of cardiac damage in AMI patients. Since WR of 99mTc-sestamibi (after PCI) is associated with infarcted myocardium (but with preserved left ventricular function), increased WR might predict the improvement of left ventricular wall motion in chronic phase. Thus, 99mTc-sestamibi imaging after PCI may provide additional clinical information. Follow-up studies with larger number of patients are needed to confirm the usefulness of 99mTc-sestamibi images in AMI patients.\n99mTc-sestamibi imaging is an objective tool for assessing myocardial damage and viability. The present study showed that 99mTc-sestamibi planar imaging may be useful for the assessment of cardiac damage in AMI patients. Since WR of 99mTc-sestamibi (after PCI) is associated with infarcted myocardium (but with preserved left ventricular function), increased WR might predict the improvement of left ventricular wall motion in chronic phase. 
Thus, 99mTc-sestamibi imaging after PCI may provide additional clinical information. Follow-up studies with larger number of patients are needed to confirm the usefulness of 99mTc-sestamibi images in AMI patients." ]
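The heart-to-mediastinum ratio and global washout rate defined in the Data analysis section reduce to simple ratios of mean ROI pixel counts; the sketch below restates those two equations. The ROI counts are hypothetical, and the decay-corrected variant (using the physical half-life of 99mTc of about 6.0 h and the 3.5 h interval between the 30-min and 4-h acquisitions) is added only for illustration; the study itself applied no background or time-decay correction to the WR.

    # Sketch of the planar-image parameters defined in the Data analysis section
    # (hypothetical mean pixel counts for the cardiac and mediastinal ROIs).
    def heart_to_mediastinum(cardiac_counts, mediastinal_counts):
        return cardiac_counts / mediastinal_counts

    def washout_rate(early_cardiac, delayed_cardiac):
        # WR (%) = (early - delayed) / early * 100; no background or decay correction,
        # matching the study protocol
        return (early_cardiac - delayed_cardiac) / early_cardiac * 100.0

    def decay_corrected_washout_rate(early_cardiac, delayed_cardiac,
                                     dt_hours=3.5, half_life_hours=6.01):
        # Illustrative physical-decay correction between acquisitions (an assumption;
        # not part of the WR reported in the study)
        corrected_delayed = delayed_cardiac * 2.0 ** (dt_hours / half_life_hours)
        return (early_cardiac - corrected_delayed) / early_cardiac * 100.0

    early, delayed, mediastinum = 120.0, 50.0, 44.0   # hypothetical mean pixel counts
    print(round(heart_to_mediastinum(early, mediastinum), 2))   # 2.73
    print(round(washout_rate(early, delayed), 1))               # 58.3
    print(round(decay_corrected_washout_rate(early, delayed), 1))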
[ null, null, "methods", "methods", "methods", null, "discussion" ]
[ "Background", "Material and Methods", "Subjects", "Radionuclide studies", "Data analysis", "Statistical analysis", "Results", "Patients’ characteristics and laboratory findings", "Scintigraphic study", "Parameters from 99mTc-sestamibi images and peak cardiac enzymes", "Discussion", "Study limitation", "Clinical implications", "Conclusions" ]
[ "Creatinine kinase (CK) and cardiac troponin is used for diagnostic evaluation of myocardial damage in patients with acute myocardial infarction (AMI). CK is an established noninvasive measure of myocardial infarct size and severity [1] and has been accepted as a reliable prognostic marker in AMI patients undergoing primary percutaneous coronary intervention (PCI) [2]. Although early reperfusion phenomena strongly influence CK release [3], the peak values of cardiac biomarkers, including CK and the MB isoenzyme of CK (CK-MB), predict the prognosis of AMI patients after PCI [2,4]. In general, contrast-enhanced magnetic resonance imaging [5,6] and nuclear cardiology techniques [1,7,8] evaluate the association between cardiac biomarkers and myocardial damage.\nSince 1990, single-photon emission computed tomography (SPECT) with technetium-99m hexakis 2-methoxy-isobutyl-isonitrile (99mTc-sestamibi) has been widely used to assess myocardial damage at rest after the onset of AMI [9]. 99mTc-sestamibi is known to exhibit the phenomenon of reverse redistribution, so-called washout, in patients with AMI after PCI [10–12]. Myocardial scintigraphy with 123I-metaiodobenzylguanidine is a standard method of rendering early and delayed images; nuclide tracer washout is the gold standard for evaluating cardiac function [13]. However, washout of 99mTc-sestamibi is still a matter of debate; the significance of washout has not been fully elucidated in AMI patients after primary PCI. No previous study has demonstrated the significance of cardiac biomarkers and washout of 99mTc-sestamibi for the evaluation of cardiac damage in AMI patients. Accordingly, the present study was designed to clarify the association between the washout rate (WR) of 99mTc-sestamibi determined from myocardial scintigraphic images and cardiac enzymes in AMI patients after PCI.", "[SUBTITLE] Subjects [SUBSECTION] This study population comprised 56 consecutive patients (mean age 65.8 years) who had suffered their first AMI. On arrival at the emergency department, venous blood was withdrawn from the cubital vein. AMI was diagnosed by cardiologists on the basis of the symptoms, electrocardiographic changes, echocardiograms, detection of human heart fatty acid binding protein by immunochromatography, and by hematological findings, including CK and CK-MB. In order to determine the actual onset time, cardiologists conducted interviews with the patients and family members.\nTable 1 shows detailed patient characteristics. After the diagnosis of AMI, the patients were immediately transferred to the cardiac catheter laboratory for emergent cardiac catheterization. In 26 patients, the culprit lesion was located in the right coronary artery (RCA); in 24 in the left anterior descending coronary artery (LAD); and in 6 patients in the left circumflex coronary artery (LCx). Thrombus aspiration catheters were used to cross the occluding lesions; follow-up coronary angiography was performed during PCI. Blood samples were collected every 3 h after PCI to determine the peak values of cardiac enzymes. Those patients with AMI, who received PCI followed with conventional drugs, experienced no worsening of symptoms and required no hospitalization due to AMI-related complications, either before or after the scintigraphic examinations.\nThe Human Investigation Committee of St. Marianna University School of Medicine approved the study protocol. 
Written informed consent was obtained from each patient before the study.\nThis study population comprised 56 consecutive patients (mean age 65.8 years) who had suffered their first AMI. On arrival at the emergency department, venous blood was withdrawn from the cubital vein. AMI was diagnosed by cardiologists on the basis of the symptoms, electrocardiographic changes, echocardiograms, detection of human heart fatty acid binding protein by immunochromatography, and by hematological findings, including CK and CK-MB. In order to determine the actual onset time, cardiologists conducted interviews with the patients and family members.\nTable 1 shows detailed patient characteristics. After the diagnosis of AMI, the patients were immediately transferred to the cardiac catheter laboratory for emergent cardiac catheterization. In 26 patients, the culprit lesion was located in the right coronary artery (RCA); in 24 in the left anterior descending coronary artery (LAD); and in 6 patients in the left circumflex coronary artery (LCx). Thrombus aspiration catheters were used to cross the occluding lesions; follow-up coronary angiography was performed during PCI. Blood samples were collected every 3 h after PCI to determine the peak values of cardiac enzymes. Those patients with AMI, who received PCI followed with conventional drugs, experienced no worsening of symptoms and required no hospitalization due to AMI-related complications, either before or after the scintigraphic examinations.\nThe Human Investigation Committee of St. Marianna University School of Medicine approved the study protocol. Written informed consent was obtained from each patient before the study.\n[SUBTITLE] Radionuclide studies [SUBSECTION] All study patients underwent SPECT 2 weeks after the onset of AMI. 99mTc-sestamibi (740 MBq; FUJI FILM RI Pharma Co. Ltd., Tokyo, Japan) was injected into the left antecubital vein, and thereafter SPECT was performed twice-initially at 30 min after injection (early 99mTc-sestamibi uptake) and subsequently at 4 h after injection (delayed uptake).\nBefore performing SPECT, anterior and lateral planar images were acquired for 300 s using a gamma camera equipped with a low-/medium-energy general-purpose collimator and a 512×512 matrix. 99mTc-sestamibi images were obtained using a double-headed gamma camera (Symbia E; Siemens-Asahi Medical Technologies Ltd., Tokyo, Japan) equipped with a low-/medium-energy general-purpose collimator. Two detectors (2×180°) were used to acquire 64 views for 25 s in 5.6° steps using a 64×64 matrix. The energy window of 99mTc was centered at 140 keV ±15%.\nRaw imaging data were reconstructed using Butterworth-filtered (order, 8; cut-off frequency, 0.20 cycles/pixel) back-projection. Transaxial slices were reconstructed and reoriented to represent coronal slices, and then horizontal long- and short-axis slices were produced by axis shift.\nStandard electrocardiographically gated images were acquired in 64 steps at 19 s per step, using the step acquisition mode in RR interval, and divided into 16 frames. Tracer uptake was assessed by non-gated early images created from the sums of all of the gated images obtained in standard acquisition mode.\nAll study patients underwent SPECT 2 weeks after the onset of AMI. 99mTc-sestamibi (740 MBq; FUJI FILM RI Pharma Co. 
Ltd., Tokyo, Japan) was injected into the left antecubital vein, and thereafter SPECT was performed twice-initially at 30 min after injection (early 99mTc-sestamibi uptake) and subsequently at 4 h after injection (delayed uptake).\nBefore performing SPECT, anterior and lateral planar images were acquired for 300 s using a gamma camera equipped with a low-/medium-energy general-purpose collimator and a 512×512 matrix. 99mTc-sestamibi images were obtained using a double-headed gamma camera (Symbia E; Siemens-Asahi Medical Technologies Ltd., Tokyo, Japan) equipped with a low-/medium-energy general-purpose collimator. Two detectors (2×180°) were used to acquire 64 views for 25 s in 5.6° steps using a 64×64 matrix. The energy window of 99mTc was centered at 140 keV ±15%.\nRaw imaging data were reconstructed using Butterworth-filtered (order, 8; cut-off frequency, 0.20 cycles/pixel) back-projection. Transaxial slices were reconstructed and reoriented to represent coronal slices, and then horizontal long- and short-axis slices were produced by axis shift.\nStandard electrocardiographically gated images were acquired in 64 steps at 19 s per step, using the step acquisition mode in RR interval, and divided into 16 frames. Tracer uptake was assessed by non-gated early images created from the sums of all of the gated images obtained in standard acquisition mode.\n[SUBTITLE] Data analysis [SUBSECTION] Regions of interest (ROIs) were drawn over the entire heart and upper mediastinum depicted in the planar images. The H/M ratio and global WR of 99mTc-sestamibi were calculated from the pixel counts in the ROIs using the following equations: H/M = mean pixel count of cardiac ROI/mean pixel count of mediastinal ROI; and WR (%) = [(mean early cardiac pixel count – mean delayed cardiac pixel count)/mean early cardiac pixel count] × 100. Backgrounds and time-decay corrections were not applied to the calculation of WR.\nOnce a SPECT image was acquired and reconstructed from an early image, quantitative gated SPECT (QGS) software (Cedars-Sinai Medical Center, Los Angeles, CA) was used to calculate the ventricular edges and evaluate the left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) [14].\nRegions of interest (ROIs) were drawn over the entire heart and upper mediastinum depicted in the planar images. The H/M ratio and global WR of 99mTc-sestamibi were calculated from the pixel counts in the ROIs using the following equations: H/M = mean pixel count of cardiac ROI/mean pixel count of mediastinal ROI; and WR (%) = [(mean early cardiac pixel count – mean delayed cardiac pixel count)/mean early cardiac pixel count] × 100. Backgrounds and time-decay corrections were not applied to the calculation of WR.\nOnce a SPECT image was acquired and reconstructed from an early image, quantitative gated SPECT (QGS) software (Cedars-Sinai Medical Center, Los Angeles, CA) was used to calculate the ventricular edges and evaluate the left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) [14].\n[SUBTITLE] Statistical analysis [SUBSECTION] The results are expressed as the means ±SD. The significance of differences among different coronary territories was assessed by one-way analysis of variance (ANOVA). Parameters in the early and delayed phases in the same patient were compared using a paired t test. 
Liner regression analysis was used to evaluate the significance of peak values of cardiac enzymes and the values obtained from 99mTc-sestamibi myocardial scintigraphic images. P value of less than 0.05 was considered to indicate statistical significance.\nThe results are expressed as the means ±SD. The significance of differences among different coronary territories was assessed by one-way analysis of variance (ANOVA). Parameters in the early and delayed phases in the same patient were compared using a paired t test. Liner regression analysis was used to evaluate the significance of peak values of cardiac enzymes and the values obtained from 99mTc-sestamibi myocardial scintigraphic images. P value of less than 0.05 was considered to indicate statistical significance.", "This study population comprised 56 consecutive patients (mean age 65.8 years) who had suffered their first AMI. On arrival at the emergency department, venous blood was withdrawn from the cubital vein. AMI was diagnosed by cardiologists on the basis of the symptoms, electrocardiographic changes, echocardiograms, detection of human heart fatty acid binding protein by immunochromatography, and by hematological findings, including CK and CK-MB. In order to determine the actual onset time, cardiologists conducted interviews with the patients and family members.\nTable 1 shows detailed patient characteristics. After the diagnosis of AMI, the patients were immediately transferred to the cardiac catheter laboratory for emergent cardiac catheterization. In 26 patients, the culprit lesion was located in the right coronary artery (RCA); in 24 in the left anterior descending coronary artery (LAD); and in 6 patients in the left circumflex coronary artery (LCx). Thrombus aspiration catheters were used to cross the occluding lesions; follow-up coronary angiography was performed during PCI. Blood samples were collected every 3 h after PCI to determine the peak values of cardiac enzymes. Those patients with AMI, who received PCI followed with conventional drugs, experienced no worsening of symptoms and required no hospitalization due to AMI-related complications, either before or after the scintigraphic examinations.\nThe Human Investigation Committee of St. Marianna University School of Medicine approved the study protocol. Written informed consent was obtained from each patient before the study.", "All study patients underwent SPECT 2 weeks after the onset of AMI. 99mTc-sestamibi (740 MBq; FUJI FILM RI Pharma Co. Ltd., Tokyo, Japan) was injected into the left antecubital vein, and thereafter SPECT was performed twice-initially at 30 min after injection (early 99mTc-sestamibi uptake) and subsequently at 4 h after injection (delayed uptake).\nBefore performing SPECT, anterior and lateral planar images were acquired for 300 s using a gamma camera equipped with a low-/medium-energy general-purpose collimator and a 512×512 matrix. 99mTc-sestamibi images were obtained using a double-headed gamma camera (Symbia E; Siemens-Asahi Medical Technologies Ltd., Tokyo, Japan) equipped with a low-/medium-energy general-purpose collimator. Two detectors (2×180°) were used to acquire 64 views for 25 s in 5.6° steps using a 64×64 matrix. The energy window of 99mTc was centered at 140 keV ±15%.\nRaw imaging data were reconstructed using Butterworth-filtered (order, 8; cut-off frequency, 0.20 cycles/pixel) back-projection. 
Transaxial slices were reconstructed and reoriented to represent coronal slices, and then horizontal long- and short-axis slices were produced by axis shift.\nStandard electrocardiographically gated images were acquired in 64 steps at 19 s per step, using the step acquisition mode in RR interval, and divided into 16 frames. Tracer uptake was assessed by non-gated early images created from the sums of all of the gated images obtained in standard acquisition mode.", "Regions of interest (ROIs) were drawn over the entire heart and upper mediastinum depicted in the planar images. The H/M ratio and global WR of 99mTc-sestamibi were calculated from the pixel counts in the ROIs using the following equations: H/M = mean pixel count of cardiac ROI/mean pixel count of mediastinal ROI; and WR (%) = [(mean early cardiac pixel count – mean delayed cardiac pixel count)/mean early cardiac pixel count] × 100. Backgrounds and time-decay corrections were not applied to the calculation of WR.\nOnce a SPECT image was acquired and reconstructed from an early image, quantitative gated SPECT (QGS) software (Cedars-Sinai Medical Center, Los Angeles, CA) was used to calculate the ventricular edges and evaluate the left ventricular end-diastolic volume (LVEDV), left ventricular end-systolic volume (LVESV), and left ventricular ejection fraction (LVEF) [14].", "The results are expressed as the means ±SD. The significance of differences among different coronary territories was assessed by one-way analysis of variance (ANOVA). Parameters in the early and delayed phases in the same patient were compared using a paired t test. Liner regression analysis was used to evaluate the significance of peak values of cardiac enzymes and the values obtained from 99mTc-sestamibi myocardial scintigraphic images. P value of less than 0.05 was considered to indicate statistical significance.", "[SUBTITLE] Patients’ characteristics and laboratory findings [SUBSECTION] PCI was performed 9.4±6.0 h after the actual onset of AMI. All patients received an appropriate PCI; in 51 (91%) patients, aspiration catheters successfully penetrated the occluding lesions and were followed by conventional PCI. The remaining 5 patients received PCI without aspiration catheters. Thrombolysis in myocardial infarction (TIMI) [15] grade 3 flow was observed in all patients after PCI.\nThere were no differences in the age, BMI, cardiac enzymes on admission, time from the onset to revascularization, and peak cardiac enzymes, even though the patients had culprit lesions located in different arteries (Table 1). The CK and CK-MB levels on admission were 410.6±1318.0 IU/L and 39.8±198.1 IU/L, and the peak CK and CK-MB levels were 2689.6±1167.4 IU/L (15.3±4.6 h) and 274.1±169.4 IU/L (13.5±3.9 h), respectively.\nAfter 200mg of acetylsalicylic acid and 300 mg of clopidgrel sulfate administration as a loading dose, all patients received 100 mg of acetylsalicylic acid and 75 mg of clopidogrel sulfate after PCI as a maintenance dose. Most of the patients were treated with angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, β-blocker, or some type of statin in order to prevent secondary cardiac events and deterioration of cardiac function. Seven patients who received statin were not able to continue the administration due to muscle ache.\nPCI was performed 9.4±6.0 h after the actual onset of AMI. 
All patients received an appropriate PCI; in 51 (91%) patients, aspiration catheters successfully penetrated the occluding lesions and were followed by conventional PCI. The remaining 5 patients received PCI without aspiration catheters. Thrombolysis in myocardial infarction (TIMI) [15] grade 3 flow was observed in all patients after PCI.\nThere were no differences in the age, BMI, cardiac enzymes on admission, time from the onset to revascularization, and peak cardiac enzymes, even though the patients had culprit lesions located in different arteries (Table 1). The CK and CK-MB levels on admission were 410.6±1318.0 IU/L and 39.8±198.1 IU/L, and the peak CK and CK-MB levels were 2689.6±1167.4 IU/L (15.3±4.6 h) and 274.1±169.4 IU/L (13.5±3.9 h), respectively.\nAfter 200mg of acetylsalicylic acid and 300 mg of clopidgrel sulfate administration as a loading dose, all patients received 100 mg of acetylsalicylic acid and 75 mg of clopidogrel sulfate after PCI as a maintenance dose. Most of the patients were treated with angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, β-blocker, or some type of statin in order to prevent secondary cardiac events and deterioration of cardiac function. Seven patients who received statin were not able to continue the administration due to muscle ache.\n[SUBTITLE] Scintigraphic study [SUBSECTION] The early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/Ms were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCx culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01). The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2).\nThe Left Ventricular End Diastolic Volume (LVEDV), Left Ventricular End Systolic Volume (LVESV), and Left Ventricular Ejection Fraction (LVEF) were 97.9±27.2 ml, 51.9±22.0 ml, and 48.8±10.0%, respectively (Table 2). No significant difference was observed among the patients with regard to the occluded artery.\nThe early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/Ms were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCx culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01). The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2).\nThe Left Ventricular End Diastolic Volume (LVEDV), Left Ventricular End Systolic Volume (LVESV), and Left Ventricular Ejection Fraction (LVEF) were 97.9±27.2 ml, 51.9±22.0 ml, and 48.8±10.0%, respectively (Table 2). 
No significant difference was observed among the patients with regard to the occluded artery.\n[SUBTITLE] Parameters from 99mTc-sestamibi images and peak cardiac enzymes [SUBSECTION] Figure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=−0.32, p=0.015) and peak CK-MB (r=−0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=−0.32, p=0.017) and peak CK-MB (r=−0.34, p=0.012).\nFigure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=−0.32, p=0.015) and peak CK-MB (r=−0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=−0.32, p=0.017) and peak CK-MB (r=−0.34, p=0.012).", "PCI was performed 9.4±6.0 h after the actual onset of AMI. All patients received an appropriate PCI; in 51 (91%) patients, aspiration catheters successfully penetrated the occluding lesions and were followed by conventional PCI. The remaining 5 patients received PCI without aspiration catheters. Thrombolysis in myocardial infarction (TIMI) [15] grade 3 flow was observed in all patients after PCI.\nThere were no differences in the age, BMI, cardiac enzymes on admission, time from the onset to revascularization, and peak cardiac enzymes, even though the patients had culprit lesions located in different arteries (Table 1). The CK and CK-MB levels on admission were 410.6±1318.0 IU/L and 39.8±198.1 IU/L, and the peak CK and CK-MB levels were 2689.6±1167.4 IU/L (15.3±4.6 h) and 274.1±169.4 IU/L (13.5±3.9 h), respectively.\nAfter 200mg of acetylsalicylic acid and 300 mg of clopidgrel sulfate administration as a loading dose, all patients received 100 mg of acetylsalicylic acid and 75 mg of clopidogrel sulfate after PCI as a maintenance dose. Most of the patients were treated with angiotensin-converting enzyme inhibitor or angiotensin receptor blocker, β-blocker, or some type of statin in order to prevent secondary cardiac events and deterioration of cardiac function. Seven patients who received statin were not able to continue the administration due to muscle ache.", "The early and delayed H/Ms and WR of 99mTc-sestamibi were 2.74±0.58, 3.00±0.70, and 58.7±10.0%, respectively. The early and delayed H/Ms were significantly lower in the patients with an LAD culprit lesion (2.59±0.36 and 2.70±0.41, respectively) than in those with an LCx culprit lesion (2.96±0.42, p=0.037; and 3.27±0.64, p=0.01) or an RCA culprit lesion (2.84±0.43, p=0.040; and 3.21±0.49, p<0.01). The global WR of 99mTc-sestamibi was significantly accelerated in the patients with an LAD culprit lesion compared with those with an RCA culprit lesion (61.1±6.6% vs. 56.4±4.5%, p<0.01). A significant difference in the corrected WR was found between the patients with an LAD culprit lesion and those with an RCA culprit lesion (p<0.01; Table 2).\nThe Left Ventricular End Diastolic Volume (LVEDV), Left Ventricular End Systolic Volume (LVESV), and Left Ventricular Ejection Fraction (LVEF) were 97.9±27.2 ml, 51.9±22.0 ml, and 48.8±10.0%, respectively (Table 2). 
No significant difference was observed among the patients with regard to the occluded artery.", "Figure 1 shows the association between the parameters obtained from the 99mTc-sestamibi planar images and cardiac enzymes. Although the early H/M was not correlated with the peak CK or peak CK-MB, the delayed H/M was correlated with the peak CK (r=−0.32, p=0.015) and peak CK-MB (r=−0.37, p=0.005). The WR of 99mTc-sestamibi was also correlated with the peak CK (r=−0.32, p=0.017) and peak CK-MB (r=−0.34, p=0.012).", "The present study demonstrated several significant aspects of 99mTc-sestamibi planar imaging for the assessment of cardiac damage in AMI patients after primary PCI. Firstly, the delayed uptake was negatively correlated with the peak values of CK and CK-MB. Secondly, the WR was positively correlated with the peak values of these cardiac enzymes. These results suggest that 99mTc-sestamibi imaging reflects injured myocardium. Since 99mTc-sestamibi WR indicates injured but viable myocardium, 99mTc-sestamibi imaging in the subacute phase of AMI may provide additional clinical information.\nThe association among peak CK, infarct size, and mortality was demonstrated in the 1970s [16,17]. After the importance of measuring CK in AMI patients was recognized, various studies were conducted to confirm the efficacy of peak CK. One study reported that CK-MB elevation without concomitant CK elevation is associated with a worse prognosis [18]. Although it has been suggested that CK-MB overestimates infarct size after reperfusion [19], a recent study has reported that peak CK and CK-MB are still related to mortality and infarct size in AMI patients with TIMI grade 3 flow after PCI. According to the guidelines described by Alpert et al [20], cardiac troponin is considered the sensitive marker of choice, and is more sensitive than CK and CK-MB. However, Tzivoni et al. [8] have demonstrated that peak troponin T is as accurate as peak CK, and CK-MB are as accurate as troponin T in estimating infarct size. Thus, in the present study, we determined peak CK and CK-MB and conducted a serological study for detecting cardiac damage.\n99mTc-sestamibi is widely used as a myocardial perfusion tracer. The myocardial uptake mechanism of 99mTc-sestamibi depends on the passive distribution across the plasma and mitochondrial membranes in response to a transmembrane electrochemical gradient [21,22]; approximately 90% of its activity in vivo is associated with mitochondria [23]. Fundamental studies have reported a close relationship between mitochondrial function and retention of 99mTc-sestamibi in the myocardium. Crane et al. [24] have demonstrated that loss of mitochondrial metabolic function is related to 99mTc-sestamibi release from the myocardium. Another study reported that 99mTc-sestamibi uptake and retention are inhibited in a cultured chick myocyte model when mitochondrial membrane potential is depolarized [25]. Under ischemic conditions and during reperfusion, reactive oxygen species produced by endothelial cells induce the release of phagocytes in the myocardium, which leads to mitochondrial dysfunction [26]. Mitochondrial dysfunction may alter the mitochondrial membrane potential and impair myocardial retention of 99mTc-sestamibi.\nIn the present study, the delayed H/M and WR were correlated with the peak CK and CK-MB. 
A 99mTc-sestamibi kinetics study [27] has demonstrated a significant correlation between 99mTc-sestamibi activity after reperfusion and peak CK release in ischemic-reperfused rat heart models, which is consistent with the results of our study. Since peak CK represents the extent of myocardial injury, the 99mTc-sestamibi delayed H/M and WR are probably good markers of ischemic-damaged myocardium. In contrast, the early H/M of 99mTc-sestamibi was uncorrelated with peak CK and peak CK-MB. Weinstein et al. [28] reported that the initial uptake of 99mTc-sestamibi predominantly reflects coronary blood flow in a rabbit heart model of coronary occlusion. The early H/M of 99mTc-sestamibi reflects reperfused myocardial perfusion achieved by primary PCI, which may explain why it was uncorrelated with peak CK and CK-MB. In studies conducted on ischemic patients, Takeishi et al. [10] reported that the WR of 99mTc-sestamibi after direct percutaneous transluminal coronary angioplasty was associated with infarct-related area and preserved left ventricular function. Fujiwara et al. [12] compared the WR of 99mTc-sestamibi with contractile reserve wall motion evaluated by low-dose dobutamine echocardiography. They concluded that the enhancement of 99mTc-sestamibi WR was related to the reversible functional abnormalities indicated by the dobutamine-responsive contractile reserve. These results suggest that the WR of 99mTc-sestamibi is associated with an ischemic-damaged but viable myocardium. In non-ischemic patients, Kumita et al. [29] reported that the WR of 99mTc-sestamibi, which was related to left ventricular function, was higher in patients with chronic heart failure than in controls. Matsuo et al. [30] analyzed left ventricular systolic and diastolic function in patients with dilated cardiomyopathy and demonstrated a positive correlation between the WR of 99mTc-sestamibi and the plasma BNP level. They also suggested that the WR of 99mTc-sestamibi might provide prognostic information in chronic heart failure patients because the incidence of cardiac events was higher in such patients with higher 99mTc-sestamibi WR. These non-ischemic heart disease studies also suggest that the WR of 99mTc-sestamibi might be a reliable marker of myocardial damage.\n[SUBTITLE] Study limitation [SUBSECTION] In the present study we did not evaluate the regional WR of 99mTc-sestamibi in association with the culprit region. Furthermore, this study was not a controlled trial. AMI patients who did not receive PCI should be included as controls in future studies. In a study of non-ischemic patients, it was reported that the WR of 99mTc-sestamibi might provide prognostic information. However, the prognostic values of H/M and WR in patients with ischemic cardiac disease remain unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as an incremental prognostic indicator.
\n[SUBTITLE] Clinical implications [SUBSECTION] 99mTc-sestamibi imaging is an objective tool for assessing myocardial damage and viability. The present study showed that 99mTc-sestamibi planar imaging may be useful for the assessment of cardiac damage in AMI patients. Since the WR of 99mTc-sestamibi (after PCI) is associated with infarcted myocardium (but with preserved left ventricular function), increased WR might predict the improvement of left ventricular wall motion in the chronic phase. Thus, 99mTc-sestamibi imaging after PCI may provide additional clinical information. Follow-up studies with larger numbers of patients are needed to confirm the usefulness of 99mTc-sestamibi images in AMI patients.", "In the present study we did not evaluate the regional WR of 99mTc-sestamibi in association with the culprit region. Furthermore, this study was not a controlled trial. AMI patients who did not receive PCI should be included as controls in future studies. In a study of non-ischemic patients, it was reported that the WR of 99mTc-sestamibi might provide prognostic information. However, the prognostic values of H/M and WR in patients with ischemic cardiac disease remain unknown. Further investigations with larger numbers of patients should be conducted to evaluate the potential use of 99mTc-sestamibi as an incremental prognostic indicator.", "99mTc-sestamibi imaging is an objective tool for assessing myocardial damage and viability. The present study showed that 99mTc-sestamibi planar imaging may be useful for the assessment of cardiac damage in AMI patients. Since the WR of 99mTc-sestamibi (after PCI) is associated with infarcted myocardium (but with preserved left ventricular function), increased WR might predict the improvement of left ventricular wall motion in the chronic phase. Thus, 99mTc-sestamibi imaging after PCI may provide additional clinical information. Follow-up studies with larger numbers of patients are needed to confirm the usefulness of 99mTc-sestamibi images in AMI patients.", "These results suggest that the WR determined from 99mTc-sestamibi myocardial scintigraphic images could reflect the extent of myocardial damage in AMI patients after PCI. This study also demonstrates the significance of taking 99mTc-sestamibi myocardial scintigraphic images at 2 different time points. Further studies are necessary to confirm these results." ]
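The early/delayed H/M ratios and the WR quoted in these results are simple ratios of region-of-interest counts from the two planar acquisitions. As a rough, purely illustrative sketch of that arithmetic (the article's actual ROI placement, background handling and decay-correction scheme are described in its Methods, which are not part of this excerpt, and the count values below are made up), one could write:

```python
import math

TC99M_HALF_LIFE_H = 6.01  # physical half-life of 99mTc in hours

def heart_to_mediastinum(heart_counts, mediastinum_counts):
    """Mean-count ratio of the cardiac ROI to the mediastinal ROI (H/M)."""
    return heart_counts / mediastinum_counts

def washout_rate(early_heart, early_med, delayed_heart, delayed_med,
                 interval_h, decay_correct=False):
    """Percent loss of background-corrected cardiac counts between the early
    and delayed acquisitions; delayed counts may be corrected for 99mTc decay.
    This is one common convention, not necessarily the one used in the paper."""
    early_net = early_heart - early_med
    delayed_net = delayed_heart - delayed_med
    if decay_correct:
        delayed_net *= 2 ** (interval_h / TC99M_HALF_LIFE_H)
    return (early_net - delayed_net) / early_net * 100.0

# Arbitrary illustrative counts, 3 h between the two acquisitions
early_hm = heart_to_mediastinum(137.0, 50.0)   # ~2.74
delayed_hm = heart_to_mediastinum(90.0, 30.0)  # ~3.00
wr = washout_rate(137.0, 50.0, 90.0, 30.0, interval_h=3.0)
print(f"early H/M={early_hm:.2f}, delayed H/M={delayed_hm:.2f}, WR={wr:.1f}%")
```

The numbers above do not reproduce the cohort values; they only show why the H/M ratio can rise between acquisitions while the background-corrected cardiac counts (and hence the WR) still indicate tracer loss.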
[ null, "materials|methods", "subjects", null, "methods", "methods", "results", "intro|subjects|results", "methods", null, "discussion", "methods", "discussion", "conclusions" ]
[ "acute myocardial infarction", "creatinine kinase", "99mTc-sestamibi", "washout rate" ]
Prognosis and cardiovascular morbidity and mortality in prospective study of hypertensive patients with obstructive sleep apnea syndrome in St Petersburg, Russia.
21358601
To assess the impact of obstructive sleep apnea-hypopnea syndrome (OSAHS) on prognosis and cardiovascular morbidity and mortality in relation to other major cardiovascular risk factors.
BACKGROUND
This prospective study recruited 234 patients from an out-patient clinic. Based on the Berlin questionnaire, 147 patients (90 males, mean age 52.1 ± 10.4 years) with highly suspected sleep breathing disorders were included in the study. Based on cardiorespiratory monitoring, patients were divided into 2 groups: 42 patients without sleep breathing disorders (SBD), and 105 patients with OSAHS. Among these, 12 patients started CPAP therapy and formed the third group.
MATERIAL/METHODS
The mean follow-up period was 46.4 ± 14.3 months. Event-free survival was lowest in the untreated OSAHS patients (log rank test 6.732, p = 0.035). In the non-adjusted regression model, OSAHS was also associated with a higher risk of cardiovascular events (OR = 8.557, 95% CI 1.142-64.131, p = 0.037). OSAHS patients demonstrated higher rates of hospitalization compared to the control group without SBD (OR 2.750, 95%CI 1.100-6.873, p = 0.04).
RESULTS
OSAHS hypertensive patients, and in particular, according to our model, patients with severe OSAHS (AHI ≥ 30/h), are at higher risk of fatal and non-fatal cardiovascular events. Moreover, untreated OSAHS patients demonstrate higher rates of hospitalization caused by the onset or deterioration of cardiovascular disease.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Antihypertensive Agents", "Female", "Follow-Up Studies", "Humans", "Hypertension", "Male", "Middle Aged", "Morbidity", "Prognosis", "Prospective Studies", "Russia", "Sleep Apnea, Obstructive", "Survival Analysis", "Syndrome" ]
3524738
null
null
Assessments and study groups
All patients completed a baseline questionnaire to collect data about personal and medical history, heredity, and lifestyle. Every patient underwent physical examination, including measurement of anthropometric parameters (height, weight, body mass index; waist, hip and neck circumferences) and vital signs (HR and blood pressure, BP). Patients who smoked 1 or more cigarettes per week were considered smokers. Patients were considered to be alcohol-users if they consumed 3 or more units of alcohol per week. Three or more sessions of aerobic exercise (30 minutes or longer) per week were considered to be the normal level of physical activity. All patients underwent cardiorespiratory monitoring using the Embletta Pds device (Embla, USA) and the following parameters were recorded: snore, nasal flow, thoracic and abdominal excursions, body position, blood oxygen saturation, and heart rate. All recordings were evaluated manually by a specialist in polysomnography. Apnea was defined as an episode of airflow cessation lasting for at least 10 seconds. Hypopnea was defined as an episode of a decrease in airflow of more than 50% compared to baseline, lasting for ≥10 seconds, combined with a decrease in oxygen saturation of ≥4% [14]. Sleep apnea/hypopnea syndrome was diagnosed if the patient had 5 or more events of apnea and hypopnea per hour of sleep (apnea-hypopnea index, AHI). Apnea was classified as obstructive if simultaneous excursions of the thorax and abdomen were registered, and was classified as central if no respiratory muscle effort was observed. Sleep apnea was classified as mild if the patient had 5–14.9 apnea or hypopnea events per hour, as moderate if AHI was 15–29.9 per hour, and severe if AHI was ≥30 per hour of sleep session. Based on the results of the sleep study, patients were divided into 2 groups (Figure 1), with baseline characteristics shown in Table 1. Forty-two patients without sleep breathing disorders (SBD), with AHI <5 per hour [median 1.8 (0.3–4.9)], formed the control group, and 105 patients with an AHI of 5 or more episodes per hour constituted the OSAHS group (no cases of central sleep apnea were diagnosed). Twenty-seven out of 105 patients (26%) had mild OSAHS (AHI 10.9 [5.7–14.9] events per hour of sleep), 28 (27%) – moderate OSAHS (AHI 22.5 [15.4–29.3] events per hour of sleep), and 50 (47%) patients – severe OSAHS (AHI 51.0 [32.0–108.0] events per hour of sleep). The groups did not differ at baseline by age, BMI, neck circumference, hypertension stage and duration, prevalence of hyperglycemia or diabetes, thyroid pathology (thyrotropic hormone levels remained normal), smoking, alcohol use, or physical activity level (Table 1). A CPAP therapy session was offered to everyone diagnosed with OSAHS (we used auto-CPAP devices, ResMed, Australia, and Respironics, USA, and a VPAP-III device, ResMed, Australia) to evaluate its efficacy and assess the necessary therapeutic pressure. Out of 105 patients with SBD, 51 (48.6%) underwent the CPAP therapy test, but most refused to use CPAP therapy at home due to the high cost of the device (23 [45.1%] patients) and the inconvenience of the method (11 [21.6%] patients); in addition, 5 patients (9.8%) could not obtain a well-fitting mask. Twelve (23.5%) patients who started CPAP therapy at home formed the third group (Figure 1): 11 patients used auto-CPAP devices and 1 used a BiPAP device. Most patients in this group were males (75%, 9 men), and AHI was higher in this group compared to OSAHS patients who did not use CPAP. 
All patients received counseling regarding lifestyle changes, including weight loss, smoking cessation, alcohol use (the importance of not drinking alcohol later than 4 hours before sleep was emphasized) and the use of sedative medicines. A change of body position during sleep was advised to all untreated OSAHS patients. At baseline, antihypertensive therapy was prescribed to all patients, with achievement of the target blood pressure level (≤140/90 mm Hg). Baseline therapy in CPAP-treated and untreated OSAHS patients did not differ by intensity or by components (p>0.05) (Table 2).
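The apnea/hypopnea definitions and AHI severity cut-offs described above amount to a simple classification rule. The snippet below restates those thresholds in Python purely for illustration; detecting events in the raw cardiorespiratory recordings was done manually by a specialist and is not modelled here.

```python
def ahi(apnea_hypopnea_events, sleep_hours):
    """Apnea-hypopnea index: scored apnea/hypopnea events per hour of sleep."""
    return apnea_hypopnea_events / sleep_hours

def osahs_severity(ahi_value):
    """Severity grading as used in the study: <5 no sleep breathing disorder,
    5-14.9 mild, 15-29.9 moderate, >=30 severe (events per hour of sleep)."""
    if ahi_value < 5:
        return "no sleep breathing disorder"
    if ahi_value < 15:
        return "mild OSAHS"
    if ahi_value < 30:
        return "moderate OSAHS"
    return "severe OSAHS"

# Example: 364 scored events over a 7-hour recording -> AHI = 52/h -> severe
print(osahs_severity(ahi(364, 7.0)))
```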
Results
By March 2009, 8 patients from the 147 originally enrolled into the study were lost to follow-up because they had moved to another area. The mean follow-up period was 46.4±14.3 months. All 12 patients from the third group showed good compliance with the CPAP therapy, with mean nighttime device usage being 6.04±1.4 hours. Overall, 23 events (15.6%) were registered, including 13 (8.9%) deaths, of which 11 (84.6%) occurred in males. One (7.7%) fatal event (sudden cardiac death) occurred among the CPAP-treated patients, 1 death from non-cardiac disease was observed in a patient without SBD, and 11 (84.6%) fatal events, including 1 non-cardiac death, occurred in the untreated OSAHS patients (Table 3). Out of 10 non-fatal events, 5 were strokes and 4 were myocardial infarctions (MI), all observed in untreated OSAHS patients. One MI occurred in the control group. No non-fatal cardiovascular events were registered in the CPAP-treated group (Table 3). The risk of fatal/non-fatal MI did not differ significantly from the control group (OR=3.859, 95%CI 0.467–31.892; p=0.273). However, incidence of primary combined endpoint (cardiovascular death, fatal/non-fatal MI and stroke) differed significantly between the groups (P=7.145, p=0.023, Fisher exact test). The risk estimated by the Monte Carlo test was higher in untreated OSAHS patients (20.4%) than in the control group (2.4%) (OR=9.293 95% CI 1.194–72.363, p=0.012); however, no difference was found as compared to CPAP-treated patients (OR=3.727, 95% CI 0.215–64.474, p=0.398). This might be due to the small sample size of the CPAP-treated group, which made it difficult to draw any definitive conclusions. There was no discrepancy between CPAP-treated patients and the control group (OR=0.379, 95% CI 0.045–3.127, p=0.690). Event-free survival (by Kaplan-Meier method) was lowest in the untreated OSAHS patients (log rank test 6.732, p=0.035) (Figure 2). In the non-adjusted regression model, OSAHS was also associated with a higher risk of cardiovascular events (OR=8.557, 95% CI 1.142–64.131, p=0.037). The relationship between the variables (sex, age, BMI, duration of hypertension, smoking, alcohol use, physical activity level, family history of cardiovascular diseases, current coronary heart disease, glucose metabolism) and survival was assessed by Cox regression analysis (Table 4). The time-dependent model showed that only severe OSAHS was associated with poor outcome (OR=9.203 95% CI 1.176–72.002, p=0.034), while presence of moderate and mild OSAHS did not affect survival (OR=8.588 95% CI 0.999–73.82, p>0.05, and OR=4.205 95% CI 0.437–40.434, p>0.05). In the adjusted model, the impact of severe OSAHS was significant even when adjusted to the above-mentioned variables (p=0.04, Table 4). Moreover, OSAHS patients demonstrated higher rates of hospitalization compared to the control group without SBD. Thirty-three untreated OSAHS patients were hospitalized (and 12 of them required readmission), while only 7 patients from the control group required hospitalization (OR 2.75, 95% CI 1.100–6.87, p=0.04). There were 3 hospitalizations registered among CPAP-treated patients, and 1 of these patients required readmission (compared to untreated patients OR=0.606 95% CI 0.15–2.395, p=0.54). New onset of diabetes was registered in 4 untreated OSAHS patients, as well as 2 cases of glucose intolerance, compared to 2 (4.8%) cases of glucose intolerance impairment in the control group (p>0.05). 
There were no significant changes between baseline and final values of BMI (33.6±6.3 and 34.2±6.6 kg/m2 respectively, p>0.05), waist circumference (107.3±14.1 and 109.6±13.7 cm, p>0.05), and neck circumference (41.8±4.5 and 41.9±5.1 cm, p>0.05) in control group, as well as in OSAHS patients (BMI 35.4±5.2 cm at baseline and 35.3±5.8 kg/m2 by 2009, p>0.05; waist circumference, respectively, 116.9±13.4 and 117.1±15.9 cm, p>0.05; and neck circumference 43±2.7 and 44.1±3.1 cm, p>0.05). By 2009 most of OSAHS patients were on combination antihypertensive therapy (Table 2), while the majority (60%) of non-OSAHS subjects continued monotherapy (p<0.001). Thirty-six (44%) OSAHS patients compared to 5 (12%) controls took 3 or more antihypertensive drugs (p<0.05). There was also a difference in the distribution by classes of antihypertensive drugs. By 2009, 76 (93%) OSAHS patients and 32 (29%) non-SBD patients (p=0.012) took angiotensin converting enzyme inhibitors/angiotensin-2-receptor antagonists. Beta-blockers were prescribed to 63 (77%) OSAHS patients compared to 10 (24%) patients in the control group (p<0.001). By 2009, intensity of antihypertensive therapy in CPAP-treated and untreated patients with SBD was comparable. This finding might be due to the small size of the CPAP group, limiting the analysis and requiring further investigation.
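The unadjusted odds ratios reported above (e.g. OR = 8.557, 95% CI 1.142–64.131) are of the kind produced by a 2×2 contingency table or a simple regression model. As a hedged illustration of the 2×2 arithmetic only, with placeholder counts that are not the study data (the published estimates come from the authors' own models and software), one could compute:

```python
import math

def odds_ratio_ci(events_exposed, no_events_exposed,
                  events_control, no_events_control, z=1.96):
    """Odds ratio from a 2x2 table with a Woolf (log-scale) 95% CI."""
    a, b = events_exposed, no_events_exposed
    c, d = events_control, no_events_control
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log)
    upper = math.exp(math.log(or_) + z * se_log)
    return or_, lower, upper

# Placeholder counts only: 15/100 events in the exposed group vs. 2/50 in controls
print(odds_ratio_ci(15, 85, 2, 48))
```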
Conclusions
Our results confirmed that hypertensive patients with OSAHS, in particular those with severe OSAHS (AHI ≥30 episodes per hour of sleep), are at higher risk of fatal and non-fatal cardiovascular events. Moreover, we demonstrated that untreated OSAHS patients had higher rates of hospitalization caused by the onset or deterioration of cardiovascular disease, as well as a higher prevalence of resistant hypertension.
[ "Background", "Study population and design", "Follow-up and endpoints", "Statistics" ]
[ "Obstructive sleep apnea/hypopnea syndrome (OSAHS) is considered to be an independent risk factor for cardiovascular diseases [1–12]. According to data of cohort prospective [13] and large population [10] studies, the prevalence of fatal cardiovascular events is higher in patients with OSAHS [14–16]. Recent studies have shown a relationship between sleep breathing disorders and cardiovascular morbidity and mortality [15,17–19]. Peker et al. (2006) demonstrated a 5-fold increase in the risk of coronary heart disease. Marin et al. (2005) analyzed a large male cohort and concluded that patients with severe OSAHS are at higher risk of fatal and non-fatal cardiovascular events compared to healthy people and those with regular snoring. The beneficial effect of CPAP therapy was also documented [20]. However, some authors try to explain increased mortality in OSAHS patients by concomitant diseases rather than sleep breathing disorders [16]. Contributions of other cardiovascular risk factors (such as obesity and metabolic disorders) that are common for OSAHS patients, and their relation to outcome, are still not clear. None of the previous studies included a cohort of OSAHS patients from Russia.\nOur study aimed to assess the impact of OSAHS on prognosis and cardiovascular morbidity and mortality in relation to other major cardiovascular risk factors. We focused on hypertensive patients because OSAHS is known to be closely connected to the development of hypertension [21] and resistance to antihypertensive therapy [12], and in most cases in our center OSAHS is diagnosed in patients with already abnormal blood pressure levels.", "[SUBTITLE] Selection of patients [SUBSECTION] From May 2003 to March 2007 we selected 234 patients from a cohort referred to the out-patient department of Almazov Federal Heart, Blood and Endocrinology Centre with newly diagnosed or uncontrolled hypertension according to the following inclusion criteria: arterial hypertension (diagnosed if systolic blood pressure [SBP] and/or diastolic blood pressure [DBP] were 140 and 90 mmHg or higher, respectively, or if the patient was on antihypertensive therapy).\nPatients were not included if they:\n– had a concomitant significant cardiovascular pathology (coronary artery disease [angina pectoris] class II or higher), severe arrhythmia, congestive heart failure, valvular disease or cardiomyopathy;\n– had other factors or diseases predisposing to OSAHS, such as congenital and acquired (rheumatoid arthritis, etc.) anatomical changes, visceral cranium abnormalities, macroglossia, vocal fold paralysis, diseases leading to pharyngeal lymphoid tissue proliferation (Hodgkin’s lymphoma, AIDS), endocrine diseases (acromegaly, hypothyreosis), or neurological diseases (stroke, myasthenia, myotonic dystrophy, metabolic myopathy, amyotrophic lateral sclerosis, Guillain-Barré Syndrome, amyloidosis, diphtheritic, alcoholic and diabetic polyneuropathy);\n– had severe concomitant diseases (chronic liver or kidney diseases, cancer);\n– were found to have a severe cognitive deficit that could confound the sleep examination.\nFurther selection was based on the results of a sleep breathing disorder questionnaire (Berlin Questionnaire [22]), and daytime sleepiness assessment by the Epworth scale [23]. The study enrolled only patients with suspected OSAHS. 
As a result, 147 patients (90 males and 57 females) aged 23–80 years (mean age 52.1±10.4 years) were included into the study (Figure 1).\nAll recruited patients signed the informed consent after full explanation of the procedure, which complied with the Declaration of Helsinki and the ethics policies of the institutions participating in the study.\nFrom May 2003 to March 2007 we selected 234 patients from a cohort referred to the out-patient department of Almazov Federal Heart, Blood and Endocrinology Centre with newly diagnosed or uncontrolled hypertension according to the following inclusion criteria: arterial hypertension (diagnosed if systolic blood pressure [SBP] and/or diastolic blood pressure [DBP] were 140 and 90 mmHg or higher, respectively, or if the patient was on antihypertensive therapy).\nPatients were not included if they:\n– had a concomitant significant cardiovascular pathology (coronary artery disease [angina pectoris] class II or higher), severe arrhythmia, congestive heart failure, valvular disease or cardiomyopathy;\n– had other factors or diseases predisposing to OSAHS, such as congenital and acquired (rheumatoid arthritis, etc.) anatomical changes, visceral cranium abnormalities, macroglossia, vocal fold paralysis, diseases leading to pharyngeal lymphoid tissue proliferation (Hodgkin’s lymphoma, AIDS), endocrine diseases (acromegaly, hypothyreosis), or neurological diseases (stroke, myasthenia, myotonic dystrophy, metabolic myopathy, amyotrophic lateral sclerosis, Guillain-Barré Syndrome, amyloidosis, diphtheritic, alcoholic and diabetic polyneuropathy);\n– had severe concomitant diseases (chronic liver or kidney diseases, cancer);\n– were found to have a severe cognitive deficit that could confound the sleep examination.\nFurther selection was based on the results of a sleep breathing disorder questionnaire (Berlin Questionnaire [22]), and daytime sleepiness assessment by the Epworth scale [23]. The study enrolled only patients with suspected OSAHS. As a result, 147 patients (90 males and 57 females) aged 23–80 years (mean age 52.1±10.4 years) were included into the study (Figure 1).\nAll recruited patients signed the informed consent after full explanation of the procedure, which complied with the Declaration of Helsinki and the ethics policies of the institutions participating in the study.\n[SUBTITLE] Assessments and study groups [SUBSECTION] All patients completed a baseline questionnaire to collect data about personal and medical history, heredity, and lifestyle. Every patient underwent physical examination, including measurement of anthropometric parameters (height, weight, body mass index; waist, hip and neck circumferences) and vital signs (HR and blood pressure, BP). Patients who smoked 1 or more cigarettes per week were considered smokers. Patients were considered to be alcohol-users if they consumed 3 or more units of alcohol per week. Three or more sessions of aerobic exercise (30 minutes or longer) per week was considered to be the normal level of physical activity.\nAll patients underwent cardiorespiratory monitoring by Embletta Pds (Embla, USA) and the following parameters were recorded: snore, nasal flow, thoracic and abdominal excursions, body position, blood oxygen saturation, and heart rate. All recordings were evaluated manually by a specialist in polysomnography. Apnea was defined as an episode of airflow cessation lasting for at least 10 seconds. 
Hypopnea was defined as an episode of a decrease in air-flow of more than 50% compared to baseline, lasting for ≥10 seconds, combined with a decrease in oxygen saturation of ≥4% [14]. Sleep apnea/hypopnea syndrome was diagnosed if the patient had 5 or more events of apnea and hypopnea per hour of sleep (apnea-hypopnea index, AHI). Apnea was classified as obstructive if simultaneous excursions of the thorax and abdomen were registered, and was classified as central if no respiratory muscle effort was observed. Sleep apnea was classified as mild if the patient had 5–14.9 apnea or hypopnea events per hour, as moderate if AHI was 15–29.9 per hour, and severe if AHI was ≥30 per hour of sleep session.\nBased on the results of the sleep study, patients were divided into 2 groups (Figure 1), with baseline characteristics shown in Table 1. Forty-two patients without sleep breathing disorders (SBD), with AHI <5 per hour [median 1.8 (0.3–4.9)], formed the control group, and 105 patients with AHI 5 and more episodes per hour constituted the OSAHS group (no cases of central sleep apnea were diagnosed). Twenty-seven out of 105 patients (26%) had mild OSAHS (AHI 10.9 [5.7–14.9] events per hour of sleep), 28 (27%) – moderate OSAHS (AHI 22.5 [15.4–29.3] events per hour of sleep), and 50 (47%) patients – severe OSAHS 9AHI 51.0 [32.0–108.0] events per hour of sleep).\nThe groups did not differ by age, BMI, neck circumference, hypertension stage and duration, by the prevalence of hyperglycemia or diabetes, thyroid pathology (thyreotrophic hormone level remained normal), smoking, alcohol use and by physical activity level at baseline (Table 1).\nA CPAP therapy session was offered (we used Auto-CPAP-devices ResMed, Australia; Respironics, USA, and ViPAP-III-device, ResMed, Australia) to anyone diagnosed with OSAHS to evaluate its efficacy and assess the necessary therapeutic pressure. Out of 105 patients with SBD, 51 (48.6%) underwent the CPAP therapy test, but most refused to use CPAP therapy at home due to the high cost of the device (23 [45.1%] patients) and the inconvenience of the method (911 [21.6%] patients); in addition, 5 patients (9.8%) failed to get a fitting mask. Twelve (23.5%) patients who started CPAP therapy at home formed the third group (Figure 1): 11 patients used autoCPAP-devices and 1 subject used a BiPAP-device. Most patients in this group were males (75%, 9 men), and AHI was higher in this group compared to OSAHS patients who did not use a CPAP.\nAll patients obtained counseling regarding lifestyle changes including weight loss, smoking cessation, alcohol use (the importance of drinking alcohol not later than 4 hours before sleep was emphasized) and use of sedative medicines. Change of body position during sleep was advised to all untreated OSAHS patients.\nAt baseline, antihypertensive therapy was prescribed to all patients with achievement of target blood pressure level (≤140/90 mm Hg). Baseline therapy in CPAP-treated and untreated OSAHS patients did not differ by intensity or by components (p>0.05) (Table 2).\nAll patients completed a baseline questionnaire to collect data about personal and medical history, heredity, and lifestyle. Every patient underwent physical examination, including measurement of anthropometric parameters (height, weight, body mass index; waist, hip and neck circumferences) and vital signs (HR and blood pressure, BP). Patients who smoked 1 or more cigarettes per week were considered smokers. 
Patients were considered to be alcohol-users if they consumed 3 or more units of alcohol per week. Three or more sessions of aerobic exercise (30 minutes or longer) per week was considered to be the normal level of physical activity.\nAll patients underwent cardiorespiratory monitoring by Embletta Pds (Embla, USA) and the following parameters were recorded: snore, nasal flow, thoracic and abdominal excursions, body position, blood oxygen saturation, and heart rate. All recordings were evaluated manually by a specialist in polysomnography. Apnea was defined as an episode of airflow cessation lasting for at least 10 seconds. Hypopnea was defined as an episode of a decrease in air-flow of more than 50% compared to baseline, lasting for ≥10 seconds, combined with a decrease in oxygen saturation of ≥4% [14]. Sleep apnea/hypopnea syndrome was diagnosed if the patient had 5 or more events of apnea and hypopnea per hour of sleep (apnea-hypopnea index, AHI). Apnea was classified as obstructive if simultaneous excursions of the thorax and abdomen were registered, and was classified as central if no respiratory muscle effort was observed. Sleep apnea was classified as mild if the patient had 5–14.9 apnea or hypopnea events per hour, as moderate if AHI was 15–29.9 per hour, and severe if AHI was ≥30 per hour of sleep session.\nBased on the results of the sleep study, patients were divided into 2 groups (Figure 1), with baseline characteristics shown in Table 1. Forty-two patients without sleep breathing disorders (SBD), with AHI <5 per hour [median 1.8 (0.3–4.9)], formed the control group, and 105 patients with AHI 5 and more episodes per hour constituted the OSAHS group (no cases of central sleep apnea were diagnosed). Twenty-seven out of 105 patients (26%) had mild OSAHS (AHI 10.9 [5.7–14.9] events per hour of sleep), 28 (27%) – moderate OSAHS (AHI 22.5 [15.4–29.3] events per hour of sleep), and 50 (47%) patients – severe OSAHS 9AHI 51.0 [32.0–108.0] events per hour of sleep).\nThe groups did not differ by age, BMI, neck circumference, hypertension stage and duration, by the prevalence of hyperglycemia or diabetes, thyroid pathology (thyreotrophic hormone level remained normal), smoking, alcohol use and by physical activity level at baseline (Table 1).\nA CPAP therapy session was offered (we used Auto-CPAP-devices ResMed, Australia; Respironics, USA, and ViPAP-III-device, ResMed, Australia) to anyone diagnosed with OSAHS to evaluate its efficacy and assess the necessary therapeutic pressure. Out of 105 patients with SBD, 51 (48.6%) underwent the CPAP therapy test, but most refused to use CPAP therapy at home due to the high cost of the device (23 [45.1%] patients) and the inconvenience of the method (911 [21.6%] patients); in addition, 5 patients (9.8%) failed to get a fitting mask. Twelve (23.5%) patients who started CPAP therapy at home formed the third group (Figure 1): 11 patients used autoCPAP-devices and 1 subject used a BiPAP-device. Most patients in this group were males (75%, 9 men), and AHI was higher in this group compared to OSAHS patients who did not use a CPAP.\nAll patients obtained counseling regarding lifestyle changes including weight loss, smoking cessation, alcohol use (the importance of drinking alcohol not later than 4 hours before sleep was emphasized) and use of sedative medicines. 
Change of body position during sleep was advised to all untreated OSAHS patients.\nAt baseline, antihypertensive therapy was prescribed to all patients with achievement of target blood pressure level (≤140/90 mm Hg). Baseline therapy in CPAP-treated and untreated OSAHS patients did not differ by intensity or by components (p>0.05) (Table 2).\n[SUBTITLE] Follow-up and endpoints [SUBSECTION] Three-to-five year follow-up was initially planned. Patients’ status and compliance with medical advices were assessed twice per year by telephone; if necessary, visits to the clinic were arranged. Efficacy of CPAP therapy and the condition of the device were assessed once in 6 months during a visit to the clinic using an appropriate software package (ResScan, ResMed, Australia; EncorePro, Respironics, USA). It was considered effective if AHI was lower than 10 episodes per hour of session, and the mean nightly usage was more than 5 hours [24]. Antihypertensive therapy was assessed by clinic chart reviews and by direct patient interviews.\nThe primary endpoint was a composite of cardiovascular death, fatal/non-fatal myocardial infarction and stroke. If fatal events occurred, information about the circumstances and the date of death was obtained from the relatives, and the causes were assessed by analyzing the medical records in the medical institution and/or from death certificates. The non-fatal events were recorded by medical documentation. The secondary endpoint included hospitalization resulting from a new onset or deterioration of cardiovascular disease. Frequency of hospitalization and readmission was also assessed. The reasons for hospitalization were determined from the medical records. We also analyzed the changes in anthropometric parameters in untreated OSAHS and non-OSAHS patients – BMI, waist and neck circumferences, and changes in antihypertensive treatment.\nSleep breathing disorders are believed to be a risk factor for glucose and insulin metabolism impairment [25,26]; thus, we also assessed the new cases of glucose tolerance impairment and diabetes mellitus.\nThree-to-five year follow-up was initially planned. Patients’ status and compliance with medical advices were assessed twice per year by telephone; if necessary, visits to the clinic were arranged. Efficacy of CPAP therapy and the condition of the device were assessed once in 6 months during a visit to the clinic using an appropriate software package (ResScan, ResMed, Australia; EncorePro, Respironics, USA). It was considered effective if AHI was lower than 10 episodes per hour of session, and the mean nightly usage was more than 5 hours [24]. Antihypertensive therapy was assessed by clinic chart reviews and by direct patient interviews.\nThe primary endpoint was a composite of cardiovascular death, fatal/non-fatal myocardial infarction and stroke. If fatal events occurred, information about the circumstances and the date of death was obtained from the relatives, and the causes were assessed by analyzing the medical records in the medical institution and/or from death certificates. The non-fatal events were recorded by medical documentation. The secondary endpoint included hospitalization resulting from a new onset or deterioration of cardiovascular disease. Frequency of hospitalization and readmission was also assessed. The reasons for hospitalization were determined from the medical records. 
We also analyzed the changes in anthropometric parameters in untreated OSAHS and non-OSAHS patients – BMI, waist and neck circumferences, and changes in antihypertensive treatment.\nSleep breathing disorders are believed to be a risk factor for glucose and insulin metabolism impairment [25,26]; thus, we also assessed new cases of impaired glucose tolerance and diabetes mellitus.", "Three-to-five year follow-up was initially planned. Patients’ status and compliance with medical advice were assessed twice per year by telephone; if necessary, visits to the clinic were arranged. Efficacy of CPAP therapy and the condition of the device were assessed once every 6 months during a visit to the clinic using an appropriate software package (ResScan, ResMed, Australia; EncorePro, Respironics, USA). CPAP therapy was considered effective if the AHI was lower than 10 episodes per hour of session and the mean nightly usage was more than 5 hours [24]. Antihypertensive therapy was assessed by clinic chart reviews and by direct patient interviews.\nThe primary endpoint was a composite of cardiovascular death, fatal/non-fatal myocardial infarction and stroke. If fatal events occurred, information about the circumstances and the date of death was obtained from the relatives, and the causes were assessed by analyzing the medical records in the medical institution and/or from death certificates. Non-fatal events were ascertained from medical documentation. The secondary endpoint included hospitalization resulting from a new onset or deterioration of cardiovascular disease. Frequency of hospitalization and readmission was also assessed. The reasons for hospitalization were determined from the medical records. We also analyzed the changes in anthropometric parameters in untreated OSAHS and non-OSAHS patients – BMI, waist and neck circumferences, and changes in antihypertensive treatment.\nSleep breathing disorders are believed to be a risk factor for glucose and insulin metabolism impairment [25,26]; thus, we also assessed new cases of impaired glucose tolerance and diabetes mellitus.", "Baseline and final data are presented as median and range of deviation or as mean and standard deviation (SD); categorical variables are expressed as numbers and percentages. Histograms, Kolmogorov-Smirnov and Shapiro-Wilk statistics, skewness and kurtosis analyses were used for distribution assessment. Baseline characteristics were compared using the Mann-Whitney U test and the Kruskal-Wallis test for continuous variables; the χ2 test and Fisher exact test were used for categorical variables. Kaplan-Meier analysis with the log-rank test was used to estimate survival between the groups, and the odds ratio (OR) was calculated using the Cox proportional hazards model. Cox regression was used to estimate the OSAHS impact on time to cardiovascular events. ORs were considered significant when the 95% confidence intervals (CI) did not include the value of 1, and a p value of <0.05 was considered statistically significant. All statistical analyses were performed using statistical software packages (Statistica for Windows version 7.0, StatSoft Inc., U.S., and SPSS version 16.0 software; SPSS, Inc., Chicago, IL)." ]
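The survival methodology summarised in the Statistics paragraph (Kaplan-Meier estimation, log-rank comparison, Cox proportional hazards regression) maps onto standard open-source tooling. The sketch below uses the Python lifelines package as an assumed stand-in for the Statistica/SPSS analyses the authors actually performed; the data frame, its column names ('months', 'event', 'severe_osahs') and all values are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical follow-up data: time to event in months, event indicator (1 = primary
# endpoint reached), and a binary exposure flag for severe OSAHS (AHI >= 30/h).
df = pd.DataFrame({
    "months":       [46, 52, 12, 60, 33, 58, 24, 47, 55, 18],
    "event":        [0,  0,  1,  0,  1,  0,  1,  1,  0,  1],
    "severe_osahs": [0,  0,  1,  0,  1,  1,  1,  0,  0,  1],
})

# Kaplan-Meier estimate per group (each fit could also be plotted)
km = KaplanMeierFitter()
for label, g in df.groupby("severe_osahs"):
    km.fit(g["months"], event_observed=g["event"], label=f"severe_osahs={label}")
    print(label, "median event-free survival:", km.median_survival_time_)

# Log-rank comparison between the two groups
a = df[df.severe_osahs == 1]
b = df[df.severe_osahs == 0]
lr = logrank_test(a["months"], b["months"],
                  event_observed_A=a["event"], event_observed_B=b["event"])
print("log-rank p =", lr.p_value)

# Cox proportional hazards model; in a real analysis the adjustment covariates
# (age, BMI, smoking, etc.) would be added as extra columns of df.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```

A toy data frame of ten rows is far too small for a meaningful fit; it is included only to make the call pattern concrete.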
[ null, "methods", null, null ]
[ "Background", "Material and Methods", "Study population and design", "Selection of patients", "Assessments and study groups", "Follow-up and endpoints", "Statistics", "Results", "Discussion", "Conclusions" ]
[ "Obstructive sleep apnea/hypopnea syndrome (OSAHS) is considered to be an independent risk factor for cardiovascular diseases [1–12]. According to data of cohort prospective [13] and large population [10] studies, the prevalence of fatal cardiovascular events is higher in patients with OSAHS [14–16]. Recent studies have shown a relationship between sleep breathing disorders and cardiovascular morbidity and mortality [15,17–19]. Peker et al. (2006) demonstrated a 5-fold increase in the risk of coronary heart disease. Marin et al. (2005) analyzed a large male cohort and concluded that patients with severe OSAHS are at higher risk of fatal and non-fatal cardiovascular events compared to healthy people and those with regular snoring. The beneficial effect of CPAP therapy was also documented [20]. However, some authors try to explain increased mortality in OSAHS patients by concomitant diseases rather than sleep breathing disorders [16]. Contributions of other cardiovascular risk factors (such as obesity and metabolic disorders) that are common for OSAHS patients, and their relation to outcome, are still not clear. None of the previous studies included a cohort of OSAHS patients from Russia.\nOur study aimed to assess the impact of OSAHS on prognosis and cardiovascular morbidity and mortality in relation to other major cardiovascular risk factors. We focused on hypertensive patients because OSAHS is known to be closely connected to the development of hypertension [21] and resistance to antihypertensive therapy [12], and in most cases in our center OSAHS is diagnosed in patients with already abnormal blood pressure levels.", "[SUBTITLE] Study population and design [SUBSECTION] [SUBTITLE] Selection of patients [SUBSECTION] From May 2003 to March 2007 we selected 234 patients from a cohort referred to the out-patient department of Almazov Federal Heart, Blood and Endocrinology Centre with newly diagnosed or uncontrolled hypertension according to the following inclusion criteria: arterial hypertension (diagnosed if systolic blood pressure [SBP] and/or diastolic blood pressure [DBP] were 140 and 90 mmHg or higher, respectively, or if the patient was on antihypertensive therapy).\nPatients were not included if they:\n– had a concomitant significant cardiovascular pathology (coronary artery disease [angina pectoris] class II or higher), severe arrhythmia, congestive heart failure, valvular disease or cardiomyopathy;\n– had other factors or diseases predisposing to OSAHS, such as congenital and acquired (rheumatoid arthritis, etc.) anatomical changes, visceral cranium abnormalities, macroglossia, vocal fold paralysis, diseases leading to pharyngeal lymphoid tissue proliferation (Hodgkin’s lymphoma, AIDS), endocrine diseases (acromegaly, hypothyreosis), or neurological diseases (stroke, myasthenia, myotonic dystrophy, metabolic myopathy, amyotrophic lateral sclerosis, Guillain-Barré Syndrome, amyloidosis, diphtheritic, alcoholic and diabetic polyneuropathy);\n– had severe concomitant diseases (chronic liver or kidney diseases, cancer);\n– were found to have a severe cognitive deficit that could confound the sleep examination.\nFurther selection was based on the results of a sleep breathing disorder questionnaire (Berlin Questionnaire [22]), and daytime sleepiness assessment by the Epworth scale [23]. The study enrolled only patients with suspected OSAHS. 
As a result, 147 patients (90 males and 57 females) aged 23–80 years (mean age 52.1±10.4 years) were included into the study (Figure 1).\nAll recruited patients signed the informed consent after full explanation of the procedure, which complied with the Declaration of Helsinki and the ethics policies of the institutions participating in the study.\nFrom May 2003 to March 2007 we selected 234 patients from a cohort referred to the out-patient department of Almazov Federal Heart, Blood and Endocrinology Centre with newly diagnosed or uncontrolled hypertension according to the following inclusion criteria: arterial hypertension (diagnosed if systolic blood pressure [SBP] and/or diastolic blood pressure [DBP] were 140 and 90 mmHg or higher, respectively, or if the patient was on antihypertensive therapy).\nPatients were not included if they:\n– had a concomitant significant cardiovascular pathology (coronary artery disease [angina pectoris] class II or higher), severe arrhythmia, congestive heart failure, valvular disease or cardiomyopathy;\n– had other factors or diseases predisposing to OSAHS, such as congenital and acquired (rheumatoid arthritis, etc.) anatomical changes, visceral cranium abnormalities, macroglossia, vocal fold paralysis, diseases leading to pharyngeal lymphoid tissue proliferation (Hodgkin’s lymphoma, AIDS), endocrine diseases (acromegaly, hypothyreosis), or neurological diseases (stroke, myasthenia, myotonic dystrophy, metabolic myopathy, amyotrophic lateral sclerosis, Guillain-Barré Syndrome, amyloidosis, diphtheritic, alcoholic and diabetic polyneuropathy);\n– had severe concomitant diseases (chronic liver or kidney diseases, cancer);\n– were found to have a severe cognitive deficit that could confound the sleep examination.\nFurther selection was based on the results of a sleep breathing disorder questionnaire (Berlin Questionnaire [22]), and daytime sleepiness assessment by the Epworth scale [23]. The study enrolled only patients with suspected OSAHS. As a result, 147 patients (90 males and 57 females) aged 23–80 years (mean age 52.1±10.4 years) were included into the study (Figure 1).\nAll recruited patients signed the informed consent after full explanation of the procedure, which complied with the Declaration of Helsinki and the ethics policies of the institutions participating in the study.\n[SUBTITLE] Assessments and study groups [SUBSECTION] All patients completed a baseline questionnaire to collect data about personal and medical history, heredity, and lifestyle. Every patient underwent physical examination, including measurement of anthropometric parameters (height, weight, body mass index; waist, hip and neck circumferences) and vital signs (HR and blood pressure, BP). Patients who smoked 1 or more cigarettes per week were considered smokers. Patients were considered to be alcohol-users if they consumed 3 or more units of alcohol per week. Three or more sessions of aerobic exercise (30 minutes or longer) per week was considered to be the normal level of physical activity.\nAll patients underwent cardiorespiratory monitoring by Embletta Pds (Embla, USA) and the following parameters were recorded: snore, nasal flow, thoracic and abdominal excursions, body position, blood oxygen saturation, and heart rate. All recordings were evaluated manually by a specialist in polysomnography. Apnea was defined as an episode of airflow cessation lasting for at least 10 seconds. 
Hypopnea was defined as an episode of a decrease in air-flow of more than 50% compared to baseline, lasting for ≥10 seconds, combined with a decrease in oxygen saturation of ≥4% [14]. Sleep apnea/hypopnea syndrome was diagnosed if the patient had 5 or more events of apnea and hypopnea per hour of sleep (apnea-hypopnea index, AHI). Apnea was classified as obstructive if simultaneous excursions of the thorax and abdomen were registered, and was classified as central if no respiratory muscle effort was observed. Sleep apnea was classified as mild if the patient had 5–14.9 apnea or hypopnea events per hour, as moderate if AHI was 15–29.9 per hour, and severe if AHI was ≥30 per hour of sleep session.\nBased on the results of the sleep study, patients were divided into 2 groups (Figure 1), with baseline characteristics shown in Table 1. Forty-two patients without sleep breathing disorders (SBD), with AHI <5 per hour [median 1.8 (0.3–4.9)], formed the control group, and 105 patients with AHI 5 and more episodes per hour constituted the OSAHS group (no cases of central sleep apnea were diagnosed). Twenty-seven out of 105 patients (26%) had mild OSAHS (AHI 10.9 [5.7–14.9] events per hour of sleep), 28 (27%) – moderate OSAHS (AHI 22.5 [15.4–29.3] events per hour of sleep), and 50 (47%) patients – severe OSAHS 9AHI 51.0 [32.0–108.0] events per hour of sleep).\nThe groups did not differ by age, BMI, neck circumference, hypertension stage and duration, by the prevalence of hyperglycemia or diabetes, thyroid pathology (thyreotrophic hormone level remained normal), smoking, alcohol use and by physical activity level at baseline (Table 1).\nA CPAP therapy session was offered (we used Auto-CPAP-devices ResMed, Australia; Respironics, USA, and ViPAP-III-device, ResMed, Australia) to anyone diagnosed with OSAHS to evaluate its efficacy and assess the necessary therapeutic pressure. Out of 105 patients with SBD, 51 (48.6%) underwent the CPAP therapy test, but most refused to use CPAP therapy at home due to the high cost of the device (23 [45.1%] patients) and the inconvenience of the method (911 [21.6%] patients); in addition, 5 patients (9.8%) failed to get a fitting mask. Twelve (23.5%) patients who started CPAP therapy at home formed the third group (Figure 1): 11 patients used autoCPAP-devices and 1 subject used a BiPAP-device. Most patients in this group were males (75%, 9 men), and AHI was higher in this group compared to OSAHS patients who did not use a CPAP.\nAll patients obtained counseling regarding lifestyle changes including weight loss, smoking cessation, alcohol use (the importance of drinking alcohol not later than 4 hours before sleep was emphasized) and use of sedative medicines. Change of body position during sleep was advised to all untreated OSAHS patients.\nAt baseline, antihypertensive therapy was prescribed to all patients with achievement of target blood pressure level (≤140/90 mm Hg). Baseline therapy in CPAP-treated and untreated OSAHS patients did not differ by intensity or by components (p>0.05) (Table 2).\nAll patients completed a baseline questionnaire to collect data about personal and medical history, heredity, and lifestyle. Every patient underwent physical examination, including measurement of anthropometric parameters (height, weight, body mass index; waist, hip and neck circumferences) and vital signs (HR and blood pressure, BP). Patients who smoked 1 or more cigarettes per week were considered smokers. 
Patients were considered to be alcohol users if they consumed 3 or more units of alcohol per week. Three or more sessions of aerobic exercise (30 minutes or longer) per week was considered a normal level of physical activity.

All patients underwent cardiorespiratory monitoring with the Embletta PDS system (Embla, USA), and the following parameters were recorded: snoring, nasal flow, thoracic and abdominal excursions, body position, blood oxygen saturation, and heart rate. All recordings were evaluated manually by a specialist in polysomnography. Apnea was defined as an episode of airflow cessation lasting at least 10 seconds. Hypopnea was defined as a decrease in airflow of more than 50% compared to baseline, lasting ≥10 seconds and combined with a decrease in oxygen saturation of ≥4% [14]. Sleep apnea/hypopnea syndrome was diagnosed if the patient had 5 or more apnea and hypopnea events per hour of sleep (apnea-hypopnea index, AHI). Apnea was classified as obstructive if simultaneous excursions of the thorax and abdomen were registered, and as central if no respiratory muscle effort was observed. Sleep apnea was classified as mild if the patient had 5–14.9 apnea or hypopnea events per hour of sleep, moderate if the AHI was 15–29.9 per hour, and severe if the AHI was ≥30 per hour.
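The severity grading above is a simple threshold rule on the AHI. As an illustration only (this sketch is ours, not part of the original study protocol), it can be encoded as follows in Python:

```python
def classify_ahi(ahi: float) -> str:
    """Classify sleep-apnea severity from the apnea-hypopnea index (AHI),
    using the thresholds stated above (events per hour of sleep)."""
    if ahi < 5:
        return "no sleep-disordered breathing"
    if ahi < 15:
        return "mild OSAHS"
    if ahi < 30:
        return "moderate OSAHS"
    return "severe OSAHS"

# Example: the median AHI values reported for the study subgroups below
for ahi in (1.8, 10.9, 22.5, 51.0):
    print(f"AHI {ahi:5.1f} -> {classify_ahi(ahi)}")
```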
Based on the results of the sleep study, patients were divided into 2 groups (Figure 1), with baseline characteristics shown in Table 1. Forty-two patients without sleep breathing disorders (SBD), with an AHI <5 per hour [median 1.8 (0.3–4.9)], formed the control group, and 105 patients with an AHI of 5 or more episodes per hour constituted the OSAHS group (no cases of central sleep apnea were diagnosed). Twenty-seven of the 105 patients (26%) had mild OSAHS (AHI 10.9 [5.7–14.9] events per hour of sleep), 28 (27%) had moderate OSAHS (AHI 22.5 [15.4–29.3] events per hour of sleep), and 50 (47%) had severe OSAHS (AHI 51.0 [32.0–108.0] events per hour of sleep).

The groups did not differ at baseline by age, BMI, neck circumference, hypertension stage and duration, prevalence of hyperglycemia or diabetes, thyroid pathology (thyroid-stimulating hormone levels remained normal), smoking, alcohol use, or physical activity level (Table 1).

A CPAP therapy session (auto-CPAP devices, ResMed, Australia, and Respironics, USA; VPAP-III device, ResMed, Australia) was offered to every patient diagnosed with OSAHS to evaluate its efficacy and to determine the required therapeutic pressure. Of the 105 patients with SBD, 51 (48.6%) underwent the CPAP therapy test, but most refused to use CPAP therapy at home because of the high cost of the device (23 [45.1%] patients) or the inconvenience of the method (11 [21.6%] patients); in addition, 5 patients (9.8%) could not be fitted with a suitable mask. The 12 (23.5%) patients who started CPAP therapy at home formed the third group (Figure 1): 11 used auto-CPAP devices and 1 used a BiPAP device. Most patients in this group were male (75%, 9 men), and the AHI was higher in this group than in the OSAHS patients who did not use CPAP.

All patients received counseling on lifestyle changes, including weight loss, smoking cessation, alcohol use (the importance of not drinking alcohol later than 4 hours before sleep was emphasized) and the use of sedative medicines. A change of body position during sleep was advised to all untreated OSAHS patients.

At baseline, antihypertensive therapy was prescribed to all patients, with achievement of the target blood pressure level (≤140/90 mm Hg). Baseline therapy in CPAP-treated and untreated OSAHS patients did not differ by intensity or by components (p>0.05) (Table 2).

[SUBTITLE] Follow-up and endpoints [SUBSECTION] A 3- to 5-year follow-up was initially planned. Patients' status and compliance with medical advice were assessed twice per year by telephone; if necessary, visits to the clinic were arranged. The efficacy of CPAP therapy and the condition of the device were assessed once every 6 months during a clinic visit, using the appropriate software package (ResScan, ResMed, Australia; EncorePro, Respironics, USA). CPAP therapy was considered effective if the AHI was lower than 10 episodes per hour of session and the mean nightly usage was more than 5 hours [24]. Antihypertensive therapy was assessed by clinic chart reviews and by direct patient interviews.

The primary endpoint was a composite of cardiovascular death, fatal/non-fatal myocardial infarction and stroke. If a fatal event occurred, information about the circumstances and the date of death was obtained from the relatives, and the cause was ascertained from the medical records of the treating institution and/or from death certificates. Non-fatal events were documented from medical records. The secondary endpoint was hospitalization resulting from new onset or deterioration of cardiovascular disease. The frequency of hospitalization and readmission was also assessed, and the reasons for hospitalization were determined from the medical records. We also analyzed changes in anthropometric parameters (BMI, waist and neck circumferences) in untreated OSAHS and non-OSAHS patients, as well as changes in antihypertensive treatment. Because sleep breathing disorders are believed to be a risk factor for impaired glucose and insulin metabolism [25,26], new cases of impaired glucose tolerance and diabetes mellitus were also recorded.

[SUBTITLE] Statistics [SUBSECTION] Baseline and final data are presented as median and range or as mean and standard deviation (SD); categorical variables are expressed as numbers and percentages. Histograms, Kolmogorov-Smirnov and Shapiro-Wilk statistics, and skewness and kurtosis analyses were used to assess distributions. Baseline characteristics were compared using the Mann-Whitney U test and the Kruskal-Wallis test for continuous variables; the χ2 test and Fisher exact test were used for categorical variables. Kaplan-Meier analysis with the log-rank test was used to estimate survival between the groups, and the odds ratio (OR) was calculated using the Cox proportional hazards model. Cox regression was used to estimate the impact of OSAHS on time to cardiovascular events. ORs were considered significant when the 95% confidence interval (CI) did not include 1, and a p value of <0.05 was considered statistically significant. All statistical analyses were performed with Statistica for Windows version 7.0 (StatSoft Inc., USA) and SPSS version 16.0 (SPSS Inc., Chicago, IL).
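To make the analytic workflow concrete, the following minimal Python sketch (our own illustration, not the authors' code) shows how an event-free-survival comparison and a proportional hazards model of the kind described above can be set up. It assumes the pandas and lifelines packages and uses a small set of made-up follow-up records; the variable names and numbers are placeholders, not study data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Made-up follow-up records: time to the composite endpoint (months),
# event indicator (1 = cardiovascular death, MI or stroke) and a flag
# for severe OSAHS (AHI >= 30 events per hour of sleep).
df = pd.DataFrame({
    "time_months":  [60, 48, 36, 59, 24, 55, 33, 41, 58, 60],
    "event":        [0,  1,  1,  0,  1,  0,  1,  0,  0,  0],
    "severe_osahs": [0,  1,  1,  0,  1,  0,  0,  1,  0,  0],
})

# Kaplan-Meier estimate of event-free survival in each group
km = KaplanMeierFitter()
for flag, label in [(0, "AHI < 30"), (1, "severe OSAHS")]:
    grp = df[df["severe_osahs"] == flag]
    km.fit(grp["time_months"], grp["event"], label=label)
    print(label, "event-free survival at 48 months:", km.predict(48))

# Log-rank comparison of the two survival curves
a = df[df["severe_osahs"] == 1]
b = df[df["severe_osahs"] == 0]
result = logrank_test(a["time_months"], b["time_months"],
                      event_observed_A=a["event"],
                      event_observed_B=b["event"])
print("log-rank p =", result.p_value)

# Cox proportional hazards model of the severe-OSAHS effect
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()
```

Additional covariates (age, BMI, smoking status, and so on) would simply be added as further columns of the data frame before fitting the Cox model.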
By March 2009, 8 of the 147 patients originally enrolled in the study had been lost to follow-up because they had moved to another area. The mean follow-up period was 46.4±14.3 months. All 12 patients in the third group showed good compliance with CPAP therapy, with a mean nighttime device usage of 6.04±1.4 hours.

Overall, 23 events (15.6%) were registered, including 13 (8.9%) deaths, of which 11 (84.6%) occurred in males. One (7.7%) fatal event (sudden cardiac death) occurred among the CPAP-treated patients, 1 death from non-cardiac disease was observed in a patient without SBD, and 11 (84.6%) fatal events, including 1 non-cardiac death, occurred in the untreated OSAHS patients (Table 3).

Of the 10 non-fatal events, 5 were strokes and 4 were myocardial infarctions (MI), all observed in untreated OSAHS patients. One MI occurred in the control group. No non-fatal cardiovascular events were registered in the CPAP-treated group (Table 3). The risk of fatal/non-fatal MI did not differ significantly from the control group (OR=3.859, 95% CI 0.467–31.892; p=0.273).

However, the incidence of the primary combined endpoint (cardiovascular death, fatal/non-fatal MI and stroke) differed significantly between the groups (P=7.145, p=0.023, Fisher exact test). The risk estimated by the Monte Carlo test was higher in untreated OSAHS patients (20.4%) than in the control group (2.4%) (OR=9.293, 95% CI 1.194–72.363, p=0.012); however, no difference was found compared with CPAP-treated patients (OR=3.727, 95% CI 0.215–64.474, p=0.398). This might be due to the small sample size of the CPAP-treated group, which made it difficult to draw any definitive conclusions. There was no discrepancy between CPAP-treated patients and the control group (OR=0.379, 95% CI 0.045–3.127, p=0.690).

Event-free survival (Kaplan-Meier method) was lowest in the untreated OSAHS patients (log-rank test 6.732, p=0.035) (Figure 2).

In the non-adjusted regression model, OSAHS was also associated with a higher risk of cardiovascular events (OR=8.557, 95% CI 1.142–64.131, p=0.037). The relationship between the covariates (sex, age, BMI, duration of hypertension, smoking, alcohol use, physical activity level, family history of cardiovascular disease, current coronary heart disease, glucose metabolism) and survival was assessed by Cox regression analysis (Table 4). The time-dependent model showed that only severe OSAHS was associated with poor outcome (OR=9.203, 95% CI 1.176–72.002, p=0.034), whereas moderate and mild OSAHS did not affect survival (OR=8.588, 95% CI 0.999–73.82, p>0.05, and OR=4.205, 95% CI 0.437–40.434, p>0.05, respectively). In the adjusted model, the impact of severe OSAHS remained significant after adjustment for the above-mentioned variables (p=0.04, Table 4).

Moreover, OSAHS patients demonstrated higher rates of hospitalization than the control group without SBD. Thirty-three untreated OSAHS patients were hospitalized (12 of them required readmission), whereas only 7 patients from the control group required hospitalization (OR 2.75, 95% CI 1.100–6.87, p=0.04). Three hospitalizations were registered among CPAP-treated patients, and 1 of these patients required readmission (compared with untreated patients, OR=0.606, 95% CI 0.15–2.395, p=0.54).

New-onset diabetes was registered in 4 untreated OSAHS patients, as well as 2 cases of impaired glucose tolerance, compared with 2 (4.8%) cases of impaired glucose tolerance in the control group (p>0.05).

There were no significant changes between baseline and final values of BMI (33.6±6.3 and 34.2±6.6 kg/m2, respectively, p>0.05), waist circumference (107.3±14.1 and 109.6±13.7 cm, p>0.05), or neck circumference (41.8±4.5 and 41.9±5.1 cm, p>0.05) in the control group, or in the OSAHS patients (BMI 35.4±5.2 at baseline and 35.3±5.8 kg/m2 by 2009, p>0.05; waist circumference 116.9±13.4 and 117.1±15.9 cm, p>0.05; neck circumference 43±2.7 and 44.1±3.1 cm, p>0.05).

By 2009 most OSAHS patients were on combination antihypertensive therapy (Table 2), while the majority (60%) of non-OSAHS subjects continued monotherapy (p<0.001). Thirty-six (44%) OSAHS patients, compared with 5 (12%) controls, took 3 or more antihypertensive drugs (p<0.05). There was also a difference in the distribution by classes of antihypertensive drugs. By 2009, 76 (93%) OSAHS patients and 32 (29%) non-SBD patients (p=0.012) took angiotensin-converting enzyme inhibitors/angiotensin II receptor antagonists.
Beta-blockers were prescribed to 63 (77%) OSAHS patients compared with 10 (24%) patients in the control group (p<0.001). By 2009, the intensity of antihypertensive therapy in CPAP-treated and untreated patients with SBD was comparable; this finding might be due to the small size of the CPAP group, limiting the analysis and requiring further investigation.

We tried to avoid the main limitations of other trials: diagnosing sleep apnea from questionnaires alone, which is not comparable with a diagnosis based on a sleep study; inclusion of patients with severe cardiovascular pathology, diabetes mellitus, severe congestive heart failure or a history of stroke [13,17], which can confound the outcome data; and inclusion of patients with both central and obstructive sleep apnea, which does not allow comparison with the results of other OSAHS studies [10].

Our study also has several important limitations: the lack of randomization between the groups makes it observational; the follow-up period was relatively short (less than 3 years in one-third of patients); and no sleep studies were performed during the follow-up period, so we do not know whether all patients in the control group remained OSAHS-free and vice versa. One of the main limitations is the sample size of the CPAP-treated group. This can be explained by the fact that the majority of patients in Russia are still not ready to use CPAP devices regularly (only 11.4% of patients with verified SBD agreed to this treatment in our study), even those with severe OSAHS, who should receive CPAP therapy according to international standards.

Nevertheless, the present study appears to be the first performed in the Russian population, and it focuses on hypertensive patients without known cardiovascular events.

The major finding is the impact of OSAHS on cardiovascular outcome. Our results showed that only severe OSAHS was associated with a negative outcome. This finding is in line with the recent report by Valham et al. (2008): in a 10-year follow-up, SBD patients (n=392) with verified coronary heart disease had a significantly higher risk of stroke, correlating with a desaturation index >5% and an AHI ≥15 episodes per hour of sleep, independent of demographic, anthropometric, clinical and lifestyle parameters. Valham et al. (2008) failed to find any relationship between SBD and major cardiovascular events, which could be due to the older, predominantly male population studied.

In our study the rate of fatal and non-fatal cardiovascular events was significantly higher in OSAHS hypertensive patients than in the SBD-free group and, in the adjusted regression model, was associated with the severity of SBD. Thus, the risk of cardiovascular events in this Russian population of OSAHS patients was 9 times higher than in SBD-free subjects. Mean event-free survival in the OSAHS group was 60.1 months (95% CI 55.9–64.2). However, only patients with severe SBD appeared to be at higher cardiovascular risk independent of lifestyle and other factors (sex, age, BMI, duration of hypertension, smoking, alcohol use, physical activity level, family history, current coronary heart disease, impaired glucose metabolism), while the prognosis in patients with mild-to-moderate OSAHS was comparable to that of the SBD-free group.
It should be emphasized that all patients in our study had obstructive sleep apnea; central causes, as well as concomitant diseases predisposing to SBD, were excluded.

In our study the prognosis in CPAP-treated patients did not differ from that of untreated OSAHS patients, although other authors have shown a clear beneficial effect of CPAP therapy on the risk of recurrent cardiovascular events [20] and on the rates of fatal and non-fatal cardiovascular events and hospitalization [29]. Our results could be explained by the small size of the CPAP-treated group, owing to the limited availability of CPAP therapy in Russia. We speculate that with a larger group we would expect to find differences and to draw more definitive conclusions.

Only a few studies have analyzed hospitalization rates in OSAHS patients. In 1 study, CPAP therapy reduced urgent hospitalization rates [30], and another study included a combined endpoint analysis of deaths and hospitalizations, but the cause of hospitalization was not verified [29]. We demonstrated a higher rate of hospitalization due to the onset or deterioration of cardiovascular disease in OSAHS patients compared with the SBD-free group.

Our results show the need for multiple-drug combination antihypertensive therapy in OSAHS patients. These data support other publications showing the high prevalence of resistant hypertension in OSAHS [12,28].

We did not intend to verify the possible underlying mechanisms leading to high mortality in patients with SBD. The higher prevalence of resistant hypertension demonstrated in our study may be one of the reasons for the worse outcome in the OSAHS group. The absence of long-term changes in BMI may indicate that the effect was not related to obesity. Lipid changes were not assessed and could be an important issue.

Our results confirmed that OSAHS hypertensive patients, in particular those with severe OSAHS (AHI ≥30 episodes per hour of sleep), are at higher risk of fatal and non-fatal cardiovascular events. Moreover, we demonstrated higher rates of hospitalization caused by the onset or deterioration of cardiovascular disease in untreated OSAHS patients, and a higher prevalence of resistant hypertension.
[ null, "materials|methods", "methods", "subjects", "methods", null, null, "results", "discussion", "conclusions" ]
[ "obstructive sleep apnea", "prognosis", "mortality", "hypertension" ]
Confirmation of HIV-like sequences in respiratory tract bacteria of Cambodian and Kenyan HIV-positive pediatric patients.
21358602
Bacteria and yeasts isolated from respiratory tracts of 39 Cambodian and 28 Kenyan HIV-positive children were tested for the presence of HIV-1 sequences.
BACKGROUND
Bacteria and yeasts from the respiratory tract (nose, pharyngeal swabs) were isolated from 39 Cambodian and 28 Kenyan HIV-positive children. Bacterial chromosomal DNA was prepared by a standard protocol and with a Qiagen kit. PCR specific for HIV sequences was carried out using HIV-1-specific primers. The analysis was performed by colony and dot-blot hybridization using HIV-1-specific probes representing the gag, pol and env genes of the virus. Sequencing of some PCR products was performed on an ABI 373 DNA Sequencer.
MATERIAL/METHODS
The majority of the bacterial isolates were characterized as Staphylococcus aureus and Klebsiella pneumoniae, and most yeast isolates as Candida albicans. In some cases E. coli, Streptococcus pyogenes, Proteus mirabilis and Candida tropicalis were identified. Bacteria from 16 Cambodian (41%) and 8 Kenyan (31%) children were found to be positive by colony and dot-blot DNA hybridization. Sequencing of PCR products synthesized on the template of the patients' bacterial DNA, using primers 68;69 for the HIV-1 env gene, revealed greater than 90% homology with the HIV-1 isolate HXB2 (HIVHXB2CG).
RESULTS
Bacteria and yeasts from the respiratory tract of 41% of Cambodian and 31% of Kenyan HIV-positive children bear HIV-like sequences. The role of bacteria in the HIV disease process is discussed.
CONCLUSIONS
[ "Bacteria", "Base Sequence", "Cambodia", "Child", "DNA, Viral", "HIV", "HIV Seropositivity", "Humans", "Kenya", "Molecular Sequence Data", "Nucleic Acid Hybridization", "Polymerase Chain Reaction", "Reproducibility of Results", "Respiratory System", "Sequence Analysis, DNA" ]
3524724
null
null
null
null
Results
The bacteria and yeast isolates from the respiratory tract of the Cambodian and Kenyan HIV-positive children were Staphylococcus aureus, Streptococcus pyogenes, Klebsiella pneumoniae, Escherichia coli, Proteus mirabilis, Candida albicans and Candida tropicalis (Table 2). There were no evident differences between the proportions of species from children of the two countries; however, differences were detected in dot-blot hybridization with HIV-1 probes (Figures 1, 2). DNA isolated from microbes of the 39 tested Cambodian children hybridized with the probes in 41% of cases (16 samples), and DNA from the 28 tested Kenyan children in 29% of cases (8 samples). The most frequently found bacterium in both Cambodian [12] and Kenyan [6] children was Klebsiella pneumoniae; it was positive in hybridization in 50% and 67% of cases, respectively (Table 2). Candida albicans from 7 Kenyan patients was completely negative in colony and dot-blot hybridization, but was positive in 3 out of 5 Cambodian children. Escherichia coli detected in 3 Cambodian children hybridized positively in 2 cases. Candida tropicalis was found in 1 patient from Kenya and hybridized with the HIV-1 probes. Proteus mirabilis from 1 Cambodian child was positive in hybridization with the probes. The results of colony (data not shown) and dot-blot hybridization were highly compatible. The differences in hybridization using probes prepared from PCR products synthesized on 2 different templates – pBH10 and DNA of patient 30 – were not significant. The 142 bp amplicon delimited by primers 68;69, produced using template DNA from both bacteria and lymphocytes of Kenyan (9 Ke, 22 Ke, 10' Ke, 30 Ke, 5' Ke, 16' Ke, 29 Ke) and Cambodian children (15 Cm, 28 Cm, 33 Cm), was sequenced (Figure 3). Amplicons from these different sources were more than 90% identical to the reference sequence (HIV/HXB2). Some differences were found in the first part of the sequenced fragments. The largest differences were detected between the isolates as a group and the reference sequence in pBH10.
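The homology figures above come from comparing each sequenced amplicon with the HXB2 reference. As a purely illustrative sketch (not the authors' analysis), percent identity between two already aligned sequences can be computed as follows in Python; the fragments shown are arbitrary placeholders, not real HXB2 or patient sequences.

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two aligned sequences of equal length.
    Positions gapped ('-') in both sequences are skipped."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    compared = matches = 0
    for a, b in zip(seq_a.upper(), seq_b.upper()):
        if a == "-" and b == "-":
            continue
        compared += 1
        if a == b:
            matches += 1
    return 100.0 * matches / compared

# Arbitrary 20-nt placeholder fragments with a single mismatch (95% identity)
reference = "ACGTACGTACGTACGTACGT"
amplicon  = "ACGTACGTACGAACGTACGT"
print(f"{percent_identity(reference, amplicon):.1f}% identity")
```

For unaligned sequences of different lengths, a pairwise alignment step with a standard alignment tool would be required before applying such a comparison.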
Conclusions
Bacteria and yeasts from the respiratory tract of 41% of Cambodian and 31% of Kenyan HIV-positive children bear HIV-like sequences. According to our preliminary results, we conclude that the ability of invasive bacteria containing HIV sequences in the form of “virus-like particles” to enter HL-60 cells or human lymphocytes represents an ideal system for horizontal transfer of genes between eukaryotic and prokaryotic cells. In this way “virus-like particles” or other particles are introduced into cells of the lymphoproliferative system and, consequently, their genetic information may interact with or be integrated into human DNA and induce the HIV disease process [7,8].
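The per-species hybridization figures cited in the results (Klebsiella pneumoniae positive in 50% of Cambodian and 67% of Kenyan isolates) are simple proportions over small counts. A minimal sketch of that bookkeeping follows; the positive counts (6 of 12 and 4 of 6) are inferred from the percentages and isolate numbers given in the results text, not reported directly.

# Minimal sketch: per-cohort hybridization positivity for one species.
# Isolate counts are from the results text; positive counts are inferred
# from the reported percentages (50% of 12, 67% of 6) for illustration.

counts = {
    "Cambodian": {"isolates": 12, "positive": 6},
    "Kenyan": {"isolates": 6, "positive": 4},
}

for cohort, c in counts.items():
    rate = 100.0 * c["positive"] / c["isolates"]
    print(f"{cohort}: {c['positive']}/{c['isolates']} Klebsiella isolates positive ({rate:.0f}%)")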
[ "Background", "DNA isolation and PCR amplification", "Dot-blot hybridization", "Southern hybridization", "DNA sequencing" ]
[ "It has recently been found that both acute human immunodeficiency virus (HIV) and simian immunodeficiency virus (SIV) infections are accompanied by a dramatic and selective loss of memory CD4+ T cells, predominantly from the mucosal surfaces [1].\nGut-associated lymphatic tissue (GALT), the largest component of the lymphoid organ system, is a principal site of both virus production and depletion of primarily lamina propria memory CD4+ T cells. CD4-expressing T cells that previously encountered antigens and microbes are homed to the lamina propria of GALT [2]. The scale of this CD4+ T-cell depletion has adverse effects on the immune system of the host, emphasizing the significance of developing countermeasures to SIV that are effective before infection of GALT. The SIV directly killed a massive number of immune cells in the gut within days of infection. The gut-associated lymphoid tissue (GALT) is an important early site for HIV replication and severe CD4+ T cell depletion [3]. Viral suppression and immune restoration exists in the gastrointestinal mucosa of human immunodeficiency virus type 1-infected patients initiating therapy during primary or chronic infection [3]. In individuals treated with highly active anti-retroviral therapy (HAART), the plasma HIV RNA is reduced to below the level of detection, but there is strong evidence of continuing viral replication after suppression of plasma viremia. It is apparent that a viral reservoir persists in virtually all infected individuals receiving HAART [4,5]. As shown previously, HIV-1 was also detected in bowel crypt cells and the lamina propria in HIV-positive patients [6]. Since these cells are in close contact with intestinal bacteria, it may be possible that bowel bacteria are involved in the pathogenesis of the disease. These findings, confirming that the gut and other mucosal tissue, rather than blood, is the major site of HIV infection and CD4+T cell loss, suggest the possibility that bacteria might serve as a viral reservoir as well.\nBased on our previous results concerning detection of HIV-1 sequences in gastrointestinal tracts of HIV-positive patients [7–9], we analysed bacteria and yeasts isolated from the respiratory tract (nose, pharyngeal swabs) of Cambodian and Kenyan HIV-positive children for presence of HIV-like sequences. The study protocol was reviewed and approved by the Ethics Committee of St. Elizabeth University.", "Bacteria and yeasts from the respiratory tract (nose, pharyngeal swabs) were collected from 39 Cambodian and 28 Kenyan HIV-positive children.\nCollected samples were directly transported from Cambodia and Kenya to the Cancer Research Institute in Bratislava and incubated overnight in LB medium at 37°C for amplification. Bacterial chromosomal DNA was prepared by standard protocol [10,11] and by Qiagen kit (Qiagen). Plasmid DNA was purified by an alkaline lysis procedure (Table 1).\nTo avoid false positive results, 3 PCR reactions were performed as arbitrary controls in every set of reactions. The PCR products, used for sequencing and for HIV-specific probe, were purified through LMP agarose and by QIAquick PCR purification kit (Qiagen). Plasmid pBH10 (genebank accession number M15654) was used as a reference source of HIV DNA and lymphocytes’ DNA of HIV/AIDS patient 30 was used as a template for PCR products.", "For dot-blotting, bacteria and yeasts isolated from patients were amplified overnight in LB medium, and chromosomal DNA was prepared. 
DNA (0.25 μg/sample) of each patient was transferred to Hybond N+ membrane, lysed and prehybridized. For probe, the 3 aforementioned purified PCR products were synthesized on DNA template: a) HIV/AIDS patient 30; b) plasmid pBH10. 32P-labelled probe was obtained by Ready-To-Go DNA Labelling Kit (Amersham Bioscience, England). Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C, and washing was carried out as described previously [9]. To exclude potential contamination, 8 DNA samples of healthy persons were analysed together with tested children.", "For colony blotting, bacterial suspension was diluted to the concentration of 10−10 or 10−9 on LB plates and grown colonies were blotted to the Hybond N+ membrane, lysed, washed and prehybridized. 32P-labeled probes were obtained using Ready-To-Go DNA Labeling kit (−dCTP) (Amersham Bioscience). The combined PCR probe was prepared as a mixture of all 3 aforementioned PCR products. Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C. Subsequently, membranes were washed at final temperature 60°C, resp. 65°C.", "The PCR products determined by primers 68, 69 synthesized on the template of bacterial DNA were directly sequenced on the ABI 373 DNA Sequencer and ABI PRISM 310 Genetic Analyzer (Applied Biosystem). The sequencing reaction was performed using fluorescent dyes of the ABI Prism Big Dye Terminator sequencing kit (Applied Biosystems) and afterwards extension products were purified by Auto-Seq G-50 columns (Amersham Biosciences)." ]
[ null, null, null, null, null ]
[ "Background", "Material and Methods", "DNA isolation and PCR amplification", "Dot-blot hybridization", "Southern hybridization", "DNA sequencing", "Results", "Discussion", "Conclusions" ]
[ "It has recently been found that both acute human immunodeficiency virus (HIV) and simian immunodeficiency virus (SIV) infections are accompanied by a dramatic and selective loss of memory CD4+ T cells, predominantly from the mucosal surfaces [1].\nGut-associated lymphatic tissue (GALT), the largest component of the lymphoid organ system, is a principal site of both virus production and depletion of primarily lamina propria memory CD4+ T cells. CD4-expressing T cells that previously encountered antigens and microbes are homed to the lamina propria of GALT [2]. The scale of this CD4+ T-cell depletion has adverse effects on the immune system of the host, emphasizing the significance of developing countermeasures to SIV that are effective before infection of GALT. The SIV directly killed a massive number of immune cells in the gut within days of infection. The gut-associated lymphoid tissue (GALT) is an important early site for HIV replication and severe CD4+ T cell depletion [3]. Viral suppression and immune restoration exists in the gastrointestinal mucosa of human immunodeficiency virus type 1-infected patients initiating therapy during primary or chronic infection [3]. In individuals treated with highly active anti-retroviral therapy (HAART), the plasma HIV RNA is reduced to below the level of detection, but there is strong evidence of continuing viral replication after suppression of plasma viremia. It is apparent that a viral reservoir persists in virtually all infected individuals receiving HAART [4,5]. As shown previously, HIV-1 was also detected in bowel crypt cells and the lamina propria in HIV-positive patients [6]. Since these cells are in close contact with intestinal bacteria, it may be possible that bowel bacteria are involved in the pathogenesis of the disease. These findings, confirming that the gut and other mucosal tissue, rather than blood, is the major site of HIV infection and CD4+T cell loss, suggest the possibility that bacteria might serve as a viral reservoir as well.\nBased on our previous results concerning detection of HIV-1 sequences in gastrointestinal tracts of HIV-positive patients [7–9], we analysed bacteria and yeasts isolated from the respiratory tract (nose, pharyngeal swabs) of Cambodian and Kenyan HIV-positive children for presence of HIV-like sequences. The study protocol was reviewed and approved by the Ethics Committee of St. Elizabeth University.", "[SUBTITLE] DNA isolation and PCR amplification [SUBSECTION] Bacteria and yeasts from the respiratory tract (nose, pharyngeal swabs) were collected from 39 Cambodian and 28 Kenyan HIV-positive children.\nCollected samples were directly transported from Cambodia and Kenya to the Cancer Research Institute in Bratislava and incubated overnight in LB medium at 37°C for amplification. Bacterial chromosomal DNA was prepared by standard protocol [10,11] and by Qiagen kit (Qiagen). Plasmid DNA was purified by an alkaline lysis procedure (Table 1).\nTo avoid false positive results, 3 PCR reactions were performed as arbitrary controls in every set of reactions. The PCR products, used for sequencing and for HIV-specific probe, were purified through LMP agarose and by QIAquick PCR purification kit (Qiagen). 
Plasmid pBH10 (genebank accession number M15654) was used as a reference source of HIV DNA and lymphocytes’ DNA of HIV/AIDS patient 30 was used as a template for PCR products.\nBacteria and yeasts from the respiratory tract (nose, pharyngeal swabs) were collected from 39 Cambodian and 28 Kenyan HIV-positive children.\nCollected samples were directly transported from Cambodia and Kenya to the Cancer Research Institute in Bratislava and incubated overnight in LB medium at 37°C for amplification. Bacterial chromosomal DNA was prepared by standard protocol [10,11] and by Qiagen kit (Qiagen). Plasmid DNA was purified by an alkaline lysis procedure (Table 1).\nTo avoid false positive results, 3 PCR reactions were performed as arbitrary controls in every set of reactions. The PCR products, used for sequencing and for HIV-specific probe, were purified through LMP agarose and by QIAquick PCR purification kit (Qiagen). Plasmid pBH10 (genebank accession number M15654) was used as a reference source of HIV DNA and lymphocytes’ DNA of HIV/AIDS patient 30 was used as a template for PCR products.\n[SUBTITLE] Dot-blot hybridization [SUBSECTION] For dot-blotting, bacteria and yeasts isolated from patients were amplified overnight in LB medium, and chromosomal DNA was prepared. DNA (0.25 μg/sample) of each patient was transferred to Hybond N+ membrane, lysed and prehybridized. For probe, the 3 aforementioned purified PCR products were synthesized on DNA template: a) HIV/AIDS patient 30; b) plasmid pBH10. 32P-labelled probe was obtained by Ready-To-Go DNA Labelling Kit (Amersham Bioscience, England). Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C, and washing was carried out as described previously [9]. To exclude potential contamination, 8 DNA samples of healthy persons were analysed together with tested children.\nFor dot-blotting, bacteria and yeasts isolated from patients were amplified overnight in LB medium, and chromosomal DNA was prepared. DNA (0.25 μg/sample) of each patient was transferred to Hybond N+ membrane, lysed and prehybridized. For probe, the 3 aforementioned purified PCR products were synthesized on DNA template: a) HIV/AIDS patient 30; b) plasmid pBH10. 32P-labelled probe was obtained by Ready-To-Go DNA Labelling Kit (Amersham Bioscience, England). Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C, and washing was carried out as described previously [9]. To exclude potential contamination, 8 DNA samples of healthy persons were analysed together with tested children.\n[SUBTITLE] Southern hybridization [SUBSECTION] For colony blotting, bacterial suspension was diluted to the concentration of 10−10 or 10−9 on LB plates and grown colonies were blotted to the Hybond N+ membrane, lysed, washed and prehybridized. 32P-labeled probes were obtained using Ready-To-Go DNA Labeling kit (−dCTP) (Amersham Bioscience). The combined PCR probe was prepared as a mixture of all 3 aforementioned PCR products. Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C. Subsequently, membranes were washed at final temperature 60°C, resp. 65°C.\nFor colony blotting, bacterial suspension was diluted to the concentration of 10−10 or 10−9 on LB plates and grown colonies were blotted to the Hybond N+ membrane, lysed, washed and prehybridized. 
32P-labeled probes were obtained using Ready-To-Go DNA Labeling kit (−dCTP) (Amersham Bioscience). The combined PCR probe was prepared as a mixture of all 3 aforementioned PCR products. Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C. Subsequently, membranes were washed at final temperature 60°C, resp. 65°C.\n[SUBTITLE] DNA sequencing [SUBSECTION] The PCR products determined by primers 68, 69 synthesized on the template of bacterial DNA were directly sequenced on the ABI 373 DNA Sequencer and ABI PRISM 310 Genetic Analyzer (Applied Biosystem). The sequencing reaction was performed using fluorescent dyes of the ABI Prism Big Dye Terminator sequencing kit (Applied Biosystems) and afterwards extension products were purified by Auto-Seq G-50 columns (Amersham Biosciences).\nThe PCR products determined by primers 68, 69 synthesized on the template of bacterial DNA were directly sequenced on the ABI 373 DNA Sequencer and ABI PRISM 310 Genetic Analyzer (Applied Biosystem). The sequencing reaction was performed using fluorescent dyes of the ABI Prism Big Dye Terminator sequencing kit (Applied Biosystems) and afterwards extension products were purified by Auto-Seq G-50 columns (Amersham Biosciences).", "Bacteria and yeasts from the respiratory tract (nose, pharyngeal swabs) were collected from 39 Cambodian and 28 Kenyan HIV-positive children.\nCollected samples were directly transported from Cambodia and Kenya to the Cancer Research Institute in Bratislava and incubated overnight in LB medium at 37°C for amplification. Bacterial chromosomal DNA was prepared by standard protocol [10,11] and by Qiagen kit (Qiagen). Plasmid DNA was purified by an alkaline lysis procedure (Table 1).\nTo avoid false positive results, 3 PCR reactions were performed as arbitrary controls in every set of reactions. The PCR products, used for sequencing and for HIV-specific probe, were purified through LMP agarose and by QIAquick PCR purification kit (Qiagen). Plasmid pBH10 (genebank accession number M15654) was used as a reference source of HIV DNA and lymphocytes’ DNA of HIV/AIDS patient 30 was used as a template for PCR products.", "For dot-blotting, bacteria and yeasts isolated from patients were amplified overnight in LB medium, and chromosomal DNA was prepared. DNA (0.25 μg/sample) of each patient was transferred to Hybond N+ membrane, lysed and prehybridized. For probe, the 3 aforementioned purified PCR products were synthesized on DNA template: a) HIV/AIDS patient 30; b) plasmid pBH10. 32P-labelled probe was obtained by Ready-To-Go DNA Labelling Kit (Amersham Bioscience, England). Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C, and washing was carried out as described previously [9]. To exclude potential contamination, 8 DNA samples of healthy persons were analysed together with tested children.", "For colony blotting, bacterial suspension was diluted to the concentration of 10−10 or 10−9 on LB plates and grown colonies were blotted to the Hybond N+ membrane, lysed, washed and prehybridized. 32P-labeled probes were obtained using Ready-To-Go DNA Labeling kit (−dCTP) (Amersham Bioscience). The combined PCR probe was prepared as a mixture of all 3 aforementioned PCR products. Hybridization was performed for 16 hours in standard hybridization buffer at 42°C or in Rapid-hyb buffer (Amersham Bioscience) at 60°C. 
Subsequently, membranes were washed at final temperature 60°C, resp. 65°C.", "The PCR products determined by primers 68, 69 synthesized on the template of bacterial DNA were directly sequenced on the ABI 373 DNA Sequencer and ABI PRISM 310 Genetic Analyzer (Applied Biosystem). The sequencing reaction was performed using fluorescent dyes of the ABI Prism Big Dye Terminator sequencing kit (Applied Biosystems) and afterwards extension products were purified by Auto-Seq G-50 columns (Amersham Biosciences).", "The bacteria and yeast isolates from the respiratory tract of Cambodian and Kenyan HIV-positive children were Staphylococcus aureus, Streptococcus pyogenes, Klebsiella pneumoniae, Escherichia coli, Proteus mirabilis, Candida albicans and Candida tropicalis (Table 2). There were no evident differences between the proportions of species from children of both countries; however, differences were detected in dot-blot hybridization with HIV-1 probes (Figures 1, 2). DNA isolated from 39 tested Cambodian children hybridized for 41% (16 samples) with used probes and for 29% (8 samples) from 28 tested Kenyan children. The most frequently found bacteria in both Cambodian [12] and Kenyan [6] children was Klebsiella pneumoniae. This bacteria was positive in hybridization for 50% and 67% (Table 2). Candida albicans of 7 Kenyan patients was completely negative in colony and dot-blot hybridization, but was positive in 3 out of 5 Cambodian children. Escherichia coli detected in 3 Cambodian children hybridized positively in 2 cases. Candida tropicalis was found in 1 patient from Kenya and hybridized with HIV-1 probes. Proteus mirabilis of 1 Cambodian child was found positive in hybridization with used probes.\nThe results of colony (data not shown) and dot-blot hybridization were highly compatible. The differences in hybridization using probes of PCR products synthesized on 2 different templates – pBH10 and DNA of patient 30 – were not significant.\nThe 142 bp amplicon limited by primers 68; 69 produced using template DNA from both bacteria and lymphocytes of Kenyan (9 Ke, 22 Ke, 10’ Ke, 30 Ke, 5’ Ke, 16’ Ke, 29 Ke) and Cambodian children (15 Cm, 28 Cm, 33 Cm) was sequenced (Figure 3). Amplicons from these different sources were more than 90% identical with reference sequences (HIV/HXB2). Some differences were found in the first part of sequenced fragments. Largest differences were detected between the isolates as a group and the reference sequence in pBH10.", "Recent observations suggest that the main fight against the HIV disease process is performed in gut-associated lymphatic tissue closed to the gastrointestinal tract [1–3,11]. Our understanding about the restoration of the gut mucosal immune system during highly active antiretroviral therapy is very limited. A dramatic loss of CD4+T cells, predominantly from the mucosal surfaces, suggests the question of whether bacteria play some role in this process. Our previous detection of HIV-like sequences in gut bacteria of HIV/AIDS patients may confirm that bacteria are involved in this trial [7,12,13]. Accordingly, in the respiratory tract bacteria of HIV-positive children from Cambodia and Kenya, HIV-like sequences were detected in 41% and 29%, respectively, of samples. Klebsiella pneumoniae, detected most frequently in both cohorts, hybridized with HIV-1-specific probes in 50% and 67%, respectively. 
These results were expected, because Klebsiella is a gut commensal localized in the intestinal tract, where previously detected bacteria bearing HIV-like sequences were found [12]. The second most isolated Staphylococcus aureus, colonizing mostly skin and/or respiratory tract, hybridized only 13.5% and 10%, respectively, with HIV-1-specific probes. Candida tropicalis was detected once, with a positive hybridization signal.\nOn the basis of our previous detection of HIV-like sequences in bacteria isolated from the respiratory tract of AIDS patients [7], it is possible to conclude that bacteria bearing HIV-like sequences are localized not only in the intestinal tract of HIV/AIDS patients, but in other organs as well [12,13]. The transmission of these organisms and their role in AIDS pathogenesis is not clear. Some bacteria probably may serve as a reservoir of HIV-like sequences in the form of “virus-like HIV particles” or as HIV. Our sequencing results showed a very high homology (over 90%) between bacterial HIV-like sequences and HIV-1 isolate HXB2 (HIVHXB2CG). Because all the above mentioned species are gut or skin commensals that cannot be eliminated, they may represent continual imminency for the host.\nOn the other hand, differences in homology of patient’s env sequences limited by primers 68;69 with coresponding pBH10 sequences, eliminated to a large extent suspicion of contamination. HIV-1 sequences presented in pBH10 are only one source of retroviral genetic information in laboratory.\nThere is increasing evidence that the mucosa-associated bacteria may play important roles in the pathogenesis of inflammatory bowel disease, ulcerative colitis, Crohn’s disease and potentially even colon cancer [14,15]. Invasive strains of E. coli that undergo lyses upon entry into mammalian cells can act as a stable DNA delivery system to their hosts [16]. They work on the basis of “hit and run away”, and their extrachromosomal content remains mainly in the host cell, even when the bacterial carriers are not detectable. Horizontal gene transfer from bacteria to yeast, to plant and mammalian cells has been reported by other investigators [16–19].", "Bacteria and yeasts from the respiratory tract of 41% of Cambodian and 31% of Kenyan HIV-positive children bear HIV-like sequences. According to our preliminary results, we conclude that the ability of invasive bacteria containing HIV sequences in the form of “virus-like particles” to enter into HL-60 cells or human lymphocytes represents an ideal system for of horizontal transfer of genes between eukaryotic and prokaryotic cells. In this way “virus-like particles” or other particles are introduced into cells of the lymphoproliferative system, and, consequently, their genetic information may interact with or be integrated into the human DNA and induce the HIV disease process [7,8]." ]
[ null, "materials|methods", null, null, null, null, "results", "discussion", "conclusions" ]
[ "respiratory tract bacteria", "HIV/AIDS positive children", "HIV-like sequences", "DNA hybridization", "GALT", "viral reservoir" ]
Leptin, obestatin and apelin levels in patients with obstructive sleep apnoea syndrome.
21358603
Recent studies suggest that adipose tissue hormones are involved in the pathogenesis of obstructive sleep apnoea syndrome (OSAS). The role of leptin, obestatin and apelin still needs to be established.
BACKGROUND
Ten patients with newly diagnosed OSAS (AHI >10/h and ESS >10 points) were enrolled in the study, as well as ten healthy volunteers as controls. In all subjects, leptin, obestatin and apelin were measured at four-hour intervals over 24 h during diagnostic polysomnography, and in the patients again three months after the onset of CPAP treatment. Furthermore, the HOMA index and body composition were quantified.
MATERIAL/METHODS
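The AHI inclusion threshold above (>10/h) refers, as defined later in the methods, to the mean number of apnoeas and hypopnoeas per hour of sleep. A minimal sketch of that calculation is shown below; the event counts and sleep time are hypothetical, for illustration only.

# Minimal sketch: apnoea/hypopnoea index (AHI) as events per hour of sleep.
# Event counts and total sleep time are hypothetical, not patient data.

def ahi(apnoeas: int, hypopnoeas: int, total_sleep_hours: float) -> float:
    """Mean number of apnoeas plus hypopnoeas per hour of sleep."""
    return (apnoeas + hypopnoeas) / total_sleep_hours

index = ahi(apnoeas=120, hypopnoeas=90, total_sleep_hours=6.5)
print(f"AHI = {index:.1f}/h, meets inclusion criterion (>10/h): {index > 10}")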
Plasma apelin levels in the patients decreased under CPAP therapy but did not differ significantly between patients and volunteers. We found a positive correlation with AHI and BMI in the therapy group at all observation points. Leptin plasma levels were higher in the patient group and decreased after the onset of CPAP therapy; they were positively correlated with BMI, min. O2 and AHI in the patient group before therapy. Plasma obestatin levels did not differ significantly between the three observation groups, but were partly correlated with AHI and weight in the newly diagnosed OSAS group.
RESULTS
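The correlations summarised above were evaluated separately at each of the six sampling times. A minimal sketch of such a per-timepoint analysis is shown below, assuming Pearson correlation (the abstract does not state which coefficient was used) and hypothetical arrays in place of the measured values.

# Minimal sketch: per-timepoint correlation of a hormone with a clinical variable.
# Assumes Pearson correlation; the arrays are hypothetical placeholders (n=10).
import numpy as np

timepoints = ["02:00", "06:00", "10:00", "14:00", "18:00", "22:00"]
rng = np.random.default_rng(0)
ahi = rng.uniform(15, 70, size=10)                        # hypothetical AHI values
apelin = {t: 2.0 * ahi + rng.normal(0, 20, size=10) for t in timepoints}

for t in timepoints:
    r = np.corrcoef(ahi, apelin[t])[0, 1]                 # Pearson r at this timepoint
    print(f"{t}: r = {r:.2f}")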
In agreement with previous investigations, we could demonstrate a difference in leptin plasma levels between healthy volunteers and patients with newly diagnosed OSAS. Apelin decreases under CPAP therapy, but not significantly. Obestatin remains unchanged after onset of CPAP. We further found a linkage between leptin plasma levels and BMI, AHI and weight in the untreated patient group.
CONCLUSIONS
[ "Apelin", "Case-Control Studies", "Female", "Ghrelin", "Humans", "Intercellular Signaling Peptides and Proteins", "Leptin", "Male", "Middle Aged", "Sleep Apnea, Obstructive" ]
3524733
null
null
null
null
Results
Patients and healthy controls were approximately the same age (58.9±10.2 vs. 53.6±7.7, p>0.05). As expected for Caucasian subjects, the two groups differed significantly in weight (BMI 31.7±3.2 vs. 26.7±2.3 kg/m2, p<0.05). Correspondingly, the patients had a significantly higher percentage of body fat (26.6±7.5 vs. 33.7±3.7, p<0.05). Compared to the healthy volunteers, the patients before therapy had a significantly higher AHI (40.4±18.9 vs. 2.7±3.1 /h, p<0.05) and a lower sleep quality (ESS 11.7±1.7 vs. 5.1±2.1 points, p<0.05). In total, 10 patients and 10 healthy controls were studied. Of the 10 patients enrolled, 3 refused further CPAP therapy within the first 4–6 weeks due to discomfort. The remaining 7 patients applied the therapy according to the initial instructions for use (at least 6 hours per night). The patient group was measured twice: before therapy and after three months of CPAP application. CPAP treatment significantly eliminated the previously observed obstructions. In consequence, the initial AHI was reduced to a normal range (42.1±16.2 vs. 4.7±6.0 /h, p<0.05) and the mean SaO2 was maintained (91.2±4.1 vs. 94.6±1.7%, p<0.05). Subjective sleepiness and sleep quality improved (ESS 11.8±1.8 vs. 5.0±2.0, p<0.05) (Table 1). The HOMA index in patients before and after therapy showed no significant difference (22.35±42.64 vs. 11.62±16.36, p>0.05). In contrast, the HOMA index differed significantly between patients before therapy and healthy controls (19.15±29.83 vs. 2.73±3.37, p<0.05). Other parameters such as pCO2, pO2, pH and lung function parameters (FEV1 and VC) were also analysed. None of them showed significant differences between patients before or under therapy and volunteers, except pO2: the healthy volunteers had a significantly higher mean pO2 than the patients with newly diagnosed OSAS (82.1±8.01 vs. 72.5±11.3, p<0.05). Furthermore, we examined possible correlations of these parameters (pO2, pCO2, VC, FEV1, pH, age, height, weight, HOMA index, body fat, therapeutic CPAP pressure, min. O2, basal O2, glucose, insulin, BMI, AHI and ESS points) with the peptide hormones (apelin, leptin, obestatin): Leptin plasma levels of the patient group before therapy showed, at each observation time, a significant correlation with body mass index (pmin. 0.55; pmax. 0.69), min. O2 (pmin. –0.76; pmax. −0.58) and AHI (pmin. 0.55; pmax. 0.62). A significant correlation was found between leptin measurements of newly diagnosed OSAS patients and weight at 2 a.m., 2, 6, 10 p.m., as well as body fat percentage at 6, 10 a.m. and 2 p.m. In the therapy group, significant positive correlations were observed between leptin plasma levels and pH (at 2, 6, 10 a.m., 2, 6, 10 p.m.), BMC in g (at 2, 6, 10 a.m., 2, 6, 10 p.m.), body fat in % (at 6 a.m., 2 p.m.), FEV1 and VC (at 2, 6, 10 a.m., 2, 10 p.m.) and age (at 6, 10 a.m., 2, 10 p.m.). Between apelin plasma levels and the above parameters we could not find any significant correlations in the newly diagnosed OSAS group. In contrast, plasma apelin levels showed a significant correlation with the AHI (pmin. 0.82; pmax. 0.91), min. O2 (pmin. −0.75; pmax. −0.86) and BMI (pmin. 0.78; pmax. 0.85) in the therapy group at all observation points. Furthermore, there was a significant correlation with basal O2 (at 2, 6 p.m.), CPAP pressure and pO2 (at 2, 6, 10 a.m., 6, 10 p.m.). 
Obestatin plasma levels at the time of OSAS diagnosis were significantly correlated with AHI and weight (at 6, 10 p.m.), body fat in % (at 2, 6 a.m., 2 p.m.), BMC in g (at 10 a.m., 2 p.m.) and pCO2 (at 6 a.m., 2 p.m.). After three months of therapy there was only a significant correlation with weight and height (at 10 a.m.). We further analysed the plasma levels of the three adipose tissue hormones in the three observation groups: [SUBTITLE] Leptin [SUBSECTION] Controls had significantly lower plasma leptin levels than the patients at the time of OSAS diagnosis at all observation points. Leptin levels decreased after 3 months of therapy, significantly so at 6.00 and 10.00 a.m. (Figure 1). [SUBTITLE] Apelin [SUBSECTION] The mean plasma apelin levels of patients with newly diagnosed OSAS were higher than those of the patients under therapy, but the difference failed to reach significance (Figure 2). [SUBTITLE] Obestatin [SUBSECTION] Plasma obestatin levels of the volunteers were higher than those of the patients before or after three months of therapy, except at 6 and 10 p.m. (Figure 3). We also analysed correlations between the three adipose tissue hormones and found a significant correlation between leptin and obestatin in the patient group both before and after the onset of CPAP therapy (except at 2 a.m. in the CPAP group).
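The HOMA index values compared above are conventionally computed from fasting glucose and insulin; the text does not spell out the formula, so the sketch below assumes the standard HOMA-IR expression (fasting glucose in mmol/L × fasting insulin in µU/mL ÷ 22.5) with hypothetical input values.

# Minimal sketch: HOMA-IR from fasting glucose and insulin.
# Assumes the standard formula; the input values are hypothetical, not study data.

def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """HOMA insulin-resistance index: glucose [mmol/L] * insulin [uU/mL] / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

print(f"control-like example: HOMA-IR = {homa_ir(5.0, 8.0):.2f}")    # ~1.78
print(f"patient-like example: HOMA-IR = {homa_ir(6.5, 35.0):.2f}")   # ~10.11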
Conclusions
Previous studies raised the suspicion that regulatory peptides such as leptin, apelin and obestatin might play a role in the pathogenesis of OSAS. In our study we demonstrated different plasma levels of leptin and apelin in OSAS patients before and after the onset of CPAP therapy in comparison to healthy volunteers, but not of obestatin. Further investigations with different approaches will have to determine whether these hormones have direct effects on OSAS and through which signalling pathways they might exert them, or whether these hormonal changes are rather associated with concomitant metabolic changes in treated OSAS patients, such as a modulation of insulin sensitivity or changes in body fat distribution.
[ "Background", "Measurement of serum apelin", "Measurement of serum obestatin", "Measurement of serum leptin", "Sleep studies", "Leptin", "Apelin", "Obestatin" ]
[ "The obstructive sleep apnoea syndrome (OSAS) is a disorder with a high prevalence characterized by an increased cardiovascular risk. Obesity and insulin resistance are typical metabolical features of OSAS. Since obesity has reached epidemic proportions globally [1] numerous efforts have been made to understand the complex mechanisms involved in the regulation of food intake, hunger, satiety, energy storage and consumption.\nIn the last two decades, several novel regulatory peptides such as leptin, ghrelin or adiponectin have been discovered and studied emphasizing their role in energy homeostasis. The demonstration of a possible role of some of these peptides in sleep regulation as independent effects apart from those in energy regulation was a fascinating additional finding [2,3].\nApelin is a peptide that was identified in 1998 [4]. The effect best characterized of this hormone is within the cardiovascular system. A hypotensive effect of apelin results from the activation of receptors expressed at the surface of endothelial cells [5], but the hormone has also been intrigued in angiogenesis [6]. Apelin receptors have been found within the lung [7] and Eyries et al. recently demonstrated that apelin expression was induced by hypoxia in cell cultures as well as in mice exposed to hypoxia [8].\nWith this in mind, we investigated, whether there might be differences in Apelin secretion in patients with OSAS compared to healthy people due to chronic nocturnal hypoxia in the first group and potential effects of CPAP therapy after reversal of these events. Since patients with OSAS are insulin-resistant [9], and insulin-induced apelin expression was recently demonstrated in adipocytes [10] the groups might have different apelin secretion in general and not only during some periods.\nAnother novel aminopeptide is obestatin. To date, effects on food intake, weight gain and intestinal motility have been investigated due to its encodement on the same gene as ghrelin [11]. We were particularly interested in its role in sleep regulation that has been investigated in rats [12,13], but not humans yet. To our knowledge, serial measurements of obestatin with regard to its putative role in sleep regulation were not performed. Patients with OSAS thus, served as a “biomodel” of disturbed sleep behaviour were compared to healthy controls and furthermore, we investigated possible alterations of obestatin secretion after restoration of normal sleep in OSAS patients.\nAn interaction of apelin and obestatin with insulin resistance as a well characterized feature of OSAS patients has already been described; and we also determined their state of insulin (in)sensitivity by use of the HOMA model.\nThe role of leptin in OSAS has already been more precisely characterized. Fasting leptin levels in patients with OSAS decrease after initiation of CPAP (continuous positive airway pressure) treatment without changes in weight and are discussed as a respiratory stimulus [14]. With interactions between insulin resistance and leptin [15,16] as well as between leptin and apelin [7] and obestatin [12], we chose to determine serial leptin measurements as well to characterize possible interactions between these three hormones.", "Serum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). 
All samples and standards were assayed in duplicate within the same assay.", "Peripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.", "Serum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.", "The polysomnographies were performed according to the recommendations of the American Thoracic Society [17] and the German Sleep Society [18,19]. Sleep parameters were determined using the criteria of Rechtschaffen and Kahles [20] and microarousals were defined in accordance with the definitions of the American Sleep Disorders Association (ASDA) [21]. All patients were monitored for at least 6h in our sleep laboratory. The measured parameters included submental electromyography, snoring detected by a microphone, electrocardiography, thoracic and abdominal movements, bilateral electrooculography, electroencephalography, nasal airflow measured by oronasal thermistors during diagnostic polysomnographics and by an pneumotachograph during CPAP studies and oxyhaemoglobin saturation using a finger oxymeter (Microspan 3040G™, Jaeger amd Toennies, Würzburg, Germany). The measured data were evaluated by the same qualified doctor.", "Controls had significant lower plasma leptin levels as the patients at time of diagnosis of OSAS at all observation points. Leptin levels decrease after 3 therapeutic months, in a significant manner at 6.00 and 10.00 a.m. (Figure 1).", "The mean plasma apelin levels of patients with newly diagnosed OSAS are higher than those of the patients under therapy, but failed to reach significance (Figure 2).", "Plasma obestatin levels of the volunteers were higher in comparison to the patients before or after three months of therapy except at 6 and 10 p.m. (Figure 3).\nWe also analysed correlations between the three adipose tissue hormones. We found a significant correlation between leptin and obestatin in the patient group before and after onset of CPAP therapy (except at 2 a.m. in the CPAP group)." ]
[ null, null, null, null, null, null, null, null ]
[ "Background", "Material and Methods", "Subjects", "Measurement of serum apelin", "Measurement of serum obestatin", "Measurement of serum leptin", "Sleep studies", "Results", "Leptin", "Apelin", "Obestatin", "Discussion", "Conclusions" ]
[ "The obstructive sleep apnoea syndrome (OSAS) is a disorder with a high prevalence characterized by an increased cardiovascular risk. Obesity and insulin resistance are typical metabolical features of OSAS. Since obesity has reached epidemic proportions globally [1] numerous efforts have been made to understand the complex mechanisms involved in the regulation of food intake, hunger, satiety, energy storage and consumption.\nIn the last two decades, several novel regulatory peptides such as leptin, ghrelin or adiponectin have been discovered and studied emphasizing their role in energy homeostasis. The demonstration of a possible role of some of these peptides in sleep regulation as independent effects apart from those in energy regulation was a fascinating additional finding [2,3].\nApelin is a peptide that was identified in 1998 [4]. The effect best characterized of this hormone is within the cardiovascular system. A hypotensive effect of apelin results from the activation of receptors expressed at the surface of endothelial cells [5], but the hormone has also been intrigued in angiogenesis [6]. Apelin receptors have been found within the lung [7] and Eyries et al. recently demonstrated that apelin expression was induced by hypoxia in cell cultures as well as in mice exposed to hypoxia [8].\nWith this in mind, we investigated, whether there might be differences in Apelin secretion in patients with OSAS compared to healthy people due to chronic nocturnal hypoxia in the first group and potential effects of CPAP therapy after reversal of these events. Since patients with OSAS are insulin-resistant [9], and insulin-induced apelin expression was recently demonstrated in adipocytes [10] the groups might have different apelin secretion in general and not only during some periods.\nAnother novel aminopeptide is obestatin. To date, effects on food intake, weight gain and intestinal motility have been investigated due to its encodement on the same gene as ghrelin [11]. We were particularly interested in its role in sleep regulation that has been investigated in rats [12,13], but not humans yet. To our knowledge, serial measurements of obestatin with regard to its putative role in sleep regulation were not performed. Patients with OSAS thus, served as a “biomodel” of disturbed sleep behaviour were compared to healthy controls and furthermore, we investigated possible alterations of obestatin secretion after restoration of normal sleep in OSAS patients.\nAn interaction of apelin and obestatin with insulin resistance as a well characterized feature of OSAS patients has already been described; and we also determined their state of insulin (in)sensitivity by use of the HOMA model.\nThe role of leptin in OSAS has already been more precisely characterized. Fasting leptin levels in patients with OSAS decrease after initiation of CPAP (continuous positive airway pressure) treatment without changes in weight and are discussed as a respiratory stimulus [14]. With interactions between insulin resistance and leptin [15,16] as well as between leptin and apelin [7] and obestatin [12], we chose to determine serial leptin measurements as well to characterize possible interactions between these three hormones.", "[SUBTITLE] Subjects [SUBSECTION] Ten patients (n=10, male) with newly diagnosed symptomatic obstructive sleep apnoea syndrome (apnoea-hypopnea-index, AHI >10 per hour and Epworth-sleepiness-scale, ESS >10 points) were enrolled in the study. 
Obstructive apnoeas were defined as the absence of oronasal flow for at least 10s. Hypopnoeas were defined as reduction in airflow to ≤50% of the preceding stable baseline for 10s or longer together with a dip in oxygen saturation ≥4%. The mean number of apnoeas and hypopnoeas per hour of sleep was calculated as the apnoea/hypopnoea index (AHI). The Epworth-sleepiness-scale is a questionnaire to evaluate especially daytime sleepiness.\nAs a first step diagnostic polysomnography was performed. In this setting the degree of the OSAS and the sleep disturbance was quantified. The AHI as well as the sleep quality were assessed. Before polysomnographic measurements subjects underwent a complete medical history, clinical chemistry and physical examination. This was done to rule out serious other diseases (such as diabetes, hepatitis, cancer) and medication with an impact on insulin sensitivity, which might influence the analyses of adipocyte-derived hormones.\nThree months later and having used the nCPAP therapy reliably all examinations were repeated (polysomnography under nCPAP therapy, questionnaire, clinical chemistry and DXA) und compared with the initial results.\nThe hormones leptin, obestatin and apelin were measured in four hour intervals (2, 6, 10 a.m. and 2, 6, 10 p.m.) including the night of diagnostic and therapeutic polysomnography. Normal sleep wake rhythms were retained, the average sleep time duration lay between six and nine hours. The light was turned off at 10 to 11 p.m., the patients were waked up at about 6 to 7 a.m.\nBlood drawings were performed throughout an indwelling superficial forearm catheter to minimalize the patients disturbance. In the second night a titration to find out the minimal sufficient therapeutic pressure was made. The goal was to reduce the pathologic AHI to a normal rage (<5/h). Additionally as a potential marker of influence the exact body composition was measured by using DXA (Lunar Prodigy™). Before demission each patient was instructed to use the nCPAP therapy regularly each night for at least six hours to reach a therapeutic effect. To clarify CPAP adherence, the built-in data stores of the CPAP devices were read out. This allows to establish the number of days of use within the last three months and to calculate the mean duration of use per night of treatment.\nTen healthy volunteers (n=9 male and n=1 female) with no sleep disorder were recruited to serve as a control group. As the ten patients with OSAS they underwent the same examinations (DXA, clinical chemistry, questionnaire etc.). To rule out subjects with OSAS the volunteers were measured under study conditions with Apnoe Screen (ApnoeScreen Pro, VIASYS Healthcare GmbH, Leibnizstrasse 7, 97204 Höchberg, Germany). 
Finally a comparison of the data of the volunteers and the patients regarding circadian rhythm of the adipocyte-derived hormones were made.\nAll persons studied gave written informed consent to participate in the study, the study protocol was approved by the local ethics committee.\nThe samples were collected in ethylendiamine tetraacetate-coated polypropylene tubes, centrifuged immediately at 3.000 rpm for 20 min at 0°C, and the clear plasma supernatant was then stored until plasma leptin, obestatin and apelin levels were measured as follows:\n[SUBTITLE] Measurement of serum apelin [SUBSECTION] Serum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\nSerum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum obestatin [SUBSECTION] Peripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\nPeripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum leptin [SUBSECTION] Serum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.\nSerum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.\nTen patients (n=10, male) with newly diagnosed symptomatic obstructive sleep apnoea syndrome (apnoea-hypopnea-index, AHI >10 per hour and Epworth-sleepiness-scale, ESS >10 points) were enrolled in the study. Obstructive apnoeas were defined as the absence of oronasal flow for at least 10s. Hypopnoeas were defined as reduction in airflow to ≤50% of the preceding stable baseline for 10s or longer together with a dip in oxygen saturation ≥4%. The mean number of apnoeas and hypopnoeas per hour of sleep was calculated as the apnoea/hypopnoea index (AHI). The Epworth-sleepiness-scale is a questionnaire to evaluate especially daytime sleepiness.\nAs a first step diagnostic polysomnography was performed. In this setting the degree of the OSAS and the sleep disturbance was quantified. The AHI as well as the sleep quality were assessed. Before polysomnographic measurements subjects underwent a complete medical history, clinical chemistry and physical examination. 
This was done to rule out serious other diseases (such as diabetes, hepatitis, cancer) and medication with an impact on insulin sensitivity, which might influence the analyses of adipocyte-derived hormones.\nThree months later and having used the nCPAP therapy reliably all examinations were repeated (polysomnography under nCPAP therapy, questionnaire, clinical chemistry and DXA) und compared with the initial results.\nThe hormones leptin, obestatin and apelin were measured in four hour intervals (2, 6, 10 a.m. and 2, 6, 10 p.m.) including the night of diagnostic and therapeutic polysomnography. Normal sleep wake rhythms were retained, the average sleep time duration lay between six and nine hours. The light was turned off at 10 to 11 p.m., the patients were waked up at about 6 to 7 a.m.\nBlood drawings were performed throughout an indwelling superficial forearm catheter to minimalize the patients disturbance. In the second night a titration to find out the minimal sufficient therapeutic pressure was made. The goal was to reduce the pathologic AHI to a normal rage (<5/h). Additionally as a potential marker of influence the exact body composition was measured by using DXA (Lunar Prodigy™). Before demission each patient was instructed to use the nCPAP therapy regularly each night for at least six hours to reach a therapeutic effect. To clarify CPAP adherence, the built-in data stores of the CPAP devices were read out. This allows to establish the number of days of use within the last three months and to calculate the mean duration of use per night of treatment.\nTen healthy volunteers (n=9 male and n=1 female) with no sleep disorder were recruited to serve as a control group. As the ten patients with OSAS they underwent the same examinations (DXA, clinical chemistry, questionnaire etc.). To rule out subjects with OSAS the volunteers were measured under study conditions with Apnoe Screen (ApnoeScreen Pro, VIASYS Healthcare GmbH, Leibnizstrasse 7, 97204 Höchberg, Germany). Finally a comparison of the data of the volunteers and the patients regarding circadian rhythm of the adipocyte-derived hormones were made.\nAll persons studied gave written informed consent to participate in the study, the study protocol was approved by the local ethics committee.\nThe samples were collected in ethylendiamine tetraacetate-coated polypropylene tubes, centrifuged immediately at 3.000 rpm for 20 min at 0°C, and the clear plasma supernatant was then stored until plasma leptin, obestatin and apelin levels were measured as follows:\n[SUBTITLE] Measurement of serum apelin [SUBSECTION] Serum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\nSerum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum obestatin [SUBSECTION] Peripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). 
All samples and standards were assayed in duplicate within the same assay.\nPeripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum leptin [SUBSECTION] Serum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.\nSerum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Sleep studies [SUBSECTION] The polysomnographies were performed according to the recommendations of the American Thoracic Society [17] and the German Sleep Society [18,19]. Sleep parameters were determined using the criteria of Rechtschaffen and Kahles [20] and microarousals were defined in accordance with the definitions of the American Sleep Disorders Association (ASDA) [21]. All patients were monitored for at least 6h in our sleep laboratory. The measured parameters included submental electromyography, snoring detected by a microphone, electrocardiography, thoracic and abdominal movements, bilateral electrooculography, electroencephalography, nasal airflow measured by oronasal thermistors during diagnostic polysomnographics and by an pneumotachograph during CPAP studies and oxyhaemoglobin saturation using a finger oxymeter (Microspan 3040G™, Jaeger amd Toennies, Würzburg, Germany). The measured data were evaluated by the same qualified doctor.\nThe polysomnographies were performed according to the recommendations of the American Thoracic Society [17] and the German Sleep Society [18,19]. Sleep parameters were determined using the criteria of Rechtschaffen and Kahles [20] and microarousals were defined in accordance with the definitions of the American Sleep Disorders Association (ASDA) [21]. All patients were monitored for at least 6h in our sleep laboratory. The measured parameters included submental electromyography, snoring detected by a microphone, electrocardiography, thoracic and abdominal movements, bilateral electrooculography, electroencephalography, nasal airflow measured by oronasal thermistors during diagnostic polysomnographics and by an pneumotachograph during CPAP studies and oxyhaemoglobin saturation using a finger oxymeter (Microspan 3040G™, Jaeger amd Toennies, Würzburg, Germany). The measured data were evaluated by the same qualified doctor.", "Ten patients (n=10, male) with newly diagnosed symptomatic obstructive sleep apnoea syndrome (apnoea-hypopnea-index, AHI >10 per hour and Epworth-sleepiness-scale, ESS >10 points) were enrolled in the study. Obstructive apnoeas were defined as the absence of oronasal flow for at least 10s. Hypopnoeas were defined as reduction in airflow to ≤50% of the preceding stable baseline for 10s or longer together with a dip in oxygen saturation ≥4%. The mean number of apnoeas and hypopnoeas per hour of sleep was calculated as the apnoea/hypopnoea index (AHI). 
The Epworth-sleepiness-scale is a questionnaire to evaluate especially daytime sleepiness.\nAs a first step diagnostic polysomnography was performed. In this setting the degree of the OSAS and the sleep disturbance was quantified. The AHI as well as the sleep quality were assessed. Before polysomnographic measurements subjects underwent a complete medical history, clinical chemistry and physical examination. This was done to rule out serious other diseases (such as diabetes, hepatitis, cancer) and medication with an impact on insulin sensitivity, which might influence the analyses of adipocyte-derived hormones.\nThree months later and having used the nCPAP therapy reliably all examinations were repeated (polysomnography under nCPAP therapy, questionnaire, clinical chemistry and DXA) und compared with the initial results.\nThe hormones leptin, obestatin and apelin were measured in four hour intervals (2, 6, 10 a.m. and 2, 6, 10 p.m.) including the night of diagnostic and therapeutic polysomnography. Normal sleep wake rhythms were retained, the average sleep time duration lay between six and nine hours. The light was turned off at 10 to 11 p.m., the patients were waked up at about 6 to 7 a.m.\nBlood drawings were performed throughout an indwelling superficial forearm catheter to minimalize the patients disturbance. In the second night a titration to find out the minimal sufficient therapeutic pressure was made. The goal was to reduce the pathologic AHI to a normal rage (<5/h). Additionally as a potential marker of influence the exact body composition was measured by using DXA (Lunar Prodigy™). Before demission each patient was instructed to use the nCPAP therapy regularly each night for at least six hours to reach a therapeutic effect. To clarify CPAP adherence, the built-in data stores of the CPAP devices were read out. This allows to establish the number of days of use within the last three months and to calculate the mean duration of use per night of treatment.\nTen healthy volunteers (n=9 male and n=1 female) with no sleep disorder were recruited to serve as a control group. As the ten patients with OSAS they underwent the same examinations (DXA, clinical chemistry, questionnaire etc.). To rule out subjects with OSAS the volunteers were measured under study conditions with Apnoe Screen (ApnoeScreen Pro, VIASYS Healthcare GmbH, Leibnizstrasse 7, 97204 Höchberg, Germany). Finally a comparison of the data of the volunteers and the patients regarding circadian rhythm of the adipocyte-derived hormones were made.\nAll persons studied gave written informed consent to participate in the study, the study protocol was approved by the local ethics committee.\nThe samples were collected in ethylendiamine tetraacetate-coated polypropylene tubes, centrifuged immediately at 3.000 rpm for 20 min at 0°C, and the clear plasma supernatant was then stored until plasma leptin, obestatin and apelin levels were measured as follows:\n[SUBTITLE] Measurement of serum apelin [SUBSECTION] Serum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\nSerum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). 
All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum obestatin [SUBSECTION] Peripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\nPeripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.\n[SUBTITLE] Measurement of serum leptin [SUBSECTION] Serum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.\nSerum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.", "Serum apelin was measured directly by a specific radioimmunoassay kit with a measurement range from 10–1280 pg/ml (Human Apelin-36 RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.", "Peripheral obestatin levels were measured using a commercial radioimmunoassay kit with a measurement range from 50–6400 pg/ml (Human Obestatin RIA kit, Phoenix Pharmaceuticals, Inc., 330 Beach Road, Burlingame, CA 94010, U.S.A.). All samples and standards were assayed in duplicate within the same assay.", "Serum leptin levels were measured by a commercially available radioimmunoassay kit (Human Leptin RIA kit, Millipore Corporation, 290 Concord Road, Billerica, MA 01821). The sensitivity limit was 0,5 ng/mL. All samples and standards were assayed in duplicate within the same assay.", "The polysomnographies were performed according to the recommendations of the American Thoracic Society [17] and the German Sleep Society [18,19]. Sleep parameters were determined using the criteria of Rechtschaffen and Kahles [20] and microarousals were defined in accordance with the definitions of the American Sleep Disorders Association (ASDA) [21]. All patients were monitored for at least 6h in our sleep laboratory. The measured parameters included submental electromyography, snoring detected by a microphone, electrocardiography, thoracic and abdominal movements, bilateral electrooculography, electroencephalography, nasal airflow measured by oronasal thermistors during diagnostic polysomnographics and by an pneumotachograph during CPAP studies and oxyhaemoglobin saturation using a finger oxymeter (Microspan 3040G™, Jaeger amd Toennies, Würzburg, Germany). The measured data were evaluated by the same qualified doctor.", "Patients and the healthy controls were approximatly the same age (58.9±10.2 vs. 53.6±7.7, p>0.05). As to be expected for caucasian subjects the two groups differ significantly regarding weight (BMI 31.7±3.2 vs. 26.7±2.3 kg/m2, p<0.05). Corresponding to this fact the patients had a significant higher percental body fat (26.6±7.5 vs. 33.7±3.7, p<0.05). 
The patients before therapy had a significant higher AHI (40.4±18.9 vs. 2.7±3.1 /h, p<0.05) and a lower sleep quality (ESS 11.7±1.7 vs. 5.1±2.1 points, p<0.05) compared to the healthy volunteers. In total 10 patients and 10 healthy controls were studied. From the 10 patients at the beginning 3 refused further CPAP therapy in the first 4–6 weeks due to discomfort. The remaining 7 patients applied the therapy according to the initial instructions for use (at least 6 hours per night). The patient group was measured twice: Before therapy and after three months of application of CPAP. The CPAP treatment significantly eliminated the previously observed obstructions. In consequence the initial AHI was reduced to a normal range (42.1±16.2 vs. 4.7±6.0 /h, p<0.05) and the mean SaO2 was ensured (91.2±4.1 vs. 94.6±1.7%, p<0.05). The subjective sleepiness and sleep quality were improved (ESS 11.8±1.8 vs. 5.0±2.0, p<0.05) (Table 1).\nHOMA-index between patients before and after therapy showed no significant differences (22.35±42.64 vs. 11.62±16,36, p>0.05). In contrast the HOMA-index comparing patients before therapy and healthy controls differ in a significant manner (19.15±29,83 vs. 2.73±3.37, p<0.05).\nOther parameters such as pCO2, pO2, pH and lung function parameters (FEV1 and VC) were also analysed. None of them could line out significancies between patients before or under therapy and volunteers, except pO2. The healthy volunteers showed a significant higher mean pO2 in comparison to the patients with newly diagnosed OSAS (82.1±8.01 vs. 72.5±11.3, p<0.05).\nFurthermore we examined any possible correlation of the parameters (pO2, pCO2, VC, FEV1, pH, age, height, weight, HOMA-Index, body fat, therapeutical pressure of CPAP, min. O2, basal O2, glucose, insulin, BMI, AHI and ESS points) to the peptide hormones (apelin, leptin, obestatin):\nLeptin plasma levels of the patient group before therapy had at each time of the observation a significant correlation to the body mass index (pmin. 0.55; pmax. 0.69), min. O2 (pmin. –0.76; pmax. −0.58) and AHI (pmin. 0.55; pmax. 0.62). A significant correlation was found between Leptin measurements of newly diagnosed OSAS patients and weight at 2 a.m., 2, 6, 10 p.m. as well as the body fat percentage at 6, 10 a.m., 2 p.m.. In the therapy group positive significant correlations could be observed between leptin plasma levels and pH (at 2, 6, 10 a.m., 2, 6, 10 p.m.), BMC in g (at 2, 6, 10 a.m., 2, 6, 10 p.m.), body fat in% (at 6 a.m., 2 p.m.), FEV1 and VC (at 2, 6, 10 a.m., 2, 10 p.m.) and age (6, 10 a.m., 2, 10 p.m.).\nBetween Apelin plasma levels and the mentioned parameters we could not find any significancies in the newly diagnosed OSAS group. In contrast the plasma apelin levels showed a significant correlation to the AHI (pmin. 0.82; pmax. 0.91), min. O2 (pmin. −0.75; pmax. −0.86) and BMI (pmin. 0.78; pmax. 0.85) in the therapy group at all observation points. Furthermore there was a significant correlation to the basal O2 (at 2, 6 p.m.), pressure and pO2 (at 2, 6, 10 a.m., 6, 10 p.m.).\nObestatin plasma levels were significantly correlated at time of OSAS diagnosis to AHI and weight (at 6, 10 p.m.), to body fat in% (at 2, 6 a.m., 2 p.m.), BMC in g (at 10 a.m., 2 p.m.) and pCO2 (at 6 a.m., 2 p.m.). 
After three months of therapy there was only a significant correlation to weight and height (at 10 a.m.).\nFurther we analysed the plasma levels of the three adipose tissue hormones in the three different observation groups:\n[SUBTITLE] Leptin [SUBSECTION] Controls had significant lower plasma leptin levels as the patients at time of diagnosis of OSAS at all observation points. Leptin levels decrease after 3 therapeutic months, in a significant manner at 6.00 and 10.00 a.m. (Figure 1).\nControls had significant lower plasma leptin levels as the patients at time of diagnosis of OSAS at all observation points. Leptin levels decrease after 3 therapeutic months, in a significant manner at 6.00 and 10.00 a.m. (Figure 1).\n[SUBTITLE] Apelin [SUBSECTION] The mean plasma apelin levels of patients with newly diagnosed OSAS are higher than those of the patients under therapy, but failed to reach significance (Figure 2).\nThe mean plasma apelin levels of patients with newly diagnosed OSAS are higher than those of the patients under therapy, but failed to reach significance (Figure 2).\n[SUBTITLE] Obestatin [SUBSECTION] Plasma obestatin levels of the volunteers were higher in comparison to the patients before or after three months of therapy except at 6 and 10 p.m. (Figure 3).\nWe also analysed correlations between the three adipose tissue hormones. We found a significant correlation between leptin and obestatin in the patient group before and after onset of CPAP therapy (except at 2 a.m. in the CPAP group).\nPlasma obestatin levels of the volunteers were higher in comparison to the patients before or after three months of therapy except at 6 and 10 p.m. (Figure 3).\nWe also analysed correlations between the three adipose tissue hormones. We found a significant correlation between leptin and obestatin in the patient group before and after onset of CPAP therapy (except at 2 a.m. in the CPAP group).", "Controls had significant lower plasma leptin levels as the patients at time of diagnosis of OSAS at all observation points. Leptin levels decrease after 3 therapeutic months, in a significant manner at 6.00 and 10.00 a.m. (Figure 1).", "The mean plasma apelin levels of patients with newly diagnosed OSAS are higher than those of the patients under therapy, but failed to reach significance (Figure 2).", "Plasma obestatin levels of the volunteers were higher in comparison to the patients before or after three months of therapy except at 6 and 10 p.m. (Figure 3).\nWe also analysed correlations between the three adipose tissue hormones. We found a significant correlation between leptin and obestatin in the patient group before and after onset of CPAP therapy (except at 2 a.m. in the CPAP group).", "The obstructive sleep apnoea syndrome is a common disorder in the adult population, especially in men. It is characterised by repetitive episodes of hypoxaemia caused by an obstruction of the upper airways. In consequence the patients suffer from sleep disturbance at night due to “arousals” [22]. Furthermore other diseases such as diabetes and hypertension are associated with OSAS [14,23]. Treatment with positive continous airway pressure (CPAP therapy) can eliminate the obstructions and therefore the states of hypoxaemia [24]. Sleep quality and daytime vigilance restores and the risk of suffering from associated other diseases like hypertension, insulin resistance or cardiovascular complications, etc. is reduced. 
Unfortunately, besides these positive effects CPAP does not always impact on the weight of the patients, as demonstrated in our study group.\nWith the recent discovery of novel peptides like apelin and obestatin [25], data from animal models suggested, that some of these hormones play a role in OSAS independently of obesity. This has also been hypothesized in the case of leptin. Leptin regulates body fat mass by decreasing food intake and increasing resting energy expenditure [26]. Its production declines during starvation [27]. It is established, that leptin levels are elevated in an obese population as well as in patients with OSAS. Leptin levels decrease under CPAP therapy independently from the obesity of the patients, which can be confirmed in our study. In previous studies, leptin was already discussed as a respiratory stimulus, which has to date only been confirmed in animal models [14].\nApelin is a hormone with cardiovascular properties and impact on glucose homoeostasis [28,29]. Apelin is secreted and expressed by human adipocytes and up-regulated by insulin and obesity [30]. The expression is strongly inhibited by fasting [31] and recovered after refeeding, in a similar way to insulin [30]. Several studies have demonstrated that Apelin is a potent angiogenic factor. And like other angiogenic factors the apelin gene is upregulated under hypoxia conditions [32]. In addition, inhibition of apelin signaling during frog embryonic development resulted in severe reduction in the formation of vascular structures [33]. A most recent study on the pathophysiologic role played by apelin showed that intraperitoneal administration in normal and obese mice for 14 days reduced body fat without affecting food intake, reduced insulin, leptin and triglycerides level and respiratory quotient [29]. It is mentioned that Apelin might have different effects, depending on whether it is acting in brain or peripherally [32]. As OSAS is strongly correlated with obesity, hyperinsulinaemia and hypoxic states we analysed this hormone before and after onset of CPAP therapy. Our measurements during a 24h period in patients with newly diagnosed OSAS could show higher apelin plasma levels than in the same observation group three months after sufficient nCPAP therapy. Apelin levels had a trend to decrease in CPAP treated subjects as previously demonstrated in leptin, too. This could be related to changes in the body fat distribution, that we could not evaluate in our study. However, the weight of our patients remained unchanged. Thus, respiratory effects of apelin (e.g. maintaining respiratory drive in the chronically hypoxic state of OSAS) can be discussed and warrant further studies. Elevated apelin levels in patients with OSAS have also recently been reported by Henley et al. after glucose challenge. They did also not find a salient variability of apelin levels during a 24h period. They reported lower apelin levels after CPAP therapy overnight, a trend that we can confirm in our patient group. As in previous observations [31] apelin plasma levels in our study group (therapied OSAS patients) were positively correlated to the body mass index. Like Henley et al. we found no correlation between plasma apelin and BMI in untreated OSAS patients [25].\nObestatin, as well as Ghrelin, is derived from a 117-residue prepro-peptide by posttranslational cleavage (Prepro Ghrelin) [11]. 
From animal models, Obestatin was first discussed as an antagonist of ghrelin due to suppression of food intake, inhibition of insulin secretion and suppression of gastric emptying [34]. Data about effects on sleep behaviour are rare. A significant increase in non REM sleep was reported after intraperitoneal or intracerebroventricular injection in rats [13]. To our knowledge, data about humans with OSAS are not available. With our approach, we could not demonstrate differences in the obestatin levels of our obese patient group with OSAS before and 3 months after onset of CPAP therapy. Apart from the conclusion, that there seem to be no significant direct effects of obestatin on respiratory function, these data fit well with the study of Anderwald-Stadler et al. [35], who observed almost no effects of insulin on obestatin during a clamp study in insulin-resistant humans. Patients with OSAS are typically insulin-resistant [36–38]. The small amount of an improvement of insulin sensitivity by CPAP in OSAS subjects demonstrated by our study group [36] does obviously not impact on obestatin levels.", "Previous studies raised suspicion that regulatory peptides such as leptin, apelin and obestatin might play a role in the pathogenesis of OSAS. In our study we could demonstrate different plasma levels of leptin and apelin regarding OSAS patients before and after onset of CPAP therapy in comparison to healthy volunteers, but not in obestatin. Further investigations and different approaches will have to investigate, if these hormones might have direct effects on OSAS and through which signaling pathways they might exhibit their effects, or if these hormonal changes are rather associated to concomitant metabolical changes in treated OSAS patients such as a modulation of insulin sensitivity or changes in body fat distribution." ]
[ null, "materials|methods", "subjects", null, null, null, null, "results", null, null, null, "discussion", "conclusions" ]
[ "OSAS", "adipose tissue hormones", "CPAP therapy", "apelin", "obestatin", "leptin" ]
Analysis of rehabilitation procedure following arthroplasty of the knee with the use of complete endoprosthesis.
21358604
The use of endoprosthesis in arthroplasty requires adaptation of rehabilitation procedures in order to reinstate the correct model of gait, which enables the patient to recover independence and full functionality in everyday life, which in turn results in an improvement in the quality of life.
BACKGROUND
We studied 33 patients following an initial total arthroplasty of the knee involving endoprosthesis. The patients were divided into two groups according to age. The range of movement within the knee joints was measured for all patients, along with muscle strength and the subjective sensation of pain on a VAS, and the time required to complete the 'up and go' test was measured. The gait model and movement ability were evaluated. The testing was conducted at baseline and after completion of the rehabilitation exercise cycle.
MATERIAL/METHODS
No significant differences were noted between the groups in the tests of the range of movement in the operated joint or of the muscle strength acting on the knee joint. Muscle strength was similar in both groups. In the "up and go" test, the time needed to complete the task was 2.9 seconds shorter after rehabilitation in Group 1 (average age 60.4) and 4.5 seconds shorter in Group 2 (average age 73.1).
RESULTS
The physiotherapy procedures we applied, following arthroplasty of the knee with cemented endoprosthesis, brought about good results in both research groups of older patients.
CONCLUSIONS
[ "Aged", "Arthroplasty, Replacement, Knee", "Employment", "Humans", "Knee Prosthesis", "Middle Aged", "Movement", "Physical Fitness" ]
3524729
null
null
null
null
Results
An improvement in the measured parameters within both groups was obtained in the tests conducted, with the better results being obtained in the group with younger patients. The time that had elapsed from the operation to the beginning of rehabilitation could have also had an effect on the result, something that is also confirmed in the tests conducted by Błaszczak et al (2007) where the results obtained by patients following endoprosthesis of the knee joint, who underwent rehabilitation after 8.5 months and 5.4 years following the operation, were compared. In these tests it was claimed that in both groups of patients there occurred an improvement in the functioning of the joint, somewhat more so in those who commenced rehabilitation within a year of the operation itself [9]. In the evaluation of the muscle strength acting on the knee joint there was confirmed the observations of other authors on the weakening of muscles in the non-operated on limb. This could have been brought about by several factors, amongst which the most often cited are degenerative alterations in the joint, as well as restriction in activeness resulting most often from sensations of pain [10] and disturbances in the mechanics of gait prior to arthoplasty, which equally may be causes of muscle weakening [2]. In analysing the results obtained no statistically significant differences were noted in either of the two test groups, although there were noted better results for the group of individuals under the age of 65.
Conclusions
The application of physiotherapeutic methods after the initial arthroplasty of the knee with the use of a complete cemented endoprosthesis brought about good results in both age groups of patients tested. The results of the physiotherapy evaluation were better in Group 1, which comprised patients aged under 65, though the difference was not statistically significant.
[ "Background", "Results" ]
[ "The first endoprosthesis of a knee joint was implanted in the 1950s, with work upon its construction being carried out by, among others Smith-Petersen, Waldius, Campbell. These prostheses had a hinge construction and as a result often became loose. In the 1960s and 1970s a new type of prosthesis became available. In 1971 the Canadian Frank H. Gunston in cooperating with Sir John Charnley on a new type of hip joint prosthesis, composed of a metal femoral part and a polyethylene acetabulum fastened by cement, developed a polycentric prosthesis of the knee joint [1]. From then onwards there has occurred constant improvement based on conducted research in enhancing prosthesis construction in order to allow for the best possible reflection of movement in a natural joint, as equally work into the materials used in its construction so that the best biocompatibility with human tissue can be obtained. Many prosthesis models have been built. At present two types are the most frequently used: unbound prosthesis, the so-called sledge prosthesis, and semi bound prosthesis, the so-called rear stabilised with a hinge fulfilling the function of the posterior cruciate ligament. One may list many types of prosthesis allowing at present individual application with regard to a given patient, including also, for example, unicompartmental prosthesis. Both types of prosthesis have their adherents. In the case of unbound prostheses the possibility of achieving a greater range of bending movement in the knee joint as well as the physiological model of gait is emphasised. In turn semi bound prostheses give greater possibilities for correcting deformation in the knee joint, however they are also the cause of an appearance of cutting forces can result in loosening. Yet, as is emphasised by the authors of publications the frequency of loosening in both types of prosthesis is comparable [2,3]. The progress and development of arthroplasty with the use of endoprosthesis resulted in the need to adapt the rehabilitation proceedings in order for it to be the most beneficial for the patient. There are many factors equally before, during and post operational which condition the final treatment effect obtained, while the main aim of the operation is a reduction in pain experienced and an increase in the scope of movement within the joint, which has been limited more often than not by pathological changes generally defined as degenerative [4,5]. Both of these factors are designed to return a correct model of gait as well as enable the patient to return to independence and activeness in everyday life which at the same time influences the patient’s quality of life. Of importance therefore is the appropriate conducting of rehabilitation directed equally to individual patient possibilities as contemporary treatment norms. One may still find described in publications from the mid 1990s post-operational proceedings commenced on the 2–3 day through kinesitherapy (active-passive exercise, active) intensified through subsequent days. Only on the eighth day was it recommended to sit with suspended lower limbs while on the 10th day tilting to an erect position was commenced in order for the patient to be taught to walk from day 12–14 [6]. 
In the tests of Nolewajka et al (2008) into the factors of thrombosis risk for deep veins of the lower limbs in patients after complete arthroplasty, it was claimed that besides other factors such as, for example, age, obesity, and operation duration time equally the time spent erect had a significant effect in the prevention of thrombotic-embolic complications. At present, bearing in mind the need for the individual application of the rehabilitation programme for each patient, rehabilitation proceedings are commenced on the 1–2 day starting with breathing, isometric, active-passive and active exercises. On the 2–3 day making the patient erect is conducted in the initial period with the help of a tall Zimmer frame and subsequently the learning to walk with elbow crutches. Around day 7–9 learning to walk up stairs is introduced and depending on the patient’s fitness, walking with just one elbow crutch [2,7,8].", "An improvement in the measured parameters within both groups was obtained in the tests conducted, with the better results being obtained in the group with younger patients. The time that had elapsed from the operation to the beginning of rehabilitation could have also had an effect on the result, something that is also confirmed in the tests conducted by Błaszczak et al (2007) where the results obtained by patients following endoprosthesis of the knee joint, who underwent rehabilitation after 8.5 months and 5.4 years following the operation, were compared. In these tests it was claimed that in both groups of patients there occurred an improvement in the functioning of the joint, somewhat more so in those who commenced rehabilitation within a year of the operation itself [9].\nIn the evaluation of the muscle strength acting on the knee joint there was confirmed the observations of other authors on the weakening of muscles in the non-operated on limb. This could have been brought about by several factors, amongst which the most often cited are degenerative alterations in the joint, as well as restriction in activeness resulting most often from sensations of pain [10] and disturbances in the mechanics of gait prior to arthoplasty, which equally may be causes of muscle weakening [2]. In analysing the results obtained no statistically significant differences were noted in either of the two test groups, although there were noted better results for the group of individuals under the age of 65." ]
[ null, "results" ]
[ "Background", "Material and Methods", "Results", "Results", "Conclusions" ]
[ "The first endoprosthesis of a knee joint was implanted in the 1950s, with work upon its construction being carried out by, among others Smith-Petersen, Waldius, Campbell. These prostheses had a hinge construction and as a result often became loose. In the 1960s and 1970s a new type of prosthesis became available. In 1971 the Canadian Frank H. Gunston in cooperating with Sir John Charnley on a new type of hip joint prosthesis, composed of a metal femoral part and a polyethylene acetabulum fastened by cement, developed a polycentric prosthesis of the knee joint [1]. From then onwards there has occurred constant improvement based on conducted research in enhancing prosthesis construction in order to allow for the best possible reflection of movement in a natural joint, as equally work into the materials used in its construction so that the best biocompatibility with human tissue can be obtained. Many prosthesis models have been built. At present two types are the most frequently used: unbound prosthesis, the so-called sledge prosthesis, and semi bound prosthesis, the so-called rear stabilised with a hinge fulfilling the function of the posterior cruciate ligament. One may list many types of prosthesis allowing at present individual application with regard to a given patient, including also, for example, unicompartmental prosthesis. Both types of prosthesis have their adherents. In the case of unbound prostheses the possibility of achieving a greater range of bending movement in the knee joint as well as the physiological model of gait is emphasised. In turn semi bound prostheses give greater possibilities for correcting deformation in the knee joint, however they are also the cause of an appearance of cutting forces can result in loosening. Yet, as is emphasised by the authors of publications the frequency of loosening in both types of prosthesis is comparable [2,3]. The progress and development of arthroplasty with the use of endoprosthesis resulted in the need to adapt the rehabilitation proceedings in order for it to be the most beneficial for the patient. There are many factors equally before, during and post operational which condition the final treatment effect obtained, while the main aim of the operation is a reduction in pain experienced and an increase in the scope of movement within the joint, which has been limited more often than not by pathological changes generally defined as degenerative [4,5]. Both of these factors are designed to return a correct model of gait as well as enable the patient to return to independence and activeness in everyday life which at the same time influences the patient’s quality of life. Of importance therefore is the appropriate conducting of rehabilitation directed equally to individual patient possibilities as contemporary treatment norms. One may still find described in publications from the mid 1990s post-operational proceedings commenced on the 2–3 day through kinesitherapy (active-passive exercise, active) intensified through subsequent days. Only on the eighth day was it recommended to sit with suspended lower limbs while on the 10th day tilting to an erect position was commenced in order for the patient to be taught to walk from day 12–14 [6]. 
In the tests of Nolewajka et al (2008) into the factors of thrombosis risk for deep veins of the lower limbs in patients after complete arthroplasty, it was claimed that besides other factors such as, for example, age, obesity, and operation duration time equally the time spent erect had a significant effect in the prevention of thrombotic-embolic complications. At present, bearing in mind the need for the individual application of the rehabilitation programme for each patient, rehabilitation proceedings are commenced on the 1–2 day starting with breathing, isometric, active-passive and active exercises. On the 2–3 day making the patient erect is conducted in the initial period with the help of a tall Zimmer frame and subsequently the learning to walk with elbow crutches. Around day 7–9 learning to walk up stairs is introduced and depending on the patient’s fitness, walking with just one elbow crutch [2,7,8].", "The research was conducted in 2008 at the Cracow Rehabilitation Centre. 33 patients were qualified for the tests after having undergone total arthroplasty of the knee with the use of complete cemented endoprosthesis. The patients were divided into two groups according to age. In Group 1 were 15 patients aged up to 65, while Group 2 was composed of 18 individuals aged over 65. All the patients were rehabilitated according to the same programme which had been used in the Cracow Rehabilitation Centre since 2005. Before the operation the patient is taught to walk on elbow crutches with a gradual burdening of the lower limb in which the replaced knee joint is to be, as well as undergoing instruction as to the course of the post-operation rehabilitation programme. On the 1–2 day after the operation kinesitherapy is begun (breathing exercises, isometric exercises, constant passive movement, active movements of the ankle joint of the operated on limb as well as active movements of the other lower limb and of the upper limbs). On day 2–3 active exercises on the operated on limb are introduced with subsequent transversal straightening through sitting with suspended lower limbs as well as learning to walk, initially with the aid of a high Zimmer frame. On subsequent days the kinesitherapy is intensified in the form of time, the number of repetitions and the type of exercises. Also in teaching the patient to walk the patient undergoes by turns its subsequent stages: walking on elbow crutches, walking up stairs (around day 7–9), walking on one elbow crutch. Between day 10 and 14 after the operation the patient is discharged from the Department of Surgery, Orthopaedics and Rehabilitation and instructed on the course of further rehabilitation. In certain cases the patient continues rehabilitation at home according to the instructions given, and sometimes in the form of hospitalisation at a rehabilitation department. The rehabilitation programme in department conditions involves constant passive movement, isometric movements (chiefly of the quadriceps thigh muscle) active free and active with resistance, exercises on ladders and on a rotor as well as the PNF method and gait re-education. 
Within the physiotherapy, one individually geared for each patient, cryotherapy, magnetic fields and laser therapy are employed.\nFor all of the patients qualified for the tests a measurement of the scope of movements in knee joints was conducted using a hand goniometer, the strength of the muscle acting on the knee joint according to the Lovett test, subjective pain complaints according to the VAS scale as well as the noting of the time taken to complete the ‘up and go’ test. There was also analysed the means of translocating and an evaluation of fitness in a test devised on a scale of 0 to 6 points, where 0 points represented an inability to move position and extremely intensive constant complaints of pain, noticeable movement limitation in the operated on joint; 1 point – walking on crutches, pain while at rest and in movement, limitation to movement in the operated on joint; 2 points – walking on crutches for short distances, pain experienced only during movement, bending of the knee to 40°; 3 points – capable of walking on crutches, pain during movement, bending in the knee 40–60°; 4 points – able to walk competently on one crutch, minimal pain experienced that recedes with rest, bending between 60° and 80°; 5 points – walking without crutches with breaks, intermittent not too intensive pain, bending to 90°; 6 points – normal walking without pain, bending in the joint above 90°. The tests were conducted twice, before the cycle of rehabilitation exercises (Test 1) and after their completion (Test 2). All the results obtained were entered into the research questionnaire. There was also noted information in relation to a visual and manual evaluation of the fibre from the area around the operated knee joint. The results were subjected to statistical analysis in Microsoft Excel.", "The research encompassed 33 patients, who were divided according to age, the borderline being established as 65, into two groups. Group 1 comprised 15 individuals (average age 60.4 years; min min=52; max=65). 18 people were qualified into Group 2 (average age x=73.1 years; min=69; max=78 years). On the basis of the data on body mass and height the BMI was calculated, the average value of which was for the particular groups: Group 1 x=29.02 (min=22.2; max=39.5); Group 2 x=28.89 (min=21.9; max=35.7). The number of days from the moment of operation to the first test was for Group 1 49 days, and in Group 2–58 days. In the test questionnaire was recorded information on the place of abode and the professional situation of the patients under examination. This data is presented in Figures 1 and 2. In Group 1 there were noticeably more people living in large towns and who were on disability benefit, something that is understandable given their age. In Group 2 were more people who lived in the countryside and who were retired.\nWithin the research into the scope for movements in the operated on joint and the muscle strength acting on the knee joint there was not noted in Test 1 any significant differences between the groups. The range for bending displayed was on average for Group 1 x=63.27°, for Group 2 x=65.72°. In the measurements of extension there was noted a deficiency in this movement in both groups tested – in Group 1 (in 10 patients) on average x=−6.33°; in Group 2 (in 11 patients) on average x=−8.89°. The muscle strength was similar in both groups and thus in the operated on limb was respectively 3.1 and 3.2, in the non-operated on limb 4.6 and 4.8 according to the Lovett scale. 
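As a side note, the BMI referred to above is body mass in kilograms divided by the square of height in metres; a minimal sketch with hypothetical values (not study data):

def body_mass_index(mass_kg: float, height_m: float) -> float:
    """BMI = mass / height**2, in kg/m**2."""
    return mass_kg / height_m ** 2

# Hypothetical patient, for illustration only
print(round(body_mass_index(84.0, 1.70), 1))  # 29.1, close to the reported Group 1 mean of 29.02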
In test 2 there was observed an increase in the range of bending and a reduction in the extension deficit, as well as an increase in muscle strength. In Group 1 the scope of bending increased on average by 25.3°, while the deficit of extension reduced on average by 3.47°. In Group 2 the scope of bending increased on average by 23.9°, while the deficit in extension reduced by 5.89°. The muscle strength measured according to the Lovett scale underwent an increase in both tested group to the amount of 1 degree. In Group 1 for Test 1 there was noted a warming up of the area around the knee joint in 73% of patients. In Group 2 a warming up of the area around the operated on knee joint and ballottement of the kneecap was noted in 20%, while in Group 2 this was the case in 22% of those tested. In the ‘up and go’ test there was noted a shortening in the time for test completion in both groups examined (in Group 1 by 2.9 seconds, while in Group 2 by 4.5). The results illustrating fitness are presented in Figure 3 (the data from Test 1) and Figure 4 (the data from test 2), while in Figures 5 and 6 there is presented the means of patient movement within the particular tests and groups. In both groups the patients confirmed a diminished intensification of pain symptoms within a subjective evaluation.", "An improvement in the measured parameters within both groups was obtained in the tests conducted, with the better results being obtained in the group with younger patients. The time that had elapsed from the operation to the beginning of rehabilitation could have also had an effect on the result, something that is also confirmed in the tests conducted by Błaszczak et al (2007) where the results obtained by patients following endoprosthesis of the knee joint, who underwent rehabilitation after 8.5 months and 5.4 years following the operation, were compared. In these tests it was claimed that in both groups of patients there occurred an improvement in the functioning of the joint, somewhat more so in those who commenced rehabilitation within a year of the operation itself [9].\nIn the evaluation of the muscle strength acting on the knee joint there was confirmed the observations of other authors on the weakening of muscles in the non-operated on limb. This could have been brought about by several factors, amongst which the most often cited are degenerative alterations in the joint, as well as restriction in activeness resulting most often from sensations of pain [10] and disturbances in the mechanics of gait prior to arthoplasty, which equally may be causes of muscle weakening [2]. In analysing the results obtained no statistically significant differences were noted in either of the two test groups, although there were noted better results for the group of individuals under the age of 65.", "The application of physiotherapeutic methods after the initial arthroplasty of the knee with the use of complete cemented endoprosthesis brought about good results in both age-groups of patients tested.\nThe results of physiotherapy evaluation were better in group 1, in which were patients aged under 65 though this was not statistically significant." ]
[ null, "materials|methods", "results", "results", "conclusions" ]
[ "physiotherapy", "kinesitherapy", "arthoplasty of the knee" ]
Assessment of vestibulocochlear organ function in patients meeting radiologic criteria of vascular compression syndrome of vestibulocochlear nerve--diagnosis of disabling positional vertigo.
21358605
This study sought to assess the vestibulo-cochlear organ in patients meeting radiologic criteria of vascular compression syndrome (VCS) of the eighth cranial nerve.
BACKGROUND
The authors performed a retrospective analysis of 34 patients (18 women, 16 men; mean age, 49 years) treated between 2000 and 2007 with VCS of the eighth cranial nerve diagnosed by MRI. Contrast-enhanced magnetic resonance imaging identified an anterior inferior cerebellar artery vascular loop adhering to the vestibulocochlear nerve in all 34 cases. All patients were given pure tone audiometry, distortion product otoacoustic emissions, auditory brainstem response, and electroneurographic examinations.
MATERIAL/METHODS
The most common symptoms were unilateral hearing loss (82%), unilateral tinnitus (80%), and dizziness (74%). The most frequent abnormalities in the performed examinations were specific auditory brainstem response changes (interpreted according to Möller's criteria) in 86% of cases and sensorineural hearing loss on pure tone audiometry (82%). Abnormal findings on electronystagmography consisted of absence (12%) or weakness (35%) of the caloric response. No patients were surgically treated.
RESULTS
Significantly, there is no more weakness or absence of the caloric response of a vestibular organ in a patient with vascular compression of the vestibulo-cochlear nerve. Despite an absence of electrophysiologic testing of vestibular organ dysfunction, most examined patients (meeting the radiologic criteria of VCS of the eighth cranial nerve) had subjective symptoms like vertigo and dizziness. Disabling positional vertigo should be considered in the differential diagnosis of vertigo when accompanied by tinnitus or deafness.
CONCLUSIONS
[ "Adult", "Aged", "Ear", "Female", "Gadolinium", "Humans", "Magnetic Resonance Imaging", "Male", "Middle Aged", "Nerve Compression Syndromes", "Radiography", "Vertigo", "Vestibulocochlear Nerve" ]
3524734
null
null
null
null
Results
The most common symptoms were unilateral tinnitus in 27 patients (79% of cases), unilateral hearing loss in 28 patients (82%), and dizziness in 25 patients (74%). The symptoms lasted for 2 to 20 years before the diagnosis (mean, 8 years). Regarding the otorhinolaryngologic organs, no pathogenic lesions were noted. Pure tone audiometry revealed sensorineural high-frequency deafness (over 80 dB) in 28 patients (82%). Impedance audiometry measurements (middle ear examination) showed a type A tympanogram in 32 patients (94%) and type C in 2 patients (6%). Auditory brainstem response examination revealed retrocochlear impairment in 29 patients (86%) (meeting Möller’s criteria of disabling positional vertigo: prolongation of the wave I–III interval), cochlear impairment in 3 patients (9%), and conductive impairment in 2 patients (6%). Electroneurographic examination revealed spontaneous and positional nystagmus present in 6 patients (18%) and absent in 28 (82%); optokinetic nystagmus was normal in 14 (41%) and disturbed in 20 patients (59%). Bicaloric testing revealed a normal response in 17 patients (50%), caloric weakness in 14 patients (41%), and absence of a caloric response in 3 cases (9%). Evoked otoacoustic emission testing (distortion product otoacoustic emissions, DP-gram, 1–4 kHz) in patients with ipsilateral retrocochlear impairment on auditory brainstem response showed distorted product otoacoustic emissions in 9 patients (26%). Radiographs of the ears according to Stenvers revealed no changes. In all cases, MR imaging showed that the anterior inferior cerebellar artery was adjacent to the vestibulocochlear nerve. All patients were referred for neurosurgical consultations, yet so far, none has given consent for surgery (Figures 1, 2).
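For illustration, the retrocochlear criterion cited above (prolongation of the wave I–III interpeak interval on auditory brainstem response, per Möller) can be expressed as a simple check; the numeric normative limit is a hypothetical parameter here, since the text names the criterion but gives no cut-off value:

def i_iii_interval_prolonged(wave_i_ms: float, wave_iii_ms: float, upper_limit_ms: float) -> bool:
    """Flag a prolonged ABR wave I-III interpeak interval against a laboratory cut-off."""
    return (wave_iii_ms - wave_i_ms) > upper_limit_ms

# Hypothetical latencies in milliseconds, illustrative only
print(i_iii_interval_prolonged(1.7, 4.5, upper_limit_ms=2.3))  # True: interval = 2.8 ms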
Conclusions
We conclude the following: Significantly, there is no more weakness or absence of a caloric response of the vestibular organ in a patient with vascular compression syndrome of the vestibulocochlear nerve. Despite the absence of electrophysiologic signs of vestibular organ dysfunction, most of the examined patients (meeting the radiologic criteria of vascular compression syndrome of the eighth cranial nerve) had subjective symptoms, such as vertigo and dizziness. However, 86% of patients had abnormal auditory brainstem responses meeting Möller’s criteria. Disabling positional vertigo is the syndrome that should be considered in the differential diagnosis in cases of vertigo, especially when accompanied by tinnitus or deafness.
[ "Background" ]
[ "The term vascular compression syndrome was introduced to professional literature by McKenzie in 1936 [1] and popularized by Jannetta in 1975, to refer to a group of diseases caused by direct contact of a blood vessel with a cranial nerve stem [2]. Most authors believe that the anterior inferior cerebellar artery is the vessel responsible for the vascular compression syndrome of the eighth cranial nerve [3,4].\nNumerous theories explain the pathogenesis of the vascular compression syndrome in the vestibulocochlear nerve and facial nerve. Sabarbati and associates state that the place of nerve lesion is situated strictly in the transition zone of the cranial nerve, where the myelin of the central nervous system of the neurolemma turns into peripheral myelin [3]. This transition zone is also labeled the Obersteiner-Redlich zone [4].\nThe blood vessel adhering to the nerve axons in the transition zone may initially cause, as Rasminsky claims, a topical demyelinization of the neurolemma, followed by nerve lesion. In may lead to ectopic nerve stimulations for dromic as well as antidromic conduction [5]. Consequences of that are disturbances in the vestibular and cochlear nuclei appearing as hearing loss, tinnitus, and vertigo.\nJannetta in 1984 introduced a new term, disabling positional vertigo [6]. Disabling positional vertigo refers to a group of patients with symptoms of the vascular compression of eighth nerve, and it is used to describe a cochleovestibular organ impairment in these cases. This syndrome was distinct from other established vertigo syndromes based on clinical and electrophysiologic criteria. Existence of disabling positional vertigo syndrome is still not universally accepted, and no specific findings have been credible enough to detect this syndrome [7,8]. Only Möller proposed specific auditory brainstem response abnormalities as criteria in disabling positional vertigo diagnosis [9]. According to some researchers, the diagnosis of disabling positional vertigo could be based on a combination of clinical symptoms and electrophysiologic findings. Disabling positional vertigo comprises constant positional vertigo or dysequilibrium, constant nausea, sensation of the floor constantly moving “as if one was on a boat,” vertigo with any head movement (positional vertigo), and commonly associated symptoms include hearing loss and tinnitus [6,9–11]. This study sought to assess the vestibulocochlear organ in patients meeting radiologic criteria of the vascular compression syndrome of eighth cranial nerve." ]
[ null ]
[ "Background", "Material and Methods", "Results", "Discussion", "Conclusions" ]
[ "The term vascular compression syndrome was introduced to professional literature by McKenzie in 1936 [1] and popularized by Jannetta in 1975, to refer to a group of diseases caused by direct contact of a blood vessel with a cranial nerve stem [2]. Most authors believe that the anterior inferior cerebellar artery is the vessel responsible for the vascular compression syndrome of the eighth cranial nerve [3,4].\nNumerous theories explain the pathogenesis of the vascular compression syndrome in the vestibulocochlear nerve and facial nerve. Sabarbati and associates state that the place of nerve lesion is situated strictly in the transition zone of the cranial nerve, where the myelin of the central nervous system of the neurolemma turns into peripheral myelin [3]. This transition zone is also labeled the Obersteiner-Redlich zone [4].\nThe blood vessel adhering to the nerve axons in the transition zone may initially cause, as Rasminsky claims, a topical demyelinization of the neurolemma, followed by nerve lesion. In may lead to ectopic nerve stimulations for dromic as well as antidromic conduction [5]. Consequences of that are disturbances in the vestibular and cochlear nuclei appearing as hearing loss, tinnitus, and vertigo.\nJannetta in 1984 introduced a new term, disabling positional vertigo [6]. Disabling positional vertigo refers to a group of patients with symptoms of the vascular compression of eighth nerve, and it is used to describe a cochleovestibular organ impairment in these cases. This syndrome was distinct from other established vertigo syndromes based on clinical and electrophysiologic criteria. Existence of disabling positional vertigo syndrome is still not universally accepted, and no specific findings have been credible enough to detect this syndrome [7,8]. Only Möller proposed specific auditory brainstem response abnormalities as criteria in disabling positional vertigo diagnosis [9]. According to some researchers, the diagnosis of disabling positional vertigo could be based on a combination of clinical symptoms and electrophysiologic findings. Disabling positional vertigo comprises constant positional vertigo or dysequilibrium, constant nausea, sensation of the floor constantly moving “as if one was on a boat,” vertigo with any head movement (positional vertigo), and commonly associated symptoms include hearing loss and tinnitus [6,9–11]. This study sought to assess the vestibulocochlear organ in patients meeting radiologic criteria of the vascular compression syndrome of eighth cranial nerve.", "Material consisted of 34 patients (18 women, 16 men; mean age, 49 years; age range, 36–74 years), with from vascular compression syndrome of eighth cranial nerve recognized by means of angio-MRI. Contrasted magnetic resonance imaging identified a vascular loop of the anterior inferior cerebellar artery near to cochleovestibular nerve in all 34 cases. After taking a history and doing an otolaryngologic examination, all 34 patients underwent pure tone audiometry, impendence audiometry, distortion product otoacoustic emissions, auditory brainstem response, electroneurographic, radiographs of temporal bones in Stenvers projection (evaluation of eighth nerve tumors), as well as neurologic and ophthalmologic consultations. 
These were followed by an MRI targeting the areas of the cerebellopontine angle in sequences SE/T1, T2, an PD in transverse planes and frontal planes (where SE and PD means technical parameters of the MRI examination).", "The most-common symptoms were unilateral tinnitus in 27 patients (79% cases), unilateral hearing loss in 28 patients (82%), and dizziness in 25 patients (74%). The symptoms lasted for 2 to 20 years before the diagnosis (mean, 8 years). As regarding the otorhinolaryngology organs, no pathogenic lesions have been noted. Pure tone audiometry revealed sensorineural high-frequency deafness (over 80 dB) in 28 patients (82%). Impedance audiometry measurements (middle ear examination) showed tympanogram A type in 32 patients (94%) and C type in 2 patients (6%). Auditory brainstem response examination revealed retrocochlear impairment in 29 patients (86%) (meeting Möller’s criteria of disabling positional vertigo-prolongation of wave I–III), co-chlear impairment in 3 patients (9%), and conductive impairment in 2 patients (6%). Electroneurographic examination revealed spontaneous and positional nystagmus present in 6 patients (18%) and it was absent in 28 (82%); optokinetic nystagmus was normal in 14 (41%) and disturbed in 20 patients (59%). Bicaloric testing revealed a normal response in 17 patients (50%), caloric weakness in 14 patients (41%), and absence of a caloric response in 3 cases (9%). Evoked otoacoustic emission (distortion product otoacoustic emissions, DP-gram 1–4 kHz frequencies in patients with ipsilateral, retrocochlear impairment in auditory brainstem response showed distorted product otoacoustic emissions in 9 patients (26%). Radiographs of ears, according to Stenvers revealed no changes. In all cases, MR imagining showed that the anterior inferior cerebellar artery was adjacent to the vestibulocochlear nerve. All patients were referred for neurosurgical consultations, yet so far, none has given consent for surgery (Figures 1, 2).", "In 2000–2007, among all the patients of our department with sensorineural hearing loss, tinnitus, and dizziness, we found 34 meeting the radiologic criteria of vascular compression syndrome of the eighth cranial nerve. In 25 cases, typical subjective symptoms accompanying disabling positional vertigo could be recognized. Vascular compression syndrome, as claimed by Schwaber, is found in both men and women between the ages of 20 and 70 years (2:1 in incidence in women and men) [12]. Other authors report the following spectrum of symptoms of disabling positional vertigo: Schwaber mentions 77% of patients with unilateral hypoacusia and 57% with tinnitus, 84% of patients with periodic rotatory vertigo, and in 6.9% of patients with vertigo, nausea, and vomiting [12]. Makins found hypoacusia in 85% of the cases and tinnitus in 41% [1]. Electroneurographic examinations revealed changes in nearly 90% of patients; they comprise, according to Möller and Ryu, the incidence of spontaneous nystagmus and positional nystagmus, as well as a reduction of, or lack of, labyrinthian excitability in caloric tests [13,14]. In 10% of cases, Schwaber revealed hypersensitivity of the labyrinth in caloric tests, which was not confirmed in our studies [12].\nAuditory brainstem response examination is an important element in the diagnostics of vascular compression syndrome, as it allows us to locate the site of nerve lesion, thus being indicative for MRI (Magnetic Resonance Imaging). 
All authors confirm the invaluable character of imaging, in particular MRI, and MRI angiography in diagnosing vascular compression syndrome of the eighth nerve and qualifying its cases for microsurgery. The presence of an adjacent loop of a vessel (usually the anterior inferior cerebellar artery) touching the vestibulocochlear nerve, coupled with manifestations characteristic for vascular compression syndrome can be found in 25% (according to Makins) or 35% of cases (according to Schwaber) [1,12]. Much controversy in diagnosing vascular compression syndrome arises from frequent lack of congruence between MRI (showing the adjacency of vessel and nerve) and lack of otoneurologic manifestations. There are many theories explaining that phenomenon. The most credible one comes from Sabarbati according to whom, nerve lesions may occur only when the vessel is adjacent to the transition zone of the nerve [3].\nSchwaber and Hall selected a group of 63 patients with diagnosed vascular compression syndrome of the eighth. Hearing loss was recognized in 51 patients (81%), 33 of whom had high-frequency loss, and 14 patients had mid-frequency loss. In auditory brainstem response examination, neuritic hearing loss was diagnosed in 75% patients, whereas a decrease in cochlear excitability was observed in 93% of patients [12].\nNoguchi and Ohgaki suggest that vessel compression of the eighth nerve as a cause of vertigo is still a debatable issue [15]. They examined 5 patients with VSC (Vascular Compression Syndrome eighth, diagnosed with an angio-MRI scan, who also complained of vertigo. Audiometric tests showed normal results in 2 cases, bilateral midfrequency hypoacusia in 1 case, and fluctuating hypoacusia as in Meniere disease in 2 cases. Prolonged wave I–III interval in an auditory brainstem response examination, suggested by Möller as a criterion for diagnosing eighth nerve damage caused by vessel compression, was observed only in 1 case. Lack of cochlear excitability occurred in 2 cases, whereas spontaneous nystagmus, and optokinetic nystagmus were absent in all cases. The authors conclude that they found no specific symptoms of cochleovestibular apparatus that might be caused by eighth nerve compression suggested by an MRI examination. Adamczyk said, that contact between a trigeminal nerve root and an artery in the prepontine cistern is a frequently seen anatomical variant. Therefore, detection of such a variant is not equivalent to finding the cause of a patient’s complaints [16].\nRyu and Yamamoto examined 10 patients with vascular compression syndrome of the eighth nerve. Only 2 patients showed abnormal auditory brainstem response pattern, and electroneurographic examination revealed a slight decrease in cochlear excitability in 3 cases. They also mention “still continuing skepticism about the existence of vascular compression syndrome of the eighth nerve” [13,14]. All the results obtained may be treated as an unquestionable evidence that the examination with the use of MRI should be the basic investigation for the visualisation of neurovascular suggested in the cases of nerves V, VII, VIII, IX, X, XI compression. 
[17–19].", "We conclude the following:\nSignificantly, there is no more weakness or absence of a caloric response of the vestibular organ in a patient with vascular compression syndrome of vestibulocochlear nerve.\nDespite of absence of electrophysiologic signs of vestibular organ dysfunction, most of the examined patients (meeting the radiologic criteria of vascular compression syndrome of eighth cranial nerve) had subjective symptoms, such as vertigo and dizziness. However, 86% of patients had abnormal auditory brainstem responses meeting Möller’s criteria.\nDisabling positional vertigo is the syndrome that should be considered in the differential diagnosis in cases of vertigo, especially when accompanied by tinnitus or deafness." ]
[ null, "materials|methods", "results", "discussion", "conclusions" ]
[ "vertigo", "hearing loss", "tinnitus", "nerve compression syndromes" ]
In vitro evaluation of the cytotoxicity of FotoSan™ light-activated disinfection on human fibroblasts.
21358611
Root canal disinfection needs to be improved because current techniques are not able to eliminate all microorganisms present in the root canal system. The aim of the present study was to investigate the in vitro cytotoxicity of FotoSan (CMS Dental APS, Copenhagen, Denmark), 17% EDTA and 2% chlorhexidine.
BACKGROUND
Fibroblasts of periodontal ligament from healthy patients were cultured. FotoSan (with and without light activation for 30 sec.), 17% EDTA and 2% chlorhexidine were used for the cell viability tests. Untreated cells were used as controls. Cellular vitality was evaluated by the MTT test. The production of reactive oxygen species (ROS) was measured using an oxidation-sensitive fluorescent probe. Results were statistically analyzed by ANOVA, followed by a multiple comparison of means by Student-Newman-Keuls, and the statistical significance was set at p<0.05.
MATERIAL/METHODS
MTT tests showed that the cytotoxic effects of FotoSan (both photocured and uncured) were significantly lower (p<0.05) than those observed using 2% chlorhexidine, while no significant differences were found in comparison with 17% EDTA. No alterations in ROS production were detectable for any of the tested materials.
RESULTS
Since the toxicity of the FotoSan photosensitizer, both light-activated and not light-activated, is similar to common endodontic irrigants, it can be clinically used with precautions of use similar to those usually recommended for the above-mentioned irrigating solutions.
CONCLUSIONS
[ "Anti-Bacterial Agents", "Cell Death", "Disinfection", "Fibroblasts", "Humans", "Light", "Photosensitizing Agents", "Reactive Oxygen Species" ]
3524736
null
null
Statistical analysis
Each value represents the mean of 4 experiments, each repeated 6 times. All results are expressed as mean ± standard error of the mean (SEM). The group means were compared by analysis of variance (ANOVA), followed by a multiple comparison of means by Student-Newman-Keuls; if necessary, comparison of means by t test was used. p<0.05 was considered significant.
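For readers who want to reproduce this kind of analysis, the sketch below shows a possible workflow in Python with SciPy and statsmodels; the data, group names and effect sizes are purely hypothetical, and Tukey's HSD is used as a readily available stand-in for the Student-Newman-Keuls post-hoc test actually used in the study.

```python
# Minimal sketch of the statistical workflow: one-way ANOVA followed by a
# pairwise post-hoc comparison of group means. All values are synthetic.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
viability = {  # hypothetical MTT viability readings (4 experiments x 6 repeats)
    "control": rng.normal(100, 5, 24),
    "fotosan_cured": rng.normal(92, 5, 24),
    "fotosan_uncured": rng.normal(93, 5, 24),
    "edta_17pct": rng.normal(90, 5, 24),
    "chx_2pct": rng.normal(75, 5, 24),
}

# One-way ANOVA across all treatment groups
f_stat, p_value = stats.f_oneway(*viability.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Post-hoc multiple comparison of means (Tukey HSD shown here; the study used
# the Student-Newman-Keuls procedure, which SciPy/statsmodels do not ship)
values = np.concatenate(list(viability.values()))
groups = np.repeat(list(viability.keys()), [len(v) for v in viability.values()])
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```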
Results
MTT tests showed that the cytotoxic effects of FotoSan (both photocured and uncured) were significantly lower (p<0.05) than those observed using chlorhexidine, while no significant differences were found in comparison with EDTA (Figure 1). No statistically significant alterations in ROS production were detectable for any of the tested materials (Figure 2).
Conclusions
Preliminary results are encouraging, but further studies are needed to evaluate more precisely any differences between the activated and non-activated solutions, and possibly to identify the components responsible for the slight cytotoxic reactions. Data from the present study showed that the FotoSan photo-sensitizer, both activated and non-activated, can be routinely used in endodontic therapy, with precautions similar to those usually recommended for common endodontic irrigants.
[ "Background", "Isolation and cell culture", "Products to be tested", "Cells treatment", "Cytotoxicity evaluation", "Measurement of the reactive oxygen species" ]
[ "Endodontic failure may occur in cases of persistent bacteria in the root canal system as a consequence of poor disinfection and debridement of the endodontic space, untreated canals, inadequate filling or coronal leakage [1]. It is well established that the elimination of pathogens from root canals during endodontic treatment is difficult [2,3] and current endodontic techniques are unable to consistently disinfect the canal [4,5]. Mechanical instrumentation alone is not able to obtain a complete cleaning of the root canal system.6 To assist in the cleaning and debridement of the canal, a wide range of irrigating and disinfecting solutions have been used [7].\nRecently, a novel method of disinfection for use in both caries and endodontics has become available. This is photo-activated disinfection (PAD). The principle on which it operates is that photo-sensitizer molecules attach to the membrane of the bacteria. Irradiation with light at a specific wavelength matched to the peak absorption of the photo-sensitizer leads to the production of singlet oxygen, which causes the bacterial cell wall to rupture, killing the bacteria [8]. Extensive laboratory studies have shown that an important aspect of this system is that the 2 components when used independently produce no effect on bacteria or on normal tissue. It is only the combination of photo-sensitizer and light that produces the effect on the bacteria [8–10]. Using the principles described above, a system has been developed for endodontic use consisting of a lamp (FotoSan, CMS Dental APS, Copenhagen, Denmark). This antibacterial treatment has been called light-activated disinfection (LAD). LAD, also called PACT (Photodynamic Antimicrobial Chemo Therapy), is a treatment based on the combination of a photo-sensitizer and a powerful red light. The photo-sensitizer attaches to the membranes of microorganisms and binds itself to their surface, absorbs energy from the light and then releases this energy to oxygen (O2), which is transformed into highly reactive oxygen species (ROS), such as oxygen ions and radicals. The ROS reacts strongly and destroys the microorganisms instantly and effectively. The results of a study by Bouillaguet et al. [11] support the use of blue- or red-light-absorbing photo-sensitizers as candidates to produce ROS for clinical applications. The photo-sensitizer is available in low, medium and high viscosities. All solutions have the same concentration of active ingredients. The LAD principle is not only effective against bacteria, but also against other micro-organisms including viruses, fungi and protozoa [12–14]. The applied photo-sensitizers have far less affinity to mammalian cells; thus, no negative side effects in the treatment have been reported by toxicological tests [15].\nClinically, after completion of canal preparation, the canal is inoculated with the photo-sensitizer solution, which is left in situ for a fixed period of time (60 seconds) to permit the solution to come into contact with the bacteria and diffuse through any biofilm structures. The emitter is then placed in the root canal and irradiation carried out for 30 seconds in each canal. This has been demonstrated in the laboratory to kill high concentrations of bacteria generally found in root canals [16].\nCare must be taken to ensure maximum wetting, as it is important that the PAD solution contacts the bacteria, otherwise the photosensitization process will not occur [17]. 
The results of a previous study showed that the PAD technique was successful in eliminating all the cultivable bacteria when the photo-sensitizer reached the bacteria [17]. Furthermore, it highlighted the need for caution in the use of the emitter to ensure that it is not bent too tightly or trapped in the canal [17].\nHowever, since the photo-sensitizer molecules in aqueous solution are injected by a syringe, they can be inadvertently forced beyond the apex and come into contact with the periapical tissues. In such cases the apically extruded solution would probably be inert, being not activated by the light (uncured), but this cannot be scientifically proven. Therefore, in the present study both the activated and the non-activated solution were evaluated. Since the effectiveness and the mode of delivery and removal are very similar to traditional endodontic irrigants, the biological properties of different irrigating solutions were tested and compared.\nTherefore, the aim of the present study was to investigate the in vitro cytotoxicity of FotoSan (CMS Dental APS, Copenhagen Denmark) (photocured or uncured) and to compare this with 17% EDTA and 2% chlorhexidine.", "Fibroblasts of periodontal ligament were obtained from premolar teeth of patients undergoing tooth extractions for orthodontic reasons; the authorization for the use of the biological material was obtained from each patient.\nThe extracted premolars were rinsed twice with ‘explant’ medium (Dulbecco’s Modified Eagle Medium, DMEM) containing FCS (10%), gentamicin sulphate, fungizone (2.5 μg/ml), penicillin (100 units/ml), streptomycin (100 μg/ml) and non-essential amino acids. To avoid the contamination of the periodontal ligament cell cultures with gingival and apical tissues, the middle third of the periodontal ligament was gently curetted and removed from the root surface of the extracted tooth. Periodontal ligament tissues were rinsed in DMEM, cut into small pieces, enzymatically digested for 1 h at 37°C in a solution of type I collagenase (3 mg/ml) and dispase (4 mg/ml), and then dispersed in tissue-culture dishes to allow the pegging of explant cultures. The latter were subsequently cultured in DMEM containing FCS (10%), penicillin (50 units/ml), streptomycin (50 μg/ml) and non-essential amino acids. Cells were used between the fourth and the eighth in culture transfer.", "FotoSan (with and without light activation, and both high and medium viscosity), 17% EDTA (Vista Dental, Racine, WI, US) and 2% chlorhexidine (Vista Dental, Racine, WI, US).\nIn clinical practice FotoSan is used for light-activated disinfection in combination with a photosensitizer (FotoSan Agent) containing toluidine blue O as an active ingredient, used to catalyze the photochemical process. In this study we used the medium and high viscosity material preparations containing the same concentration of the active principle.", "In order to evaluate the cytotoxic effects of the analyzed products, fibroblasts (1×104) in DMEM (200 μL) were seeded into each well of a 96-well tissue culture plate (Costar, Cambridge, MA) and cultured to subconfluent monolayer for 24 hours. The medium was removed and the products were then added to monolayers (20 μL during 30 sec). When necessary, the FotoSan specimens were light-activated with the FotoSan lamp for 30 seconds; after the treatments the cells were washed 2 times with DMEM (200 μL), and the cellular vitality was evaluated by MTT test. 
Untreated cells were used as control.", "MTT test was performed according to Wataha et al. [18] MTT solution (20 μL) in PBS (phosphate buffer, 5 mg/mL) was added to the medium (200 μL) and, after incubation (4 h, 37°C), the intracellular formazan crystals produced were solubilized with a solution of HCl in isopropanol (4×10−2 N, 200 μL). The optical density (OD) of the solution contained in each well was determined using an automatic microplate photometer (Packard Spectracount™, Packard BioScience Company, Meriden, U.S.A.) at a wavelength of 570 nm.\nEach experiment was performed 6 times and the cell cytotoxicity was calculated according to the following equation [19]:", "The production of reactive oxygen species (ROS) was measured using an oxidation-sensitive fluorescent probe 2′,7′-dichlorodihydrofluorescin diacetate (H2DCF-DA) [20,21]. The non-polar H2DCF-DA readily diffuses into the cells, where it is enzymatically deacetylated, by intracellular esterases, to a polar non-fluorescent derivate (probe) trapped inside. In the presence of ROS, the probe is oxidized to 2′,7′-dichlorodihydrofluorescin (DCF); fluorescence levels depend on the intracellular ROS concentration [22]. Human normal fibroblasts (2×105) were seeded into each well of a 6-well plate, pre-incubated with DCFH-DA (2.5 μl, 10 mm) for 30 min at 37°C, and incubated for 24 h at 37°C. The cells were subsequently exposed to FotoSan, EDTA or chlorhexidine for 30 seconds. DCF fluorescence was measured using a Glomax Multi detection system fluorimeter (Promega, Milan, Italy) (490 nm excitation and 526 nm emission wavelengths). Results of DCF fluorescence intensity were expressed as arbitrary units (a.u.)." ]
[ null, null, null, null, null, null ]
[ "Background", "Material and Methods", "Isolation and cell culture", "Products to be tested", "Cells treatment", "Cytotoxicity evaluation", "Measurement of the reactive oxygen species", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "Endodontic failure may occur in cases of persistent bacteria in the root canal system as a consequence of poor disinfection and debridement of the endodontic space, untreated canals, inadequate filling or coronal leakage [1]. It is well established that the elimination of pathogens from root canals during endodontic treatment is difficult [2,3] and current endodontic techniques are unable to consistently disinfect the canal [4,5]. Mechanical instrumentation alone is not able to obtain a complete cleaning of the root canal system.6 To assist in the cleaning and debridement of the canal, a wide range of irrigating and disinfecting solutions have been used [7].\nRecently, a novel method of disinfection for use in both caries and endodontics has become available. This is photo-activated disinfection (PAD). The principle on which it operates is that photo-sensitizer molecules attach to the membrane of the bacteria. Irradiation with light at a specific wavelength matched to the peak absorption of the photo-sensitizer leads to the production of singlet oxygen, which causes the bacterial cell wall to rupture, killing the bacteria [8]. Extensive laboratory studies have shown that an important aspect of this system is that the 2 components when used independently produce no effect on bacteria or on normal tissue. It is only the combination of photo-sensitizer and light that produces the effect on the bacteria [8–10]. Using the principles described above, a system has been developed for endodontic use consisting of a lamp (FotoSan, CMS Dental APS, Copenhagen, Denmark). This antibacterial treatment has been called light-activated disinfection (LAD). LAD, also called PACT (Photodynamic Antimicrobial Chemo Therapy), is a treatment based on the combination of a photo-sensitizer and a powerful red light. The photo-sensitizer attaches to the membranes of microorganisms and binds itself to their surface, absorbs energy from the light and then releases this energy to oxygen (O2), which is transformed into highly reactive oxygen species (ROS), such as oxygen ions and radicals. The ROS reacts strongly and destroys the microorganisms instantly and effectively. The results of a study by Bouillaguet et al. [11] support the use of blue- or red-light-absorbing photo-sensitizers as candidates to produce ROS for clinical applications. The photo-sensitizer is available in low, medium and high viscosities. All solutions have the same concentration of active ingredients. The LAD principle is not only effective against bacteria, but also against other micro-organisms including viruses, fungi and protozoa [12–14]. The applied photo-sensitizers have far less affinity to mammalian cells; thus, no negative side effects in the treatment have been reported by toxicological tests [15].\nClinically, after completion of canal preparation, the canal is inoculated with the photo-sensitizer solution, which is left in situ for a fixed period of time (60 seconds) to permit the solution to come into contact with the bacteria and diffuse through any biofilm structures. The emitter is then placed in the root canal and irradiation carried out for 30 seconds in each canal. This has been demonstrated in the laboratory to kill high concentrations of bacteria generally found in root canals [16].\nCare must be taken to ensure maximum wetting, as it is important that the PAD solution contacts the bacteria, otherwise the photosensitization process will not occur [17]. 
The results of a previous study showed that the PAD technique was successful in eliminating all the cultivable bacteria when the photo-sensitizer reached the bacteria [17]. Furthermore, it highlighted the need for caution in the use of the emitter to ensure that it is not bent too tightly or trapped in the canal [17].\nHowever, since the photo-sensitizer molecules in aqueous solution are injected by a syringe, they can be inadvertently forced beyond the apex and come into contact with the periapical tissues. In such cases the apically extruded solution would probably be inert, being not activated by the light (uncured), but this cannot be scientifically proven. Therefore, in the present study both the activated and the non-activated solution were evaluated. Since the effectiveness and the mode of delivery and removal are very similar to traditional endodontic irrigants, the biological properties of different irrigating solutions were tested and compared.\nTherefore, the aim of the present study was to investigate the in vitro cytotoxicity of FotoSan (CMS Dental APS, Copenhagen Denmark) (photocured or uncured) and to compare this with 17% EDTA and 2% chlorhexidine.", "Unless otherwise specified, all chemicals and reagents used in this study (cell culture grade) were obtained from Sigma Chemical Co., Milan, Italy.\n[SUBTITLE] Isolation and cell culture [SUBSECTION] Fibroblasts of periodontal ligament were obtained from premolar teeth of patients undergoing tooth extractions for orthodontic reasons; the authorization for the use of the biological material was obtained from each patient.\nThe extracted premolars were rinsed twice with ‘explant’ medium (Dulbecco’s Modified Eagle Medium, DMEM) containing FCS (10%), gentamicin sulphate, fungizone (2.5 μg/ml), penicillin (100 units/ml), streptomycin (100 μg/ml) and non-essential amino acids. To avoid the contamination of the periodontal ligament cell cultures with gingival and apical tissues, the middle third of the periodontal ligament was gently curetted and removed from the root surface of the extracted tooth. Periodontal ligament tissues were rinsed in DMEM, cut into small pieces, enzymatically digested for 1 h at 37°C in a solution of type I collagenase (3 mg/ml) and dispase (4 mg/ml), and then dispersed in tissue-culture dishes to allow the pegging of explant cultures. The latter were subsequently cultured in DMEM containing FCS (10%), penicillin (50 units/ml), streptomycin (50 μg/ml) and non-essential amino acids. Cells were used between the fourth and the eighth in culture transfer.\nFibroblasts of periodontal ligament were obtained from premolar teeth of patients undergoing tooth extractions for orthodontic reasons; the authorization for the use of the biological material was obtained from each patient.\nThe extracted premolars were rinsed twice with ‘explant’ medium (Dulbecco’s Modified Eagle Medium, DMEM) containing FCS (10%), gentamicin sulphate, fungizone (2.5 μg/ml), penicillin (100 units/ml), streptomycin (100 μg/ml) and non-essential amino acids. To avoid the contamination of the periodontal ligament cell cultures with gingival and apical tissues, the middle third of the periodontal ligament was gently curetted and removed from the root surface of the extracted tooth. Periodontal ligament tissues were rinsed in DMEM, cut into small pieces, enzymatically digested for 1 h at 37°C in a solution of type I collagenase (3 mg/ml) and dispase (4 mg/ml), and then dispersed in tissue-culture dishes to allow the pegging of explant cultures. 
The latter were subsequently cultured in DMEM containing FCS (10%), penicillin (50 units/ml), streptomycin (50 μg/ml) and non-essential amino acids. Cells were used between the fourth and the eighth in culture transfer.\n[SUBTITLE] Products to be tested [SUBSECTION] FotoSan (with and without light activation, and both high and medium viscosity), 17% EDTA (Vista Dental, Racine, WI, US) and 2% chlorhexidine (Vista Dental, Racine, WI, US).\nIn clinical practice FotoSan is used for light-activated disinfection in combination with a photosensitizer (FotoSan Agent) containing toluidine blue O as an active ingredient, used to catalyze the photochemical process. In this study we used the medium and high viscosity material preparations containing the same concentration of the active principle.\nFotoSan (with and without light activation, and both high and medium viscosity), 17% EDTA (Vista Dental, Racine, WI, US) and 2% chlorhexidine (Vista Dental, Racine, WI, US).\nIn clinical practice FotoSan is used for light-activated disinfection in combination with a photosensitizer (FotoSan Agent) containing toluidine blue O as an active ingredient, used to catalyze the photochemical process. In this study we used the medium and high viscosity material preparations containing the same concentration of the active principle.\n[SUBTITLE] Cells treatment [SUBSECTION] In order to evaluate the cytotoxic effects of the analyzed products, fibroblasts (1×104) in DMEM (200 μL) were seeded into each well of a 96-well tissue culture plate (Costar, Cambridge, MA) and cultured to subconfluent monolayer for 24 hours. The medium was removed and the products were then added to monolayers (20 μL during 30 sec). When necessary, the FotoSan specimens were light-activated with the FotoSan lamp for 30 seconds; after the treatments the cells were washed 2 times with DMEM (200 μL), and the cellular vitality was evaluated by MTT test. Untreated cells were used as control.\nIn order to evaluate the cytotoxic effects of the analyzed products, fibroblasts (1×104) in DMEM (200 μL) were seeded into each well of a 96-well tissue culture plate (Costar, Cambridge, MA) and cultured to subconfluent monolayer for 24 hours. The medium was removed and the products were then added to monolayers (20 μL during 30 sec). When necessary, the FotoSan specimens were light-activated with the FotoSan lamp for 30 seconds; after the treatments the cells were washed 2 times with DMEM (200 μL), and the cellular vitality was evaluated by MTT test. Untreated cells were used as control.\n[SUBTITLE] Cytotoxicity evaluation [SUBSECTION] MTT test was performed according to Wataha et al. [18] MTT solution (20 μL) in PBS (phosphate buffer, 5 mg/mL) was added to the medium (200 μL) and, after incubation (4 h, 37°C), the intracellular formazan crystals produced were solubilized with a solution of HCl in isopropanol (4×10−2 N, 200 μL). The optical density (OD) of the solution contained in each well was determined using an automatic microplate photometer (Packard Spectracount™, Packard BioScience Company, Meriden, U.S.A.) at a wavelength of 570 nm.\nEach experiment was performed 6 times and the cell cytotoxicity was calculated according to the following equation [19]:\nMTT test was performed according to Wataha et al. [18] MTT solution (20 μL) in PBS (phosphate buffer, 5 mg/mL) was added to the medium (200 μL) and, after incubation (4 h, 37°C), the intracellular formazan crystals produced were solubilized with a solution of HCl in isopropanol (4×10−2 N, 200 μL). 
The optical density (OD) of the solution contained in each well was determined using an automatic microplate photometer (Packard Spectracount™, Packard BioScience Company, Meriden, U.S.A.) at a wavelength of 570 nm.\nEach experiment was performed 6 times and the cell cytotoxicity was calculated according to the following equation [19]:\n[SUBTITLE] Measurement of the reactive oxygen species [SUBSECTION] The production of reactive oxygen species (ROS) was measured using an oxidation-sensitive fluorescent probe 2′,7′-dichlorodihydrofluorescin diacetate (H2DCF-DA) [20,21]. The non-polar H2DCF-DA readily diffuses into the cells, where it is enzymatically deacetylated, by intracellular esterases, to a polar non-fluorescent derivate (probe) trapped inside. In the presence of ROS, the probe is oxidized to 2′,7′-dichlorodihydrofluorescin (DCF); fluorescence levels depend on the intracellular ROS concentration [22]. Human normal fibroblasts (2×105) were seeded into each well of a 6-well plate, pre-incubated with DCFH-DA (2.5 μl, 10 mm) for 30 min at 37°C, and incubated for 24 h at 37°C. The cells were subsequently exposed to FotoSan, EDTA or chlorhexidine for 30 seconds. DCF fluorescence was measured using a Glomax Multi detection system fluorimeter (Promega, Milan, Italy) (490 nm excitation and 526 nm emission wavelengths). Results of DCF fluorescence intensity were expressed as arbitrary units (a.u.).\nThe production of reactive oxygen species (ROS) was measured using an oxidation-sensitive fluorescent probe 2′,7′-dichlorodihydrofluorescin diacetate (H2DCF-DA) [20,21]. The non-polar H2DCF-DA readily diffuses into the cells, where it is enzymatically deacetylated, by intracellular esterases, to a polar non-fluorescent derivate (probe) trapped inside. In the presence of ROS, the probe is oxidized to 2′,7′-dichlorodihydrofluorescin (DCF); fluorescence levels depend on the intracellular ROS concentration [22]. Human normal fibroblasts (2×105) were seeded into each well of a 6-well plate, pre-incubated with DCFH-DA (2.5 μl, 10 mm) for 30 min at 37°C, and incubated for 24 h at 37°C. The cells were subsequently exposed to FotoSan, EDTA or chlorhexidine for 30 seconds. DCF fluorescence was measured using a Glomax Multi detection system fluorimeter (Promega, Milan, Italy) (490 nm excitation and 526 nm emission wavelengths). Results of DCF fluorescence intensity were expressed as arbitrary units (a.u.).\n[SUBTITLE] Statistical analysis [SUBSECTION] Each value represents the mean of 4 experiments, each repeated 6 times. All results are expressed as mean ± standard error of the mean (SEM). The group means were compared by analysis of variance (ANOVA), followed by a multiple comparison of means by Student-Newman-Keuls; if necessary, comparison of means by t test was used. p<0.05 was considered significant.\nEach value represents the mean of 4 experiments, each repeated 6 times. All results are expressed as mean ± standard error of the mean (SEM). The group means were compared by analysis of variance (ANOVA), followed by a multiple comparison of means by Student-Newman-Keuls; if necessary, comparison of means by t test was used. 
p<0.05 was considered significant.", "Fibroblasts of periodontal ligament were obtained from premolar teeth of patients undergoing tooth extractions for orthodontic reasons; the authorization for the use of the biological material was obtained from each patient.\nThe extracted premolars were rinsed twice with ‘explant’ medium (Dulbecco’s Modified Eagle Medium, DMEM) containing FCS (10%), gentamicin sulphate, fungizone (2.5 μg/ml), penicillin (100 units/ml), streptomycin (100 μg/ml) and non-essential amino acids. To avoid the contamination of the periodontal ligament cell cultures with gingival and apical tissues, the middle third of the periodontal ligament was gently curetted and removed from the root surface of the extracted tooth. Periodontal ligament tissues were rinsed in DMEM, cut into small pieces, enzymatically digested for 1 h at 37°C in a solution of type I collagenase (3 mg/ml) and dispase (4 mg/ml), and then dispersed in tissue-culture dishes to allow the pegging of explant cultures. The latter were subsequently cultured in DMEM containing FCS (10%), penicillin (50 units/ml), streptomycin (50 μg/ml) and non-essential amino acids. Cells were used between the fourth and the eighth in culture transfer.", "FotoSan (with and without light activation, and both high and medium viscosity), 17% EDTA (Vista Dental, Racine, WI, US) and 2% chlorhexidine (Vista Dental, Racine, WI, US).\nIn clinical practice FotoSan is used for light-activated disinfection in combination with a photosensitizer (FotoSan Agent) containing toluidine blue O as an active ingredient, used to catalyze the photochemical process. In this study we used the medium and high viscosity material preparations containing the same concentration of the active principle.", "In order to evaluate the cytotoxic effects of the analyzed products, fibroblasts (1×104) in DMEM (200 μL) were seeded into each well of a 96-well tissue culture plate (Costar, Cambridge, MA) and cultured to subconfluent monolayer for 24 hours. The medium was removed and the products were then added to monolayers (20 μL during 30 sec). When necessary, the FotoSan specimens were light-activated with the FotoSan lamp for 30 seconds; after the treatments the cells were washed 2 times with DMEM (200 μL), and the cellular vitality was evaluated by MTT test. Untreated cells were used as control.", "MTT test was performed according to Wataha et al. [18] MTT solution (20 μL) in PBS (phosphate buffer, 5 mg/mL) was added to the medium (200 μL) and, after incubation (4 h, 37°C), the intracellular formazan crystals produced were solubilized with a solution of HCl in isopropanol (4×10−2 N, 200 μL). The optical density (OD) of the solution contained in each well was determined using an automatic microplate photometer (Packard Spectracount™, Packard BioScience Company, Meriden, U.S.A.) at a wavelength of 570 nm.\nEach experiment was performed 6 times and the cell cytotoxicity was calculated according to the following equation [19]:", "The production of reactive oxygen species (ROS) was measured using an oxidation-sensitive fluorescent probe 2′,7′-dichlorodihydrofluorescin diacetate (H2DCF-DA) [20,21]. The non-polar H2DCF-DA readily diffuses into the cells, where it is enzymatically deacetylated, by intracellular esterases, to a polar non-fluorescent derivate (probe) trapped inside. In the presence of ROS, the probe is oxidized to 2′,7′-dichlorodihydrofluorescin (DCF); fluorescence levels depend on the intracellular ROS concentration [22]. 
Human normal fibroblasts (2×105) were seeded into each well of a 6-well plate, pre-incubated with DCFH-DA (2.5 μl, 10 mm) for 30 min at 37°C, and incubated for 24 h at 37°C. The cells were subsequently exposed to FotoSan, EDTA or chlorhexidine for 30 seconds. DCF fluorescence was measured using a Glomax Multi detection system fluorimeter (Promega, Milan, Italy) (490 nm excitation and 526 nm emission wavelengths). Results of DCF fluorescence intensity were expressed as arbitrary units (a.u.).", "Each value represents the mean of 4 experiments, each repeated 6 times. All results are expressed as mean ± standard error of the mean (SEM). The group means were compared by analysis of variance (ANOVA), followed by a multiple comparison of means by Student-Newman-Keuls; if necessary, comparison of means by t test was used. p<0.05 was considered significant.", "MTT tests showed that cytotoxic effects of FotoSan (both photocured and uncured) were statistically lower (p<0.05) than that observed using chlorhexidine, while no significant differences were found in comparison with EDTA (Figure 1). No statistically significant alterations in ROS production were detectable in any of the tested materials (Figure 2).", "The biocompatibility of endodontic materials has been of concern to dentistry for many decades because they can come into contact with the connective periapical tissue. Molecules present in these materials could produce irritation or even degeneration in the surrounding tissues [23]. An ideal endodontic material, in addition to having suitable chemical and physical properties, should be biologically compatible and well tolerated by the periapical tissues, avoiding any possible alteration and delay of the healing process [24]. A careful evaluation of the interactions between the components of these materials and the host is therefore important. In vitro tests are especially suitable for this purpose, allowing separate analysis of the different metabolic aspects, whereas the same results could not be obtained by in vivo trials [25]. In vitro tests, characterized by quickness, inexpensiveness, sensitivity and reproducibility, can be performed both directly or through eluate analysis [26,27]. Unfortunately, the results obtained by this type of tests are not sufficient for a conclusive clinical evaluation. Permanent cell lines (i.e., 3T3 cells) and primary cells (oral fibroblasts) are frequently used for in vitro tests with cell culture [25,28]. Human fibroblasts are considered a suitable model for preliminary studies of the possible cytotoxic effects of root filling materials [27,29] because this type of cell better reproduces the in vivo behavior of oral mucosa [25,27,30–32].\nThe LAD principle appears to be not only effective against bacteria, but also against biofilms [33]. Advanced non-invasive LAD using a photosensitizer formulation containing oxidizer and oxygen carrier has been demonstrated to disrupt the biofilm matrix and to facilitate comprehensive inactivation and disinfection of matured endodontic biofilm [34].\nThe results of MTT tests in the present study showed that FotoSan produced a slight cytotoxic effect, similar to that produced by 17% EDTA and significantly less than that produced by 2% chlorhexidine. The cytotoxic effect of irradiated FotoSan was similar to that produced by non-irradiated material. This means that inadvertent extrusion of the material beyond the apex can lead to some reactions in the periapical tissues, independently from the photoactivation of the material. 
Such a non-significant difference can be explained by the overall low toxic effect induced by FotoSan in both experimental conditions (photocured and uncured) and by the sensitivity of the experimental methodology. Since the values are low, if there were slight differences in toxicity, the MTT assay would not be able to detect them. These results support those of a previous study that reported excellent biocompatibility of LAD, and indicate that the slight toxic effect caused by FotoSan is not due to an increase in ROS production.", "Preliminary results are encouraging, but further studies are needed to evaluate more precisely any differences between the activated and non-activated solutions, and possibly to identify the components responsible for the slight cytotoxic reactions. Data from the present study showed that the FotoSan photo-sensitizer, both activated and non-activated, can be routinely used in endodontic therapy, with precautions similar to those usually recommended for common endodontic irrigants." ]
[ null, "materials|methods", null, null, null, null, null, "methods", "results", "discussion", "conclusions" ]
[ "cytotoxicity", "light-activated disinfection", "root canal" ]
Risk assessment of ventricular arrhythmia using new parameters based on high resolution body surface potential mapping.
21358612
The effective screening of myocardial infarction (MI) patients threatened by ventricular tachycardia (VT) is an important issue in clinical practice, especially in the process of implantable cardioverter-defibrillator (ICD) therapy recommendation. This study proposes new parameters describing depolarization and repolarization inhomogeneity in high resolution body surface potential maps (HR BSPM) to identify MI patients threatened by VT.
BACKGROUND
High resolution ECGs were recorded from 64 surface leads. Time-averaged HR BSPMs were used. Several parameters for arrhythmia risk assessment were calculated in 2 groups of MI patients: those with and without documented VT. Additionally, a control group of healthy subjects was studied. To assess the risk of VT, the following parameters were proposed: correlation coefficient between STT and QRST integral maps (STT_QRST_CORR), departure index of absolute value of STT integral map (STT_DI), and departure index of absolute value of T-wave shape index (TSI_DI). These new parameters were compared to known parameters: QRS width, QT interval, QT dispersion, Tpeak-Tend interval, total cosines between QRS complex and T wave, and non-dipolar content of QRST integral maps.
MATERIAL/METHODS
STT_DI, TSI_DI, STT_QRST_CORR, QRS width, and QT interval parameters were statistically significant (p ≤ 0.05) in arrhythmia risk assessment. The highest sensitivity was found for the STT_DI parameter (0.77) and the highest specificity for TSI_DI (0.79).
RESULTS
Arrhythmia risk is demonstrated by both abnormal spatial distribution of the repolarization phase and changed relationship between depolarization and repolarization phases, as well as their prolongation. The proposed new parameters might be applied for risk stratification of cardiac arrhythmia.
CONCLUSIONS
[ "Aged", "Body Surface Potential Mapping", "Electrocardiography", "Humans", "Middle Aged", "Risk Assessment", "Sensitivity and Specificity", "Tachycardia, Ventricular", "Ultrasonography" ]
3524725
null
null
Data analysis
[SUBTITLE] Detection of ECG characteristic time instances [SUBSECTION] To detect the ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M(n,m), containing n averaged ECG lead signals, each of m samples length, was decomposed according to the singular value decomposition M = UΣ̃Vᵀ, where U(m,m) and V(n,n) are square orthogonal matrices and Σ̃ is the diagonal (m,n) matrix of singular values. Next, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left-singular vectors of the unitary matrix U. The RMS signal was calculated using the 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of the ECG curves were found for this RMS signal, and then were searched for in each lead separately. Initially, the R-peak was marked as the maximum of the RMS signal, and then the T maximum was marked in relation to the R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in appropriately defined time windows. Next, the onset of the Q wave and the offset of the S wave were calculated from the difference of the RMS signal using the thresholding method. The T-wave end was established as the minimum of the RMS signal in a time window related to the T-wave maximum. Then, in each lead individually, the wave boundaries were established in relation to the global ones using the thresholding method for the Q onset, the P, S and T offsets, and the P, R and T peaks. Finally, iso-amplitude maps were generated for the various parameters described in the next section.
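A minimal numerical sketch of the lead-space reduction and RMS-based R-peak detection described above is given below. The synthetic 64-lead beat, the sampling rate and the array orientation are illustrative assumptions; per-lead refinement and the remaining fiducial points (P, Q, S and T boundaries) are not reproduced.

```python
# Sketch: SVD-based reduction of an averaged multi-lead ECG beat and detection
# of the global R-peak from the RMS of the first 3 principal components.
import numpy as np

fs = 1024                      # Hz, sampling frequency after decimation
t = np.arange(600) / fs
# Synthetic 64-lead beat: a narrow "QRS" bump plus a broad "T" bump per lead
gains = np.random.default_rng(1).uniform(0.5, 1.5, size=(64, 1))
beat = np.exp(-((t - 0.20) ** 2) / (2 * 0.008 ** 2)) \
     + 0.3 * np.exp(-((t - 0.45) ** 2) / (2 * 0.040 ** 2))
ecg = gains * beat             # shape (n_leads, n_samples) = (64, 600)

# Singular value decomposition of the lead-by-time matrix
U, s, Vt = np.linalg.svd(ecg, full_matrices=False)
S3 = s[:3, None] * Vt[:3, :]   # time courses of the 3 dominant components
rms = np.sqrt(np.mean(S3 ** 2, axis=0))   # RMS signal over the 3 components

r_peak = int(np.argmax(rms))   # global R-peak taken as the maximum of the RMS
print(f"R-peak at sample {r_peak} ({1000 * r_peak / fs:.1f} ms)")
```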
Results
Basic statistical data (the mean and standard deviation values of all examined parameters in the 3 studied groups, as well as the statistical differences (p values) between the groups calculated by Mann-Whitney tests) are summarized in Table 2. Statistical differences were calculated between the MI-VT and MI-non-VT patient groups, as well as between the MI-non-VT patient group and the control group. The statistical data of the 3 proposed parameters (STT_DI, TSI_DI and STT_QRST_CORR) are also presented graphically in Figure 3. Besides the mean and standard deviation, the values of the 25th, 50th (median) and 75th percentiles are shown. The best results of the Mann-Whitney test in discriminating between the MI patient groups with and without ventricular tachycardia were obtained using the STT_DI parameter (p<0.01), the QT interval (p<0.02), the TSI_DI parameter (p<0.03), the QRS width (p<0.03) and the STT_QRST_CORR parameter (p=0.05). The differences were not statistically significant for the TCRT parameter (p<0.06), the Tpeak-Tend interval (p<0.09), QT interval dispersion (p<0.33) or the NDC of QRST integral maps (p<0.68). The obtained results indicate, however, that for the TCRT parameter and the Tpeak-Tend interval the differences are very close to the significance level (p=0.05) and might be statistically significant in a larger database of MI patients. The differences between the control group and the MI-non-VT patient group were not statistically significant for the following parameters: Tpeak-Tend interval, TCRT parameter and QT dispersion. The differences between the control group and the MI-non-VT patient group were statistically significant for the remaining parameters (Table 2). For the parameters that were statistically significant in discriminating between the MI patient groups with and without VT, sensitivity and specificity were calculated and diagnostic criteria were proposed. The obtained results are presented in Table 3. For the 3 proposed parameters, sensitivity, specificity and diagnostic criteria are also shown graphically in Figure 4. The highest value of sensitivity was found for the STT_DI parameter (0.77), and the highest value of specificity for TSI_DI (0.79). The STT_QRST_CORR parameter, QRS width and QT interval had the same values of sensitivity (0.73) and specificity (0.71). The calculated diagnostic criteria concerning the risk of VT amounted to: 1.4 for the STT_DI parameter, 2.1 for the TSI_DI parameter, 0.7 for STT_QRST_CORR, 122 ms for the QRS width and 459 ms for the QT interval. Additionally, the sensitivities of the above-mentioned parameters were also calculated for the 14 patients from the MI-VT group in whom episodes of VT occurred during the follow-up study (Table 4). The highest sensitivity was 0.79 for the STT_QRST_CORR parameter and 0.71 for the TCRT and STT_DI parameters. The ejection fraction value in those patients ranged from 15% to 65%. In 11 out of 14 MI-VT patients with episodes of VT during the follow-up study, the value of EF was above 30%.
Conclusions
The obtained results indicate that the risk of arrhythmia increases with abnormal disturbances of both the depolarization and repolarization processes. The best indices of the threat of ventricular tachycardia were: 1) depolarization and repolarization prolongation expressed by the QRS interval and the QT interval, 2) the changed relation between the depolarization and repolarization phases described by STT_QRST_CORR, and 3) spatial changes in the repolarization phase expressed by the STT_DI and TSI_DI parameters. Thus, the proposed new parameters might be applied for risk stratification of cardiac arrhythmia. These parameters might additionally support noninvasive identification of MI patients for ICD therapy. However, this preliminary study requires further analysis in a larger number of more precisely selected MI patient groups, especially patients who are recommended for primary prevention with ICD therapy.
[ "Background", "Studied groups", "Measurements and data processing", "Detection of ECG characteristic time instances", "Arrhythmia risk parameters", "Statistics" ]
[ "The non-invasive risk assessment of life-threatening ventricular arrhythmia is of great clinical importance, especially in the prevention of sudden cardiac death (SCD) in patients after myocardial infarction (MI). Although there currently exist many noninvasive parameters quantifying the depolarization and repolarization process in time and space domains, these parameters remain the subject of intensive study. The ECG-based parameters calculated from the 12 standard leads or 3 orthogonal leads (e.g., prolonged QRS interval, prolonged QT interval, large QT dispersion, microvolt T-wave alternans, total cosine of mean angle between QRS complex and T wave) are considered as predictors of SCD [1–6]; nonetheless, their effectiveness is still debated [6–12]. Beat-to-beat analysis of subsequent cardiac cycles has recently become a very promising tool for the identification of patients at risk for ventricular tachycardia (VT). There is a large body of evidence supporting the assumption that increased QT variability, low heart rate variability, and heart rate turbulence show good diagnostic value for ventricular tachycardia (VT) detection [13–16]. VT risk assessment in MI patients was also studied using advanced frequency and time-frequency analysis of high resolution ECG. It was reported that parameters obtained by using the fast Fourier transform and parametric modeling method with an autoregressive model, as well as wavelet transform, have good discriminative properties [17–20].\nBody surface potential mapping (BSPM) offers a possibility for more precise study of temporal and spatial distributions of cardiac electrical activity in comparison to standard 12-lead electrocardiography [21]. One of the most often examined parameters calculated from BSPMs is the area under the QRST complex [22], which was also used to assess repolarization heterogeneity described by non-dipolar content of QRST integral maps [21–25]. However, the QRST integral map’s ability to reveal repolarization dispersion has been questioned [7]. The search for reliable risk indices for ventricular arrhythmias is still one of the major remaining tasks in high resolution ECG mapping. BSPM might be studied using time-average technique [26] or beat-to-beat analysis [24,27]. In this preliminary study we focused on the time-average technique.\nThe values of depolarization and repolarization parameters for ventricular arrhythmia risk stratification were assessed by studying patients with chronic myocardial scar areas, with and without documented ventricular arrhythmias, as well as by comparing these 2 patient groups with a control group of volunteers with fully intact myocardium. We proposed and preliminarily verified new, non-invasive markers of VT risk assessment based on analysis of high resolution body surface potential maps (HR BSPM). The developed parameters reflect the abnormality in spatial distribution of the repolarization phase and its relation to changed spatial distribution of the depolarization phase in the group of MI patients with risk of VT. The statistical values obtained for these new parameters were compared to already known parameters calculated in the same studied groups of MI patients and healthy volunteers. The effectiveness of implanted cardioverter-defibrillators (ICD) to prevent SCD in MI patients is already proven [6,28–31]. 
However, highly effective identification of patients for ICD therapy is still difficult, and the number of ICDs needed to achieve 1 year of patient survival is still unsatisfactorily high [31,32]. The proposed parameters could also assist in noninvasive identification of MI patients for ICD therapy.", "Two groups of patients with remote myocardial infarction and a control group of healthy volunteers were studied. The first group comprised 26 patients with a documented risk of ventricular tachycardia, called the MI-VT patient group. This group consisted mainly of secondary-prevention patients with an implanted cardioverter-defibrillator or qualified for ICD therapy due to the risk of ventricular tachycardia. The eligibility criteria for ICD implantation were: 1) documented episodes of ventricular tachycardia (21 pts), and 2) induced sustained monomorphic VT during programmed electrical stimulation (5 pts). The second group consisted of 14 patients after myocardial infarction in whom no sustained or non-sustained ventricular tachycardia had been documented, and is called the MI-non-VT patient group. The control group consisted of 25 volunteers who had normal electrocardiograms, no history of cardiovascular disease, and who were not taking any medications. Basic data from all 3 studied groups are presented in Table 1.\nThe study was approved by an institutional ethical review committee and the subjects gave informed consent.", "A high resolution multi-lead ECG system (Active Two, BioSemi B.V., The Netherlands) with 64 surface electrodes was used for data acquisition. Active electrodes (containing Ag2Cl contact sensors with preamplifiers) were located around the torso (Figure 1) according to the ECG lead system proposed by SippensGroenewegen et al. [33]. The 64 unipolar ECGs were simultaneously recorded for 15 minutes with a 4096 Hz sampling frequency. Next, signals were amplified and converted into digital form with 24-bit amplitude resolution, sent through a fiber-optic transmitter to the computer, and stored on a hard disk for off-line processing.\nThe raw ECG data were filtered using a low-pass Butterworth filter limiting the frequency content to 300 Hz and a decimation filter decreasing the sampling frequency to 1024 Hz. The reference ECG signal, known as Wilson’s central terminal (WCT), was subtracted at this preprocessing step. Then, the ECG signals in each lead separately were averaged in time using a cross-correlation method. To achieve a low noise level, both the number of averaged cycles (usually ca. 100 cycles) and the value of the correlation coefficient (usually ≥0.98) were fitted. The level of noise was measured on the 20 ms isoelectric U-P interval because, as we noticed, during the PQ interval atrial repolarization is still present, which has also been confirmed by Ihara et al. [34]. The obtained root mean square (RMS) value of noise in the averaged ECG signals was below 0.5 μV.", "To detect the ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M(n,m), containing n averaged ECG lead signals, each of m samples length, was decomposed according to the singular value decomposition M = UΣ̃Vᵀ, where U(m,m) and V(n,n) are square orthogonal matrices and Σ̃ is the diagonal (m,n) matrix of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left-singular vectors of the unitary matrix U. 
The RMS signal was calculated using the 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of the ECG curves were found for this RMS signal, and then were searched for in each lead separately. Initially, the R-peak was marked as the maximum of the RMS signal, and then the T maximum was marked in relation to the R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in appropriately defined time windows. Next, the onset of the Q wave and the offset of the S wave were calculated from the difference of the RMS signal using the thresholding method. The T-wave end was established as the minimum of the RMS signal in a time window related to the T-wave maximum. Then, in each lead individually, the wave boundaries were established in relation to the global ones using the thresholding method for the Q onset, the P, S and T offsets, and the P, R and T peaks. Finally, iso-amplitude maps were generated for the various parameters described in the next section.", "To assess the risk of arrhythmia, 3 new BSPM parameters were proposed. The first parameter, called STT_QRST_CORR, was based on the spatial relationship between the maps of STT and QRST integrals. This parameter was defined as the Spearman rank correlation coefficient between the STT area and QRST area maps. The two data sets, STTi and QRSTi, were converted into the rankings stti and qrsti before calculating the Spearman correlation coefficient between the ranks, STT_QRST_CORRn = 1 − 6 Σi (stti − qrsti)² / (M(M² − 1)), where n is the subject number, i is the lead number, and M is the total number of leads.\nThe next 2 parameters, called STT_DI and TSI_DI, were proposed to quantify spatial changes in the repolarization phase. The departure index (DI) [35] was used to describe the changes between the group of healthy subjects and each individual subject of the entire studied population.\nThe TSI parameter (T-wave Shape Index) was defined as the ratio of the STT integral to the T-wave curve length [26,36], TSIn = (∫[Ton,Toff] V(t) dt) / L(V), where TSIn is the value of the TSI parameter for subject n, V(t) is the ECG amplitude at time instant t, Ton and Toff are the onset and offset of the T wave, respectively, and L(V) is the length of the T-wave curve.\nThe TSI_DI parameter was defined as the sum of the absolute values of the departure indices of the TSI parameter calculated in each lead: TSI_DIn = Σi |(TSIn,i − cTSIi¯) / σ(cTSIi)|, where TSI_DIn is the value of the TSI_DI parameter for subject n, cTSIi¯ is the mean value of the TSI parameter calculated in the control group in lead i, and σ(cTSIi) is the standard deviation of the TSI parameter calculated in the control group in lead i.\nThe STT_DI parameter was defined as the sum of the absolute values of the departure indices of the STT integral calculated in each lead: STT_DIn = Σi |(STTn,i − cSTTi¯) / σ(cSTTi)|, where STT_DIn is the value of the STT_DI parameter for subject n, and cSTTi¯ and σ(cSTTi) are, respectively, the mean value and the standard deviation of the STT integrals calculated in the control group in lead i.\nThe proposed parameters were compared to 6 well-known arrhythmia indices: the QRS width, the QT interval, the dispersion of the QT interval, the Tpeak-Tend interval, the averaged cosine of the angle between the QRS complex and the T wave (TCRT) introduced by Acar [1], as well as the non-dipolar content (NDC) of QRST integral maps, considered as a repolarization inhomogeneity index [23,37]. 
The values of the following parameters: the QRS width, the QT interval, the dispersion of the QT interval, and the Tpeak-Tend interval were calculated for each subject and each lead separately, and then averaged over all leads. The TCRT gives information about the relation between the propagation of depolarization and repolarization waves in the myocardium. First, the 12-lead ECG signals are decomposed into a minimal dimensional space by SVD (singular value decomposition). TCRT is then computed as the averaged cosine between the QRS threshold vectors (vectors above a threshold value around the R peak) and the unit vector with the maximum T-wave energy. More details can be found in the original article [1]. The non-dipolar content was calculated using principal component analysis and the Karhunen-Loève transform [38], providing information about the multipolarity of QRST integral maps.", "For the studied groups, the mean values and standard deviations, as well as the 25th, 50th (median) and 75th percentiles of all parameters, were calculated. Statistical significance was assessed by means of the non-parametric Mann-Whitney test, since a normal distribution of the values could not be assumed. The level of statistical significance was set at p≤0.05.\nTo assess the effectiveness of the new parameters and to define the risk criteria of ventricular tachycardia for MI patients, the specificity and sensitivity of the proposed parameters were calculated. The diagnostic criterion assessing VT risk in the MI patient group was calculated as the value that maximized the product of specificity and sensitivity." ]
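The sketch below illustrates how the departure-index parameter STT_DI, the Spearman-based STT_QRST_CORR, and a cut-off maximizing the product of sensitivity and specificity could be computed; all arrays are synthetic, and the group sizes only echo those reported in the study.

```python
# Sketch: per-subject STT_DI (departure index of per-lead STT integrals),
# STT_QRST_CORR (Spearman correlation of STT vs. QRST integral maps), and a
# diagnostic cut-off chosen by maximizing sensitivity * specificity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_leads = 64
stt_control = rng.normal(10.0, 2.0, size=(25, n_leads))    # control group (synthetic)
stt_patients = rng.normal(12.0, 3.0, size=(40, n_leads))   # MI patients (synthetic)
qrst_patients = stt_patients + rng.normal(0.0, 1.0, size=(40, n_leads))
is_vt = np.array([1] * 26 + [0] * 14)                      # 1 = MI-VT, 0 = MI-non-VT

# Departure index: per-lead z-score against the control group, summed as |.|
c_mean = stt_control.mean(axis=0)
c_std = stt_control.std(axis=0, ddof=1)
stt_di = np.abs((stt_patients - c_mean) / c_std).sum(axis=1)

# Spearman rank correlation between STT and QRST integral maps, per subject
stt_qrst_corr = np.array([spearmanr(stt_patients[k], qrst_patients[k])[0]
                          for k in range(len(stt_patients))])

# Diagnostic criterion: threshold maximizing sensitivity * specificity
candidates = np.linspace(stt_di.min(), stt_di.max(), 200)
scores = [(np.mean(stt_di[is_vt == 1] >= thr) * np.mean(stt_di[is_vt == 0] < thr), thr)
          for thr in candidates]
best_score, best_thr = max(scores)
print(f"STT_DI cut-off: {best_thr:.1f} (sensitivity x specificity = {best_score:.2f})")
print(f"Mean STT_QRST_CORR in patients: {stt_qrst_corr.mean():.2f}")
```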
[ null, null, "methods", null, null, null ]
[ "Background", "Material and Methods", "Studied groups", "Measurements and data processing", "Data analysis", "Detection of ECG characteristic time instances", "Arrhythmia risk parameters", "Statistics", "Results", "Discussion", "Conclusions" ]
[ "The non-invasive risk assessment of life-threatening ventricular arrhythmia is of great clinical importance, especially in the prevention of sudden cardiac death (SCD) in patients after myocardial infarction (MI). Although there currently exist many noninvasive parameters quantifying the depolarization and repolarization process in time and space domains, these parameters remain the subject of intensive study. The ECG-based parameters calculated from the 12 standard leads or 3 orthogonal leads (e.g., prolonged QRS interval, prolonged QT interval, large QT dispersion, microvolt T-wave alternans, total cosine of mean angle between QRS complex and T wave) are considered as predictors of SCD [1–6]; nonetheless, their effectiveness is still debated [6–12]. Beat-to-beat analysis of subsequent cardiac cycles has recently become a very promising tool for the identification of patients at risk for ventricular tachycardia (VT). There is a large body of evidence supporting the assumption that increased QT variability, low heart rate variability, and heart rate turbulence show good diagnostic value for ventricular tachycardia (VT) detection [13–16]. VT risk assessment in MI patients was also studied using advanced frequency and time-frequency analysis of high resolution ECG. It was reported that parameters obtained by using the fast Fourier transform and parametric modeling method with an autoregressive model, as well as wavelet transform, have good discriminative properties [17–20].\nBody surface potential mapping (BSPM) offers a possibility for more precise study of temporal and spatial distributions of cardiac electrical activity in comparison to standard 12-lead electrocardiography [21]. One of the most often examined parameters calculated from BSPMs is the area under the QRST complex [22], which was also used to assess repolarization heterogeneity described by non-dipolar content of QRST integral maps [21–25]. However, the QRST integral map’s ability to reveal repolarization dispersion has been questioned [7]. The search for reliable risk indices for ventricular arrhythmias is still one of the major remaining tasks in high resolution ECG mapping. BSPM might be studied using time-average technique [26] or beat-to-beat analysis [24,27]. In this preliminary study we focused on the time-average technique.\nThe values of depolarization and repolarization parameters for ventricular arrhythmia risk stratification were assessed by studying patients with chronic myocardial scar areas, with and without documented ventricular arrhythmias, as well as by comparing these 2 patient groups with a control group of volunteers with fully intact myocardium. We proposed and preliminarily verified new, non-invasive markers of VT risk assessment based on analysis of high resolution body surface potential maps (HR BSPM). The developed parameters reflect the abnormality in spatial distribution of the repolarization phase and its relation to changed spatial distribution of the depolarization phase in the group of MI patients with risk of VT. The statistical values obtained for these new parameters were compared to already known parameters calculated in the same studied groups of MI patients and healthy volunteers. The effectiveness of implanted cardioverter-defibrillators (ICD) to prevent SCD in MI patients is already proven [6,28–31]. 
However, highly effective identification of patients for ICD therapy is still difficult, and the number of ICDs needed to achieve 1 year of patient survival is still un-satisfyingly high [31,32]. The proposed parameters could also assist in noninvasive identification of MI patients for ICD therapy.", "[SUBTITLE] Studied groups [SUBSECTION] Two groups of patients with remote myocardial infarction and a control group of healthy volunteers were studied. The first group comprised 26 patients with documented risk of ventricular tachycardia, called the MI-VT patients group. This group consisted of mainly secondary prevention patients with implanted cardioverter-defibrillator or qualified for ICD therapy due to the risk of ventricular tachycardia. The eligibility criteria to implant ICD were: 1) documented episodes of ventricular tachycardia (21 pts), and 2) induced sustained monomorphic VT during programmed electrical stimulation (5 pts). The second group consisted of 14 patients after myocardial infarction in whom any sustained or not-sustained ventricular tachycardia was stated, and is called the MI-non-VT patients group. The control group consisted of 25 volunteers who had normal electrocardiograms, no history of cardiovascular disease, and to whom any medications were administered. Basic data from all 3 studied groups are presented in Table 1.\nThe study was approved by an institutional ethical review committee and the subjects gave informed consent.\nTwo groups of patients with remote myocardial infarction and a control group of healthy volunteers were studied. The first group comprised 26 patients with documented risk of ventricular tachycardia, called the MI-VT patients group. This group consisted of mainly secondary prevention patients with implanted cardioverter-defibrillator or qualified for ICD therapy due to the risk of ventricular tachycardia. The eligibility criteria to implant ICD were: 1) documented episodes of ventricular tachycardia (21 pts), and 2) induced sustained monomorphic VT during programmed electrical stimulation (5 pts). The second group consisted of 14 patients after myocardial infarction in whom any sustained or not-sustained ventricular tachycardia was stated, and is called the MI-non-VT patients group. The control group consisted of 25 volunteers who had normal electrocardiograms, no history of cardiovascular disease, and to whom any medications were administered. Basic data from all 3 studied groups are presented in Table 1.\nThe study was approved by an institutional ethical review committee and the subjects gave informed consent.\n[SUBTITLE] Measurements and data processing [SUBSECTION] A high resolution multi-lead ECG system (Active Two, BioSemi B.V., The Netherlands) with 64 surface electrodes was used for data acquisition. Active electrodes (containing Ag2Cl contact sensors with preamplifiers) were located around the torso (Figure 1) according to the ECG lead system proposed by SippensGroenewegen et al. [33]. The 64 unipolar ECGs were simultaneously recorded for 15 minutes with 4096 Hz sampling frequency. Next, signals were amplified and converted into digital form with 24-bit amplitude resolution, sent through a fiber-optic transmitter to the computer, and stored on a hard disk for off-line processing.\nThe raw ECG data were filtered using low pass Butterworth filter limiting frequency to 300 Hz and decimate filter, decreasing sampling frequency to 1024 Hz. 
The reference ECG signal, known as Wilson’s central terminal (WCT), was subtracted at this preprocessing step. Then, ECG signals in each lead separately were averaged in time using a cross-correlation method. To receive a low noise level, both the number of averaged cycles (usually ca. 100 cycles) and the value of correlation coefficient (usually ≥0.98) were fitted. The level of noise was measured on the 20 ms isoelectric U-P interval, because, as we noticed, during PQ interval, atrial repolarization is still present, which has also been confirmed by Ihara et al [34]. The obtained root mean square (RMS) value of noise in averaged ECG signals was below 0.5 μV.\nA high resolution multi-lead ECG system (Active Two, BioSemi B.V., The Netherlands) with 64 surface electrodes was used for data acquisition. Active electrodes (containing Ag2Cl contact sensors with preamplifiers) were located around the torso (Figure 1) according to the ECG lead system proposed by SippensGroenewegen et al. [33]. The 64 unipolar ECGs were simultaneously recorded for 15 minutes with 4096 Hz sampling frequency. Next, signals were amplified and converted into digital form with 24-bit amplitude resolution, sent through a fiber-optic transmitter to the computer, and stored on a hard disk for off-line processing.\nThe raw ECG data were filtered using low pass Butterworth filter limiting frequency to 300 Hz and decimate filter, decreasing sampling frequency to 1024 Hz. The reference ECG signal, known as Wilson’s central terminal (WCT), was subtracted at this preprocessing step. Then, ECG signals in each lead separately were averaged in time using a cross-correlation method. To receive a low noise level, both the number of averaged cycles (usually ca. 100 cycles) and the value of correlation coefficient (usually ≥0.98) were fitted. The level of noise was measured on the 20 ms isoelectric U-P interval, because, as we noticed, during PQ interval, atrial repolarization is still present, which has also been confirmed by Ihara et al [34]. The obtained root mean square (RMS) value of noise in averaged ECG signals was below 0.5 μV.\n[SUBTITLE] Data analysis [SUBSECTION] [SUBTITLE] Detection of ECG characteristic time instances [SUBSECTION] To detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. 
The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.\nTo detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.\n[SUBTITLE] Detection of ECG characteristic time instances [SUBSECTION] To detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. 
The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.\nTo detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.\n[SUBTITLE] Arrhythmia risk parameters [SUBSECTION] To assess the risk of arrhythmia, 3 new BSPM parameters were proposed. The first parameter, called STT_QRST_CORR, was based on the spatial relationship between the maps of STT and QRST integrals. This parameter was defined as a Spearman rank correlation coefficient between STT area and QRST area maps (Eq. 2). Two sets of the data, STTi and QRSTi, were converted into the stti rankings qrsti and before calculating the Spearman correlation coefficient between ranks\nwhere n is the subject number, i is the lead number, and M is the total number of leads.\nThe next 2 parameters, called STT_DI and TSI_DI, were proposed to quantify spatial changes in the repolarization phase. Departure index (DI) [35] was used to describe the changes between the group of healthy subjects and each individual subject of the entire studied population.\nA TSI parameter (T wave Shape Index) was defined as the ratio of STT integral and T-wave length [26,36]:\nwhere TSIn is the value of TSI parameter for subject n, V(t) is the ECG amplitude in the time instant t, Ton, Toff is the onset and offset of T wave, respectively, and L(V) is the length of T wave curve.\nThe TSI_DI parameter was defined as the sum of absolute values of departure indices of TSI parameter calculated in each lead. 
The formula describing the TSI_DI parameter is as follows:\nwhere TSI_DIn is the value of TSI_DI parameter for subject n, \ncSTTi¯ is the mean value of TSI parameter calculated in the control group in the lead i, and σ(cSTTi ) is the standard deviation of TSI parameter calculated in control group in the lead i.\nThe STT_DI parameter was defined as the sum of absolute values of departure indices of STT integral calculated in each lead. The formula describing STT_DI parameter is as follows:\nwhere STT_DI is the value of STT_DI parameter for subject n, \ncSTTi¯, and σ (cSTTi ) are adequately the mean value and the standard deviation of STT integrals calculated in the control group in lead i.\nThe proposed parameters were compared to 6 well known arrhythmia indices: the QRS width, the QT interval, the dispersion of QT interval, the Tpeak-Tend interval, the averaged cosine of the angle between QRS complex, and the T-wave (TCRT) introduced by Acar [1], as well as the non-dipolar content (NDC) of QRST integral maps considered as the repolarization inhomogeneity index [23,37]. The values of the following parameters: the QRS width, the QT interval, the dispersion of QT interval, and the Tpeak-Tend interval were calculated for each subject and each lead separately, then were averaged over all leads. The TCRT gives information about the relation between propagation of depolarization and repolarization waves in myocardium. First, 12-lead ECG signals are decomposed to minimum dimensional space by SVD (singular value decomposition). TCRT is then computed as an averaged cosine between threshold vectors of QRS (vectors above threshold value around R peak) and the unit vector with the maximum T-wave energy. More details can be found in the original article [1]. The non-dipolar content was calculated using principal component analysis and Karhunen-Loève transform [38], providing information about the multipolarity of QRST integral maps.\nTo assess the risk of arrhythmia, 3 new BSPM parameters were proposed. The first parameter, called STT_QRST_CORR, was based on the spatial relationship between the maps of STT and QRST integrals. This parameter was defined as a Spearman rank correlation coefficient between STT area and QRST area maps (Eq. 2). Two sets of the data, STTi and QRSTi, were converted into the stti rankings qrsti and before calculating the Spearman correlation coefficient between ranks\nwhere n is the subject number, i is the lead number, and M is the total number of leads.\nThe next 2 parameters, called STT_DI and TSI_DI, were proposed to quantify spatial changes in the repolarization phase. Departure index (DI) [35] was used to describe the changes between the group of healthy subjects and each individual subject of the entire studied population.\nA TSI parameter (T wave Shape Index) was defined as the ratio of STT integral and T-wave length [26,36]:\nwhere TSIn is the value of TSI parameter for subject n, V(t) is the ECG amplitude in the time instant t, Ton, Toff is the onset and offset of T wave, respectively, and L(V) is the length of T wave curve.\nThe TSI_DI parameter was defined as the sum of absolute values of departure indices of TSI parameter calculated in each lead. 
The formula describing the TSI_DI parameter is as follows:\nwhere TSI_DIn is the value of TSI_DI parameter for subject n, \ncSTTi¯ is the mean value of TSI parameter calculated in the control group in the lead i, and σ(cSTTi ) is the standard deviation of TSI parameter calculated in control group in the lead i.\nThe STT_DI parameter was defined as the sum of absolute values of departure indices of STT integral calculated in each lead. The formula describing STT_DI parameter is as follows:\nwhere STT_DI is the value of STT_DI parameter for subject n, \ncSTTi¯, and σ (cSTTi ) are adequately the mean value and the standard deviation of STT integrals calculated in the control group in lead i.\nThe proposed parameters were compared to 6 well known arrhythmia indices: the QRS width, the QT interval, the dispersion of QT interval, the Tpeak-Tend interval, the averaged cosine of the angle between QRS complex, and the T-wave (TCRT) introduced by Acar [1], as well as the non-dipolar content (NDC) of QRST integral maps considered as the repolarization inhomogeneity index [23,37]. The values of the following parameters: the QRS width, the QT interval, the dispersion of QT interval, and the Tpeak-Tend interval were calculated for each subject and each lead separately, then were averaged over all leads. The TCRT gives information about the relation between propagation of depolarization and repolarization waves in myocardium. First, 12-lead ECG signals are decomposed to minimum dimensional space by SVD (singular value decomposition). TCRT is then computed as an averaged cosine between threshold vectors of QRS (vectors above threshold value around R peak) and the unit vector with the maximum T-wave energy. More details can be found in the original article [1]. The non-dipolar content was calculated using principal component analysis and Karhunen-Loève transform [38], providing information about the multipolarity of QRST integral maps.\n[SUBTITLE] Statistics [SUBSECTION] For studied groups, the mean values and standard deviations, as well as the 25th, the 50th (median), and the 75th percentiles of all parameters, were calculated. The statistical significance was assessed by means of non-parametric Mann-Whitney test, since the normal distribution of the values could not have been assumed. The level of statistical significance was set at p≤0.05.\nTo assess the effectiveness of new parameters and to define the risk criteria of ventricular tachycardia for MI patients, the specificity and the sensitivity of proposed parameters were calculated. The diagnostic criterion assessing VT risk in the MI patient group was calculated as a maximal value of the product of the specificity and the sensitivity.\nFor studied groups, the mean values and standard deviations, as well as the 25th, the 50th (median), and the 75th percentiles of all parameters, were calculated. The statistical significance was assessed by means of non-parametric Mann-Whitney test, since the normal distribution of the values could not have been assumed. The level of statistical significance was set at p≤0.05.\nTo assess the effectiveness of new parameters and to define the risk criteria of ventricular tachycardia for MI patients, the specificity and the sensitivity of proposed parameters were calculated. 
The diagnostic criterion assessing VT risk in the MI patient group was calculated as a maximal value of the product of the specificity and the sensitivity.", "Two groups of patients with remote myocardial infarction and a control group of healthy volunteers were studied. The first group comprised 26 patients with documented risk of ventricular tachycardia, called the MI-VT patients group. This group consisted of mainly secondary prevention patients with implanted cardioverter-defibrillator or qualified for ICD therapy due to the risk of ventricular tachycardia. The eligibility criteria to implant ICD were: 1) documented episodes of ventricular tachycardia (21 pts), and 2) induced sustained monomorphic VT during programmed electrical stimulation (5 pts). The second group consisted of 14 patients after myocardial infarction in whom any sustained or not-sustained ventricular tachycardia was stated, and is called the MI-non-VT patients group. The control group consisted of 25 volunteers who had normal electrocardiograms, no history of cardiovascular disease, and to whom any medications were administered. Basic data from all 3 studied groups are presented in Table 1.\nThe study was approved by an institutional ethical review committee and the subjects gave informed consent.", "A high resolution multi-lead ECG system (Active Two, BioSemi B.V., The Netherlands) with 64 surface electrodes was used for data acquisition. Active electrodes (containing Ag2Cl contact sensors with preamplifiers) were located around the torso (Figure 1) according to the ECG lead system proposed by SippensGroenewegen et al. [33]. The 64 unipolar ECGs were simultaneously recorded for 15 minutes with 4096 Hz sampling frequency. Next, signals were amplified and converted into digital form with 24-bit amplitude resolution, sent through a fiber-optic transmitter to the computer, and stored on a hard disk for off-line processing.\nThe raw ECG data were filtered using low pass Butterworth filter limiting frequency to 300 Hz and decimate filter, decreasing sampling frequency to 1024 Hz. The reference ECG signal, known as Wilson’s central terminal (WCT), was subtracted at this preprocessing step. Then, ECG signals in each lead separately were averaged in time using a cross-correlation method. To receive a low noise level, both the number of averaged cycles (usually ca. 100 cycles) and the value of correlation coefficient (usually ≥0.98) were fitted. The level of noise was measured on the 20 ms isoelectric U-P interval, because, as we noticed, during PQ interval, atrial repolarization is still present, which has also been confirmed by Ihara et al [34]. The obtained root mean square (RMS) value of noise in averaged ECG signals was below 0.5 μV.", "[SUBTITLE] Detection of ECG characteristic time instances [SUBSECTION] To detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. 
The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.\nTo detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.", "To detect ECG characteristic time instances (onsets and offsets of both depolarization and repolarization waves), an algorithm proposed by Acar [1], based on singular value decomposition (SVD), was applied. For each subject, a rectangular matrix M (n,m), containing n averaged ECG leads signals, each of m samples length, was decomposed according to the following equation\nwhere U(m,m),V(n,n) are the square, orthogonal matrices and ∑⌣ is the diagonal matrix (m,n) of singular values.\nNext, the matrix S was obtained by projecting the matrix M down into the reduced space defined by only the first 3 left - singular vectors of the unitary matrix U. The RMS signal was calculated using 3 vectors of the matrix S (S1, S2, S3), as shown in Figure 2. 
The global (common for all leads) characteristic time instances of ECG curves were found for this RMS signal, and then were searched in each lead separately. Initially, R-peak was marked as the maximum of the RMS signal, and then T maximum was marked in relation to R-peak. The P-wave beginning was calculated by inspecting the stationarity of the time relation between the vectors S1 and S2 in respectively defined time windows. Next, the onset of Q-wave and the offset of S-wave were calculated from the difference of RMS signal using the thresholding method. The T-wave end was established as the minimum of RMS signal in time window related to T wave maximum. Then, in each lead individually, the waves’ ends were established in relation to the global waves’ ends using the thresholding method for Q onset, and P, S, T offsets and for P, R, T peaks. Finally, iso-amplitude maps were generated for various parameters described in the next section.", "To assess the risk of arrhythmia, 3 new BSPM parameters were proposed. The first parameter, called STT_QRST_CORR, was based on the spatial relationship between the maps of STT and QRST integrals. This parameter was defined as a Spearman rank correlation coefficient between STT area and QRST area maps (Eq. 2). Two sets of the data, STTi and QRSTi, were converted into the stti rankings qrsti and before calculating the Spearman correlation coefficient between ranks\nwhere n is the subject number, i is the lead number, and M is the total number of leads.\nThe next 2 parameters, called STT_DI and TSI_DI, were proposed to quantify spatial changes in the repolarization phase. Departure index (DI) [35] was used to describe the changes between the group of healthy subjects and each individual subject of the entire studied population.\nA TSI parameter (T wave Shape Index) was defined as the ratio of STT integral and T-wave length [26,36]:\nwhere TSIn is the value of TSI parameter for subject n, V(t) is the ECG amplitude in the time instant t, Ton, Toff is the onset and offset of T wave, respectively, and L(V) is the length of T wave curve.\nThe TSI_DI parameter was defined as the sum of absolute values of departure indices of TSI parameter calculated in each lead. The formula describing the TSI_DI parameter is as follows:\nwhere TSI_DIn is the value of TSI_DI parameter for subject n, \ncSTTi¯ is the mean value of TSI parameter calculated in the control group in the lead i, and σ(cSTTi ) is the standard deviation of TSI parameter calculated in control group in the lead i.\nThe STT_DI parameter was defined as the sum of absolute values of departure indices of STT integral calculated in each lead. The formula describing STT_DI parameter is as follows:\nwhere STT_DI is the value of STT_DI parameter for subject n, \ncSTTi¯, and σ (cSTTi ) are adequately the mean value and the standard deviation of STT integrals calculated in the control group in lead i.\nThe proposed parameters were compared to 6 well known arrhythmia indices: the QRS width, the QT interval, the dispersion of QT interval, the Tpeak-Tend interval, the averaged cosine of the angle between QRS complex, and the T-wave (TCRT) introduced by Acar [1], as well as the non-dipolar content (NDC) of QRST integral maps considered as the repolarization inhomogeneity index [23,37]. The values of the following parameters: the QRS width, the QT interval, the dispersion of QT interval, and the Tpeak-Tend interval were calculated for each subject and each lead separately, then were averaged over all leads. 
The TCRT gives information about the relation between the propagation of depolarization and repolarization waves in the myocardium. First, the 12-lead ECG signals are decomposed into a minimum-dimensional space by SVD (singular value decomposition). TCRT is then computed as an averaged cosine between the threshold vectors of the QRS complex (vectors above a threshold value around the R peak) and the unit vector with the maximum T-wave energy. More details can be found in the original article [1]. The non-dipolar content was calculated using principal component analysis and the Karhunen-Loève transform [38], providing information about the multipolarity of QRST integral maps.", "For the studied groups, the mean values and standard deviations, as well as the 25th, the 50th (median), and the 75th percentiles of all parameters, were calculated. The statistical significance was assessed by means of the non-parametric Mann-Whitney test, since a normal distribution of the values could not be assumed. The level of statistical significance was set at p≤0.05.\nTo assess the effectiveness of the new parameters and to define the risk criteria of ventricular tachycardia for MI patients, the specificity and the sensitivity of the proposed parameters were calculated. The diagnostic criterion assessing VT risk in the MI patient group was calculated as the maximal value of the product of the specificity and the sensitivity.", "Basic statistical data (the mean and standard deviation values of all examined parameters in the 3 studied groups, as well as the statistical differences (p value) between the groups calculated by Mann-Whitney tests) are summarized in Table 2. Statistical differences were calculated between the MI-VT and MI-non-VT patient groups, as well as between the MI-non-VT patient group and the control group.\nThe statistical data of the 3 proposed parameters, STT_DI, TSI_DI and STT_QRST_CORR, are also presented graphically in Figure 3. Besides the mean and the standard deviation, the values of the 25th, the 50th (median) and the 75th percentiles are shown.\nThe best results of the Mann-Whitney test in discriminating between the MI patient groups with and without ventricular tachycardia were obtained using the STT_DI parameter (p<0.01), QT interval (p<0.02), TSI_DI parameter (p<0.03), QRS width (p<0.03) and STT_QRST_CORR parameter (p=0.05). The differences were not statistically significant for the TCRT parameter (p<0.06), Tpeak-Tend interval (p<0.09), QT interval dispersion (p<0.33) and NDC of QRST integral maps (p<0.68). The obtained results indicate, however, that for the TCRT parameter and the Tpeak-Tend interval the differences are very close to the limit of the significance level (p=0.05) and might be statistically significant in a larger database of MI patients.\nThe differences between the control group and the MI-non-VT patient group were not statistically significant for the following parameters: Tpeak-Tend interval, TCRT parameter and QT dispersion. The differences between the control group and the MI-non-VT patient group were statistically significant for the remaining parameters (Table 2).\nFor the parameters that were statistically significant in discriminating between the MI patient groups with and without VT, sensitivity and specificity were calculated, and diagnostic criteria were proposed. The obtained results are presented in Table 3. 
For the 3 proposed parameters, sensitivity, specificity and diagnostic criteria are also shown graphically in Figure 4.\nThe highest value of sensitivity was found for the STT_DI parameter (0.77), and the highest value of specificity for TSI_DI (0.79). The STT_QRST_CORR parameter, QRS width and QT interval had the same values of sensitivity (0.73) and specificity (0.71).\nThe calculated diagnostic criteria concerning the risk of VT amounted to 1.4 for the STT_DI parameter, 2.1 for the TSI_DI parameter, 0.7 for STT_QRST_CORR, 122 ms for QRS width and 459 ms for QT interval.\nAdditionally, the sensitivities of the above-mentioned parameters were also calculated for the 14 patients from the MI-VT group in whom episodes of VT occurred during the follow-up study (Table 4). The highest sensitivity was found for the STT_QRST_CORR parameter (0.79), followed by the TCRT and STT_DI parameters (0.71 each). The ejection fraction in those patients ranged from 15% to 65%. In 11 out of 14 MI-VT patients with episodes of VT during the follow-up study, the EF was above 30%.", "To identify MI patients at risk for ventricular tachycardia, 3 new parameters calculated from averaged HR BSPM were proposed: STT_QRST_CORR, STT_DI, and TSI_DI. For comparison, 6 parameters already used in VT risk assessment (QRS interval, QT interval, QT interval dispersion, Tpeak-Tend interval, TCRT parameter, and NDC of QRST integral maps) were calculated.\nThe two newly proposed parameters, TSI_DI and STT_DI, are directly connected to the repolarization phase and represent a measure of departure from the mean distribution of the TSI parameter and the STT integral in the control group. Departure maps were developed for easy recognition of abnormalities in body surface potential maps and for representation of the location and extent of an abnormal increase or decrease. These maps are useful for assessment of myocardial infarction, ventricular hypertrophy, and myocardial ischemia [39]. In our study, departure indices were used to show an aggregated increase in abnormal changes of repolarization parameter distributions accompanying pathological changes of the cardiac muscle affected by infarction and at risk of VT. Both parameters were statistically significant in discriminating between MI patients with and without documented risk of VT. Thus, the STT_DI and TSI_DI parameters could be sensitive markers based on repolarization spatial abnormality.\nThe third parameter, STT_QRST_CORR, was also statistically significant in discriminating between MI patients with and without VT. The STT_QRST_CORR parameter might be connected with the changes in shape and duration of the action potential of ischemic cells. The QRST integrals are considered to reveal the local repolarization process independent of the activation sequence. In healthy cardiac muscle, the recovery process, which is influenced by both the activation sequence and local repolarization, is dominated by local repolarization due to the fast spread of activation, in which the Purkinje system plays the key role [40]. Therefore, in the control group, the STT integral and QRST integral maps are highly correlated. On the other hand, Geselowitz [41] has shown that spatial variations of the QRST integral (ventricular gradient) are related to the local spatial variation of the action potential area. In ischemic cells, an increase in resting potential, a decrease in peak amplitude, an increase in the rise time of the upstroke, and a change in its duration alter the action potential area and thereby the QRST area [42]. 
This is probably why the correlation coefficient between the STT and QRST integral maps decreases with pathological changes of the myocardium. The depolarization process, which together with the repolarization process shapes the action potential, is no longer as fast, and the activation sequence is no longer as ordered, due to the presence of ischemic, structurally impaired, or dead cells within the cardiac muscle. We concluded that for diseased cardiac muscle, the QRST maps reveal not only local repolarization, but also pathological depolarization variations in cardiac muscle affected by ischemia, by the structurally impaired transition area from normal to scar tissue, and, especially, by scars from the myocardial infarction itself.\nStatistically significant results were obtained with 2 commonly used temporal indices: QRS duration and QT interval (influenced distinctly by the QRS interval, except in long QT syndrome). The third temporal parameter, the Tpeak-Tend interval, was not statistically significant in separating the studied groups, but its values increased in both MI patient groups in comparison to the control group. The TCRT parameter showed similar properties. These 2 parameters might have been statistically significant if studied in a larger database.\nIn our study, the dispersion of the repolarization phase expressed by the dispersion of the QT interval and the non-dipolar content of QRST integral maps (NDC) were not statistically significant in discriminating between the MI patient groups with and without VT. The dispersion of the QT interval is still the subject of debate regarding its ability to supply reliable information on the repolarization heterogeneity of cardiac muscle [7]. However, the discrimination between the healthy group and the MI patients without VT was statistically significant. The results obtained using the NDC parameter were also statistically significant in discriminating between the control group and the MI patient group without VT risk, which reveals its ability to quantify multipolarity in QRST maps of cardiac muscle affected by pathological changes.\nThis study had some limitations. First, due to the limited number of studied MI patients, the results should be considered preliminary, requiring further analysis in a larger number of more precisely selected MI patient groups, especially those recommended for primary prevention by ICD implantation (i.e., with an EF below or equal to 30% [29]). Secondly, averaging the ECG time series prolonged the analysis process, which had to be done offline. However, in this study a precise assessment of the ends of the ECG waves was crucial for the QT dispersion calculation as well as for all temporal parameters such as the QT interval, QRS width and Tpeak-Tend interval. The new parameters proposed in this work are less sensitive to time interval identification and might be used in on-line analysis and even calculated from a single selected heart beat with a low signal-to-noise ratio. Many studies have shown that pathological changes in the beat-to-beat variability of subsequent cardiac cycles could be connected with an increased risk of arrhythmia. Vulnerability to arrhythmia can be linked with dynamic changes in the QT interval [9,13], RR interval variability (HRV) [6,14], heart rate turbulence (HRT) [15], or T-wave alternans (TWA) [2,8]. 
The analysis of beat-to-beat variability of proposed BSPM parameters is an important issue needing future research.", "The obtained results indicate that the risk of arrhythmia increases with abnormal disturbances of both depolarization and repolarization processes. The best indices of the threat of ventricular tachycardia were: 1) the depolarization and repolarization prolongation expressed by QRS interval and QT interval, 2) the changed relation between depolarization and repolarization phases described by STT_QRST_CORR, and 3) spatial changes in repolarization phase expressed by STT_DI and TSI_DI parameters. Thus, the proposed new parameters might be applied for risk stratification of cardiac arrhythmia. These parameters might additionally support noninvasive identification of MI patients for ICD therapy. However, this preliminary study requires further analysis in a larger number of more precisely selected MI patients groups, especially patients who are recommended for primary prevention by ICD therapy." ]
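The global fiducial-point detection summarized above (SVD of the averaged lead matrix, projection onto the first 3 singular vectors, an RMS reference signal, and peak picking on that signal) can be sketched as follows. This is a minimal illustration rather than the authors' implementation: the array layout, the T-wave search window, and the omission of the thresholding and P-wave steps are assumptions.

```python
# Hedged sketch of the SVD/RMS step used for global fiducial-point detection.
# Assumes `ecg` is a NumPy array of shape (n_leads, n_samples) holding the
# time-averaged beat of every lead and `fs` the sampling frequency in Hz.
import numpy as np

def rms_reference_signal(ecg):
    """Project the multi-lead beat onto its first 3 singular components and
    return the RMS of the 3 component signals (one value per sample)."""
    U, s, Vt = np.linalg.svd(ecg, full_matrices=False)
    components = s[:3, None] * Vt[:3, :]              # 3 x n_samples
    return np.sqrt(np.mean(components ** 2, axis=0))

def global_r_and_t_peaks(ecg, fs, t_window=(0.15, 0.50)):
    """Rough global R and T peaks: R is the RMS maximum, T is the maximum of
    the RMS signal inside an assumed window (in seconds) after the R peak."""
    rms = rms_reference_signal(ecg)
    r_idx = int(np.argmax(rms))
    lo = r_idx + int(t_window[0] * fs)
    hi = min(r_idx + int(t_window[1] * fs), rms.size)
    t_idx = lo + int(np.argmax(rms[lo:hi]))
    return r_idx, t_idx
```

Per-lead wave onsets and offsets would then be refined around these global instants with the thresholding approach described in the text.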
[ null, "materials|methods", null, "methods", "methods", null, null, null, "results", "discussion", "conclusions" ]
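The three proposed indices follow directly from their definitions (Eq. 2–5) once per-lead STT and QRST integrals, and per-lead TSI values, are available. The sketch below is hypothetical: the input names are illustrative, and reading the T-wave "curve length" L(V) as the arc length of the sampled waveform is an assumption.

```python
# Hedged sketch of STT_QRST_CORR, STT_DI / TSI_DI and TSI.
import numpy as np
from scipy.stats import spearmanr

def stt_qrst_corr(stt_integrals, qrst_integrals):
    """STT_QRST_CORR: Spearman rank correlation between the per-lead STT and
    QRST integral maps (Eq. 2)."""
    rho, _ = spearmanr(stt_integrals, qrst_integrals)
    return rho

def departure_index_sum(values, control_values):
    """STT_DI / TSI_DI: sum over leads of |(x_i - mean_i) / std_i|, where the
    per-lead mean and standard deviation come from the control group
    (control_values: n_controls x n_leads)."""
    mean = control_values.mean(axis=0)
    std = control_values.std(axis=0, ddof=1)
    return float(np.sum(np.abs((values - mean) / std)))

def tsi(t_wave, dt):
    """TSI for one lead: the T-wave (STT) integral divided by the curve
    length, here taken as the arc length of the sampled waveform."""
    area = np.trapz(t_wave, dx=dt)
    curve_length = np.sum(np.hypot(np.diff(t_wave), dt))
    return area / curve_length
```

STT_DI would use the per-lead STT integrals as `values`, and TSI_DI the per-lead TSI values, each compared against the corresponding control-group arrays.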
[ "body surface potential mapping", "myocardial infarction", "ventricular tachycardia", "implantable cardioverter-defibrillator" ]
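The cut-off selection described in the Statistics subsection (the diagnostic criterion is the value that maximises the product of sensitivity and specificity) could be reproduced along the following lines. The direction flag and variable names are illustrative; for example, STT_QRST_CORR flags risk below its criterion, so it would be evaluated with higher_is_positive=False.

```python
# Hedged sketch of the sensitivity x specificity criterion search.
import numpy as np

def best_cutoff(values, has_vt, higher_is_positive=True):
    """Return (cutoff, sens*spec, sensitivity, specificity) for the cut-off
    that maximises the product of sensitivity and specificity."""
    values = np.asarray(values, dtype=float)
    has_vt = np.asarray(has_vt, dtype=bool)
    best = (np.nan, -1.0, 0.0, 0.0)
    for c in np.unique(values):
        positive = values >= c if higher_is_positive else values <= c
        sensitivity = positive[has_vt].mean()       # VT patients flagged
        specificity = (~positive[~has_vt]).mean()   # non-VT patients cleared
        if sensitivity * specificity > best[1]:
            best = (c, sensitivity * specificity, sensitivity, specificity)
    return best
```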
Evaluation of nosocomial infections and risk factors in critically ill patients.
21358613
Nosocomial infections are one of the most serious complications in intensive care unit patients because they lead to high morbidity, mortality, length of stay and cost. The aim of this study was to determine the nosocomial infections, risk factors and pathogens, and the antimicrobial susceptibilities of these pathogens, in the intensive care unit of a university hospital.
BACKGROUND
The patients were observed prospectively using unit-directed, patient- and laboratory-based active surveillance.
MATERIAL/METHODS
20.1% of the patients developed a total of 40 intensive care unit-acquired infections over a total of 988 patient-days. The infection sites were the lower respiratory tract, urinary tract, bloodstream, wound, and the central nervous system. Respiratory deficiency, diabetes mellitus, and the use of steroids and antibiotics were identified as risk factors. The most common pathogens were Enterobacteriaceae, Staphylococcus aureus and Candida species. No vancomycin resistance was detected in Gram-positive bacteria. Imipenem and meropenem were the most effective antibiotics against Enterobacteriaceae.
RESULTS
The hospital infection rate in the intensive care unit was not very high. Diabetes mellitus, length of stay, steroid use, urinary catheterization and central venous catheterization were identified as risk factors by the final logistic regression analysis. These data, collected from a newly established intensive care unit of a university hospital, are important for predicting the infections and antimicrobial resistance profiles that will develop in the future.
CONCLUSIONS
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Critical Illness", "Cross Infection", "Female", "Humans", "Logistic Models", "Male", "Middle Aged", "Risk Factors", "Turkey", "Young Adult" ]
3524731
null
null
Statistical analysis
Student’s t test, the Mann-Whitney U test, the χ2 test and Fisher’s exact test were used for statistical analysis. P≤0.05 was considered significant. A logistic regression model was also used to evaluate the risk factors for infection.
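A hedged sketch of this analysis pipeline in Python is given below; the DataFrame and column names are illustrative, not the authors' actual code or data layout.

```python
# Univariate screening (Mann-Whitney U / chi-square / Fisher's exact) and a
# multivariable logistic regression for nosocomial-infection risk factors.
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency, fisher_exact
import statsmodels.api as sm

def univariate_p(df, var, outcome="nosocomial_infection"):
    """P value of one candidate risk factor against the binary outcome."""
    if df[var].nunique() > 2:                        # treat as continuous
        a = df.loc[df[outcome] == 1, var].dropna()
        b = df.loc[df[outcome] == 0, var].dropna()
        return mannwhitneyu(a, b, alternative="two-sided").pvalue
    table = pd.crosstab(df[var], df[outcome]).values
    if (table < 5).any():                            # sparse 2x2 table
        return fisher_exact(table)[1]
    return chi2_contingency(table)[1]

def logistic_model(df, predictors, outcome="nosocomial_infection"):
    """Multivariable logistic regression; odds ratios are obtained by
    exponentiating result.params."""
    X = sm.add_constant(df[predictors].astype(float))
    return sm.Logit(df[outcome].astype(float), X).fit(disp=0)
```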
Results
A total of 250 patients were admitted during this 6-month period. 149 patients (61 female and 88 male) with a mean age of 61.1±18.1 years (min 15 – max 94) were involved in this study. They stayed a mean of 6.6±5.9 days (min 2 – max 30) in the ICU. The mean APACHE II score was 13.2±4.8 (min 4 – max 26). 20.1% (n=30) of the patients developed a total of 40 ICU-acquired infections over a total of 988 patient-days. Nosocomial infection was diagnosed a mean of 5.4±4.9 days (min 2 – max 23) after admission to the ICU. A single NI occurred in 23 patients (76.7%), 5 patients (16.7%) had 2 infections and 2 (6.7%) had 3 or more. The infection sites were the lower respiratory tract (40%), urinary tract (40%), bloodstream (10%), wound (7.5%), and the central nervous system (CNS) (2.5%). The sites of infection are summarized in Figure 1. A total of 52 patients had ischemic heart disease, 32 (21.5%) had undergone surgery before admission, of whom 23 (15.4%) had emergency and 9 (6%) elective surgery, 29 (19.5%) had cerebrovascular disease, 19 (12.8%) had diabetes mellitus, 13 (8.7%) had chronic obstructive pulmonary disease, and 13 (8.7%) had gastrointestinal hemorrhage. 116 patients (77.9%) had a urinary catheter, 37 (24.8%) had a nasogastric tube, 28 (18.8%) were being mechanically ventilated, 25 (16.8%) were intubated, 8 (5.4%) had a tracheostomy, 7 (4.7%) had an arterial catheter, 6 (4%) had a central venous catheter, and 6 (4%) had a drainage catheter. Respiratory deficiency, diabetes mellitus, and the use of steroids and antibiotics were found to be risk factors for nosocomial infection. Male sex, respiratory deficiency, unconsciousness, intubation, mechanical ventilation and colonization of organisms in the lower respiratory tract were found to be the main risk factors for lower respiratory tract infection. Only antibiotic use was found to be a risk factor for urinary tract infection. Analysis of the clinical characteristics of patients with and without NI (Table 1) showed that numerous factors were associated with the occurrence of infection. The final logistic regression analysis showed that diabetes mellitus, length of stay, steroid use, urinary catheterization and central venous catheterization were statistically significant risk factors for nosocomial infection in the ICU (Table 2). Overall, 44 patients (29.5%) died; of these, 16 (36.4%) had an NI and 28 (63.6%) did not. The difference in mortality rate between patients with and without NI was significant. Table 3 summarizes the organisms isolated from the nosocomial infections. The most common pathogens were as follows: Enterobacteriaceae (26.1%), Staphylococcus aureus (21.7%), Candida species (16.7%), Pseudomonas aeruginosa (10.9%), Enterococcus species (10.9%), and Acinetobacter species (8.7%). Among the S. aureus strains, resistance was highest to methicillin, sulbactam-ampicillin, cefazolin, erythromycin, gentamicin, ciprofloxacin and ofloxacin, with 8/10 strains resistant. Trimethoprim/sulphamethoxazole, clindamycin, teicoplanin and vancomycin were found to be the most effective antibiotics against S. aureus. Of all the Enterococcus isolates recovered from patients in the ICU, 5/5 were resistant to penicillin and ciprofloxacin, and 4/5 were resistant to tetracycline. No vancomycin resistance was detected in Gram-positive bacteria. Imipenem and meropenem were found to be the most effective antibiotics against Enterobacteriaceae.
Among the Enterobacteriaceae, resistance was highest to ampicillin and amoxicillin-clavulanic acid, with 9/12 strains resistant. Within the P. aeruginosa strains there was no resistance to amikacin, whereas resistance was highest to ceftazidime, gentamicin, mezlocillin and piperacillin/tazobactam. The most frequently prescribed antibiotics were third-generation cephalosporins (32.9%), quinolones (17.4%), metronidazole (15.4%), first-generation cephalosporins (8.7%), and aminoglycosides (8.7%).
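For orientation, the headline rates above can be recomputed directly from the stated counts; the incidence density per 1000 patient-days shown below is derived from those counts and is not reported explicitly in the text.

```python
# Illustrative recomputation of the reported infection rates.
patients, infected, infections, patient_days = 149, 30, 40, 988

cumulative_incidence = 100 * infected / patients        # ≈ 20.1% of patients
incidence_density = 1000 * infections / patient_days    # ≈ 40.5 / 1000 patient-days
print(f"{cumulative_incidence:.1f}% of patients infected; "
      f"{incidence_density:.1f} infections per 1000 patient-days")
```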
Conclusions
In this study, the hospital infection rate in the ICU was not very high, being similar to rates previously reported from other units in our country. Respiratory deficiency, diabetes mellitus, and the use of steroids and antibiotics were found to be risk factors for nosocomial infection. Male sex, respiratory deficiency, unconsciousness, intubation, mechanical ventilation and colonization of organisms in the lower respiratory tract were found to be the main risk factors for lower respiratory tract infection, and only antibiotic use was found to be a risk factor for urinary tract infection. Diabetes mellitus, length of stay, steroid use, urinary catheterization and central venous catheterization were determined to be risk factors for nosocomial infection in the ICU by the final logistic regression analysis. These data, collected from a newly established ICU of a university hospital, are important for predicting the infections and antimicrobial resistance profiles that will develop in the future.
[ "Background" ]
[ "The intensive care units are treatment units that provide the vital support to the critically ill patients. Nosocomial infections (NIs) are one of the most serious complications in intensive care unit (ICU) patients because they lead to high morbidity, mortality, length of stay and cost [1]. Although only 5–10% of all hospitalized patients are treated in ICUs, they account for approximately 25% of all NIs [2]. Patients hospitalized in ICUs are 5 to 10 times more to acquire NIs than other hospital patients [3]. Patients admitted into ICUs are susceptible to infection because of their underlying diseases or invasive monitoring and they are disposed to the infections after exposure to broad-spectrum antimicrobials [2]. The high rate of nosocomial infection in ICU leads to use broad spectrum antibiotics and emergence of antibiotic resistant microorganisms. The mortality and treatment cost of the infection caused by the resistant strains is very high compared with the mortality and treatment cost infection caused by the susceptible strains [3]. On these grounds it is important to monitor and control of the NIs in ICUs.\nThe aim of this study was to determine the nosocomial infections, risk factors, causative agents and the antimicrobial susceptibilities of these agents in ICU of Mustafa Kemal University Hospital." ]
[ null ]
[ "Background", "Material and Methods", "Statistical analysis", "Results", "Discussion", "Conclusions" ]
[ "The intensive care units are treatment units that provide the vital support to the critically ill patients. Nosocomial infections (NIs) are one of the most serious complications in intensive care unit (ICU) patients because they lead to high morbidity, mortality, length of stay and cost [1]. Although only 5–10% of all hospitalized patients are treated in ICUs, they account for approximately 25% of all NIs [2]. Patients hospitalized in ICUs are 5 to 10 times more to acquire NIs than other hospital patients [3]. Patients admitted into ICUs are susceptible to infection because of their underlying diseases or invasive monitoring and they are disposed to the infections after exposure to broad-spectrum antimicrobials [2]. The high rate of nosocomial infection in ICU leads to use broad spectrum antibiotics and emergence of antibiotic resistant microorganisms. The mortality and treatment cost of the infection caused by the resistant strains is very high compared with the mortality and treatment cost infection caused by the susceptible strains [3]. On these grounds it is important to monitor and control of the NIs in ICUs.\nThe aim of this study was to determine the nosocomial infections, risk factors, causative agents and the antimicrobial susceptibilities of these agents in ICU of Mustafa Kemal University Hospital.", "This study was approved by Hospital Ethics Committee. All patients included in the study were admitted to the 10 bed mixed ICU for more than 48 hours during period of study from March 2007 to August 2007. The patients admitted to the ICU were observed prospectively by the unit-directed active surveillance method based on patient and the laboratory. Patients who stayed in ICU less than two days were excluded. They were prospectively followed up including five days after discharge from ICU. Infections that developed 48 hours after admission into the ICU were considered ICU acquired. The presence and criteria of infection were assessed daily on the ward round together with an infectious disease specialist. Urine bacterial culture was routinely performed on admission. Microbiological samples of blood, urine, tracheobronchial secretions, and any suspected infection focus were always obtained when a new infection was suspected. The definitions of infections were based on the definitions proposed by the Centers for Disease Control and Prevention. The risk factors were selected in the light of a review summarizing previously published articles about nosocomial infections in ICUs [2]. The following information was collected for all study patients: age, gender, cause of admission, severity of underlying diseases and organ dysfunction on admission as assessed by means of the Acute Physiology and Chronic Health Evaluation (APACHE) II, presence of ischemic heart disease, chronic obstructive pulmonary disease, diabetes mellitus, chronic renal or hepatic failure, intoxication, foreign body and prosthesis, underlying malignancy, general body trauma, recent use of immunosuppressive therapy, elective or emergency operations, previous antimicrobial therapy, prior hospitalization, parenteral nutrition, transfusion.\nSusceptibility testing of microorganisms was done according to recommended Clinical and Laboratory Standards Institute (CLSI) guidelines [4]. 
The automated Vitek bacteriology system (bioMerieux Vitek, France) was used for the identification of microorganisms and susceptibility testing.\n[SUBTITLE] Statistical analysis [SUBSECTION] Student’s t test, Mann-Whitney U test, χ2 and Fisher’s exact χ2 tests were used for statistical analysis. P≤0.05 was considered significant. Also a logistic regression model was used in order to evaluate the risk factors of infections.\nStudent’s t test, Mann-Whitney U test, χ2 and Fisher’s exact χ2 tests were used for statistical analysis. P≤0.05 was considered significant. Also a logistic regression model was used in order to evaluate the risk factors of infections.", "Student’s t test, Mann-Whitney U test, χ2 and Fisher’s exact χ2 tests were used for statistical analysis. P≤0.05 was considered significant. Also a logistic regression model was used in order to evaluate the risk factors of infections.", "A total of 250 patients were admitted during this 6-month period. 149 patients (61 female and 88 male) with a mean age of 61.1±18.1 (min 15 – max 94) were involved in this study. They stayed a mean of 6.6±5.9 (min 2– max 30) days in ICU. A mean of APACHE II scores was found as 13.2±4.8 (min 4 – max 26).\n20.1% (n=30) of the patients developed a total of 40 ICU-acquired infections for a total of 988 patient-days. Nosocomial infection was diagnosed at a mean of 5.4±4.9 (min 2 – max 23) days after the admission in ICU. One event of NI occurred in 23 patients (76.7%), 5 (16.7%) had 2 infections and 2 (6.7%) had 3 or more. The infection sites were the lower respiratory tract (40%), urinary tract (40%), bloodstream (10%), wound (7.5%), and the central nervous system (CNS) infection (2.5%). The sites of infection are summarized in Figure 1.\nA total of 52 patients had ischemic heart disease, 32 (21.5%) had undergone surgery before admission whom 23 (15.4%) emergency, 9 (6%) had elective surgery, 29 (19.5%) had cerebrovascular disease, 19 (12.8%) had diabetes mellitus and 13 (8.7%) chronic obstructive pulmonary disease, 13 (8.7%) had gastrointestinal hemorrhage.\n116 patients (77.9%) had a urinary catheter, 37 (24.8%) had a nasogastric tube, 28 (18.8%) were being mechanically ventilated, 25 (16.8%) were being intubated, 8 (5.4%) had a tracheostomy, 7 (4.7%) had an arterial catheter, 6 (4%) had a central venous catheter, 6 (4%) had drenage catheter. The respiratory deficiency, diabetes mellitus, usage of steroid and antibiotics were found as the risk factors for nosocomial infection. And male sex, respiratory deficiency, unconsciousness, intubation, mechanical ventilation and colonization of organisms in the lower respiratory tract were found as the main risk factors for lower respiratory tract infection. Only usage of antibiotic was found to be the risk factor for urinary tract infection. Analysis (Table 1) of the clinical characteristics of patients with and without NI denoted that numerous factors were associated with the occurrence of infection.\nFinal logistic regression analysis showed that diabetes mellitus, length of stay, usage of steroid, urinary catheter and central venous catheter were statistically significant risk factors for nosocomial infection in ICU (Table 2).\nAmong the total patients, 44 (29.5%) died and 16 (36.4%) was detected with and 28 (63.6%) without NI. The difference in mortality rate between presence of NI and absence of NI groups was significant.\nThe Table 3 summarizes the organisms isolated from the nosocomial infections. 
The most common pathogens found were as follows: Enterobacteriaceae (26.1%), Staphylococcus aureus (21.7%), Candida species (16.7%), Pseudomonas aeruginosa (10.9%), Enterococcus species (10.9%), and Acinetobacter species (8.7%).\nAmong the S. aureus strains, antimicrobial resistance was highest to methicillin, sulbactam-ampicillin, cefazolin, erythromycin, gentamicin, ciprofloxacin and ofloxacin, with 8/10 strains resistant. Trimethoprim/sulphamethoxazole, clindamycin, teicoplanin and vancomycin were found to be the most effective antibiotics against S. aureus. Of all the Enterococcus isolates recovered from patients in the ICU, 5/5 strains were resistant to penicillin and ciprofloxacin and 4/5 were resistant to tetracycline. No vancomycin resistance was detected among Gram-positive bacteria. Imipenem and meropenem were found to be the most effective antibiotics against Enterobacteriaceae, whose resistance was highest to ampicillin and amoxicillin-clavulanic acid, with 9/12 strains resistant. Among the P. aeruginosa strains there was no resistance to amikacin, whereas resistance was highest to ceftazidime, gentamicin, mezlocillin and piperacillin/tazobactam.\nThe most frequently prescribed antibiotics were third-generation cephalosporins (32.9%), quinolones (17.4%), metronidazole (15.4%), first-generation cephalosporins (8.7%), and aminoglycosides (8.7%).", "Nosocomial infection rates vary according to geographical location, type of ICU, patient population and local infection control practices [5,6]. More than one third of NIs are acquired in ICUs, with an incidence of 15 to 40% of admissions depending on the type of unit [2]. Such infections prolong the length of ICU stay and impose a substantial economic burden [7].\nWe performed this study to evaluate the development of nosocomial infections, the sites of infection, the most prevalent microorganisms and the antimicrobial resistance patterns in the ICU of a university hospital.\nThere were 250 admissions to the ICU during the study, and 149 patients were involved. The NI rate in our study, 20.1%, was similar to that observed by Klavs et al. [8] and lower than the rates reported in many university hospitals in Turkey [10–12] and in other countries [6,7]. Nosocomial infection rates in ICUs of between 23.2% and 30.6% have been reported both in our country and abroad, comparable to the rate in our study [13–15].\nThe NIs most frequently acquired in the ICU are lower respiratory tract, urinary tract, bloodstream, surgical wound and catheter-associated infections [2]. In our study, the lower respiratory tract and urinary tract were the most frequent sites of nosocomial infection; the other infection sites were the bloodstream, wound, and CNS, respectively. Although this distribution contrasts with some studies, it is consistent with others reported from other countries [13,14–16]. The rate and sites of NIs can vary between countries according to preventive measures and developmental status, between hospitals according to their patient spectrum, and between hospital wards according to treatments and interventions.\nIt is important to know and control the risk factors for nosocomial infection. The risk factors for NIs in the ICU have been investigated in national and international studies.
In this study, the relationship found between diabetes mellitus, use of steroids and antibiotics, and nosocomial infection was in accordance with the literature [17]. The relationship found between respiratory deficiency, unconsciousness, intubation, mechanical ventilation and lower respiratory tract infection was also in accordance with the literature [18,19].\nIn previous studies, urinary catheterization was recognized as the main risk factor for nosocomial infection by Girou et al. [20], and mechanical ventilation was recognized as the main risk factor for nosocomial pneumonia by McCusker et al. [21] and Gusmào et al. [19]. Leone et al. [22] reported that female sex, length of ICU stay and duration of catheterization were associated with an increased risk of urinary tract infection. Apostolopoulou et al. [23] reported that a duration of mechanical ventilation ≤5 days was a risk factor for ventilator-associated pneumonia. In a study from our country, Meric et al. [24] reported length of ICU stay (>7 days), respiratory failure as the primary cause of admission, sedative medication and operation (before or after admission to the ICU) as significant risk factors for NIs in the ICU.\nGirou et al. [20] reported that nosocomial infection rates were significantly higher in patients with an APACHE II score of 21 or higher, and Apostolopoulou et al. [23] reported an APACHE II score of 18 or higher as a risk factor for nosocomial infections. In contrast to Girou et al. [20] and Apostolopoulou et al. [23], we did not find a significant relationship between APACHE II score and NI.\nThe spectrum of potential pathogens and the predominant bacterial flora can vary considerably in different ICU settings. In this study, Enterobacteriaceae were the most common pathogens. S. aureus and P. aeruginosa were the predominant pathogens in lower respiratory tract infections, and Candida spp. and E. coli were the predominant pathogens in urinary tract infections. In surveillance studies from European countries, S. aureus and coagulase-negative staphylococci among the Gram-positive bacteria, and P. aeruginosa and E. coli among the Gram-negative bacteria, were reported to be isolated [25,26]. Candida species are causative agents of urinary tract infections in patients who are under continuous antibiotic treatment and have a urinary catheter [22]. Some studies have reported Candida species as the fifth most frequent pathogen or lower [8]; in our study, Candida species were the third most frequently isolated. One Candida isolate in our study was identified as the cause of meningitis. Candida spp. are not normally isolated from CNS infections; in this study the isolate came from a patient who had a shunt.\nIn multicenter studies, high resistance rates have been determined, but with great differences between centers [25,26]. For this reason, every hospital needs to determine its own resistance patterns.\nThe antimicrobial resistance rates of microorganisms isolated in nosocomial infections are higher than those of isolates from outpatients, and the percentage of resistant isolates from ICU patients is also higher than that from outpatients or other wards [27,28].\nConcerning the resistance pattern of S. aureus, 8/10 strains were resistant to methicillin and all were susceptible to vancomycin. Among the Acinetobacter strains, no antimicrobial tested was active against more than 2/4 isolates. Amikacin was found to be the most effective antibiotic against the P. aeruginosa strains.
These data confirm that organisms develop maximum resistance against the antibiotics that are most frequently prescribed.", "In this study, the hospital infection rate in the ICU was not very high, being similar to rates previously reported from other units in our country. Respiratory deficiency, diabetes mellitus, and use of steroids and antibiotics were found to be risk factors for nosocomial infection, while male sex, respiratory deficiency, unconsciousness, intubation, mechanical ventilation and colonization of organisms in the lower respiratory tract were the main risk factors for lower respiratory tract infection. Only antibiotic use was found to be a risk factor for urinary tract infection. Diabetes mellitus, length of stay, use of steroids, urinary catheter and central venous catheter were identified as risk factors for nosocomial infection in the ICU by the final logistic regression analysis. These data, which were collected from a newly established ICU of a university hospital, are important for predicting the infections and antimicrobial resistance profiles that will develop in the future." ]
[ null, "materials|methods", "methods", "results", "discussion", "conclusions" ]
[ "intensive care unit", "nosocomial infection", "risk factors" ]
Effects of levosimendan/furosemide infusion on plasma brain natriuretic peptide, echocardiographic parameters and cardiac output in end-stage heart failure patients.
21358614
Acute decompensated heart failure (ADHF) remains a cause of hospitalization in patients with end-stage congestive HF. The administration of levosimendan in comparison with standard therapy in CHF patients admitted for ADHF was analysed.
BACKGROUND
Consecutive patients admitted for ADHF (NYHA class III-IV) were treated with levosimendan infusion 0.1 µg/kg/min or with furosemide infusion 100-160 mg per day for 48 hours (control group). All subjects underwent determination of brain natriuretic peptide (BNP), non-invasive cardiac output (CO), and echocardiogram at baseline, at the end of therapy and 1 week after therapy.
MATERIAL/METHODS
Seven patients admitted for 20 treatments in 16 months (age 66 years; mean admission/year 5.4) were treated with levosimendan and compared with 7 patients admitted for 15 treatments (age 69.1 years; mean admission/year 6.1). At the end of levosimendan therapy, BNP decreased (from 679.7 ± 512.1 pg/ml to 554.2 ± 407.6 pg/ml p = 0.03), and 6 MWT and LVEF improved (from 217.6 ± 97.7 m to 372.2 ± 90.4 m p = 0.0001; from 22.8 ± 9.1% to 25.4 ± 9.8% p = 0.05). Deceleration time, E/A, E/E', TAPSE, pulmonary pressure and CO did not change significantly after levosimendan therapy and after 1 week. At follow-up, only 6-min WT and NYHA class showed a significant improvement (p = 0.0001, p = 0.001 respectively). The furosemide infusion reduced NYHA class and body weight (from 3.4 ± 0.6 to 2.3 ± 0.5 p = 0.001; from 77.5 ± 8.6 kg to 76 ± 6.6 kg p = 0.04), but impaired renal function (clearances from 56.3 ± 21.9 ml/min to 41.2 ± 10.1 ml/min p = 0.04).
RESULTS
Treating end-stage CHF patients with levosimendan improved BNP and LVEF, but this effect disappeared after 1 week. The amelioration of 6 MWT and NYHA class lasted longer after levosimendan infusion.
CONCLUSIONS
[ "Aged", "Cardiac Output", "Case-Control Studies", "Echocardiography", "Female", "Follow-Up Studies", "Furosemide", "Heart Failure", "Humans", "Hydrazones", "Infusions, Intravenous", "Male", "Natriuretic Peptide, Brain", "Pyridazines", "Simendan", "Time Factors" ]
3524718
null
null
Limitations of the study
The major limitation of this study is the absence of randomization. Moreover, the small number of patients involved did not permit any analysis of mortality or arrhythmic risk correlated with levosimendan infusion – these questions need to be addressed by a larger, randomized trial. Nevertheless, our limited experience suggests caution in using an infusion pump for levosimendan infusion in outpatients with chronic, refractory CHF [34], possibly limiting such therapy to subjects protected by an AICD. In fact, in our experience a ventricular arrhythmic disorder occurred in 10% of levosimendan treatments, whereas in the LIDO trial it was recorded in only 4% of patients treated [8].
Results
Seven patients (6 males; age 66.3±4.4) were admitted 20 times (range 1–6) for ADHF during the study (which lasted 16 months) and treated with a levosimendan infusion; 7 patients (5 males; age 69.1±3.9) did not agree to the levosimendan infusion and were admitted 15 times (range 1–4) for ADHF and treated with furosemide (control group). All subjects signed an informed consent. All subjects in the levosimendan group and 6 (85.7%) in the furosemide group had an implantable cardiac defibrillator with a pacemaker for cardiac resynchronization therapy. No patient in either group could be included in a cardiac transplant protocol because of age, comorbidity (severe asthma) or a recent history of malignant neoplasm. Two patients were evaluated for implantation of a ventricular assist device as destination therapy and were then excluded because of severe right ventricular dysfunction. The etiology of the CHF was ischemic in 6/7 patients in the levosimendan group and in 3/7 in the furosemide group. Table 1 summarizes the comparison between the main parameters at baseline in the 2 groups. The effects of the levosimendan or furosemide infusion on renal function, blood pressure, echocardiographic parameters, functional status and cardiac output are described in Tables 2 and 3, respectively. In patients treated with levosimendan, a supplementary measurement of CO, cardiac index (CI) and stroke volume (SV) during the infusion (>12 hours) was obtained. CO improved from 3.6±0.9 l/min to 4.5±0.7 l/min during infusion (p=0.001), decreasing at the end of infusion and at 1-week follow-up (3.8±1.4 l/min and 4.1±1.5 l/min, p=0.5 and p=0.4 respectively). Similarly, CI and SV improved during infusion (from 1.8±0.4 l/min/m2 to 2.3±0.3 l/min/m2, p=0.001; from 50.3±11.3 ml to 58.4±10 ml, p=0.01), maintaining their improvement after the end of infusion (1.9±0.7 l/min/m2, p=0.4; 56.6±24.3 ml, p=0.3) and at 1-week follow-up (2.1±0.7 l/min/m2, p=0.3; 56.6±24.2 ml, p=0.3) (Figure 1). The analysis of the MLHFQ revealed a better health-related quality of life 1 week after levosimendan infusion (27.2±15.3 vs. 40.9±17.5, p=0.05). On 12-lead electrocardiogram, 2 (28.6%) subjects in the levosimendan group and 4 (57.2%) in the furosemide group were in permanent atrial fibrillation. The study lasted 16 months; during this period, appropriate AICD shocks for ventricular tachycardia or ventricular fibrillation after the levosimendan infusion were recorded in 2 patients (28.6%) (1 patient died a non-cardiac death from sepsis after admission to the Intensive Care Unit). Two patients (28.6%) in the control group died a cardiac death due to worsening CHF and multi-organ failure, and in 1 subject an appropriate AICD shock was recorded.
Conclusions
Treating ADHF in end-stage CHF patients with clinical hyperhydration using levosimendan improved BNP and LVEF, but this effect disappeared after 1 week. The amelioration of 6MWT and NYHA class lasted longer after levosimendan infusion, without causing an impairment of renal function. Patients treated with levosimendan manifested a better quality of life than those treated with furosemide infusion. [SUBTITLE] Limitations of the study [SUBSECTION] The major limitation of this study is the absence of randomization. Moreover, the small number of patients involved did not permit any analysis of mortality or arrhythmic risk correlated with levosimendan infusion – these questions need to be addressed by a larger, randomized trial. Nevertheless, our limited experience suggests caution in using an infusion pump for levosimendan infusion in outpatients with chronic, refractory CHF [34], possibly limiting such therapy to subjects protected by an AICD. In fact, in our experience a ventricular arrhythmic disorder occurred in 10% of levosimendan treatments, whereas in the LIDO trial it was recorded in only 4% of patients treated [8].
[ "Background", "Doppler echocardiography", "Biochemical assays", "Non-invasive cardiac output", "Statistical analysis" ]
[ "Levosimendan is a pharmacological agent that exerts positive inotropic effect by binding to cardiac troponin C in a calcium-dependent manner and sensitizing myofilaments to calcium without increasing myocardial oxygen consumption [1–3]. Levosimendan also has vasodilatory properties through its facilitation of an adenosine-triphosphate-dependent potassium channel opening [4] and anti-ischemic effects [5]. In clinical studies the infusion of levosimendan increased cardiac output, reducing cardiac filling pressures, and was correlated to an improvement of cardiac symptoms and prognosis (death and hospitalization for congestive heart failure [CHF]) [6,7]. Previous experiences [7–9] suggest that a single 24-hour levosimendan infusion in patients suffering severe CHF due to left ventricular dysfunction induces beneficial hemodynamic effects, relief of symptoms and reduction in short-term morbidity and mortality compared with placebo or dobutamine. However, the largest randomized trail (SURVIVE) [10] showed an initial reduction in B-type natriuretic peptide (BNP) in the levosimendan vs. the dobutamine group, but failed to demonstrate a significant reduction of all-cause mortality or secondary clinical outcomes. Nevertheless, different single-centre observations regarding the intermittent infusion of levosimendan in severe CHF have reported an improvement of left ventricular performance, relief of symptoms and prolonged short-term survival without an increase in incidence of cardiac arrhythmias [11–14].\nThe objective of this study was to analyse the feasibility and efficacy of levosimendan infusion in end-stage CHF patients admitted for repetitive acute decompensation (ADHF) episodes with evidence of clinical hyperhydration, comparing these patients with a control group treated traditionally with an infusion of furosemide in order to reduce the fluid overload. The effects of the 2 different strategies on plasma BNP, echocardiographic parameters and functional variables [NYHA class, 6-min walking test (6MWT) and non-invasive cardiac output (CO)] were analysed.", "Echocardiograms were performed with a Vivid 7 computed sonography system (GE Medical Systems, Waukesha, Wisconsin, USA) according to the recommendations of the American Society of Echocardiography [18]. Two-dimensional apical 2- and 4-chamber views were used for volume measurements; LVEF was calculated with a modified Simpson’s method using biplane apical (2- and 4-chamber) views. The LV end-diastolic volume and the LV end-systolic volume were recorded. All echo examinations were performed by expert operators blinded to the results of BNP assay; the intra-observer variability in the evaluation of LVEF was found to be <5%. Echocardiographic measurements including LV end-diastolic diameter, and the diastolic thickness of the ventricular septum and the posterior LV wall were determined according to the American Society of Echocardiography recommendations [18]. Systolic dysfunction was defined as a level of LVEF <50%. The definition of restrictive filling pattern was a predefined modification of classifications used in prior studies (19): E/A ≥2, DT ≤150 msec, S/D ratio <1, and AR >35 cm/sec. All these criteria were verified to define the restrictive filling pattern. The Doppler sample was set 1–2 mm under the free edges of the mitral valve using the apical 4-chamber projection; an average of 5 beats was considered. 
In patients suffering from atrial fibrillation at the time of the echocardiogram, the diastolic function was classified as: 1) restrictive pattern (DT ≤150 msec), or 2) indeterminate (DT >150 msec). The pulmonary artery pressure (PAP) was obtained by determining the peak velocity of the tricuspid regurgitation jet, adding 5 or 10 mmHg as right atrial pressure according to right atrial size, severity of regurgitation and appearance of the inferior vena cava. From Doppler tissue imaging of the annulus, the E′ wave (early annular velocity opposites in direction to the mitral inflow) was determined and the ratio E/E′ calculated [20]. The right ventricle function was investigated using the M-mode echocardiography obtaining the tricuspid annular plane systolic excursion (TAPSE) [21].", "All blood samples were collected by venipuncture and immediately analysed with the bedside Triage B type natriuretic fluorescence immunoassay (Biosite Diagnostics, La Jolla, CA, USA). The Triage Meter is used to measure BNP concentration by detecting a fluorescent emission that reproduces the amount of BNP in the blood. After the addition of 250 μl of whole blood to the disposable device, cells were filtered and separated from the plasma with BNP, which entered a reaction chamber containing fluorescent BNP antibodies. After 2-min incubation, the BNP-antibody mixture migrated to an area containing immobilised antibodies and remained fixed. The unbound fluorescent antibodies were washed away by the excess sample fluid. The Triage Meter then measured the fluorescent intensity of the BNP assay area. The assay results were complete in 15 minutes. The creatinine clearance was calculated using the MDRD formula.", "For the measurement of non-invasive cardiac output (CO), an inert gas rebreathing method (Innocor, Innovison A/S, Odense, Denmark) was used. The system utilised a N2O (blood soluble gas) and SF6 (blood insoluble gas) enriched with O2 of 0.5% and 0.1%, respectively. Tidal volume was progressively increased in the closed circuit to match the physiologic increase. Use the SF6 allowed measuring the volume of lungs, valve and rebreathing bag. N2O concentration decreases during the rebreathing manoeuvre, with a rate proportional to pulmonary blood flow. Three to 4 respiratory cycles were needed to obtain N2O washout. Absence of pulmonary shunt was defined as arterial O2 saturation >98% (blood sample obtained from the arterial line). In the absence of a pulmonary shunt, pulmonary blood flow=CO. This method has been proven to be closely correlated with thermodilution (R=0.93) and the direct Fick method (R=0.94) [22].", "Continuous variables were expressed as mean ±standard deviation (SD). Inter-group differences in continuous variables were evaluated using 2-tailed t test for unpaired data; differences between baseline and follow-up were evaluated by 2-tailed t test for paired data. Differences in non-continuous variables were evaluated using non-parametric tests as needed (Wilcoxon, Mann-Whitney). Distribution of categorical variables between groups was evaluated by chi-square with Yates correction. Statistical significance was set at p≤0.05. Analyses were performed using SPSS software for Windows, release 7.5, SPSS Inc., Chicago, USA." ]
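Two of the derived quantities in the methods above — systolic pulmonary artery pressure estimated from the tricuspid regurgitation jet, and creatinine clearance estimated with the MDRD formula — can be written out explicitly. The sketch below uses the standard simplified Bernoulli relation (gradient = 4·v²) plus the assumed 5 or 10 mmHg right atrial pressure, and the 4-variable MDRD equation; the specific constants (e.g., the 175 coefficient for IDMS-traceable creatinine assays) are assumptions, since the paper does not state them.

```python
# Illustrative helpers for two derived quantities described in the methods.
# Both formulas are standard clinical approximations; the exact constants used
# by the authors are not stated in the text, so treat these as assumptions.

def systolic_pap(tr_jet_velocity_m_s: float, ra_pressure_mmhg: float = 5.0) -> float:
    """Systolic pulmonary artery pressure (mmHg) from the tricuspid regurgitation jet.

    Simplified Bernoulli equation (gradient = 4 * v^2) plus an assumed right
    atrial pressure of 5 or 10 mmHg, as in the echo protocol above.
    """
    return 4.0 * tr_jet_velocity_m_s ** 2 + ra_pressure_mmhg

def mdrd_egfr(creatinine_mg_dl: float, age_years: float,
              female: bool, black: bool = False) -> float:
    """Estimated GFR (ml/min/1.73 m^2) via the 4-variable MDRD equation.

    The 175 coefficient assumes an IDMS-traceable creatinine assay (older
    assays use 186); this choice is an assumption, not taken from the paper.
    """
    egfr = 175.0 * creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: a 3.2 m/s TR jet with an estimated RA pressure of 10 mmHg
print(systolic_pap(3.2, 10.0))           # ~51 mmHg
print(mdrd_egfr(1.4, 69, female=False))  # ~50 ml/min/1.73 m^2
```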
[ null, null, null, null, "methods" ]
[ "Background", "Material and Methods", "Patients", "Doppler echocardiography", "Biochemical assays", "Non-invasive cardiac output", "Statistical analysis", "Results", "Discussion", "Conclusions", "Limitations of the study" ]
[ "Levosimendan is a pharmacological agent that exerts positive inotropic effect by binding to cardiac troponin C in a calcium-dependent manner and sensitizing myofilaments to calcium without increasing myocardial oxygen consumption [1–3]. Levosimendan also has vasodilatory properties through its facilitation of an adenosine-triphosphate-dependent potassium channel opening [4] and anti-ischemic effects [5]. In clinical studies the infusion of levosimendan increased cardiac output, reducing cardiac filling pressures, and was correlated to an improvement of cardiac symptoms and prognosis (death and hospitalization for congestive heart failure [CHF]) [6,7]. Previous experiences [7–9] suggest that a single 24-hour levosimendan infusion in patients suffering severe CHF due to left ventricular dysfunction induces beneficial hemodynamic effects, relief of symptoms and reduction in short-term morbidity and mortality compared with placebo or dobutamine. However, the largest randomized trail (SURVIVE) [10] showed an initial reduction in B-type natriuretic peptide (BNP) in the levosimendan vs. the dobutamine group, but failed to demonstrate a significant reduction of all-cause mortality or secondary clinical outcomes. Nevertheless, different single-centre observations regarding the intermittent infusion of levosimendan in severe CHF have reported an improvement of left ventricular performance, relief of symptoms and prolonged short-term survival without an increase in incidence of cardiac arrhythmias [11–14].\nThe objective of this study was to analyse the feasibility and efficacy of levosimendan infusion in end-stage CHF patients admitted for repetitive acute decompensation (ADHF) episodes with evidence of clinical hyperhydration, comparing these patients with a control group treated traditionally with an infusion of furosemide in order to reduce the fluid overload. The effects of the 2 different strategies on plasma BNP, echocardiographic parameters and functional variables [NYHA class, 6-min walking test (6MWT) and non-invasive cardiac output (CO)] were analysed.", "[SUBTITLE] Patients [SUBSECTION] This single-centre prospective study, approved by the local ethics committee, included end-stage (stage D AHA/ACC) patients >18 years old, admitted into the Heart Failure Unit from October 2008 to February 2010 with the diagnosis of ADHF (NYHA functional class III to IV). Patients were included when presenting at admission all the following criteria: symptoms of CHF according to the accepted criteria in the literature [15,16], refractory to the usual pharmacological treatments; NYHA class III or IV due to a deterioration of symptoms (at least 1 class) despite optimum oral therapy; echocardiographic evidence of systolic and/or diastolic dysfunction (see below); a cardiac index (CI) ≤2.5 L/min/m2; and clinical fluid overload (≥2 findings of congestion as elevated jugular venous pressure, pulmonary rales, hepatomegaly, orthopnea, paroxysmal nocturnal dyspnoea, abdominal bloating). The exclusion criteria were: childbearing potential; CHF related to restrictive or hypertrophic cardiomyopathy or to uncorrected stenotic valvular disease; concomitant unstable angina or myocardial infarction; systolic blood pressure below 85 mmHg; severe renal failure (clearance creatinine <30 ml/min); administration of inotropes in the last week; and absence of a written consent that authorized the levosimendan infusion. 
The protocol of levosimendan infusion did not provide a loading dose, being administered intravenously at the dosage of 0.1 μg/kg/min for 24–36 hours in order to complete the dosage of 12.5 mg. The reduction of systolic blood pressure <80 mmHg, tachycardia with heart rate >140/min, and symptomatic hypotension were considered criteria for reducing dosage of levosimendan or suspending it for 30–60 min until the dose-limiting event had resolved. In the control group, an infusion of 100–160 mg per day of furosemide for 48 hours was administered. The dose of concomitant medications was held constant unless urgent modifications were required by the clinical status in both groups. The therapy prescribed in those patients included angiotensin-converting enzyme inhibitors (enalapril, ramipril), angiotensin receptor blockade (candesartan, losartan) in case of enalapril/ramipril intolerance, beta-blockers (metoprolol, bisoprolol or carvedilol), digoxin, loop diuretic and spironolactone at low dose. For beta-blockers, angiotensin-converting enzyme inhibitors and angiotensin receptor blockade, the patients’ maximum tolerated dose was used, after an adequate titration period. Before the infusion of levosimendan/furosemide, after ending (within 6 hours) and at 1-week follow-up all the parameters scheduled were measured. In the levosimendan group, a measurement of non-invasive CO during the infusion (between 12 and 24 hours) was obtained. The evaluation of health-related quality of life was obtained at 1-week follow-up using the Minnesota Living with Heart Failure Questionnaire (MLHFQ) in all patients [17].\nThe objective of this prospective study was to evaluate the immediate and short-term effects of a levosimendan infusion (without a loading dose) on plasma BNP, non-invasive CO, echocardiographic parameters, renal function and quality of life in ADHF episodes in end-stage CHF patients, comparing this treatment with a furosemide infusion.\nThis single-centre prospective study, approved by the local ethics committee, included end-stage (stage D AHA/ACC) patients >18 years old, admitted into the Heart Failure Unit from October 2008 to February 2010 with the diagnosis of ADHF (NYHA functional class III to IV). Patients were included when presenting at admission all the following criteria: symptoms of CHF according to the accepted criteria in the literature [15,16], refractory to the usual pharmacological treatments; NYHA class III or IV due to a deterioration of symptoms (at least 1 class) despite optimum oral therapy; echocardiographic evidence of systolic and/or diastolic dysfunction (see below); a cardiac index (CI) ≤2.5 L/min/m2; and clinical fluid overload (≥2 findings of congestion as elevated jugular venous pressure, pulmonary rales, hepatomegaly, orthopnea, paroxysmal nocturnal dyspnoea, abdominal bloating). The exclusion criteria were: childbearing potential; CHF related to restrictive or hypertrophic cardiomyopathy or to uncorrected stenotic valvular disease; concomitant unstable angina or myocardial infarction; systolic blood pressure below 85 mmHg; severe renal failure (clearance creatinine <30 ml/min); administration of inotropes in the last week; and absence of a written consent that authorized the levosimendan infusion. The protocol of levosimendan infusion did not provide a loading dose, being administered intravenously at the dosage of 0.1 μg/kg/min for 24–36 hours in order to complete the dosage of 12.5 mg. 
The reduction of systolic blood pressure <80 mmHg, tachycardia with heart rate >140/min, and symptomatic hypotension were considered criteria for reducing dosage of levosimendan or suspending it for 30–60 min until the dose-limiting event had resolved. In the control group, an infusion of 100–160 mg per day of furosemide for 48 hours was administered. The dose of concomitant medications was held constant unless urgent modifications were required by the clinical status in both groups. The therapy prescribed in those patients included angiotensin-converting enzyme inhibitors (enalapril, ramipril), angiotensin receptor blockade (candesartan, losartan) in case of enalapril/ramipril intolerance, beta-blockers (metoprolol, bisoprolol or carvedilol), digoxin, loop diuretic and spironolactone at low dose. For beta-blockers, angiotensin-converting enzyme inhibitors and angiotensin receptor blockade, the patients’ maximum tolerated dose was used, after an adequate titration period. Before the infusion of levosimendan/furosemide, after ending (within 6 hours) and at 1-week follow-up all the parameters scheduled were measured. In the levosimendan group, a measurement of non-invasive CO during the infusion (between 12 and 24 hours) was obtained. The evaluation of health-related quality of life was obtained at 1-week follow-up using the Minnesota Living with Heart Failure Questionnaire (MLHFQ) in all patients [17].\nThe objective of this prospective study was to evaluate the immediate and short-term effects of a levosimendan infusion (without a loading dose) on plasma BNP, non-invasive CO, echocardiographic parameters, renal function and quality of life in ADHF episodes in end-stage CHF patients, comparing this treatment with a furosemide infusion.\n[SUBTITLE] Doppler echocardiography [SUBSECTION] Echocardiograms were performed with a Vivid 7 computed sonography system (GE Medical Systems, Waukesha, Wisconsin, USA) according to the recommendations of the American Society of Echocardiography [18]. Two-dimensional apical 2- and 4-chamber views were used for volume measurements; LVEF was calculated with a modified Simpson’s method using biplane apical (2- and 4-chamber) views. The LV end-diastolic volume and the LV end-systolic volume were recorded. All echo examinations were performed by expert operators blinded to the results of BNP assay; the intra-observer variability in the evaluation of LVEF was found to be <5%. Echocardiographic measurements including LV end-diastolic diameter, and the diastolic thickness of the ventricular septum and the posterior LV wall were determined according to the American Society of Echocardiography recommendations [18]. Systolic dysfunction was defined as a level of LVEF <50%. The definition of restrictive filling pattern was a predefined modification of classifications used in prior studies (19): E/A ≥2, DT ≤150 msec, S/D ratio <1, and AR >35 cm/sec. All these criteria were verified to define the restrictive filling pattern. The Doppler sample was set 1–2 mm under the free edges of the mitral valve using the apical 4-chamber projection; an average of 5 beats was considered. In patients suffering from atrial fibrillation at the time of the echocardiogram, the diastolic function was classified as: 1) restrictive pattern (DT ≤150 msec), or 2) indeterminate (DT >150 msec). 
The pulmonary artery pressure (PAP) was obtained by determining the peak velocity of the tricuspid regurgitation jet, adding 5 or 10 mmHg as right atrial pressure according to right atrial size, severity of regurgitation and appearance of the inferior vena cava. From Doppler tissue imaging of the annulus, the E′ wave (early annular velocity opposites in direction to the mitral inflow) was determined and the ratio E/E′ calculated [20]. The right ventricle function was investigated using the M-mode echocardiography obtaining the tricuspid annular plane systolic excursion (TAPSE) [21].\nEchocardiograms were performed with a Vivid 7 computed sonography system (GE Medical Systems, Waukesha, Wisconsin, USA) according to the recommendations of the American Society of Echocardiography [18]. Two-dimensional apical 2- and 4-chamber views were used for volume measurements; LVEF was calculated with a modified Simpson’s method using biplane apical (2- and 4-chamber) views. The LV end-diastolic volume and the LV end-systolic volume were recorded. All echo examinations were performed by expert operators blinded to the results of BNP assay; the intra-observer variability in the evaluation of LVEF was found to be <5%. Echocardiographic measurements including LV end-diastolic diameter, and the diastolic thickness of the ventricular septum and the posterior LV wall were determined according to the American Society of Echocardiography recommendations [18]. Systolic dysfunction was defined as a level of LVEF <50%. The definition of restrictive filling pattern was a predefined modification of classifications used in prior studies (19): E/A ≥2, DT ≤150 msec, S/D ratio <1, and AR >35 cm/sec. All these criteria were verified to define the restrictive filling pattern. The Doppler sample was set 1–2 mm under the free edges of the mitral valve using the apical 4-chamber projection; an average of 5 beats was considered. In patients suffering from atrial fibrillation at the time of the echocardiogram, the diastolic function was classified as: 1) restrictive pattern (DT ≤150 msec), or 2) indeterminate (DT >150 msec). The pulmonary artery pressure (PAP) was obtained by determining the peak velocity of the tricuspid regurgitation jet, adding 5 or 10 mmHg as right atrial pressure according to right atrial size, severity of regurgitation and appearance of the inferior vena cava. From Doppler tissue imaging of the annulus, the E′ wave (early annular velocity opposites in direction to the mitral inflow) was determined and the ratio E/E′ calculated [20]. The right ventricle function was investigated using the M-mode echocardiography obtaining the tricuspid annular plane systolic excursion (TAPSE) [21].\n[SUBTITLE] Biochemical assays [SUBSECTION] All blood samples were collected by venipuncture and immediately analysed with the bedside Triage B type natriuretic fluorescence immunoassay (Biosite Diagnostics, La Jolla, CA, USA). The Triage Meter is used to measure BNP concentration by detecting a fluorescent emission that reproduces the amount of BNP in the blood. After the addition of 250 μl of whole blood to the disposable device, cells were filtered and separated from the plasma with BNP, which entered a reaction chamber containing fluorescent BNP antibodies. After 2-min incubation, the BNP-antibody mixture migrated to an area containing immobilised antibodies and remained fixed. The unbound fluorescent antibodies were washed away by the excess sample fluid. 
The Triage Meter then measured the fluorescent intensity of the BNP assay area. The assay results were complete in 15 minutes. The creatinine clearance was calculated using the MDRD formula.\nAll blood samples were collected by venipuncture and immediately analysed with the bedside Triage B type natriuretic fluorescence immunoassay (Biosite Diagnostics, La Jolla, CA, USA). The Triage Meter is used to measure BNP concentration by detecting a fluorescent emission that reproduces the amount of BNP in the blood. After the addition of 250 μl of whole blood to the disposable device, cells were filtered and separated from the plasma with BNP, which entered a reaction chamber containing fluorescent BNP antibodies. After 2-min incubation, the BNP-antibody mixture migrated to an area containing immobilised antibodies and remained fixed. The unbound fluorescent antibodies were washed away by the excess sample fluid. The Triage Meter then measured the fluorescent intensity of the BNP assay area. The assay results were complete in 15 minutes. The creatinine clearance was calculated using the MDRD formula.\n[SUBTITLE] Non-invasive cardiac output [SUBSECTION] For the measurement of non-invasive cardiac output (CO), an inert gas rebreathing method (Innocor, Innovison A/S, Odense, Denmark) was used. The system utilised a N2O (blood soluble gas) and SF6 (blood insoluble gas) enriched with O2 of 0.5% and 0.1%, respectively. Tidal volume was progressively increased in the closed circuit to match the physiologic increase. Use the SF6 allowed measuring the volume of lungs, valve and rebreathing bag. N2O concentration decreases during the rebreathing manoeuvre, with a rate proportional to pulmonary blood flow. Three to 4 respiratory cycles were needed to obtain N2O washout. Absence of pulmonary shunt was defined as arterial O2 saturation >98% (blood sample obtained from the arterial line). In the absence of a pulmonary shunt, pulmonary blood flow=CO. This method has been proven to be closely correlated with thermodilution (R=0.93) and the direct Fick method (R=0.94) [22].\nFor the measurement of non-invasive cardiac output (CO), an inert gas rebreathing method (Innocor, Innovison A/S, Odense, Denmark) was used. The system utilised a N2O (blood soluble gas) and SF6 (blood insoluble gas) enriched with O2 of 0.5% and 0.1%, respectively. Tidal volume was progressively increased in the closed circuit to match the physiologic increase. Use the SF6 allowed measuring the volume of lungs, valve and rebreathing bag. N2O concentration decreases during the rebreathing manoeuvre, with a rate proportional to pulmonary blood flow. Three to 4 respiratory cycles were needed to obtain N2O washout. Absence of pulmonary shunt was defined as arterial O2 saturation >98% (blood sample obtained from the arterial line). In the absence of a pulmonary shunt, pulmonary blood flow=CO. This method has been proven to be closely correlated with thermodilution (R=0.93) and the direct Fick method (R=0.94) [22].\n[SUBTITLE] Statistical analysis [SUBSECTION] Continuous variables were expressed as mean ±standard deviation (SD). Inter-group differences in continuous variables were evaluated using 2-tailed t test for unpaired data; differences between baseline and follow-up were evaluated by 2-tailed t test for paired data. Differences in non-continuous variables were evaluated using non-parametric tests as needed (Wilcoxon, Mann-Whitney). Distribution of categorical variables between groups was evaluated by chi-square with Yates correction. 
Statistical significance was set at p≤0.05. Analyses were performed using SPSS software for Windows, release 7.5, SPSS Inc., Chicago, USA.\nContinuous variables were expressed as mean ±standard deviation (SD). Inter-group differences in continuous variables were evaluated using 2-tailed t test for unpaired data; differences between baseline and follow-up were evaluated by 2-tailed t test for paired data. Differences in non-continuous variables were evaluated using non-parametric tests as needed (Wilcoxon, Mann-Whitney). Distribution of categorical variables between groups was evaluated by chi-square with Yates correction. Statistical significance was set at p≤0.05. Analyses were performed using SPSS software for Windows, release 7.5, SPSS Inc., Chicago, USA.", "This single-centre prospective study, approved by the local ethics committee, included end-stage (stage D AHA/ACC) patients >18 years old, admitted into the Heart Failure Unit from October 2008 to February 2010 with the diagnosis of ADHF (NYHA functional class III to IV). Patients were included when presenting at admission all the following criteria: symptoms of CHF according to the accepted criteria in the literature [15,16], refractory to the usual pharmacological treatments; NYHA class III or IV due to a deterioration of symptoms (at least 1 class) despite optimum oral therapy; echocardiographic evidence of systolic and/or diastolic dysfunction (see below); a cardiac index (CI) ≤2.5 L/min/m2; and clinical fluid overload (≥2 findings of congestion as elevated jugular venous pressure, pulmonary rales, hepatomegaly, orthopnea, paroxysmal nocturnal dyspnoea, abdominal bloating). The exclusion criteria were: childbearing potential; CHF related to restrictive or hypertrophic cardiomyopathy or to uncorrected stenotic valvular disease; concomitant unstable angina or myocardial infarction; systolic blood pressure below 85 mmHg; severe renal failure (clearance creatinine <30 ml/min); administration of inotropes in the last week; and absence of a written consent that authorized the levosimendan infusion. The protocol of levosimendan infusion did not provide a loading dose, being administered intravenously at the dosage of 0.1 μg/kg/min for 24–36 hours in order to complete the dosage of 12.5 mg. The reduction of systolic blood pressure <80 mmHg, tachycardia with heart rate >140/min, and symptomatic hypotension were considered criteria for reducing dosage of levosimendan or suspending it for 30–60 min until the dose-limiting event had resolved. In the control group, an infusion of 100–160 mg per day of furosemide for 48 hours was administered. The dose of concomitant medications was held constant unless urgent modifications were required by the clinical status in both groups. The therapy prescribed in those patients included angiotensin-converting enzyme inhibitors (enalapril, ramipril), angiotensin receptor blockade (candesartan, losartan) in case of enalapril/ramipril intolerance, beta-blockers (metoprolol, bisoprolol or carvedilol), digoxin, loop diuretic and spironolactone at low dose. For beta-blockers, angiotensin-converting enzyme inhibitors and angiotensin receptor blockade, the patients’ maximum tolerated dose was used, after an adequate titration period. Before the infusion of levosimendan/furosemide, after ending (within 6 hours) and at 1-week follow-up all the parameters scheduled were measured. In the levosimendan group, a measurement of non-invasive CO during the infusion (between 12 and 24 hours) was obtained. 
The evaluation of health-related quality of life was obtained at 1-week follow-up using the Minnesota Living with Heart Failure Questionnaire (MLHFQ) in all patients [17].\nThe objective of this prospective study was to evaluate the immediate and short-term effects of a levosimendan infusion (without a loading dose) on plasma BNP, non-invasive CO, echocardiographic parameters, renal function and quality of life in ADHF episodes in end-stage CHF patients, comparing this treatment with a furosemide infusion.", "Echocardiograms were performed with a Vivid 7 computed sonography system (GE Medical Systems, Waukesha, Wisconsin, USA) according to the recommendations of the American Society of Echocardiography [18]. Two-dimensional apical 2- and 4-chamber views were used for volume measurements; LVEF was calculated with a modified Simpson’s method using biplane apical (2- and 4-chamber) views. The LV end-diastolic volume and the LV end-systolic volume were recorded. All echo examinations were performed by expert operators blinded to the results of BNP assay; the intra-observer variability in the evaluation of LVEF was found to be <5%. Echocardiographic measurements including LV end-diastolic diameter, and the diastolic thickness of the ventricular septum and the posterior LV wall were determined according to the American Society of Echocardiography recommendations [18]. Systolic dysfunction was defined as a level of LVEF <50%. The definition of restrictive filling pattern was a predefined modification of classifications used in prior studies (19): E/A ≥2, DT ≤150 msec, S/D ratio <1, and AR >35 cm/sec. All these criteria were verified to define the restrictive filling pattern. The Doppler sample was set 1–2 mm under the free edges of the mitral valve using the apical 4-chamber projection; an average of 5 beats was considered. In patients suffering from atrial fibrillation at the time of the echocardiogram, the diastolic function was classified as: 1) restrictive pattern (DT ≤150 msec), or 2) indeterminate (DT >150 msec). The pulmonary artery pressure (PAP) was obtained by determining the peak velocity of the tricuspid regurgitation jet, adding 5 or 10 mmHg as right atrial pressure according to right atrial size, severity of regurgitation and appearance of the inferior vena cava. From Doppler tissue imaging of the annulus, the E′ wave (early annular velocity opposites in direction to the mitral inflow) was determined and the ratio E/E′ calculated [20]. The right ventricle function was investigated using the M-mode echocardiography obtaining the tricuspid annular plane systolic excursion (TAPSE) [21].", "All blood samples were collected by venipuncture and immediately analysed with the bedside Triage B type natriuretic fluorescence immunoassay (Biosite Diagnostics, La Jolla, CA, USA). The Triage Meter is used to measure BNP concentration by detecting a fluorescent emission that reproduces the amount of BNP in the blood. After the addition of 250 μl of whole blood to the disposable device, cells were filtered and separated from the plasma with BNP, which entered a reaction chamber containing fluorescent BNP antibodies. After 2-min incubation, the BNP-antibody mixture migrated to an area containing immobilised antibodies and remained fixed. The unbound fluorescent antibodies were washed away by the excess sample fluid. The Triage Meter then measured the fluorescent intensity of the BNP assay area. The assay results were complete in 15 minutes. 
The creatinine clearance was calculated using the MDRD formula.", "For the measurement of non-invasive cardiac output (CO), an inert gas rebreathing method (Innocor, Innovison A/S, Odense, Denmark) was used. The system utilised a N2O (blood soluble gas) and SF6 (blood insoluble gas) enriched with O2 of 0.5% and 0.1%, respectively. Tidal volume was progressively increased in the closed circuit to match the physiologic increase. Use the SF6 allowed measuring the volume of lungs, valve and rebreathing bag. N2O concentration decreases during the rebreathing manoeuvre, with a rate proportional to pulmonary blood flow. Three to 4 respiratory cycles were needed to obtain N2O washout. Absence of pulmonary shunt was defined as arterial O2 saturation >98% (blood sample obtained from the arterial line). In the absence of a pulmonary shunt, pulmonary blood flow=CO. This method has been proven to be closely correlated with thermodilution (R=0.93) and the direct Fick method (R=0.94) [22].", "Continuous variables were expressed as mean ±standard deviation (SD). Inter-group differences in continuous variables were evaluated using 2-tailed t test for unpaired data; differences between baseline and follow-up were evaluated by 2-tailed t test for paired data. Differences in non-continuous variables were evaluated using non-parametric tests as needed (Wilcoxon, Mann-Whitney). Distribution of categorical variables between groups was evaluated by chi-square with Yates correction. Statistical significance was set at p≤0.05. Analyses were performed using SPSS software for Windows, release 7.5, SPSS Inc., Chicago, USA.", "Seven patients (6 males; age 66.3±4.4) were admitted 20 times (range 1–6) for ADHF during the study (which lasted 16 months) and treated with a levosimendan infusion; 7 patients (5 males; age 69.1±3.9) did not agree to the levosimendan infusion and were admitted 15 times (range 1–4) for ADHF being treated with furosemide (control group). All subjects signed an informed consent. All subjects of the levosimendan group and 6 (85.7%) in the furosemide group had an implantable cardiac defibrillator with a pace-maker for cardiac resynchronization therapy. All patients of the 2 groups could not be included into a cardiac transplant protocol due to age, comorbidity (severe asthma) or recent history of malignant neoplasm. Two patients were evaluated for the implantation of a ventricular assistant device as a destination therapy and then excluded for severe right ventricular dysfunction. The etiology of the CHF was ischemic in 6/7 patients in the levosimendan group and in 3/7 in the furosemide group. Table 1 summarizes the comparison between the main parameters at baseline in the 2 groups. The effects caused by the levosimendan or furosemide infusion on renal function, blood pressure, echocardiographic parameters, functional status and cardiac output are described in Tables 2 and 3, respectively. In patients treated with levosimendan, a supplementary measurement of CO, cardiac index (CI) and stroke volume (SV) during the infusion (>12 hour) was provided. CO improved from 3.6±0.9 l/min to 4.5±0.7 l/min during infusion (p=0.001), reducing at the end of infusion and at 1-week follow-up (3.8±1.4 l/min and 4.1±1.5 l/min, p=0.5 and p=0.4 respectively). 
Similarly, CI and SV ameliorated during infusion (from 1.8±0.4 l/min/m2 to 2.3±0.3 l/min/m2 p=0.001; from 50.3±11.3 ml to 58.4±10 ml p=0.01) maintaining their improvement after the end of infusion (1.9±0.7 l/min/m2 p=0.4; 56.6±24.3 ml p=0.3) and at 1-week follow-up (2.1±0.7 l/min/m2 p=0.3; 56.6±24.2 ml p=0.3) (Figure 1). The analysis of the MLHFQ revealed a better health-related quality of life 1 week after levosimendan infusion (27.2±15.3 vs. 40.9±17.5, p=0.05).\nAt 12-lead electrocardiogram, 2 (28.6%) subjects in the levosimendan group and 4 (57.2%) in furosemide the group were in permanent atrial fibrillation.\nThe study lasted 16 months; during this period appropriate AICD shocks of ventricular tachycardia or ventricular fibrillation after the levosimendan infusion were registered in 2 patients (28.6%) (1 patient died a non-cardiac death for sepsis after admission in Intensive Care Unit). Two patients (28.6%) in the control group died of cardiac death due to worsening of CHF and multi-organ failure, and in 1 subject an appropriate AICD shock was registered.", "This non-randomized single-centre study analysed the feasibility and effects of levosimendan infusion in end-stage CHF admitted for repetitive ADHF. The severity of clinical status of this population was underlined by the unfavourable prognosis (mortality rate 21.4% in 16 months) and the high rate of readmission/year. Treating end-stage CHF patients, new therapeutic approaches should be proven in terms of safety, efficacy in improving functional status, quality of life, and reduction of hospital readmission. Long-term administration of dobutamine or PDE inhibitors failed to demonstrate significant clinical benefits and the meta-analysis of Rapezzi et al. [23], based on 21 randomized trials, proved that the continuous administration of β-adrenergic agonists or PDE inhibitors increased mortality. The repetitive administration of levosimendan in advanced CHF patients has been reported to improve symptoms and left ventricular systolic function [14], reduce NT-proBNP and immune activation (Interleukin-6 and C-reactive protein) [13], and increase 45-day survival compared to dobutamine [12].\nIn our experience the levosimendan infusion during an ADHF improved the NYHA class and the 6MWT, maintaining these effects at 1-week follow-up. In the furosemide group, an amelioration of NYHA class but not of 6MWT was observed. Moreover, CHF patients treated with levosimendan showed a better quality of life at MLHFQ at 1-week follow-up (p=0.05). The MLHFQ has been recently evaluated as being the most correlated with NYHA class, 6MWT and functional status in CHF patients [24], exploring both the physical domain and emotional/psychological aspects of quality of life.\nPlasma BNP was significantly reduced at the end of the levosimendan infusion (p=0.03), but returned to baseline after 1 week (p=0.08), while the reduction of volume overload with furosemide significantly decreased the BNP immediately and at follow-up (p=0.01). Farmakis et al. [25] found BNP was reduced only in CHF patients treated with levosimendan compared with furosemide, and that an obtained reduction of neurohormon >58% predicted a better 6-month prognosis. 
In contrast to that report [25], our echocardiographic results did not point out significant differences in LVEF, left ventricular volumes, diastolic function, pulmonary pressure and right ventricular performance (TAPSE) at 1-week follow-up after the levosimendan administration.\nOur experience confirmed the robust results obtained by Nieminen et al. [6] that described the favourable effects of levosimendan infusion on CO and SV at the dosage of 0.1 μg/kg/min using a Swan-Ganz catheter. Furthermore, Lilleberg et al. [26] demonstrated in 11 CHF patients that the positive inotropic effect reached the maximal effect in reducing pulmonary wedge pressure after 6 hours and increasing CO after 24 hours of levosimendan infusion. Nevertheless, these positive effects, estimated to last more than 1 week, were obtained with echocardiographic measurements. Using a validated non-invasive method, we documented in our patients treated with levosimendan an improvement in CO and SV during the infusion, losing this effect after the end of infusion and at 1-week follow-up. These results generate 2 main considerations: a) the long half-life of the active levosimendan metabolite (OR-1896) (80–90 hours) [27], that should have sustained the hemodynamic effect of the drug and might have justified the intermittent/repetitive levosimendan administration, needs to be investigated extensively; and b) if the functional capacity of our patients improved at short-term follow-up irrespective of cardiac function, an effect on skeletal muscles might be involved. In fact, the intermittent infusion of inotropic drugs partially reversed the impairment of peripheral muscle circulation in 30 end-stage CHF patients [28], and levosimendan, as a calcium-sensitizer, is considered an emerging class of agents to enhance the quality of life of patients suffering from skeletal muscle disorders [29].\nFinally, the clinical improvement of CHF patients treated with levosimendan was obtained without an impairment of renal function (Tables 2, 3), in contrast to the results of the furosemide group. Serum creatinine and urea nitrogen were strong and independent prognostic parameters in CHF patients [30,31]. In the LIDO trial [8], the levosimendan infusion improved renal function over 24 hours (mean change in creatinine concentration −0.10 mg/dl), while dobutamine did not (p=0.03). In the experience of Zemljic et al. [32], based on 20 CHF patients awaiting cardiac transplantation, a single levosimendan administration determined a significant improvement of plasma creatinine and creatinine clearance (p=0.005) at 3-month follow-up. The renoprotective effect of the drug might be related to an increase of renal medullar blood flow in spite of a reduction of the cortical flow [33], or to a change of inflammatory status (reduction of interleukin-6) [13], such that this favourable effect may be considered as a resource for anti-inflammatory therapy. In the control group, an increase of plasma creatinine was observed, not at the end of treatment, but at 1-week follow-up (p<0.01) – that could be explained by the tendency of reduced fluid overload to create a risk of subclinical dehydration.", "Treating ADHF in end-stage CHF patients with clinical hyperhydration using levosimendan improved BNP and LVEF, but this effect disappeared after 1 week. The amelioration of 6MWT and NYHA class lasted longer after levosimendan infusion, without causing an impairment of renal function. 
Patients treated with levosimendan manifested a better quality of life than those receiving furosemide infusion.\n[SUBTITLE] Limitations of the study [SUBSECTION] The major limitation of this study is the absence of randomization. Moreover, the small number of patients involved did not permit any analysis of mortality or arrhythmic risk correlated to levosimendan infusion – these questions need to be addressed by a larger, randomized trial. Nevertheless, our limited experience suggests caution in using an infusion pump for levosimendan infusion in out-patients with chronic, refractory CHF [34], possibly limiting such therapy to subjects protected by an AICD. In fact, in our experience a ventricular arrhythmic disorder occurred in 10% of levosimendan treatments, while in the LIDO trial it was recorded in only 4% of patients treated [8].", "The major limitation of this study is the absence of randomization. Moreover, the small number of patients involved did not permit any analysis of mortality or arrhythmic risk correlated to levosimendan infusion – these questions need to be addressed by a larger, randomized trial. Nevertheless, our limited experience suggests caution in using an infusion pump for levosimendan infusion in out-patients with chronic, refractory CHF [34], possibly limiting such therapy to subjects protected by an AICD. In fact, in our experience a ventricular arrhythmic disorder occurred in 10% of levosimendan treatments, while in the LIDO trial it was recorded in only 4% of patients treated [8]." ]
[ null, "materials|methods", "subjects", null, null, null, "methods", "results", "discussion", "conclusions", "methods" ]
[ "brain natriuretic peptide", "end-stage heart failure", "levosimendan" ]
Mechanisms of lymphatic regeneration after tissue transfer.
21359148
Lymphedema is the chronic swelling of an extremity that occurs commonly after lymph node resection for cancer treatment. Recent studies have demonstrated that transfer of healthy tissues can be used as a means of bypassing damaged lymphatics and ameliorating lymphedema. The purpose of these studies was to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer.
INTRODUCTION
Nude mice (recipients) underwent 2-mm tail skin excisions that were either left open or repaired with full-thickness skin grafts harvested from donor transgenic mice that expressed green fluorescent protein in all tissues or from LYVE-1 knockout mice. Lymphatic regeneration, expression of VEGF-C, macrophage infiltration, and potential for skin grafting to bypass damaged lymphatics were assessed.
METHODS
Skin grafts healed rapidly and restored lymphatic flow. Lymphatic regeneration occurred beginning at the peripheral edges of the graft, primarily from ingrowth of new lymphatic vessels originating from the recipient mouse. In addition, donor lymphatic vessels appeared to spontaneously re-anastomose with recipient vessels. Patterns of VEGF-C expression and macrophage infiltration were temporally and spatially associated with lymphatic regeneration. When compared to mice treated with excision only, there was a 4-fold decrease in tail volumes, 2.5-fold increase in lymphatic transport by lymphoscintigraphy, 40% decrease in dermal thickness, and 54% decrease in scar index in skin-grafted animals, indicating that tissue transfer could bypass damaged lymphatics and promote rapid lymphatic regeneration.
RESULTS
Our studies suggest that lymphatic regeneration after tissue transfer occurs by ingrowth of lymphatic vessels and spontaneous re-connection of existing lymphatics. This process is temporally and spatially associated with VEGF-C expression and macrophage infiltration. Finally, tissue transfer can be used to bypass damaged lymphatics and promote rapid lymphatic regeneration.
CONCLUSIONS
[ "Animals", "Dermis", "Female", "Glycoproteins", "Lymphatic System", "Lymphatic Vessels", "Lymphedema", "Lymphography", "Membrane Transport Proteins", "Mice", "Mice, Knockout", "Mice, Nude", "Micromanipulation", "Organ Size", "Regeneration", "Skin Transplantation", "Tail" ]
3040774
null
null
Methods
[SUBTITLE] Mouse tail excision model, surgical preparation, and full-thickness skin grafting [SUBSECTION] In order to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer, we excised a 2-mm wide, full-thickness area of skin in 10–12 week old female athymic nude mice (nu/nu; NCI-Frederick, Frederick, MD). In addition, without injuring the tail blood supply, the deep lymphatic system was visualized and ligated using a surgical microscope (Leica, StereoZoom SZ-4, Wetzlar, Germany). The defects were then either left open or immediately repaired with full-thickness skin grafts harvested from 10–12 week-old female transgenic mice that express green fluorescent protein (GFP, C57BL/6-Tg(CAG-EGFP)1Osb/J; Jackson Labs, Bar Harbor, ME) or LYVE-1 knockout (B6.129S1-Lyve1tm1Lhua/J; Jackson Labs, Bar Harbor, ME) mice. The skin graft was secured with 9-0 nylon simple interrupted sutures, and the surgical wound was dressed with Tegaderm (3M, St. Paul, MN) adhered proximally and distally to minimize manipulation of the site by the animal post-operatively. Six to eight animals were used per experimental group. All animal procedures were approved by the Resource Animal Research Center Institutional Animal Care and Use Committee (IACUC; protocol #06-08-018) at Memorial Sloan-Kettering Cancer Center, New York, NY.
[SUBTITLE] Specimen preparation and histology [SUBSECTION] Ten-millimeter tail sections centered on the repair site were harvested 2 or 6 weeks after surgery and fixed in 4% paraformaldehyde overnight at 4°C. Specimens were then decalcified in Immunocal (Decal Chemical Corp., Tallman, N.Y.), embedded in paraffin, and sectioned longitudinally (5 µm). Immunohistochemical and immunofluorescent staining were performed as previously described.[16] Lymphatic vessels were identified using antibodies against podoplanin (Abcam, Cambridge, MA) and LYVE-1 (R&D Systems, Minneapolis, MN). Macrophages were identified using antibody against F4/80 (Abcam, Cambridge, MA), and VEGF-C expression was identified using antibody against VEGF-C (Abcam, Cambridge, MA). Immunofluorescent secondary antibodies used were Alexa Fluor 488, 594, and 647 (Invitrogen Molecular Probes, Carlsbad, CA). Three-dimensional reconstructions of 35–45 confocal z-stack tissue sections (40x) were created using the Imaris Software (Bitplane Corp, Zurich, Switzerland). For immunohistochemical staining, secondary antibody from the Vectastain ABC Kit (Vector Laboratories, Burlingame, CA) was used and developed using diaminobenzidine. Negative control sections were incubated with secondary antibody only. Images were obtained using bright-field microscopy (Leica TCS) for immunohistochemistry and confocal microscopy (Leica) for immunofluorescence. Cell and vessel counts were performed in 3–5 high power fields per section (n = 6–8/time point) by two blinded reviewers.
[SUBTITLE] Tail volume calculation [SUBSECTION] To evaluate the degree of postoperative acute lymphedema with and without skin graft, tail volumes were measured 1, 2, 4, and 6 weeks after surgery in both groups (n = 6–8 animals/group/time point) by blinded reviewers. Tail circumference was measured at 10-mm intervals starting at the distal margin of the tail excision using a digital caliper, and tail volumes were then calculated using the truncated cone formula: V = ¼π (C1C2 + C2C3 + … + C7C8) from our previous methods.[16]
[SUBTITLE] Microlymphangiography [SUBSECTION] Microlymphangiography was performed prior to sacrifice to evaluate the gross structure of the capillary lymphatics as previously described.[17] Briefly, 15 µL of 2000-kDa dextran solution (10 mg/mL) conjugated to fluorescein isothiocyanate (FITC) was injected approximately 10 mm proximal to the tip of the mouse tail under constant pressure. This molecule is too large to enter blood vessels, but can enter lymphatics, which are designed to transport large molecules in interstitial fluid. Capillary lymphatics were then visualized in the tail using the Leica MZFL3 Stereoscope (Wetzlar, Germany). Fluorescent images were obtained at consistent magnification using Volocity software (PerkinElmer, Waltham, MA).
[SUBTITLE] Lymphoscintigraphy [SUBSECTION] In an effort to quantify lymphatic transport after surgery with or without skin graft, technetium-99m sulfur colloid (100-nm particle size; 400-800 µCi in ∼50 µl) was injected intradermally approximately 20 mm from the tip of the mouse tail as previously described.[16] Dynamic planar gamma camera images were acquired for 2.5 hours after injection using an X-SPECT camera (Gamma Medica, Northridge, CA), and region-of-interest analysis was performed to derive the decay-adjusted activity using ASIPro software (CTI Molecular Imaging, Knoxville, TN).
[SUBTITLE] Dermal thickness calculation [SUBSECTION] Dermal thickness calculation was performed as previously described.[18] Briefly, longitudinal tail sections were stained with hematoxylin and eosin and visualized using the Mirax Slide Scanner (Carl Zeiss Microimaging, Munich, Germany). A distance of 4 mm was measured distal to the wound edge in standardized longitudinal histological sections and the dermal thickness was measured from the dermal/epidermal junction to the deep muscles (2–4 measurements/animal/group; n = 6–8 animals/group).
[SUBTITLE] Scar index calculation [SUBSECTION] Sirius red (Direct Red 80; Sigma, St. Louis, MO) staining was performed to evaluate the degree of fibrosis in the specimens as previously described.[16], [19] Birefringence pattern and hue were evaluated using polarized light microscopy to determine collagen density, pattern of deposition, and maturity. This is based on the fact that normal tissues display fine collagen bundles with a yellow-green birefringence in a random pattern while soft tissue fibrosis results in formation of thicker, parallel collagen bundles with an orange-red birefringence.[20] Scar index is a quantitative analysis of fibrosis calculated by comparing the ratio of orange-red to yellow-green staining using Metamorph Offline software (Molecular Devices Corporation, Sunnyvale, CA) with a minimum of 4–6 sections per animal (n = 6–8 animals/group).
[SUBTITLE] Statistical Analysis [SUBSECTION] Statistical analysis was performed using GraphPad Prism software for Windows (GraphPad Software, Inc., San Diego, CA). Comparative analysis between multiple groups was performed using the Kruskal-Wallis test with post hoc tests to compare different groups. Differences between 2 groups were evaluated using the Student's t-test. Mean values and standard deviations are presented with p<0.05 considered significant unless otherwise noted.
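The tail-volume, decay-adjusted activity, and scar-index calculations quoted in this Methods field reduce to a few lines of arithmetic, and a short sketch may help make the formulas concrete. The snippet below is illustrative only and is not the authors' code: the function names and example numbers are invented, the 10-mm segment length is written out explicitly, the ¼π factor is read as 1/(4π) (each segment treated as a truncated cone whose cross-section is the geometric mean of its two end areas), and the standard 6.01-hour physical half-life of Tc-99m is assumed for the decay correction, since the paper does not state one.

import math

# Standard physical half-life of Tc-99m in hours (textbook value, not taken from the paper).
TC99M_HALF_LIFE_H = 6.01

def tail_volume_mm3(circumferences_mm, segment_length_mm=10.0):
    """Tail volume from circumferences measured at fixed 10-mm intervals.

    Reproduces the quoted truncated-cone formula
    V = 1/(4*pi) * (C1*C2 + C2*C3 + ... + C7*C8),
    with the segment length made explicit (assumption for this sketch).
    Inputs in mm, result in mm^3.
    """
    products = (c1 * c2 for c1, c2 in zip(circumferences_mm, circumferences_mm[1:]))
    return segment_length_mm / (4.0 * math.pi) * sum(products)

def decay_adjusted_activity(measured_counts, minutes_post_injection, half_life_h=TC99M_HALF_LIFE_H):
    """Correct a region-of-interest count for physical decay back to injection time."""
    return measured_counts * 2.0 ** (minutes_post_injection / 60.0 / half_life_h)

def scar_index(orange_red_area, yellow_green_area):
    """Ratio of orange-red (fibrotic) to yellow-green (normal) birefringent collagen."""
    return orange_red_area / yellow_green_area

# Hypothetical example: eight circumferences (mm, C1..C8) and one gamma-camera ROI reading.
circumferences = [11.0, 10.7, 10.4, 10.1, 9.8, 9.5, 9.2, 9.0]
print(round(tail_volume_mm3(circumferences), 1), "mm^3")
print(round(decay_adjusted_activity(1.5e5, minutes_post_injection=150)), "counts at injection time")
print(round(scar_index(1.39, 1.0), 2))

In the study itself these steps were carried out with the authors' calipers, ASIPro and Metamorph procedures; the sketch only restates the quoted formulas in executable form.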
null
null
null
null
[ "Introduction", "Mouse tail excision model, surgical preparation, and full-thickness skin grafting", "Specimen preparation and histology", "Tail volume calculation", "Microlymphangiography", "Lymphoscintigraphy", "Dermal thickness calculation", "Scar index calculation", "Statistical Analysis", "Results", "Autogenous tissue lymphangiogenesis is associated with spontaneous reconnection of local lymphatics and infiltration of new lymphatic capillaries", "Lymphatic regeneration after tissue transfer is associated with infiltration of LYVE-1 positive cells", "Lymphatic regeneration after tissue transfer is associated with expression of VEGF-C and infiltration of macrophages", "Spontaneous regeneration of lymphatics after tissue transfer can be used to bypass damaged lymphatics", "Discussion" ]
[ "Lymphedema is the chronic swelling of an extremity that occurs when the capacity of the lymphatic system to drain protein-rich interstitial fluid is exceeded.[1] In the United States and Western countries this condition occurs most commonly after lymph node resection for the treatment of breast and gynecological cancers.[2], [3], [4] It is estimated that as many as 50% of patients treated with lymph node dissection go on to develop lymphedema, even after more limited surgeries such as sentinel lymph node biopsy.[5], [6], [7] Patients with this complication have a significantly decreased quality of life with frequent infections, decreased range of motion, and a cosmetic deformity that is difficult to conceal.[8] Furthermore, lymphedema results in a nearly $10,000 increase in the two-year treatment cost of breast cancer patients.[9] With over 200,000 new breast cancers diagnosed annually, it is clear that lymphedema is a significant biomedical burden.\nTreatment for lymphedema has largely been symptomatic in nature and designed to prevent progression of swelling. Patients are treated with compressive stockings and manual massage to decrease fluid accumulation and encourage drainage of interstitial fluid.[10] These treatments can decrease swelling and discomfort but are time consuming and do not reverse the basic pathology of lymphedema. More importantly, once compression and manual drainage are stopped, lymphedema recurs and in most cases worsens over time.\nSome surgeons have attempted to bypass damaged lymphatic channels by interposition of healthy tissues. In fact, a few clinical case studies have shown remarkable improvements in limb edema after tissue transfer with evidence of lymphatic re-routing.[11], [12] Clinical evidence of spontaneous lymphatic regeneration is also noted after microsurgical tissue transfer.[13] In these procedures, composite tissues containing skin and fat are transferred from areas with excess tissues (e.g. abdomen) to replace damaged or surgically resected tissues by reconnecting the arterial and venous vessels using microsurgery. Although the lymphatic vessels are not re-anastomosed, tissue edema spontaneously resolves over a period of 6–8 weeks implying that lymphatic regeneration has occurred. More recently, Tammela et al. demonstrated that lymph node transfer combined with VEGF-C administration after lymphadenectomy in mice can promote reconstitution of the deep lymphatic system.[14] Similarly, Kerjaschki et al. have demonstrated that the majority of lymphatics (>95%) present in chronically rejected kidney transplants originated from the donor.[15] Thus, transfer of healthy tissues may be a means of replacing or re-routing damaged lymphatic vessels to treat or prevent lymphedema. The purpose of these experiments was to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer.", "In order to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer, we excised a 2-mm wide, full-thickness area of skin in 10–12 week old female athymic nude mice (nu/nu; NCI-Frederick, Frederick, ME). In addition, without injuring the tail blood supply, the deep lymphatic system was visualized and ligated using a surgical microscope (Leica, StereoZoom SZ-4, Wetzlar, Germany). 
The defects were then either left open or immediately repaired with full-thickness skin grafts harvested from 10–12 week-old female transgenic mice that express green fluorescent protein (GFP, C57BL/6-Tg(CAG-EGFP)1Osb/J; Jackson Labs, Bar Harbor, ME) or LYVE-1 knockout (B6.129S1-Lyve1tm1Lhua/J; Jackson Labs, Bar Harbor, ME) mice. The skin graft was secured with 9-0 nylon simple interrupted sutures, and the surgical wound was dressed with Tegaderm (3M, St. Paul, MN) adhered proximally and distally to minimize manipulation of the site by the animal post-operatively. Six to eight animals were used per experimental group. All animal procedures were approved by the Resource Animal Research Center Institutional Animal Care and Use Committee (IACUC; protocol #06-08-018) at Memorial Sloan-Kettering Cancer Center, New York, NY.", "Ten-millimeter tail sections centered on the repair site were harvested 2 or 6 weeks after surgery and fixed in 4% paraformaldehyde overnight at 4°C. Specimens were then decalcified in Immunocal (Decal Chemical Corp., Tallman, N.Y.), embedded in paraffin, and sectioned longitudinally (5 µm). Immunohistochemical and immunofluorescent staining were performed as previously described.[16] Lymphatic vessels were identified using antibodies against podoplanin (Abcam, Cambridge, MA) and LYVE-1 (R&D Systems, Minneapolis, MN). Macrophages were identified using antibody against F4/80 (Abcam, Cambridge, MA), and VEGF-C expression was identified using antibody against VEGF-C (Abcam, Cambridge, MA). Immunofluorescent secondary antibodies used were Alexa Fluor 488, 594, and 647 (Invitrogen Molecular Probes, Carlsbad, CA). Three-dimensional reconstructions of 35–45 confocal z-stack tissue sections (40x) were created using the Imaris Software (Bitplane Corp, Zurich, Switzerland). For immunohistochemical staining, secondary antibody from the Vectastain ABC Kit (Vector Laboratories, Burlingame, CA) was used and developed using diaminobenzidine. Negative control sections were incubated with secondary antibody only. Images were obtained using bright-field microscopy (Leica TCS) for immunohistochemistry and confocal microscopy (Leica) for immunofluorescence. Cell and vessel counts were performed in 3-5 high power fields per section (n = 6–8/time point) by two blinded reviewers.", "To evaluate the degree of postoperative acute lymphedema with and without skin graft, tail volumes were measured 1, 2, 4, and 6 weeks after surgery in both groups (n = 6–8 animals/group/time point) by blinded reviewers. Tail circumference was measured at 10-mm intervals starting at the distal margin of the tail excision using a digital caliper, and tail volumes were then calculated using the truncated cone formula: V =  ¼π (C1C2 + C2C3 + … +C7C8) from our previous methods.[16]\n", "Microlymphangiography was performed prior to sacrifice to evaluate the gross structure of the capillary lymphatics as previously described.[17] Briefly, 15 µL of 2000-kDa dextran solution (10 mg/mL) conjugated to fluorescein isothiocyanate (FITC) was injected approximately 10 mm proximal to the tip of the mouse tail under constant pressure. This molecule is too large to enter blood vessels, but can enter lymphatics which are designed to transport large molecules in interstitial fluid. Capillary lymphatics were then visualized in the tail using the Leica MZFL3 Stereoscope (Wetzlar, Germany). 
Fluorescent images were obtained at consistent magnification using Volocity software (PerkinElmer, Waltham, MA).", "In an effort to quantify lymphatic transport after surgery with or without skin graft, technetium-99 m sulfur colloid (100-nm particle size; 400-800 µCi in ∼50 µl) was injected intradermally approximately 20 mm from the tip of the mouse tail as previously described.[16] Dynamic planar gamma camera images were acquired for 2.5 hours after injection using an X-SPECT camera (Gamma Medica, Northridge, CA), and region-of-interest analysis was performed to derive the decay-adjusted activity using ASIPro software (CTI Molecular Imaging, Knoxville, TN).", "Dermal thickness calculation was performed as previously described.[18] Briefly, longitudinal tail sections were stained with hematoxylin and eosin and visualized using the Mirax Slide Scanner (Carl Zeiss Microimaging, Munich, Germany). A distance of 4 mm was measured distal to the wound edge in standardized longitudinal histological sections and the dermal thickness was measured from the dermal/epidermal junction to the deep muscles (2–4 measurements/animal/group; n = 6–8 animals/group).", "Sirius red (Direct Red 80; Sigma, St. Louis, MO) staining was performed to evaluate the degree of fibrosis in the specimens as previously described.[16], [19] Birefringence pattern and hue were evaluated using polarized light microscopy to determine collagen density, pattern of deposition, and maturity. This is based on the fact that normal tissues display fine collagen bundles with a yellow-green birefringence in a random pattern while soft tissue fibrosis results in formation of thicker, parallel collagen bundles with a orange-red birefringence.[20] Scar index is a quantitative analysis of fibrosis calculated by comparing the ratio of orange-red to yellow-green staining using Metamorph Offline software (Molecular Devices Corporation, Sunnyvale, CA) with a minimum of 4–6 sections per animal (n = 6–8 animals/group).", "Statistical analysis was performed using GraphPad Prism software for Windows (GraphPad Software, Inc., San Diego, CA). Comparative analysis between multiple groups was performed using the Kruskal-Wallace test with post hoc tests to compare different groups. Differences between 2 groups were evaluated using the Student's t-test. Mean values and standard deviations are presented with p<0.05 considered significant unless otherwise noted.", "[SUBTITLE] Autogenous tissue lymphangiogenesis is associated with spontaneous reconnection of local lymphatics and infiltration of new lymphatic capillaries [SUBSECTION] In order to investigate the involvement of recipient and donor cells in spontaneous lymphatic regeneration, the tails of athymic nude mice that underwent full-thickness skin excision were repaired with skin grafts harvested from GFP mice. This allowed us to follow the origin of cells by co-localization of GFP and lymphatic markers. Using this technique, the recipient nude mice lymphatics stained positive for lymphatic markers but negative for GFP. The transplanted tissue lymphatics, in contrast, stained positive for both GFP and lymphatic markers (\nFigure 1\n). Using fluorescent imaging, we found that high-level GFP expression was maintained in the transplanted skin even 6 weeks after tissue transfer (\nFigure 1A\n). The transplanted skin grafts healed quickly and with regrowth of hair and skin appendages by 6 weeks indicating that the grafts had become fully incorporated (\nFigure 1B\n). 
In addition, return of lymphatic function with transport of fluorescence-labeled colloid could be grossly visualized by microlymphangiography as early as 2 weeks after surgery (\nFigure 1C\n). This transport was initially driven by interstitial flow (i.e. no discrete lymphatic vessels could be visualized at 2 weeks); however, by 6 weeks honeycomb-like dermal lymphatics were noted in the transferred grafts (primarily at the distal margin) suggesting that the reformed lymphatic vessels are functional (\nFigure 1D\n). The lack of complete restoration of the honeycomb-like dermal lymphatic architecture in the mid portion of the graft at this time may reflect a delay in dermal lymphatic regeneration. This concept is supported by previous studies demonstrating that honeycomb-like lymphatic vessels that regenerate in the mouse tail after coverage of the wound with collagen gel are not ordinarily visible until day 60 after surgery and even at this time are only seen at the distal margin of the wound.[21], [22]\n\n\nA. Skin grafts harvested from GFP transgenic mice express GFP at high levels and for a long period of time. A representative fluorescent picture of a mouse (top) with higher magnification of the tail (bottom) is shown 6 weeks after surgery. The skin-grafted GFP portion of the tail can easily be seen. B. Gross photographs of mouse tails treated with skin grafts harvested from GFP transgenic mice. Note the rapid incorporation and ingrowth of hair follicles at the 6-week time point. C. Microlymphangiography performed 2 (left) or 6 (right) weeks following skin graft. Distal portion of the tail is shown at the bottom. Note the flow of fluorescent dye by interstitial flow after 2 weeks. In contrast, lymphatic flow can be seen in the skin-grafted area at the 6-week time point. Representative figures of triplicate experiments are shown. D. Higher power photograph of microlymphangiography 2 and 6 weeks after surgery demonstrating a few honeycomb-like dermal lymphatics (white arrows) in the skin graft at the 6-week time point. E. Representative LYVE-1 (pink) and GFP (green) co-localization in skin-grafted mouse tails 6 weeks after surgery. Low power (5x; left) and high power (20x; right) views are shown. Note connection between GFP- (recipient) and GFP+ (donor) lymphatic vessel. F. Representative photomicrograph (2x) demonstrating GFP (left), LYVE-1 (middle), and co-localization (right) of skin-grafted mouse tails 6 weeks after surgery. Note ingrowth of GFP-/LYVE-1+ vessels (yellow circle) from the distal (yellow dotted line) portion of the wound. G. High power (20x) view of section shown in F. Note the presence of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatics at the distal margin of the wound.\nConfocal co-localization of GFP and LYVE-1 in sections obtained 6 weeks after surgery demonstrated the presence of numerous lymphatic vessels that were comprised of a portion of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatic endothelial cells (\nFigure 1E\n). We also visualized thin lymphatic bridges connecting donor and recipient lymphatics suggesting that these vessels were spontaneously re-connecting at the edge of the wound. These connections were not frequently noted, however, due to the fact that the sections were performed in the longitudinal plane (i.e. necessitating sectioning at an extremely small/precise plane). 
In addition to these lymphatics, numerous recipient lymphatic vessels infiltrating the skin graft were noted primarily at the distal edge of the wound, suggesting that lymphatic regeneration of transplanted skin occurs via both spontaneous reconnection and ingrowth of new capillary lymphatic channels (\nFigures 1F–G\n). Interestingly, at the early time point, qualitatively, the majority of vessels present were GFP+/LYVE-1+; however, by 6 weeks these vessels represented a relatively small percentage of the total number of lymphatic vessels (\nFigure 1F–G\n). Precise quantification of donor and recipient vessels was difficult to perform with GFP skin grafts, however, due to the fact that GFP is expressed in all cells and not limited solely to the lymphatics. Due to this limitation, we performed identical operations using skin grafts obtained from LYVE-1 knockout mice.\nIn order to investigate the involvement of recipient and donor cells in spontaneous lymphatic regeneration, the tails of athymic nude mice that underwent full-thickness skin excision were repaired with skin grafts harvested from GFP mice. This allowed us to follow the origin of cells by co-localization of GFP and lymphatic markers. Using this technique, the recipient nude mice lymphatics stained positive for lymphatic markers but negative for GFP. The transplanted tissue lymphatics, in contrast, stained positive for both GFP and lymphatic markers (\nFigure 1\n). Using fluorescent imaging, we found that high-level GFP expression was maintained in the transplanted skin even 6 weeks after tissue transfer (\nFigure 1A\n). The transplanted skin grafts healed quickly and with regrowth of hair and skin appendages by 6 weeks indicating that the grafts had become fully incorporated (\nFigure 1B\n). In addition, return of lymphatic function with transport of fluorescence-labeled colloid could be grossly visualized by microlymphangiography as early as 2 weeks after surgery (\nFigure 1C\n). This transport was initially driven by interstitial flow (i.e. no discrete lymphatic vessels could be visualized at 2 weeks); however, by 6 weeks honeycomb-like dermal lymphatics were noted in the transferred grafts (primarily at the distal margin) suggesting that the reformed lymphatic vessels are functional (\nFigure 1D\n). The lack of complete restoration of the honeycomb-like dermal lymphatic architecture in the mid portion of the graft at this time may reflect a delay in dermal lymphatic regeneration. This concept is supported by previous studies demonstrating that honeycomb-like lymphatic vessels that regenerate in the mouse tail after coverage of the wound with collagen gel are not ordinarily visible until day 60 after surgery and even at this time are only seen at the distal margin of the wound.[21], [22]\n\n\nA. Skin grafts harvested from GFP transgenic mice express GFP at high levels and for a long period of time. A representative fluorescent picture of a mouse (top) with higher magnification of the tail (bottom) is shown 6 weeks after surgery. The skin-grafted GFP portion of the tail can easily be seen. B. Gross photographs of mouse tails treated with skin grafts harvested from GFP transgenic mice. Note the rapid incorporation and ingrowth of hair follicles at the 6-week time point. C. Microlymphangiography performed 2 (left) or 6 (right) weeks following skin graft. Distal portion of the tail is shown at the bottom. Note the flow of fluorescent dye by interstitial flow after 2 weeks. 
In contrast, lymphatic flow can be seen in the skin-grafted area at the 6-week time point. Representative figures of triplicate experiments are shown. D. Higher power photograph of microlymphangiography 2 and 6 weeks after surgery demonstrating a few honeycomb-like dermal lymphatics (white arrows) in the skin graft at the 6-week time point. E. Representative LYVE-1 (pink) and GFP (green) co-localization in skin-grafted mouse tails 6 weeks after surgery. Low power (5x; left) and high power (20x; right) views are shown. Note connection between GFP- (recipient) and GFP+ (donor) lymphatic vessel. F. Representative photomicrograph (2x) demonstrating GFP (left), LYVE-1 (middle), and co-localization (right) of skin-grafted mouse tails 6 weeks after surgery. Note ingrowth of GFP-/LYVE-1+ vessels (yellow circle) from the distal (yellow dotted line) portion of the wound. G. High power (20x) view of section shown in F. Note the presence of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatics at the distal margin of the wound.\nConfocal co-localization of GFP and LYVE-1 in sections obtained 6 weeks after surgery demonstrated the presence of numerous lymphatic vessels that were comprised of a portion of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatic endothelial cells (\nFigure 1E\n). We also visualized thin lymphatic bridges connecting donor and recipient lymphatics suggesting that these vessels were spontaneously re-connecting at the edge of the wound. These connections were not frequently noted, however, due to the fact that the sections were performed in the longitudinal plane (i.e. necessitating sectioning at an extremely small/precise plane). In addition to these lymphatics, numerous recipient lymphatic vessels infiltrating the skin graft were noted primarily at the distal edge of the wound, suggesting that lymphatic regeneration of transplanted skin occurs via both spontaneous reconnection and ingrowth of new capillary lymphatic channels (\nFigures 1F–G\n). Interestingly, at the early time point, qualitatively, the majority of vessels present were GFP+/LYVE-1+; however, by 6 weeks these vessels represented a relatively small percentage of the total number of lymphatic vessels (\nFigure 1F–G\n). Precise quantification of donor and recipient vessels was difficult to perform with GFP skin grafts, however, due to the fact that GFP is expressed in all cells and not limited solely to the lymphatics. Due to this limitation, we performed identical operations using skin grafts obtained from LYVE-1 knockout mice.\n[SUBTITLE] Lymphatic regeneration after tissue transfer is associated with infiltration of LYVE-1 positive cells [SUBSECTION] To further investigate the role of donor and recipient cells in lymphatic regeneration, athymic nude mice underwent full-thickness skin grafting with tissues harvested from LYVE-1 knockout mice. 
This approach again enabled us to trace the origin of the lymphatic vessels and differentiate between donor and recipient vessels more easily than GFP skin grafts since donor tissues lacked LYVE-1 expression (a molecule that is relatively specific to lymphatic vessels) but expressed podoplanin (podoplanin+/LYVE-1-), whereas recipient vessels expressed both molecules (podoplanin+/LYVE-1+).\nTo investigate the relative contribution of donor and recipient lymphatic vessels to lymphatic regeneration in the transplanted skin, immunofluorescence staining with the lymphatic endothelial cell markers LYVE-1 and podoplanin was performed and the number of donor- and recipient-derived lymphatic vessels were evaluated at various regions of the skin graft (distal, middle, and proximal portions) 2 or 6 weeks following surgery (\nFigures 2A–B\n). This analysis, similar to our GFP/LYVE-1 co-localization, demonstrated connections between LYVE-1+ and LYVE-1- lymphatic vessels suggesting that lymphatic vessels from the recipient and donor tissues spontaneously reconnect. This phenomenon is seen in 3-D reconstructions of z-stacked confocal images (\nFigure 2B\n).\n\n\nA. Co-localization of LYVE-1 (left panel) and podoplanin (middle panel) of mouse tail sections harvested 6 weeks after surgery (40x). Overlay of images demonstrates connection of LYVE-1+ (i.e. recipient) and LYVE-1- (i.e. donor) derived podoplanin-stained lymphatics. Yellow line marks the junction of the skin graft and native tail skin. Distal junction of the skin graft and mouse tail is shown to the left. B. Three-dimensional rendering of podoplanin/LYVE-1 co-localization at the junction of the skin graft and native tail skin (yellow line). Note connection between LYVE-1+ and LYVE-1- lymphatic. Also note presence of both recipient (podoplanin+/LYVE-1+) and donor (podoplanin+/LYVE-1-) vessels in the section.\nQuantification of donor and recipient lymphatic vessels also suggested that both reconnection and infiltration of recipient-derived lymphatics contribute to lymphatic regeneration in the transplanted skin. Two weeks following tissue transfer, 52% of the vessels in the distal margin and 37% of those at the proximal margin were donor-derived (i.e. podoplanin+/LYVE-1-). In contrast, 92% of the lymphatic vessels in the middle portion were donor-derived. By 6 weeks, the vast majority (>90%) of lymphatic vessels present in the distal and proximal margins were of recipient origin. Similarly, the number of recipient vessels in the middle portion also increased such that the distribution of these vessels with donor-derived lymphatics was 1∶1. These differences in the distribution of the lymphatic vessels were primarily due to the ingrowth of new lymphatics since the number of recipient vessels in the later time points increased substantially while the number of donor vessels (similar to our qualitative observations in the GFP experiment) decreased slightly, though not statistically significant (\nFigures 3D–E\n). There were significantly more recipient-derived vessels at the distal and proximal margins as compared to the central aspect of the graft suggesting ingrowth of vessels from the periphery of the transferred tissues (8.2 and 6.0 vs. 3.1; p<0.05). Taken together these findings suggest that donor-derived lymphatic vessels persist over time and that transferred tissues become further infiltrated by lymphatic vessels originating from the recipient beginning at the distal and proximal edges and infiltrating towards the center of the graft.\n\nA. 
Bar graph depicting source of lymphatic vessels in various regions of the skin-grafted tissues 2 or 6 weeks after surgery. Origin of lymphatic vessels in various regions of the skin graft 2 or 6 weeks after surgery was determined using podoplanin/LYVE-1 co-localization in skin grafts harvested from LYVE-1 knockout mice and transferred to nude mice. Recipient lymphatic vessels were identified as podoplanin+/LYVE-1+ (white bars) while donor lymphatics were identified as podoplanin+/LYVE-1- (black bars). Recipient-derived lymphatic vessels were noted primarily in the distal and proximal portions of the graft at the 2-week time point and became more uniform in distribution by 6 weeks. Increased number of recipient vessels in the peripheral regions of the graft at the 6-week time point resulted in a relative decrease in the percentage of donor-derived lymphatics at this time. B, C. Representative images (10x) of LYVE-1 (red) and podoplanin (green) staining of the distal portion of the graft 2 (B) and 6 (C) weeks after surgery. Yellow line marks the junction of the skin graft (located to the right) and recipient (located to the left) tissues. D. Recipient lymphatic vessels (podoplanin+/LYVE-1+) in various regions of the skin graft 2 and 6 weeks after skin grafting. Note that the numbers of recipient vessels in the distal (D) and proximal (P) portions of the tail are significantly greater than the number of recipient lymphatics in the middle (M) portion of the graft at both the 2- and 6-week time points indicating ingrowth of vessels from the periphery (*p<0.05). In addition, the number of recipient lymphatics in various regions of the skin graft increased significantly when comparing 2- and 6-week time points indicating ingrowth of recipient vessels. E. Donor lymphatic vessels (podoplanin+/LYVE-1-) in various regions of the skin graft 2 and 6 weeks after skin grafting (*p<0.05). The number of donor vessels remained essentially unchanged at both time points indicating that donor lymphatics persist over time and do not proliferate after tissue transfer.\nTo further investigate the role of donor and recipient cells in lymphatic regeneration, athymic nude mice underwent full-thickness skin grafting with tissues harvested from LYVE-1 knockout mice. This approach again enabled us to trace the origin of the lymphatic vessels and differentiate between donor and recipient vessels more easily than GFP skin grafts since donor tissues lacked LYVE-1 expression (a molecule that is relatively specific to lymphatic vessels) but expressed podoplanin (podoplanin+/LYVE-1-), whereas recipient vessels expressed both molecules (podoplanin+/LYVE-1+).\nTo investigate the relative contribution of donor and recipient lymphatic vessels to lymphatic regeneration in the transplanted skin, immunofluorescence staining with the lymphatic endothelial cell markers LYVE-1 and podoplanin was performed and the number of donor- and recipient-derived lymphatic vessels were evaluated at various regions of the skin graft (distal, middle, and proximal portions) 2 or 6 weeks following surgery (\nFigures 2A–B\n). This analysis, similar to our GFP/LYVE-1 co-localization, demonstrated connections between LYVE-1+ and LYVE-1- lymphatic vessels suggesting that lymphatic vessels from the recipient and donor tissues spontaneously reconnect. This phenomenon is seen in 3-D reconstructions of z-stacked confocal images (\nFigure 2B\n).\n\n\nA. 
Co-localization of LYVE-1 (left panel) and podoplanin (middle panel) of mouse tail sections harvested 6 weeks after surgery (40x). Overlay of images demonstrates connection of LYVE-1+ (i.e. recipient) and LYVE-1- (i.e. donor) derived podoplanin-stained lymphatics. Yellow line marks the junction of the skin graft and native tail skin. Distal junction of the skin graft and mouse tail is shown to the left. B. Three-dimensional rendering of podoplanin/LYVE-1 co-localization at the junction of the skin graft and native tail skin (yellow line). Note connection between LYVE-1+ and LYVE-1- lymphatic. Also note presence of both recipient (podoplanin+/LYVE-1+) and donor (podoplanin+/LYVE-1-) vessels in the section.\nQuantification of donor and recipient lymphatic vessels also suggested that both reconnection and infiltration of recipient-derived lymphatics contribute to lymphatic regeneration in the transplanted skin. Two weeks following tissue transfer, 52% of the vessels in the distal margin and 37% of those at the proximal margin were donor-derived (i.e. podoplanin+/LYVE-1-). In contrast, 92% of the lymphatic vessels in the middle portion were donor-derived. By 6 weeks, the vast majority (>90%) of lymphatic vessels present in the distal and proximal margins were of recipient origin. Similarly, the number of recipient vessels in the middle portion also increased such that the distribution of these vessels with donor-derived lymphatics was 1∶1. These differences in the distribution of the lymphatic vessels were primarily due to the ingrowth of new lymphatics since the number of recipient vessels in the later time points increased substantially while the number of donor vessels (similar to our qualitative observations in the GFP experiment) decreased slightly, though not statistically significant (\nFigures 3D–E\n). There were significantly more recipient-derived vessels at the distal and proximal margins as compared to the central aspect of the graft suggesting ingrowth of vessels from the periphery of the transferred tissues (8.2 and 6.0 vs. 3.1; p<0.05). Taken together these findings suggest that donor-derived lymphatic vessels persist over time and that transferred tissues become further infiltrated by lymphatic vessels originating from the recipient beginning at the distal and proximal edges and infiltrating towards the center of the graft.\n\nA. Bar graph depicting source of lymphatic vessels in various regions of the skin-grafted tissues 2 or 6 weeks after surgery. Origin of lymphatic vessels in various regions of the skin graft 2 or 6 weeks after surgery was determined using podoplanin/LYVE-1 co-localization in skin grafts harvested from LYVE-1 knockout mice and transferred to nude mice. Recipient lymphatic vessels were identified as podoplanin+/LYVE-1+ (white bars) while donor lymphatics were identified as podoplanin+/LYVE-1- (black bars). Recipient-derived lymphatic vessels were noted primarily in the distal and proximal portions of the graft at the 2-week time point and became more uniform in distribution by 6 weeks. Increased number of recipient vessels in the peripheral regions of the graft at the 6-week time point resulted in a relative decrease in the percentage of donor-derived lymphatics at this time. B, C. Representative images (10x) of LYVE-1 (red) and podoplanin (green) staining of the distal portion of the graft 2 (B) and 6 (C) weeks after surgery. Yellow line marks the junction of the skin graft (located to the right) and recipient (located to the left) tissues. D. 
Recipient lymphatic vessels (podoplanin+/LYVE-1+) in various regions of the skin graft 2 and 6 weeks after skin grafting. Note that the numbers of recipient vessels in the distal (D) and proximal (P) portions of the tail are significantly greater than the number of recipient lymphatics in the middle (M) portion of the graft at both the 2- and 6-week time points indicating ingrowth of vessels from the periphery (*p<0.05). In addition, the number of recipient lymphatics in various regions of the skin graft increased significantly when comparing 2- and 6-week time points indicating ingrowth of recipient vessels. E. Donor lymphatic vessels (podoplanin+/LYVE-1-) in various regions of the skin graft 2 and 6 weeks after skin grafting (*p<0.05). The number of donor vessels remained essentially unchanged at both time points indicating that donor lymphatics persist over time and do not proliferate after tissue transfer.\n[SUBTITLE] Lymphatic regeneration after tissue transfer is associated with expression of VEGF-C and infiltration of macrophages [SUBSECTION] VEGF-C is a critical regulator of lymphatic regeneration during adult lymphangiogenesis.[17] Therefore, we investigated the pattern of VEGF-C expression after tissue transfer using immunohistochemical staining. Two weeks after surgery, VEGF-C expression was highest at the periphery of the wounds with the distal wound margin displaying significantly more staining of VEGF-C+ cells/hpf than either the middle or proximal portion (110±4 in distal portion vs. 46±8 in middle portion; p<0.05; \nFigures 4A, C\n). VEGF-C staining at the peripheral margins of the grafts was associated with a modest inflammatory infiltrate into the skin graft. By 6 weeks, VEGF-C staining was primarily localized to the central region of the skin graft (73±11 in the central portion vs. 30±8 or 25±10 in proximal or distal portion; p<0.05; \nFigures 4B–C\n). This expression pattern corresponded to the infiltration of lymphatic vessels into the graft beginning at the peripheral margins and extending into the central portion.\n\nA. VEGF-C expression in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse-tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Black arrow shows large number of VEGF-C+ cells in the distal junction. Dashed box delineates skin-grafted area. B. VEGF-C expression in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/middle/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. Note small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of VEGF-C+ cells in the various tail regions (D = distal, M = middle, P = proximal) 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05; #<0.01).\nMacrophages are critical cells that regulate wound repair and produce VEGF-C during lymphatic regeneration in the mouse tail. 
In addition, they are thought to directly contribute to lymphangiogenesis by trans-differentiating into lymphatic endothelial cells.[23] Immunohistochemical staining with F4/80 revealed a similar expression pattern as that of VEGF-C with a large number of F4/80+ cells localized at the periphery of the graft at the 2 week time point (\nFigures 5A, C\n). Similar to our findings of VEGF-C expression, the greatest number of macrophages was noted at the distal margin of the wound (44±7 in distal portion vs. 16±5 or 20±5 in middle or proximal areas; p<0.05). Co-localization of F4/80 and LYVE-1 suggested that some of the newly formed lymphatic channels in the skin graft may have formed as a result of trans-differentiation of macrophages into lymphatic endothelial cells (\nFigure 6\n). In addition, scattered, isolated F4/80+/LYVE-1+ cells that had not formed tubules could be seen within the wound sections. Newly formed lymphatics with trans-differentiating macrophages were recipient-derived since they also expressed LYVE-1. By 6 weeks, the number of macrophages had decreased in the distal margin of the graft and a more uniform distribution was noted throughout the grafted area (\nFigures 5B-C\n). In addition, we could not find any F4/80+/LYVE-1+ vessels at this time point indicating loss of F4/80 expression with vessel remodeling (not shown).\n\nA. F4/80 localization in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse-tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Dashed box delineates skin-grafted area. B. F4/80 localization in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. Note small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of F4/80+ cells in various regions of the tail 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05).\n\nA, B, C. Representative photomicrograph (40x) demonstrating co-localization of F4/80 (A; brown immunostaining) and LYVE-1 (B; green fluorescent stain) in skin-grafted mouse tail sections 2 weeks after surgery. Arrows show co-localization of F4/80 and LYVE-1 in lymphatic capillaries. Also note scattered F4/80+/LYVE-1+ cells in the tail section. F4/80-/LYVE-1+ capillaries can also be appreciated (upper right corner).\nVEGF-C is a critical regulator of lymphatic regeneration during adult lymphangiogenesis.[17] Therefore, we investigated the pattern of VEGF-C expression after tissue transfer using immunohistochemical staining. Two weeks after surgery, VEGF-C expression was highest at the periphery of the wounds with the distal wound margin displaying significantly more staining of VEGF-C+ cells/hpf than either the middle or proximal portion (110±4 in distal portion vs. 46±8 in middle portion; p<0.05; \nFigures 4A, C\n). VEGF-C staining at the peripheral margins of the grafts was associated with a modest inflammatory infiltrate into the skin graft. By 6 weeks, VEGF-C staining was primarily localized to the central region of the skin graft (73±11 in the central portion vs. 
30±8 or 25±10 in proximal or distal portion; p<0.05; \nFigures 4B–C\n). This expression pattern corresponded to the infiltration of lymphatic vessels into the graft beginning at the peripheral margins and extending into the central portion.\n\nA. VEGF-C expression in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse-tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Black arrow shows large number of VEGF-C+ cells in the distal junction. Dashed box delineates skin-grafted area. B. VEGF-C expression in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/middle/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. Note small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of VEGF-C+ cells in the various tail regions (D = distal, M = middle, P = proximal) 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05; #<0.01).\nMacrophages are critical cells that regulate wound repair and produce VEGF-C during lymphatic regeneration in the mouse tail. In addition, they are thought to directly contribute to lymphangiogenesis by trans-differentiating into lymphatic endothelial cells.[23] Immunohistochemical staining with F4/80 revealed a similar expression pattern as that of VEGF-C with a large number of F4/80+ cells localized at the periphery of the graft at the 2 week time point (\nFigures 5A, C\n). Similar to our findings of VEGF-C expression, the greatest number of macrophages was noted at the distal margin of the wound (44±7 in distal portion vs. 16±5 or 20±5 in middle or proximal areas; p<0.05). Co-localization of F4/80 and LYVE-1 suggested that some of the newly formed lymphatic channels in the skin graft may have formed as a result of trans-differentiation of macrophages into lymphatic endothelial cells (\nFigure 6\n). In addition, scattered, isolated F4/80+/LYVE-1+ cells that had not formed tubules could be seen within the wound sections. Newly formed lymphatics with trans-differentiating macrophages were recipient-derived since they also expressed LYVE-1. By 6 weeks, the number of macrophages had decreased in the distal margin of the graft and a more uniform distribution was noted throughout the grafted area (\nFigures 5B-C\n). In addition, we could not find any F4/80+/LYVE-1+ vessels at this time point indicating loss of F4/80 expression with vessel remodeling (not shown).\n\nA. F4/80 localization in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse-tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Dashed box delineates skin-grafted area. B. F4/80 localization in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. 
Autogenous tissue lymphangiogenesis is associated with spontaneous reconnection of local lymphatics and infiltration of new lymphatic capillaries

In order to investigate the involvement of recipient and donor cells in spontaneous lymphatic regeneration, the tails of athymic nude mice that underwent full-thickness skin excision were repaired with skin grafts harvested from GFP mice. This allowed us to follow the origin of cells by co-localization of GFP and lymphatic markers. Using this technique, the recipient nude mouse lymphatics stained positive for lymphatic markers but negative for GFP; the transplanted tissue lymphatics, in contrast, stained positive for both GFP and lymphatic markers (Figure 1). Using fluorescent imaging, we found that high-level GFP expression was maintained in the transplanted skin even 6 weeks after tissue transfer (Figure 1A). The transplanted skin grafts healed quickly, with regrowth of hair and skin appendages by 6 weeks indicating that the grafts had become fully incorporated (Figure 1B). In addition, return of lymphatic function with transport of fluorescence-labeled colloid could be grossly visualized by microlymphangiography as early as 2 weeks after surgery (Figure 1C). This transport was initially driven by interstitial flow (i.e. no discrete lymphatic vessels could be visualized at 2 weeks); however, by 6 weeks honeycomb-like dermal lymphatics were noted in the transferred grafts (primarily at the distal margin), suggesting that the reformed lymphatic vessels are functional (Figure 1D).
The lack of complete restoration of the honeycomb-like dermal lymphatic architecture in the mid portion of the graft at this time may reflect a delay in dermal lymphatic regeneration. This concept is supported by previous studies demonstrating that honeycomb-like lymphatic vessels that regenerate in the mouse tail after coverage of the wound with collagen gel are not ordinarily visible until day 60 after surgery, and even at this time are only seen at the distal margin of the wound.[21], [22]

Figure 1. A. Skin grafts harvested from GFP transgenic mice express GFP at high levels and for a long period of time. A representative fluorescent picture of a mouse (top) with higher magnification of the tail (bottom) is shown 6 weeks after surgery. The skin-grafted GFP portion of the tail can easily be seen. B. Gross photographs of mouse tails treated with skin grafts harvested from GFP transgenic mice. Note the rapid incorporation and ingrowth of hair follicles at the 6-week time point. C. Microlymphangiography performed 2 (left) or 6 (right) weeks following skin graft. Distal portion of the tail is shown at the bottom. Note the flow of fluorescent dye by interstitial flow after 2 weeks. In contrast, lymphatic flow can be seen in the skin-grafted area at the 6-week time point. Representative figures of triplicate experiments are shown. D. Higher power photograph of microlymphangiography 2 and 6 weeks after surgery demonstrating a few honeycomb-like dermal lymphatics (white arrows) in the skin graft at the 6-week time point. E. Representative LYVE-1 (pink) and GFP (green) co-localization in skin-grafted mouse tails 6 weeks after surgery. Low power (5x; left) and high power (20x; right) views are shown. Note connection between GFP- (recipient) and GFP+ (donor) lymphatic vessel. F. Representative photomicrograph (2x) demonstrating GFP (left), LYVE-1 (middle), and co-localization (right) of skin-grafted mouse tails 6 weeks after surgery. Note ingrowth of GFP-/LYVE-1+ vessels (yellow circle) from the distal (yellow dotted line) portion of the wound. G. High power (20x) view of section shown in F. Note the presence of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatics at the distal margin of the wound.

Confocal co-localization of GFP and LYVE-1 in sections obtained 6 weeks after surgery demonstrated the presence of numerous lymphatic vessels comprising both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatic endothelial cells (Figure 1E). We also visualized thin lymphatic bridges connecting donor and recipient lymphatics, suggesting that these vessels were spontaneously re-connecting at the edge of the wound. These connections were not frequently noted, however, because the sections were cut in the longitudinal plane (i.e. necessitating sectioning at an extremely small/precise plane). In addition to these lymphatics, numerous recipient lymphatic vessels infiltrating the skin graft were noted primarily at the distal edge of the wound, suggesting that lymphatic regeneration of transplanted skin occurs via both spontaneous reconnection and ingrowth of new capillary lymphatic channels (Figures 1F–G). Interestingly, at the early time point the majority of vessels present were, qualitatively, GFP+/LYVE-1+; however, by 6 weeks these vessels represented a relatively small percentage of the total number of lymphatic vessels (Figures 1F–G).
Precise quantification of donor and recipient vessels was difficult to perform with GFP skin grafts, however, because GFP is expressed in all cells and is not limited solely to the lymphatics. Due to this limitation, we performed identical operations using skin grafts obtained from LYVE-1 knockout mice.

Lymphatic regeneration after tissue transfer is associated with infiltration of LYVE-1 positive cells

To further investigate the role of donor and recipient cells in lymphatic regeneration, athymic nude mice underwent full-thickness skin grafting with tissues harvested from LYVE-1 knockout mice. This approach again enabled us to trace the origin of the lymphatic vessels, and to differentiate donor from recipient vessels more easily than with GFP skin grafts, since donor tissues lacked LYVE-1 expression (a molecule that is relatively specific to lymphatic vessels) but expressed podoplanin (podoplanin+/LYVE-1-), whereas recipient vessels expressed both molecules (podoplanin+/LYVE-1+).

To investigate the relative contribution of donor and recipient lymphatic vessels to lymphatic regeneration in the transplanted skin, immunofluorescence staining with the lymphatic endothelial cell markers LYVE-1 and podoplanin was performed, and the numbers of donor- and recipient-derived lymphatic vessels were evaluated in various regions of the skin graft (distal, middle, and proximal portions) 2 or 6 weeks following surgery (Figures 2A–B). This analysis, similar to our GFP/LYVE-1 co-localization, demonstrated connections between LYVE-1+ and LYVE-1- lymphatic vessels, suggesting that lymphatic vessels from the recipient and donor tissues spontaneously reconnect. This phenomenon is seen in 3-D reconstructions of z-stacked confocal images (Figure 2B).

Figure 2. A. Co-localization of LYVE-1 (left panel) and podoplanin (middle panel) of mouse tail sections harvested 6 weeks after surgery (40x). Overlay of images demonstrates connection of LYVE-1+ (i.e. recipient) and LYVE-1- (i.e. donor) derived podoplanin-stained lymphatics. Yellow line marks the junction of the skin graft and native tail skin. Distal junction of the skin graft and mouse tail is shown to the left. B. Three-dimensional rendering of podoplanin/LYVE-1 co-localization at the junction of the skin graft and native tail skin (yellow line). Note connection between LYVE-1+ and LYVE-1- lymphatics. Also note presence of both recipient (podoplanin+/LYVE-1+) and donor (podoplanin+/LYVE-1-) vessels in the section.

Quantification of donor and recipient lymphatic vessels also suggested that both reconnection and infiltration of recipient-derived lymphatics contribute to lymphatic regeneration in the transplanted skin. Two weeks following tissue transfer, 52% of the vessels in the distal margin and 37% of those at the proximal margin were donor-derived (i.e. podoplanin+/LYVE-1-). In contrast, 92% of the lymphatic vessels in the middle portion were donor-derived. By 6 weeks, the vast majority (>90%) of lymphatic vessels present in the distal and proximal margins were of recipient origin. Similarly, the number of recipient vessels in the middle portion also increased, such that the ratio of recipient- to donor-derived lymphatics there was approximately 1:1. These differences in the distribution of the lymphatic vessels were primarily due to the ingrowth of new lymphatics, since the number of recipient vessels at the later time point increased substantially while the number of donor vessels (similar to our qualitative observations in the GFP experiment) decreased slightly, though this was not statistically significant (Figures 3D–E).
There were significantly more recipient-derived vessels at the distal and proximal margins as compared to the central aspect of the graft suggesting ingrowth of vessels from the periphery of the transferred tissues (8.2 and 6.0 vs. 3.1; p<0.05). Taken together these findings suggest that donor-derived lymphatic vessels persist over time and that transferred tissues become further infiltrated by lymphatic vessels originating from the recipient beginning at the distal and proximal edges and infiltrating towards the center of the graft.

Figure 3. A. Bar graph depicting source of lymphatic vessels in various regions of the skin-grafted tissues 2 or 6 weeks after surgery. Origin of lymphatic vessels in various regions of the skin graft 2 or 6 weeks after surgery was determined using podoplanin/LYVE-1 co-localization in skin grafts harvested from LYVE-1 knockout mice and transferred to nude mice. Recipient lymphatic vessels were identified as podoplanin+/LYVE-1+ (white bars) while donor lymphatics were identified as podoplanin+/LYVE-1- (black bars). Recipient-derived lymphatic vessels were noted primarily in the distal and proximal portions of the graft at the 2-week time point and became more uniform in distribution by 6 weeks. Increased number of recipient vessels in the peripheral regions of the graft at the 6-week time point resulted in a relative decrease in the percentage of donor-derived lymphatics at this time. B, C. Representative images (10x) of LYVE-1 (red) and podoplanin (green) staining of the distal portion of the graft 2 (B) and 6 (C) weeks after surgery. Yellow line marks the junction of the skin graft (located to the right) and recipient (located to the left) tissues. D. Recipient lymphatic vessels (podoplanin+/LYVE-1+) in various regions of the skin graft 2 and 6 weeks after skin grafting. Note that the numbers of recipient vessels in the distal (D) and proximal (P) portions of the tail are significantly greater than the number of recipient lymphatics in the middle (M) portion of the graft at both the 2- and 6-week time points indicating ingrowth of vessels from the periphery (*p<0.05). In addition, the number of recipient lymphatics in various regions of the skin graft increased significantly when comparing 2- and 6-week time points indicating ingrowth of recipient vessels. E. Donor lymphatic vessels (podoplanin+/LYVE-1-) in various regions of the skin graft 2 and 6 weeks after skin grafting (*p<0.05). The number of donor vessels remained essentially unchanged at both time points indicating that donor lymphatics persist over time and do not proliferate after tissue transfer.
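The regional donor/recipient breakdown above amounts to tallying marker phenotypes per region of the graft. The short Python sketch below illustrates one way such percentages could be computed from per-vessel calls; the phenotype labels and the counts in the example are hypothetical and merely stand in for the blinded high-power-field counts described in the Methods.

```python
from collections import Counter

# Hypothetical per-vessel phenotype calls for one section, keyed by graft region.
# "recipient" = podoplanin+/LYVE-1+, "donor" = podoplanin+/LYVE-1- (per the staining scheme above).
vessel_calls = {
    "distal":   ["recipient"] * 8 + ["donor"] * 4,   # illustrative counts only
    "middle":   ["recipient"] * 3 + ["donor"] * 3,
    "proximal": ["recipient"] * 6 + ["donor"] * 3,
}

def summarize_region(calls):
    """Return phenotype counts and the percentage of donor-derived vessels for one region."""
    counts = Counter(calls)
    total = sum(counts.values())
    pct_donor = 100.0 * counts.get("donor", 0) / total if total else float("nan")
    return counts, pct_donor

for region, calls in vessel_calls.items():
    counts, pct_donor = summarize_region(calls)
    print(f"{region}: {dict(counts)}, donor-derived = {pct_donor:.0f}%")
```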
Lymphatic regeneration after tissue transfer is associated with expression of VEGF-C and infiltration of macrophages

VEGF-C is a critical regulator of lymphatic regeneration during adult lymphangiogenesis.[17] Therefore, we investigated the pattern of VEGF-C expression after tissue transfer using immunohistochemical staining. Two weeks after surgery, VEGF-C expression was highest at the periphery of the wounds, with the distal wound margin displaying significantly more VEGF-C+ cells/hpf than either the middle or proximal portion (110±4 in the distal portion vs. 46±8 in the middle portion; p<0.05; Figures 4A, C). VEGF-C staining at the peripheral margins of the grafts was associated with a modest inflammatory infiltrate into the skin graft. By 6 weeks, VEGF-C staining was primarily localized to the central region of the skin graft (73±11 in the central portion vs. 30±8 or 25±10 in the proximal or distal portion; p<0.05; Figures 4B–C). This expression pattern corresponded to the infiltration of lymphatic vessels into the graft, beginning at the peripheral margins and extending into the central portion.

Figure 4. A. VEGF-C expression in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Black arrow shows large number of VEGF-C+ cells in the distal junction. Dashed box delineates skin-grafted area. B. VEGF-C expression in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/middle/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. Note small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of VEGF-C+ cells in the various tail regions (D = distal, M = middle, P = proximal) 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05; #p<0.01).

Macrophages are critical cells that regulate wound repair and produce VEGF-C during lymphatic regeneration in the mouse tail. In addition, they are thought to directly contribute to lymphangiogenesis by trans-differentiating into lymphatic endothelial cells.[23] Immunohistochemical staining with F4/80 revealed an expression pattern similar to that of VEGF-C, with a large number of F4/80+ cells localized at the periphery of the graft at the 2 week time point (Figures 5A, C). Similar to our findings for VEGF-C expression, the greatest number of macrophages was noted at the distal margin of the wound (44±7 in the distal portion vs. 16±5 or 20±5 in the middle or proximal areas; p<0.05). Co-localization of F4/80 and LYVE-1 suggested that some of the newly formed lymphatic channels in the skin graft may have formed as a result of trans-differentiation of macrophages into lymphatic endothelial cells (Figure 6). In addition, scattered, isolated F4/80+/LYVE-1+ cells that had not formed tubules could be seen within the wound sections. Newly formed lymphatics containing trans-differentiating macrophages were recipient-derived, since they also expressed LYVE-1. By 6 weeks, the number of macrophages had decreased at the distal margin of the graft and a more uniform distribution was noted throughout the grafted area (Figures 5B–C). In addition, we could not find any F4/80+/LYVE-1+ vessels at this time point, indicating loss of F4/80 expression with vessel remodeling (not shown).
Figure 5. A. F4/80 localization in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Dashed box delineates skin-grafted area. B. F4/80 localization in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and distal/proximal portions of the recipient mouse tails are shown. Dashed box delineates skin-grafted area. Note small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of F4/80+ cells in various regions of the tail 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05).

Figure 6. A, B, C. Representative photomicrograph (40x) demonstrating co-localization of F4/80 (A; brown immunostaining) and LYVE-1 (B; green fluorescent stain) in skin-grafted mouse tail sections 2 weeks after surgery. Arrows show co-localization of F4/80 and LYVE-1 in lymphatic capillaries. Also note scattered F4/80+/LYVE-1+ cells in the tail section. F4/80-/LYVE-1+ capillaries can also be appreciated (upper right corner).

Spontaneous regeneration of lymphatics after tissue transfer can be used to bypass damaged lymphatics

In order to determine if spontaneous regeneration of lymphatics can be used to bypass surgically damaged lymphatic vessels, we compared lymphatic function in mice that underwent skin excision with animals that had skin excision and repair with full-thickness skin grafting. Gross evaluation of the animals demonstrated marked differences in tail swelling at all time points evaluated (Figure 7A). These findings were corroborated by tail volume calculations demonstrating significantly higher tail volumes in excision-only animals at every time point evaluated (Figure 7B). Skin-grafted animals had a 4-fold decrease in tail swelling at the 2 week time point (19±5.9% vs. 74±7.8%, p<0.05) and their tail volumes returned to baseline 6 weeks after surgery. In contrast, excision-only animals demonstrated a persistent increase even 6 weeks postoperatively (26±8%, p<0.01). The differences in tail volumes observed in our study were associated with significantly improved lymphatic function as assessed by lymphoscintigraphy (Figures 7C–D). At both the 2 and 6 week time points, the skin-grafted animals demonstrated a significantly increased total uptake of Tc99 in the lymph nodes at the base of the tail (3.74% vs. 0.61% at 2 weeks; 10.70% vs. 4.23% at 6 weeks; p<0.01 for both time points). In addition, lymphatic uptake occurred more rapidly, as reflected by a more rapid increase in the slope of the asymptotic uptake curve.
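The lymphoscintigraphy readout above is summarized by the total nodal uptake and by how quickly the asymptotic uptake curve rises. One illustrative way to extract a single rate parameter from such a curve is to fit a saturating exponential, as sketched below; this is not the analysis pipeline used in the study (region-of-interest analysis was performed in ASIPro), and the time points and uptake values are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_uptake(t, a_max, k):
    """Asymptotic uptake model: A(t) = A_max * (1 - exp(-k * t))."""
    return a_max * (1.0 - np.exp(-k * t))

# Placeholder time points (hours) and decay-adjusted nodal uptake (% injected dose);
# real values would come from the region-of-interest analysis described in the Methods.
t = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 2.5])
uptake = np.array([0.8, 1.6, 2.6, 3.2, 3.6, 3.7])

(a_max, k), _ = curve_fit(saturating_uptake, t, uptake, p0=(4.0, 1.0))
print(f"plateau uptake ~{a_max:.2f}% ID, rate constant ~{k:.2f} per hour")
# The initial slope of the fitted curve is approximately a_max * k.
```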
Figure 7. A. Gross photographs comparing nude mice that had undergone tail excision with (right) and without (left) skin grafting are shown 6 weeks after surgery. Note obvious difference in tail swelling. B. Tail volume measurements in nude mice that had undergone tail excision with or without skin grafting. Data are presented as percent change from baseline (i.e. preoperatively) with mean ± SD (*p<0.05). C, D. Representative lymphoscintigraphy of nude mice that had undergone tail excision with or without skin grafting. E. Representative photomicrograph (5x) of H&E stained tail sections from nude mice treated with (left) or without (right) skin grafting 6 weeks after surgery. Dashed box delineates area of skin graft. Note decreased inflammation (cellularity) and dermal thickness in skin-grafted mice distal (to the left) of the wound. F. High power (40x) photomicrographs of tail skin harvested 5 mm distal to the excision site. Note decreased cellularity in skin-grafted section (left) as compared with excision section (right; arrow). Also note decreased dermal thickness. G. Dermal thickness measurements and representative figures (40x) in nude mice that had undergone tail excision with or without skin grafting 6 weeks following surgery (*p<0.05). H. Scar index measurements in tail tissues localized just distal to the site of lymphatic injury 6 weeks after treatment with excision with or without skin grafting. Representative Sirius red birefringence images are shown to the right. Orange-red is indicative of scar; yellow-green is consistent with normal (i.e. non-fibrosed) tissue (*p<0.01).

Histological analysis of tail sections 6 weeks after surgery demonstrated markedly decreased inflammatory cell infiltrate and dermal thickness in skin-grafted animals as compared to excision-only controls (Figures 7E–F). Far fewer inflammatory cells were noted qualitatively in the skin-grafted animals than in excision-only animals, and this was most readily apparent in the portions of the tail distal to the skin excision site (Figure 7F). Skin-grafted animals had a 40% decrease in dermal thickness (284±59 µm vs. 482±62 µm, p<0.05) and more normal dermal architecture in the regions of the tail localized distal to the site of lymphatic injury (Figure 7G). Furthermore, skin-grafted animals demonstrated significantly decreased tissue fibrosis in these regions (Figure 7H), with a 54% decrease in the scar index (1.39±0.29 vs. 3.00±0.52, p<0.01). Together, these findings support the hypothesis that spontaneous regeneration of lymphatic vessels in autogenous tissues can bypass damaged lymphatic channels, resulting in rapid lymphatic repair and markedly improved lymphatic function.

Discussion

Lymphedema is a debilitating disorder that occurs commonly after lymph node dissection for cancer treatment. Despite its common occurrence and profound impact on quality of life, there is no effective surgical cure. Although clinical reports have shown that autogenous tissue transfer (i.e. transfer of healthy tissues) can bypass damaged lymphatics and improve lymphedema, the precise mechanisms governing lymphatic regeneration in these circumstances remain unknown. The findings of the current study suggest that lymphatic regeneration after tissue transfer occurs as a result of spontaneous reconnection of existing lymphatics as well as ingrowth of new lymphatic vessels from the recipient. In addition, our studies support the hypothesis that these processes occur in the setting of VEGF-C expression and recipient-derived macrophage infiltration beginning at the periphery of the graft. Similar findings have been reported in blood vessel regeneration in skin grafts.[24]

De novo lymphangiogenesis has been studied in a variety of human diseases, from chronic renal transplant rejection to tumor metastasis. The role of recipient and donor tissues in the regulation of lymphatic regeneration has only been evaluated in a few studies, however. Kerjaschki et al. demonstrated that the majority (>95%) of lymphatics in the transplanted kidney are donor-derived while a relatively small number of lymphatic vessels (4.5%) in these tissues were of recipient origin.[15] These results are in contrast to our finding that, by 6 weeks following skin grafting, the majority of lymphatic vessels present in the transplanted tissues were of recipient origin (Figure 3A), indicating that inosculation with ingrowth of new lymphatic channels is a critical mechanism regulating lymphatic regeneration after skin/subcutaneous tissue transfer but not in solid organ transplantation. Although spontaneous reconnection of existing lymphatics also occurs in transplanted skin/subcutaneous tissues, it appears to be the primary mechanism of lymphatic regeneration after solid organ transplantation. It is possible that ingrowth of lymphatic vessels in kidney transplants from surrounding tissues is actively inhibited by the presence of the kidney capsule.
Indeed, fibrous capsule and fascial tissues have long been considered to be important barriers to lymphatic tumor metastasis and are used routinely as surgical landmarks during oncological procedures. Alternatively, differences in lymphatic regeneration in solid organs as compared to skin and subcutaneous tissues may reflect volumetric differences in the tissues transferred, since skin grafts are relatively low volume (and therefore easily infiltrated by surrounding tissues) whereas solid organ transplantation involves a much larger volume of tissue that allows ingrowth of lymphatics only at the periphery. Finally, in our model the skin graft was transferred as an avascular tissue, with infiltration of local blood vessels and eventual incorporation into the recipient tissues. Kidney transplantation is performed by reconnecting the vascular supply of the tissues, thereby minimizing the need for vessel ingrowth.

Previous studies have demonstrated that lymphangiogenesis in the mouse tail model is dependent on gradients of interstitial flow and expression of VEGF-C.[17], [21] These studies have shown that gradients of VEGF-C expression coordinate and regulate lymphatic endothelial cell migration and differentiation, resulting in tubule formation and lymphatic regeneration in a distal-to-proximal direction. In the current study, we also found that 2 weeks following surgery, VEGF-C expression was localized to the peripheral edges of the skin graft, especially at the distal margin. These expression patterns correlated with the ingrowth of new lymphatic vessels from the recipient into the donor tissues. Contrary to previous studies of lymphatic regeneration in collagen gel scaffolds, however, we also found lymphatic ingrowth from the proximal aspect of the wound, albeit at a slightly slower rate, indicating that lymphatic regeneration across a tissue graft occurs as a result of coordinated lymphatic ingrowth from both margins of the wound. This finding may be due to a higher potential for lymphatic transport via interstitial fluid flow in skin grafts as compared to acellular scaffolds such as type I collagen. Alternatively, it is likely that the skin graft becomes incorporated into recipient tissues more quickly and efficiently than the lymphatic/tissue regeneration that occurs in collagen gel scaffolds, resulting in improved interstitial flow and expression of VEGF-C even in the proximal portion of the graft.

In the current study we found that, by 6 weeks after surgery, VEGF-C expression was highest within the center of the transplanted tissues. This response correlated with the pattern of lymphatic vessel regeneration after tissue transfer, beginning from the distal and proximal margins and invading into the center of the donor tissues, implying that lymphatic regeneration occurs as a consequence of VEGF-C expression within the graft.
This finding is intriguing and is supported by previous studies demonstrating improved lymphangiogenesis in surgical skin flaps after adenoviral VEGF-C transfer.[25] In contrast to these previous studies, however, we found that endogenous expression of VEGF-C is adequate to promote rapid and efficient lymphatic regeneration after tissue transfer, suggesting that gene therapeutic interventions to augment lymphangiogenic cytokine expression are not necessary for lymphatic regeneration in uncomplicated tissue transfer or surgical interventions.

Macrophages are critical regulators of lymphangiogenesis.[26], [27] Macrophage expression of VEGF-C is a major regulator of lymphangiogenesis in tumors and during inflammation after kidney transplantation or corneal injury.[27], [28] In addition, macrophages are thought to directly contribute to lymphatic vessel formation by trans-differentiating into lymphatic endothelial cells.[29] Our study showed that macrophages served a dual role in lymphatic regeneration after tissue transfer. Immunohistochemical localization of VEGF-C and F4/80 demonstrated that the patterns of VEGF-C expression correlated with accumulation of F4/80+ cells (Figures 3, 4), supporting the hypothesis that macrophages play a critical role in VEGF-C expression during incorporation and lymphatic regeneration of autogenous tissues. In addition, we found that recipient-derived macrophages may directly contribute to lymphatic formation at the peripheral edges of the graft in the early time period, as demonstrated by F4/80+/LYVE-1+ tubular structures at the 2 week time point. Similar to previous studies of inflammatory lymphangiogenesis, these newly formed vessels likely underwent remodeling with loss of F4/80 expression at the later time point, since no double-positive lymphatic vessels could be identified in sections obtained from animals sacrificed 6 weeks after surgery.[29]

Perhaps the most significant finding of our study was that autogenous tissue transplantation could be used to rapidly bypass damaged lymphatic vessels, with resultant restoration of lymphatic flow. Skin-grafted animals had marked reductions in tail swelling and significantly improved lymphatic function, with significant decreases in dermal thickness and tissue fibrosis. The finding that tissue transfer can bypass damaged lymphatic vessels is supported by anecdotal surgical reports demonstrating that tissue transfer can aid in the treatment of some patients with lymphedema, and our results provide a mechanism for this observation. Our findings suggest that improvements in extremity swelling occur as a result of both spontaneous lymphatic regeneration and reconnection of local and transferred lymphatic vessels, thereby effectively bypassing lymphatic channels damaged during lymphadenectomy. These findings provide a rationale for the development of tissue-engineered lymphatic constructs designed specifically for this purpose and may represent a novel means of treating patients with lymphedema.
Introduction

Lymphedema is the chronic swelling of an extremity that occurs when the capacity of the lymphatic system to drain protein-rich interstitial fluid is exceeded.[1] In the United States and other Western countries this condition occurs most commonly after lymph node resection for the treatment of breast and gynecological cancers.[2], [3], [4] It is estimated that as many as 50% of patients treated with lymph node dissection go on to develop lymphedema, even after more limited surgeries such as sentinel lymph node biopsy.[5], [6], [7] Patients with this complication have a significantly decreased quality of life, with frequent infections, decreased range of motion, and a cosmetic deformity that is difficult to conceal.[8] Furthermore, lymphedema results in a nearly $10,000 increase in the two-year treatment cost of breast cancer patients.[9] With over 200,000 new breast cancers diagnosed annually, it is clear that lymphedema is a significant biomedical burden.

Treatment for lymphedema has largely been symptomatic in nature and designed to prevent progression of swelling. Patients are treated with compressive stockings and manual massage to decrease fluid accumulation and encourage drainage of interstitial fluid.[10] These treatments can decrease swelling and discomfort but are time consuming and do not reverse the basic pathology of lymphedema. More importantly, once compression and manual drainage are stopped, lymphedema recurs and in most cases worsens over time.

Some surgeons have attempted to bypass damaged lymphatic channels by interposition of healthy tissues. In fact, a few clinical case studies have shown remarkable improvements in limb edema after tissue transfer, with evidence of lymphatic re-routing.[11], [12] Clinical evidence of spontaneous lymphatic regeneration is also noted after microsurgical tissue transfer.[13] In these procedures, composite tissues containing skin and fat are transferred from areas with excess tissue (e.g. the abdomen) to replace damaged or surgically resected tissues by reconnecting the arterial and venous vessels using microsurgery. Although the lymphatic vessels are not re-anastomosed, tissue edema spontaneously resolves over a period of 6–8 weeks, implying that lymphatic regeneration has occurred. More recently, Tammela et al. demonstrated that lymph node transfer combined with VEGF-C administration after lymphadenectomy in mice can promote reconstitution of the deep lymphatic system.[14] Similarly, Kerjaschki et al. have demonstrated that the majority of lymphatics (>95%) present in chronically rejected kidney transplants originated from the donor.[15] Thus, transfer of healthy tissues may be a means of replacing or re-routing damaged lymphatic vessels to treat or prevent lymphedema. The purpose of these experiments was to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer.

Methods

Mouse tail excision model, surgical preparation, and full-thickness skin grafting

In order to investigate the mechanisms that regulate lymphatic regeneration after tissue transfer, we excised a 2-mm wide, full-thickness area of skin in 10–12 week old female athymic nude mice (nu/nu; NCI-Frederick, Frederick, MD). In addition, without injuring the tail blood supply, the deep lymphatic system was visualized and ligated using a surgical microscope (Leica StereoZoom SZ-4, Wetzlar, Germany).
The defects were then either left open or immediately repaired with full-thickness skin grafts harvested from 10–12 week-old female transgenic mice expressing green fluorescent protein (GFP; C57BL/6-Tg(CAG-EGFP)1Osb/J; Jackson Labs, Bar Harbor, ME) or from LYVE-1 knockout mice (B6.129S1-Lyve1tm1Lhua/J; Jackson Labs, Bar Harbor, ME). The skin graft was secured with 9-0 nylon simple interrupted sutures, and the surgical wound was dressed with Tegaderm (3M, St. Paul, MN) adhered proximally and distally to minimize manipulation of the site by the animal post-operatively. Six to eight animals were used per experimental group. All animal procedures were approved by the Resource Animal Research Center Institutional Animal Care and Use Committee (IACUC; protocol #06-08-018) at Memorial Sloan-Kettering Cancer Center, New York, NY.

Specimen preparation and histology

Ten-millimeter tail sections centered on the repair site were harvested 2 or 6 weeks after surgery and fixed in 4% paraformaldehyde overnight at 4°C. Specimens were then decalcified in Immunocal (Decal Chemical Corp., Tallman, NY), embedded in paraffin, and sectioned longitudinally (5 µm). Immunohistochemical and immunofluorescent staining were performed as previously described.[16] Lymphatic vessels were identified using antibodies against podoplanin (Abcam, Cambridge, MA) and LYVE-1 (R&D Systems, Minneapolis, MN). Macrophages were identified using an antibody against F4/80 (Abcam, Cambridge, MA), and VEGF-C expression was identified using an antibody against VEGF-C (Abcam, Cambridge, MA). Immunofluorescent secondary antibodies used were Alexa Fluor 488, 594, and 647 (Invitrogen Molecular Probes, Carlsbad, CA). Three-dimensional reconstructions of 35–45 confocal z-stack tissue sections (40x) were created using Imaris software (Bitplane Corp, Zurich, Switzerland). For immunohistochemical staining, secondary antibody from the Vectastain ABC Kit (Vector Laboratories, Burlingame, CA) was used and developed using diaminobenzidine. Negative control sections were incubated with secondary antibody only. Images were obtained using bright-field microscopy (Leica TCS) for immunohistochemistry and confocal microscopy (Leica) for immunofluorescence. Cell and vessel counts were performed in 3–5 high power fields per section (n = 6–8/time point) by two blinded reviewers.
Tail volume calculation

To evaluate the degree of postoperative acute lymphedema with and without skin grafting, tail volumes were measured 1, 2, 4, and 6 weeks after surgery in both groups (n = 6–8 animals/group/time point) by blinded reviewers. Tail circumference was measured at 10-mm intervals starting at the distal margin of the tail excision using a digital caliper, and tail volumes were then calculated using the truncated cone formula from our previous methods[16]: V = ¼π (C1C2 + C2C3 + … + C7C8).
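A minimal Python sketch of this calculation is shown below, implementing the truncated cone formula exactly as written above together with the percent-change-from-baseline summary used in the Results; the circumference values are hypothetical, units follow whatever unit the circumferences are measured in, and the fixed 10-mm spacing between measurements is implicit in the protocol.

```python
import math

def tail_volume(circumferences):
    """Tail volume from consecutive circumferences (C1..C8) measured at 10-mm intervals,
    using the truncated cone formula as written in the Methods:
    V = 1/4 * pi * (C1*C2 + C2*C3 + ... + C7*C8)."""
    if len(circumferences) < 2:
        raise ValueError("need at least two circumference measurements")
    return 0.25 * math.pi * sum(
        c1 * c2 for c1, c2 in zip(circumferences, circumferences[1:])
    )

def percent_change(volume, baseline_volume):
    """Percent change from the preoperative (baseline) volume, as reported in the Results."""
    return 100.0 * (volume - baseline_volume) / baseline_volume

# Hypothetical circumferences (mm) for one tail at baseline and 2 weeks after surgery.
baseline = tail_volume([11.0, 10.5, 10.0, 9.5, 9.0, 8.5, 8.0, 7.5])
week2 = tail_volume([13.0, 12.5, 11.5, 10.5, 9.5, 9.0, 8.2, 7.6])
print(f"change from baseline: {percent_change(week2, baseline):+.1f}%")
```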
Microlymphangiography

Microlymphangiography was performed prior to sacrifice to evaluate the gross structure of the capillary lymphatics as previously described.[17] Briefly, 15 µL of a 2000-kDa dextran solution (10 mg/mL) conjugated to fluorescein isothiocyanate (FITC) was injected approximately 10 mm proximal to the tip of the mouse tail under constant pressure. This molecule is too large to enter blood vessels but can enter lymphatics, which are designed to transport large molecules in interstitial fluid. Capillary lymphatics were then visualized in the tail using a Leica MZFL3 stereoscope (Wetzlar, Germany). Fluorescent images were obtained at consistent magnification using Volocity software (PerkinElmer, Waltham, MA).

Lymphoscintigraphy

To quantify lymphatic transport after surgery with or without skin grafting, technetium-99m sulfur colloid (100-nm particle size; 400–800 µCi in ∼50 µl) was injected intradermally approximately 20 mm from the tip of the mouse tail as previously described.[16] Dynamic planar gamma camera images were acquired for 2.5 hours after injection using an X-SPECT camera (Gamma Medica, Northridge, CA), and region-of-interest analysis was performed to derive the decay-adjusted activity using ASIPro software (CTI Molecular Imaging, Knoxville, TN).
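The "decay-adjusted activity" above rests on correcting measured counts for the physical decay of technetium-99m (half-life ≈ 6.0 hours). The sketch below shows that arithmetic together with expression of nodal uptake as a percentage of the injected dose; the region-of-interest extraction itself was done in ASIPro, and the count values here are placeholders.

```python
import math

TC99M_HALF_LIFE_H = 6.0  # physical half-life of technetium-99m, in hours

def decay_adjusted(measured_counts, elapsed_hours, half_life_h=TC99M_HALF_LIFE_H):
    """Correct a measured count value back to injection time: A0 = A(t) * 2**(t / T_half)."""
    return measured_counts * 2.0 ** (elapsed_hours / half_life_h)

def percent_injected_dose(node_counts, injected_counts):
    """Nodal uptake expressed as a percentage of the injected activity."""
    return 100.0 * node_counts / injected_counts

# Hypothetical region-of-interest counts from a frame acquired 2 hours after injection.
node_at_2h = decay_adjusted(1.2e4, elapsed_hours=2.0)
injected = 4.0e5  # counts equivalent of the injected dose, measured at t = 0
print(f"decay-adjusted nodal uptake: {percent_injected_dose(node_at_2h, injected):.2f}% ID")
```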
Dermal thickness calculation

Dermal thickness was calculated as previously described.[18] Briefly, longitudinal tail sections were stained with hematoxylin and eosin and visualized using the Mirax Slide Scanner (Carl Zeiss Microimaging, Munich, Germany). A point 4 mm distal to the wound edge was identified in standardized longitudinal histological sections, and the dermal thickness was measured from the dermal/epidermal junction to the deep muscles (2–4 measurements/animal/group; n = 6–8 animals/group).

Scar index calculation

Sirius red (Direct Red 80; Sigma, St. Louis, MO) staining was performed to evaluate the degree of fibrosis in the specimens as previously described.[16], [19] Birefringence pattern and hue were evaluated using polarized light microscopy to determine collagen density, pattern of deposition, and maturity. This is based on the fact that normal tissues display fine collagen bundles with a yellow-green birefringence in a random pattern, while soft tissue fibrosis results in the formation of thicker, parallel collagen bundles with an orange-red birefringence.[20] The scar index is a quantitative measure of fibrosis calculated as the ratio of orange-red to yellow-green staining using Metamorph Offline software (Molecular Devices Corporation, Sunnyvale, CA), with a minimum of 4–6 sections per animal (n = 6–8 animals/group).
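As a rough illustration of the scar index computation, the sketch below estimates the ratio of orange-red to yellow-green pixels in an RGB birefringence image using simple hue thresholds. The study used Metamorph Offline for this measurement; the hue cut-offs and brightness threshold below are assumptions chosen only to make the example self-contained.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def scar_index(rgb_image, value_threshold=0.15):
    """Ratio of orange-red to yellow-green birefringent area in an RGB image
    (float array, values in [0, 1]). Hue cut-offs and the brightness threshold
    are illustrative assumptions, not the Metamorph settings used in the study."""
    hsv = rgb_to_hsv(rgb_image)
    hue, value = hsv[..., 0], hsv[..., 2]
    signal = value > value_threshold                          # keep only bright (birefringent) pixels
    orange_red = signal & ((hue < 0.08) | (hue > 0.92))       # roughly reds/oranges
    yellow_green = signal & (hue >= 0.08) & (hue <= 0.45)     # roughly yellows/greens
    return orange_red.sum() / max(yellow_green.sum(), 1)

# Example: a random image stands in for a scanned Sirius red section.
rng = np.random.default_rng(0)
print(f"scar index (toy image): {scar_index(rng.random((256, 256, 3))):.2f}")
```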
Statistical Analysis

Statistical analysis was performed using GraphPad Prism software for Windows (GraphPad Software, Inc., San Diego, CA). Comparisons between multiple groups were performed using the Kruskal-Wallis test with post hoc tests to compare individual groups. Differences between two groups were evaluated using Student's t-test. Mean values and standard deviations are presented, with p<0.05 considered significant unless otherwise noted.
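For readers reproducing the analysis outside GraphPad Prism, the sketch below shows equivalent tests in Python with SciPy: a Kruskal-Wallis test across multiple groups, illustrative pairwise follow-up comparisons (the specific post hoc procedure is not named above, so Mann-Whitney U tests are only one reasonable choice), and a two-group Student's t-test. All numbers are placeholders.

```python
from scipy import stats

# Placeholder per-group measurements (e.g. VEGF-C+ cells per high-power field by region).
distal, middle, proximal = [105, 112, 108, 115], [40, 52, 44, 50], [28, 33, 30, 35]

# Kruskal-Wallis test across the three regions (the multi-group comparison described above).
h_stat, p_kw = stats.kruskal(distal, middle, proximal)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4f}")

# Illustrative pairwise follow-up; the study's exact post hoc procedure is not specified.
for name, group in [("middle", middle), ("proximal", proximal)]:
    u_stat, p_u = stats.mannwhitneyu(distal, group, alternative="two-sided")
    print(f"distal vs {name}: U = {u_stat:.1f}, p = {p_u:.4f}")

# Two-group comparison with Student's t-test (toy data standing in for two experimental arms).
t_stat, p_t = stats.ttest_ind([19, 21, 17, 20], [74, 70, 78, 75])
print(f"t-test: t = {t_stat:.2f}, p = {p_t:.4f}")
```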
Microlymphangiography was performed prior to sacrifice to evaluate the gross structure of the capillary lymphatics as previously described.[17] Briefly, 15 µL of a 2000-kDa dextran solution (10 mg/mL) conjugated to fluorescein isothiocyanate (FITC) was injected approximately 10 mm proximal to the tip of the mouse tail under constant pressure. This molecule is too large to enter blood vessels but can enter lymphatics, which are designed to transport large molecules in interstitial fluid. Capillary lymphatics were then visualized in the tail using a Leica MZFL3 stereoscope (Wetzlar, Germany). Fluorescent images were obtained at consistent magnification using Volocity software (PerkinElmer, Waltham, MA).

In an effort to quantify lymphatic transport after surgery with or without skin grafting, technetium-99m sulfur colloid (100-nm particle size; 400–800 µCi in ∼50 µL) was injected intradermally approximately 20 mm from the tip of the mouse tail as previously described.[16] Dynamic planar gamma camera images were acquired for 2.5 hours after injection using an X-SPECT camera (Gamma Medica, Northridge, CA), and region-of-interest analysis was performed to derive the decay-adjusted activity using ASIPro software (CTI Molecular Imaging, Knoxville, TN).

Dermal thickness calculation was performed as previously described.[18] Briefly, longitudinal tail sections were stained with hematoxylin and eosin and visualized using the Mirax Slide Scanner (Carl Zeiss Microimaging, Munich, Germany). A distance of 4 mm was measured distal to the wound edge in standardized longitudinal histological sections, and the dermal thickness was measured from the dermal/epidermal junction to the deep muscles (2–4 measurements/animal/group; n = 6–8 animals/group).
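As a hedged illustration of the "decay-adjusted activity" derived in the lymphoscintigraphy analysis above (performed in the study with ASIPro), the following sketch applies the standard physical decay correction for technetium-99m; the region-of-interest counts and time points are hypothetical.

```python
# Illustrative decay correction for Tc-99m lymphoscintigraphy ROI counts.
# The study derived decay-adjusted activity with ASIPro; this sketch only shows
# the underlying correction, A_corrected = A_measured * 2**(t / T_half).
# ROI counts and times below are hypothetical.
TC99M_HALF_LIFE_H = 6.01  # physical half-life of Tc-99m in hours

def decay_corrected(counts, elapsed_hours, half_life=TC99M_HALF_LIFE_H):
    """Correct measured counts back to injection time."""
    return counts * 2.0 ** (elapsed_hours / half_life)

# Hypothetical lymph-node ROI counts sampled over the 2.5-hour acquisition
samples = [(0.5, 1200.0), (1.0, 2500.0), (1.5, 3600.0), (2.0, 4300.0), (2.5, 4700.0)]

for t_h, roi_counts in samples:
    corrected = decay_corrected(roi_counts, t_h)
    print(f"t = {t_h:.1f} h: measured {roi_counts:.0f}, decay-adjusted {corrected:.0f}")
```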
[SUBTITLE] Autogenous tissue lymphangiogenesis is associated with spontaneous reconnection of local lymphatics and infiltration of new lymphatic capillaries [SUBSECTION] In order to investigate the involvement of recipient and donor cells in spontaneous lymphatic regeneration, the tails of athymic nude mice that underwent full-thickness skin excision were repaired with skin grafts harvested from GFP mice. This allowed us to follow the origin of cells by co-localization of GFP and lymphatic markers. Using this technique, the recipient nude mouse lymphatics stained positive for lymphatic markers but negative for GFP. The transplanted tissue lymphatics, in contrast, stained positive for both GFP and lymphatic markers (Figure 1). Using fluorescent imaging, we found that high-level GFP expression was maintained in the transplanted skin even 6 weeks after tissue transfer (Figure 1A). The transplanted skin grafts healed quickly, with regrowth of hair and skin appendages by 6 weeks indicating that the grafts had become fully incorporated (Figure 1B). In addition, return of lymphatic function with transport of fluorescence-labeled colloid could be grossly visualized by microlymphangiography as early as 2 weeks after surgery (Figure 1C). This transport was initially driven by interstitial flow (i.e. no discrete lymphatic vessels could be visualized at 2 weeks); however, by 6 weeks honeycomb-like dermal lymphatics were noted in the transferred grafts (primarily at the distal margin), suggesting that the reformed lymphatic vessels are functional (Figure 1D). The lack of complete restoration of the honeycomb-like dermal lymphatic architecture in the mid portion of the graft at this time may reflect a delay in dermal lymphatic regeneration. This concept is supported by previous studies demonstrating that honeycomb-like lymphatic vessels that regenerate in the mouse tail after coverage of the wound with collagen gel are not ordinarily visible until day 60 after surgery and even at this time are only seen at the distal margin of the wound.[21], [22]

Figure 1. A. Skin grafts harvested from GFP transgenic mice express GFP at high levels and for a long period of time. A representative fluorescent picture of a mouse (top) with higher magnification of the tail (bottom) is shown 6 weeks after surgery. The skin-grafted GFP portion of the tail can easily be seen. B. Gross photographs of mouse tails treated with skin grafts harvested from GFP transgenic mice. Note the rapid incorporation and ingrowth of hair follicles at the 6-week time point. C. Microlymphangiography performed 2 (left) or 6 (right) weeks following skin graft. The distal portion of the tail is shown at the bottom. Note the flow of fluorescent dye by interstitial flow after 2 weeks. In contrast, lymphatic flow can be seen in the skin-grafted area at the 6-week time point. Representative figures of triplicate experiments are shown. D. Higher power photograph of microlymphangiography 2 and 6 weeks after surgery demonstrating a few honeycomb-like dermal lymphatics (white arrows) in the skin graft at the 6-week time point. E. Representative LYVE-1 (pink) and GFP (green) co-localization in skin-grafted mouse tails 6 weeks after surgery. Low power (5x; left) and high power (20x; right) views are shown. Note the connection between GFP- (recipient) and GFP+ (donor) lymphatic vessels. F. Representative photomicrograph (2x) demonstrating GFP (left), LYVE-1 (middle), and co-localization (right) in skin-grafted mouse tails 6 weeks after surgery. Note ingrowth of GFP-/LYVE-1+ vessels (yellow circle) from the distal (yellow dotted line) portion of the wound. G. High power (20x) view of the section shown in F. Note the presence of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatics at the distal margin of the wound.

Confocal co-localization of GFP and LYVE-1 in sections obtained 6 weeks after surgery demonstrated the presence of numerous lymphatic vessels that were comprised in part of both recipient (GFP-/LYVE-1+) and donor (GFP+/LYVE-1+) lymphatic endothelial cells (Figure 1E). We also visualized thin lymphatic bridges connecting donor and recipient lymphatics, suggesting that these vessels were spontaneously re-connecting at the edge of the wound. These connections were not frequently noted, however, because the sections were cut in the longitudinal plane (i.e. necessitating sectioning at an extremely small/precise plane).
In addition to these lymphatics, numerous recipient lymphatic vessels infiltrating the skin graft were noted, primarily at the distal edge of the wound, suggesting that lymphatic regeneration of transplanted skin occurs via both spontaneous reconnection and ingrowth of new capillary lymphatic channels (Figures 1F–G). Interestingly, at the early time point the majority of vessels present were, qualitatively, GFP+/LYVE-1+; however, by 6 weeks these vessels represented a relatively small percentage of the total number of lymphatic vessels (Figures 1F–G). Precise quantification of donor and recipient vessels was difficult to perform with GFP skin grafts, however, because GFP is expressed in all cells and is not limited solely to the lymphatics. Due to this limitation, we performed identical operations using skin grafts obtained from LYVE-1 knockout mice.
[SUBTITLE] Lymphatic regeneration after tissue transfer is associated with infiltration of LYVE-1 positive cells [SUBSECTION] To further investigate the role of donor and recipient cells in lymphatic regeneration, athymic nude mice underwent full-thickness skin grafting with tissues harvested from LYVE-1 knockout mice.
This approach again enabled us to trace the origin of the lymphatic vessels and to differentiate between donor and recipient vessels more easily than with GFP skin grafts, since donor tissues lacked LYVE-1 expression (a molecule that is relatively specific to lymphatic vessels) but expressed podoplanin (podoplanin+/LYVE-1-), whereas recipient vessels expressed both molecules (podoplanin+/LYVE-1+).

To investigate the relative contribution of donor and recipient lymphatic vessels to lymphatic regeneration in the transplanted skin, immunofluorescence staining with the lymphatic endothelial cell markers LYVE-1 and podoplanin was performed, and the numbers of donor- and recipient-derived lymphatic vessels were evaluated in various regions of the skin graft (distal, middle, and proximal portions) 2 or 6 weeks following surgery (Figures 2A–B). This analysis, similar to our GFP/LYVE-1 co-localization, demonstrated connections between LYVE-1+ and LYVE-1- lymphatic vessels, suggesting that lymphatic vessels from the recipient and donor tissues spontaneously reconnect. This phenomenon is seen in 3-D reconstructions of z-stacked confocal images (Figure 2B).

Figure 2. A. Co-localization of LYVE-1 (left panel) and podoplanin (middle panel) in mouse tail sections harvested 6 weeks after surgery (40x). Overlay of the images demonstrates connection of LYVE-1+ (i.e. recipient) and LYVE-1- (i.e. donor) derived podoplanin-stained lymphatics. Yellow line marks the junction of the skin graft and native tail skin. The distal junction of the skin graft and mouse tail is shown to the left. B. Three-dimensional rendering of podoplanin/LYVE-1 co-localization at the junction of the skin graft and native tail skin (yellow line). Note the connection between LYVE-1+ and LYVE-1- lymphatics. Also note the presence of both recipient (podoplanin+/LYVE-1+) and donor (podoplanin+/LYVE-1-) vessels in the section.

Quantification of donor and recipient lymphatic vessels also suggested that both reconnection and infiltration of recipient-derived lymphatics contribute to lymphatic regeneration in the transplanted skin. Two weeks following tissue transfer, 52% of the vessels in the distal margin and 37% of those at the proximal margin were donor-derived (i.e. podoplanin+/LYVE-1-). In contrast, 92% of the lymphatic vessels in the middle portion were donor-derived. By 6 weeks, the vast majority (>90%) of lymphatic vessels present in the distal and proximal margins were of recipient origin. Similarly, the number of recipient vessels in the middle portion also increased, such that the distribution of these vessels relative to donor-derived lymphatics was 1∶1. These differences in the distribution of the lymphatic vessels were primarily due to the ingrowth of new lymphatics, since the number of recipient vessels at the later time point increased substantially while the number of donor vessels (similar to our qualitative observations in the GFP experiment) decreased slightly, though the decrease was not statistically significant (Figures 3D–E). There were significantly more recipient-derived vessels at the distal and proximal margins as compared to the central aspect of the graft, suggesting ingrowth of vessels from the periphery of the transferred tissues (8.2 and 6.0 vs. 3.1; p<0.05). Taken together, these findings suggest that donor-derived lymphatic vessels persist over time and that transferred tissues become further infiltrated by lymphatic vessels originating from the recipient, beginning at the distal and proximal edges and infiltrating towards the center of the graft.

Figure 3. A.
Bar graph depicting the source of lymphatic vessels in various regions of the skin-grafted tissues 2 or 6 weeks after surgery. The origin of lymphatic vessels in various regions of the skin graft 2 or 6 weeks after surgery was determined using podoplanin/LYVE-1 co-localization in skin grafts harvested from LYVE-1 knockout mice and transferred to nude mice. Recipient lymphatic vessels were identified as podoplanin+/LYVE-1+ (white bars) while donor lymphatics were identified as podoplanin+/LYVE-1- (black bars). Recipient-derived lymphatic vessels were noted primarily in the distal and proximal portions of the graft at the 2-week time point and became more uniform in distribution by 6 weeks. The increased number of recipient vessels in the peripheral regions of the graft at the 6-week time point resulted in a relative decrease in the percentage of donor-derived lymphatics at this time. B, C. Representative images (10x) of LYVE-1 (red) and podoplanin (green) staining of the distal portion of the graft 2 (B) and 6 (C) weeks after surgery. Yellow line marks the junction of the skin graft (located to the right) and recipient (located to the left) tissues. D. Recipient lymphatic vessels (podoplanin+/LYVE-1+) in various regions of the skin graft 2 and 6 weeks after skin grafting. Note that the numbers of recipient vessels in the distal (D) and proximal (P) portions of the tail are significantly greater than the number of recipient lymphatics in the middle (M) portion of the graft at both the 2- and 6-week time points, indicating ingrowth of vessels from the periphery (*p<0.05). In addition, the number of recipient lymphatics in the various regions of the skin graft increased significantly between the 2- and 6-week time points, indicating ingrowth of recipient vessels. E. Donor lymphatic vessels (podoplanin+/LYVE-1-) in various regions of the skin graft 2 and 6 weeks after skin grafting (*p<0.05). The number of donor vessels remained essentially unchanged at both time points, indicating that donor lymphatics persist over time and do not proliferate after tissue transfer.
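To illustrate how the donor/recipient distributions summarized above and in Figure 3A can be derived from per-field vessel counts, here is a minimal sketch; the counts are hypothetical placeholders, with podoplanin+/LYVE-1+ vessels scored as recipient-derived and podoplanin+/LYVE-1- vessels as donor-derived.

```python
# Hedged sketch of the vessel-origin quantification: per-field counts of
# recipient (podoplanin+/LYVE-1+) and donor (podoplanin+/LYVE-1-) lymphatics
# are averaged per region and expressed as a percentage of all vessels.
# All counts below are hypothetical placeholders.
from statistics import mean

counts = {  # region -> (recipient counts per field, donor counts per field)
    "distal":   ([8, 9, 7, 8],  [1, 0, 1, 1]),
    "middle":   ([3, 4, 2, 3],  [3, 2, 4, 3]),
    "proximal": ([6, 5, 7, 6],  [1, 1, 0, 1]),
}

for region, (recipient, donor) in counts.items():
    r, d = mean(recipient), mean(donor)
    pct_recipient = 100.0 * r / (r + d)
    print(f"{region:9s}: recipient {r:.1f}/field, donor {d:.1f}/field, "
          f"{pct_recipient:.0f}% recipient-derived")
```

Repeating the same tabulation at the 2- and 6-week time points gives the region-by-region distributions plotted in Figure 3A.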
[SUBTITLE] Lymphatic regeneration after tissue transfer is associated with expression of VEGF-C and infiltration of macrophages [SUBSECTION] VEGF-C is a critical regulator of lymphatic regeneration during adult lymphangiogenesis.[17] Therefore, we investigated the pattern of VEGF-C expression after tissue transfer using immunohistochemical staining. Two weeks after surgery, VEGF-C expression was highest at the periphery of the wounds, with the distal wound margin displaying significantly more VEGF-C+ cells/hpf than either the middle or proximal portion (110±4 in the distal portion vs. 46±8 in the middle portion; p<0.05; Figures 4A, C). VEGF-C staining at the peripheral margins of the grafts was associated with a modest inflammatory infiltrate into the skin graft. By 6 weeks, VEGF-C staining was primarily localized to the central region of the skin graft (73±11 in the central portion vs. 30±8 or 25±10 in the proximal or distal portion; p<0.05; Figures 4B–C). This expression pattern corresponded to the infiltration of lymphatic vessels into the graft beginning at the peripheral margins and extending into the central portion.

Figure 4. A. VEGF-C expression in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and the distal/proximal portions of the recipient mouse tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Black arrow shows the large number of VEGF-C+ cells at the distal junction. Dashed box delineates the skin-grafted area. B. VEGF-C expression in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and the distal/middle/proximal portions of the recipient mouse tails are shown. Dashed box delineates the skin-grafted area. Note the small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of VEGF-C+ cells in the various tail regions (D = distal, M = middle, P = proximal) 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05; #p<0.01).

Macrophages are critical cells that regulate wound repair and produce VEGF-C during lymphatic regeneration in the mouse tail.
In addition, they are thought to contribute directly to lymphangiogenesis by trans-differentiating into lymphatic endothelial cells.[23] Immunohistochemical staining for F4/80 revealed an expression pattern similar to that of VEGF-C, with a large number of F4/80+ cells localized at the periphery of the graft at the 2-week time point (Figures 5A, C). Similar to our findings for VEGF-C expression, the greatest number of macrophages was noted at the distal margin of the wound (44±7 in the distal portion vs. 16±5 or 20±5 in the middle or proximal areas; p<0.05). Co-localization of F4/80 and LYVE-1 suggested that some of the newly formed lymphatic channels in the skin graft may have formed as a result of trans-differentiation of macrophages into lymphatic endothelial cells (Figure 6). In addition, scattered, isolated F4/80+/LYVE-1+ cells that had not formed tubules could be seen within the wound sections. Newly formed lymphatics containing trans-differentiating macrophages were recipient-derived since they also expressed LYVE-1. By 6 weeks, the number of macrophages had decreased in the distal margin of the graft and a more uniform distribution was noted throughout the grafted area (Figures 5B–C). In addition, we could not find any F4/80+/LYVE-1+ vessels at this time point, indicating loss of F4/80 expression with vessel remodeling (not shown).

Figure 5. A. F4/80 localization in skin-grafted tails 2 weeks after surgery. Representative low power (2x; upper panel) photomicrographs encompassing the skin-grafted area and the distal/proximal portions of the recipient mouse tail are shown. High power (20x) views of the distal and proximal junctions between recipient tissues and skin grafts are shown below. Dashed box delineates the skin-grafted area. B. F4/80 localization in skin-grafted tails 6 weeks after surgery. Representative low power (2x; upper panel) and high power (20x) photomicrographs encompassing the skin-grafted area and the distal/proximal portions of the recipient mouse tails are shown. Dashed box delineates the skin-grafted area. Note the small amount of wound/skin graft contracture after repair. C. Cell counts per high power field of F4/80+ cells in various regions of the tail 2 and 6 weeks after surgery. Cell counts are means ± SD of at least 4 high power fields/mouse/time point. At least 6 mice were analyzed in each group (*p<0.05).

Figure 6. A, B, C. Representative photomicrographs (40x) demonstrating co-localization of F4/80 (A; brown immunostaining) and LYVE-1 (B; green fluorescent stain) in skin-grafted mouse tail sections 2 weeks after surgery. Arrows show co-localization of F4/80 and LYVE-1 in lymphatic capillaries. Also note the scattered F4/80+/LYVE-1+ cells in the tail section. F4/80-/LYVE-1+ capillaries can also be appreciated (upper right corner).
[SUBTITLE] Spontaneous regeneration of lymphatics after tissue transfer can be used to bypass damaged lymphatics [SUBSECTION] In order to determine whether spontaneous regeneration of lymphatics can be used to bypass surgically damaged lymphatic vessels, we compared lymphatic function in mice that underwent skin excision alone with animals that had skin excision and repair with full-thickness skin grafting. Gross evaluation of the animals demonstrated marked differences in tail swelling at all time points evaluated (Figure 7A). These findings were corroborated by tail volume calculations demonstrating significantly higher tail volumes in excision-only animals at every time point evaluated (Figure 7B). Skin-grafted animals had a 4-fold decrease in tail swelling at the 2-week time point (19±5.9% vs. 74±7.8%, p<0.05), and their tail volumes returned to baseline 6 weeks after surgery. In contrast, excision-only animals demonstrated a persistent increase even 6 weeks postoperatively (26±8%, p<0.01). The differences in tail volumes observed in our study were associated with significantly improved lymphatic function as assessed by lymphoscintigraphy (Figures 7C–D). At both the 2- and 6-week time points, the skin-grafted animals demonstrated a significantly increased total uptake of Tc-99m in the lymph nodes at the base of the tail (3.74% vs. 0.61% at 2 weeks; 10.70% vs. 4.23% at 6 weeks; p<0.01 for both time points). In addition, lymphatic uptake occurred more rapidly, as reflected by a more rapid increase in the slope of the asymptotic curve.

Figure 7. A. Gross photographs comparing nude mice that had undergone tail excision with (right) and without (left) skin grafting are shown 6 weeks after surgery. Note the obvious difference in tail swelling. B. Tail volume measurements in nude mice that had undergone tail excision with or without skin grafting. Data are presented as percent change from baseline (i.e. preoperatively) with mean ± SD (*p<0.05). C, D. Representative lymphoscintigraphy of nude mice that had undergone tail excision with or without skin grafting. E. Representative photomicrographs (5x) of H&E-stained tail sections from nude mice treated with (left) or without (right) skin grafting, 6 weeks after surgery. Dashed box delineates the area of the skin graft. Note the decreased inflammation (cellularity) and dermal thickness in skin-grafted mice distal (to the left) of the wound. F. High power (40x) photomicrographs of tail skin harvested 5 mm distal to the excision site. Note the decreased cellularity in the skin-grafted section (left) as compared with the excision section (right; arrow). Also note the decreased dermal thickness. G. Dermal thickness measurements and representative figures (40x) in nude mice that had undergone tail excision with or without skin grafting 6 weeks following surgery (*p<0.05). H.
Scar index measurements in tail tissues localized just distal to the site of lymphatic injury 6 weeks after treatment with excision with or without skin grafting. Representative Sirius red birefringence images are shown to the right. Orange-red is indicative of scar; yellow-green is consistent with normal (i.e. non-fibrosed) tissue (*p<0.01).

Histological analysis of tail sections 6 weeks after surgery demonstrated markedly decreased inflammatory cell infiltrate and dermal thickness in skin-grafted animals as compared to excision-only controls (Figures 7E–F). Qualitatively, far fewer inflammatory cells were noted in the skin-grafted animals than in excision-only animals. This was most readily apparent in the portions of the tail distal to the skin excision site (Figure 7F). Skin-grafted animals had a 40% decrease in dermal thickness (284±59 µm vs. 482±62 µm, p<0.05) and more normal dermal architecture in the regions of the tail localized distal to the site of lymphatic injury (Figure 7G). Furthermore, skin-grafted animals demonstrated significantly decreased tissue fibrosis in these regions (Figure 7H), with a 54% decrease in the scar index (1.39±0.29 vs. 3.00±0.52, p<0.01). Together, these findings support the hypothesis that spontaneous regeneration of lymphatic vessels in autogenous tissues can bypass damaged lymphatic channels, resulting in rapid lymphatic repair and markedly improved lymphatic function.
Representative Sirius red birefringence images are shown to the right. Orange-red is indicative of scar; yellow-green is consistent with normal (i.e. non-fibrosed) tissue (*p<0.01).\nHistological analysis of tail sections 6 weeks after surgery demonstrated markedly decreased inflammatory cell infiltrate and dermal thickness in skin-grafted animals as compared to excision-only controls (\nFigures 7E–F\n). Far fewer inflammatory cells were noted qualitatively in the skin-grafted animals as compared with excision only animals. This was most readily apparent in the portions of the tail distal to the skin excision site (\nFigure 7F\n). Skin-grafted animals had a 40% decrease in dermal thickness (284±59 µm vs. 482±62 µm, p<0.05) and more normal dermal architecture in the regions of the tail localized distal to the site of lymphatic injury (\nFigure 7G\n). Furthermore, skin-grafted animals demonstrated significantly decreased tissue fibrosis in these regions (\nFigure 7H\n) with a 54% decrease in the scar index (1.39±0.29 vs. 3.00±0.52, p<0.01). Together, these findings support the hypothesis that spontaneous regeneration of lymphatic vessels in autogenous tissues can bypass damaged lymphatic channels resulting in rapid lymphatic repair and markedly improved lymphatic function.", "Lymphedema is a debilitating disorder that occurs commonly after lymph node dissection for cancer treatment. Despite its common occurrence and profound impact on quality of life, there is no effective surgical cure. Although clinical reports have shown that autogenous tissue transfer (i.e. transfer of healthy tissues) can bypass damaged lymphatics and improve lymphedema, the precise mechanisms governing lymphatic regeneration in these circumstances remain unknown. The findings of the current study suggest that lymphatic regeneration after tissue transfer occurs as a result of spontaneous reconnection of existing lymphatics as well as ingrowth of new lymphatic vessels from the recipient. In addition, our studies support the hypothesis that these processes occur in the setting of VEGF-C expression and recipient-derived macrophage infiltration beginning at the periphery of the graft. Similar findings have been reported in blood vessel regeneration in skin grafts.[24]\n\n\nDe novo lymphangiogenesis has been studied in a variety of human diseases from chronic renal transplant rejection to tumor metastasis. The role of recipient and donor tissues in the regulation of lymphatic regeneration has only been evaluated in a few studies, however. Kerjaschki et al. demonstrated that the majority (>95%) of lymphatics in the transplanted kidney are donor-derived while a relatively small number of lymphatic vessels (4.5%) in these tissues were of recipient origin.[15] These results are in contrast to our finding that by 6 weeks following skin grafting, the majority of lymphatic vessels present in the transplanted tissues were of recipient origin (\nFigure 3A\n), indicating that inosculation with ingrowth of new lymphatic channels is a critical mechanism regulating lymphatic regeneration after skin/subcutaneous tissue transfer but not in solid organ transplantation. Although spontaneous reconnection of existing lymphatics occurs in transplanted skin/subcutaneous tissues, this mechanism is the primary mechanism regulating lymphatic regeneration after solid organ transplantation. It is possible that ingrowth of lymphatic vessels in kidney transplants from surrounding tissues is actively inhibited by the presence of the kidney capsule. 
Indeed, fibrous capsule and fascial tissues have long been considered to be important barriers to lymphatic tumor metastasis and are used routinely as surgical landmarks during oncological procedures. Alternatively, differences in lymphatic regeneration in solid organs as compared to skin and subcutaneous tissues may reflect volumetric differences in tissues transferred since skin grafts are relatively low volume (and therefore easily infiltrated by surrounding tissues) whereas solid organ transplantation involves a much larger volume of tissues that allows ingrowth of lymphatics only at the periphery of the tissues. Finally, in our model the skin graft was transferred as an avascular tissue with infiltration of local blood vessels and eventual incorporation into the recipient tissues. Kidney transplantation is performed by reconnecting the vascular supply of the tissues thereby minimizing the need for vessel ingrowth.\nPrevious studies have demonstrated that lymphangiogenesis in the mouse tail model is dependent on gradients of interstitial flow and expression of VEGF-C.[17], [21] These studies have shown that gradients of VEGF-C expression coordinate and regulate lymphatic endothelial cell migration and differentiation resulting in tubule formation and lymphatic regeneration in a distal to proximal direction. In the current study, we also found that 2 weeks following surgery, VEGF-C expression was localized to the peripheral edges of the skin graft especially in the distal margin. These expression patterns correlated with the ingrowth of new lymphatic vessels from the recipient into the donor tissues. Contrary to previous studies with lymphatic regeneration in collagen gel scaffolds, however, we also found lymphatic ingrowth from the proximal aspect of the wound albeit at a slightly slower rate, indicating that lymphatic regeneration across a tissue graft occurs as a result of coordinated lymphatic ingrowth from both margins of the wound. This finding may be due to a higher potential for lymphatic transport via interstitial fluid flow in skin grafts as compared to acellular scaffolds such as type I collagen. Alternatively, it is likely that the skin graft becomes incorporated into recipient tissues more quickly and efficiently than lymphatic/tissue regeneration that occurs in collagen gel scaffolds resulting in improved interstitial flow and expression of VEGF-C even in the proximal portion of the graft.\nIn the current study we found that by 6 weeks after surgery, VEGF-C expression was highest within the center of the transplanted tissues. This response correlated with the patterns of lymphatic vessel regeneration after tissue transfer, beginning from distal and proximal margins and invading into the center of the donor tissues, implying that lymphatic regeneration occurs as a consequence of VEGF-C expression within the graft. 
This finding is intriguing and is supported by previous studies demonstrating improved lymphangiogenesis in surgical skin flaps after adenoviral VEGF-C transfer.[25] However, in contrast to these previous studies, we found that endogenous expression of VEGF-C is adequate for promoting rapid and efficient lymphatic regeneration after tissue transfer suggesting that gene therapeutic interventions to augment lymphangiogenic cytokine expression are not necessary for lymphatic regeneration in uncomplicated tissue transfer or surgical interventions.\nMacrophages are critical regulators of lymphangiogenesis.[26], [27] Macrophage expression of VEGF-C is a major regulator of lymphangiogenesis in tumors and during inflammation after kidney transplantation or corneal injury.[27], [28] In addition, macrophages are thought to directly contribute to lymphatic vessel formation by trans-differentiating into lymphatic endothelial cells.[29] Our study showed that macrophages served a dual role in lymphatic regeneration after tissue transfer. Immunohistochemical localization of VEGF-C and F4/80 demonstrated that the patterns of VEGF-C expression correlated with accumulation of F4/80+ cells (\nFigures 3\n\n, \n\n4\n), supporting the hypothesis that macrophages play a critical role in VEGF-C expression during incorporation and lymphatic regeneration of autogenous tissues. In addition, we found that recipient-derived macrophages may directly contribute to lymphatic formation at the peripheral edges of the graft in the early time period by demonstrating F4/80+/LYVE-1+ tubular structures at the 2 week time point. Similar to previous studies on inflammatory lymphangiogenesis, these newly formed vessels likely underwent remodeling with loss of F4/80 expression at the later time point since no double positive lymphatic vessels could be identified in sections obtained from animals sacrificed 6 weeks after surgery.[29]\n\nPerhaps the most significant finding of our study was the fact that autogenous tissue transplantation could be used to rapidly bypass damaged lymphatic vessels with resultant restoration of lymphatic flow. Skin-grafted animals had marked reductions in tail swelling and significantly improved lymphatic function, with significant decreases in dermal thickness and tissue fibrosis. The finding that tissue transfer can bypass damaged lymphatic vessels is supported by anecdotal surgical reports demonstrating that tissue transfer can aid in treatment of some patients with lymphedema and provides a mechanism for this observation. Our findings suggest that improvements in extremity swelling occur as a result of both spontaneous lymphatic regeneration and reconnection of local and transferred lymphatic vessels thereby effectively bypassing lymphatic channels damaged during lymphadenectomy. These findings provide a rationale for the development of tissue engineered lymphatic constructs designed specifically for this purpose and may represent a novel means of treating patients with lymphedema." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Socio-economic disparities in the burden of seasonal influenza: the effect of social and material deprivation on rates of influenza infection.
21359150
There is little empirical evidence in support of a relationship between rates of influenza infection and level of material deprivation (i.e., lack of access to goods and services) and social deprivation (i.e. lack of social cohesion and support).
BACKGROUND
Using validated population-level indices of material and social deprivation and medical billing claims for outpatient clinic and emergency department visits for influenza from 1996 to 2006, we assessed the relationship between neighbourhood rates of influenza and neighbourhood levels of deprivation using Bayesian ecological regression models. Then, by pooling data from neighbourhoods in the top decile (i.e., most deprived) and the bottom decile, we compared rates in the most deprived populations to the least deprived populations using age- and sex-standardized rate ratios.
METHOD
Deprivation scores ranged from one to five with five representing the highest level of deprivation. We found a 21% reduction in rates for every 1 unit increase in social deprivation score (rate ratio [RR] 0.79, 95% Credible Interval [CrI] 0.66, 0.97). There was little evidence of a meaningful linear relationship with material deprivation (RR 1.06, 95% CrI 0.93, 1.24). However, relative to neighbourhoods with deprivation scores in the bottom decile, those in the top decile (i.e., most materially deprived) had substantially higher rates (RR 2.02, 95% Confidence Interval 1.99, 2.05).
RESULTS
Though it is hypothesized that social and material deprivation increase risk of acute respiratory infection, we found decreasing healthcare utilization rates for influenza with increasing social deprivation. This finding may be explained by the fewer social contacts and, thus, fewer influenza exposure opportunities of the socially deprived. Though there was no evidence of a linear relationship with material deprivation, when comparing the least to the most materially deprived populations, we observed higher rates in the most materially deprived populations.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Child", "Child, Preschool", "Cost of Illness", "Female", "Health Status Disparities", "Healthcare Disparities", "Humans", "Incidence", "Infant", "Infant, Newborn", "Influenza, Human", "Male", "Middle Aged", "Psychosocial Deprivation", "Quebec", "Risk Factors", "Seasons", "Social Class", "Socioeconomic Factors", "Young Adult" ]
3040776
null
null
Methods
[SUBTITLE] Geographic Partition of the Island of Montreal [SUBSECTION] The department of public health (Direction de la santé publique) and the health and social services agency of Montreal (Agence de la santé et des services sociaux de Montréal), in collaboration with local communities, have created a neighbourhood partition of Montreal that preserves within-neighbourhood homogeneity with respect to socio-demographic factors (http://www.cmis.mtl.rtss.qc.ca/fr/atlas/creer_carte/details_creer_carte.html). These neighbourhoods were formed by aggregating census tracts, the smallest administrative region for which we had ED/OC utilization data. Although the finest partition is generally preferred to minimize ecological bias, there were several advantages to using a neighbourhood partition. Census tracts contain only 4000 individuals on average, thus census tract-level risk estimates lacked precision. Basing the analysis on the neighbourhood partition provided a balance between precision of risk estimates and reduction of ecological bias. In addition, the analyses were facilitated by the consistency of the neighbourhood boundaries throughout the study period. 
[SUBTITLE] Census data and billing claims for visits to outpatient clinics and emergency departments [SUBSECTION] The Régie de l'assurance maladie du Québec (RAMQ) is the government body that provides health insurance to 99% of the residents of the province of Quebec, Canada. For our study, we obtained billing claim records for visits to outpatient clinics and emergency departments, by residents of Montreal, from 1996 to 2006 for influenza-like illness. Each record provided data on patient age, sex, census tract of residence, date of visit, and an International Classification of Diseases, 9th Revision (ICD-9) diagnostic code. Sets of ICD-9 codes were used to define a visit for influenza. It has been shown that influenza (487) codes tend to have high specificity but low sensitivity for identifying influenza cases [16], [17], so we analysed the data using two definitions of influenza, one with only influenza-specific ICD-9 codes (487) and another with influenza and pneumonia ICD-9 codes (486 and 487). An influenza season was defined as the 40th CDC week of one year to the 39th CDC week of the following year. To more closely measure rates of influenza rather than rates of utilization for influenza, we counted only one visit per person per influenza season. We estimated at-risk person years for each age-sex stratum using data from the 1996, 2001 and 2006 censuses, approximating population sizes in inter-censal years by linear interpolation. Age groups were 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 39, 40 to 64 and 65+ years. Ethics approval was granted by the Institutional Review Board of McGill University's Faculty of Medicine. 
[SUBTITLE] Material and Social Deprivation Indices [SUBSECTION] Pampalon and Raymond (2000) constructed indices of material and social deprivation for the province of Quebec [18]. Each index is composed of three census variables. For material deprivation, the variables are proportion of persons lacking a high school diploma, employment to population ratio, and average income. The census variables for social deprivation are proportion of persons living alone, the proportion of persons separated, divorced or widowed, and the proportion of single parent families. Deprivation was measured for all dissemination areas (DA) in the region and quintiles of deprivation were formed with the value of 1 representing the lowest levels of deprivation. As each neighbourhood geographical unit is comprised of several DA, we calculated a neighbourhood deprivation score by averaging the deprivation quintile values of the DA contained within the boundaries of the neighbourhood, using DA population sizes as weights. 
[SUBTITLE] Statistical Analyses [SUBSECTION] [SUBTITLE] Ecological Regressions [SUBSECTION] Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains. 
[SUBTITLE] Contrasting rates in the most and least deprived neighbourhoods [SUBSECTION] We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.
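The population-weighted aggregation of dissemination-area (DA) deprivation quintiles into a single neighbourhood score, described in the Methods text above, can be illustrated with a short sketch. The original analysis was carried out in R and WinBUGS; the Python/pandas snippet below is only a minimal illustration of the weighting step, and the table, column names and values are hypothetical.

```python
import pandas as pd

# Hypothetical dissemination-area (DA) table: parent neighbourhood, DA population,
# and material/social deprivation quintiles (1 = least deprived, 5 = most deprived).
da = pd.DataFrame({
    "neighbourhood":     ["A", "A", "A", "B", "B"],
    "da_population":     [450, 620, 380, 900, 150],
    "material_quintile": [2, 3, 1, 5, 4],
    "social_quintile":   [4, 5, 3, 2, 1],
})

def weighted_scores(g: pd.DataFrame) -> pd.Series:
    """Population-weighted mean of DA quintile values within one neighbourhood."""
    w = g["da_population"]
    return pd.Series({
        "material_score": (g["material_quintile"] * w).sum() / w.sum(),
        "social_score":   (g["social_quintile"] * w).sum() / w.sum(),
    })

# One row per neighbourhood, analogous to the scores used in the ecological models.
scores = da.groupby("neighbourhood").apply(weighted_scores)
print(scores)
```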
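The decile contrast described above, pooling visits and person-time from the most and least deprived neighbourhoods and estimating an age- and sex-standardized rate ratio with the least deprived group as reference, can also be sketched. The exact standardization procedure the authors used is not detailed in the text, so this sketch assumes indirect standardization (observed over expected counts) with invented stratum data; the Poisson-based confidence interval is an approximation that treats the reference rates as fixed.

```python
import numpy as np
import pandas as pd

# Hypothetical age-sex strata pooled over the most deprived ("top" decile) and
# least deprived ("bottom" decile) neighbourhoods: case counts and person-years.
strata = pd.DataFrame({
    "stratum":      ["0-4 F", "0-4 M", "20-39 F", "20-39 M", "65+ F", "65+ M"],
    "cases_top":    [120, 135, 310, 280, 95, 60],
    "py_top":       [8000, 8300, 52000, 50000, 15000, 10000],
    "cases_bottom": [60, 70, 240, 200, 110, 75],
    "py_bottom":    [9000, 9200, 60000, 58000, 22000, 16000],
})

# Indirect standardization: apply the reference (least deprived) stratum-specific
# rates to the person-time of the most deprived group to obtain expected counts.
ref_rates = strata["cases_bottom"] / strata["py_bottom"]
expected_top = (ref_rates * strata["py_top"]).sum()
observed_top = strata["cases_top"].sum()

rr = observed_top / expected_top
# Approximate 95% CI treating the observed count as Poisson and the reference
# rates as fixed (ignores sampling variability in the reference group).
se_log_rr = 1.0 / np.sqrt(observed_top)
lo, hi = rr * np.exp(-1.96 * se_log_rr), rr * np.exp(1.96 * se_log_rr)
print(f"standardized rate ratio = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```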
null
null
null
null
[ "Introduction", "Geographic Partition of the Island of Montreal", "Census data and billing claims for visits to outpatient clinics and emergency departments", "Material and Social Deprivation Indices", "Statistical Analyses", "Ecological Regressions", "Contrasting rates in the most and least deprived neighbourhoods", "Results", "Discussion", "Conclusion" ]
[ "Defining subpopulations that initiate and promote influenza epidemics can help to guide the strategic distribution of prevention and control efforts. To date, researchers have focused primarily on the effect of age and have found that the paediatric population plays an important role in transmission [1], [2], [3]. Children tend to be infected earlier in the season and due to their more immature immune systems and more extensive contact networks, may spread the virus more readily than older age groups [3]. Socio-economically deprived populations may also experience higher rates of acute respiratory infection. However, research in this area is largely dominated by studies of hospitalizations and mortality [4], [5], [6] providing evidence that rates of severe illness and not necessarily rates of infection are elevated in this population [4], [5], [6], [7], [8], [9], [10].\nSocio-economic deprivation is a broad term that encompasses various aspects of socio-economic vulnerability. A number of composite measures have been developed as markers of deprivation. Townsend (1987) described two distinct forms of deprivation; material deprivation, a measure of access to ‘goods and conveniences’ and social deprivation, representing social cohesion, cooperation and support [11]. A few studies have linked material deprivation to rates of hospitalization and mortality due to respiratory infection [4], [5], [6], [12]. Changes in rates of respiratory illness in relation to social deprivation have not been studied as extensively, but it has been suggested that the stress resulting from social deprivation as well as its effects on personal habits and self-esteem may predispose individuals to infection [13], [14], [15]. Results from studies examining the impact of certain aspects of social deprivation (e.g. social support) on hospitalizations for acute respiratory illness are mixed, finding either a positive relationship or no relationship with admission rates [6]\n[9].\nIn this study, we assessed the relationship between material and social deprivation and rates of emergency department and outpatient clinic (ED/OC) utilization for influenza in all 111 neighbourhoods of Montreal, Quebec over the period 1996 to 2006. In doing so, we explore whether directing public health interventions towards socio-economically deprived neighbourhoods could mitigate population-wide morbidity and mortality rates.", "The department of public health (Direction de la santé publique) and the health and social services agency of Montreal (Agence de la santé et des services sociaux de Montréal), in collaboration with local communities, have created a neighbourhood partition of Montreal that preserves within-neighbourhood homogeneity with respect to socio-demographic factors (http://www.cmis.mtl.rtss.qc.ca/fr/atlas/creer_carte/details_creer_carte.html). These neighbourhoods were formed by aggregating census tracts, the smallest administrative region for which we had ED/OC utilization data. Although the finest partition is generally preferred to minimize ecological bias, there were several advantages to using a neighbourhood partition. Census tracts contain only 4000 individuals on average, thus census tract-level risk estimates lacked precision. Basing the analysis on the neighbourhood partition provided a balance between precision of risk estimates and reduction of ecological bias. 
In addition, the analyses were facilitated by the consistency of the neighbourhood boundaries throughout the study period.", "The Régie de l'assurance maladie du Québec (RAMQ) is the government body that provides health insurance to 99% of the residents of the province of Quebec, Canada. For our study, we obtained billing claim records for visits to outpatient clinics and emergency departments, by residents of Montreal, from 1996 to 2006 for influenza-like illness. Each record provided data on patient age, sex, census tract of residence, date of visit, and an International Classification of Diseases, 9th Revision (ICD-9) diagnostic code. Sets of ICD-9 codes were used to define a visit for influenza. It has been shown that influenza (487) codes tend to have high specificity but low sensitivity for identifying influenza cases [16], [17], so we analysed the data using two definitions of influenza, one with only influenza-specific ICD-9 codes (487) and another with influenza and pneumonia ICD-9 codes (486 and 487). An influenza season was defined as the 40th CDC week of one year to the 39th CDC week of the following year. To more closely measure rates of influenza rather than rates of utilization for influenza, we counted only one visit per person per influenza season. We estimated at-risk person years for each age-sex stratum using data from the 1996, 2001 and 2006 censuses, approximating population sizes in inter-censal years by linear interpolation. Age groups were 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 39, 40 to 64 and 65+ years. Ethics approval was granted by the Institutional Review Board of McGill University's Faculty of Medicine.", "Pampalon and Raymond (2000) constructed indices of material and social deprivation for the province of Quebec [18]. Each index is composed of three census variables. For material deprivation, the variables are proportion of persons lacking a high school diploma, employment to population ratio, and average income. The census variables for social deprivation are proportion of persons living alone, the proportion of persons separated, divorced or widowed, and the proportion of single parent families. Deprivation was measured for all dissemination areas (DA) in the region and quintiles of deprivation were formed with the value of 1 representing the lowest levels of deprivation. As each neighbourhood geographical unit is comprised of several DA, we calculated a neighbourhood deprivation score by averaging the deprivation quintile values of the DA contained within the boundaries of the neighbourhood, using DA population sizes as weights.", "[SUBTITLE] Ecological Regressions [SUBSECTION] Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. 
Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\n[SUBTITLE] Contrasting rates in the most and least deprived neighbourhoods [SUBSECTION] We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.", "Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. 
Covariates were centred to improve convergence of the Monte Carlo Markov Chains.", "We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.", "The correlation between the neighbourhood-level log standardized morbidity ratio (SMR) and the material and social deprivation score was 0.11 and −0.48, respectively. This finding is reflected in the choropleth maps (Figure 1), where areas with greater levels of social deprivation tended to have smaller SMRs. The results of the ecological regression indicated an average decrease in utilization rates by approximately 21% for every 1 unit increase in social deprivation score (Table 1). There did not appear to be a meaningful linear relationship with material deprivation (Table 1), nor was there evidence of an important interaction between social and material deprivation (regression coefficient −0.066, 95% CI −0.17 to 0.14). Results using the Pneumonia and Influenza definition were consistent with that of the Influenza definition (Table 1).\nStandardized morbidity ratio for influenza (a), standardized morbidity ratio for pneumonia and influenza (b), Material deprivation (c), Social Deprivation (d).\nPlots of the neighbourhood log SMR versus deprivation score are shown in Figure 2. The rate of utilization for influenza among populations living in the most materially deprived neighbourhoods (top decile) were 102% higher than those living in the least materially deprived neighbourhoods (Table 2). When we excluded neighbourhoods that had both high or both low material and social deprivation scores, we found an even greater disparity in rates (rate ratio [RR] 4.65, 95% Confidence Interval [CI] 4.55 to 4.76). In comparing the most to the least socially deprived neighbourhoods we found that the most socially deprived populations had approximately 79% lower utilization rates for influenza and 81% lower rates for pneumonia and influenza (Table 2). When neighbourhoods with deprivation scores in the top and bottom 5% and 15% were analysed, we found similar patterns of elevated risk in the materially deprived populations and decreased risk in the socially deprived populations were observed (Table 2).\ncomparison using pooled data from 11 most and 11 least deprived neighbourhoods.", "Contrary to the hypothesized effect of risk elevation in the socially deprived, we observed decreasing rates of ED/OC utilization for influenza with increasing social deprivation. Comparing the risk in the 10% most and least socially deprived neighbourhoods, we found a pronounced ‘protective’ effect of social deprivation. Unlike studies of hospitalization/mortality rates and material deprivation, we did not find an increasing gradient in utilization rates with greater levels of material deprivation. 
However, as compared to the least materially deprived neighbourhoods, rates were considerably elevated in the most materially deprived neighbourhoods.\nThough the negative association with social deprivation may be interpreted as the result of an overall underutilization of health services by the more socially deprived, Philibert et al (2007) found that this was not the case for a large downtown Montreal clinic in 2000–2002, a time period centred at the mid-point of our study period [22]. Thus an alternative explanation of our finding may lie in the fact that socially deprived groups have fewer social contacts and thus have fewer opportunities to become exposed to an infected individual [23]. One of the variables contributing to the social deprivation index is the proportion of the population living alone. There is evidence that indoor crowding facilitates influenza transmission [7], [24].\nFindings from previous studies assessing the relationship between social deprivation and hospitalizations/mortality due to respiratory illness are mixed [6],[9]. Crighton et al (2007) conducted an ecological study in the province of Ontario, Canada, and did not find a positive association between county hospitalization rates for influenza and pneumonia and percentage of surveyed individuals reporting low levels of social support and social involvement\n[6]. In contrast, Jordan et al (2008) found higher rates of hospital admissions for respiratory disease (either acute illness or worsening of chronic disease) among more socially isolated elderly [9]. However, the interpretation of this finding is not straightforward. Though the socially isolated elderly may experience more severe outcomes and even a greater incidence of infection, physicians may also be more inclined to admit patients that, due to a lack of social support, will not receive care if sent home.\nEcological studies in the United Kingdom have found a relationship between an index of material deprivation and rates of hospitalizations and mortality from respiratory illnesses [4], [5]. A similar finding is reported by Crighton et al (2007) in their ecological study using data from Ontario, Canada, in which they discovered high rates of hospitalizations for pneumonia and influenza in counties where a high proportion of the population did not possess a high school diploma [6]. The frequency of occurrence of influenza/cold symptoms may also be related to material deprivation. Stone et al (2010) found through a telephone survey that the incidence of individuals reporting headache, pain or influenza/cold symptoms increased as level of educational attainment and income decreased [25].\nA limitation of our study is our failure to account for ‘silent cases’, i.e. those that were sick but did not seek care. However, there is no evidence suggesting that rates of silent cases should vary by neighbourhood in an urban setting such as Montreal where residents have universal access to essential medical services [22]. Thus it is likely that the spatial variation of utilization rates for influenza approximates the spatial variation in rates of influenza infection. We were also limited by a case definition based on ICD-9 codes which may not identify all visits to ED/OCs for influenza infection. Nevertheless, we considered two influenza case definitions and results were consistent.\nEcological bias may prevent an extrapolation of our findings to the individual. 
However, for the purpose of designing effective public health strategies, it may be more feasible to direct interventions to neighbourhoods. In this case, inference at the level of the neighbourhood, and not the individual, is most pertinent. Even so, both the ecological and individualistic fallacies can be dealt with to some extent through an analysis of individual level outcomes that incorporates both individual and area-level deprivation data [26], [27], [28]. In our future work we will assess the relationship between ED/OC utilization rates for influenza and both individual-level and neighbourhood-level factors.\n[SUBTITLE] Conclusion [SUBSECTION] Though some studies have found an association between socio-economic deprivation and acute respiratory illness, we did not find evidence of increasing risk of influenza with increasing material deprivation. We did note, however, a disparity in the burden of influenza when comparing the extremes of material deprivation. Though social deprivation is a hypothesized risk factor for disease, we observed lower rates of visits to the emergency department and outpatient clinics for influenza from more socially deprived populations, a finding which may be explained by reduced infection exposure opportunities.", "Though some studies have found an association between socio-economic deprivation and acute respiratory illness, we did not find evidence of increasing risk of influenza with increasing material deprivation. We did note, however, a disparity in the burden of influenza when comparing the extremes of material deprivation. Though social deprivation is a hypothesized risk factor for disease, we observed lower rates of visits to the emergency department and outpatient clinics for influenza from more socially deprived populations, a finding which may be explained by reduced infection exposure opportunities." ]
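The ecological regression described in the Methods and section texts above, a log-linear Poisson model for neighbourhood counts with centred deprivation covariates, was fitted by the authors as a Bayesian hierarchical model with spatial random effects in WinBUGS. As a rough, non-spatial frequentist analogue, the sketch below fits a Poisson GLM with a log expected-count offset using statsmodels; the data frame and its values are invented, and because the spatially structured term is omitted it is only a structural illustration, not a reproduction of the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Invented neighbourhood-level data: observed influenza visits, expected counts
# from age/sex standardization, and deprivation scores on the 1-5 quintile scale.
df = pd.DataFrame({
    "observed": [52, 31, 88, 40, 19, 66, 73, 25],
    "expected": [48.0, 35.2, 70.1, 45.5, 22.3, 60.7, 64.9, 30.2],
    "material": [2.1, 3.4, 4.8, 1.6, 2.9, 4.1, 3.7, 1.9],
    "social":   [3.0, 2.2, 1.8, 4.5, 3.9, 2.6, 1.5, 4.2],
})

# Centre the covariates (the authors centred them to help MCMC convergence) and
# fit a log-linear Poisson model with log(expected) as the offset. This omits the
# spatially structured random effect used in the original Bayesian analysis.
covs = df[["material", "social"]] - df[["material", "social"]].mean()
X = sm.add_constant(covs)
fit = sm.GLM(df["observed"], X,
             family=sm.families.Poisson(),
             offset=np.log(df["expected"])).fit()

# exp(coefficient) is the rate ratio per one-unit increase in a deprivation score;
# a value of 0.79 for social deprivation corresponds to the reported 21% lower rate.
print(np.exp(fit.params))
```

A closer analogue of the published analysis would add a conditional autoregressive (CAR) spatial random effect of the kind commonly fitted in WinBUGS, which is what "accounted for spatial autocorrelation" refers to in the section text.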
[ null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Geographic Partition of the Island of Montreal", "Census data and billing claims for visits to outpatient clinics and emergency departments", "Material and Social Deprivation Indices", "Statistical Analyses", "Ecological Regressions", "Contrasting rates in the most and least deprived neighbourhoods", "Results", "Discussion", "Conclusion" ]
[ "Defining subpopulations that initiate and promote influenza epidemics can help to guide the strategic distribution of prevention and control efforts. To date, researchers have focused primarily on the effect of age and have found that the paediatric population plays an important role in transmission [1], [2], [3]. Children tend to be infected earlier in the season and due to their more immature immune systems and more extensive contact networks, may spread the virus more readily than older age groups [3]. Socio-economically deprived populations may also experience higher rates of acute respiratory infection. However, research in this area is largely dominated by studies of hospitalizations and mortality [4], [5], [6] providing evidence that rates of severe illness and not necessarily rates of infection are elevated in this population [4], [5], [6], [7], [8], [9], [10].\nSocio-economic deprivation is a broad term that encompasses various aspects of socio-economic vulnerability. A number of composite measures have been developed as markers of deprivation. Townsend (1987) described two distinct forms of deprivation; material deprivation, a measure of access to ‘goods and conveniences’ and social deprivation, representing social cohesion, cooperation and support [11]. A few studies have linked material deprivation to rates of hospitalization and mortality due to respiratory infection [4], [5], [6], [12]. Changes in rates of respiratory illness in relation to social deprivation have not been studied as extensively, but it has been suggested that the stress resulting from social deprivation as well as its effects on personal habits and self-esteem may predispose individuals to infection [13], [14], [15]. Results from studies examining the impact of certain aspects of social deprivation (e.g. social support) on hospitalizations for acute respiratory illness are mixed, finding either a positive relationship or no relationship with admission rates [6]\n[9].\nIn this study, we assessed the relationship between material and social deprivation and rates of emergency department and outpatient clinic (ED/OC) utilization for influenza in all 111 neighbourhoods of Montreal, Quebec over the period 1996 to 2006. In doing so, we explore whether directing public health interventions towards socio-economically deprived neighbourhoods could mitigate population-wide morbidity and mortality rates.", "[SUBTITLE] Geographic Partition of the Island of Montreal [SUBSECTION] The department of public health (Direction de la santé publique) and the health and social services agency of Montreal (Agence de la santé et des services sociaux de Montréal), in collaboration with local communities, have created a neighbourhood partition of Montreal that preserves within-neighbourhood homogeneity with respect to socio-demographic factors (http://www.cmis.mtl.rtss.qc.ca/fr/atlas/creer_carte/details_creer_carte.html). These neighbourhoods were formed by aggregating census tracts, the smallest administrative region for which we had ED/OC utilization data. Although the finest partition is generally preferred to minimize ecological bias, there were several advantages to using a neighbourhood partition. Census tracts contain only 4000 individuals on average, thus census tract-level risk estimates lacked precision. Basing the analysis on the neighbourhood partition provided a balance between precision of risk estimates and reduction of ecological bias. 
In addition, the analyses were facilitated by the consistency of the neighbourhood boundaries throughout the study period.\n[SUBTITLE] Census data and billing claims for visits to outpatient clinics and emergency departments [SUBSECTION] The Régie de l'assurance maladie du Québec (RAMQ) is the government body that provides health insurance to 99% of the residents of the province of Quebec, Canada. For our study, we obtained billing claim records for visits to outpatient clinics and emergency departments, by residents of Montreal, from 1996 to 2006 for influenza-like illness. Each record provided data on patient age, sex, census tract of residence, date of visit, and an International Classification of Diseases, 9th Revision (ICD-9) diagnostic code. Sets of ICD-9 codes were used to define a visit for influenza. It has been shown that influenza (487) codes tend to have high specificity but low sensitivity for identifying influenza cases [16], [17], so we analysed the data using two definitions of influenza, one with only influenza-specific ICD-9 codes (487) and another with influenza and pneumonia ICD-9 codes (486 and 487). An influenza season was defined as the 40th CDC week of one year to the 39th CDC week of the following year. To more closely measure rates of influenza rather than rates of utilization for influenza, we counted only one visit per person per influenza season. We estimated at-risk person years for each age-sex stratum using data from the 1996, 2001 and 2006 censuses, approximating population sizes in inter-censal years by linear interpolation. Age groups were 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 39, 40 to 64 and 65+ years. Ethics approval was granted by the Institutional Review Board of McGill University's Faculty of Medicine.
\n[SUBTITLE] Material and Social Deprivation Indices [SUBSECTION] Pampalon and Raymond (2000) constructed indices of material and social deprivation for the province of Quebec [18]. Each index is composed of three census variables. For material deprivation, the variables are proportion of persons lacking a high school diploma, employment to population ratio, and average income. The census variables for social deprivation are proportion of persons living alone, the proportion of persons separated, divorced or widowed, and the proportion of single parent families. Deprivation was measured for all dissemination areas (DA) in the region and quintiles of deprivation were formed with the value of 1 representing the lowest levels of deprivation. As each neighbourhood geographical unit is comprised of several DA, we calculated a neighbourhood deprivation score by averaging the deprivation quintile values of the DA contained within the boundaries of the neighbourhood, using DA population sizes as weights.\n[SUBTITLE] Statistical Analyses [SUBSECTION] [SUBTITLE] Ecological Regressions [SUBSECTION] Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. 
In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\nPreliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\n[SUBTITLE] Contrasting rates in the most and least deprived neighbourhoods [SUBSECTION] We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.\nWe examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.\n[SUBTITLE] Ecological Regressions [SUBSECTION] Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. 
In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\nPreliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\n[SUBTITLE] Contrasting rates in the most and least deprived neighbourhoods [SUBSECTION] We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.\nWe examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.", "The department of public health (Direction de la santé publique) and the health and social services agency of Montreal (Agence de la santé et des services sociaux de Montréal), in collaboration with local communities, have created a neighbourhood partition of Montreal that preserves within-neighbourhood homogeneity with respect to socio-demographic factors (http://www.cmis.mtl.rtss.qc.ca/fr/atlas/creer_carte/details_creer_carte.html). These neighbourhoods were formed by aggregating census tracts, the smallest administrative region for which we had ED/OC utilization data. 
Although the finest partition is generally preferred to minimize ecological bias, there were several advantages to using a neighbourhood partition. Census tracts contain only 4000 individuals on average, thus census tract-level risk estimates lacked precision. Basing the analysis on the neighbourhood partition provided a balance between precision of risk estimates and reduction of ecological bias. In addition, the analyses were facilitated by the consistency of the neighbourhood boundaries throughout the study period.", "The Régie de l'assurance maladie du Québec (RAMQ) is the government body that provides health insurance to 99% of the residents of the province of Quebec, Canada. For our study, we obtained billing claim records for visits to outpatient clinics and emergency departments, by residents of Montreal, from 1996 to 2006 for influenza-like illness. Each record provided data on patient age, sex, census tract of residence, date of visit, and an International Classification of Diseases, 9th Revision (ICD-9) diagnostic code. Sets of ICD-9 codes were used to define a visit for influenza. It has been shown that influenza (487) codes tend to have high specificity but low sensitivity for identifying influenza cases [16], [17], so we analysed the data using two definitions of influenza, one with only influenza-specific ICD-9 codes (487) and another with influenza and pneumonia ICD-9 codes (486 and 487). An influenza season was defined as the 40th CDC week of one year to the 39th CDC week of the following year. To more closely measure rates of influenza rather than rates of utilization for influenza, we counted only one visit per person per influenza season. We estimated at-risk person years for each age-sex stratum using data from the 1996, 2001 and 2006 censuses, approximating population sizes in inter-censal years by linear interpolation. Age groups were 0 to 4, 5 to 9, 10 to 14, 15 to 19, 20 to 39, 40 to 64 and 65+ years. Ethics approval was granted by the Institutional Review Board of McGill University's Faculty of Medicine.", "Pampalon and Raymond (2000) constructed indices of material and social deprivation for the province of Quebec [18]. Each index is composed of three census variables. For material deprivation, the variables are proportion of persons lacking a high school diploma, employment to population ratio, and average income. The census variables for social deprivation are proportion of persons living alone, the proportion of persons separated, divorced or widowed, and the proportion of single parent families. Deprivation was measured for all dissemination areas (DA) in the region and quintiles of deprivation were formed with the value of 1 representing the lowest levels of deprivation. As each neighbourhood geographical unit is comprised of several DA, we calculated a neighbourhood deprivation score by averaging the deprivation quintile values of the DA contained within the boundaries of the neighbourhood, using DA population sizes as weights.", "[SUBTITLE] Ecological Regressions [SUBSECTION] Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. 
In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19]. We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.\n[SUBTITLE] Contrasting rates in the most and least deprived neighbourhoods [SUBSECTION] We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.", "Preliminary analyses consisted of estimating correlations between deprivation scores and the neighbourhood standardized morbidity ratios (SMR). We also constructed choropleth maps of the SMR and deprivation scores. Assuming that the number of healthcare visits for influenza for each neighbourhood was Poisson distributed, we used Bayesian hierarchical models to examine the strength of the linear association between the log of the relative risk and each measure of deprivation. In all models, we accounted for spatial autocorrelation as failing to do so could invalidate inferences [19].
We considered four models, one with social deprivation as the predictor, another with material deprivation, the third with both social and material deprivation, and the fourth with social and material deprivation main effects and their interaction. Covariates were centred to improve convergence of the Monte Carlo Markov Chains.", "We examined whether the most deprived populations had significantly different rates than the least deprived populations. To do this, we identified the neighbourhoods with deprivation scores in the top and bottom decile (i.e. the most and least deprived, respectively) and then pooled their counts of influenza-related visits and their at-risk person time. Age and sex-standardized rate ratios were estimated using the least deprived population as the reference group. We assessed the sensitivity of our results to the percentage of neighbourhoods included in the extremes by also comparing the top and bottom 5% and 15%. Analyses were conducted with R and WinBUGS 1.4 software [20], [21]. Maps were constructed using maps2WinBUGS version 1.6.", "The correlation between the neighbourhood-level log standardized morbidity ratio (SMR) and the material and social deprivation score was 0.11 and −0.48, respectively. This finding is reflected in the choropleth maps (Figure 1), where areas with greater levels of social deprivation tended to have smaller SMRs. The results of the ecological regression indicated an average decrease in utilization rates by approximately 21% for every 1 unit increase in social deprivation score (Table 1). There did not appear to be a meaningful linear relationship with material deprivation (Table 1), nor was there evidence of an important interaction between social and material deprivation (regression coefficient −0.066, 95% CI −0.17 to 0.14). Results using the Pneumonia and Influenza definition were consistent with that of the Influenza definition (Table 1).\nStandardized morbidity ratio for influenza (a), standardized morbidity ratio for pneumonia and influenza (b), Material deprivation (c), Social Deprivation (d).\nPlots of the neighbourhood log SMR versus deprivation score are shown in Figure 2. The rate of utilization for influenza among populations living in the most materially deprived neighbourhoods (top decile) were 102% higher than those living in the least materially deprived neighbourhoods (Table 2). When we excluded neighbourhoods that had both high or both low material and social deprivation scores, we found an even greater disparity in rates (rate ratio [RR] 4.65, 95% Confidence Interval [CI] 4.55 to 4.76). In comparing the most to the least socially deprived neighbourhoods we found that the most socially deprived populations had approximately 79% lower utilization rates for influenza and 81% lower rates for pneumonia and influenza (Table 2). When neighbourhoods with deprivation scores in the top and bottom 5% and 15% were analysed, we found similar patterns of elevated risk in the materially deprived populations and decreased risk in the socially deprived populations were observed (Table 2).\ncomparison using pooled data from 11 most and 11 least deprived neighbourhoods.", "Contrary to the hypothesized effect of risk elevation in the socially deprived, we observed decreasing rates of ED/OC utilization for influenza with increasing social deprivation. Comparing the risk in the 10% most and least socially deprived neighbourhoods, we found a pronounced ‘protective’ effect of social deprivation. 
Unlike studies of hospitalization/mortality rates and material deprivation, we did not find an increasing gradient in utilization rates with greater levels of material deprivation. However, as compared to the least materially deprived neighbourhoods, rates were considerably elevated in the most materially deprived neighbourhoods.\nThough the negative association with social deprivation may be interpreted as the result of an overall underutilization of health services by the more socially deprived, Philibert et al (2007) found that this was not the case for a large downtown Montreal clinic in 2000–2002, a time period centred at the mid-point of our study period [22]. Thus an alternative explanation of our finding may lie in the fact that socially deprived groups have fewer social contacts and thus have fewer opportunities to become exposed to an infected individual [23]. One of the variables contributing to the social deprivation index is the proportion of the population living alone. There is evidence that indoor crowding facilitates influenza transmission [7], [24].\nFindings from previous studies assessing the relationship between social deprivation and hospitalizations/mortality due to respiratory illness are mixed [6],[9]. Crighton et al (2007) conducted an ecological study in the province of Ontario, Canada, and did not find a positive association between county hospitalization rates for influenza and pneumonia and percentage of surveyed individuals reporting low levels of social support and social involvement\n[6]. In contrast, Jordan et al (2008) found higher rates of hospital admissions for respiratory disease (either acute illness or worsening of chronic disease) among more socially isolated elderly [9]. However, the interpretation of this finding is not straightforward. Though the socially isolated elderly may experience more severe outcomes and even a greater incidence of infection, physicians may also be more inclined to admit patients that, due to a lack of social support, will not receive care if sent home.\nEcological studies in the United Kingdom have found a relationship between an index of material deprivation and rates of hospitalizations and mortality from respiratory illnesses [4], [5]. A similar finding is reported by Crighton et al (2007) in their ecological study using data from Ontario, Canada, in which they discovered high rates of hospitalizations for pneumonia and influenza in counties where a high proportion of the population did not possess a high school diploma [6]. The frequency of occurrence of influenza/cold symptoms may also be related to material deprivation. Stone et al (2010) found through a telephone survey that the incidence of individuals reporting headache, pain or influenza/cold symptoms increased as level of educational attainment and income decreased [25].\nA limitation of our study is our failure to account for ‘silent cases’, i.e. those that were sick but did not seek care. However, there is no evidence suggesting that rates of silent cases should vary by neighbourhood in an urban setting such as Montreal where residents have universal access to essential medical services [22]. Thus it is likely that the spatial variation of utilization rates for influenza approximates the spatial variation in rates of influenza infection. We were also limited by a case definition based on ICD-9 codes which may not identify all visits to ED/OCs for influenza infection. 
Nevertheless, we considered two influenza case definitions and results were consistent.\nEcological bias may prevent an extrapolation of our findings to the individual. However, for the purpose of designing effective public health strategies, it may be more feasible to direct interventions to neighbourhoods. In this case, inference at the level of the neighbourhood, and not the individual, is most pertinent. Even so, both the ecological and individualistic fallacies can be dealt with to some extent through an analysis of individual level outcomes that incorporates both individual and area-level deprivation data [26], [27], [28]. In our future work we will assess the relationship between ED/OC utilization rates for influenza and both individual-level and neighbourhood-level factors.\n[SUBTITLE] Conclusion [SUBSECTION] Though some studies have found an association between socio-economic deprivation and acute respiratory illness, we did not find evidence of increasing risk of influenza with increasing material deprivation. We did note, however, a disparity in the burden of influenza when comparing the extremes of material deprivation. Though social deprivation is a hypothesized risk factor for disease, we observed lower rates of visits to the emergency department and outpatient clinics for influenza from more socially deprived populations, a finding which may be explained by reduced infection exposure opportunities.", "Though some studies have found an association between socio-economic deprivation and acute respiratory illness, we did not find evidence of increasing risk of influenza with increasing material deprivation. We did note, however, a disparity in the burden of influenza when comparing the extremes of material deprivation. Though social deprivation is a hypothesized risk factor for disease, we observed lower rates of visits to the emergency department and outpatient clinics for influenza from more socially deprived populations, a finding which may be explained by reduced infection exposure opportunities." ]
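The ecological regression described in the record above models each neighbourhood's influenza-coded visit count as Poisson distributed, with deprivation scores as linear predictors of the log relative risk; the original analysis was a Bayesian hierarchical model with spatial random effects fitted in WinBUGS. As a rough, non-spatial illustration of the same idea only, the following Python sketch fits a frequentist Poisson regression with an offset for expected counts. The data frame and its column names (`observed`, `expected`, `material`, `social`) are hypothetical placeholders, not data from the study.

```python
# Simplified, non-spatial stand-in for the paper's Bayesian hierarchical model:
# log(risk) is modelled as a linear function of centred deprivation scores,
# with log(expected counts) as an offset so coefficients are log relative risks.
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical neighbourhood-level data (one row per neighbourhood).
df = pd.DataFrame({
    "observed": [120, 85, 210, 60],          # influenza-coded ED/OC visits
    "expected": [100.0, 90.0, 180.0, 75.0],  # age-sex standardized expected counts
    "material": [2.1, 3.5, 4.2, 1.8],        # material deprivation score (1-5 scale)
    "social":   [3.0, 2.2, 1.9, 4.1],        # social deprivation score (1-5 scale)
})

# Centre the covariates, as in the original analysis, before fitting.
X = df[["material", "social"]] - df[["material", "social"]].mean()
X = sm.add_constant(X)

model = sm.GLM(
    df["observed"], X,
    family=sm.families.Poisson(),
    offset=np.log(df["expected"]),
)
result = model.fit()

# exp(coefficient) approximates the rate ratio per 1-unit increase in deprivation.
print(np.exp(result.params))
```

This deliberately omits the spatially structured random effect the authors used; it only shows the Poisson-with-offset structure behind the reported regression coefficients.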
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
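The "contrasting rates" analysis in the same record pools visits and person-time from the most and least deprived neighbourhoods and compares them with age- and sex-standardized rate ratios, using the least deprived population as the reference group. A minimal sketch of indirect standardization along those lines is given below; the stratum labels and counts are invented for illustration, and the exact standardization procedure the authors used is not specified in the text.

```python
# Indirectly standardized rate ratio: observed visits in the most deprived group
# divided by the visits expected if that group experienced the reference
# (least deprived) group's age-sex specific rates.
import pandas as pd

# Hypothetical stratum-specific data (age group x sex).
reference = pd.DataFrame({
    "stratum": ["0-4 F", "0-4 M", "65+ F", "65+ M"],
    "visits": [40, 55, 30, 25],
    "person_years": [20000, 21000, 35000, 28000],
}).set_index("stratum")

most_deprived = pd.DataFrame({
    "stratum": ["0-4 F", "0-4 M", "65+ F", "65+ M"],
    "visits": [70, 90, 20, 18],
    "person_years": [18000, 19000, 30000, 24000],
}).set_index("stratum")

reference_rates = reference["visits"] / reference["person_years"]
expected = (most_deprived["person_years"] * reference_rates).sum()
observed = most_deprived["visits"].sum()

standardized_rate_ratio = observed / expected
print(f"Age-sex standardized rate ratio: {standardized_rate_ratio:.2f}")
```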
Effectiveness of carboplatin and paclitaxel as first- and second-line treatment in 61 patients with metastatic melanoma.
21359173
Patients with metastatic melanoma have a very unfavorable prognosis with few therapeutic options. Based on previous promising experiences within a clinical trial involving carboplatin and paclitaxel, a series of advanced metastatic melanoma patients were treated with this combination.
BACKGROUND
Data of all patients with cutaneous metastatic melanoma treated with carboplatin and paclitaxel (CP) at our institution between October 2005 and December 2007 were retrospectively evaluated. For all patients a once-every-3-weeks dose-intensified regimen was used. Overall and progression free survival were calculated using the method of Kaplan and Meier. Tumour response was evaluated according to RECIST criteria.
METHODS
61 patients with cutaneous metastatic melanoma were treated with CP. 20 patients (85% M1c) received CP as first-line treatment, 41 patients (90.2% M1c) had received at least one prior systemic therapy for metastatic disease. Main toxicities were myelosuppression, fatigue and peripheral neuropathy. Partial responses were noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks. Median overall survival was 31 weeks. Response, progression-free and overall survival were equivalent in first- and second-line patients. 60 patients of 61 died after a median follow up of 7 months. Median overall survival differed for patients with controlled disease (PR+SD) (49 weeks) compared to patients with progressive disease (18 weeks).
RESULTS
Among patients with metastatic melanoma, a subgroup achieved disease control under CP therapy, which may be associated with a survival benefit. This potential advantage has to be weighed against considerable toxicity. Since response rates and survival were not improved in previously untreated patients compared to pretreated patients, CP should not be applied as first-line treatment.
CONCLUSIONS
[ "Adult", "Aged", "Antineoplastic Combined Chemotherapy Protocols", "Carboplatin", "Chemotherapy, Adjuvant", "Cohort Studies", "Female", "Humans", "Male", "Melanoma", "Middle Aged", "Neoadjuvant Therapy", "Neoplasm Metastasis", "Paclitaxel", "Prognosis", "Retrospective Studies", "Skin Neoplasms", "Survival Analysis", "Treatment Outcome", "Young Adult" ]
3040212
null
null
Methods
All patients with advanced metastatic melanoma of cutaneous origin receiving CP at our institution between October 2005 and December 2007 were included. Patients with melanoma of ocular origin were excluded. Approval for this retrospective analysis was obtained from the Ethics Committee Tuebingen, Germany (approval number 384/2010A). Patient data from our own institution were analyzed anonymously; therefore, informed consent was not obtained. This approach was in accordance with the advice of our ethics committee. Approval for this study was gained retrospectively. Based on the treatment schedule of the second-line CP plus sorafenib trial, all patients received intravenous paclitaxel 225 mg/m2 plus intravenous carboplatin at area under the curve 6 (AUC 6) on day 1 of a 21-day cycle, with a dose reduction after the fourth cycle to carboplatin AUC 5 and paclitaxel 175 mg/m2. Some patients in poor general condition or with insufficient myelofunction received a reduced dose from the start of treatment. All patients who received at least one cycle were included in the analysis. Tumour evaluation was based on CT or PET-CT scans, which were obtained after every 3rd cycle (every 9 weeks). Tumour response was evaluated using the Response Evaluation Criteria in Solid Tumours (RECIST) [12]. Best response to treatment was classified as complete response (CR) (no clinical or radiologic evidence of disease), partial response (PR) (30% decrease in the sum of the longest diameters), stable disease, and progression of disease (20% increase in the sum of the longest diameters or a new lesion). All response evaluations were independently reviewed by a second radiologist and presented to the interdisciplinary skin tumour board at the University Hospital of Tuebingen, Germany. Statistical analyses were performed with the statistical software SPSS 15.0 (SPSS Inc., Chicago, Illinois, USA). In order to check comparability, the first-line and second-line groups of patients were compared with respect to gender, age, disease classification, brain metastases, liver metastases, number of organs involved, ECOG performance status and LDH level prior to therapy. Bivariate statistical testing was performed using two-sided Chi-square tests. P-values of less than 0.05 were considered statistically significant. Follow-up was measured from start of treatment until death or last date of observation. Progression-free survival (PFS) was defined from start of treatment to first documented disease progression. Overall survival (OS) was defined from the start of treatment to the date of death. Non-melanoma-related deaths were included as censored events. Survival probabilities were calculated according to the Kaplan-Meier method and compared with log-rank test statistics [13].
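The Methods field above specifies Kaplan-Meier survival estimation with log-rank comparisons (performed in SPSS 15.0, with non-melanoma-related deaths censored). A minimal re-implementation of that kind of analysis in Python is sketched below using the third-party lifelines package; the column names (`weeks`, `died`, `line`) and the data are hypothetical, and the snippet illustrates the method rather than the authors' actual code.

```python
# Kaplan-Meier curves and a log-rank test comparing first-line vs second-line
# patients, mirroring the survival analysis described in the Methods.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical per-patient data: follow-up in weeks, event indicator
# (1 = death, 0 = censored, e.g. a non-melanoma-related death), treatment line.
df = pd.DataFrame({
    "weeks": [10, 31, 49, 18, 22, 52, 14, 35],
    "died":  [1, 1, 1, 1, 0, 1, 1, 1],
    "line":  ["first", "second", "first", "second",
              "first", "second", "second", "first"],
})

kmf = KaplanMeierFitter()
for name, group in df.groupby("line"):
    kmf.fit(group["weeks"], event_observed=group["died"], label=f"{name}-line")
    print(name, "median OS (weeks):", kmf.median_survival_time_)

first = df[df["line"] == "first"]
second = df[df["line"] == "second"]
result = logrank_test(
    first["weeks"], second["weeks"],
    event_observed_A=first["died"], event_observed_B=second["died"],
)
print("log-rank p-value:", result.p_value)
```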
null
null
null
null
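The Methods field above also describes checking the comparability of the first-line and second-line groups with two-sided chi-square tests across baseline characteristics. A small illustration with scipy is given below; the 2x2 table values are invented solely to show the call, and the original analysis was run in SPSS.

```python
# Two-sided chi-square test of independence, as used to compare baseline
# characteristics (e.g. presence of brain metastases) between treatment groups.
from scipy.stats import chi2_contingency

# Hypothetical contingency table: rows = first-line / second-line patients,
# columns = brain metastases yes / no.
table = [
    [7, 13],   # first-line
    [15, 26],  # second-line
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```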
[ "Introduction", "Results", "Patient characteristics", "Treatment and toxicity", "Response to treatment", "Discussion" ]
[ "Melanoma is an increasingly common disease, and its incidence still rises in the industrialized countries with white populations. Although primary cutaneous melanomas are frequently curable by surgical excision, metastatic melanoma carries a poor prognosis with a median survival ranging from 6 to 12 months, and has not improved during the last three decades. In the US 8700 patients are expected to die of metastatic melanoma in the year 2010 [1].\nMetastatic melanoma is a solid tumour that is relatively resistant to systemic treatment [2]. However, chemotherapy with one or more drugs can produce palliative clinical responses in some patients [3]. Currently only dacarbazine and interleukin-2 have been approved for routine therapy of metastatic melanoma. As the majority of patients progress under this treatment or have only short time responses, there is a strong need for second-line treatment options.\nCombined chemotherapy with carboplatin and paclitaxel (CP) is a well established treatment regimen in advanced non-small-cell lung cancer and in advanced ovarian cancer [4]–[8]. Carboplatin replaced cisplatin from previous combined regimens demonstrating equal efficacy and less toxicity [9]. This regimen has been used in order to examine combined effects with sorafenib in solid tumours, and, interestingly, melanoma showed promising responses [10]. Therefore, the combination with sorafenib was studied in comparison to CP alone in metastatic melanoma as second-line treatment. Surprisingly, CP treatment results in a long median progression-free survival of four months but sorafenib did not add additional efficacy. Based on the promising results with CP treatment of metastatic melanoma in this phase III trial, many centers introduced this chemotherapy regimen in the treatment of disseminated melanoma, particularly in the second line treatment situation [11]. We started to treat patients with advanced metastatic melanoma with CP in October 2005.\nOur patients received CP in case of tumour progression after one or more prior systemic treatments or in case of primarily rapidly progressive disease. The aim of this retrospective analysis was to investigate the effectiveness of CP in advanced melanoma patients in terms of overall survival and response and to compare the results between first and second line treatment.", "[SUBTITLE] Patient characteristics [SUBSECTION] A total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior therapy.\nA total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 
20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior to therapy.\n[SUBTITLE] Treatment and toxicity [SUBSECTION] The majority of patients (82%) received the full dosage at start of CP treatment, 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175 mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death.\nDose limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. In four patients (6.6%) hypersensitivity reactions to paclitaxel occurred. In all of these four patients a rapid desensitization protocol according to a scheme proposed by Lee et al was used to continue therapy [14].\n[SUBTITLE] Response to treatment [SUBSECTION] All 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in Table 2. Partial response was noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks (IQR  =  [7, 16]). Median overall survival was 31 weeks (IQR  =  [14, 52]). Response rates as well as progression and disease free survival were equivalent in first- and second-line patients (Figure 1 and 2). Median duration of stability was 22 weeks among the patients with stable disease (n = 14).\nProbability of overall survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.961).\nProbability of progression free survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line.
(p = 0.322).\nThere were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases. But the chemo-naive group of patients included significantly more male (p = 0.008) and younger (p = 0.011) patients. Among the 22 patients with brain metastases none showed an objective response upon treatment with CP; however, stabilisation of disease was observed in 5 patients. After a median follow-up of 7 months 60 of 61 patients had died. Median overall survival was 49 weeks (IQR  =  [31, 79]) for patients with controlled disease (partial response and stable disease) compared to 18 weeks (IQR  =  [10, 35]) for patients with progressive disease (p = 0.001) (Figure 3).\nProbability of overall survival after start of treatment in patients with disease control (SD+PR) and in patients with progressive disease (PD). Patients with disease control: dotted line, patients with progressive disease: bold line. (p = 0.001).\nThere were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, an LDH value over two-fold the upper normal limit at start of CP treatment was significantly associated with progressive disease during therapy (p = 0.009). Decreasing or constant LDH levels under therapy were associated with a prolonged overall survival (p = 0.002).", "A total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior to therapy.", "The majority of patients (82%) received the full dosage at start of CP treatment, 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175 mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death.\nDose limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. In four patients (6.6%) hypersensitivity reactions to paclitaxel occurred. In all of these four patients a rapid desensitization protocol according to a scheme proposed by Lee et al was used to continue therapy [14].", "All 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in Table 2. Partial response was noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks (IQR  =  [7, 16]). Median overall survival was 31 weeks (IQR  =  [14, 52]). Response rates as well as progression and disease free survival were equivalent in first- and second-line patients (Figure 1 and 2). Median duration of stability was 22 weeks among the patients with stable disease (n = 14).\nProbability of overall survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.961).\nProbability of progression free survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.322).\nThere were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases.
But the chemo-naive group of patients included significantly more male (p = 0.008) and younger (p = 0.011) patients. Among the 22 patients with brain metastases none showed an objective response upon treatment with CP; however, stabilisation of disease was observed in 5 patients. After a median follow-up of 7 months 60 of 61 patients had died. Median overall survival was 49 weeks (IQR  =  [31, 79]) for patients with controlled disease (partial response and stable disease) compared to 18 weeks (IQR  =  [10, 35]) for patients with progressive disease (p = 0.001) (Figure 3).\nProbability of overall survival after start of treatment in patients with disease control (SD+PR) and in patients with progressive disease (PD). Patients with disease control: dotted line, patients with progressive disease: bold line. (p = 0.001).\nThere were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, an LDH value over two-fold the upper normal limit at start of CP treatment was significantly associated with progressive disease during therapy (p = 0.009). Decreasing or constant LDH levels under therapy were associated with a prolonged overall survival (p = 0.002).", "The present patient collective consisted of patients with clearly progressive metastatic melanoma presenting with widespread metastatic disease. Two thirds of patients had already received a first-line chemotherapy, mainly consisting of dacarbazine-based treatments. First-line patients with extensive metastatic disease were primarily treated with CP because the caring physicians felt that they would be unlikely to respond to dacarbazine. Response to CP treatment was low, with five percent partial responses in both first-line and second-line treatment situations. However, 23% of patients achieved stable disease, which apparently contributed similarly to prolongation of survival. Thus, temporary disease control was attained in 28% of patients and seemed to be associated with prolongation of survival to 49 weeks as compared to 18 weeks in patients with progressive disease. The CP regimen showed transient disease stabilisations, but neither complete responses nor long-term durable responses were accomplished. In one patient the treatment with CP enabled a complete resection of remaining metastases, but the disease recurred afterwards. The only patient alive achieved stable disease under CP treatment and was included in a clinical trial with an anti-CTLA-4 antibody, then achieved a CR and has no evidence of disease to date.\nToxicities were manageable in all cases, but dose limiting toxicities like myelosuppression and peripheral neuropathy occurred as already described in other tumour entities [15]–[17].\nSeveral studies investigated the combination of CP in metastatic melanoma with different treatment schedules, results and conclusions [11], [18]–[21] (Table 3). Only 19 patients of our study complied with the three main inclusion criteria (ECOG performance status 0 or 1, no cerebral metastases, not more than one prior therapy) of the largest randomized trial by Hauschild et al. It is therefore not possible to compare the two cohorts. The varying results may thus be influenced by different patient selection criteria. The results of our study regarding PFS and OS are similar to a retrospective study published in 2005 by Rao et al. [21] Likewise, patient characteristics are similar. Both studies reflect the real composition of the advanced metastatic melanoma patient cohort in clinical routine.\nIn the current study, first-line patients were additionally included. Survival curves for OS and PFS were virtually identical for the cohorts of patients receiving CP as first- and second-line treatment. It is noteworthy that only patients with rapidly progressive disease in the first-line situation were included in this protocol.\nThe only observed significant difference between patients with controlled and progressive disease was the level of LDH at treatment start. LDH may therefore be considered a predictive factor for response. It seems more likely that factors such as tumour load (associated with LDH level) and the number of organs involved, rather than the aggressiveness and sequence of the applied chemotherapeutic schedules, predict treatment responses.\nCP therapy in metastatic melanoma has cytostatic effects, with achievement of disease control for limited time periods in about one third of patients treated. Complete remissions or durable responses have not been accomplished. It does not appear to be a better alternative to dacarbazine in first-line therapy and should preferentially be applied as second-line treatment. A response to therapy may be associated with prolonged overall survival. The indication for CP therapy has to be considered on an individual basis and has to be weighed against considerable toxicity." ]
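The response assessment in this record follows RECIST as quoted in the Methods: roughly a 30% decrease in the sum of the longest diameters for partial response and a 20% increase or a new lesion for progression. A toy helper encoding those stated thresholds is sketched below; the function name and inputs are hypothetical, and the additional rules of a full RECIST implementation (e.g. non-target lesions, absolute growth minimums) are deliberately omitted.

```python
# Toy classifier for best overall response based on the RECIST thresholds
# quoted in the Methods (30% shrinkage -> PR, 20% growth or new lesion -> PD).
def recist_response(baseline_sum_mm: float, current_sum_mm: float,
                    new_lesion: bool = False) -> str:
    """Classify response from sums of longest target-lesion diameters."""
    if new_lesion:
        return "PD"
    if current_sum_mm == 0:
        return "CR"
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"
    if change >= 0.20:
        return "PD"
    return "SD"


if __name__ == "__main__":
    print(recist_response(100, 65))         # PR: 35% decrease
    print(recist_response(100, 125))        # PD: 25% increase
    print(recist_response(100, 95))         # SD: within thresholds
    print(recist_response(100, 80, True))   # PD: new lesion overrides shrinkage
```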
[ null, null, null, null, null, null ]
[ "Introduction", "Methods", "Results", "Patient characteristics", "Treatment and toxicity", "Response to treatment", "Discussion" ]
[ "Melanoma is an increasingly common disease, and its incidence still rises in the industrialized countries with white populations. Although primary cutaneous melanomas are frequently curable by surgical excision, metastatic melanoma carries a poor prognosis with a median survival ranging from 6 to 12 months, and has not improved during the last three decades. In the US 8700 patients are expected to die of metastatic melanoma in the year 2010 [1].\nMetastatic melanoma is a solid tumour that is relatively resistant to systemic treatment [2]. However, chemotherapy with one or more drugs can produce palliative clinical responses in some patients [3]. Currently only dacarbazine and interleukin-2 have been approved for routine therapy of metastatic melanoma. As the majority of patients progress under this treatment or have only short time responses, there is a strong need for second-line treatment options.\nCombined chemotherapy with carboplatin and paclitaxel (CP) is a well established treatment regimen in advanced non-small-cell lung cancer and in advanced ovarian cancer [4]–[8]. Carboplatin replaced cisplatin from previous combined regimens demonstrating equal efficacy and less toxicity [9]. This regimen has been used in order to examine combined effects with sorafenib in solid tumours, and, interestingly, melanoma showed promising responses [10]. Therefore, the combination with sorafenib was studied in comparison to CP alone in metastatic melanoma as second-line treatment. Surprisingly, CP treatment results in a long median progression-free survival of four months but sorafenib did not add additional efficacy. Based on the promising results with CP treatment of metastatic melanoma in this phase III trial, many centers introduced this chemotherapy regimen in the treatment of disseminated melanoma, particularly in the second line treatment situation [11]. We started to treat patients with advanced metastatic melanoma with CP in October 2005.\nOur patients received CP in case of tumour progression after one or more prior systemic treatments or in case of primarily rapidly progressive disease. The aim of this retrospective analysis was to investigate the effectiveness of CP in advanced melanoma patients in terms of overall survival and response and to compare the results between first and second line treatment.", "All patients with advanced metastastic melanoma of cutaneous origin receiving CP at our institution between October 2005 and December 2007 were included. Patients with melanoma of ocular origin were excluded. Approval for this retrospective analysis was obtained by the Ethics commitee Tuebingen, German (approval number 384/2010A). Patient data of our own institution were analyzed anonymously, therefore we did not obtain informed consent. This approach was in accordance with the advice of our ethics committee. Approval for this study was gained retrospectively.\nBased on the treatment schedule of the second-line CP plus sorafenib trial all patients received intravenous paclitaxel 225 mg/m2 plus intravenous carboplatin at area under curve 6 (AUC 6) on day 1 of a 21-day cycle, with a dose reduction after the fourth cycle to carboplatin AUC 5 and paclitaxel 175/mg/m2. Some patients in poor general condition or insufficient myelofunction received a reduced dose from the start of treatment. All patients who received at least one cycle were included in the analysis.\nTumour evaluation was based on CT or PET-CT scans, which were obtained after every 3rd cycle (every 9 weeks). 
Tumour response was evaluated using Response Evaluation Criteria in Solid Tumours (RECIST) criteria [12]. Best response to treatment was classified as complete response (CR) (no clinical or radiologic evidence of disease), partial response (PR) (30% decrease in the sum of the longest diameter), stable disease, and progression of disease (20% increase in the sum of the longest diameter or new lesion). All response evaluations were independently evaluated by a second radiologist and demonstrated to the interdisciplinary skin tumour board at the University Hospital of Tuebingen, Germany.\nStatistical analyses were performed with the statistical software SPSS 15.0 (SPSS Inc., Chicago, Illinois, USA). In order to check comparability, first-line and second-line group of patients were compared for the characteristics gender, age, disease classification, brain metastases, liver metastases, number of organs involved, ECOG and LDH level prior to therapy. Bivariate statistical testing was performed using two-sided Chi-square tests. P-values of less than 0.05 were considered statistically significant. Follow up was measured from start of treatment until death or last date of observation. Progression-free survival (PFS) was defined from start of treatment to first documented disease progression. Overall survival (OS) was defined from the start of treatment to the date of death. Non melanoma related deaths were included as censored events. Survival probabilities were calculated according to Kaplan-Meier and compared with log-rank test statistics [13].", "[SUBTITLE] Patient characteristics [SUBSECTION] A total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior therapy.\nA total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). 
Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior therapy.\n[SUBTITLE] Treatment and toxicity [SUBSECTION] The majority of patients (82%) received the full dosage at start of CP treatment, 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175/mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death.\nDose limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. In four patients (6.6%) hypersensitivity reactions to paclitaxel occurred. In all of this four patients a rapid desensitization protocol according to a scheme proposed by Lee et al was used to continue therapy [14].\nThe majority of patients (82%) received the full dosage at start of CP treatment, 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175/mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death.\nDose limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. In four patients (6.6%) hypersensitivity reactions to paclitaxel occurred. In all of this four patients a rapid desensitization protocol according to a scheme proposed by Lee et al was used to continue therapy [14].\n[SUBTITLE] Response to treatment [SUBSECTION] All 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in Table 2. Partial response was noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks (IQR  =  [7, 16]). Median overall survival was 31 weeks (IQR  =  [14, 52]). Response rates as well as progression and disease free survival were equivalent in first- and second-line patients (Figure 1 and 2). Median duration of stability was 22 weeks among the patients with stable disease (n = 14).\nProbability of overall survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.961).\nProbability of progression free survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.322).\nThere were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases. But the chemo-naive group of patients included significantly more male (p = 0.008) and young patients (p = 0.011). Among the 22 patients with brain metastases none showed an objective response upon treatment with CP, however stabilisation of disease was observed in 5 patients. After a median follow up of 7 months 60 of 61 patients had died. Median overall survival was 49 weeks (IQR  =  [31, 79]) for patients with controlled disease (partial response and stable disease) compared to 18 weeks (IQR  =  [10. 
35]) for patients with progressive disease (p = 0.001) (Figure 3).\nProbability of overall survival after start of treatment in patients with disease control (SD+PR) and in patients with progressive disease (PD). Patients with disease control: dotted line, patients with progressive disease: bold line. (p = 0.001).\nThere were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, a LDH value over two-fold-upper normal limit at start of CP treatment was significantly associated with progressive disease during therapy (p = 0.009). Decreasing or constant LDH levels under therapy were associated with a prolonged overall survival (p = 0.002).\nAll 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in Table 2. Partial response was noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks (IQR  =  [7, 16]). Median overall survival was 31 weeks (IQR  =  [14, 52]). Response rates as well as progression and disease free survival were equivalent in first- and second-line patients (Figure 1 and 2). Median duration of stability was 22 weeks among the patients with stable disease (n = 14).\nProbability of overall survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.961).\nProbability of progression free survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.322).\nThere were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases. But the chemo-naive group of patients included significantly more male (p = 0.008) and young patients (p = 0.011). Among the 22 patients with brain metastases none showed an objective response upon treatment with CP, however stabilisation of disease was observed in 5 patients. After a median follow up of 7 months 60 of 61 patients had died. Median overall survival was 49 weeks (IQR  =  [31, 79]) for patients with controlled disease (partial response and stable disease) compared to 18 weeks (IQR  =  [10. 35]) for patients with progressive disease (p = 0.001) (Figure 3).\nProbability of overall survival after start of treatment in patients with disease control (SD+PR) and in patients with progressive disease (PD). Patients with disease control: dotted line, patients with progressive disease: bold line. (p = 0.001).\nThere were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, a LDH value over two-fold-upper normal limit at start of CP treatment was significantly associated with progressive disease during therapy (p = 0.009). Decreasing or constant LDH levels under therapy were associated with a prolonged overall survival (p = 0.002).", "A total of 61 patients were identified for evaluation. Patient characteristics are shown in Table 1. 
20 patients (32.8%) received CP as first-line therapy, 33 patients (54.1%) as second-line therapy, eight patients (13.1%) had more than one previous therapy. The median age at start of treatment was 53 years (range 20–79). Concerning the M-classification 54 patients had M1c disease (88.5%), six patients had M1b disease (9.8%), one patient had unresectable stage IIIC disease (1.6%). 22 patients (36.1%) had brain metastases or a history of brain metastases. Six of these 22 patients were treated by surgery, three by stereotactic radiation, nine patients by whole brain radiotherapy and 4 patients had no additional treatment for their brain metastases. 32 patients (52.5%) had liver metastases. The median number of organ sites involved was four (range 1–7). Only 21 patients (34.4%) had normal Eastern Cooperative Oncology Group (ECOG) performance status (PS) prior therapy.", "The majority of patients (82%) received the full dosage at start of CP treatment, 18% received the already reduced level (carboplatin AUC 5 and paclitaxel 175/mg/m2). The median number of cycles of therapy delivered was four (range 1 to 20). 13 patients (21.3%) received only one cycle of therapy due to clinical disease progression, intolerability or death.\nDose limiting toxicities (grade III and IV) were myelosuppression and peripheral neuropathy. Other frequent toxicities included alopecia and fatigue. In four patients (6.6%) hypersensitivity reactions to paclitaxel occurred. In all of this four patients a rapid desensitization protocol according to a scheme proposed by Lee et al was used to continue therapy [14].", "All 61 patients had measurable disease by RECIST criteria and were assessable for response. Response rates are shown in Table 2. Partial response was noted in 4.9% of patients, stable disease in 23% of patients. No complete response was observed. Median progression free survival was 10 weeks (IQR  =  [7, 16]). Median overall survival was 31 weeks (IQR  =  [14, 52]). Response rates as well as progression and disease free survival were equivalent in first- and second-line patients (Figure 1 and 2). Median duration of stability was 22 weeks among the patients with stable disease (n = 14).\nProbability of overall survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.961).\nProbability of progression free survival after start of treatment in first-line and second-line patients. First-line patients: dotted line, second-line patients: bold line. (p = 0.322).\nThere were no significant differences between patients receiving CP as first-line therapy and second-line therapy regarding S100 levels, LDH levels, ECOG performance status, number of organs involved and presence of brain metastases. But the chemo-naive group of patients included significantly more male (p = 0.008) and young patients (p = 0.011). Among the 22 patients with brain metastases none showed an objective response upon treatment with CP, however stabilisation of disease was observed in 5 patients. After a median follow up of 7 months 60 of 61 patients had died. Median overall survival was 49 weeks (IQR  =  [31, 79]) for patients with controlled disease (partial response and stable disease) compared to 18 weeks (IQR  =  [10. 35]) for patients with progressive disease (p = 0.001) (Figure 3).\nProbability of overall survival after start of treatment in patients with disease control (SD+PR) and in patients with progressive disease (PD). 
Patients with disease control: dotted line, patients with progressive disease: bold line. (p = 0.001).\nThere were no significant differences between patients with controlled and progressive disease regarding number of organs involved, presence of brain metastases, presence of liver metastases, age, gender and S100 levels. In contrast, a LDH value over two-fold-upper normal limit at start of CP treatment was significantly associated with progressive disease during therapy (p = 0.009). Decreasing or constant LDH levels under therapy were associated with a prolonged overall survival (p = 0.002).", "The present patient collective consisted of patients with clearly progressive metastatic melanoma presenting with widespread metastatic disease. Two third of patients had already received a first-line chemotherapy mainly consisting in dacarbazine-based treatments. First-line patients with extensive metastatic disease were primarily treated with CP because the caring physicians felt that they will be unlikely to respond to dacarbazine. Response to CP treatment was low with five percent of partial responses as well in first-line as in second-line treatment situations. However, 23% of patients achieved stable disease which contributed obviously similarly to prolongation of survival. Thus, temporary disease control was attained in 28% of patients and seemed to be associated with prolongation of survival to 49 weeks as compared to 18 weeks in patients with progressive disease. The CP regimen showed transient disease stabilisations but neither complete responses nor long-term durable responses were accomplished. In one patient the treatment with CP enabled a complete resection of remaining metastases but the patient recurred afterwards. The only patient alive achieved stable disease under CP treatment and was included in a clinical trial with an anti-CTLA 4 antibody, then achieved a CR and has no evidence of disease to date.\nToxicities were manageable in all cases but dose limiting toxicities like myelosuppression and peripheral neuropathy occurred as already described in other tumour entities [15]–[17].\nSeveral studies investigated the combination of CP in metastatic melanoma with different treatment schedules, results and conclusions [11], [18]–[21]. (Table 3) Only 19 patients of our study comply with three main inclusion criteria (ECOG performance status 0 or 1, no cerebral metastases, not more than one prior therapy) of the largest randomized trial by Hauschild et al. It is therefore not possible to compare the two cohorts. The varying results may thus be influenced by different patient selection criteria. The results of our study regarding PFS and OS are similar to a retrospective study published 2005 by Rao et al. [21] Likewise patient characteristics are similar. Both studies reflect the real composition of advanced metastatic melanoma patient cohort in clinical routine.\nIn the current study additionally first-line patients were included. Survival curves for OS and PFS were remarkably identical for the cohorts of patients receiving CP as first and second-line treatment. It is noteworthy to mention that only patients with rapidly progressive disease in the first-line situation have been included into this protocol.\nThe only observed significant difference between patients with controlled and progressive disease was the level of LDH at treatment start. LDH may therefore be considered as a predictive factor for response. 
It seems to be more likely that factors like tumour load (associated with LDH-level) and number of organs involved but not the aggressiveness and sequence of the applied chemotherapeutic schedules predict treatment responses.\nCP therapy in metastatic melanoma has cytostatic effects with achievement of disease control for limited time periods in about one third of patients treated. Complete remissions or durable responses have not been accomplished. It seems not to be a better alternative to dacarbazine treatment in the first-line therapy, and should preferentially be applied as second-line treatment. A response to therapy may be associated with a prolonged overall survival. The indication for CP therapy has to be considered on an individual basis and has to be weighed against considerable toxicity." ]
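The statistical workflow described for this record (Kaplan-Meier estimates of overall and progression-free survival, compared between first-line and second-line patients with a log-rank test) was run in SPSS 15.0. Purely as an illustration, the sketch below re-expresses that analysis in Python with the `lifelines` package; the DataFrame, its column names, and the follow-up values are hypothetical placeholders, not data from the record.

```python
# Illustrative sketch only (not the authors' SPSS workflow): Kaplan-Meier
# overall-survival curves for first-line vs. second-line CP patients and a
# log-rank comparison. All values and column names below are hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level data: follow-up in weeks from start of CP,
# death indicator (1 = died, 0 = censored), and treatment line.
df = pd.DataFrame({
    "weeks": [31, 14, 52, 18, 49, 79, 10, 35],
    "died":  [1, 1, 1, 1, 1, 0, 1, 1],
    "line":  ["first", "second", "second", "first",
              "second", "first", "second", "first"],
})

first = df[df["line"] == "first"]
second = df[df["line"] == "second"]

kmf = KaplanMeierFitter()
for name, grp in (("first-line", first), ("second-line", second)):
    kmf.fit(grp["weeks"], event_observed=grp["died"], label=name)
    print(name, "median OS (weeks):", kmf.median_survival_time_)

# Log-rank test between the two curves (the record reports p = 0.961 for OS).
result = logrank_test(first["weeks"], second["weeks"],
                      event_observed_A=first["died"],
                      event_observed_B=second["died"])
print("log-rank p-value:", result.p_value)
```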
[ null, "methods", null, null, null, null, null ]
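For context on the dosing notation used in this record: carboplatin is specified by target AUC (AUC 6 at the start of treatment, AUC 5 after the fourth cycle) rather than in mg/m2. The record does not state how milligram doses were derived, but AUC-based carboplatin dosing is conventionally calculated with the Calvert formula, dose (mg) = target AUC × (GFR + 25). The snippet below illustrates that conventional formula only; the GFR values are hypothetical, and nothing here is taken from the record.

```python
# Conventional Calvert formula for AUC-based carboplatin dosing (background
# knowledge, not stated in the record): dose_mg = target_AUC * (GFR + 25).
def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
    """Total carboplatin dose in mg for a given target AUC and GFR."""
    return target_auc * (gfr_ml_min + 25.0)

if __name__ == "__main__":
    for auc in (6.0, 5.0):               # AUC 6 initially, AUC 5 after cycle 4
        for gfr in (60.0, 90.0, 120.0):  # hypothetical GFR values in mL/min
            print(f"AUC {auc}, GFR {gfr:.0f} mL/min -> "
                  f"{carboplatin_dose_mg(auc, gfr):.0f} mg carboplatin")
```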
[]
Assessing protein immunogenicity with a dendritic cell line-derived endolysosomal degradome.
21359181
The growing number of novel candidate molecules for the treatment of allergic diseases has imposed a dramatic increase in the demand for animal experiments to select immunogenic vaccines, a prerequisite for efficacy. Because no in vitro methods to predict the immunogenicity of a protein are currently available, we developed an in vitro assay that exploits the link between a protein's immunogenicity and its susceptibility to endolysosomal proteolysis.
BACKGROUND
We compared protein composition and proteolytic activity of endolysosomal fractions isolated from murine bone marrow- and human blood-derived dendritic cells, and from the dendritic cell line JAWS II. Three groups of structurally related antigen variants differing in their ability to elicit immune responses in vivo (Bet v 1.0101 and Bet v 1.0401, RNases A and S, holo- and apo-HRP) were subjected to in vitro simulated endolysosomal degradation. Kinetics and patterns of the generated proteolytic peptides were evaluated by gel electrophoresis and mass spectrometry.
METHODOLOGY
Antigens displaying weak capacity of T cell priming in vivo were highly susceptible to endolysosomal proteases in vitro. As proteolytic composition, activity, and specificity of endolysosomal fractions derived from human and murine dendritic cells were comparable, the JAWS II cell line could be used as a substitute for freshly isolated human or murine cells in in vitro degradation assays.
RESULTS
Endolysosomal fractions prepared from the JAWS II cell line provide a reliable tool for in vitro estimation of protein immunogenicity. The rapid and simple assay described here is very useful to study the immunogenic properties of a protein, and can help to replace, reduce, and refine animal experiments in allergy research and vaccine development in general.
CONCLUSIONS
[ "Animals", "Antibody Formation", "Bone Marrow Cells", "Cell Line", "Dendritic Cells", "Genes, p53", "Humans", "Lysosomes", "Mice", "Mice, Inbred BALB C", "Mice, Inbred C57BL", "Mice, Knockout", "Protein Processing, Post-Translational", "Proteins", "Vaccines" ]
3040223
null
null
Methods
[SUBTITLE] Subjects [SUBSECTION] All blood-donors gave written consent before enrollment in this study, which was approved by the local Medical Ethical Committee of Vienna. All blood-donors gave written consent before enrollment in this study, which was approved by the local Medical Ethical Committee of Vienna. [SUBTITLE] Mice [SUBSECTION] BALB/c mice were obtained from Charles River Laboratories (Wilmington, MA). All animal experiments were conducted according to National guidelines approved by the Austrian Ministry of Science (BMWF-66.012/0011-II/10b/2010). BALB/c mice were obtained from Charles River Laboratories (Wilmington, MA). All animal experiments were conducted according to National guidelines approved by the Austrian Ministry of Science (BMWF-66.012/0011-II/10b/2010). [SUBTITLE] Antigens [SUBSECTION] Recombinant Bet v 1.0101 (SwissProt accession: P15497) and Bet v 1.0401 (SwissProt accession: P43177) were produced in Escherichia coli as recently published [8]. Ribonuclease (RNase A (SwissProt accession: P61823), RNase S, and holo-HRP (SwissProt accession: P00433) were purchased from Sigma-Aldrich, St. Louis, MO. Apo-HRP was prepared according to previously described protocols [15]. Recombinant Bet v 1.0101 (SwissProt accession: P15497) and Bet v 1.0401 (SwissProt accession: P43177) were produced in Escherichia coli as recently published [8]. Ribonuclease (RNase A (SwissProt accession: P61823), RNase S, and holo-HRP (SwissProt accession: P00433) were purchased from Sigma-Aldrich, St. Louis, MO. Apo-HRP was prepared according to previously described protocols [15]. [SUBTITLE] Generation and culture of Dendritic Cells [SUBSECTION] Human mDCs (monocyte-derived DCs) obtained from heparinized blood samples and murine BMDCs (bone marrow-derived DCs) were generated and cultured as described elsewhere [8], [22]. The DC line JAWS II that has been established from bone marrow cells of a p53 knockout C57BL/6 mouse, was purchased from American Type Culture Collection (Manassas, VA) and cultured as previously described [21]. Human mDCs (monocyte-derived DCs) obtained from heparinized blood samples and murine BMDCs (bone marrow-derived DCs) were generated and cultured as described elsewhere [8], [22]. The DC line JAWS II that has been established from bone marrow cells of a p53 knockout C57BL/6 mouse, was purchased from American Type Culture Collection (Manassas, VA) and cultured as previously described [21]. [SUBTITLE] Subcellular Fractionation and Fingerprinting of Microsomes [SUBSECTION] Endosomes and lysosomes were isolated from DCs by differential centrifugation [16]. Briefly, cells (107 cells/protein) were homogenized in 10 mmol L−1 Tris/acetate pH 7 containing 250 mmol L−1 sucrose using a Dounce tissue grinder (Sigma-Aldrich, St. Louis, MO) and centrifuged for 10 minutes at 6,000× g. To obtain a total microsomal fraction, postnuclear supernatants were ultracentrifuged (60 minutes at 100,000× g). Microsomal content was released by 5 freeze-thaw cycles on liquid nitrogen and room temperature respectively, and stored at −20°C. For microsome fingerprinting 100 ng of microsomal proteins were reduced, alkylated, and trypsinized using the Calbiochem® ProteoExtract® All-in-One Trypsin digestion kit (EMD, Gibbstown, NJ) before injection into a one-dimensional capillary HPLC system (Model U3000, Dionex Benelux, Amsterdam, The Netherlands), equipped with a low-pressure gradient micro-pump, a micro-autoinjector and a capillary PS-DVB monolithic separation column (150×0.2 mm id). 
A 300 min gradient of 0–40% ACN in 0.05% aqueous TFA was applied using a flow rate of 1 µl min−1. The chromatographic setup was coupled to an ESI-LTQ Orbitrap mass analyzer (Thermo Fisher Scientific GmbH, Bremen, Germany). Each sample was analyzed in triplicate, while the peptides identified in one run were excluded from data dependent decisions in the following runs by the use of exclusion lists. Spectra were generated in positive mode in a mass range of m/z 500–2000. Fragmentation of a maximum of three precursors was realized in the ion trap using collision-induced dissociation. The Mascot search engine (Matrix Science, London, UK) and the software Proteome Discoverer (Thermo Fisher Scientific GmbH, Bremen, Germany) were used for peptide identification with the following parameters: taxonomy, all entries; Variable modification, methionine oxidation; fixed modification, carbamidomethyl (C), Enzyme, trypsin; peptide tolerance, ±10 ppm; MS/MS tolerance, ± 0.3 Da; maximum missed cleavages, 1; Human and murine samples were searched against species-specific SwissProt databases. Endosomes and lysosomes were isolated from DCs by differential centrifugation [16]. Briefly, cells (107 cells/protein) were homogenized in 10 mmol L−1 Tris/acetate pH 7 containing 250 mmol L−1 sucrose using a Dounce tissue grinder (Sigma-Aldrich, St. Louis, MO) and centrifuged for 10 minutes at 6,000× g. To obtain a total microsomal fraction, postnuclear supernatants were ultracentrifuged (60 minutes at 100,000× g). Microsomal content was released by 5 freeze-thaw cycles on liquid nitrogen and room temperature respectively, and stored at −20°C. For microsome fingerprinting 100 ng of microsomal proteins were reduced, alkylated, and trypsinized using the Calbiochem® ProteoExtract® All-in-One Trypsin digestion kit (EMD, Gibbstown, NJ) before injection into a one-dimensional capillary HPLC system (Model U3000, Dionex Benelux, Amsterdam, The Netherlands), equipped with a low-pressure gradient micro-pump, a micro-autoinjector and a capillary PS-DVB monolithic separation column (150×0.2 mm id). A 300 min gradient of 0–40% ACN in 0.05% aqueous TFA was applied using a flow rate of 1 µl min−1. The chromatographic setup was coupled to an ESI-LTQ Orbitrap mass analyzer (Thermo Fisher Scientific GmbH, Bremen, Germany). Each sample was analyzed in triplicate, while the peptides identified in one run were excluded from data dependent decisions in the following runs by the use of exclusion lists. Spectra were generated in positive mode in a mass range of m/z 500–2000. Fragmentation of a maximum of three precursors was realized in the ion trap using collision-induced dissociation. The Mascot search engine (Matrix Science, London, UK) and the software Proteome Discoverer (Thermo Fisher Scientific GmbH, Bremen, Germany) were used for peptide identification with the following parameters: taxonomy, all entries; Variable modification, methionine oxidation; fixed modification, carbamidomethyl (C), Enzyme, trypsin; peptide tolerance, ±10 ppm; MS/MS tolerance, ± 0.3 Da; maximum missed cleavages, 1; Human and murine samples were searched against species-specific SwissProt databases. [SUBTITLE] Degradation Assays [SUBSECTION] Endolysosomal degradation assays were performed with 0.25 µg µl−1 of substrates (Bet v 1.0101, Bet v 1.0401, RNase A, RNase S, holo-HRP, apo-HRP) and 0.4 µg µl−1 of isolated microsomal proteins in a final volume of 20 µl containing 100 mmol L−1 citrate buffer pH 4.8 and 2 mmol L−1 dithiothreitol. 
Reactions were conducted for 0, 0.5, 1, 3, 5, 12, 24, 36, and 48 h at 37°C and stopped by boiling for 5 min at 95°C followed by freezing at −20°C. Alternatively, in vitro degradations were performed using 5×10−4 U µl−1 of purified human Cathepsin S purchased from Sigma-Aldrich, St. Louis, MO. Assays were quantitatively evaluated by flatbed scanner densitometry of GelCode® Blue Reagent (Thermo Scientific, Waltham, MA) stained sodium dodecyl sulfate polyacrylamide gels [23]. Qualitative analysis was performed by mass spectrometry using an ESI-QTOF mass spectrometer fitted with a capillary reversed phase HPLC (Waters, Milford, MA) as described elsewhere [22]. Endolysosomal degradation assays were performed with 0.25 µg µl−1 of substrates (Bet v 1.0101, Bet v 1.0401, RNase A, RNase S, holo-HRP, apo-HRP) and 0.4 µg µl−1 of isolated microsomal proteins in a final volume of 20 µl containing 100 mmol L−1 citrate buffer pH 4.8 and 2 mmol L−1 dithiothreitol. Reactions were conducted for 0, 0.5, 1, 3, 5, 12, 24, 36, and 48 h at 37°C and stopped by boiling for 5 min at 95°C followed by freezing at −20°C. Alternatively, in vitro degradations were performed using 5×10−4 U µl−1 of purified human Cathepsin S purchased from Sigma-Aldrich, St. Louis, MO. Assays were quantitatively evaluated by flatbed scanner densitometry of GelCode® Blue Reagent (Thermo Scientific, Waltham, MA) stained sodium dodecyl sulfate polyacrylamide gels [23]. Qualitative analysis was performed by mass spectrometry using an ESI-QTOF mass spectrometer fitted with a capillary reversed phase HPLC (Waters, Milford, MA) as described elsewhere [22].
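The quantitative read-out of the degradation assay described above reduces densitometric band intensities measured at the sampled time points (0–48 h) to a half-life per protein. A minimal Python sketch of that reduction is given below; it assumes a simple mono-exponential (first-order) decay model, which the record does not specify, and the intensity values are hypothetical placeholders normalised to the undigested 0 h band.

```python
# Minimal sketch under stated assumptions (mono-exponential decay, hypothetical
# intensities): estimate a protein's half-life from densitometrically
# quantified band intensities at the assay's degradation time points.
import numpy as np
from scipy.optimize import curve_fit

def exp_decay(t, i0, k):
    """First-order decay: intensity(t) = I0 * exp(-k * t)."""
    return i0 * np.exp(-k * t)

def half_life_hours(timepoints_h, intensities):
    """Fit exp_decay and return t1/2 = ln(2) / k."""
    (i0, k), _ = curve_fit(exp_decay, timepoints_h, intensities,
                           p0=(intensities[0], 0.1), maxfev=10000)
    return np.log(2) / k

# Time points (hours) from the assay protocol; intensities are hypothetical,
# normalised to the 0 h (undigested) band.
t = np.array([0, 0.5, 1, 3, 5, 12, 24, 36, 48], dtype=float)
labile = np.array([1.00, 0.84, 0.71, 0.35, 0.18, 0.02, 0.00, 0.00, 0.00])
stable = np.array([1.00, 0.91, 0.83, 0.58, 0.40, 0.11, 0.01, 0.00, 0.00])

hl_labile = half_life_hours(t, labile)
hl_stable = half_life_hours(t, stable)
print(f"half-life, labile variant: {hl_labile:.1f} h")
print(f"half-life, stable variant: {hl_stable:.1f} h")
print(f"stability ratio (stable / labile): {hl_stable / hl_labile:.1f}")
```

Comparing such half-lives between structural variants of the same antigen yields the stability ratios reported later in this record's results (for example, roughly 2 for the Bet v 1 isoform pair and about 3.8 for holo- versus apo-HRP).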
null
null
null
null
[ "Introduction", "Subjects", "Mice", "Antigens", "Generation and culture of Dendritic Cells", "Subcellular Fractionation and Fingerprinting of Microsomes", "Degradation Assays", "Results", "Human and Murine DC-Derived Microsomes Display Comparable Endolysosomal Degradomes", "Kinetics of Endolysosomal Degradation Differ Between Structural Variants of the Same Antigen", "Structural Variants of the Same Antigen Show Similar Patterns of Proteolytic Fragments", "Endolysosomal Degradation of Bet v 1 is Mediated by Cathepsin S", "Discussion" ]
[ "The development of a novel vaccine is a highly complex and demanding process that, from the initial concept to a licensed product, can take up to decades. Once a candidate has evolved in the laboratory, it undergoes vast series of pre-clinical in vitro and in vivo examination and optimization procedures. Evidently, only a minority of candidates passes all these obstacles, is permitted to clinical trials, accepted by regulatory agencies, and converted into a commercial product. The development of allergy vaccines faces additional problems, because unlike prophylactic vaccination, allergen-specific immunotherapy (SIT) attempts to counteract an already established pathological immune response [1]. Severe anaphylactic side effects can result from interactions between the administered vaccine and allergen-specific IgE antibodies of the atopic patient. Moreover, the current use of extracts of undefined contents can lead to sensitizations against new allergens during conventional immunotherapy [2]. Thus, allergy research today focuses on strategies to improve both, safety and clinical efficacy of SIT.\nOthers and we have proposed the substitution of allergen extracts in immunotherapy by molecule-based vaccines in order to implement safer and patient-tailored treatment [2], [3]. However, in contrast to infectious disease antigens, many allergens have been reported to be weak immunogens [4]–[7], a property hampering therapeutic success. Notably, it has been shown that allergen isoforms can differ by both immunogenicity (T cell reactivity) and allergenicity (IgE reactivity). For example, the birch pollen major allergen Bet v 1 isoform 0401 activates T cells much more efficiently than Bet v 1.0101 [8] but displays reduced IgE reactivity (hypoallergen) [4], [9], [10]. As molecules with such properties would bypass IgE mediated side effects during SIT, they are considered ideal allergy vaccines. Besides naturally occurring hypoallergens, modern DNA technology facilitates genetic engineering of recombinant hypoallergens [1]–[3], [11], [12]. However, the structural manipulation required for hypoallergen generation [11] and the production procedures [13] can severely affect the immunogenicity of a recombinant protein [14]–[17]. For example, although differing only by a single amino acid, one out of two in silico designed recombinant Bet v 1 mutants with reduced IgE reactivity lost its immunogenicity in mice [18]. Moreover, from 400 chimera generated by DNA shuffling of 14 major tree pollen allergens, only 2 fulfilled the requirements for efficient vaccine candidates [19]. The screening of such large candidate libraries requires high-throughput methods. Although IgE reactivity can be easily evaluated by antibody-based in vitro experiments in microtiter format, such tests are lacking for immunogenicity assessment. The prediction of T cell reactivity (immunogenicity) is costly, time-consuming, can only be performed for a limited number of molecules, and depends on in vivo or cell-based ex vivo systems. Nevertheless, ever since the first purified recombinant allergen was available in the early 1990s [20], a multitude of proteins has been produced as candidates for SIT [3], [11], [12]. 
Hence, the fast development of new molecule-based allergy vaccines dramatically increases the demand for animal sacrifice, conflicting the 3Rs declaration of the European Partnership for Alternative Approaches to Animal Testing (EPAA).\nWithin the present study we established a degradation assay based on earlier studies showing that susceptibility to endolysosomal proteolysis by antigen presenting cells (APC) serves as in vitro marker for protein immunogenicity [15], [16]. This assay enables the pre-selection of molecules with highest immunogenicity out of a big repertoire of related candidate proteins, hence aiming to replace, reduce, and refine animal experiments in allergy vaccine development. Whereas previous work solely focused on the kinetics of endolysosomal protein decomposition, we also evaluated the in vitro generated antigen-derived peptides and performed comparative fingerprinting of microsomal proteases. Moreover, we compared the degradative potential of different types of human and murine DC's microsomes. In addition to previously investigated antigens (i.e. ribonucleases A and S, as well as holo- and apo-horseradish peroxidase) [15], [16], we included two well-described allergens, i.e. the high and low allergenic isoforms of the birch pollen major allergen Bet v 1.0101 and Bet v 1.0401 [8], to evaluate the general applicability of the assay. As source for the endolysosomal hydrolases we employed a commercially available murine dendritic cell (DC) line [21]. This approach enables high-throughput screening for immunogenic candidates in vaccine development saving time, costs, human blood donors, and animal sacrifice, and might be interesting for any study on protein immunogenicity.", "All blood-donors gave written consent before enrollment in this study, which was approved by the local Medical Ethical Committee of Vienna.", "BALB/c mice were obtained from Charles River Laboratories (Wilmington, MA). All animal experiments were conducted according to National guidelines approved by the Austrian Ministry of Science (BMWF-66.012/0011-II/10b/2010).", "Recombinant Bet v 1.0101 (SwissProt accession: P15497) and Bet v 1.0401 (SwissProt accession: P43177) were produced in Escherichia coli as recently published [8]. Ribonuclease (RNase A (SwissProt accession: P61823), RNase S, and holo-HRP (SwissProt accession: P00433) were purchased from Sigma-Aldrich, St. Louis, MO. Apo-HRP was prepared according to previously described protocols [15].", "Human mDCs (monocyte-derived DCs) obtained from heparinized blood samples and murine BMDCs (bone marrow-derived DCs) were generated and cultured as described elsewhere [8], [22]. The DC line JAWS II that has been established from bone marrow cells of a p53 knockout C57BL/6 mouse, was purchased from American Type Culture Collection (Manassas, VA) and cultured as previously described [21].", "Endosomes and lysosomes were isolated from DCs by differential centrifugation [16]. Briefly, cells (107 cells/protein) were homogenized in 10 mmol L−1 Tris/acetate pH 7 containing 250 mmol L−1 sucrose using a Dounce tissue grinder (Sigma-Aldrich, St. Louis, MO) and centrifuged for 10 minutes at 6,000× g. To obtain a total microsomal fraction, postnuclear supernatants were ultracentrifuged (60 minutes at 100,000× g). Microsomal content was released by 5 freeze-thaw cycles on liquid nitrogen and room temperature respectively, and stored at −20°C. 
For microsome fingerprinting 100 ng of microsomal proteins were reduced, alkylated, and trypsinized using the Calbiochem® ProteoExtract® All-in-One Trypsin digestion kit (EMD, Gibbstown, NJ) before injection into a one-dimensional capillary HPLC system (Model U3000, Dionex Benelux, Amsterdam, The Netherlands), equipped with a low-pressure gradient micro-pump, a micro-autoinjector and a capillary PS-DVB monolithic separation column (150×0.2 mm id). A 300 min gradient of 0–40% ACN in 0.05% aqueous TFA was applied using a flow rate of 1 µl min−1. The chromatographic setup was coupled to an ESI-LTQ Orbitrap mass analyzer (Thermo Fisher Scientific GmbH, Bremen, Germany). Each sample was analyzed in triplicate, while the peptides identified in one run were excluded from data dependent decisions in the following runs by the use of exclusion lists. Spectra were generated in positive mode in a mass range of m/z 500–2000. Fragmentation of a maximum of three precursors was realized in the ion trap using collision-induced dissociation. The Mascot search engine (Matrix Science, London, UK) and the software Proteome Discoverer (Thermo Fisher Scientific GmbH, Bremen, Germany) were used for peptide identification with the following parameters: taxonomy, all entries; Variable modification, methionine oxidation; fixed modification, carbamidomethyl (C), Enzyme, trypsin; peptide tolerance, ±10 ppm; MS/MS tolerance, ± 0.3 Da; maximum missed cleavages, 1; Human and murine samples were searched against species-specific SwissProt databases.", "Endolysosomal degradation assays were performed with 0.25 µg µl−1 of substrates (Bet v 1.0101, Bet v 1.0401, RNase A, RNase S, holo-HRP, apo-HRP) and 0.4 µg µl−1 of isolated microsomal proteins in a final volume of 20 µl containing 100 mmol L−1 citrate buffer pH 4.8 and 2 mmol L−1 dithiothreitol. Reactions were conducted for 0, 0.5, 1, 3, 5, 12, 24, 36, and 48 h at 37°C and stopped by boiling for 5 min at 95°C followed by freezing at −20°C. Alternatively, in vitro degradations were performed using 5×10−4 U µl−1 of purified human Cathepsin S purchased from Sigma-Aldrich, St. Louis, MO. Assays were quantitatively evaluated by flatbed scanner densitometry of GelCode® Blue Reagent (Thermo Scientific, Waltham, MA) stained sodium dodecyl sulfate polyacrylamide gels [23]. Qualitative analysis was performed by mass spectrometry using an ESI-QTOF mass spectrometer fitted with a capillary reversed phase HPLC (Waters, Milford, MA) as described elsewhere [22].", "[SUBTITLE] Human and Murine DC-Derived Microsomes Display Comparable Endolysosomal Degradomes [SUBSECTION] To conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. 
In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.\nTo conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.\n[SUBTITLE] Kinetics of Endolysosomal Degradation Differ Between Structural Variants of the Same Antigen [SUBSECTION] We compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). 
For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).\nWe compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).\n[SUBTITLE] Structural Variants of the Same Antigen Show Similar Patterns of Proteolytic Fragments [SUBSECTION] Mass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. 
Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.\nMass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.\n[SUBTITLE] Endolysosomal Degradation of Bet v 1 is Mediated by Cathepsin S [SUBSECTION] Strikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. 
As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].\nCathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.\nStrikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].\nCathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. 
Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.", "To conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.", "We compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).", "Mass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. 
However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.", "Strikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. 
Strikingly, the majority of the in vitro generated Bet v 1-derived peptide clusters (Figure 2) were flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S and identified 31 cathepsin S sites. As shown in Figure 4, the cathepsin S sites appear to become accessible to the protease progressively, in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis with microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which correspond to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that cleaves strictly on the C-terminal side of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].

Figure 4 legend: Cathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.
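Cleavage sites of this kind can also be inferred computationally from the termini of the sequenced peptides: each internal peptide boundary implies a position where the protease has cut, and counting how many peptides share a boundary gives a per-site usage comparable to that plotted in Figure 4. A minimal Python sketch (the peptide coordinates are invented for illustration, not taken from the study):

```python
from collections import Counter

def cleavage_sites(peptide_positions, protein_length):
    """Infer cleavage sites from peptide termini.

    A peptide spanning residues (start, end) implies cuts after residue
    start-1 and after residue end, unless the terminus coincides with the
    protein's own N- or C-terminus. Returns a Counter mapping
    'cut after residue i' -> number of supporting peptides.
    """
    sites = Counter()
    for start, end in peptide_positions:
        if start > 1:
            sites[start - 1] += 1
        if end < protein_length:
            sites[end] += 1
    return sites

# Hypothetical cathepsin S digest peptides of a 160-residue antigen
peptides = [(1, 19), (20, 33), (21, 33), (46, 65), (66, 92), (93, 115)]
for site, n in sorted(cleavage_sites(peptides, 160).items()):
    print(f"cut after residue {site}: supported by {n} peptide(s)")
```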
The ability to efficiently induce an immune response is an essential quality of any protein vaccine. As no in vitro tools have been available so far, immunogenicity is usually assessed in vivo by animal experiments or in ex vivo cell-based systems. The immunogenicity of an antigen strongly depends on the character of its interaction with DCs. These specialized APCs constitute the interface between antigens and adaptive immunity. After internalization, DCs convert antigen into peptides that are bound to MHC class II molecules, transported to the cell surface, and presented to CD4+ T helper cells. Of note, there is evidence that various subsets of DCs differ in their antigen-processing capacities in vivo [28]. For instance, the CD8+ DEC205+ subset of murine splenic DCs is biased towards cross-presentation on MHC class I molecules, whereas the CD8− DCIR2+ DC subset is specialized for MHC class II antigen presentation and displays increased expression of proteins involved in the exogenous antigen processing pathway, including cathepsins C, H, Z, and AEP. Further, it has been shown that, compared to monocyte-derived DCs, conventional DCs are much more efficient in antigen processing and presentation, independent of the route of antigen uptake [29].

Within the course of antigen processing, several steps (antigen uptake, DC activation, peptide generation, MHC-peptide complex stability and density) can be decisive for the immunogenicity of a given antigen. For instance, internalization of ovalbumin conjugated to trimethyl chitosan by DCs was increased 5-fold, leading to 1,000-fold higher ovalbumin-specific IgG titers in immunized mice [30]. Apart from such adjuvant-dependent aspects, properties intrinsic to the antigen (posttranslational modifications, structural features, and stability) can also influence immunogenicity. For example, by targeting receptor-mediated endocytosis, mannosylated ovalbumin induced a much stronger proliferation of ovalbumin-specific T cells than its unglycosylated counterpart [31]. Moreover, although the structural discrepancies between isoforms of the birch pollen major allergen are minor (PDB accession numbers are given in parentheses), uptake and endocytosis of Bet v 1.0101 (PDB: 1BV1) by murine BMDCs were significantly slower and less efficient than observed for Bet v 1.0401 (PDB: 3K78). The two isoforms also induced divergent patterns of DC activation and humoral responses, which have been attributed to cysteine-mediated dimerization due to a serine-to-cysteine exchange in Bet v 1.0401 [8]. In more detail, mice immunized with Bet v 1.0401 displayed comparable IgE but strongly elevated IgG and IgA responses. On DCs, Bet v 1.0401 elicited significantly increased expression of the CD80 and CD86 activation markers, whereas secretion of cytokines antagonizing DC maturation and activation (IL-6) was enhanced in BMDCs stimulated with Bet v 1.0101. Notably, insufficient DC activation might be generally responsible for a shift towards the allergic immune response [32] and could in part explain the hypoallergenicity of Bet v 1.0401.

Antigen uptake and DC activation might also affect the generation of T cell stimulatory peptides, a complex process that involves the sequential action of acidic hydrolases in endosomal and lysosomal compartments of the DC [24], [33]. Antigens that are rather stable to endolysosomal proteolysis can persistently provide peptides for binding to MHC molecules, ensuring efficient presentation to T cells and, consequently, high immunogenicity. By contrast, unstable molecules fail to elicit an immune response due to rapid T cell epitope destruction [15], [16]. Hence, efficient uptake of rather stable antigens might facilitate continuous delivery of peptides that can properly bind the MHC class II groove, thereby favoring a high density of MHC-peptide complexes on the DC surface for T cell presentation. Notably, the density of T cell epitopes is crucial for T cell activation [34] and could even be decisive in the development of a Th1- or allergic Th2-biased response [35]. Apart from such quantitative aspects, the quality of the peptides generated by endolysosomal proteases also has a strong impact on the immunogenicity of proteins. For example, stabilization of MHC-peptide complexes by substituting single amino acids in a tumor antigen-derived T cell epitope increased specific T cell responses up to 50-fold [36]. Hence, the 7 amino acid exchanges in the Bet v 1.0401 isoform might also contribute to its stronger immunogenicity.
In the present study, we exploited the link between resistance to endolysosomal proteolysis and immunogenicity to establish a high-throughput screening procedure for rational protein vaccine development. Although the capacity of an antigen to induce an immune response depends on many parameters, according to our data susceptibility to endolysosomal proteases seems to be a key factor determining immunogenicity. Nevertheless, as cell-free endolysosomal in vitro degradation assays do not encompass important parameters of immunogenicity such as antigen uptake, DC activation, and stability of the MHC-peptide complex, they cannot reflect the complex situation of antigen processing in vivo. Therefore, this method cannot be used for T cell epitope determination, and assessments of protein immunogenicity might in some cases deviate from data obtained in vivo. Still, the assay described here represents an excellent tool requiring only standard laboratory equipment, a tissue homogenizer, an ultracentrifuge, and a cell culture facility for rapid and simple assessment of protein immunogenicity. Because costs, time, up-scalability, and standardization problems limit the use of human as well as animal-derived DCs, we introduced the commercially available murine DC line JAWS II as a source of endolysosomal proteases. Comparative mass spectrometry-based analysis of the protein composition provided strong evidence of the similarity between the different degradomes. To our knowledge, this represents the first study on the endolysosomal proteome from mice and men. We found that the proteome data correlated well with the degradative activity and proteolytic specificity of the degradome, using the RNase A, HRP, and Bet v 1.0101 model antigens as substrates. Because structural features of proteins seem to be intimately connected to their immunogenic activity, as demonstrated earlier for structural variants of RNase A, HRP [15], [16], and Bet v 1 [8], we analyzed the kinetics and degradation patterns of RNase S, apo-HRP and Bet v 1.0401. Indeed, we observed remarkable differences in the proteolytic resistance and kinetics of degradation among the investigated antigens. It has been demonstrated that local structural flexibility is a general requirement for proteolytic sensitivity and that deletion of such elements enhances protein stability [37]. However, we did not observe a strict correlation between proteolytic cutting sites and loop structure elements in our model antigens and variants thereof. Of note, the flexible C-terminal loop of Bet v 1 (residues 123–129) seems to be heavily degraded by lysosomal proteases, as we could not detect any in vitro generated peptides corresponding to this region. Thus, deletion of Bet v 1 residues 123–129 might increase Bet v 1.0101 stability and immunogenicity. Another possibility would be the targeting of the frequently used and surface-exposed cathepsin S cleavage sites of Bet v 1.0101 (F19-K20, K65-Y66, G92-D93, K115-I116, and K134-A135) identified here.

Other factors influencing proteolytic sensitivity and immunogenicity are aggregation [8], integrity of the polypeptide backbone, and the presence of ligands [15], [16]. As described above, due to a serine-to-cysteine exchange at position 112, Bet v 1.0401 shows a high tendency to form disulfide-linked aggregates [8]. Presumably due to steric hindrance of proteolytic cutting sites, this isoform appears twice as stable in endolysosomal in vitro degradation assays as Bet v 1.0101. Accordingly, antibody and T cell responses are elevated in mice immunized with Bet v 1.0401. Similarly, RNase A induces more than 10,000-fold stronger T cell and IgG responses than RNase S, which is destabilized by a single subtilisin-mediated peptide bond cleavage between A20 and S21. During endolysosomal in vitro proteolysis, RNase S was degraded more than 20-fold faster than RNase A. Moreover, apo-HRP, which lacks Ca²⁺ ions and the prosthetic heme group, is a weak immunogen and displayed about 4-fold higher sensitivity to endolysosomal proteases during in vitro degradation than fully equipped holo-HRP [15], [16].
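The stability differences discussed here can be summarized directly from the half-lives reported in the Results section for the three antigen pairs. A short Python sketch that computes each pair's stability ratio and flags the more degradation-resistant (and, by the argument above, presumably more immunogenic) variant ('>48 h' for RNase A is represented conservatively as 48 h):

```python
# Half-lives (h) of the intact proteins during endolysosomal in vitro degradation,
# taken from the Results section of this article.
half_lives_h = {
    "Bet v 1.0401": 3.8, "Bet v 1.0101": 2.0,
    "RNase A": 48.0,     "RNase S": 2.2,
    "holo-HRP": 1.5,     "apo-HRP": 0.4,
}

pairs = [("Bet v 1.0401", "Bet v 1.0101"), ("RNase A", "RNase S"), ("holo-HRP", "apo-HRP")]

for stable, labile in pairs:
    ratio = half_lives_h[stable] / half_lives_h[labile]
    print(f"{stable} is ~{ratio:.1f}x more resistant to proteolysis than {labile}")
```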
In summary, our data convincingly showed that the analyzed degradomes are equivalent not only in their proteome fingerprints but also in their proteolytic activity in terms of specificity and kinetics. Our findings are further supported by degradation experiments of Bet v 1.0101 using purified cathepsin S, an important endolysosomal protease. Remarkably, the patterns obtained by degradation with cathepsin S alone resembled those observed in reactions applying endolysosomal fractions. Despite the advantage of straightforward determination of proteolytic epitopes, the single-protease approach cannot reliably mimic the complexity of a whole degradome, since the concerted action of proteases might be crucial for the kinetics of antigen degradation. As kinetics is the key indicator for assaying immunogenicity, this strategy would lack general applicability.

To conclude, the adoption of the JAWS II DC line for endolysosomal in vitro degradation facilitated the development of a simple, rapid, reliable, and easily upscalable comparative assay to select highly immunogenic allergy vaccine candidates from a pool of related molecules for pre-clinical evaluation. Data can be obtained within one week, and RNases A and S might serve as internal references. The method described here is very useful for studying the immunogenic properties of a protein, and can help to replace, reduce, and refine animal experiments in allergy research and vaccine development in general.
[ "Introduction", "Methods", "Subjects", "Mice", "Antigens", "Generation and culture of Dendritic Cells", "Subcellular Fractionation and Fingerprinting of Microsomes", "Degradation Assays", "Results", "Human and Murine DC-Derived Microsomes Display Comparable Endolysosomal Degradomes", "Kinetics of Endolysosomal Degradation Differ Between Structural Variants of the Same Antigen", "Structural Variants of the Same Antigen Show Similar Patterns of Proteolytic Fragments", "Endolysosomal Degradation of Bet v 1 is Mediated by Cathepsin S", "Discussion" ]
The development of a novel vaccine is a highly complex and demanding process that, from the initial concept to a licensed product, can take decades. Once a candidate has evolved in the laboratory, it undergoes a vast series of pre-clinical in vitro and in vivo examination and optimization procedures. Evidently, only a minority of candidates passes all of these hurdles, proceeds to clinical trials, is accepted by regulatory agencies, and is converted into a commercial product. The development of allergy vaccines faces additional problems, because, unlike prophylactic vaccination, allergen-specific immunotherapy (SIT) attempts to counteract an already established pathological immune response [1]. Severe anaphylactic side effects can result from interactions between the administered vaccine and allergen-specific IgE antibodies of the atopic patient. Moreover, the current use of extracts of undefined content can lead to sensitization against new allergens during conventional immunotherapy [2]. Thus, allergy research today focuses on strategies to improve both the safety and the clinical efficacy of SIT.

We and others have proposed substituting allergen extracts in immunotherapy with molecule-based vaccines in order to implement safer and patient-tailored treatment [2], [3]. However, in contrast to infectious disease antigens, many allergens have been reported to be weak immunogens [4]–[7], a property hampering therapeutic success. Notably, it has been shown that allergen isoforms can differ in both immunogenicity (T cell reactivity) and allergenicity (IgE reactivity). For example, the birch pollen major allergen Bet v 1 isoform 0401 activates T cells much more efficiently than Bet v 1.0101 [8] but displays reduced IgE reactivity (hypoallergen) [4], [9], [10]. As molecules with such properties would bypass IgE-mediated side effects during SIT, they are considered ideal allergy vaccines. Besides naturally occurring hypoallergens, modern DNA technology facilitates genetic engineering of recombinant hypoallergens [1]–[3], [11], [12]. However, the structural manipulation required for hypoallergen generation [11] and the production procedures [13] can severely affect the immunogenicity of a recombinant protein [14]–[17]. For example, although differing only by a single amino acid, one out of two in silico designed recombinant Bet v 1 mutants with reduced IgE reactivity lost its immunogenicity in mice [18]. Moreover, of 400 chimeras generated by DNA shuffling of 14 major tree pollen allergens, only 2 fulfilled the requirements for efficient vaccine candidates [19]. The screening of such large candidate libraries requires high-throughput methods. Although IgE reactivity can be easily evaluated by antibody-based in vitro experiments in microtiter format, such tests are lacking for immunogenicity assessment. The prediction of T cell reactivity (immunogenicity) is costly, time-consuming, can only be performed for a limited number of molecules, and depends on in vivo or cell-based ex vivo systems. Nevertheless, ever since the first purified recombinant allergen became available in the early 1990s [20], a multitude of proteins has been produced as candidates for SIT [3], [11], [12]. Hence, the fast development of new molecule-based allergy vaccines dramatically increases the demand for animal sacrifice, conflicting with the 3Rs declaration of the European Partnership for Alternative Approaches to Animal Testing (EPAA).
Within the present study we established a degradation assay based on earlier studies showing that susceptibility to endolysosomal proteolysis by antigen presenting cells (APC) serves as an in vitro marker for protein immunogenicity [15], [16]. This assay enables the pre-selection of the most immunogenic molecules from a large repertoire of related candidate proteins, hence aiming to replace, reduce, and refine animal experiments in allergy vaccine development. Whereas previous work solely focused on the kinetics of endolysosomal protein decomposition, we also evaluated the in vitro generated antigen-derived peptides and performed comparative fingerprinting of microsomal proteases. Moreover, we compared the degradative potential of microsomes from different types of human and murine DCs. In addition to previously investigated antigens (i.e. ribonucleases A and S, as well as holo- and apo-horseradish peroxidase) [15], [16], we included two well-described allergens, i.e. the high- and low-allergenic isoforms of the birch pollen major allergen, Bet v 1.0101 and Bet v 1.0401 [8], to evaluate the general applicability of the assay. As the source of endolysosomal hydrolases we employed a commercially available murine dendritic cell (DC) line [21]. This approach enables high-throughput screening for immunogenic candidates in vaccine development while reducing time, costs, the need for human blood donors, and animal sacrifice, and might be of interest for any study on protein immunogenicity.

Subjects: All blood donors gave written consent before enrollment in this study, which was approved by the local Medical Ethical Committee of Vienna.

Mice: BALB/c mice were obtained from Charles River Laboratories (Wilmington, MA). All animal experiments were conducted according to national guidelines approved by the Austrian Ministry of Science (BMWF-66.012/0011-II/10b/2010).

Antigens: Recombinant Bet v 1.0101 (SwissProt accession: P15497) and Bet v 1.0401 (SwissProt accession: P43177) were produced in Escherichia coli as recently published [8]. Ribonuclease A (RNase A; SwissProt accession: P61823), RNase S, and holo-HRP (SwissProt accession: P00433) were purchased from Sigma-Aldrich, St. Louis, MO. Apo-HRP was prepared according to previously described protocols [15].

Generation and culture of Dendritic Cells: Human mDCs (monocyte-derived DCs) obtained from heparinized blood samples and murine BMDCs (bone marrow-derived DCs) were generated and cultured as described elsewhere [8], [22]. The DC line JAWS II, which was established from bone marrow cells of a p53-knockout C57BL/6 mouse, was purchased from the American Type Culture Collection (Manassas, VA) and cultured as previously described [21].

Subcellular Fractionation and Fingerprinting of Microsomes: Endosomes and lysosomes were isolated from DCs by differential centrifugation [16]. Briefly, cells (10⁷ cells per protein) were homogenized in 10 mmol L⁻¹ Tris/acetate, pH 7, containing 250 mmol L⁻¹ sucrose using a Dounce tissue grinder (Sigma-Aldrich, St. Louis, MO) and centrifuged for 10 minutes at 6,000× g. To obtain a total microsomal fraction, postnuclear supernatants were ultracentrifuged (60 minutes at 100,000× g). Microsomal content was released by 5 freeze-thaw cycles on liquid nitrogen and at room temperature, respectively, and stored at −20°C. For microsome fingerprinting, 100 ng of microsomal proteins were reduced, alkylated, and trypsinized using the Calbiochem® ProteoExtract® All-in-One Trypsin digestion kit (EMD, Gibbstown, NJ) before injection into a one-dimensional capillary HPLC system (Model U3000, Dionex Benelux, Amsterdam, The Netherlands) equipped with a low-pressure gradient micro-pump, a micro-autoinjector and a capillary PS-DVB monolithic separation column (150×0.2 mm id). A 300 min gradient of 0–40% ACN in 0.05% aqueous TFA was applied using a flow rate of 1 µl min⁻¹. The chromatographic setup was coupled to an ESI-LTQ Orbitrap mass analyzer (Thermo Fisher Scientific GmbH, Bremen, Germany). Each sample was analyzed in triplicate, and the peptides identified in one run were excluded from data-dependent decisions in the following runs by the use of exclusion lists. Spectra were generated in positive mode in a mass range of m/z 500–2000. Fragmentation of a maximum of three precursors was realized in the ion trap using collision-induced dissociation. The Mascot search engine (Matrix Science, London, UK) and the software Proteome Discoverer (Thermo Fisher Scientific GmbH, Bremen, Germany) were used for peptide identification with the following parameters: taxonomy, all entries; variable modification, methionine oxidation; fixed modification, carbamidomethyl (C); enzyme, trypsin; peptide tolerance, ±10 ppm; MS/MS tolerance, ±0.3 Da; maximum missed cleavages, 1. Human and murine samples were searched against species-specific SwissProt databases.
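The per-protein sequence coverage reported in Table 1 summarizes how much of each identified protein is accounted for by the peptides returned from such a database search. As a minimal, hedged sketch of that calculation (the protein sequence and peptides below are toy values invented for illustration, not fingerprinting results from this study):

```python
def sequence_coverage(protein_seq, identified_peptides):
    """Percentage of a protein sequence covered by identified peptides
    (the quantity reported in the '%' column of Table 1).
    """
    covered = [False] * len(protein_seq)
    for pep in identified_peptides:
        start = protein_seq.find(pep)
        while start != -1:                          # mark every occurrence of the peptide
            for i in range(start, start + len(pep)):
                covered[i] = True
            start = protein_seq.find(pep, start + 1)
    return 100.0 * sum(covered) / len(protein_seq)

# Toy sequence and two toy tryptic peptides
protein = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
peptides = ["QRQISFVK", "SHFSRQLEER"]
print(f"coverage: {sequence_coverage(protein, peptides):.1f}%")
```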
Degradation Assays: Endolysosomal degradation assays were performed with 0.25 µg µl⁻¹ of substrate (Bet v 1.0101, Bet v 1.0401, RNase A, RNase S, holo-HRP, or apo-HRP) and 0.4 µg µl⁻¹ of isolated microsomal proteins in a final volume of 20 µl containing 100 mmol L⁻¹ citrate buffer, pH 4.8, and 2 mmol L⁻¹ dithiothreitol. Reactions were conducted for 0, 0.5, 1, 3, 5, 12, 24, 36, and 48 h at 37°C and stopped by boiling for 5 min at 95°C followed by freezing at −20°C. Alternatively, in vitro degradations were performed using 5×10⁻⁴ U µl⁻¹ of purified human cathepsin S purchased from Sigma-Aldrich, St. Louis, MO. Assays were quantitatively evaluated by flatbed scanner densitometry of GelCode® Blue Reagent (Thermo Scientific, Waltham, MA)-stained sodium dodecyl sulfate polyacrylamide gels [23]. Qualitative analysis was performed by mass spectrometry using an ESI-QTOF mass spectrometer fitted with a capillary reversed-phase HPLC (Waters, Milford, MA) as described elsewhere [22].
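For a given number of reactions, the amounts implied by these final concentrations are straightforward to tabulate. The following Python sketch computes per-batch quantities for the 20 µl assay described above; the final concentrations match the protocol, while the stock concentrations and the helper function itself are hypothetical placeholders rather than part of the published method:

```python
def degradation_mix(n_reactions, substrate_stock_ug_ul, microsome_stock_ug_ul,
                    reaction_volume_ul=20.0,
                    substrate_final_ug_ul=0.25, microsome_final_ug_ul=0.4):
    """Pipetting volumes (µl) for a batch of endolysosomal degradation reactions.

    Final concentrations (0.25 µg/µl substrate, 0.4 µg/µl microsomal protein in
    20 µl) follow the protocol above; stock concentrations are user assumptions.
    """
    substrate_ug = substrate_final_ug_ul * reaction_volume_ul   # 5 µg per reaction
    microsome_ug = microsome_final_ug_ul * reaction_volume_ul   # 8 µg per reaction
    plan = {
        "substrate_ul": substrate_ug / substrate_stock_ug_ul,
        "microsomes_ul": microsome_ug / microsome_stock_ug_ul,
    }
    plan["buffer_ul"] = reaction_volume_ul - sum(plan.values())  # citrate/DTT buffer to volume
    return {k: round(v * n_reactions, 2) for k, v in plan.items()}

# e.g. 9 time points of one substrate, assuming 1 µg/µl substrate and 2 µg/µl microsome stocks
print(degradation_mix(9, substrate_stock_ug_ul=1.0, microsome_stock_ug_ul=2.0))
```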
The Mascot search engine (Matrix Science, London, UK) and the software Proteome Discoverer (Thermo Fisher Scientific GmbH, Bremen, Germany) were used for peptide identification with the following parameters: taxonomy, all entries; Variable modification, methionine oxidation; fixed modification, carbamidomethyl (C), Enzyme, trypsin; peptide tolerance, ±10 ppm; MS/MS tolerance, ± 0.3 Da; maximum missed cleavages, 1; Human and murine samples were searched against species-specific SwissProt databases.", "Endolysosomal degradation assays were performed with 0.25 µg µl−1 of substrates (Bet v 1.0101, Bet v 1.0401, RNase A, RNase S, holo-HRP, apo-HRP) and 0.4 µg µl−1 of isolated microsomal proteins in a final volume of 20 µl containing 100 mmol L−1 citrate buffer pH 4.8 and 2 mmol L−1 dithiothreitol. Reactions were conducted for 0, 0.5, 1, 3, 5, 12, 24, 36, and 48 h at 37°C and stopped by boiling for 5 min at 95°C followed by freezing at −20°C. Alternatively, in vitro degradations were performed using 5×10−4 U µl−1 of purified human Cathepsin S purchased from Sigma-Aldrich, St. Louis, MO. Assays were quantitatively evaluated by flatbed scanner densitometry of GelCode® Blue Reagent (Thermo Scientific, Waltham, MA) stained sodium dodecyl sulfate polyacrylamide gels [23]. Qualitative analysis was performed by mass spectrometry using an ESI-QTOF mass spectrometer fitted with a capillary reversed phase HPLC (Waters, Milford, MA) as described elsewhere [22].", "[SUBTITLE] Human and Murine DC-Derived Microsomes Display Comparable Endolysosomal Degradomes [SUBSECTION] To conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.\nTo conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 
600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.\n[SUBTITLE] Kinetics of Endolysosomal Degradation Differ Between Structural Variants of the Same Antigen [SUBSECTION] We compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).\nWe compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). 
For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).\n[SUBTITLE] Structural Variants of the Same Antigen Show Similar Patterns of Proteolytic Fragments [SUBSECTION] Mass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.\nMass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). 
Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.\n[SUBTITLE] Endolysosomal Degradation of Bet v 1 is Mediated by Cathepsin S [SUBSECTION] Strikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].\nCathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. 
Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.\nStrikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].\nCathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.", "To conduct endolysosomal in vitro degradation assays we isolated total microsomal (endolysosomal) fractions from human mDCs, murine BMDCs, and the murine JAWS II DC line employing a differential centrifugation protocol. Endolysosomal total proteomes were analyzed by mass spectrometry-based fingerprinting. 600 different proteins were detected in the microsomal fractions of all three different DC samples. Proteins belonging to the endolysosomal degradome or that have been shown to be involved in antigen processing are listed in Table 1. Cathepsins A, B, C, D, L, S, and Z as well as lysosomal prolylcarboxypeptidase and tripeptidyl peptidase 1 could be identified in all microsomal fractions. By contrast, cathepsin K, lysosomal dipeptidyl peptidase 2, and asparagine endopeptidase (AEP) were not detectable in BMDCs, and cathepsin H was not measured in the DC line. In summary, JAWS II DCs contain all important lysosomal endo- and exopeptidases that have been shown to be involved in antigen processing [24], [25]. 
Besides proteases, endolysosomal fractions also contained a multitude of other molecules (non-proteolytic acidic hydrolases, membrane-associated proteins, and GTPases involved in vesicular trafficking) that are associated with endosomal and lysosomal compartments of APCs (Table 1).\n%, percentage to which identified peptides cover the full length protein sequence; AEP, Asparagine endopeptidase; ARF, ADP-ribosylation factor; LAMP, Lysosome-associated membrane glycoprotein; LMP, lysosome membrane protein; pep., identified peptides; Rab, Ras-like protein; VAMP, Vesicle-associated membrane protein.", "We compared the kinetics of endolysosomal decomposition for three pairs of antigens, i.e. structural variants of horseradish peroxidase (HRP), ribonuclease (RNase), and the major birch pollen allergen Bet v 1. The comparison of the average half lives (given in parenthesis) during endolysosomal in vitro degradation revealed that Bet v 1.0401 (3.8 h), RNase A (>48 h), and holo-HRP (1.5 h) were more resistant to proteolysis than Bet v 1.0101 (2 h), RNase S (2.2 h), and apo-HRP (0.4 h), respectively (Figures 1A and 1B). Thus, the decomposition of Bet v 1.0401 was around two-fold slower than observed for Bet v 1.0101. For the holo/apo-HRP and RNase A/S pairs endolysosomal half-life ratios were 3.8 and >20, respectively.\n2.5 µg of protein samples were analyzed by SDS-PAGE and GelCode® Blue staining after 0, 0.5, 1, 3, 5, 12, 24, 36 and 48 h of in vitro degradation using endolysosomal fractions isolated from human mDCs, murine BMDCs, and the DC line JAWS II (A). For each protein, the half-life during endolysosomal proteolysis was calculated from scanned and densitometrically quantified protein bands (B).", "Mass spectrometry-based analysis showed that peptides generated by endolysosomal in vitro proteolysis formed nested clusters sharing a common central core but displaying variable flanking regions. These features are characteristic for MHC (major histocompatibility complex) class II-bound peptides [22], [26]. Notably, peptides derived from both Bet v 1.0101 and Bet v 1.0401 were of 5 to 30 amino acids in length and clustered into 13 different regions along the Bet v 1 sequence. However, central (region Bet v 121–65 and Bet v 183–115) and C-terminal (region Bet v 1130–159) peptide clusters of the more stable isoallergen Bet v 1.0401 appeared temporally delayed. Interestingly, amino acids differing between the two Bet v 1 isoforms were rather located apart from proteolytic cutting sites in central regions of the peptide clusters (Figures 2A and 2B). Proteolytic fragments from both holo- and apo-HRP clustered in 5 analogous regions of the full-length molecules. Due to its high resistance to endolysosomal hydrolases, we could not detect any peptides from RNase A, whereas proteolytic fragments from RNase S were found in 5 different regions (Figure 3). According to our observations, differences in proteolytic resistance between pairs of antigens with similar sequence and conserved fold do not translate into altered patterns of proteolytic fragments.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) are shown for Bet v 1.0101 (A) and Bet v 1.0401 (B), respectively. 
Regions of predominant peptide clusters (clx-x) are highlighted as bars colored in different shades of grey depending on their temporal occurrence (average appearance of early clusters: ≤1 h, intermediate clusters: >1 h and <5 h, and late clusters: ≥5 h. Bet v 1 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Amino acids (n = 7) that differ between the 2 Bet v 1 isoforms are highlighted in orange.\nPeptides sequenced by mass spectrometry after 0.5, 1, 3, 5, 12, 24, 36, and 48 h of in vitro digestion with microsomal fractions from mDCs (blue), BMDCs (green) and JAWS II DCs (brown) clustered in distinct regions along the protein sequence of RNase (A) and peroxidase (B) model antigens. Secondary structures (α-helices and β-sheets) are indicated as framed boxes.", "Strikingly, the majority of in vitro generated Bet v 1-derived peptide clusters (Figure 2) was flanked by lysine residues (K20, 54, 55, 65, 80, 97, 103, 115, and 129). This observation suggests that cathepsin S, a lysosomal cysteine endoprotease favoring substrates with glutamines or lysines at the P1 position [27], could be involved in Bet v 1 decomposition. To pursue this idea, we performed in vitro degradation assays of Bet v 1.0101 with purified cathepsin S, and identified 31 cathepsin S sites. As shown in Figure 4, cathepsin S sites seem to be progressively accessible by the protease in a time-dependent manner. Notably, 8 out of the 13 Bet v 1.0101 peptide clusters that appeared during endolysosomal proteolysis applying microsomal fractions could also be generated by cathepsin S alone (Figure 4). Initially occurring Bet v 1.0101 peptide clusters contained (cl1–22) or were flanked (cl146–59) by early-cleaved cathepsin S sites (Figure 4). Short-time degradation assays with human microsomes (data not shown) revealed that the N-terminal cluster cl1–22 appeared already after a few minutes of reaction. Thus, cathepsin S might be engaged in the initial steps of Bet v 1.0101 processing. Late-appearing peptide clusters, which corresponded to the middle region of the Bet v 1 sequence (cl83–115, cl83–103, and cl83–97), might be produced by cathepsin S and AEP, a cysteine endopeptidase that strictly cleaves on the C-terminal site of asparagine residues (e.g. Bet v 1.0101 N82) [24], [25].\nCathepsin S cleavage sites are depicted in the bar diagram showing their location in the Bet v 1.0101 sequence (x-axis) and the number of different peptides generated per cleavage site (y-axis). According to their temporal accessibility, cathepsin S sites are highlighted in red (early), orange (intermediate), and yellow (late) in the bar diagram. Generated peptides are shown as purple lines. Bet v 1.0101 secondary structures (α-helices and β-sheets) are indicated as framed boxes. Regions of Bet v 1.0101 peptide clusters (clx-x) generated by endolysosomal proteases isolated from DCs are depicted as grey boxes.", "The ability to efficiently induce an immune response is a most important quality of any protein vaccine. As no in vitro tools have been available so far, immunogenicity is usually assessed in vivo by animal experiments or by ex vivo cell-based systems. The immunogenicity of an antigen strongly depends on the character of its interaction with DCs. These specialized APCs constitute the interface between antigens and adaptive immunity. After internalization, DCs convert antigen into peptides that are bound to MHC class II molecules, transported to the cell surface, and presented to CD4+ T helper cells. 
Of note, there is evidence that various subsets of DCs differ by their capacities of antigen processing in vivo\n[28]. For instance, the CD8+ DEC205+ subset of murine splenic DC is more biased for cross-presentation on MHC class I molecules, whereas the CD8− DCIR2+ DC subset is specialized for MHC class II antigen presentation and displays increased expression of proteins involved in the exogenous antigen processing pathway, including cathepsins C, H, Z, and AEP. Further, it has been shown that compared to monocyte-derived DCs, conventional DCs are much more efficient in antigen processing and presentation independent from the route of antigen uptake [29].\nWithin the course of antigen processing, several steps (antigen uptake, DC activation, peptide generation, MHC-peptide complex stability and density) can be decisive regarding the immunogenicity of a given antigen. For instance, the internalization of ovalbumin conjugated to trimethyl chitosan by DCs was 5-fold increased leading to a 1,000-fold higher ovalbumin-specific IgG titers in immunized mice [30]. Apart from such adjuvant-dependent aspects, also properties intrinsic to the antigen (posttranslational modifications, structural features, and stability) can influence immunogenicity. For example, targeting receptor-mediated endocytosis, mannosylated ovalbumin induced a much stronger proliferation of ovalbumin-specific T cells than its unglycosylated counterpart [31]. Moreover, although structural discrepancies between isoforms of the birch pollen major allergen are minor (PDB accession numbers are given in parenthesis), uptake and endocytosis of Bet v 1.0101 (PDB: 1BV1) by murine BMDCs were significantly slower and less efficient than observed for Bet v 1.0401 (PDB: 3K78). The two isoforms also induced divergent patterns of DC activation and humoral responses, which have been attributed to cysteine-mediated dimerization due to a serine to cysteine exchange in Bet v 1.0401 [8]. In more detail, mice immunized with Bet v 1.0401 displayed a comparable IgE but a strongly elevated IgG and IgA response. On DCs Bet v 1.0401 elicited a significantly increased expression of CD80 and CD86 activation markers, whereas the secretion of cytokines antagonizing DC maturation and activation (IL-6) was enhanced in BMDCs stimulated with Bet v 1.0101. Notably, insufficient DC activation might be generally responsible for a shift towards the allergic immune response [32] and could in part explain hypoallergenicity of Bet v 1.0401.\nAntigen uptake and DC activation might also affect the generation of T cell stimulatory peptides, a complex process that involves the sequential action of acidic hydrolases in endosomal and lysosomal compartments of the DC [24], [33]. Antigens that are rather stable to endolysosomal proteolysis can persistently provide peptides for binding to MHC molecules, thus ensuring efficient presentation to T cells and thus, high immunogenicity. By contrast, instable molecules fail to elicit an immune response due to rapid T cell epitope destruction [15], [16]. Hence, efficient antigen uptake of rather stable antigens might facilitate continuous delivery of peptides that can properly bind the MHC class II grove, thereby favoring a high density of MHC-peptide complexes on the surface of the DC for T cell presentation. Notably, the density of T cell epitopes is crucial for T cell activation [34] and could even be decisive in the development of a Th1 or allergic Th2 biased response [35]. 
Apart from such quantitative aspects, also the quality of peptides generated by endolysosomal proteases has a strong impact on the immunogenicity of proteins. For example, stabilization of MHC-peptide complexes by substituting single amino acids in a tumor antigen-derived T cell epitope increased specific T cell responses up to 50-fold [36]. Hence, the 7 amino acid exchanges in the Bet v 1.0401 isoform might also contribute to its stronger immunogenicity.\nIn the present study, we exploited the link between resistance to endolysosomal proteolysis and immunogenicity to establish a high-throughput screening procedure for rational protein vaccine development. Although the capacity of an antigen to induce an immune response depends on many parameters, according to our data susceptibility to endolysosomal proteases seems to be a key factor determining immunogenicity. Nevertheless, as cell-free endolysosomal in vitro degradations do not encompass important parameters of immunogenicity like antigen uptake, DC activation, and stability of the MHC-peptide complex, it cannot reflect the complex situation of antigen processing in vivo. Therefore, this method cannot be used for T cell epitope determination, and assessments on protein immunogenicity might deviate in some cases from in vivo obtained data. Still, the assay described here represents an excellent tool requiring only standard laboratory equipment, a tissue homogenizer, an ultracentrifuge, and a cell culture facility for rapid and simple assessment of protein immunogenicity. Because costs, time, up-scalability, and standardization problems limit the use of human as well as animal-derived DCs, we introduced the murine commercially available DC cell line JAWS II as source for endolysosomal proteases. Comparative mass spectrometry-based analysis of the protein composition provided strong evidence of the similarity between the different degradomes. To our knowledge, this represents the first study on the endolysosomal proteome from mice and men. We found that the proteome data nicely correlated with degradative activity and proteolytic specificity of the degradome using RNase A, HRP, and Bet v 1.0101 model antigens as substrates. Because structural features of proteins seem to be intimately connected to their immunogenic activity, as demonstrated earlier for structural variants of RNase A, HRP [15], [16], and Bet v 1 [8], we analyzed the kinetics and degradation patterns of RNase S, apo-HRP and Bet v 1.0401. Indeed, we observed remarkable differences in the proteolytic resistance and kinetics of degradation among the investigated antigens. It has been demonstrated that local structural flexibility is a general requirement for proteolytic sensitivity and that deletion of such elements enhances protein stability [37]. However, we did not observe a strict correlation between proteolytic cutting sites and loop structure elements in our model antigens and variants thereof. Of note, the flexible C-terminal loop of Bet v 1 (Bet v 1123–129) seems to be heavily degraded by lysosomal proteases, as we could not detect any in vitro generated peptides corresponding to this region. Thus, deletion of Bet v 1123–129 might increase Bet v 1.0101 stability and immunogenicity. 
Another possibility would be the targeting of frequently used and surface-exposed cathepsin S cleavage sites of Bet v 1.0101 (F19-K20, K65-Y66, G92-D93, K115-I116, and K134-A135) identified here.\nOther factors influencing proteolytic sensitivity and immunogenicity are aggregation [8], integrity of the polypeptide backbone, and the presence of ligands [15], [16]. As described above, due to a serine to cysteine exchange at position 112, Bet v 1.0401 shows a high tendency to form disulfide-linked aggregates [8]. Presumably due to steric hindrance of proteolytic cutting sites, this isoform appears twice as stable in endolysosomal in vitro degradation assays than Bet v 1.0101. Accordingly, antibody and T cell responses are elevated in mice immunized with Bet v 1.0401. Similarly, RNase A induces more than 10,000-fold stronger T cell and IgG responses than RNase S, which is destabilized by a single subtilisin-mediated peptide bond cleavage between A20 and S21. During endolysosomal in vitro proteolysis RNase S was degraded more than 20-fold faster than RNase A. Moreover, apo-HRP lacking Ca2+ ions and the prosthetic heme group is a weak immunogen and displayed about 4-fold stronger sensitivity to endolysosomal proteases during in vitro degradation than fully equipped holo-HRP [15], [16].\nIn summary, our data convincingly showed that not only the proteome fingerprint but also the proteolytic activity in terms of specificity and kinetics of the analyzed degradomes is equivalent. Our findings are further supported by degradation experiments of Bet v 1.0101 using purified cathepsin S, an important endolysosomal protease. Remarkably, the patterns obtained by degradation with cathepsin S alone resembled those observed in reactions applying endolysosomal fractions. Despite the advantage of straightforward determination of proteolytic epitopes, the single protease approach cannot reliably mimic the complexity of a whole degradome since concerted action of proteases might be crucial in the kinetics of antigen degradation. As kinetics is the key indicator for assaying immunogenicity, this strategy would lack general applicability.\nTo conclude, the adoption of the JAWS II DC line for endolysosomal in vitro degradation facilitated the development of a simple, rapid, reliable, and easily upscalable comparative assay to select highly immunogenic allergy vaccine candidates from a pool of related molecules for pre-clinical evaluation. Data can be obtained within one week, and RNases A and S might serve as internal references. The method described here is very useful to study the immunogenic properties of a protein, and can help to replace, reduce, and refine animal experiments in allergy research and vaccine development in general." ]
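The discussion above treats the kinetics of endolysosomal in vitro degradation as the key readout for ranking antigen immunogenicity (e.g., RNase S degraded more than 20-fold faster than RNase A, and apo-HRP was roughly 4-fold more sensitive than holo-HRP). As one way such kinetics could be quantified, the sketch below fits a first-order decay to densitometric intensities of the intact-protein band over a degradation time course and reports a half-life. The single-exponential model, the antigen names and all intensity values are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order_decay(t, i0, k):
    """Intact-band intensity under a simple first-order proteolysis model."""
    return i0 * np.exp(-k * t)

def half_life_hours(times_h, intensities):
    """Fit I(t) = I0 * exp(-k * t) and return ln(2)/k in hours."""
    (i0, k), _ = curve_fit(first_order_decay, times_h, intensities,
                           p0=(intensities[0], 0.1))
    return np.log(2) / k

# Hypothetical densitometry of the intact band, as a fraction of the t = 0 signal.
times = np.array([0.0, 2.0, 4.0, 8.0, 24.0])                # hours of in vitro degradation
stable_antigen = np.array([1.00, 0.95, 0.88, 0.75, 0.45])   # e.g. an RNase A-like substrate
labile_antigen = np.array([1.00, 0.60, 0.35, 0.12, 0.01])   # e.g. an RNase S-like substrate

print(f"half-life, stable antigen: {half_life_hours(times, stable_antigen):5.1f} h")
print(f"half-life, labile antigen: {half_life_hours(times, labile_antigen):5.1f} h")
```

Comparing half-lives rather than single end-point intensities makes the ranking of candidates less sensitive to the exact sampling times chosen for the assay.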
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Prolonged mechanical ventilation induces cell cycle arrest in newborn rat lung.
21359218
The molecular mechanism(s) by which mechanical ventilation disrupts alveolar development, a hallmark of bronchopulmonary dysplasia, is unknown.
RATIONALE
Seven-day old rats were ventilated with room air for 8, 12 and 24 h using relatively moderate tidal volumes (8.5 mL.kg⁻¹).
METHODS
Ventilation for 24 hours (h) decreased the number of elastin-positive secondary crests and increased the mean linear intercept, indicating arrest of alveolar development. Proliferation (assessed by BrdU incorporation) was halved after 12 h of ventilation and completely arrested after 24 h. Cyclin D1 and E1 mRNA and protein levels were decreased after 8-24 h of ventilation, while those of p27(Kip1) were significantly increased. Mechanical ventilation for 24 h also increased levels of p57(Kip2) and decreased that of p16(INK4a), while the levels of p21(Waf/Cip1) and p15(INK4b) were unchanged. Increased p27(Kip1) expression coincided with reduced phosphorylation of p27(Kip1) at Thr¹⁵⁷, Thr¹⁸⁷ and Thr¹⁹⁸ (p<0.05), thereby promoting its nuclear localization. Similar, but more rapid, changes in cell cycle regulators were noted when 7-day old rats were ventilated with a high tidal volume (40 mL.kg⁻¹) and when fetal lung epithelial cells were subjected to a continuous (17% elongation) cyclic stretch.
MEASUREMENT AND MAIN RESULTS
This is the first demonstration that prolonged (24 h) mechanical ventilation causes cell cycle arrest in newborn rat lungs; the arrest occurs in G₁ and is caused by increased expression and nuclear localization of Cdk inhibitor proteins (p27(Kip1), p57(Kip2)) from the Kip family.
CONCLUSION
[ "Animals", "Animals, Newborn", "Cell Count", "Cell Cycle", "Cell Proliferation", "Embryo, Mammalian", "Female", "Fetus", "Lung", "Pregnancy", "Pulmonary Alveoli", "Rats", "Rats, Wistar", "Respiration, Artificial", "Time Factors" ]
3040197
null
null
Methods
[SUBTITLE] Ethics statement [SUBSECTION] The study was conducted according to the guidelines of the Canadian Council for Animal Care and with approval of the Animal Care Review Committee of the Hospital for Sick Children (protocol #7217). [SUBTITLE] Animal preparation [SUBSECTION] Timed pregnant Wistar rats (Charles River, Oakville, Quebec, Canada) were allowed to deliver and immediately afterwards litters were reduced to 10 pups. Newborn rat pups were anesthetized by i.p. injection of 30 mg kg−1 pentobarbital and a tracheotomy was performed. The trachea was cannulated with a 1 cm 19G cannula and connected to a rodent ventilator (FlexiVent Scireq, Montreal, PQ). Spontaneously breathing, non-ventilated littermates served as sham controls. One pup per litter was ventilated and a littermate was used as non-ventilated control. Isoflurane was used as general anesthesia during the ventilation period and 0.9% saline (100 ml.kg−1/24 h) was administered subcutaneously by continuous infusion with a 27G needle to prevent dehydration. First, rat pups at postnatal days 6, 7, 8, 9, 10 and 14 were ventilated to assess lung cell proliferation. For all subsequent experiments 7-day old rat pups were used. Preliminary experiments were performed to determine ventilator settings [18]. Starting from a normal respiratory rate for newborn rats (150 bpm), tidal volume was adjusted to achieve normal blood gas values after the ventilation period. Animals were monitored by ECG. Rectal temperature was maintained at 37°C using a thermal blanket, lamp and plastic wrap. At the end of the ventilation period a blood sample from the carotid artery was taken for blood gas analysis prior to euthanasia. [SUBTITLE] Mechanical ventilation [SUBSECTION] Rat pups were ventilated with room air and moderate-VT (8.5 mL.kg−1, RR 150 min−1, PEEP 2 cm H2O) for 8, 12 and 24 h. In a few cases, pups were ventilated for 4 h with high-VT (40 mL.kg−1, RR 30 min−1, PEEP 2 cm H2O). The 7-day old pups weighed 15.5–18.6 g. Dynamic compliance was estimated every 4 h from data obtained during a single-frequency forced oscillation manoeuvre, using a mathematical model-fitting technique according to the specifications of Scireq Inc. (Montreal, PQ). Two h before completion of ventilation pups were injected i.p. with 50 mg/kg 5-bromo-2-deoxyuridine (BrdU). At completion of ventilation a blood sample was taken from the carotid artery for blood gas analysis and the animals were killed by exsanguination. Lung tissues were processed for histology or fresh frozen for molecular/protein analyses. [SUBTITLE] Histology [SUBSECTION] After flushing, whole lungs were infused in situ with 4% (w/v) paraformaldehyde (PFA) in PBS at a constant pressure of 20 cm H2O over 5 minutes to equalize filling pressure over the entire lung. Under these constant pressure conditions the cannula was removed and the trachea immediately ligated. The lungs were excised and immersed in 4% PFA in PBS overnight, dehydrated in an ethanol/xylene series and embedded in paraffin. Sections of 5 µm were stained with hematoxylin and eosin or stained for elastin using Accustain tartrazine solution (Sigma, St. Louis, MO, USA). [SUBTITLE] Immunohistochemistry [SUBSECTION] Following sectioning and antigen retrieval by heating in 10 mM sodium citrate pH 6.0, sections were washed in PBS and endogenous peroxidase was blocked in 3% (v/v) H2O2 in methanol. Blocking was done with 5% (w/v) normal goat serum (NGS) and 1% (w/v) bovine serum albumin (BSA) in PBS. Sections were then incubated overnight at 4°C with either 1∶50 diluted mouse anti-BrdU (Boehringer Mannheim, Germany) or 1∶400 diluted rabbit anti-phospho-histone H3 (Millipore, Billerica, MA, USA) antibodies (Lab Vision Corporation, Fremont, Canada). Biotinylated rabbit anti-mouse IgG or goat anti-rabbit IgG were used as secondary antibodies, respectively. Color detection was performed according to the instructions in the Vectastain ABC and DAB kit (Vector Laboratories, Burlingame, CA, USA). All sections were counterstained with hematoxylin. For quantitative analysis, digital images were captured using a Leica digital imaging system at 20× magnification with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields from each slide, with 3 slides per animal and 4 animals per group. [SUBTITLE] Morphometric analysis [SUBSECTION] Digital images were captured from either H&E- or elastin-stained slides with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields/slide with 3 slides/animal and 4 animals/group. Tissue fraction was calculated from pixel counts of black/white images [19], mean linear intercepts (Lm) were measured and calculated [20] and the number of elastin-positive secondary septa determined. [SUBTITLE] Western blot analysis [SUBSECTION] Lung tissues were lysed, protein content measured [21] and aliquots (40 µg protein) were subjected to 10% SDS-PAGE and transferred to PVDF membranes. After blocking with 5% (w/v) skim milk in TBST (20 mM Tris, 137 mM NaCl, 0.1% Tween 20), membranes were incubated with the appropriate primary antibody overnight at 4°C. Because of decreased BrdU incorporation and cyclin D1 and E1 expression, we focused on CKIs inhibiting Cdk-2, -4 and -6 [22]. Primary antibodies were rabbit anti-p15INK4B (dilution of 1∶500), rabbit anti-p16INK4A (dilution of 1∶1000), mouse anti-p21Waf1/Cip1 (dilution 1∶500), rabbit anti-p27Kip1 (dilution 1∶500), rabbit anti-p57Kip2 (dilution of 1∶1000) and rabbit anti-cyclin D1 (dilution of 1∶1000) (all from Cell Signaling Technology, Danvers, USA) and rabbit anti-cyclin E1 (dilution of 1∶1000) (Santa Cruz Biotechnology, Santa Cruz, USA). Primary phosphorylated p27Kip1 antibodies were rabbit anti-p27Kip1 (pThr198) (dilution of 1∶400) and rabbit anti-p27Kip1 (pSer10)-R (dilution of 1∶2000; both from Santa Cruz Biotechnology, Santa Cruz, USA), rabbit anti-p27Kip1 (pThr157) (dilution of 1∶300; R&D Systems Inc, Burlington, Canada) and rabbit anti-p27Kip1 (pThr187) (dilution of 1∶400; Novus Biologicals, Littleton, USA). The next day the membranes were washed with TBST and incubated with either horseradish peroxidase–conjugated anti-rabbit or anti-mouse IgG (dilution of 1∶1000; Cell Signaling Technology, Danvers, USA). After several washes with TBST, protein bands were visualized using an enhanced chemiluminescence detection kit (Amersham, Piscataway, NJ, USA). Band densities were quantified using Scion Image software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Equal protein loading was confirmed by immunoblotting the same membrane for β-actin. [SUBTITLE] Quantitative RT-PCR [SUBSECTION] Total RNA was extracted from lung tissues and reverse transcribed [14]. Complementary DNA was amplified for the target genes cyclin D1, cyclin E1 and p27 as previously described [17], [19]. For relative quantification, polymerase chain reaction signals were compared between groups after normalization using 18S as internal reference. Fold change was calculated [23]. [SUBTITLE] Stretch of epithelial cells isolated from fetal rat lungs [SUBSECTION] Distal fetal lung epithelial cells (day 19 of gestation) were isolated as previously described [24]. Cells were cultured on type-1 collagen-coated BioFlex plates and subjected to various durations of cyclic continuous 17% stretch using a FX-4000 Flexercell Strain Unit (Flexercell Int., NC, USA) [25]. Neither cell viability (trypan blue exclusion) nor cell attachment was affected by the duration of the applied stretch regimen. Cell lysates were processed for Western blotting. [SUBTITLE] Statistical analysis [SUBSECTION] Unless stated otherwise, all data are presented as mean ± SD. Data were analyzed using SPSS software version 15 (SPSS Inc, Chicago, IL). Statistical significance (p<0.05) was determined by using one-way ANOVA or the Kruskal-Wallis test. Post hoc analysis was performed using Duncan's multiple-range test (data presented as mean ± SD) or the Mann-Whitney test (data presented as median and interquartile range).
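The quantitative RT-PCR step above compares PCR signals between groups after normalization to 18S and reports a fold change calculated according to a cited method [23]. The sketch below assumes the widely used 2^-ΔΔCt formulation; whether that exactly matches the authors' calculation is an assumption, and all Ct values are invented for illustration.

```python
import numpy as np

def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method:
    dCt = Ct(target) - Ct(reference); ddCt = mean dCt(treated) - mean dCt(control)."""
    d_ct_treated = np.asarray(ct_target_treated) - np.asarray(ct_ref_treated)
    d_ct_control = np.asarray(ct_target_control) - np.asarray(ct_ref_control)
    dd_ct = d_ct_treated.mean() - d_ct_control.mean()
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a cyclin D1-like target vs. 18S, ventilated vs. control lungs.
fc = fold_change_ddct(
    ct_target_treated=[26.1, 26.4, 25.9],   # target gene, ventilated 24 h
    ct_ref_treated=[11.2, 11.0, 11.3],      # 18S, ventilated 24 h
    ct_target_control=[24.3, 24.0, 24.5],   # target gene, non-ventilated control
    ct_ref_control=[11.1, 11.2, 11.0],      # 18S, non-ventilated control
)
print(f"fold change (ventilated vs. control): {fc:.2f}")  # values < 1 indicate down-regulation
```

A fold change below 1 corresponds to down-regulation in the treated group; values above 1 indicate up-regulation.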
null
null
null
null
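The morphometric analysis in the Methods above derives a tissue fraction from pixel counts of black/white images and a mean linear intercept (Lm) from measurements on sampled fields. A minimal sketch of both computations on a binary array (tissue = True) is given below; the use of horizontal test lines, the counting of air/tissue transitions as intercepts, the pixel size and the random test image are assumptions made for illustration, and published Lm conventions differ in detail.

```python
import numpy as np

def tissue_fraction(binary_img):
    """Fraction of pixels classified as tissue (True) in a segmented field."""
    return float(binary_img.mean())

def mean_linear_intercept(binary_img, pixel_size_um, n_lines=15):
    """Approximate Lm: total length of horizontal test lines divided by the
    number of air/tissue transitions encountered along those lines."""
    rows = np.linspace(0, binary_img.shape[0] - 1, n_lines).astype(int)
    total_length_um = n_lines * binary_img.shape[1] * pixel_size_um
    intercepts = 0
    for r in rows:
        profile = binary_img[r].astype(int)
        intercepts += int(np.count_nonzero(np.diff(profile)))
    return total_length_um / max(intercepts, 1)

# Hypothetical segmented field: True = tissue, False = airspace.
rng = np.random.default_rng(seed=0)
field = rng.random((512, 512)) < 0.35

print(f"tissue fraction: {tissue_fraction(field):.2f}")
print(f"mean linear intercept: {mean_linear_intercept(field, pixel_size_um=0.5):.1f} um")
```

Larger airspaces produce fewer intercepts per unit line length, so Lm rises as alveoli enlarge, which is the behaviour reported for the ventilated lungs.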
[ "Introduction", "Ethics statement", "Animal preparation", "Mechanical ventilation", "Histology", "Immunohistochemistry", "Morphometric analysis", "Western blot analysis", "Quantitative RT-PCR", "Stretch of epithelial cells isolated from fetal rat lungs", "Statistical analysis", "Results", "Physiologic data", "Morphometric analyses", "Lung cell proliferation", "Cell cycle regulators", "Discussion" ]
[ "Introduction of more gentle ventilation strategies -together with surfactant replacement and antenatal corticosteroids- has improved the survival rate of very premature infants. In parallel, the number of infants with ‘new’ bronchopulmonary dysplasia (BPD) [1] has also increased. Currently, infants born at ≤26 weeks of gestation are at the greatest risk of developing such ‘new’ BPD [2], a syndrome of arrested lung development with fewer and larger alveoli and dysmorphic vasculature [3]. BPD can no longer be considered only a pediatric disease because the substantial lung-function abnormalities -and significant symptoms- persist into adulthood [4], [5], [6]. The pathogenesis of BPD is incompletely understood and its treatment is empirical, but mechanical ventilation remains a major risk factor.\nLung development between 24–32 weeks of gestation is characterized by extensive vasculogenesis within the developing terminal saccules, followed by secondary crest formation as well as interstitial extracellular matrix loss and remodeling [7]. This tissue remodeling requires well-coordinated regulation of cell proliferation and apoptosis. Recent studies have shown that prolonged mechanical ventilation increases apoptosis and impairs alveolar septation in newborn mice [8], however the effect of mechanical ventilation on lung cell growth is mostly unknown. In vitro studies have demonstrated that mechanical stretch (5% elongation, 60 cycles per min, 15 min/h for 24 h) and oxygen (95%) can inhibit lung cell proliferation [9], [10], but molecular mechanisms are yet to be determined. Cell proliferation is a precisely coordinated set of events regulated by interaction of gene products that activate or suppress cell cycle progression. A series of cyclins and cyclin-dependent kinases (Cdk) act in concert to drive the cycle forward through the G1, S and G2/M phases [11]. In mammalian cells, G1/S transition is an important checkpoint in the cell cycle. Entry into the cell cycle is initiated by mitogen-stimulated expression of D-type cyclins which activate Cdk4/6. Shortly thereafter, cyclin E expression is increased and cyclin E-Cdk2 complexes are formed, promoting entry into the S phase [12]. While cyclin-Cdk complexes positively drive progression of the cell cycle, Cdk inhibitors (CKI) negatively regulate progression by binding to and inactivating cyclin–Cdks [13]. There are two distinct CKI families in mammalian cells: INK4 proteins, which block the progression of the cell cycle by binding to either Cdk4 or Cdk6 and inhibiting the action of cyclin D; and, Cip/Kip proteins that inhibit a broader spectrum of cyclin-Cdk complexes [14], [15], [16].\nIn this study we determined the effect of prolonged (24 h) mechanical ventilation on lung cell cycle regulators, proliferation and alveolar formation in a newborn rat model [17]. We hypothesized that continuous cyclic (over)stretching of the primitive airsacs would adversely affect proliferation of lung cells by influencing cell cycle regulators.", "The study was conducted according to the guidelines of the Canadian Council for Animal Care and with approval of the Animal Care Review Committee of the Hospital for Sick Children (protocol #7217).", "Timed pregnant Wistar rats (Charles River, Oakville, Quebec, Canada) were allowed to deliver and immediately afterwards litters were reduced to 10 pups. Newborn rat pups were anesthetized by i.p. injection of 30 mg kg−1 pentobarbital and a tracheotomy was performed. 
The trachea was cannulated with a 1 cm 19G cannula and connected to a rodent ventilator (FlexiVent Scireq, Montreal, PQ). Spontaneously breathing, non-ventilated, littermates served as sham controls. One pup per litter was ventilated and a littermate was used as non-ventilated control. Isoflurane was used as general anesthesia during the ventilation period and 0.9% saline (100 ml.kg−1/24 h) was administered subcutaneously by continuous infusion with a 27G needle to prevent dehydration. First rat pups at postnatal days 6, 7, 8, 9, 10 and 14 were ventilated to assess lung cell proliferation. For all subsequent experiments 7-day old rat pups were used. Preliminary experiments were performed to determine ventilator settings [18]. Starting from a normal respiratory rate for newborn rats (150 bpm), tidal volume was adjusted to achieve normal blood gas values after the ventilation period. Animals were monitored by ECG. Rectal temperature was maintained at 37°C using a thermal blanket, lamp and plastic wrap. At the end of the ventilation period a blood sample from the carotid artery was taken for blood gas analysis prior to euthanasia.", "Rat pups were ventilated with room air and moderate-VT (8.5 mL.kg−1, RR 150 min−1, PEEP 2 cm H2O) for 8, 12 and 24 h. In a few cases, pups were ventilated for 4 h with high-VT (40 mL.kg−1, RR 30 min−1, PEEP 2 cm H2O). The 7-day old pups weighed 15.5–18.6 g. Dynamic compliance was estimated every 4 h from data obtained during a single-frequency forced oscillation manoeuvre, using a mathematical model-fitting technique according to the specifications of Scireq Inc. (Montreal, PQ). Two h before completion of ventilation pups were injected ip with 50 mg/kg 5-bromo-2-deoxyuridine (BrdU). At completion of ventilation a blood sample was taken from the carotid artery for blood gas analysis and the animals killed by exsanguination. Lung tissues were processed for histology or fresh frozen for molecular/protein analyses.", "After flushing whole lungs were infused in situ with 4% (w/v) paraformaldehyde (PFA) in PBS with a constant pressure of 20 cm H2O over 5 minutes to have equalized filling pressure over the entire lung. Under these constant pressure conditions the cannula was removed and the trachea immediately ligated. The lungs were excised and immersed in 4% PFA in PBS overnight, dehydrated in a ethanol/xylene series and embedded in paraffin. Sections of 5 µm were stained with hematoxilin and eosin or stained for elastin using accustain artrazine solution (Sigma, St. Louis MO, USA).", "Following sectioning and antigen retrieval by heating in 10 mM sodium citrate pH 6.0, sections were washed in PBS and endogenous peroxidase was blocked in 3% (v/v) H2O2 in methanol. Blocking was done with 5% (w/v) normal goat serum (NGS) and 1% (w/v) bovine serum albumin (BSA) in PBS. Sections were then incubated overnight at 4°C with either 1∶50 diluted mouse anti-BrdU (Boehringer Mannheim, Germany) or 1∶400 diluted rabbit anti-phospho-histone H3 (Millipore, Billerica, MA, USA) antibodies (Lab Vision Corporation, Fremont, Canada). Biotinylated rabbit anti-mouse IgG or goat anti-rabbit IgG were used as secondary antibodies, respectively. Color detection was performed according to instruction in the Vectastain ABC and DAB kit (Vector Laboratories, Burlingname, CA, USA). All sections were counterstained with hematoxylin. 
For quantitative analysis, digital images were captured using a Leica digital imaging system at 20× magnification with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields from each slides, with 3 slides per animal and 4 animals per group.", "Digital images were captured from either H&E- or elastin-stained slides with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields/slide with 3 slides/animal and 4 animals/group. Tissue fraction was calculated from pixel counts of black/white images [19], mean linear intercepts (Lm) were measured and calculated [20] and the number of elastin-positive secondary septa determined.", "Lung tissues were lysed, protein content measured [21] and aliquots (40 g protein) were subjected to 10% SDS-PAGE and transferred to PVDF membranes. After blocking with 5% (w/v) skim milk in TBST (20 mM Tris, 137 mM NaCl, 0.1% Tween 20) membranes were incubated with appropriate primary antibody overnight in 4°C. Because of decreased BrdU incorporation and cyclin D1 and E1 expression, we focused on CKIs inhibiting Cdk-2, -4 and -6 [22]. Primary antibodies were rabbit anti-p15INK4B (dilution of 1∶500), rabbit anti-p16INK4A (dilution of 1∶1000), mouse anti-p21Waf1/Cip1 (dilution 1∶500), rabbit anti-p27Kip1(dilution 1∶500) and rabbit anti-p57Kip2 (dilution of 1∶1000), rabbit anti-cyclin D1 (dilution of 1∶1000) (all from Cell Signaling Technology, Danvers, USA) and rabbit anti-cyclin E1 (dilution of 1∶1000) (Santa Cruz Biotechnology, Santa Cruz USA). Primary phosphorylated p27Kip1 antibodies were rabbit anti-p27Kip1 (pThr198) (dilution of 1∶400) and rabbit anti-p27Kip1 (pSer10)-R (dilution of 1∶2000; both from Santa Cruz Biotechnology, Santa Cruz, USA), rabbit anti-p27Kip1 (pThr157) (dilution of 1∶300; R&D Systems Inc, Burlington, Canada) and rabbit anti-p27Kip1 (pThr187) (dilution of 1∶400; Novus Biologicals, Littleton, USA). The next day the membranes were washed TBST and incubated with either horseradish peroxidase–conjugated anti-rabbit or anti-mouse IgG (dilution of 1∶1000; Cell Signaling Technology, Danvers, USA). After several washes with TBST, protein bands were visualized using an enhanced chemiluminescence detection kit (Amersham, Piscataway, NJ, USA). Band densities were quantified using Scion Image software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Equal protein loading was confirmed by immunoblotting for β-actin of same membrane.", "Total RNA was extracted from lung tissues and reverse transcribed [14]. Complementary DNA was amplified for target genes cyclin D1, cyclin E1 and p27 as previously described [17], [19]. For relative quantification, polymerase chain reaction signals were compared between groups after normalization using 18S as internal reference. Fold change was calculated [23].", "Distal fetal lung epithelial cells (day 19 of gestation) were isolated as previously described [24]. Cells were cultured on type-1 collagen-coated BioFlex plates and subjected various durations of cyclic continuous 17% stretch using a FX-4000 Flexercell Strain Unit (Flexercell Int., NC, USA) [25]. Neither cell viability (trypan blue exclusion) nor cell attachment was affected by the duration of applied stretch regimen. Cell lysates were processed for Western Blotting.", "Stated otherwise all data are presented as mean ± SD. Data was analyzed using SPSS software version 15 (SPSS Inc, Chicago, IL). 
Statistical significance (p<0.05) was determined by using one-way ANOVA or Kruskal-Wallis test. Post hoc analysis was performed using Duncan's multiple-range test (data presented as mean ± SD) or Mann-Whitney test (data presented as median and interquartile range).", "[SUBTITLE] Physiologic data [SUBSECTION] Blood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.\nBlood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.\n[SUBTITLE] Morphometric analyses [SUBSECTION] Seven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. 
(B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.\nSeven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. (B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.\n[SUBTITLE] Lung cell proliferation [SUBSECTION] Lung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). Next, 7-day old rat pups were ventilated for all subsequent experiments. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 
4B) the unchanged ratio suggest that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation; in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal gestation, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. *p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nLung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). Next, 7-day old rat pups were ventilated for all subsequent experiments. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 4B) the unchanged ratio suggest that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation; in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal gestation, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. *p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. 
* p<0.05.\n[SUBTITLE] Cell cycle regulators [SUBSECTION] mRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 5B, 5C, 6A). The amount of p27Kip1 was 1.5-fold increased after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. A 12 h ventilation decreased phosphorylation of p27Kip1at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression was noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24 h-ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 
8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).\nmRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 5B, 5C, 6A). The amount of p27Kip1 was 1.5-fold increased after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. A 12 h ventilation decreased phosphorylation of p27Kip1at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression was noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24 h-ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). 
Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).", "Blood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.", "Seven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. 
(B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.", "Lung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). Next, 7-day old rat pups were ventilated for all subsequent experiments. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 4B) the unchanged ratio suggest that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation; in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal gestation, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. *p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.", "mRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 5B, 5C, 6A). The amount of p27Kip1 was 1.5-fold increased after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). 
p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. A 12 h ventilation decreased phosphorylation of p27Kip1at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression was noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24 h-ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).", "The hallmark of ‘new’ BPD is arrested alveolarization [1], but the molecular and cellular basis of the alveolar arrest remains mostly unknown. 
Alveolarization occurs as the immature saccules, which form the lung parenchyma at birth, are subdivided into smaller units by the formation and extension of secondary septa; new tissue ridges are lifted off the existing primary septa and grow in a centripetal direction into the airspaces. This process, called septation, is mainly postnatal (human: 36 weeks-infancy; rat: Pnd5–Pnd21) [7], [27]. Before septation of the air spaces starts, the lung expands for a short period of time, and the cells of the inter-airway walls actively proliferate, peaking at day 5 in rats and steadily declining thereafter [28]. This active cell proliferation takes place just at the beginning of the septation of the distal airways. With the use of a newborn rat model [18] we demonstrate here that mechanical ventilation for 24 h with room air and moderate VT results in cell cycle arrest, and reduced alveolar septation. This ventilation strategy (room air and moderate VT) was chosen to avoid/minimize lung injury.\nIn rats, lungs at birth have a saccular appearance and alveolarization is an exclusively postnatal (between P4 and P21) event, which makes this model relevant to the infant population developing BPD. However, major differences exist between mechanically ventilated newborn rats and premature born infants. Newborn rats have immature lung architecture at birth, but they do not need mechanical ventilation to survive (likely due to differences in airway structure, with large airways extending almost to the lung periphery and quickly reducing in diameter to the alveoli) and they do not lack surfactant. Infants with BPD demonstrate interstitial thickening that may partly be due to fibroproliferation while in rat pups mechanical ventilation of 24 h caused cell arrest in both mesenchymal and epithelial cell layers. Despite these differences, our results suggest that the observed cell cycle arrest is due to increased expression of two CKIs (i.e. p27Kip1 and p57Kip2) that are members of the Cip/Kip family; the other member, p21waf/Cip1, was not affected by 24 h of mechanical ventilation.\nKnock-in mouse models have shown that p27Kip1 and p57Kip2 are interchangeable in vivo\n[29], suggesting similar mechanisms of regulation. Mechanical strain has been recognized as playing an important role in the regulation of fetal lung cell proliferation. Indeed the stimulatory effect of mechanical stretch (i.e. increased intratracheal pressure) on fetal lung growth has been extensively studied in tracheal occlusion (TO) models [30], [31], where the number of proliferating alveolar type II cells significantly increased. Fetal sheep, exposed to TO during the alveolar stage of lung development, showed an increase in alveolar type II cells between days 2-4 after TO [31]. This proliferative response of fetal lung cells to strain has also been demonstrated in vitro. Intermittent cyclic 5% stretching (simulating normal fetal breathing movements) of distal fetal rat lung cells (epithelial cells and fibroblasts) increased cell proliferation [32]. However, a continuous cyclic 17% stretch (simulating mechanical ventilation [33]) for 24 h inhibited fetal lung cell proliferation (unpublished results), in agreement with our in vivo findings of a proliferative arrest after 24 h of mechanical ventilation. 
In the present study, continuous cyclic 17% stretch of fetal lung epithelial cells caused similar alterations in cell cycle regulators as observed in mechanically ventilated newborn lungs in vivo, namely increased levels of p27Kip1 and a decrease in the amount of cyclin D1. CKI members of the Cip/Kip family (p21WAF1/Cip1, p27Kip1 and p57Kip2) preferentially inhibit cyclin-Cdk2 complexes [16]. How mechanical stretch influences CKIs is unknown. In many cancers, the ras/raf/mitogen activated protein kinase (MAPK) pathway increases p27Kip1 proteolysis while downstream effectors of the PI-3K pathway such as protein kinase B (also known as Akt) predominantly regulate p27Kip1 subcellular localization [26]. Although the MAPK pathway is activated by ventilation/stretch [34], [35], we found nuclear p27Kip1 accumulation instead of degradation. Thus, MAPK may regulate p27Kip1 differently in normal compared to cancer cells. The PI3K-Akt pathway during ventilation/stretch remains to be investigated. Mechanical ventilation of newborn rats triggers an inflammatory response [17], [18], and various inflammatory mediators including tumor necrosis factor-α (TNFα), interleukin-6 and transforming growth factor-β (TGF-β) have been shown to induce p21WAF1/Cip1 expression [22], [36], [37]. Also p15Ink4b is induced by TGF-β [38]. In the current study, TGFβ1 mRNA expression was not changed after 24 h of ventilation (data not shown) and, indeed, neither p21WAF1/Cip1 nor p15Ink4b expression was affected by mechanical ventilation.\nThe amount of p27Kip1 is regulated at the level of its synthesis (transcription, translation), degradation and localization [39]. During the G0 phase, it accumulates in the nucleus and inhibits cyclin-Cdk complexes. In response to growth stimuli, p27Kip1 translocates from the nucleus to the cytoplasm during G1 phase and is degraded by the proteasome after ubiquitination [39], permitting the cell cycle to progress to S phase. Several signaling pathways that alter p27Kip1 phosphorylation influence its subcellular localization and function. For example, phosphorylation at the following essential sites regulates important functions: Thr157 prevents nuclear import; Ser10 mediates nuclear export; Thr198 promotes cytoplasmic translocation and increases p27-dependent motility, which may be important to prepare cells for shape changes in later phases of the cell cycle; and Thr187 results in proteolysis [26], [39]. In the present study, mechanical ventilation for 24 h increased the transcription of p27Kip1 and altered its phosphorylation: less phosphorylation of Thr157 (increasing nuclear import), Thr198 (decreasing nuclear export) and Thr187 (reduced proteolysis). No significant changes in Ser10 phosphorylation were noted (not shown). Together, these alterations in p27Kip1 phosphorylation favour its nuclear localization and stability. In addition, the reduced phosphorylation of p27Kip1 at Thr157 and Thr198 impairs the assembly function of p27Kip1 for cyclin D1-Cdk4, thereby negatively affecting cell cycle progression [26].\nThe second family of CKIs comprises the INK4 proteins (p16INK4a, p15INK4b, p18INK4c and p19INK4d); they inhibit the cyclin D-dependent kinases Cdk-4 and -6 [14], [15] and are thus specific for early G1 phase. In the present study, we found a significant reduction in p16INK4a protein after ventilation with low, moderate or high VT. In addition to Cdk inhibition and G1 growth arrest, p16INK4a plays a role in regulating apoptosis. 
It has been shown that p16INK4a-deficiency increases apoptosis in osteosarcoma U2OS and mouse embryonic fibroblast (MEF) cells exposed to ultraviolet (UV) light [40], because of down-regulation of the anti-apoptotic protein Bcl-2. In contrast, the pro-apoptotic protein Bax was down-regulated in p16INK4a expressing cells [40]. Thus, p16INK4a appears to control apoptosis through the intrinsic mitochondrial death pathway. Prolonged mechanical ventilation has been shown to significantly increase lung cell apoptosis in newborn mice lungs [8]. Although p16INK4a levels were decreased in the present study, it remains yet to be determined whether it plays a role in ventilator-induced apoptosis.\nAnother risk factor for BPD is oxygen [1]. Hyperoxia has been shown to interfere with cell-cycle progression in vitro\n[36], [41], [42] and hyperoxia-induced G1 arrest appears to be mediated by p21Waf1/Cip1\n[43], [44]. The hyperoxic induction of p21Waf1/Cip1 is p53-dependent [44] and its increase promotes survival of cells exposed to continuous oxidative stress by maintaining anti-apoptotic Bcl-2X(L) expression [45]. Hyperoxic-ventilated premature baboons delivered at 125 and 140 days of gestation have increased p53 and p21Waf1/Cip1 expression [46], [47]. In the present study, we did not assess p53 but the absence of p21Waf1/Cip1 induction by 24 h of mechanical ventilation with room air suggests that p53 is likely not involved in ventilation-induced cell cycle arrest in these studies.\nThe increase of p27Kip1 and p57Kip2 by mechanical stretch in vitro and in vivo coincided with a reduced expression of cyclins D1 and E1, both of which are essential for cell cycle progression through G1 and entry in S phase. D-type cyclins are induced by mitogenic stimuli in quiescent cells. After association with Cdk4/6 and activation by Cdk activating kinase, they promote entry into the G1 phase, thereby triggering cyclin E expression. Cyclin E binds to Cdk2 and facilitates transition from G1 to S phase [22]. Both p27Kip1 and p57Kip2 are potent inhibitors of cyclin E-dependent kinase Cdk2, but at high concentrations they also block Cdks4/6. In addition, it is plausible that cell cycle progression is inhibited due to the reduced phosphorylation of p27Kip1 at critical threonines (Thr157, Thr198) which negatively affects the assembly function of p27Kip1 for cyclin D1-Cdk4 complexes [48]. The down-regulation of cyclin D1 and E1 expression suggests a G1 cell cycle arrest, a conclusion that is supported by the absence of BrdU incorporation (S-phase event) and positive PH3 staining (M-phase marker). In the 125-day premature born baboon model of BPD, the animals received ventilator support and oxygen as needed to achieve normal blood-gas values [49], and such treatment increased pulmonary expression of cyclin D1 and E at day 6 while prolonged ventilation and oxygen exposure led to a decrease in cyclin E [46]. It is possible that lung cells were initially undergoing repair by increasing proliferation, but that prolonged exposure impairs the expression of cyclins, resulting in failure of repair and inhibition of further development. Furthermore, increased levels of the Cdk inhibitor p21Waf1/Cip1 in the baboon BPD model [46] suggests that G1 growth arrest may occur in infants with BPD. 
Unfortunately, the expression of p27Kip1 or p57Kip2 has not been investigated in the baboon BPD model.\nIn summary, we conclude that mechanical ventilation for 24 h using moderate VT without supplemental O2 causes G1 cell cycle arrest of lung cells in newborn rats due to increased transcription and altered phosphorylation (in favour of nuclear localization) of Kip CKIs, and down-regulation of cyclins D and E. This proliferative arrest may cause a reduction in alveolarization, resulting in alveolar simplification. Such identification of ventilation-induced CKIs may have therapeutic potential for the prevention -or treatment- of arrested alveolarization in ventilated premature infants." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Ethics statement", "Animal preparation", "Mechanical ventilation", "Histology", "Immunohistochemistry", "Morphometric analysis", "Western blot analysis", "Quantitative RT-PCR", "Stretch of epithelial cells isolated from fetal rat lungs", "Statistical analysis", "Results", "Physiologic data", "Morphometric analyses", "Lung cell proliferation", "Cell cycle regulators", "Discussion" ]
[ "Introduction of more gentle ventilation strategies -together with surfactant replacement and antenatal corticosteroids- has improved the survival rate of very premature infants. In parallel, the number of infants with ‘new’ bronchopulmonary dysplasia (BPD) [1] has also increased. Currently, infants born at ≤26 weeks of gestation are at the greatest risk of developing such ‘new’ BPD [2], a syndrome of arrested lung development with fewer and larger alveoli and dysmorphic vasculature [3]. BPD can no longer be considered only a pediatric disease because the substantial lung-function abnormalities -and significant symptoms- persist into adulthood [4], [5], [6]. The pathogenesis of BPD is incompletely understood and its treatment is empirical, but mechanical ventilation remains a major risk factor.\nLung development between 24–32 weeks of gestation is characterized by extensive vasculogenesis within the developing terminal saccules, followed by secondary crest formation as well as interstitial extracellular matrix loss and remodeling [7]. This tissue remodeling requires well-coordinated regulation of cell proliferation and apoptosis. Recent studies have shown that prolonged mechanical ventilation increases apoptosis and impairs alveolar septation in newborn mice [8], however the effect of mechanical ventilation on lung cell growth is mostly unknown. In vitro studies have demonstrated that mechanical stretch (5% elongation, 60 cycles per min, 15 min/h for 24 h) and oxygen (95%) can inhibit lung cell proliferation [9], [10], but molecular mechanisms are yet to be determined. Cell proliferation is a precisely coordinated set of events regulated by interaction of gene products that activate or suppress cell cycle progression. A series of cyclins and cyclin-dependent kinases (Cdk) act in concert to drive the cycle forward through the G1, S and G2/M phases [11]. In mammalian cells, G1/S transition is an important checkpoint in the cell cycle. Entry into the cell cycle is initiated by mitogen-stimulated expression of D-type cyclins which activate Cdk4/6. Shortly thereafter, cyclin E expression is increased and cyclin E-Cdk2 complexes are formed, promoting entry into the S phase [12]. While cyclin-Cdk complexes positively drive progression of the cell cycle, Cdk inhibitors (CKI) negatively regulate progression by binding to and inactivating cyclin–Cdks [13]. There are two distinct CKI families in mammalian cells: INK4 proteins, which block the progression of the cell cycle by binding to either Cdk4 or Cdk6 and inhibiting the action of cyclin D; and, Cip/Kip proteins that inhibit a broader spectrum of cyclin-Cdk complexes [14], [15], [16].\nIn this study we determined the effect of prolonged (24 h) mechanical ventilation on lung cell cycle regulators, proliferation and alveolar formation in a newborn rat model [17]. 
We hypothesized that continuous cyclic (over)stretching of the primitive airsacs would adversely affect proliferation of lung cells by influencing cell cycle regulators.", "[SUBTITLE] Ethics statement [SUBSECTION] The study was conducted according to the guidelines of the Canadian Council for Animal Care and with approval of the Animal Care Review Committee of the Hospital for Sick Children (protocol #7217).\nThe study was conducted according to the guidelines of the Canadian Council for Animal Care and with approval of the Animal Care Review Committee of the Hospital for Sick Children (protocol #7217).\n[SUBTITLE] Animal preparation [SUBSECTION] Timed pregnant Wistar rats (Charles River, Oakville, Quebec, Canada) were allowed to deliver and immediately afterwards litters were reduced to 10 pups. Newborn rat pups were anesthetized by i.p. injection of 30 mg kg−1 pentobarbital and a tracheotomy was performed. The trachea was cannulated with a 1 cm 19G cannula and connected to a rodent ventilator (FlexiVent Scireq, Montreal, PQ). Spontaneously breathing, non-ventilated, littermates served as sham controls. One pup per litter was ventilated and a littermate was used as non-ventilated control. Isoflurane was used as general anesthesia during the ventilation period and 0.9% saline (100 ml.kg−1/24 h) was administered subcutaneously by continuous infusion with a 27G needle to prevent dehydration. First rat pups at postnatal days 6, 7, 8, 9, 10 and 14 were ventilated to assess lung cell proliferation. For all subsequent experiments 7-day old rat pups were used. Preliminary experiments were performed to determine ventilator settings [18]. Starting from a normal respiratory rate for newborn rats (150 bpm), tidal volume was adjusted to achieve normal blood gas values after the ventilation period. Animals were monitored by ECG. Rectal temperature was maintained at 37°C using a thermal blanket, lamp and plastic wrap. At the end of the ventilation period a blood sample from the carotid artery was taken for blood gas analysis prior to euthanasia.\nTimed pregnant Wistar rats (Charles River, Oakville, Quebec, Canada) were allowed to deliver and immediately afterwards litters were reduced to 10 pups. Newborn rat pups were anesthetized by i.p. injection of 30 mg kg−1 pentobarbital and a tracheotomy was performed. The trachea was cannulated with a 1 cm 19G cannula and connected to a rodent ventilator (FlexiVent Scireq, Montreal, PQ). Spontaneously breathing, non-ventilated, littermates served as sham controls. One pup per litter was ventilated and a littermate was used as non-ventilated control. Isoflurane was used as general anesthesia during the ventilation period and 0.9% saline (100 ml.kg−1/24 h) was administered subcutaneously by continuous infusion with a 27G needle to prevent dehydration. First rat pups at postnatal days 6, 7, 8, 9, 10 and 14 were ventilated to assess lung cell proliferation. For all subsequent experiments 7-day old rat pups were used. Preliminary experiments were performed to determine ventilator settings [18]. Starting from a normal respiratory rate for newborn rats (150 bpm), tidal volume was adjusted to achieve normal blood gas values after the ventilation period. Animals were monitored by ECG. Rectal temperature was maintained at 37°C using a thermal blanket, lamp and plastic wrap. 
At the end of the ventilation period a blood sample from the carotid artery was taken for blood gas analysis prior to euthanasia.\n[SUBTITLE] Mechanical ventilation [SUBSECTION] Rat pups were ventilated with room air and moderate-VT (8.5 mL.kg−1, RR 150 min−1, PEEP 2 cm H2O) for 8, 12 and 24 h. In a few cases, pups were ventilated for 4 h with high-VT (40 mL.kg−1, RR 30 min−1, PEEP 2 cm H2O). The 7-day old pups weighed 15.5–18.6 g. Dynamic compliance was estimated every 4 h from data obtained during a single-frequency forced oscillation manoeuvre, using a mathematical model-fitting technique according to the specifications of Scireq Inc. (Montreal, PQ). Two h before completion of ventilation pups were injected ip with 50 mg/kg 5-bromo-2-deoxyuridine (BrdU). At completion of ventilation a blood sample was taken from the carotid artery for blood gas analysis and the animals killed by exsanguination. Lung tissues were processed for histology or fresh frozen for molecular/protein analyses.\nRat pups were ventilated with room air and moderate-VT (8.5 mL.kg−1, RR 150 min−1, PEEP 2 cm H2O) for 8, 12 and 24 h. In a few cases, pups were ventilated for 4 h with high-VT (40 mL.kg−1, RR 30 min−1, PEEP 2 cm H2O). The 7-day old pups weighed 15.5–18.6 g. Dynamic compliance was estimated every 4 h from data obtained during a single-frequency forced oscillation manoeuvre, using a mathematical model-fitting technique according to the specifications of Scireq Inc. (Montreal, PQ). Two h before completion of ventilation pups were injected ip with 50 mg/kg 5-bromo-2-deoxyuridine (BrdU). At completion of ventilation a blood sample was taken from the carotid artery for blood gas analysis and the animals killed by exsanguination. Lung tissues were processed for histology or fresh frozen for molecular/protein analyses.\n[SUBTITLE] Histology [SUBSECTION] After flushing whole lungs were infused in situ with 4% (w/v) paraformaldehyde (PFA) in PBS with a constant pressure of 20 cm H2O over 5 minutes to have equalized filling pressure over the entire lung. Under these constant pressure conditions the cannula was removed and the trachea immediately ligated. The lungs were excised and immersed in 4% PFA in PBS overnight, dehydrated in a ethanol/xylene series and embedded in paraffin. Sections of 5 µm were stained with hematoxilin and eosin or stained for elastin using accustain artrazine solution (Sigma, St. Louis MO, USA).\nAfter flushing whole lungs were infused in situ with 4% (w/v) paraformaldehyde (PFA) in PBS with a constant pressure of 20 cm H2O over 5 minutes to have equalized filling pressure over the entire lung. Under these constant pressure conditions the cannula was removed and the trachea immediately ligated. The lungs were excised and immersed in 4% PFA in PBS overnight, dehydrated in a ethanol/xylene series and embedded in paraffin. Sections of 5 µm were stained with hematoxilin and eosin or stained for elastin using accustain artrazine solution (Sigma, St. Louis MO, USA).\n[SUBTITLE] Immunohistochemistry [SUBSECTION] Following sectioning and antigen retrieval by heating in 10 mM sodium citrate pH 6.0, sections were washed in PBS and endogenous peroxidase was blocked in 3% (v/v) H2O2 in methanol. Blocking was done with 5% (w/v) normal goat serum (NGS) and 1% (w/v) bovine serum albumin (BSA) in PBS. 
Sections were then incubated overnight at 4°C with either 1∶50 diluted mouse anti-BrdU (Boehringer Mannheim, Germany) or 1∶400 diluted rabbit anti-phospho-histone H3 (Millipore, Billerica, MA, USA) antibodies (Lab Vision Corporation, Fremont, Canada). Biotinylated rabbit anti-mouse IgG or goat anti-rabbit IgG were used as secondary antibodies, respectively. Color detection was performed according to instruction in the Vectastain ABC and DAB kit (Vector Laboratories, Burlingname, CA, USA). All sections were counterstained with hematoxylin. For quantitative analysis, digital images were captured using a Leica digital imaging system at 20× magnification with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields from each slides, with 3 slides per animal and 4 animals per group.\nFollowing sectioning and antigen retrieval by heating in 10 mM sodium citrate pH 6.0, sections were washed in PBS and endogenous peroxidase was blocked in 3% (v/v) H2O2 in methanol. Blocking was done with 5% (w/v) normal goat serum (NGS) and 1% (w/v) bovine serum albumin (BSA) in PBS. Sections were then incubated overnight at 4°C with either 1∶50 diluted mouse anti-BrdU (Boehringer Mannheim, Germany) or 1∶400 diluted rabbit anti-phospho-histone H3 (Millipore, Billerica, MA, USA) antibodies (Lab Vision Corporation, Fremont, Canada). Biotinylated rabbit anti-mouse IgG or goat anti-rabbit IgG were used as secondary antibodies, respectively. Color detection was performed according to instruction in the Vectastain ABC and DAB kit (Vector Laboratories, Burlingname, CA, USA). All sections were counterstained with hematoxylin. For quantitative analysis, digital images were captured using a Leica digital imaging system at 20× magnification with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields from each slides, with 3 slides per animal and 4 animals per group.\n[SUBTITLE] Morphometric analysis [SUBSECTION] Digital images were captured from either H&E- or elastin-stained slides with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields/slide with 3 slides/animal and 4 animals/group. Tissue fraction was calculated from pixel counts of black/white images [19], mean linear intercepts (Lm) were measured and calculated [20] and the number of elastin-positive secondary septa determined.\nDigital images were captured from either H&E- or elastin-stained slides with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields/slide with 3 slides/animal and 4 animals/group. Tissue fraction was calculated from pixel counts of black/white images [19], mean linear intercepts (Lm) were measured and calculated [20] and the number of elastin-positive secondary septa determined.\n[SUBTITLE] Western blot analysis [SUBSECTION] Lung tissues were lysed, protein content measured [21] and aliquots (40 g protein) were subjected to 10% SDS-PAGE and transferred to PVDF membranes. After blocking with 5% (w/v) skim milk in TBST (20 mM Tris, 137 mM NaCl, 0.1% Tween 20) membranes were incubated with appropriate primary antibody overnight in 4°C. Because of decreased BrdU incorporation and cyclin D1 and E1 expression, we focused on CKIs inhibiting Cdk-2, -4 and -6 [22]. 
Primary antibodies were rabbit anti-p15INK4B (dilution of 1∶500), rabbit anti-p16INK4A (dilution of 1∶1000), mouse anti-p21Waf1/Cip1 (dilution 1∶500), rabbit anti-p27Kip1(dilution 1∶500) and rabbit anti-p57Kip2 (dilution of 1∶1000), rabbit anti-cyclin D1 (dilution of 1∶1000) (all from Cell Signaling Technology, Danvers, USA) and rabbit anti-cyclin E1 (dilution of 1∶1000) (Santa Cruz Biotechnology, Santa Cruz USA). Primary phosphorylated p27Kip1 antibodies were rabbit anti-p27Kip1 (pThr198) (dilution of 1∶400) and rabbit anti-p27Kip1 (pSer10)-R (dilution of 1∶2000; both from Santa Cruz Biotechnology, Santa Cruz, USA), rabbit anti-p27Kip1 (pThr157) (dilution of 1∶300; R&D Systems Inc, Burlington, Canada) and rabbit anti-p27Kip1 (pThr187) (dilution of 1∶400; Novus Biologicals, Littleton, USA). The next day the membranes were washed TBST and incubated with either horseradish peroxidase–conjugated anti-rabbit or anti-mouse IgG (dilution of 1∶1000; Cell Signaling Technology, Danvers, USA). After several washes with TBST, protein bands were visualized using an enhanced chemiluminescence detection kit (Amersham, Piscataway, NJ, USA). Band densities were quantified using Scion Image software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Equal protein loading was confirmed by immunoblotting for β-actin of same membrane.\nLung tissues were lysed, protein content measured [21] and aliquots (40 g protein) were subjected to 10% SDS-PAGE and transferred to PVDF membranes. After blocking with 5% (w/v) skim milk in TBST (20 mM Tris, 137 mM NaCl, 0.1% Tween 20) membranes were incubated with appropriate primary antibody overnight in 4°C. Because of decreased BrdU incorporation and cyclin D1 and E1 expression, we focused on CKIs inhibiting Cdk-2, -4 and -6 [22]. Primary antibodies were rabbit anti-p15INK4B (dilution of 1∶500), rabbit anti-p16INK4A (dilution of 1∶1000), mouse anti-p21Waf1/Cip1 (dilution 1∶500), rabbit anti-p27Kip1(dilution 1∶500) and rabbit anti-p57Kip2 (dilution of 1∶1000), rabbit anti-cyclin D1 (dilution of 1∶1000) (all from Cell Signaling Technology, Danvers, USA) and rabbit anti-cyclin E1 (dilution of 1∶1000) (Santa Cruz Biotechnology, Santa Cruz USA). Primary phosphorylated p27Kip1 antibodies were rabbit anti-p27Kip1 (pThr198) (dilution of 1∶400) and rabbit anti-p27Kip1 (pSer10)-R (dilution of 1∶2000; both from Santa Cruz Biotechnology, Santa Cruz, USA), rabbit anti-p27Kip1 (pThr157) (dilution of 1∶300; R&D Systems Inc, Burlington, Canada) and rabbit anti-p27Kip1 (pThr187) (dilution of 1∶400; Novus Biologicals, Littleton, USA). The next day the membranes were washed TBST and incubated with either horseradish peroxidase–conjugated anti-rabbit or anti-mouse IgG (dilution of 1∶1000; Cell Signaling Technology, Danvers, USA). After several washes with TBST, protein bands were visualized using an enhanced chemiluminescence detection kit (Amersham, Piscataway, NJ, USA). Band densities were quantified using Scion Image software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Equal protein loading was confirmed by immunoblotting for β-actin of same membrane.\n[SUBTITLE] Quantitative RT-PCR [SUBSECTION] Total RNA was extracted from lung tissues and reverse transcribed [14]. Complementary DNA was amplified for target genes cyclin D1, cyclin E1 and p27 as previously described [17], [19]. For relative quantification, polymerase chain reaction signals were compared between groups after normalization using 18S as internal reference. 
Fold change was calculated [23].\nTotal RNA was extracted from lung tissues and reverse transcribed [14]. Complementary DNA was amplified for target genes cyclin D1, cyclin E1 and p27 as previously described [17], [19]. For relative quantification, polymerase chain reaction signals were compared between groups after normalization using 18S as internal reference. Fold change was calculated [23].\n[SUBTITLE] Stretch of epithelial cells isolated from fetal rat lungs [SUBSECTION] Distal fetal lung epithelial cells (day 19 of gestation) were isolated as previously described [24]. Cells were cultured on type-1 collagen-coated BioFlex plates and subjected various durations of cyclic continuous 17% stretch using a FX-4000 Flexercell Strain Unit (Flexercell Int., NC, USA) [25]. Neither cell viability (trypan blue exclusion) nor cell attachment was affected by the duration of applied stretch regimen. Cell lysates were processed for Western Blotting.\nDistal fetal lung epithelial cells (day 19 of gestation) were isolated as previously described [24]. Cells were cultured on type-1 collagen-coated BioFlex plates and subjected various durations of cyclic continuous 17% stretch using a FX-4000 Flexercell Strain Unit (Flexercell Int., NC, USA) [25]. Neither cell viability (trypan blue exclusion) nor cell attachment was affected by the duration of applied stretch regimen. Cell lysates were processed for Western Blotting.\n[SUBTITLE] Statistical analysis [SUBSECTION] Stated otherwise all data are presented as mean ± SD. Data was analyzed using SPSS software version 15 (SPSS Inc, Chicago, IL). Statistical significance (p<0.05) was determined by using one-way ANOVA or Kruskal-Wallis test. Post hoc analysis was performed using Duncan's multiple-range test (data presented as mean ± SD) or Mann-Whitney test (data presented as median and interquartile range).\nStated otherwise all data are presented as mean ± SD. Data was analyzed using SPSS software version 15 (SPSS Inc, Chicago, IL). Statistical significance (p<0.05) was determined by using one-way ANOVA or Kruskal-Wallis test. Post hoc analysis was performed using Duncan's multiple-range test (data presented as mean ± SD) or Mann-Whitney test (data presented as median and interquartile range).", "The study was conducted according to the guidelines of the Canadian Council for Animal Care and with approval of the Animal Care Review Committee of the Hospital for Sick Children (protocol #7217).", "Timed pregnant Wistar rats (Charles River, Oakville, Quebec, Canada) were allowed to deliver and immediately afterwards litters were reduced to 10 pups. Newborn rat pups were anesthetized by i.p. injection of 30 mg kg−1 pentobarbital and a tracheotomy was performed. The trachea was cannulated with a 1 cm 19G cannula and connected to a rodent ventilator (FlexiVent Scireq, Montreal, PQ). Spontaneously breathing, non-ventilated, littermates served as sham controls. One pup per litter was ventilated and a littermate was used as non-ventilated control. Isoflurane was used as general anesthesia during the ventilation period and 0.9% saline (100 ml.kg−1/24 h) was administered subcutaneously by continuous infusion with a 27G needle to prevent dehydration. First rat pups at postnatal days 6, 7, 8, 9, 10 and 14 were ventilated to assess lung cell proliferation. For all subsequent experiments 7-day old rat pups were used. Preliminary experiments were performed to determine ventilator settings [18]. 
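As a quick arithmetic check of the ventilator settings determined above (moderate VT of 8.5 mL/kg at 150 breaths/min for 7-day old pups weighing 15.5–18.6 g, as given in the Mechanical ventilation section), the short sketch below computes the absolute tidal volume and minute ventilation these settings imply; it is purely illustrative and not part of the original analysis.

```python
# Absolute tidal volume and minute ventilation implied by the reported settings.
# The settings come from the Methods above; the calculation is only a plausibility check.
VT_PER_KG_ML = 8.5      # moderate tidal volume, mL/kg
RATE_PER_MIN = 150      # respiratory rate, breaths/min

for weight_g in (15.5, 18.6):                    # reported weight range of 7-day old pups
    vt_ml = VT_PER_KG_ML * weight_g / 1000.0     # grams -> kilograms
    minute_ventilation_ml = vt_ml * RATE_PER_MIN
    print(f"{weight_g} g pup: VT = {vt_ml:.3f} mL, "
          f"minute ventilation = {minute_ventilation_ml:.1f} mL/min")
```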
Starting from a normal respiratory rate for newborn rats (150 bpm), tidal volume was adjusted to achieve normal blood gas values after the ventilation period. Animals were monitored by ECG. Rectal temperature was maintained at 37°C using a thermal blanket, lamp and plastic wrap. At the end of the ventilation period a blood sample from the carotid artery was taken for blood gas analysis prior to euthanasia.", "Rat pups were ventilated with room air and moderate-VT (8.5 mL.kg−1, RR 150 min−1, PEEP 2 cm H2O) for 8, 12 and 24 h. In a few cases, pups were ventilated for 4 h with high-VT (40 mL.kg−1, RR 30 min−1, PEEP 2 cm H2O). The 7-day old pups weighed 15.5–18.6 g. Dynamic compliance was estimated every 4 h from data obtained during a single-frequency forced oscillation manoeuvre, using a mathematical model-fitting technique according to the specifications of Scireq Inc. (Montreal, PQ). Two h before completion of ventilation pups were injected i.p. with 50 mg/kg 5-bromo-2-deoxyuridine (BrdU). At completion of ventilation a blood sample was taken from the carotid artery for blood gas analysis and the animals killed by exsanguination. Lung tissues were processed for histology or fresh frozen for molecular/protein analyses.", "After flushing, whole lungs were infused in situ with 4% (w/v) paraformaldehyde (PFA) in PBS at a constant pressure of 20 cm H2O over 5 minutes to equalize filling pressure over the entire lung. Under these constant pressure conditions the cannula was removed and the trachea immediately ligated. The lungs were excised and immersed in 4% PFA in PBS overnight, dehydrated in an ethanol/xylene series and embedded in paraffin. Sections of 5 µm were stained with hematoxylin and eosin or stained for elastin using Accustain tartrazine solution (Sigma, St. Louis, MO, USA).", "Following sectioning and antigen retrieval by heating in 10 mM sodium citrate pH 6.0, sections were washed in PBS and endogenous peroxidase was blocked in 3% (v/v) H2O2 in methanol. Blocking was done with 5% (w/v) normal goat serum (NGS) and 1% (w/v) bovine serum albumin (BSA) in PBS. Sections were then incubated overnight at 4°C with either 1∶50 diluted mouse anti-BrdU (Boehringer Mannheim, Germany) or 1∶400 diluted rabbit anti-phospho-histone H3 (Millipore, Billerica, MA, USA) antibodies (Lab Vision Corporation, Fremont, Canada). Biotinylated rabbit anti-mouse IgG or goat anti-rabbit IgG were used as secondary antibodies, respectively. Color detection was performed according to the instructions in the Vectastain ABC and DAB kit (Vector Laboratories, Burlingame, CA, USA). All sections were counterstained with hematoxylin. For quantitative analysis, digital images were captured using a Leica digital imaging system at 20× magnification with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields from each slide, with 3 slides per animal and 4 animals per group.", "Digital images were captured from either H&E- or elastin-stained slides with random sampling of all tissue in an unbiased fashion. Images were captured randomly from 15 non-overlapping fields/slide with 3 slides/animal and 4 animals/group. Tissue fraction was calculated from pixel counts of black/white images [19], mean linear intercepts (Lm) were measured and calculated [20] and the number of elastin-positive secondary septa determined.", "Lung tissues were lysed, protein content measured [21] and aliquots (40 µg protein) were subjected to 10% SDS-PAGE and transferred to PVDF membranes. 
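The tissue fraction and mean linear intercept (Lm) measurements cited in the Morphometric analysis section just above are given only by reference ([19], [20]). The following is a minimal sketch, assuming a binarized section image (True = tissue, False = airspace) and horizontal test lines; the thresholding step, pixel size and the exact Lm formulation of the cited methods may differ, so this illustrates the idea rather than the study's implementation.

```python
import numpy as np

def tissue_fraction(mask: np.ndarray) -> float:
    """Fraction of section pixels classified as tissue (True) rather than airspace."""
    return float(mask.sum()) / mask.size

def mean_linear_intercept(mask: np.ndarray, pixel_size_um: float, step: int = 20) -> float:
    """Estimate Lm from horizontal test lines: total airspace length traversed
    divided by the number of air/tissue intercepts (one common formulation)."""
    total_air_um, intercepts = 0.0, 0
    for row in mask[::step]:                                  # every `step`-th row is a test line
        total_air_um += float((~row).sum()) * pixel_size_um   # airspace length on this line
        intercepts += int(np.count_nonzero(np.diff(row.astype(np.int8))))
    return total_air_um / max(intercepts, 1)

# Illustrative use on a synthetic mask; a real analysis would threshold an H&E image.
rng = np.random.default_rng(0)
mask = rng.random((1024, 1024)) < 0.35
print(f"tissue fraction = {tissue_fraction(mask):.2f}, "
      f"Lm = {mean_linear_intercept(mask, pixel_size_um=0.5):.1f} um")
```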
After blocking with 5% (w/v) skim milk in TBST (20 mM Tris, 137 mM NaCl, 0.1% Tween 20) membranes were incubated with appropriate primary antibody overnight in 4°C. Because of decreased BrdU incorporation and cyclin D1 and E1 expression, we focused on CKIs inhibiting Cdk-2, -4 and -6 [22]. Primary antibodies were rabbit anti-p15INK4B (dilution of 1∶500), rabbit anti-p16INK4A (dilution of 1∶1000), mouse anti-p21Waf1/Cip1 (dilution 1∶500), rabbit anti-p27Kip1(dilution 1∶500) and rabbit anti-p57Kip2 (dilution of 1∶1000), rabbit anti-cyclin D1 (dilution of 1∶1000) (all from Cell Signaling Technology, Danvers, USA) and rabbit anti-cyclin E1 (dilution of 1∶1000) (Santa Cruz Biotechnology, Santa Cruz USA). Primary phosphorylated p27Kip1 antibodies were rabbit anti-p27Kip1 (pThr198) (dilution of 1∶400) and rabbit anti-p27Kip1 (pSer10)-R (dilution of 1∶2000; both from Santa Cruz Biotechnology, Santa Cruz, USA), rabbit anti-p27Kip1 (pThr157) (dilution of 1∶300; R&D Systems Inc, Burlington, Canada) and rabbit anti-p27Kip1 (pThr187) (dilution of 1∶400; Novus Biologicals, Littleton, USA). The next day the membranes were washed TBST and incubated with either horseradish peroxidase–conjugated anti-rabbit or anti-mouse IgG (dilution of 1∶1000; Cell Signaling Technology, Danvers, USA). After several washes with TBST, protein bands were visualized using an enhanced chemiluminescence detection kit (Amersham, Piscataway, NJ, USA). Band densities were quantified using Scion Image software (Version 1.6, National Institutes of Health, Bethesda, MD, USA). Equal protein loading was confirmed by immunoblotting for β-actin of same membrane.", "Total RNA was extracted from lung tissues and reverse transcribed [14]. Complementary DNA was amplified for target genes cyclin D1, cyclin E1 and p27 as previously described [17], [19]. For relative quantification, polymerase chain reaction signals were compared between groups after normalization using 18S as internal reference. Fold change was calculated [23].", "Distal fetal lung epithelial cells (day 19 of gestation) were isolated as previously described [24]. Cells were cultured on type-1 collagen-coated BioFlex plates and subjected various durations of cyclic continuous 17% stretch using a FX-4000 Flexercell Strain Unit (Flexercell Int., NC, USA) [25]. Neither cell viability (trypan blue exclusion) nor cell attachment was affected by the duration of applied stretch regimen. Cell lysates were processed for Western Blotting.", "Stated otherwise all data are presented as mean ± SD. Data was analyzed using SPSS software version 15 (SPSS Inc, Chicago, IL). Statistical significance (p<0.05) was determined by using one-way ANOVA or Kruskal-Wallis test. Post hoc analysis was performed using Duncan's multiple-range test (data presented as mean ± SD) or Mann-Whitney test (data presented as median and interquartile range).", "[SUBTITLE] Physiologic data [SUBSECTION] Blood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). 
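The relative quantification described in the Quantitative RT-PCR section above (normalization to 18S, fold change per [23]) is most commonly performed with the 2^-ΔΔCt method; the sketch below assumes that formulation. The Ct values are invented, so the snippet illustrates the calculation rather than reproducing the study's data.

```python
import numpy as np

def fold_change_ddct(ct_target, ct_18s, ct_target_ctrl, ct_18s_ctrl):
    """2^-ddCt relative expression: dCt = Ct(target) - Ct(18S reference);
    ddCt = mean dCt(ventilated) - mean dCt(non-ventilated control)."""
    d_ct = np.asarray(ct_target) - np.asarray(ct_18s)
    d_ct_ctrl = np.asarray(ct_target_ctrl) - np.asarray(ct_18s_ctrl)
    return 2.0 ** -(d_ct.mean() - d_ct_ctrl.mean())

# Invented Ct values for cyclin D1 in ventilated vs. non-ventilated lungs, 18S as reference.
fold = fold_change_ddct(ct_target=[26.1, 26.4, 25.9], ct_18s=[12.0, 12.2, 11.9],
                        ct_target_ctrl=[24.0, 24.2, 23.8], ct_18s_ctrl=[12.1, 12.0, 12.2])
print(f"cyclin D1 fold change vs. control: {fold:.2f}")   # values <1 indicate down-regulation
```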
These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.\nBlood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.\n[SUBTITLE] Morphometric analyses [SUBSECTION] Seven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. (B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.\nSeven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). 
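Group comparisons such as those behind the p<0.05 markers above follow the statistical plan given in the Methods: Kruskal-Wallis across groups with Mann-Whitney tests post hoc for data summarized as medians. A minimal sketch of that non-parametric branch is shown below with invented values; the parametric branch (one-way ANOVA with Duncan's test) is not included.

```python
from itertools import combinations
from scipy import stats

# Invented example values (e.g. a morphometric index) for three ventilation groups.
groups = {
    "non-ventilated": [11.8, 12.5, 10.9, 13.1],
    "MV 12 h": [7.4, 8.1, 6.9, 7.8],
    "MV 24 h": [1.2, 0.8, 1.5, 1.1],
}

h_stat, p_omnibus = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_omnibus:.4f}")

if p_omnibus < 0.05:   # pairwise Mann-Whitney tests only if the omnibus test is significant
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        _, p_pair = stats.mannwhitneyu(a, b, alternative="two-sided")
        print(f"  {name_a} vs {name_b}: p = {p_pair:.4f}")
```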
The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. (B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.\n[SUBTITLE] Lung cell proliferation [SUBSECTION] Lung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). Next, 7-day old rat pups were ventilated for all subsequent experiments. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 4B) the unchanged ratio suggest that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation; in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal gestation, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. 
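The BrdU labelling index referred to above is the percentage of BrdU-positive nuclei among all nuclei counted in the sampled fields (per the Methods, 15 fields per slide and 3 slides per animal). The sketch below shows one straightforward way to pool such field counts; the counts themselves are invented.

```python
from statistics import mean

# (BrdU-positive nuclei, total nuclei) per captured field for one animal; values are invented.
fields = [(48, 410), (37, 395), (52, 430), (29, 377), (44, 402)]

pooled_index = 100 * sum(pos for pos, _ in fields) / sum(total for _, total in fields)
per_field_mean = mean(100 * pos / total for pos, total in fields)
print(f"BrdU labelling index: pooled = {pooled_index:.1f}%, per-field mean = {per_field_mean:.1f}%")
```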
*p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nLung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). Next, 7-day old rat pups were ventilated for all subsequent experiments. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 4B) the unchanged ratio suggest that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation; in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal gestation, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. *p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\n[SUBTITLE] Cell cycle regulators [SUBSECTION] mRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 5B, 5C, 6A). The amount of p27Kip1 was 1.5-fold increased after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. 
A 12 h ventilation decreased phosphorylation of p27Kip1at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression was noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24 h-ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).\nmRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 
5B, 5C, 6A). The amount of p27Kip1 was 1.5-fold increased after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. A 12 h ventilation decreased phosphorylation of p27Kip1at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression was noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24 h-ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. 
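The immunoblot quantifications above rest on band densitometry normalized to β-actin on the same membrane. A minimal sketch of that normalization and of the resulting fold change is given below; the band intensities are invented and would in practice come from Scion Image or ImageJ measurements of the scanned blots.

```python
# Beta-actin-normalized densitometry; intensity values are invented for illustration.
bands = {
    "non-ventilated": {"p27Kip1": 1800.0, "beta_actin": 9200.0},
    "MV 24 h": {"p27Kip1": 4100.0, "beta_actin": 9350.0},
}

normalized = {group: v["p27Kip1"] / v["beta_actin"] for group, v in bands.items()}
fold_change = normalized["MV 24 h"] / normalized["non-ventilated"]
print(f"p27Kip1 (normalized to beta-actin), fold change after 24 h MV: {fold_change:.2f}")
```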
Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).", "Blood gases were in the normal range after 8, 12 and 24 h of ventilation (Table 1). Mean airway pressures, peak pressures and delivered VT remained constant up to 8 h of ventilation [18], but altered slightly after 12 h of ventilation compared to baseline (Table 1). Dynamic compliance of the respiratory system was constant up to 8 h of ventilation [18] decreased after 12 h and remained stable afterwards (Fig. 1). These results are indicative of the impact of 8 h of ventilation that did not subsequently worsen.\nDynamic compliance decreased during first 12 h of ventilation with room air and low/moderate VT but remained stable during the next 12 h. Data are mean ± SD, n = 12 rat pups per time group. *p<0.05 prolonged versus 1-min ventilation.\nValues represent means ± SD, n = 12 animals per group.\n*p<0.05 versus values at 0 hrs. Ppeak, peak pressure; Pmean, mean pressure; PEEP, positive-end expiratory pressure.", "Seven-day old rat pups ventilated for 12 and 24 h had fewer and larger alveoli when compared to the lungs of non-ventilated 8 day-old pups (Fig. 2A). The tissue-to-air ratio corroborated these findings; it decreased after 12 h of ventilation and declined further during the next 12 h of ventilation (Fig. 2B). To quantify alveolar development, we calculated the number of elastin-positive secondary crests per unit area (Fig. 2D). The number of secondary crests -indicating alveolar formation- increased significantly between the 7th and 8th postnatal days in non-ventilated rat pups. The number of secondary crests increased after 12 h of ventilation when compared to day 7 controls. In contrast, the number of secondary crests was significantly lower in pups ventilated for 24 h vs. non-ventilated day 8 control pups, even when corrected for tissue fraction. To further evaluate alveolar development, we measured the mean linear intercept (Lm; Fig. 2C). Ventilation increased the Lm after 12 h, and more so after 24 h.\n(A) Histology after mechanical ventilation: (A1) non-ventilated 7-day old rat (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat. (B) Mechanical ventilation for 12 and 24 h significantly increased alveolar airspace (reduction in tissue-to-air ratio (A) as well as increase in mean linear intercept (D)) but decreased number of elastin-positive secondary septa (C). Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.\nTogether the data suggest that during the first 12 h of ventilation alveolar space increases because of hyperinflation while a further increase of alveolar space during the next 12 h of ventilation is in part due to arrest in alveolar development as well as hyperinflation.", "Lung cell proliferation was assessed in non-ventilated vs. ventilated rat pups at postnatal days 6, 7, 8, 9, 10 and 14. In non-ventilated rats, the number of proliferating lung cells was greatest at postnatal day 6 (BrdU labelling index: ∼12%), which declined gradually to almost undetectable at day 15 (Fig. 3). 
Ventilation for 24 h clearly inhibited lung cell proliferation in pups of all studied ages (days 6-14). For all subsequent experiments, 7-day old rat pups were ventilated. Proliferation was not affected by 8 h of ventilation (data not shown) but longer durations of ventilation significantly decreased the number of proliferating cells (Fig. 4A, B). The ratio of proliferating mesenchymal and epithelial cells did not significantly differ between non-ventilated pups and pups ventilated for 8 and 12 h, respectively (0.73±0.05 vs. 0.65±0.03 and 0.67±0.1). Since a 12-h ventilation decreased the total number of proliferating cells (Fig. 4B), the unchanged ratio suggests that cell proliferation of both tissue layers was equally affected by mechanical ventilation. Hardly any proliferating cells were seen after 24 h of ventilation, in agreement with a reduction in cell proliferation in both tissue layers. The almost total arrest in lung cell proliferation by prolonged (24 h) ventilation was confirmed by anti-PH3 immunochemistry (PH3-positive cells decreased from 8 to 1% of total).\nAlthough the BrdU labeling index decreased gradually with advancing postnatal age, mechanical ventilation for 24 h inhibited cell proliferation at every postnatal age. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 4 rat pups per time group. *p<0.05.\nImmunohistochemistry ((A1) non-ventilated 7-day old rat, (A2) 7-day old rat ventilated for 12 h, (A3) 7-day old rat ventilated for 24 h, (A4) non-ventilated 8-day old rat) illustrates reduction in BrdU uptake (brown color) with duration of ventilation. (B) BrdU labeling index significantly decreased after 12 and 24 h of mechanical ventilation. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles, n = 12 rat pups per time group. MV, mechanical ventilation. * p<0.05.", "mRNA levels of lung cyclin D1 and E1 were significantly down-regulated after 8, 12 and 24 h of ventilation while that of p27Kip1 was increased (Fig. 5A). Immunoblot (i.e. protein) analysis of lungs ventilated for 24 h confirmed these mRNA changes of cyclin D1, E1 and p27Kip1 (Figs. 5B, 5C, 6A). The amount of p27Kip1 was increased 1.5-fold after 12 h of ventilation (not shown). Other members of the Cip/Kip family of CKIs were either increased (p57Kip2, Fig. 6B) or unchanged (p21Waf/Cip1, not shown) by 24 h of ventilation. In contrast, CKIs belonging to the INK4 family were either reduced (p16INK4a) or not affected (p15INK4b) by 24 h of ventilation (Fig. 6C, D). p27Kip1 can be phosphorylated at different sites, which influences its localization and activity [26]. A 12-h ventilation decreased phosphorylation of p27Kip1 at Thr157 (Fig. 7A) but did not affect phosphorylation of Thr198 (not shown). However, mechanical ventilation for 24 h decreased p27Kip1 phosphorylation at Thr157, Thr187 and Thr198, thereby promoting stability and nuclear localization (Fig. 7B–D). Similar -but more rapid- changes in cell cycle regulators were noted when 7-day newborn rats were ventilated with high VT. Although β-actin can be responsive to stretch, no significant differences in β-actin expression were noted between ventilated animals and controls (not shown).\nMechanical ventilation for 24 h significantly decreased cyclin D1 and E1 mRNA (A) and protein (B and C) levels in lungs of 7-day old rats. In contrast, p27 kip1 mRNA increased (A). 
Inserts in (B) and (C) show cyclin D1 (B) and cyclin E1 (C) immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; qPCR, n = 6 rat pups per time group; immunoblot, n = 3 rat pups per time group. MV, mechanical ventilation. *p<0.05 versus non-ventilated group, § p<0.05 versus 24 h ventilation.\nMechanical ventilation for 24 h significantly increased p27 Kip1 (A) and p57 Kip2 (B) protein levels. In contrast, p16 INK4a protein (D) was decreased by ventilation while p15 INK4b (C) was unchanged. Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nA 24-h ventilation significantly decreased Thr157-phosphorylated p27Kip1 (A), Thr187-phosphorylated p27Kip1 (B) and Thr198-phosphorylated p27Kip1 (C). Phosphorylation of threonine 157 was already reduced after 12 h of ventilation (D). Inserts show immunoblots of lung tissue of 7-day old rats ventilated for 24 h and non-ventilated 8-day old rats (controls). Blots were reprobed with β-actin for equal loading and transfer. Medians with 25th and 75th quartiles are shown, bars are 5th and 95th percentiles; n = 3 rat pups per time group. *p<0.05 versus non-ventilated group.\nHigh VT reduced the amount of D1 and D2 cyclins within 1 hour, while that of Cdk inhibitors p27Kip1 and p57Kip2 increased (Fig. 8A); in contrast, p16INK4a content was decreased by high-VT ventilation.\nHigh VT ventilation of 7-day old rat lungs (A) and a continuous cyclic 17% stretch of fetal lung epithelial cells (B) rapidly decreased type-D cyclins and p16INK4a while increasing Kip proteins, in particular p27Kip1. Blots were reprobed with β-actin for equal loading and transfer. Representative blots of 2 experiments carried out in duplicate (A) or 3 experiments (B).\nWe do not know in which particular tissue layer (epithelium, mesenchyme) these changes occurred in vivo, but they at least occur in epithelial cells as subjecting ex vivo fetal lung epithelial cells to cyclic continuous 17% stretch resulted in similar patterns of alteration in cell cycle regulators (Fig. 8B).", "The hallmark of ‘new’ BPD is arrested alveolarization [1], but the molecular and cellular basis of the alveolar arrest remains mostly unknown. Alveolarization occurs as the immature saccules, which form the lung parenchyma at birth, are subdivided into smaller units by the formation and extension of secondary septa; new tissue ridges are lifted off the existing primary septa and grow in a centripetal direction into the airspaces. This process, called septation, is mainly postnatal (human: 36 weeks-infancy; rat: Pnd5–Pnd21) [7], [27]. Before septation of the air spaces starts, the lung expands for a short period of time, and the cells of the inter-airway walls actively proliferate, peaking at day 5 in rats and steadily declining thereafter [28]. This active cell proliferation takes place just at the beginning of the septation of the distal airways. With the use of a newborn rat model [18] we demonstrate here that mechanical ventilation for 24 h with room air and moderate VT results in cell cycle arrest and reduced alveolar septation. 
This ventilation strategy (room air and moderate VT) was chosen to avoid/minimize lung injury.\nIn rats, lungs at birth have a saccular appearance and alveolarization is an exclusively postnatal (between P4 and P21) event, which makes this model relevant to the infant population developing BPD. However, major differences exist between mechanically ventilated newborn rats and premature born infants. Newborn rats have immature lung architecture at birth, but they do not need mechanical ventilation to survive (likely due to differences in airway structure, with large airways extending almost to the lung periphery and quickly reducing in diameter to the alveoli) and they do not lack surfactant. Infants with BPD demonstrate interstitial thickening that may partly be due to fibroproliferation while in rat pups mechanical ventilation of 24 h caused cell arrest in both mesenchymal and epithelial cell layers. Despite these differences, our results suggest that the observed cell cycle arrest is due to increased expression of two CKIs (i.e. p27Kip1 and p57Kip2) that are members of the Cip/Kip family; the other member, p21waf/Cip1, was not affected by 24 h of mechanical ventilation.\nKnock-in mouse models have shown that p27Kip1 and p57Kip2 are interchangeable in vivo\n[29], suggesting similar mechanisms of regulation. Mechanical strain has been recognized as playing an important role in the regulation of fetal lung cell proliferation. Indeed the stimulatory effect of mechanical stretch (i.e. increased intratracheal pressure) on fetal lung growth has been extensively studied in tracheal occlusion (TO) models [30], [31], where the number of proliferating alveolar type II cells significantly increased. Fetal sheep, exposed to TO during the alveolar stage of lung development, showed an increase in alveolar type II cells between days 2-4 after TO [31]. This proliferative response of fetal lung cells to strain has also been demonstrated in vitro. Intermittent cyclic 5% stretching (simulating normal fetal breathing movements) of distal fetal rat lung cells (epithelial cells and fibroblasts) increased cell proliferation [32]. However, a continuous cyclic 17% stretch (simulating mechanical ventilation [33]) for 24 h inhibited fetal lung cell proliferation (unpublished results), in agreement with our in vivo findings of a proliferative arrest after 24 h of mechanical ventilation. In the present study, continuous cyclic 17% stretch of fetal lung epithelial cells caused similar alterations in cell cycle regulators as observed in mechanically ventilated newborn lungs in vivo, namely increased levels of p27Kip1 and a decrease in the amount of cyclin D1. CKI members of the Cip/Kip family (p21WAF1/Cip1, p27Kip1 and p57Kip2) preferentially inhibit cyclin-Cdk2 complexes [16]. How mechanical stretch influences CKIs is unknown. In many cancers, the ras/raf/mitogen activated protein kinase (MAPK) pathway increases p27Kip1 proteolysis while downstream effectors of the PI-3K pathway such as protein kinase B (also known as Akt) predominantly regulate p27Kip1 subcellular localization [26]. Although the MAPK pathway is activated by ventilation/stretch [34], [35] we found nuclear p27Kip1 accumulation instead of degradation. Thus, MAPK may regulate p27Kip1 differently in normal compared to cancer cells. The PI3K-Akt pathway during ventilation/stretch remains to be investigated. 
Mechanical ventilation of newborn rats triggers an inflammatory response [17], [18] and various inflammatory mediators including tumor necrosis factor-α (TNFα), interleukin-6 and transforming growth factor-β (TGF-β) have been shown to induce p21WAF1/Cip1 expression [22], [36], [37]. Also p15Ink4b is induced by TGF-β [38]. In the current study, TGFβ1 mRNA expression was not changed after 24 h of ventilation (data not shown) and, indeed, neither p21WAF1/Cip1 nor p15Ink4b expression was affected by mechanical ventilation.\nThe amount of p27Kip1 is regulated at the level of its synthesis (transcription, translation), degradation and localization [39]. During the G0 phase, it accumulates in the nucleus and inhibits cyclin-Cdk complexes. In response to growth stimuli, p27Kip1 translocates from the nucleus to the cytoplasm during G1 phase and is degraded by the proteasome after ubiquitination [39], permitting the cell cycle to progress to S phase. Several signaling pathways that alter p27Kip1 phosphorylation influence its subcellular localization and function. For example, phosphorylation of the following essential sites regulates important functions: Thr157 prevents nuclear import; Ser10 mediates nuclear export; Thr198 promotes cytoplasmic translocation and increases p27-dependent motility, which may be important to prepare cells for shape changes in later phases of the cell cycle; and Thr187 results in proteolysis [26], [39]. In the present study, mechanical ventilation for 24 h increased the transcription of p27Kip1 and altered its phosphorylation: less phosphorylation of Thr157 (increasing nuclear import), Thr198 (decreasing nuclear export) and Thr187 (reduced proteolysis). No significant changes in Ser10 phosphorylation were noted (not shown). Together, these alterations in p27Kip1 phosphorylation favour its nuclear localization and stability. In addition, the reduced phosphorylation of p27Kip1 at Thr157 and Thr198 negatively affects the assembly function of p27Kip1 for cyclin D1-Cdk4, thereby negatively affecting cell cycle progression [26].\nThe second family of CKIs are INK4 proteins (p16INK4a, p15INK4b, p18INK4c and p19INK4d); they inhibit cyclin D-dependent kinases Cdk-4 and -6 [14], [15] and are thus specific for early G1 phase. In the present study, we found a significant reduction in p16INK4a protein after ventilation with low, moderate or high VT. In addition to Cdk inhibition and G1 growth arrest, p16INK4a plays a role in regulating apoptosis. It has been shown that p16INK4a-deficiency increases apoptosis in osteosarcoma U2OS and mouse embryonic fibroblast (MEF) cells exposed to ultraviolet (UV) light [40], because of down-regulation of the anti-apoptotic protein Bcl-2. In contrast, the pro-apoptotic protein Bax was down-regulated in p16INK4a expressing cells [40]. Thus, p16INK4a appears to control apoptosis through the intrinsic mitochondrial death pathway. Prolonged mechanical ventilation has been shown to significantly increase lung cell apoptosis in newborn mouse lungs [8]. Although p16INK4a levels were decreased in the present study, it remains to be determined whether it plays a role in ventilator-induced apoptosis.\nAnother risk factor for BPD is oxygen [1]. Hyperoxia has been shown to interfere with cell-cycle progression in vitro [36], [41], [42] and hyperoxia-induced G1 arrest appears to be mediated by p21Waf1/Cip1 [43], [44]. 
The hyperoxic induction of p21Waf1/Cip1 is p53-dependent [44] and its increase promotes survival of cells exposed to continuous oxidative stress by maintaining anti-apoptotic Bcl-2X(L) expression [45]. Hyperoxic-ventilated premature baboons delivered at 125 and 140 days of gestation have increased p53 and p21Waf1/Cip1 expression [46], [47]. In the present study, we did not assess p53 but the absence of p21Waf1/Cip1 induction by 24 h of mechanical ventilation with room air suggests that p53 is likely not involved in ventilation-induced cell cycle arrest in these studies.\nThe increase of p27Kip1 and p57Kip2 by mechanical stretch in vitro and in vivo coincided with a reduced expression of cyclins D1 and E1, both of which are essential for cell cycle progression through G1 and entry in S phase. D-type cyclins are induced by mitogenic stimuli in quiescent cells. After association with Cdk4/6 and activation by Cdk activating kinase, they promote entry into the G1 phase, thereby triggering cyclin E expression. Cyclin E binds to Cdk2 and facilitates transition from G1 to S phase [22]. Both p27Kip1 and p57Kip2 are potent inhibitors of cyclin E-dependent kinase Cdk2, but at high concentrations they also block Cdks4/6. In addition, it is plausible that cell cycle progression is inhibited due to the reduced phosphorylation of p27Kip1 at critical threonines (Thr157, Thr198) which negatively affects the assembly function of p27Kip1 for cyclin D1-Cdk4 complexes [48]. The down-regulation of cyclin D1 and E1 expression suggests a G1 cell cycle arrest, a conclusion that is supported by the absence of BrdU incorporation (S-phase event) and positive PH3 staining (M-phase marker). In the 125-day premature born baboon model of BPD, the animals received ventilator support and oxygen as needed to achieve normal blood-gas values [49], and such treatment increased pulmonary expression of cyclin D1 and E at day 6 while prolonged ventilation and oxygen exposure led to a decrease in cyclin E [46]. It is possible that lung cells were initially undergoing repair by increasing proliferation, but that prolonged exposure impairs the expression of cyclins, resulting in failure of repair and inhibition of further development. Furthermore, increased levels of the Cdk inhibitor p21Waf1/Cip1 in the baboon BPD model [46] suggests that G1 growth arrest may occur in infants with BPD. Unfortunately, the expression of p27Kip1 or p57Kip2 has not been investigated in the baboon BPD model.\nIn summary, we conclude that mechanical ventilation for 24 h using moderate VT without supplemental O2 causes G1 cell cycle arrest of lung cells in newborn rats due to increased transcription and altered phosphorylation (in favour of nuclear localization) of Kip CKIs, and down-regulation of cyclins D and E. This proliferative arrest may cause a reduction in alveolarization, resulting in alveolar simplification. Such identification of ventilation-induced CKIs may have therapeutic potential for the prevention -or treatment- of arrested alveolarization in ventilated premature infants." ]
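The proliferation and morphometry read-outs used throughout the Results above (BrdU labeling index, PH3-positive fraction, elastin-positive secondary crests per unit area) are simple count-based ratios. The following is a minimal sketch of that arithmetic with invented counts; the actual quantification was performed on stained lung sections, and every variable name and value below is hypothetical.

```python
# Hedged illustration of the count-based indices; all numbers are invented.
brdu_positive = 240           # hypothetical BrdU-positive nuclei counted in the sampled fields
total_nuclei = 2000           # hypothetical total nuclei in the same fields
labeling_index = 100.0 * brdu_positive / total_nuclei        # percent of nuclei in S phase
print(f"BrdU labeling index: {labeling_index:.1f}%")         # ~12% resembles the day-6 value

secondary_crests = 85         # hypothetical elastin-positive secondary crests counted
field_area_mm2 = 1.6          # hypothetical analysed area
tissue_fraction = 0.45        # hypothetical tissue-to-air correction factor
crests_per_mm2 = secondary_crests / field_area_mm2
crests_per_mm2_tissue = crests_per_mm2 / tissue_fraction     # corrected for tissue fraction
print(f"Secondary crests: {crests_per_mm2:.1f}/mm2 "
      f"({crests_per_mm2_tissue:.1f}/mm2 after tissue-fraction correction)")
```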
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Pubertal timing and body mass index gain from birth to maturity in relation with femoral neck BMD and distal tibia microstructure in healthy female subjects.
21359672
Recent data point to a relationship between BMI change during childhood and hip fracture risk in later life. We hypothesized that BMI development is linked to variation in pubertal timing as assessed by menarcheal age (MENA), which, in turn, is related to peak bone mass (PBM) and hip fracture risk in the elderly.
INTRODUCTION
We studied, in a cohort of 124 healthy females, the relationship between MENA and BMI from birth to maturity, and DXA-measured femoral neck (FN) aBMD at 20.4 years. At this age, we also measured bone strength-related microstructural components of the distal tibia by HR-pQCT.
METHODS
At 20.4 ± 0.6 year, FN aBMD (mg/cm(2)), cortical thickness (μm), and trabecular density (mg HA/cm(3)) of distal tibia were inversely related to MENA (P = 0.023, 0.015, and 0.041, respectively) and positively to BMI changes from 1.0 to 12.4 years (P = 0.031, 0.089, 0.016, respectively). Significant inverse (P < 0.022 to <0.001) correlations (R = -0.21 to -0.42) were found between MENA and BMI from 7.9 to 20.4 years, but neither at birth nor at 1.0 year. Linear regression indicated that MENA Z-score was inversely related to BMI changes not only from 1.0 to 12.4 years (R = -0.35, P = 0.001), but also from 1.0 to 8.9 years, (R = -0.24, P = 0.017), i.e., before pubertal maturation.
RESULTS
BMI gain during childhood is associated with pubertal timing, which in turn, is correlated with several bone traits measured at PBM including FN aBMD, cortical thickness, and volumetric trabecular density of distal tibia. These data complement the reported relationship between childhood BMI gain and hip fracture risk in later life.
CONCLUSION
[ "Absorptiometry, Photon", "Adolescent", "Body Height", "Body Mass Index", "Bone Density", "Child", "Female", "Femur Neck", "Humans", "Menarche", "Prospective Studies", "Tibia", "Time Factors", "Tomography, X-Ray Computed", "Young Adult" ]
3169779
Introduction
In healthy human subjects, bone mineral mass follows a trajectory from birth on to attain a maximal value, the so-called peak bone mass (PBM), by the end of the second or the beginning of the third decade, according to both gender and skeletal sites examined [1]. Later menarcheal age was shown to be a risk factor for reduced bone mineral mass in postmenopausal women [2–7] and increased prevalence of fragility fractures at several sites of the skeleton [8–11]. The negative influence of later menarcheal age on bone mineral mass observed in postmenopausal women is already expressed long before menopause, as it was observed in middle-aged premenopausal women with a mean age of 45 years, and in healthy young adult females in their very early twenties [12]. Furthermore, this influence of pubertal timing on peak bone mass was found to be predetermined before the onset of pubertal maturation in a prospective follow-up study from age 8 to 20 years [13]. This suggested that both pubertal timing and bone traits may be under the influence of common genetic factors [14]. The risk of hip fracture is dependent upon the amount of areal bone mineral density (aBMD) or bone mineral content (BMC) as assessed by osteodensitometry at the level of the proximal femur, particularly in the femoral neck (FN). Longitudinal studies of women ranging from 20 to 94 years with follow-up periods from 16 to 22 years showed that the average annual rate of bone loss was relatively constant and tracked well within individuals [15, 16]. Furthermore, a high correlation was recorded between the baseline aBMD values and those obtained after 16 (R = 0.83) [15] and 22 (R = 0.80) years of follow-up [16]. This tracking pattern of aBMD is thus maintained over six decades of adult life. Such a notion has two important implications. First, the prediction of hip fracture risk based on one single measurement of FN aBMD remains reliable in the long term [15, 16]. Second, within the wide range of FN aBMD values little variation occurs during adult life in individual Z-scores or percentiles. Hence, it can be inferred that bone mass acquired by the end of the growth period appears to be more important than bone loss occurring during adult life [17]. This tracking pattern of FN aBMD was also reported in healthy females, from prepuberty to peak bone mass attainment [18–20]. In fact, since PBM is under strong genetic influence [21–23], it can be expected that bone mineral density and size are found to significantly track during growth in healthy populations throughout the world [18, 20, 24–26]. Growth in infancy was reported to be associated with BMC in later life [27]. The risk of hip fracture in the elderly was shown to be related to early variation in height and weight growth [28, 29]. Very recently, in a study of 6,370 women born in Finland, reduction in body mass index (BMI) gain between 1 and 12 years of age was associated with an increased risk of hip fracture in later life [30]. Two potential explanations for this link between reduction in Z-score for BMI and later fracture risk are discussed by the authors: first, a difference in pubertal timing; second, a slowing of growth in response to adverse environmental influences [30]. The authors concluded that thinness in childhood is a risk factor for hip fracture in later life, either through a direct effect of low fat mass on bone mineralization or as a reflection of altered timing of pubertal maturation. 
In that study, however, the timing of puberty, as precisely assessed by prospectively recorded menarcheal age, was not determined [30], making it uncertain whether this important determinant of FN PBM and of subsequent premenopausal FN aBMD [12] could be implicated in this association. In the present report, we tested the hypothesis that variations in body growth during infancy and childhood are related to pubertal timing which, in turn, is a determinant of FN peak bone mass. Data are presented on the relationship between menarcheal age and body weight (BW), height (H) and BMI from birth to 20 years, and on FN aBMD prospectively measured from prepuberty to maturity in a cohort of healthy females. In addition to FN PBM measurements, we also analyzed whether the impact of BMI as linked to pubertal timing was detectable on bone strength-related microstructure, as assessed by high resolution peripheral computerized tomography (HR-pQCT) at the level of the distal tibia.
null
null
Results
Anthropometric variables of the whole cohort from birth on and the development of DXA-measured FN aBMD from prepuberty to the early twenties are described in Table 1. All values are within the reference ranges of the corresponding population of healthy female subjects. Among the 124 young women, 64 (51.6%) were current (n = 50) or previous smokers (n = 14). The use of a contraceptive pill for more than 3 months concerned 24 (19.4%) young women. There was no significant difference between girls with (n = 96) and without (n = 28) available birth and infancy weight data in terms of BMI, femoral neck aBMD and distal tibia pQCT values at the mean age of 20.4 years. At birth and at 1.0 year of age, there was no relationship between future menarcheal age, taken as a precise assessment of pubertal timing, and BMI (Table 2). In contrast, highly significant inverse regression coefficients (β) were recorded at the ages of 7.9 and 8.9 years, i.e., when all girls were still prepubertal as indicated in the legend of Table 1. The inverse regression coefficient became maximally negative at the age of 12.4 years (β = −0.455, P < 0.001). At this age, MENA explained 18% of the BMI variance (R2 = 0.18) (Table 2). Afterwards, the negative slope of the regression of BMI on MENA was less steep, but remained statistically significant at the beginning of the third decade (Table 2).

Table 1. Anthropometric and femoral neck aBMD data from birth to 20.4 years in healthy girls

Age (years)    n     Weight (kg)   Height (cm)   BMI (kg/m2)   FN aBMD (mg/cm2)
Birth          115   3.2 ± 0.4     49.3 ± 2.1    13.0 ± 1.2    NA
1              96    9.2 ± 0.9     73.9 ± 3.4    16.9 ± 1.4    NA
7.9 ± 0.5      124   26.5 ± 4.1    127.7 ± 5.9   16.2 ± 1.8    634 ± 74
8.9 ± 0.5      123   29.8 ± 4.9    132.7 ± 6.1   16.9 ± 2.1    647 ± 75
10.0 ± 0.5     114   33.2 ± 5.7    138.8 ± 6.7   17.1 ± 2.1    675 ± 78
12.4 ± 0.5     106   44.5 ± 8.1    153.8 ± 7.9   18.7 ± 2.5    751 ± 103
16.4 ± 0.5     113   56.8 ± 7.9    164.0 ± 6.2   21.1 ± 2.7    867 ± 111
20.4 ± 0.6     124   60.0 ± 9.2    165.0 ± 6.0   22.1 ± 3.4    858 ± 108

All values are mean ± SD. The percent of girls having experienced their first menstruations was 0, 1.8, and 25.5% at the age of 8.9, 10.0, and 12.4 years, respectively. All participants were menstruating at the visit when their mean age was 16.4 ± 0.5 years. BMI, body mass index; FN, femoral neck; aBMD, areal bone mineral density; NA, not available.

Table 2. Regressions between Z-scores of body mass index (BMI) and menarcheal age (A) and between delta Z-scores of BMI and menarcheal age (B)

(A) Age (years)   N     β        P                95% CI for β         R2
Birth             115   −0.070   0.468            −0.259 to 0.120      0.01
1                 96    −0.026   0.804            −0.237 to 0.184      0.01
7.9               124   −0.336   0.000            −0.505 to −0.167     0.11
8.9               123   −0.337   0.000            −0.506 to −0.169     0.11
10.0              114   −0.341   0.000            −0.515 to −0.166     0.12
12.4              105   −0.455   0.000            −0.644 to −0.265     0.18
16.4              113   −0.327   0.001 (0.001)a   −0.510 to −0.137     0.10
20.4              124   −0.208   0.020 (0.018)a   −0.383 to −0.033     0.04

(B) Delta age (years)   N    β        P                95% CI for β         R2
Birth to 1              96   −0.048   0.734            −0.328 to 0.232      0.01
1 to 7.9                96   −0.245   0.058            −0.499 to 0.009      0.04
1 to 8.9                96   −0.260   0.050            −0.519 to 0.000      0.04
1 to 10.0               92   −0.356   0.010            −0.624 to −0.088     0.07
1 to 12.4               88   −0.417   0.006            −0.710 to −0.123     0.08
1 to 16.4               92   −0.199   0.268 (0.089)a   −0.553 to 0.156      0.01
1 to 20.4               96   −0.167   0.243 (0.076)a   −0.448 to 0.115      0.02

CI, confidence interval. a After adjustment for smoking and contraceptive pill use.

Regression coefficients were also calculated between MENA and BMI gains (Table 2). No relationship was found with the BMI increment from birth to 1.0 year of age. In contrast, the gains in BMI from 1.0 to 8.9, to 10.0 and to 12.4 years were inversely related to MENA. At 12.4 years, the negative slope of BMI gain on MENA was the steepest (Table 2). The regression coefficient was no longer significantly less than zero at 16.4 and 20.4 years of age. Adjustment for smoking and contraceptive pill use did not modify the statistical significance of the regressions between BMI Z-score, or gain in BMI Z-score, at 16.4 and 20.4 years of age and menarcheal age Z-score (Table 2). As shown in Fig. 1a, b and c, the slopes of the linear regressions between FN aBMD, Ct.Th, and BV/TV of distal tibia, measured at 20.4 years, and MENA are negative. Accordingly, these three bone variables are positively related to BMI gains from 1 to 12.4 years (Fig. 1d, e, and f).

Fig. 1 Femoral neck aBMD, cortical thickness, and trabecular bone density of distal tibia measured at peak bone mass: relation with menarcheal age and change in BMI during childhood. The six linear regressions were calculated with the data prospectively recorded in 124 healthy girls. The regression equations are indicated above each plot, with the corresponding correlation coefficient and the statistical P values. The slopes of the three bone variables (Y) are negatively and positively related to menarcheal age (upper plots: a, b, c) and change in BMI from 1.0 to 12.4 years (lower plots: d, e, f), respectively. See text for further details.

The relation between pubertal timing and both anthropometric and bone variables was further analyzed by segregating the cohort by the median (12.9 years) of MENA. At birth and 1 year of age, no difference in BW, H, and thereby in BMI was detected between girls who would experience pubertal timing below (EARLIER) and above (LATER) the median of MENA (Table 3). From 7.9 to 12.4 years, BW, H, and BMI rose significantly more in the EARLIER than in the LATER MENA subgroup. The differences in these anthropometric variables culminated at 12.4 years of age. They remained statistically significant at 16.4 years for both BW and BMI, but not for H. At 20.4 years, there was still a trend for greater BW and BMI in the EARLIER than in the LATER subgroup (Table 3). From 7.9 to 20.4 years, FN aBMD was constantly greater in the EARLIER than in the LATER subgroup. 
The difference was the greatest (+14.1%) at 12.4 years, then declined but remained statistically significant at 20.4 years (+4.8%).

Table 3. Anthropometric and femoral neck aBMD data from birth to 20.4 years in healthy girls segregated by the median of menarcheal age

Age (years) | Weight (kg), EARLIER / LATER (P) | Height (cm), EARLIER / LATER (P) | BMI (kg/m2), EARLIER / LATER (P) | FN aBMD (mg/cm2), EARLIER / LATER (P) | n, EARLIER/LATER
Birth | 3.2 ± 0.4 / 3.2 ± 0.4 (0.995) | 49.4 ± 2.2 / 49.2 ± 1.9 (0.680) | 13.0 ± 1.2 / 13.1 ± 1.3 (0.706) | NA / NA | 47/49, 47/49, 57/58
1 | 9.1 ± 0.9 / 9.3 ± 1.0 (0.408) | 73.9 ± 3.2 / 74.0 ± 3.6 (0.819) | 16.7 ± 1.1 / 17.0 ± 1.6 (0.317) | NA / NA | 48/49, 47/49, 47/49
7.9 ± 0.5 | 27.8 ± 4.2 / 25.1 ± 3.5 (0.0002) | 129.1 ± 5.7 / 126.3 ± 5.7 (0.006) | 16.6 ± 1.9 / 15.7 ± 1.6 (0.003) | 640 ± 71 / 628 ± 77 (0.364) | 62/62
8.9 ± 0.5 | 31.6 ± 5.0 / 28.1 ± 4.0 (0.0001) | 134.5 ± 5.8 / 130.9 ± 5.9 (0.0001) | 17.4 ± 2.2 / 16.4 ± 1.8 (0.005) | 658 ± 72 / 636 ± 77 (0.104) | 61/62
10.0 ± 0.5 | 35.4 ± 5.6 / 30.9 ± 4.9 (0.0001) | 141.5 ± 6.3 / 136.1 ± 5.9 (0.0001) | 17.6 ± 2.1 / 16.6 ± 2.0 (0.009) | 689 ± 72 / 661 ± 81 (0.061) | 58/56
12.4 ± 0.5 | 48.6 ± 6.4 / 40.2 ± 7.4 (0.0001) | 157.8 ± 6.0 / 149.7 ± 7.7 (0.0001) | 19.5 ± 2.2 / 17.8 ± 2.5 (0.0004) | 799 ± 84 / 700 ± 97 (0.001) | 54/52
16.4 ± 0.5 | 58.8 ± 7.4 / 54.8 ± 8.0 (0.007) | 164.2 ± 6.1 / 163.8 ± 6.3 (0.751) | 21.8 ± 2.6 / 20.4 ± 2.8 (0.005) | 893 ± 94 / 841 ± 122 (0.014) | 57/56
20.4 ± 0.6 | 61.4 ± 8.7 / 58.5 ± 9.6 (0.085) | 164.7 ± 6.1 / 165.1 ± 6.3 (0.703) | 22.7 ± 3.3 / 21.5 ± 3.4 (0.051) | 878 ± 97 / 838 ± 116 (0.042) | 62/62

All values are mean ± SD. EARLIER and LATER, menarcheal age below and above the median, respectively. The percent of girls having experienced their first menstruations was 0, 1.8, and 25.5% at the age of 8.9, 10.0, and 12.4 years, respectively. All were menstruating at the visit when their mean age was 16.4 ± 0.5 years. BMI, body mass index; FN, femoral neck; aBMD, areal bone mineral density; NA, not available.

The differences in BW, H, and BMI gains from birth to 1 year and from 1.0 year onwards up to 20.4 years between the EARLIER and LATER subgroups are presented in Table 4. These differences corroborate the absolute values, indicating that the influence of pubertal timing is expressed on the gains from 1.0 to 7.9 years onwards, but not from birth to 1.0 year (Table 4). 
Table 4. Gains in anthropometric variables from birth to 1 year and from 1 year of age in healthy girls segregated by menarcheal age

Interval (years) | Weight gain (kg), EARLIER / LATER (P) | Height gain (cm), EARLIER / LATER (P) | BMI gain (kg/m2), EARLIER / LATER (P) | n, EARLIER/LATER
Birth to 1 | 6.0 ± 0.8 / 6.1 ± 1.0 (0.506) | 24.7 ± 2.6 / 24.9 ± 3.9 (0.810) | 3.8 ± 1.6 / 3.9 ± 1.9 (0.907) | 47/49
1 to 7.9 | 18.4 ± 3.9 / 15.9 ± 3.4 (0.001) | 55.2 ± 5.3 / 52.2 ± 5.7 (0.009) | −0.2 ± 2.0 / −1.2 ± 1.9 (0.013) | 48/49, 47/49, 47/49
1 to 8.9 | 22.1 ± 4.8 / 18.9 ± 4.0 (0.001) | 60.7 ± 5.4 / 56.9 ± 5.9 (0.001) | 0.5 ± 2.4 / −0.6 ± 2.2 (0.023) | 47/49
1 to 10.0 | 26.3 ± 5.4 / 21.8 ± 4.9 (0.001) | 67.8 ± 6.0 / 62.5 ± 6.3 (0.001) | 1.0 ± 2.2 / −0.4 ± 2.4 (0.005) | 47/46, 46/46, 46/46
1 to 12.4 | 39.2 ± 6.2 / 32.0 ± 7.7 (0.001) | 83.7 ± 5.6 / 76.0 ± 8.7 (0.001) | 2.8 ± 2.4 / 1.0 ± 2.9 (0.002) | 45/45, 44/45, 44/45
1 to 16.4 | 50.2 ± 7.7 / 45.4 ± 7.4 (0.002) | 91.0 ± 4.9 / 89.6 ± 6.6 (0.231) | 5.1 ± 2.8 / 3.5 ± 3.1 (0.009) | 45/47
1 to 20.4 | 53.0 ± 9.1 / 50.0 ± 10.1 (0.136) | 91.0 ± 5.3 / 91.3 ± 6.6 (0.842) | 6.1 ± 3.7 / 4.7 ± 3.8 (0.067) | 47/49

All values are mean ± SD. BMI, body mass index.

Figure 2 illustrates the gains in BMI, expressed in Z-score, from 1.0 year of age to 7.9 years and onwards, in the EARLIER as compared to the LATER subgroup. Under the histogram, the distribution of the pubertal stages from P1 to P5 documents the difference in the age-related progression of sexual maturation between the two MENA subgroups.

Fig. 2 Changes in BMI from 1.0 to 20.4 years in healthy subjects segregated by the median of menarcheal age. The diagram illustrates that the change in BMI Z-score from 1.0 year of age on between subjects with menarcheal age below (EARLIER) and above (LATER) the median is statistically significant at 7.9 and 8.9 years, an age at which all girls were still prepubertal (Tanner stage P1) as indicated below the diagram. The difference culminates at 12.4 years, and then declines afterwards. Note that the progression of BMI from birth to 1.0 year of age was very similar in the EARLIER (from 13.0 to 16.7 kg/m2) and LATER (from 13.1 to 17.0 kg/m2) subgroups (see Table 3). The number of subjects for each age is presented in Table 3. See text for further details. P values between the EARLIER and LATER groups at each age are indicated above the diagram.
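The EARLIER/LATER contrasts in Tables 3 and 4 amount to a median split followed by simple unpaired group comparisons, and the percent differences quoted in the text are plain arithmetic on the group means. A minimal sketch with invented numbers follows; `fn_abmd`, `menarcheal_age` and the group sizes are hypothetical and do not reproduce the cohort data or its STATA analysis.

```python
# Hedged illustration of the median split and group comparison; values are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
menarcheal_age = rng.normal(12.9, 1.1, size=106)     # hypothetical MENA, median close to 12.9 y
fn_abmd = rng.normal(750.0, 100.0, size=106)         # hypothetical FN aBMD at 12.4 y (mg/cm2)

median_mena = np.median(menarcheal_age)
earlier = fn_abmd[menarcheal_age <= median_mena]     # EARLIER subgroup (menarche at or below median)
later = fn_abmd[menarcheal_age > median_mena]        # LATER subgroup

t_stat, p_value = stats.ttest_ind(earlier, later)    # unpaired comparison
pct_diff = 100.0 * (earlier.mean() - later.mean()) / later.mean()
print(f"EARLIER {earlier.mean():.0f} vs LATER {later.mean():.0f} mg/cm2 "
      f"({pct_diff:+.1f}%), P = {p_value:.3f}")

# The +14.1% quoted for FN aBMD at 12.4 years corresponds to (799 - 700) / 700.
```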
null
null
[ "Subjects and methods", "Participants", "Clinical assessment", "Calcium intake", "Measurement of bone variables", "Expression of the results and statistical analysis" ]
[ "[SUBTITLE] Participants [SUBSECTION] We studied 124 healthy women with mean (±SD) age of 20.4 ± 0.6 year. They belong to a cohort followed during 12 years and previously examined at mean age 7.9 ± 0.5, 8.9 ± 0.5, 10.0 ± 0.5 [31], 12.4 ± 0.5 [32], and 16.4 ± 0.5 year [33]. During 1 year, between mean age of 7.9 and 8.9 years, half the cohort received a supplementation of calcium in a randomized, double-blind, placebo-controlled design, as previously reported [31]. Exclusion criteria at baseline were: ratio of weight/height <3rd or >97th percentile, physical signs of puberty, chronic disease, malabsorption, bone disease, and regular use of medication as previously described [31].\nThe ethics committees of the Department of Pediatrics and the Department of Rehabilitation and Geriatrics of the University Hospitals of Geneva approved the protocol while informed consent was obtained from both parents and children [31]. All subjects were recruited within the Geneva area.\nWe studied 124 healthy women with mean (±SD) age of 20.4 ± 0.6 year. They belong to a cohort followed during 12 years and previously examined at mean age 7.9 ± 0.5, 8.9 ± 0.5, 10.0 ± 0.5 [31], 12.4 ± 0.5 [32], and 16.4 ± 0.5 year [33]. During 1 year, between mean age of 7.9 and 8.9 years, half the cohort received a supplementation of calcium in a randomized, double-blind, placebo-controlled design, as previously reported [31]. Exclusion criteria at baseline were: ratio of weight/height <3rd or >97th percentile, physical signs of puberty, chronic disease, malabsorption, bone disease, and regular use of medication as previously described [31].\nThe ethics committees of the Department of Pediatrics and the Department of Rehabilitation and Geriatrics of the University Hospitals of Geneva approved the protocol while informed consent was obtained from both parents and children [31]. All subjects were recruited within the Geneva area.\n[SUBTITLE] Clinical assessment [SUBSECTION] Body weight, standing height, and BMI (kg/m2) were retrospectively obtained at birth (n = 115) and 1 year of age (n = 96) through questionnaires sent to the parents and the pediatricians. These anthropometric variables were then prospectively measured at each visit from 7.9 years of age on. At mean age (±SD) 7.9 ± 0.5 and 8.9 ± 0.5 year, pubertal stage was assessed by direct clinical examination made by a pediatrician–endocrinologist. At mean age of 10.0, 12.4, and 16.4 years pubertal maturation was assessed by a self-assessment questionnaire with drawings and written description of Tanner’s breast and pubic hair. At mean age 7.9 and 8.9 years, all girls were classified Tanner’s stage P1 while at mean age of 10.0 years, 38% of them had reached Tanner’s stage P2. Menarcheal age (MENA) was then assessed prospectively by direct interview at the second, third, fourth, and fifth visits, i.e., at the mean age of 8.9, 10.0, 12.4, and 16.4 years. MENA was within physiological range in all girls according to reference values established in the general population living in the same area [33]. Moreover, there was no case of pathological delayed or precocious puberty. The use of contraceptive pill for more than 3 months was recorded as well as smoking expressed in yearly pack units.\nBody weight, standing height, and BMI (kg/m2) were retrospectively obtained at birth (n = 115) and 1 year of age (n = 96) through questionnaires sent to the parents and the pediatricians. These anthropometric variables were then prospectively measured at each visit from 7.9 years of age on. 
At mean age (±SD) 7.9 ± 0.5 and 8.9 ± 0.5 year, pubertal stage was assessed by direct clinical examination made by a pediatrician–endocrinologist. At mean age of 10.0, 12.4, and 16.4 years pubertal maturation was assessed by a self-assessment questionnaire with drawings and written description of Tanner’s breast and pubic hair. At mean age 7.9 and 8.9 years, all girls were classified Tanner’s stage P1 while at mean age of 10.0 years, 38% of them had reached Tanner’s stage P2. Menarcheal age (MENA) was then assessed prospectively by direct interview at the second, third, fourth, and fifth visits, i.e., at the mean age of 8.9, 10.0, 12.4, and 16.4 years. MENA was within physiological range in all girls according to reference values established in the general population living in the same area [33]. Moreover, there was no case of pathological delayed or precocious puberty. The use of contraceptive pill for more than 3 months was recorded as well as smoking expressed in yearly pack units.\n[SUBTITLE] Calcium intake [SUBSECTION] At each visit from 7.9 years, spontaneous, i.e., baseline calcium intake, as essentially assessed from dairy sources, was estimated by a frequency questionnaire [34].\nAt each visit from 7.9 years, spontaneous, i.e., baseline calcium intake, as essentially assessed from dairy sources, was estimated by a frequency questionnaire [34].\n[SUBTITLE] Measurement of bone variables [SUBSECTION] Areal bone mineral density (mg/cm2) was measured by dual-energy X-ray absorptiometry (DXA) at the level of the femoral neck (FN) with a Hologic QDR-4500 instrument (Waltham, MA, USA), as previously reported [33]. The coefficient of variation of repeated aBMD measurements varied between 1.0% and 1.6% [33].Volumetric bone density and microstructure were determined at the distal tibia by HR-pQCT on an XtremeCT instrument (Scanco medical AG®, Basserdorf, Switzerland), as previously described [35]. At the distal tibia, 110 parallel CT slices with a nominal resolution (voxel size) of 82 μm were obtained, thus delivering a three-dimensional representation of approximately 9 mm in the axial direction. An anteroposterior scout view was used to define the measurement region. Briefly, a reference line was manually placed at the endplate of the tibia and the first CT slice was 22.5 mm distal to the reference line. The following variables were measured: total (Dtot), cortical (Dcort), and trabecular (Dtrab) volumetric bone density expressed as mg hydroxyapatite (HA)/cm3; trabecular bone volume fraction (BV/TV, %), trabecular number (Tb.N), thickness (Tb.Th, μm) and spacing (Tb.Sp, μm); mean cortical thickness (Ct.Th, μm) and cross-sectional area (CSA, mm2). The in vivo short-term reproducibility of HR-pQCT at the distal tibia assessed in 15 subjects with repositioning varied from 0.7% to 1.0% and from 3.0% to 4.9% for bone density and for trabecular architecture, respectively. These reproducibility ranges in our facility are similar to those recently published [36].\nAreal bone mineral density (mg/cm2) was measured by dual-energy X-ray absorptiometry (DXA) at the level of the femoral neck (FN) with a Hologic QDR-4500 instrument (Waltham, MA, USA), as previously reported [33]. The coefficient of variation of repeated aBMD measurements varied between 1.0% and 1.6% [33].Volumetric bone density and microstructure were determined at the distal tibia by HR-pQCT on an XtremeCT instrument (Scanco medical AG®, Basserdorf, Switzerland), as previously described [35]. 
At the distal tibia, 110 parallel CT slices with a nominal resolution (voxel size) of 82 μm were obtained, thus delivering a three-dimensional representation of approximately 9 mm in the axial direction. An anteroposterior scout view was used to define the measurement region. Briefly, a reference line was manually placed at the endplate of the tibia and the first CT slice was 22.5 mm distal to the reference line. The following variables were measured: total (Dtot), cortical (Dcort), and trabecular (Dtrab) volumetric bone density expressed as mg hydroxyapatite (HA)/cm3; trabecular bone volume fraction (BV/TV, %), trabecular number (Tb.N), thickness (Tb.Th, μm) and spacing (Tb.Sp, μm); mean cortical thickness (Ct.Th, μm) and cross-sectional area (CSA, mm2). The in vivo short-term reproducibility of HR-pQCT at the distal tibia assessed in 15 subjects with repositioning varied from 0.7% to 1.0% and from 3.0% to 4.9% for bone density and for trabecular architecture, respectively. These reproducibility ranges in our facility are similar to those recently published [36].\n[SUBTITLE] Expression of the results and statistical analysis [SUBSECTION] The various anthropometric and osteodensitometric variables are given as mean ± SD. MENA and BMI as well as FN aBMD or distal tibia Ct.Th and Dtrab were expressed in Z-scores computed from this healthy female cohort. The mean values of anthropometric variable gains were expressed either in absolute terms or as the difference of the relative (Z-score) values at the different ages. A multivariate model adjusted for repeated measures using individual values of age and BMI Z-score at each visit was performed to demonstrate the overall significant association between BMI Z-score and MENA Z-score (β = −0.256, P ≤ 0.001, R\n2 = 0.07). Since an improvement in the coefficient of determination (R\n2) was observed when the model was repeated without taking into account values at birth and 1 year of age, we looked at which age the relationship between BMI Z-score and menarcheal age Z-score was most significant. Then, univariate analysis at different time points were performed between BMI Z-score and MENA Z-score and between delta BMI Z-score and MENA Z-score. The relationships between bone traits expressed in Z-scores and MENA Z-score or delta BMI expressed in absolute terms were also examined by univariate regression analysis. The subjects were segregated according to the median of menarcheal age. Timing of menarche (MENA) under and above the median age of the first menstruation was defined as “EARLIER” and “LATER,” respectively. The differences in anthropometric characteristics between EARLIER and LATER MENA were assessed by unpaired Student’s t test or by Wilcoxon signed rank test according to the variable distribution pattern. The significance level for two-sided P values was 0.05 for all tests. The data were analyzed using STATA software, version 9.0 (StataCorp LP, College Station, TX, USA).\nThe various anthropometric and osteodensitometric variables are given as mean ± SD. MENA and BMI as well as FN aBMD or distal tibia Ct.Th and Dtrab were expressed in Z-scores computed from this healthy female cohort. The mean values of anthropometric variable gains were expressed either in absolute terms or as the difference of the relative (Z-score) values at the different ages. 
A multivariate model adjusted for repeated measures using individual values of age and BMI Z-score at each visit was performed to demonstrate the overall significant association between BMI Z-score and MENA Z-score (β = −0.256, P ≤ 0.001, R\n2 = 0.07). Since an improvement in the coefficient of determination (R\n2) was observed when the model was repeated without taking into account values at birth and 1 year of age, we looked at which age the relationship between BMI Z-score and menarcheal age Z-score was most significant. Then, univariate analysis at different time points were performed between BMI Z-score and MENA Z-score and between delta BMI Z-score and MENA Z-score. The relationships between bone traits expressed in Z-scores and MENA Z-score or delta BMI expressed in absolute terms were also examined by univariate regression analysis. The subjects were segregated according to the median of menarcheal age. Timing of menarche (MENA) under and above the median age of the first menstruation was defined as “EARLIER” and “LATER,” respectively. The differences in anthropometric characteristics between EARLIER and LATER MENA were assessed by unpaired Student’s t test or by Wilcoxon signed rank test according to the variable distribution pattern. The significance level for two-sided P values was 0.05 for all tests. The data were analyzed using STATA software, version 9.0 (StataCorp LP, College Station, TX, USA).", "We studied 124 healthy women with mean (±SD) age of 20.4 ± 0.6 year. They belong to a cohort followed during 12 years and previously examined at mean age 7.9 ± 0.5, 8.9 ± 0.5, 10.0 ± 0.5 [31], 12.4 ± 0.5 [32], and 16.4 ± 0.5 year [33]. During 1 year, between mean age of 7.9 and 8.9 years, half the cohort received a supplementation of calcium in a randomized, double-blind, placebo-controlled design, as previously reported [31]. Exclusion criteria at baseline were: ratio of weight/height <3rd or >97th percentile, physical signs of puberty, chronic disease, malabsorption, bone disease, and regular use of medication as previously described [31].\nThe ethics committees of the Department of Pediatrics and the Department of Rehabilitation and Geriatrics of the University Hospitals of Geneva approved the protocol while informed consent was obtained from both parents and children [31]. All subjects were recruited within the Geneva area.", "Body weight, standing height, and BMI (kg/m2) were retrospectively obtained at birth (n = 115) and 1 year of age (n = 96) through questionnaires sent to the parents and the pediatricians. These anthropometric variables were then prospectively measured at each visit from 7.9 years of age on. At mean age (±SD) 7.9 ± 0.5 and 8.9 ± 0.5 year, pubertal stage was assessed by direct clinical examination made by a pediatrician–endocrinologist. At mean age of 10.0, 12.4, and 16.4 years pubertal maturation was assessed by a self-assessment questionnaire with drawings and written description of Tanner’s breast and pubic hair. At mean age 7.9 and 8.9 years, all girls were classified Tanner’s stage P1 while at mean age of 10.0 years, 38% of them had reached Tanner’s stage P2. Menarcheal age (MENA) was then assessed prospectively by direct interview at the second, third, fourth, and fifth visits, i.e., at the mean age of 8.9, 10.0, 12.4, and 16.4 years. MENA was within physiological range in all girls according to reference values established in the general population living in the same area [33]. 
Moreover, there was no case of pathological delayed or precocious puberty. The use of contraceptive pill for more than 3 months was recorded as well as smoking expressed in yearly pack units.", "At each visit from 7.9 years, spontaneous, i.e., baseline calcium intake, as essentially assessed from dairy sources, was estimated by a frequency questionnaire [34].", "Areal bone mineral density (mg/cm2) was measured by dual-energy X-ray absorptiometry (DXA) at the level of the femoral neck (FN) with a Hologic QDR-4500 instrument (Waltham, MA, USA), as previously reported [33]. The coefficient of variation of repeated aBMD measurements varied between 1.0% and 1.6% [33].Volumetric bone density and microstructure were determined at the distal tibia by HR-pQCT on an XtremeCT instrument (Scanco medical AG®, Basserdorf, Switzerland), as previously described [35]. At the distal tibia, 110 parallel CT slices with a nominal resolution (voxel size) of 82 μm were obtained, thus delivering a three-dimensional representation of approximately 9 mm in the axial direction. An anteroposterior scout view was used to define the measurement region. Briefly, a reference line was manually placed at the endplate of the tibia and the first CT slice was 22.5 mm distal to the reference line. The following variables were measured: total (Dtot), cortical (Dcort), and trabecular (Dtrab) volumetric bone density expressed as mg hydroxyapatite (HA)/cm3; trabecular bone volume fraction (BV/TV, %), trabecular number (Tb.N), thickness (Tb.Th, μm) and spacing (Tb.Sp, μm); mean cortical thickness (Ct.Th, μm) and cross-sectional area (CSA, mm2). The in vivo short-term reproducibility of HR-pQCT at the distal tibia assessed in 15 subjects with repositioning varied from 0.7% to 1.0% and from 3.0% to 4.9% for bone density and for trabecular architecture, respectively. These reproducibility ranges in our facility are similar to those recently published [36].", "The various anthropometric and osteodensitometric variables are given as mean ± SD. MENA and BMI as well as FN aBMD or distal tibia Ct.Th and Dtrab were expressed in Z-scores computed from this healthy female cohort. The mean values of anthropometric variable gains were expressed either in absolute terms or as the difference of the relative (Z-score) values at the different ages. A multivariate model adjusted for repeated measures using individual values of age and BMI Z-score at each visit was performed to demonstrate the overall significant association between BMI Z-score and MENA Z-score (β = −0.256, P ≤ 0.001, R\n2 = 0.07). Since an improvement in the coefficient of determination (R\n2) was observed when the model was repeated without taking into account values at birth and 1 year of age, we looked at which age the relationship between BMI Z-score and menarcheal age Z-score was most significant. Then, univariate analysis at different time points were performed between BMI Z-score and MENA Z-score and between delta BMI Z-score and MENA Z-score. The relationships between bone traits expressed in Z-scores and MENA Z-score or delta BMI expressed in absolute terms were also examined by univariate regression analysis. The subjects were segregated according to the median of menarcheal age. Timing of menarche (MENA) under and above the median age of the first menstruation was defined as “EARLIER” and “LATER,” respectively. 
The differences in anthropometric characteristics between EARLIER and LATER MENA were assessed by unpaired Student’s t test or by Wilcoxon signed rank test according to the variable distribution pattern. The significance level for two-sided P values was 0.05 for all tests. The data were analyzed using STATA software, version 9.0 (StataCorp LP, College Station, TX, USA)." ]
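The Z-score and univariate-regression conventions described in this statistics subsection reduce to a few lines of standard code. The following is a minimal sketch, in Python, of that logic only; the arrays are invented for illustration (the study itself was analysed in STATA), and `zscore`, `menarcheal_age` and `bmi` are hypothetical names rather than part of the original analysis.

```python
# Minimal illustration of the Z-score + univariate regression logic described above.
# All numbers are invented; this is not the study data or its STATA analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
menarcheal_age = rng.normal(13.0, 1.2, size=124)                         # hypothetical MENA (years)
bmi = 20.5 - 0.4 * (menarcheal_age - 13.0) + rng.normal(0.0, 2.0, 124)   # hypothetical BMI (kg/m2)

def zscore(x):
    """Z-score computed within the cohort itself, as in the paper."""
    return (x - x.mean()) / x.std(ddof=1)

mena_z, bmi_z = zscore(menarcheal_age), zscore(bmi)

# Univariate regression of BMI Z-score on menarcheal-age Z-score
res = stats.linregress(mena_z, bmi_z)
half_width = 1.96 * res.stderr                                           # approximate 95% CI for beta
print(f"beta = {res.slope:.3f} "
      f"(95% CI {res.slope - half_width:.3f} to {res.slope + half_width:.3f}), "
      f"P = {res.pvalue:.3f}, R2 = {res.rvalue ** 2:.2f}")
```

A normal-approximation confidence interval is used here for brevity; the exact interval from the regression t distribution differs only slightly at n = 124.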
[ null, null, null, null, null, null ]
[ "Introduction", "Subjects and methods", "Participants", "Clinical assessment", "Calcium intake", "Measurement of bone variables", "Expression of the results and statistical analysis", "Results", "Discussion" ]
[ "In healthy human subjects, bone mineral mass follows a trajectory from birth on to attain a maximal value, the so-called peak bone mass (PBM), by the end of the second or the beginning of the third decade, according to both gender and skeletal sites examined [1].\nLater menarcheal age was shown to be a risk factor for reduced bone mineral mass in postmenopausal women [2–7] and increased prevalence of fragility fractures at several sites of the skeleton [8–11].\nThe negative influence of later menarcheal age on bone mineral mass observed in postmenopausal women is already expressed long before menopause as it was observed in middle-age premenopausal women with mean age 45 years, and in healthy young adult females in their very early twenties [12]. Furthermore, this influence of pubertal timing on peak bone mass was found to be predetermined before the onset of pubertal maturation in a prospective follow-up study from age 8 to 20 years [13]. This suggested that both pubertal timing and bone traits may be under the influence of common genetic factors [14].\nThe risk of hip fracture is dependent upon the amount of areal bone mineral density (aBMD) or bone mineral content (BMC) as assessed by osteodensitometry at the level of proximal femur, particularly in the femoral neck (FN). Longitudinal studies of women ranging from 20 to 94 years with follow-up periods from 16 to 22 years showed that the average annual rate of bone loss was relatively constant and tracked well within individuals [15, 16]. Furthermore, a high correlation was recorded between the baseline aBMD values and those obtained after 16 (R = 0.83) [15] and 22 (R = 0.80) years of follow-up [16]. This tracking pattern of aBMD is thus maintained over six decades of adult life. Such a notion has two important implications. First, the prediction of hip fracture risk based on one single measurement of FN aBMD remains reliable in the long term [15, 16]. Second, within the wide range of FN aBMD values little variation occurs during adult life in individual Z-scores or percentiles. Hence, it can be inferred that bone mass acquired by the end of the growth period appears to be more important than bone loss occurring during adult life [17].\nThis tracking pattern of FN aBMD was also reported in healthy females, from prepuberty to peak bone mass attainment [18–20]. In fact, since PBM is under strong genetic influence [21–23], it can be expected that bone mineral density and size are found to significantly track during growth in healthy populations throughout the world [18, 20, 24–26].\nGrowth in infancy was reported to be associated with BMC in later life [27]. The risk of hip fracture in elderly was shown to be related to early variation in height and weight growth [28, 29]. Very recently, in a study of 6,370 women born in Finland, reduction in body mass index (BMI) gain between 1 and 12 years of age was associated with an increase risk of hip fracture in later life [30]. Two potential explanations for this link between reduction in Z-score for BMI and later fracture risk are discussed by the authors: first, a difference in pubertal timing; second, a slowing of growth in response to adverse environmental influences [30]. The authors concluded that thinness in childhood is a risk factor for hip fracture in later life, by a direct effect of low fat mass on bone mineralization or represents the influence of altered timing of pubertal maturation. 
In this study, the timing of puberty, as precisely assessed by prospectively recording menarcheal age, was not determined [30], making it uncertain whether this important determinant of FN PBM and subsequent premenopausal FN aBMD [12] could be implicated in this association.\nIn the present report, we tested the hypothesis that variations in body growth during infancy and childhood are related to pubertal timing, which, in turn, is a determinant of FN peak bone mass. Data are presented on the relationship between menarcheal age and body weight (BW), height (H) and BMI from birth to 20 years, and on FN aBMD prospectively measured from prepuberty to maturity in a cohort of healthy females. In addition to FN PBM measurements, we also analyzed whether the impact of BMI as linked to pubertal timing was detectable on bone strength-related microstructure, as assessed by high-resolution peripheral computerized tomography (HR-pQCT) at the level of the distal tibia.", "[SUBTITLE] Participants [SUBSECTION] We studied 124 healthy women with mean (±SD) age of 20.4 ± 0.6 year. They belong to a cohort followed during 12 years and previously examined at mean age 7.9 ± 0.5, 8.9 ± 0.5, 10.0 ± 0.5 [31], 12.4 ± 0.5 [32], and 16.4 ± 0.5 year [33]. During 1 year, between mean age of 7.9 and 8.9 years, half the cohort received a supplementation of calcium in a randomized, double-blind, placebo-controlled design, as previously reported [31]. Exclusion criteria at baseline were: ratio of weight/height <3rd or >97th percentile, physical signs of puberty, chronic disease, malabsorption, bone disease, and regular use of medication as previously described [31].\nThe ethics committees of the Department of Pediatrics and the Department of Rehabilitation and Geriatrics of the University Hospitals of Geneva approved the protocol while informed consent was obtained from both parents and children [31]. All subjects were recruited within the Geneva area.\n[SUBTITLE] Clinical assessment [SUBSECTION] Body weight, standing height, and BMI (kg/m2) were retrospectively obtained at birth (n = 115) and 1 year of age (n = 96) through questionnaires sent to the parents and the pediatricians. These anthropometric variables were then prospectively measured at each visit from 7.9 years of age on. At mean age (±SD) 7.9 ± 0.5 and 8.9 ± 0.5 year, pubertal stage was assessed by direct clinical examination made by a pediatrician–endocrinologist. At mean age of 10.0, 12.4, and 16.4 years pubertal maturation was assessed by a self-assessment questionnaire with drawings and written description of Tanner’s breast and pubic hair. At mean age 7.9 and 8.9 years, all girls were classified Tanner’s stage P1 while at mean age of 10.0 years, 38% of them had reached Tanner’s stage P2. Menarcheal age (MENA) was then assessed prospectively by direct interview at the second, third, fourth, and fifth visits, i.e., at the mean age of 8.9, 10.0, 12.4, and 16.4 years. MENA was within physiological range in all girls according to reference values established in the general population living in the same area [33]. Moreover, there was no case of pathological delayed or precocious puberty. The use of contraceptive pill for more than 3 months was recorded as well as smoking expressed in yearly pack units.\n[SUBTITLE] Calcium intake [SUBSECTION] At each visit from 7.9 years, spontaneous, i.e., baseline calcium intake, as essentially assessed from dairy sources, was estimated by a frequency questionnaire [34].\n[SUBTITLE] Measurement of bone variables [SUBSECTION] Areal bone mineral density (mg/cm2) was measured by dual-energy X-ray absorptiometry (DXA) at the level of the femoral neck (FN) with a Hologic QDR-4500 instrument (Waltham, MA, USA), as previously reported [33]. The coefficient of variation of repeated aBMD measurements varied between 1.0% and 1.6% [33]. Volumetric bone density and microstructure were determined at the distal tibia by HR-pQCT on an XtremeCT instrument (Scanco medical AG®, Basserdorf, Switzerland), as previously described [35]. At the distal tibia, 110 parallel CT slices with a nominal resolution (voxel size) of 82 μm were obtained, thus delivering a three-dimensional representation of approximately 9 mm in the axial direction. An anteroposterior scout view was used to define the measurement region. Briefly, a reference line was manually placed at the endplate of the tibia and the first CT slice was 22.5 mm distal to the reference line. The following variables were measured: total (Dtot), cortical (Dcort), and trabecular (Dtrab) volumetric bone density expressed as mg hydroxyapatite (HA)/cm3; trabecular bone volume fraction (BV/TV, %), trabecular number (Tb.N), thickness (Tb.Th, μm) and spacing (Tb.Sp, μm); mean cortical thickness (Ct.Th, μm) and cross-sectional area (CSA, mm2). The in vivo short-term reproducibility of HR-pQCT at the distal tibia assessed in 15 subjects with repositioning varied from 0.7% to 1.0% and from 3.0% to 4.9% for bone density and for trabecular architecture, respectively. These reproducibility ranges in our facility are similar to those recently published [36].\n[SUBTITLE] Expression of the results and statistical analysis [SUBSECTION] The various anthropometric and osteodensitometric variables are given as mean ± SD. MENA and BMI as well as FN aBMD or distal tibia Ct.Th and Dtrab were expressed in Z-scores computed from this healthy female cohort. The mean values of anthropometric variable gains were expressed either in absolute terms or as the difference of the relative (Z-score) values at the different ages. A multivariate model adjusted for repeated measures using individual values of age and BMI Z-score at each visit was performed to demonstrate the overall significant association between BMI Z-score and MENA Z-score (β = −0.256, P ≤ 0.001, R2 = 0.07). Since an improvement in the coefficient of determination (R2) was observed when the model was repeated without taking into account values at birth and 1 year of age, we looked at which age the relationship between BMI Z-score and menarcheal age Z-score was most significant. Then, univariate analyses at different time points were performed between BMI Z-score and MENA Z-score and between delta BMI Z-score and MENA Z-score. The relationships between bone traits expressed in Z-scores and MENA Z-score or delta BMI expressed in absolute terms were also examined by univariate regression analysis. The subjects were segregated according to the median of menarcheal age. Timing of menarche (MENA) under and above the median age of the first menstruation was defined as “EARLIER” and “LATER,” respectively. The differences in anthropometric characteristics between EARLIER and LATER MENA were assessed by unpaired Student’s t test or by Wilcoxon signed rank test according to the variable distribution pattern. The significance level for two-sided P values was 0.05 for all tests. The data were analyzed using STATA software, version 9.0 (StataCorp LP, College Station, TX, USA).", "We studied 124 healthy women with mean (±SD) age of 20.4 ± 0.6 year. They belong to a cohort followed during 12 years and previously examined at mean age 7.9 ± 0.5, 8.9 ± 0.5, 10.0 ± 0.5 [31], 12.4 ± 0.5 [32], and 16.4 ± 0.5 year [33]. During 1 year, between mean age of 7.9 and 8.9 years, half the cohort received a supplementation of calcium in a randomized, double-blind, placebo-controlled design, as previously reported [31]. Exclusion criteria at baseline were: ratio of weight/height <3rd or >97th percentile, physical signs of puberty, chronic disease, malabsorption, bone disease, and regular use of medication as previously described [31].\nThe ethics committees of the Department of Pediatrics and the Department of Rehabilitation and Geriatrics of the University Hospitals of Geneva approved the protocol while informed consent was obtained from both parents and children [31].
All subjects were recruited within the Geneva area.", "Body weight, standing height, and BMI (kg/m2) were retrospectively obtained at birth (n = 115) and 1 year of age (n = 96) through questionnaires sent to the parents and the pediatricians. These anthropometric variables were then prospectively measured at each visit from 7.9 years of age on. At mean age (±SD) 7.9 ± 0.5 and 8.9 ± 0.5 year, pubertal stage was assessed by direct clinical examination made by a pediatrician–endocrinologist. At mean age of 10.0, 12.4, and 16.4 years pubertal maturation was assessed by a self-assessment questionnaire with drawings and written description of Tanner’s breast and pubic hair. At mean age 7.9 and 8.9 years, all girls were classified Tanner’s stage P1 while at mean age of 10.0 years, 38% of them had reached Tanner’s stage P2. Menarcheal age (MENA) was then assessed prospectively by direct interview at the second, third, fourth, and fifth visits, i.e., at the mean age of 8.9, 10.0, 12.4, and 16.4 years. MENA was within physiological range in all girls according to reference values established in the general population living in the same area [33]. Moreover, there was no case of pathological delayed or precocious puberty. The use of contraceptive pill for more than 3 months was recorded as well as smoking expressed in yearly pack units.", "At each visit from 7.9 years, spontaneous, i.e., baseline calcium intake, as essentially assessed from dairy sources, was estimated by a frequency questionnaire [34].", "Areal bone mineral density (mg/cm2) was measured by dual-energy X-ray absorptiometry (DXA) at the level of the femoral neck (FN) with a Hologic QDR-4500 instrument (Waltham, MA, USA), as previously reported [33]. The coefficient of variation of repeated aBMD measurements varied between 1.0% and 1.6% [33].Volumetric bone density and microstructure were determined at the distal tibia by HR-pQCT on an XtremeCT instrument (Scanco medical AG®, Basserdorf, Switzerland), as previously described [35]. At the distal tibia, 110 parallel CT slices with a nominal resolution (voxel size) of 82 μm were obtained, thus delivering a three-dimensional representation of approximately 9 mm in the axial direction. An anteroposterior scout view was used to define the measurement region. Briefly, a reference line was manually placed at the endplate of the tibia and the first CT slice was 22.5 mm distal to the reference line. The following variables were measured: total (Dtot), cortical (Dcort), and trabecular (Dtrab) volumetric bone density expressed as mg hydroxyapatite (HA)/cm3; trabecular bone volume fraction (BV/TV, %), trabecular number (Tb.N), thickness (Tb.Th, μm) and spacing (Tb.Sp, μm); mean cortical thickness (Ct.Th, μm) and cross-sectional area (CSA, mm2). The in vivo short-term reproducibility of HR-pQCT at the distal tibia assessed in 15 subjects with repositioning varied from 0.7% to 1.0% and from 3.0% to 4.9% for bone density and for trabecular architecture, respectively. These reproducibility ranges in our facility are similar to those recently published [36].", "The various anthropometric and osteodensitometric variables are given as mean ± SD. MENA and BMI as well as FN aBMD or distal tibia Ct.Th and Dtrab were expressed in Z-scores computed from this healthy female cohort. The mean values of anthropometric variable gains were expressed either in absolute terms or as the difference of the relative (Z-score) values at the different ages. 
A multivariate model adjusted for repeated measures using individual values of age and BMI Z-score at each visit was performed to demonstrate the overall significant association between BMI Z-score and MENA Z-score (β = −0.256, P ≤ 0.001, R2 = 0.07). Since an improvement in the coefficient of determination (R2) was observed when the model was repeated without taking into account values at birth and 1 year of age, we looked at which age the relationship between BMI Z-score and menarcheal age Z-score was most significant. Then, univariate analyses at different time points were performed between BMI Z-score and MENA Z-score and between delta BMI Z-score and MENA Z-score. The relationships between bone traits expressed in Z-scores and MENA Z-score or delta BMI expressed in absolute terms were also examined by univariate regression analysis. The subjects were segregated according to the median of menarcheal age. Timing of menarche (MENA) under and above the median age of the first menstruation was defined as “EARLIER” and “LATER,” respectively. The differences in anthropometric characteristics between EARLIER and LATER MENA were assessed by unpaired Student’s t test or by Wilcoxon signed rank test according to the variable distribution pattern. The significance level for two-sided P values was 0.05 for all tests. The data were analyzed using STATA software, version 9.0 (StataCorp LP, College Station, TX, USA).", "The whole cohort anthropometric variables from birth on and the development of DXA-measured FN aBMD from prepuberty to early twenties are described in Table 1. All values are within the reference ranges of the corresponding population of healthy female subjects. Among the 124 young women, 64 (51.6%) were current (n = 50) or previous smokers (n = 14). The use of contraceptive pill for more than 3 months concerned 24 (19.4%) young women. There was no significant difference between girls with (n = 96) and without (n = 28) birth weight and infant weight data in terms of BMI, femoral neck aBMD and distal tibia pQCT values at the mean age of 20.4 years. At birth and at 1.0 year of age, there was no relationship between future menarcheal age, taken as a precise assessment of pubertal timing, and BMI (Table 2). In contrast, highly significant inverse regression coefficients (β) were recorded at the age of 7.9 and 8.9 years, i.e., when all girls were still prepubertal, as indicated in the legend of Table 1. The inverse regression coefficient became maximally negative at the age of 12.4 years (β = −0.455, P < 0.001). At this age, MENA explained 18% of the BMI variance (R2 = 0.18) (Table 2). Afterwards, the negative slope of the regression of BMI on MENA was less steep, but still remained statistically significant at the beginning of the third decade (Table 2).\nTable 1 Anthropometric and femoral neck aBMD data from birth to 20.4 years in healthy girls\nAge (years) | Weight (kg) | Height (cm) | BMI (kg/m2) | FN aBMD (mg/cm2) | n\nBirth | 3.2 ± 0.4 | 49.3 ± 2.1 | 13.0 ± 1.2 | NA | 115\n1 | 9.2 ± 0.9 | 73.9 ± 3.4 | 16.9 ± 1.4 | NA | 96\n7.9 ± 0.5 | 26.5 ± 4.1 | 127.7 ± 5.9 | 16.2 ± 1.8 | 634 ± 74 | 124\n8.9 ± 0.5 | 29.8 ± 4.9 | 132.7 ± 6.1 | 16.9 ± 2.1 | 647 ± 75 | 123\n10.0 ± 0.5 | 33.2 ± 5.7 | 138.8 ± 6.7 | 17.1 ± 2.1 | 675 ± 78 | 114\n12.4 ± 0.5 | 44.5 ± 8.1 | 153.8 ± 7.9 | 18.7 ± 2.5 | 751 ± 103 | 106\n16.4 ± 0.5 | 56.8 ± 7.9 | 164.0 ± 6.2 | 21.1 ± 2.7 | 867 ± 111 | 113\n20.4 ± 0.6 | 60.0 ± 9.2 | 165.0 ± 6.0 | 22.1 ± 3.4 | 858 ± 108 | 124\nAll values are mean ± SD. The percent of girls having experienced their first menstruations was: 0, 1.8, and 25.5% at the age of 8.9, 10.0, and 12.4 years, respectively. All participants were menstruating at the visit when their mean age was 16.4 ± 0.5 years.\nBMI body mass index, FN femoral neck, aBMD areal bone mineral density, NA not available\nTable 2 Regressions between Z-scores of body mass index (BMI) and menarcheal age (A) and between delta Z-scores of BMI and menarcheal age (B)\nA) Age (years) | N | β | P | 95% CI lower | 95% CI upper | R2\nBirth | 115 | −0.070 | 0.468 | −0.259 | 0.120 | 0.01\n1 | 96 | −0.026 | 0.804 | −0.237 | 0.184 | 0.01\n7.9 | 124 | −0.336 | 0.000 | −0.505 | −0.167 | 0.11\n8.9 | 123 | −0.337 | 0.000 | −0.506 | −0.169 | 0.11\n10.0 | 114 | −0.341 | 0.000 | −0.515 | −0.166 | 0.12\n12.4 | 105 | −0.455 | 0.000 | −0.644 | −0.265 | 0.18\n16.4 | 113 | −0.327 | 0.001 | −0.510 | −0.137 | 0.10 (0.001)a\n20.4 | 124 | −0.208 | 0.020 | −0.383 | −0.033 | 0.04 (0.018)a\nB) Delta age (years) | N | β | P | 95% CI lower | 95% CI upper | R2\nBirth to 1 | 96 | −0.048 | 0.734 | −0.328 | 0.232 | 0.01\n1 to 7.9 | 96 | −0.245 | 0.058 | −0.499 | 0.009 | 0.04\n1 to 8.9 | 96 | −0.260 | 0.050 | −0.519 | 0.000 | 0.04\n1 to 10.0 | 92 | −0.356 | 0.010 | −0.624 | −0.088 | 0.07\n1 to 12.4 | 88 | −0.417 | 0.006 | −0.710 | −0.123 | 0.08\n1 to 16.4 | 92 | −0.199 | 0.268 | −0.553 | 0.156 | 0.01 (0.089)a\n1 to 20.4 | 96 | −0.167 | 0.243 | −0.448 | 0.115 | 0.02 (0.076)a\nCI confidence interval; a after adjustment for smoking and contraceptive pill use\nRegression coefficients were also calculated between MENA and BMI gains (Table 2). No relationship was found with the BMI increment from birth to 1.0 year of age. In contrast, the regression coefficient of BMI gain on MENA was significantly negative for the gains from 1.0 to 8.9, 1.0 to 10.0, and 1.0 to 12.4 years. At 12.4 years, the negative slope of BMI gain on MENA was the steepest (Table 2). The regression coefficient was no longer significantly less than zero at 16.4 and 20.4 years of age. Adjustment by smoking and contraceptive pill use did not modify the statistical significance of the regressions calculated between BMI Z-score or gain in BMI Z-score at 16.4 and 20 years of age and menarcheal age Z-score (Table 2).\nAs shown in Fig. 1a, b and c, the slopes of the linear regressions between FN aBMD, Ct.Th, and BV/TV of distal tibia, measured at 20.4 years, and MENA are negative. It ensues that these three bone variables are positively related to BMI gains from 1 to 12.4 years (Fig. 1d, e, and f).\nFig. 1 Femoral neck aBMD, cortical thickness, and trabecular bone density of distal tibia measured at peak bone mass: relation with menarcheal age and change in BMI during childhood.
The six linear regressions were calculated with the data prospectively recorded in 124 healthy girls. The regression equations are indicated above each plot, with the corresponding correlation coefficient and the statistical P values. The slopes of the three bone variables (Y) are negatively and positively related to menarcheal age (upper plots: a, b, c) and change in BMI from 1.0 to 12.4 years (lower plots: d, e, f), respectively. See text for further details.\nThe relation between pubertal timing and both anthropometric and bone variables was further analyzed by segregating the cohort by the median (12.9 years) of MENA. At birth and 1 year of age, no difference in BW, H, and thereby in BMI was detected between girls who would experience pubertal timing below (EARLIER) and above (LATER) the median of MENA (Table 3). From 7.9 to 12.4 years, BW, H, and BMI rose significantly more in the EARLIER than in the LATER MENA subgroup. The differences in these anthropometric variables culminated at 12.4 years of age. They remained statistically significant at 16.4 years for both BW and BMI, but not for H. At 20.4 years, there was still a trend for greater BW and BMI in the EARLIER than in the LATER subgroup (Table 3). From 7.9 to 20.4 years, FN aBMD was constantly greater in the EARLIER than in the LATER subgroup. The difference was the greatest (+14.1%) at 12.4 years, then declined but remained statistically significant at 20.4 years (+4.8%).\nTable 3 Anthropometric and femoral neck aBMD data from birth to 20.4 years in healthy girls segregated by the median of menarcheal age (values are EARLIER vs LATER, with P)\nBirth | Weight 3.2 ± 0.4 vs 3.2 ± 0.4 (P = 0.995) | Height 49.4 ± 2.2 vs 49.2 ± 1.9 (P = 0.680) | BMI 13.0 ± 1.2 vs 13.1 ± 1.3 (P = 0.706) | FN aBMD NA | n = 47/49, 47/49, 57/58\n1 | Weight 9.1 ± 0.9 vs 9.3 ± 1.0 (P = 0.408) | Height 73.9 ± 3.2 vs 74.0 ± 3.6 (P = 0.819) | BMI 16.7 ± 1.1 vs 17.0 ± 1.6 (P = 0.317) | FN aBMD NA | n = 48/49, 47/49, 47/49\n7.9 ± 0.5 | Weight 27.8 ± 4.2 vs 25.1 ± 3.5 (P = 0.0002) | Height 129.1 ± 5.7 vs 126.3 ± 5.7 (P = 0.006) | BMI 16.6 ± 1.9 vs 15.7 ± 1.6 (P = 0.003) | FN aBMD 640 ± 71 vs 628 ± 77 (P = 0.364) | n = 62/62\n8.9 ± 0.5 | Weight 31.6 ± 5.0 vs 28.1 ± 4.0 (P = 0.0001) | Height 134.5 ± 5.8 vs 130.9 ± 5.9 (P = 0.0001) | BMI 17.4 ± 2.2 vs 16.4 ± 1.8 (P = 0.005) | FN aBMD 658 ± 72 vs 636 ± 77 (P = 0.104) | n = 61/62\n10.0 ± 0.5 | Weight 35.4 ± 5.6 vs 30.9 ± 4.9 (P = 0.0001) | Height 141.5 ± 6.3 vs 136.1 ± 5.9 (P = 0.0001) | BMI 17.6 ± 2.1 vs 16.6 ± 2.0 (P = 0.009) | FN aBMD 689 ± 72 vs 661 ± 81 (P = 0.061) | n = 58/56\n12.4 ± 0.5 | Weight 48.6 ± 6.4 vs 40.2 ± 7.4 (P = 0.0001) | Height 157.8 ± 6.0 vs 149.7 ± 7.7 (P = 0.0001) | BMI 19.5 ± 2.2 vs 17.8 ± 2.5 (P = 0.0004) | FN aBMD 799 ± 84 vs 700 ± 97 (P = 0.001) | n = 54/52\n16.4 ± 0.5 | Weight 58.8 ± 7.4 vs 54.8 ± 8.0 (P = 0.007) | Height 164.2 ± 6.1 vs 163.8 ± 6.3 (P = 0.751) | BMI 21.8 ± 2.6 vs 20.4 ± 2.8 (P = 0.005) | FN aBMD 893 ± 94 vs 841 ± 122 (P = 0.014) | n = 57/56\n20.4 ± 0.6 | Weight 61.4 ± 8.7 vs 58.5 ± 9.6 (P = 0.085) | Height 164.7 ± 6.1 vs 165.1 ± 6.3 (P = 0.703) | BMI 22.7 ± 3.3 vs 21.5 ± 3.4 (P = 0.051) | FN aBMD 878 ± 97 vs 838 ± 116 (P = 0.042) | n = 62/62\nAll values are mean ± SD (weight in kg, height in cm, BMI in kg/m2, FN aBMD in mg/cm2). The percent of girls having experienced their first menstruations was: 0, 1.8, and 25.5% at the age of 8.9, 10.0, and 12.4 years, respectively. All were menstruating at the visit when their mean age was 16.4 ± 0.5 years.\nBMI body mass index, FN femoral neck, aBMD areal bone mineral density, NA not available\nThe differences in BW, H, and BMI gains from birth to 1 year and from 1.0 year up to 7.9–20.4 years between the EARLIER and LATER subgroups are presented in Table 4. These differences corroborate the absolute values, indicating that the influence of pubertal timing is expressed on the gains from 1.0 to 7.9 years onwards, but not on those from birth to 1.0 year (Table 4).\nTable 4 Gains in anthropometric variables from birth to 1 year and from 1 year of age in healthy girls segregated by menarcheal age (values are EARLIER vs LATER, with P)\nBirth to 1 | Weight 6.0 ± 0.8 vs 6.1 ± 1.0 (P = 0.506) | Height 24.7 ± 2.6 vs 24.9 ± 3.9 (P = 0.810) | BMI 3.8 ± 1.6 vs 3.9 ± 1.9 (P = 0.907) | n = 47/49\n1 to 7.9 | Weight 18.4 ± 3.9 vs 15.9 ± 3.4 (P = 0.001) | Height 55.2 ± 5.3 vs 52.2 ± 5.7 (P = 0.009) | BMI −0.2 ± 2.0 vs 1.2 ± 1.9 (P = 0.013) | n = 48/49, 47/49, 47/49\n1 to 8.9 | Weight 22.1 ± 4.8 vs 18.9 ± 4.0 (P = 0.001) | Height 60.7 ± 5.4 vs 56.9 ± 5.9 (P = 0.001) | BMI 0.5 ± 2.4 vs −0.6 ± 2.2 (P = 0.023) | n = 47/49\n1 to 10.0 | Weight 26.3 ± 5.4 vs 21.8 ± 4.9 (P = 0.001) | Height 67.8 ± 6.0 vs 62.5 ± 6.3 (P = 0.001) | BMI 1.0 ± 2.2 vs −0.4 ± 2.4 (P = 0.005) | n = 47/46, 46/46, 46/46\n1 to 12.4 | Weight 39.2 ± 6.2 vs 32.0 ± 7.7 (P = 0.001) | Height 83.7 ± 5.6 vs 76.0 ± 8.7 (P = 0.001) | BMI 2.8 ± 2.4 vs 1.0 ± 2.9 (P = 0.002) | n = 45/45, 44/45, 44/45\n1 to 16.4 | Weight 50.2 ± 7.7 vs 45.4 ± 7.4 (P = 0.002) | Height 91.0 ± 4.9 vs 89.6 ± 6.6 (P = 0.231) | BMI 5.1 ± 2.8 vs 3.5 ± 3.1 (P = 0.009) | n = 45/47\n1 to 20.4 | Weight 53.0 ± 9.1 vs 50.0 ± 10.1 (P = 0.136) | Height 91.0 ± 5.3 vs 91.3 ± 6.6 (P = 0.842) | BMI 6.1 ± 3.7 vs 4.7 ± 3.8 (P = 0.067) | n = 47/49\nAll values are mean ± SD (weight gain in kg, height gain in cm, BMI gain in kg/m2). BMI body mass index\nFigure 2 illustrates the gains in BMI, expressed in Z-scores, from 1.0 to 7.9 years onwards in the EARLIER as compared to the LATER subgroup. Under the histogram, the distribution of the pubertal stages from P1 to P5 documents the difference in the age-related progression of sexual maturation between the two MENA subgroups.\nFig. 2 Changes in BMI from 1.0 to 20.4 years in healthy subjects segregated by the median of menarcheal age. 
The diagram illustrates that the change in BMI Z-score from 1.0 year of age on between subjects with menarcheal age below (EARLIER) and above (LATER) the median is statistically significant at 7.9 and 8.9 years, an age at which all girls were still prepubertal (Tanner stage P1) as indicated below the diagram. The difference culminates at 12.4 years, and then declines afterwards. Note that the progression of BMI from birth to 1.0 year of age was very similar in the EARLIER (from 13.0 to 16.7 kg/m2) and LATER (from 13.1 to 17.0 kg/m2) subgroups (see Table 3). The number of subjects for each age is presented in Table 3. See text for further details. P values between EARLIER and LATER group at each age are indicated above the diagram", "The recently published report from Javaid et al. [30] showed that change in BMI during childhood, from 1 to 12 years, was inversely associated with hip fracture risk in later life. As potential explanations, the authors suggested either a direct effect of low fat mass on bone mineralization or altered timing of pubertal maturation [30]. Our study carried out in a cohort of healthy females whose BMI remained within the normal range complements this report by demonstrating that femoral neck aBMD measured by the end of skeletal development is also linked to gain in BMI during a very similar time interval, precisely from 1 to 12.4 years. Furthermore, our study documents that BMI gain during this time frame is inversely correlated with pubertal timing as prospectively assessed by recording the age of menarche. We previously reported that in healthy adult females, a relatively later menarcheal age by 1.9 year is associated with a deficit in FN aBMD by nearly 0.4 T-score [12]. Taking into account that FN aBMD tracks from early to late adulthood [15, 16], our observation should pertain to the risk of hip fracture in relation with childhood growth [30]. In the study by Javaid et al., BW and BMI measured at birth and 1 year of age were not related to hip fracture [30]. In the same way, our analysis did not reveal any relationship between these two anthropometric variables when measured either at birth or at 1 yr of age and the pubertal timing recorded several years later.\nThe inverse association between gain in BW or BMI during childhood and menarcheal age may be interpreted as a direct effect of fat mass on the neuroendocrine system that triggers the timing of pubertal maturation [37]. Forty years ago, Frisch and Revelle [38] put forward the “critical weight” hypothesis suggesting that a minimum weight (48 kg) or body fat (22%) should be attained to trigger the complex series of events leading to the development of secondary sexual features. More recently, some but not all epidemiologic studies from the United States of America [39] indicated that the secular trend of earlier puberty in girls would coincide with the progressing prevalence of overweight and obesity in children [40, 41]. Nevertheless, when this association was found, the question remained whether earlier pubertal timing was the result or cause of higher body fat [42]. Among putative nutrition or fat mass-related mediators, leptin was specially taken into account. 
From the analysis of experimental and clinical evidence, it emerges that leptin could not be considered as a critical factor [43] that would determine the wide interindividual variability in pubertal timing, as repeatedly observed in a large number of healthy adolescent populations [37, 44], as well as in our cohort with menarcheal age ranging from 10.2 to 16.0 years. Leptin should rather be considered as playing a permissive role in the triggering of the pubertal maturation process [43].\nThe secular trend in earlier puberty was also observed in a very large longitudinal multi-cohort study from Denmark with annual measurements of BW and H in 156,835 school children born from 1930 to 1969 [45, 46]. However, this trend was recorded irrespective of the BMI level as assessed at 7 years of age [45, 46]. Thus, there is no evidence that fat mass would be an essential physiological factor causally implicated in the marked variability of pubertal maturation onset, as worldwide monitored in healthy children.\nIn our study, the difference in BMI gain between healthy, non-obese girls who will experience their first menses relatively earlier (12.1 years) and later (14.0 years), was already significant from 1.0 to 8.9 years of age. In absolute terms at 8.9 years of age, BW was 31.6 ± 5.0 and 28.1 ± 4.0 kg in the earlier and later groups, respectively. The corresponding BMI values were 17.4 ± 2.2 and 16.4 ± 1.8 kg/m2 in the earlier and later subgroup, respectively. In a previous UK study in healthy girls of similar age (8.6 ± 0.2 year), BW (29.5 ± 5.7 kg), and H (1.31 ± 0.05 m), with BMI of 16.9 kg/m2, fat mass was estimated from total body water measurement by deuterium dilution [47]. Using this validated method for measuring children body composition [48], fat mass amounted to 8.0 ± 3.7 kg corresponding to 27% of BW [47]. In our study, the increased BW from 1.0 to 8.9 years of age was 22.1 and 18.9 kg in earlier and later maturers, respectively. Assuming the same adiposity percentage as that estimated by Wells and Cole [47], the corresponding difference in accumulated fat mass before the onset of pubertal maturation would be 0.864 kg (27% of 3.2 kg). It is difficult to conceive that less than 1.0 kg of accumulated fat tissue from 1.0 to 8.9 years of age would so markedly delay the timing of puberty by nearly 2 years. For quantitative comparison, the secular trend of earlier pubertal timing in two nationally representative surveys of US girls studied 25 years apart, showed that menarcheal age declined from 12.75 to 12.54 years (12.80 to 12.60 years for white adolescents), corresponding to a decrease of two and a half months [49]. Between the two surveys, body weight measured in 10-year-old girls increased from 35.16 to 37.91 kg and BMI from 17.54 to 18.43 kg/m2 [49]. Therefore, for differences in BW and BMI similar to those we recorded in our 8.9-year-old girls between earlier and later maturers, the secular trend for younger menarcheal age [49] was about ten times less than in our study, i.e., 2.5 vs. 22.8 months.\nBased on accumulating contradictory evidence as reviewed by Wang [50], several previous reports [51–56] have questioned the “critical weight” hypothesis [38, 57] in the determination of menarche timing. Our study, as compared to the secular trend in earlier menarcheal age associated with increased prevalence of overweight and obesity [49] does not support the hypothesis causally linking fatness to pubertal timing. 
It appears more likely that the direction of causation is opposite, maturational timing affecting body composition [50].\nAlike PBM, pubertal timing is under strong genetic influence, as documented by several twin and family studies [58–64]. Taking into account this strong influence of genetics on pubertal timing, the slight increase in BMI observed in non-obese healthy girls with relatively earlier menarcheal age could correspond to a secondary phenomenon and not to a causal determinant that would be mediated by some putative adipocyte-secreted factors responsible for activating the complex process of pubertal maturation. Therefore, there is no scientific argument to hamper considering pubertal timing as the independent variable that would predict BW and/or BMI, rather than the reverse.\nThe inverse relationship between the timing of puberty and bone mineral mass in adulthood has been often tentatively explained by a difference in estrogen exposure from prepuberty to PBM attainment. However in a recent analysis, we reported [13] that the difference in bone mineral mass between healthy girls experiencing relatively earlier (12.1 ± 0.7 year) vs. later (14.0 ± 0.7 year) menarche was already present at 8.9 years of age, when all subjects were at Tanner stage P1, as assessed by direct examination by a pediatrician–endocrinologist. Moreover, from that prepubertal stage up to 20.4 years, aBMD gains at all skeletal sites examined were similar in earlier and later menarcheal age subgroups [13]. These results do not sustain the notion that difference in estrogen exposure from prepuberty to maturity would explain how pubertal timing could influence PBM and thereby subsequent risk of osteoporosis in later life.\nInterestingly enough, the difference in PBM between African– and European–Americans [65] could not be attributed to faster gain in bone mineral mass during puberty [66]. This racial difference emerges by early childhood [67], although it is not observed in infants 1–18 months of age [68]. The greater velocity of bone accrual in black than white Americans during childhood, but not during pubertal maturation, could well be related to racial difference in pubertal timing [66]. Such a relation would be compatible with the postulated concept linking pubertal timing and PBM acquisition by a common genetic programming [14].\nIn conclusion, in healthy girls, gain in BMI during childhood is associated with pubertal timing as prospectively assessed by recording menarcheal age. This reliable sexual maturation milestone is inversely correlated with several bone traits measured at peak bone mass, including femoral neck aBMD, cortical thickness, and volumetric trabecular density of distal tibia. These data are in accordance and complement further the reported relationship between childhood BMI gain and hip fracture risk in later life [30]. They strongly suggest that BMI gain in children with body weight within the normal range is influenced by pubertal timing as assessed by menarcheal age which in turn, has been shown in several postmenopausal women studies to be inversely related to aBMD or BMC and to increased risk of fragility fractures at several sites of the skeleton including at the hip level." ]
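The statistical workflow summarized in the methods above (within-cohort Z-scores, univariate regression of BMI Z-score on menarcheal-age Z-score at each visit, and an EARLIER/LATER split at the median menarcheal age compared with an unpaired t test) was carried out by the authors in STATA 9.0. The Python sketch below is only an illustrative re-expression of those steps on invented data; the variable names and toy values are assumptions, not the study dataset or the original analysis code.

# Illustrative sketch of the analysis steps described above (not the authors' STATA code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-girl values at a single visit (e.g., 12.4 years)
menarcheal_age = rng.normal(13.0, 1.1, size=105)                      # years
bmi = 18.7 - 0.9 * (menarcheal_age - 13.0) + rng.normal(0, 2.0, 105)  # kg/m2

def cohort_zscore(x):
    # Z-score computed within the cohort itself, as in the paper
    return (x - x.mean()) / x.std(ddof=1)

mena_z = cohort_zscore(menarcheal_age)
bmi_z = cohort_zscore(bmi)

# Univariate regression of BMI Z-score on menarcheal-age Z-score
reg = stats.linregress(mena_z, bmi_z)
print(f"beta = {reg.slope:.3f}, P = {reg.pvalue:.4f}, R2 = {reg.rvalue ** 2:.2f}")

# EARLIER vs LATER maturers (split at the median menarcheal age), unpaired t test on BMI
later = menarcheal_age > np.median(menarcheal_age)
t, p = stats.ttest_ind(bmi[~later], bmi[later])
print(f"EARLIER vs LATER BMI: t = {t:.2f}, P = {p:.4f}")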
[ "introduction", null, null, null, null, null, null, "results", "discussion" ]
[ "BMI development", "Distal tibia microstructure", "Femoral neck BMD", "Menarcheal age", "Peak bone mass" ]
Perceptual impairment and psychomotor control in virtual laparoscopic surgery.
21359902
It is recognised that one of the major difficulties in performing laparoscopic surgery is the translation of two-dimensional video image information to a three-dimensional working area. However, research has tended to ignore the gaze and eye-hand coordination strategies employed by laparoscopic surgeons as they attempt to overcome these perceptual constraints. This study sought to examine if measures related to tool movements, gaze strategy, and eye-hand coordination (the quiet eye) differentiate between experienced and novice operators performing a two-handed manoeuvres task on a virtual reality laparoscopic surgical simulator (LAP Mentor™).
BACKGROUND
Twenty-five right-handed surgeons were categorised as being either experienced (having led more than 60 laparoscopic procedures) or novice (having performed fewer than 10 procedures) operators. The 10 experienced and 15 novice surgeons completed the "two-hand manoeuvres" task from the LAP Mentor basic skills learning environment while wearing a gaze registration system. Performance, movement, gaze, and eye-hand coordination parameters were recorded and compared between groups.
METHODS
The experienced surgeons completed the task significantly more quickly than the novices, used significantly fewer movements, and displayed shorter tool paths. Gaze analyses revealed that experienced surgeons spent significantly more time fixating the target locations than novices, who split their time between focusing on the targets and tracking the tools. A more detailed analysis of a difficult subcomponent of the task revealed that experienced operators used a significantly longer aiming fixation (the quiet eye period) to guide precision grasping movements and hence needed fewer grasp attempts.
RESULTS
The findings of the study provide further support for the utility of examining strategic gaze behaviour and eye-hand coordination measures to help further our understanding of how experienced surgeons attempt to overcome the perceptual difficulties inherent in the laparoscopic environment.
CONCLUSION
[ "Adult", "Analysis of Variance", "Clinical Competence", "Computer Simulation", "Eye Movements", "Female", "Functional Laterality", "Humans", "Laparoscopy", "Male", "Middle Aged", "Psychomotor Performance", "Task Performance and Analysis", "User-Computer Interface" ]
3116127
null
null
null
null
Results
Due to the corruption of a data storage device, performance and S–T data from the LAP Mentor for nine participants (2 experts and 7 novices) were lost. While the gaze and subcomponent task data are complete for all 25 participants, the analysis of the LAP Mentor data involves comparisons between eight novices and eight experienced operators. However, these reduced numbers still provide sufficient statistical power [19]. [SUBTITLE] Complete task: performance [SUBSECTION] Experienced operators completed the task (9 balls) significantly more quickly than novices (t(14) = 3.02, p < 0.010; see Table 1).\nTable 1 Mean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor)\nParameter | Novice | Experienced\nCompletion time (s) | 203.8 ± 76.5 | 117.6 ± 25.7\nTotal NoM | 265.7 ± 97.9 | 139.3 ± 25.9\nTotal PL (cm) | 726.1 ± 235.6 | 438.9 ± 38.0\nNoM number of movements, PL path length\n[SUBTITLE] Complete task: tool movements (S–T interface) [SUBSECTION] Experienced operators made significantly fewer total movements (t(14) = 3.53, p < 0.005) and had significantly shorter total tool path lengths (t(14) = 3.40, p < 0.005) than novices (see Table 1). [SUBTITLE] Complete task: gaze strategy (S–M interface) [SUBSECTION] The ANOVA on the percentage time spent fixating on each gaze location revealed a significant main effect for location (F(1,23) = 27.2, p < 0.001) and no significant main effect for ability level (F(1,23) = 3.4, p = 0.081). These results were qualified by a significant interaction effect (F(1,23) = 13.2, p < 0.005). As Fig. 3 demonstrates, experts spent significantly more time fixating on the relevant target (jelly, ball, or endobag) than their novice counterparts (p < 0.005), while novices spent significantly more time tracking the tools than their expert counterparts (p < 0.005). Novices spent similar amounts of time fixating on the target ball and tracking the tools (p = 0.133), while experts spent significantly more time fixating the target balls compared to tool tracking (p < 0.001).\nFig. 3 The percentage of total fixation duration to target (jelly, ball, or endobag) and tool for novice and experienced surgeons (±SEM)\n[SUBTITLE] Subcomponent task: eye-hand coordination [SUBSECTION] Experienced operators had significantly longer quiet eye (QE) durations on the target ball (t(23) = 3.49, p < 0.005) and made significantly fewer grasp attempts (t(23) = 2.92, p < 0.010) than novice operators (Table 2).\nTable 2 Mean (±SD) quiet eye duration and number of ball grasp attempts during phase 2 of the task, for novice and experienced groups\nParameter | Novice | Experienced\nQuiet eye (ms) | 597.3 ± 194.6 | 1123.4 ± 539.1\nNumber of grasp attempts | 3.3 ± 0.6 | 2.5 ± 0.7
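The gaze measures reported in these results were derived frame by frame in GazeTracker from 25 Hz video, using the fixation and quiet-eye definitions given in the paper's Measures text (a fixation is at least 120 ms within a single lookzone; the quiet eye is the final fixation on the ball before the tool tip comes within 2° of it). The snippet below is a schematic re-implementation of those two definitions on a made-up trial; the lookzone labels, frame handling, and toy data are assumptions rather than the authors' processing pipeline.

# Schematic re-implementation of the fixation and quiet-eye definitions (not the GazeTracker pipeline).
FRAME_MS = 1000 / 25      # 40 ms per frame at 25 Hz
MIN_FIX_MS = 120          # minimum duration for a gaze to count as a fixation

def runs(labels):
    # Collapse consecutive identical lookzone labels into (label, start, length) runs
    out, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            out.append((labels[start], start, i - start))
            start = i
    return out

def fixation_percentages(labels):
    # Percent of total fixation time spent in each lookzone (runs of >= 120 ms only)
    fixations = [(lab, n) for lab, _, n in runs(labels) if n * FRAME_MS >= MIN_FIX_MS]
    total = sum(n for _, n in fixations)
    return {lab: 100.0 * sum(n for l, n in fixations if l == lab) / total
            for lab in {l for l, _ in fixations}}

def quiet_eye_ms(labels, critical_frame):
    # Duration of the final fixation on the ball before the tool tip reaches it
    # (simplification: the fixation is truncated at the critical frame)
    for lab, _, n in reversed(runs(labels[:critical_frame])):
        if lab == "ball" and n * FRAME_MS >= MIN_FIX_MS:
            return n * FRAME_MS
    return 0.0

# Toy trial: gaze alternates between tool and ball, then settles on the ball
gaze = ["tool"] * 10 + ["ball"] * 5 + ["tool"] * 8 + ["ball"] * 25
print(fixation_percentages(gaze))             # e.g. {'ball': 62.5, 'tool': 37.5}
print(quiet_eye_ms(gaze, critical_frame=45))  # 880.0 ms in this toy example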
null
null
[ "Participants", "Apparatus and task", "Procedure", "Measures", "Complete task", "Subcomponent task: grasping a ball", "Analysis", "Complete task: performance", "Complete task: tool movements (S–T interface)", "Complete task: gaze strategy (S–M interface)", "Subcomponent task: eye-hand coordination" ]
[ "A total of 25 surgeons volunteered to take part in the study (mean age = 32.0 years; range = 24–49 years). All participants were right-hand dominant and were classified as novice or experienced laparoscopic surgeons according to the number of laparoscopic procedures they had led. Fifteen novices (6 males, 9 females) had performed fewer than 10 procedures and ten experienced operators (9 males, 1 female) had led more than 60 procedures (range = 60–700). Power calculations using mean and standard deviations from previous studies using similar tasks and groupings [2, 19] suggest that groups should consist of at least eight individuals for a one-tailed test with α = 0.05 and power (1 − β) = 0.8.", "Testing took place on a LAP Mentor™ (Simbionix USA Corp., Cleveland, OH) VR laparoscopic surgical simulator, based at the Centre for Innovation and Training in Elective Care, Torbay Hospital. The two-handed manoeuvres task from the basic skills training module was used for this study as recent research has suggested that it discriminates between levels of expertise across a range of objective performance and S–T measures [2]. To complete the task the operator must locate balls within a jelly mass and then place them in an endobag. There are three subcomponents to this task requiring accurate psychomotor control: (1) grasping the jelly and manipulating it to expose a ball, (2) grasping a ball, and (3) placing the ball in the endobag (Fig. 1).Fig. 1Images of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\n\nImages of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\nParticipants were fitted with an Applied Science Laboratories Mobile Eye gaze registration system (ASL, Bedford, MA), which measures eye-line of gaze using dark pupil tracking [19]. The system incorporates a pair of lightweight glasses fitted with eye and scene cameras and a set of three LEDs that project harmless near infrared (IR) light onto the eye via a reflective monocle (Fig. 2A). Some of this light is reflected by the cornea (corneal reflection) and appears to the eye camera as a triangle of three dots at a fixed distance from each other. The pupil appears black as light does not exit the inside of the eye, enabling the system to register its position and determine its centre. When the eye turns, the centre of the pupil moves relative to the head, however, the corneal reflection remains in the same position. Therefore, by comparing the vector (angle and distance) between the pupil and the cornea, the eye tracking system can compute the angle at which the eye is pointed (Fig. 2B).Fig. 2The head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\n\nThe head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\nThe system also incorporates a recording device (a modified digital video cassette recorder), which combines the two video streams from the eye and scene cameras at 25 Hz. 
The recorder is attached to a laptop installed with Eyevision (ASL) software; both recorder and laptop were placed on a table to the side of the participant. By teaching the system how the angles calculated by the eye camera relate to the image from the second camera that is viewing the environment (the scene camera), the eye tracker can compute what the eye is pointed at. A circular cursor, representing 1° of visual angle with a 4.5-mm lens, indicating the location of gaze in a video image of the scene (spatial accuracy of ±0.5° visual angle; 0.1° precision), is viewed in real time and recorded for subsequent offline analyses.", "Participants arrived at the Training Centre individually at prearranged times. They first read an information sheet describing the aims of the study before completing a demographic questionnaire and providing written informed consent of participation. Participants were fitted with the eye tracker and it was calibrated using six visual landmarks on the LAP Mentor display screen. They then performed three consecutive attempts at the two-handed manoeuvres task, as part of a series of activities, before being debriefed and thanked for their participation in the study.", "[SUBTITLE] Complete task [SUBSECTION] Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). 
The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\nTask performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\n[SUBTITLE] Subcomponent task: grasping a ball [SUBSECTION] In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). 
It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.", "Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number-of-movement data for the left and right tools, and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). 
For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining whether similar gaze strategies were adopted for this grasping task, which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as gaze maintained on a single location (within 1° of visual angle) for long enough to allow information processing (≥120 ms). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (i.e., one frame every 40 ms). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.", "In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.", "The first trial was considered a familiarization attempt for all participants, providing an insight into the testing protocol while limiting additional learning opportunities prior to testing. Data from the subsequent two trials were averaged to provide a mean value for each variable for each participant to be used for subsequent analyses. The researcher analysing the gaze data was experienced in performing such analyses and blind to the skill levels of the participants to protect against analysis bias.\nShapiro-Wilk tests revealed that all data were normally distributed. 
Differences between groups in task completion time, total path length (TPL), total number of movements (TNoM), number of attempts required to grasp a ball, and quiet eye (QE) duration were analysed using a series of independent-group t tests. Differences in the locations (tools or target) fixated upon were subjected to a mixed-design 2 × 2 ANOVA (group × location), with Bonferroni-adjusted post hoc t tests used to follow up significant interaction effects. All analyses were performed using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL).", "Experienced operators completed the task (9 balls) significantly more quickly than novices (t(14) = 3.02, p < 0.010; see Table 1). Table 1. Mean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor): completion time (s), novice 203.8 ± 76.5 vs. experienced 117.6 ± 25.7; total NoM, 265.7 ± 97.9 vs. 139.3 ± 25.9; total PL (cm), 726.1 ± 235.6 vs. 438.9 ± 38.0 (NoM, number of movements; PL, path length).", "Experienced operators made significantly fewer total movements (t(14) = 3.53, p < 0.005) and had significantly shorter total tool path lengths (t(14) = 3.40, p < 0.005) than novices (see Table 1).", "The ANOVA on the percentage of time spent fixating on each gaze location revealed a significant main effect for location (F(1,23) = 27.2, p < 0.001) and no significant main effect for ability level (F(1,23) = 3.4, p = 0.081). These results were qualified by a significant interaction effect (F(1,23) = 13.2, p < 0.005). As Fig. 3 demonstrates, experts spent significantly more time fixating on the relevant target (jelly, ball, or endobag) than their novice counterparts (p < 0.005), while novices spent significantly more time tracking the tools than their expert counterparts (p < 0.005). Novices spent similar amounts of time fixating on the target ball and tracking the tools (p = 0.133), while experts spent significantly more time fixating the target balls compared to tool tracking (p < 0.001). Fig. 3. The percentage of total fixation duration to target (jelly, ball, or endobag) and tool for novice and experienced surgeons (±SEM).", "Experienced operators had significantly longer quiet eye (QE) durations on the target ball (t(23) = 3.49, p < 0.005) and made significantly fewer grasp attempts (t(23) = 2.92, p < 0.010) than novice operators (Table 2). Table 2. Mean (±SD) quiet eye duration and number of ball grasp attempts during phase 2 of the task, for novice and experienced groups: quiet eye (ms), novice 597.3 ± 194.6 vs. experienced 1123.4 ± 539.1; number of grasp attempts, 3.3 ± 0.6 vs. 2.5 ± 0.7." ]
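The comparisons described in the Analysis section (independent-group t tests on the summary measures, and a 2 × 2 mixed ANOVA on fixation location with Bonferroni-corrected follow-ups) were run in SPSS; the following is a minimal Python sketch of an equivalent workflow. The file name, column names, and the use of scipy and pingouin are illustrative assumptions, not the authors' pipeline.

# Illustrative sketch only: hypothetical file/column names, not the original SPSS analysis.
import pandas as pd
from scipy import stats
import pingouin as pg  # one convenient option for a mixed-design ANOVA

# Assumed long format: one row per participant per fixation location ("target" / "tool"),
# with the per-participant summary measures repeated on each row.
df = pd.read_csv("lapmentor_trials.csv")  # hypothetical data file

per_person = df.drop_duplicates("participant")
novice = per_person[per_person.group == "novice"]
expert = per_person[per_person.group == "experienced"]

# Independent-group t tests on the per-participant summary measures
for measure in ["completion_time_s", "total_path_length_cm", "total_movements",
                "grasp_attempts", "quiet_eye_ms"]:
    t, p = stats.ttest_ind(expert[measure], novice[measure])
    print(f"{measure}: t = {t:.2f}, p = {p:.3f}")

# 2 x 2 mixed ANOVA (between: group; within: fixation location) on % fixation time,
# followed by Bonferroni-adjusted pairwise comparisons.
aov = pg.mixed_anova(data=df, dv="pct_fixation_time",
                     within="location", between="group", subject="participant")
posthoc = pg.pairwise_tests(data=df, dv="pct_fixation_time",
                            within="location", between="group",
                            subject="participant", padjust="bonf")
print(aov.round(3))
print(posthoc.round(3))

With group sizes like those in the study (8 per group for the LAP Mentor measures, 25 participants for gaze measures), this produces statistics of the same form as those reported in the Results, i.e., t(14), t(23), and F(1,23) values.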
[ null, null, null, null, null, null, null, null, null, null, null ]
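The fixation criterion quoted in the Measures section (gaze held within 1° of visual angle for at least 120 ms, with video sampled at 25 Hz) and the percentage of fixation time falling inside a lookzone can be sketched as a simple dispersion-based routine. This is an illustration only: GazeTracker's own algorithm and the experimenter-adjusted lookzones are not reproduced here, and the static circular zone and array layout below are assumptions.

# Dispersion-based fixation detection: gaze within 1 deg for >= 120 ms at 25 Hz.
import numpy as np

FRAME_MS = 40.0            # 25 Hz scene video, i.e., one frame every 40 ms
MIN_FIX_MS = 120.0         # minimum fixation duration used in the study
MAX_DISPERSION_DEG = 1.0   # spatial threshold (degrees of visual angle)

def dispersion(window):
    # Classic I-DT dispersion: (max x - min x) + (max y - min y)
    span = window.max(axis=0) - window.min(axis=0)
    return span[0] + span[1]

def detect_fixations(gaze_deg):
    """gaze_deg: (n_frames, 2) array of gaze position in degrees of visual angle.
    Returns a list of (start, end) frame indices (inclusive) classed as fixations."""
    min_frames = int(np.ceil(MIN_FIX_MS / FRAME_MS))   # 3 frames at 25 Hz
    fixations, i, n = [], 0, len(gaze_deg)
    while i + min_frames <= n:
        j = i + min_frames
        if dispersion(gaze_deg[i:j]) <= MAX_DISPERSION_DEG:
            # grow the window while the gaze points stay within the 1 deg threshold
            while j < n and dispersion(gaze_deg[i:j + 1]) <= MAX_DISPERSION_DEG:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

def pct_fixation_time_in_zone(gaze_deg, fixations, zone_centre_deg, zone_radius_deg):
    """Percentage of total trial time spent fixating inside a circular lookzone
    (a simplification of the moving lookzones described in the text)."""
    in_zone_frames = 0
    for start, end in fixations:
        centroid = gaze_deg[start:end + 1].mean(axis=0)
        if np.linalg.norm(centroid - zone_centre_deg) <= zone_radius_deg:
            in_zone_frames += end - start + 1
    return 100.0 * in_zone_frames / len(gaze_deg)

Growing the window while dispersion stays under 1° follows the standard I-DT approach; tighter or looser thresholds simply trade off the number of detected fixations against their durations.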
[ "Methods", "Participants", "Apparatus and task", "Procedure", "Measures", "Complete task", "Subcomponent task: grasping a ball", "Analysis", "Results", "Complete task: performance", "Complete task: tool movements (S–T interface)", "Complete task: gaze strategy (S–M interface)", "Subcomponent task: eye-hand coordination", "Discussion" ]
[ "[SUBTITLE] Participants [SUBSECTION] A total of 25 surgeons volunteered to take part in the study (mean age = 32.0 years; range = 24–49 years). All participants were right-hand dominant and were classified as novice or experienced laparoscopic surgeons according to the number of laparoscopic procedures they had led. Fifteen novices (6 males, 9 females) had performed fewer than 10 procedures and ten experienced operators (9 males, 1 female) had led more than 60 procedures (range = 60–700). Power calculations using mean and standard deviations from previous studies using similar tasks and groupings [2, 19] suggest that groups should consist of at least eight individuals for a one-tailed test with α = 0.05 and power (1 − β) = 0.8.\nA total of 25 surgeons volunteered to take part in the study (mean age = 32.0 years; range = 24–49 years). All participants were right-hand dominant and were classified as novice or experienced laparoscopic surgeons according to the number of laparoscopic procedures they had led. Fifteen novices (6 males, 9 females) had performed fewer than 10 procedures and ten experienced operators (9 males, 1 female) had led more than 60 procedures (range = 60–700). Power calculations using mean and standard deviations from previous studies using similar tasks and groupings [2, 19] suggest that groups should consist of at least eight individuals for a one-tailed test with α = 0.05 and power (1 − β) = 0.8.\n[SUBTITLE] Apparatus and task [SUBSECTION] Testing took place on a LAP Mentor™ (Simbionix USA Corp., Cleveland, OH) VR laparoscopic surgical simulator, based at the Centre for Innovation and Training in Elective Care, Torbay Hospital. The two-handed manoeuvres task from the basic skills training module was used for this study as recent research has suggested that it discriminates between levels of expertise across a range of objective performance and S–T measures [2]. To complete the task the operator must locate balls within a jelly mass and then place them in an endobag. There are three subcomponents to this task requiring accurate psychomotor control: (1) grasping the jelly and manipulating it to expose a ball, (2) grasping a ball, and (3) placing the ball in the endobag (Fig. 1).Fig. 1Images of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\n\nImages of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\nParticipants were fitted with an Applied Science Laboratories Mobile Eye gaze registration system (ASL, Bedford, MA), which measures eye-line of gaze using dark pupil tracking [19]. The system incorporates a pair of lightweight glasses fitted with eye and scene cameras and a set of three LEDs that project harmless near infrared (IR) light onto the eye via a reflective monocle (Fig. 2A). Some of this light is reflected by the cornea (corneal reflection) and appears to the eye camera as a triangle of three dots at a fixed distance from each other. The pupil appears black as light does not exit the inside of the eye, enabling the system to register its position and determine its centre. When the eye turns, the centre of the pupil moves relative to the head, however, the corneal reflection remains in the same position. 
Therefore, by comparing the vector (angle and distance) between the pupil and the cornea, the eye tracking system can compute the angle at which the eye is pointed (Fig. 2B).Fig. 2The head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\n\nThe head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\nThe system also incorporates a recording device (a modified digital video cassette recorder), which combines the two video streams from the eye and scene cameras at 25 Hz. The recorder is attached to a laptop installed with Eyevision (ASL) software; both recorder and laptop were placed on a table to the side of the participant. By teaching the system how the angles calculated by the eye camera relate to the image from the second camera that is viewing the environment (the scene camera), the eye tracker can compute what the eye is pointed at. A circular cursor, representing 1° of visual angle with a 4.5-mm lens, indicating the location of gaze in a video image of the scene (spatial accuracy of ±0.5° visual angle; 0.1° precision), is viewed in real time and recorded for subsequent offline analyses.\nTesting took place on a LAP Mentor™ (Simbionix USA Corp., Cleveland, OH) VR laparoscopic surgical simulator, based at the Centre for Innovation and Training in Elective Care, Torbay Hospital. The two-handed manoeuvres task from the basic skills training module was used for this study as recent research has suggested that it discriminates between levels of expertise across a range of objective performance and S–T measures [2]. To complete the task the operator must locate balls within a jelly mass and then place them in an endobag. There are three subcomponents to this task requiring accurate psychomotor control: (1) grasping the jelly and manipulating it to expose a ball, (2) grasping a ball, and (3) placing the ball in the endobag (Fig. 1).Fig. 1Images of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\n\nImages of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\nParticipants were fitted with an Applied Science Laboratories Mobile Eye gaze registration system (ASL, Bedford, MA), which measures eye-line of gaze using dark pupil tracking [19]. The system incorporates a pair of lightweight glasses fitted with eye and scene cameras and a set of three LEDs that project harmless near infrared (IR) light onto the eye via a reflective monocle (Fig. 2A). Some of this light is reflected by the cornea (corneal reflection) and appears to the eye camera as a triangle of three dots at a fixed distance from each other. The pupil appears black as light does not exit the inside of the eye, enabling the system to register its position and determine its centre. When the eye turns, the centre of the pupil moves relative to the head, however, the corneal reflection remains in the same position. 
Therefore, by comparing the vector (angle and distance) between the pupil and the cornea, the eye tracking system can compute the angle at which the eye is pointed (Fig. 2B).Fig. 2The head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\n\nThe head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\nThe system also incorporates a recording device (a modified digital video cassette recorder), which combines the two video streams from the eye and scene cameras at 25 Hz. The recorder is attached to a laptop installed with Eyevision (ASL) software; both recorder and laptop were placed on a table to the side of the participant. By teaching the system how the angles calculated by the eye camera relate to the image from the second camera that is viewing the environment (the scene camera), the eye tracker can compute what the eye is pointed at. A circular cursor, representing 1° of visual angle with a 4.5-mm lens, indicating the location of gaze in a video image of the scene (spatial accuracy of ±0.5° visual angle; 0.1° precision), is viewed in real time and recorded for subsequent offline analyses.\n[SUBTITLE] Procedure [SUBSECTION] Participants arrived at the Training Centre individually at prearranged times. They first read an information sheet describing the aims of the study before completing a demographic questionnaire and providing written informed consent of participation. Participants were fitted with the eye tracker and it was calibrated using six visual landmarks on the LAP Mentor display screen. They then performed three consecutive attempts at the two-handed manoeuvres task, as part of a series of activities, before being debriefed and thanked for their participation in the study.\nParticipants arrived at the Training Centre individually at prearranged times. They first read an information sheet describing the aims of the study before completing a demographic questionnaire and providing written informed consent of participation. Participants were fitted with the eye tracker and it was calibrated using six visual landmarks on the LAP Mentor display screen. They then performed three consecutive attempts at the two-handed manoeuvres task, as part of a series of activities, before being debriefed and thanked for their participation in the study.\n[SUBTITLE] Measures [SUBSECTION] [SUBTITLE] Complete task [SUBSECTION] Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). 
For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\nTask performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). 
The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\n[SUBTITLE] Subcomponent task: grasping a ball [SUBSECTION] In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.\nIn order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. 
The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.\n[SUBTITLE] Complete task [SUBSECTION] Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\nTask performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). 
For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\n[SUBTITLE] Subcomponent task: grasping a ball [SUBSECTION] In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.\nIn order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. 
Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.\n[SUBTITLE] Analysis [SUBSECTION] The first trial was considered a familiarization attempt for all participants, providing an insight into the testing protocol while limiting additional learning opportunities prior to testing. Data from the subsequent two trials were averaged to provide a mean value for each variable for each participant to be used for subsequent analyses. The researcher analysing the gaze data was experienced in performing such analyses and blind to the skill levels of the participants to protect against analysis bias.\nShapiro-Wilk tests revealed that all data were normally distributed. Differences between task completion time, total path length (TPL), total number of movements (TNoM), number of attempts required to grasp a ball, and quiet eye (QE) variables for each group were analysed using a series of independent group t tests. Differences in the locations (tools or target) fixated upon were subjected to a mixed-design 2 × 2 ANOVA (group × location), with Bonferroni-adjusted post hoc t tests used to follow up significant interaction effects. All analyses were performed using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL).\nThe first trial was considered a familiarization attempt for all participants, providing an insight into the testing protocol while limiting additional learning opportunities prior to testing. Data from the subsequent two trials were averaged to provide a mean value for each variable for each participant to be used for subsequent analyses. The researcher analysing the gaze data was experienced in performing such analyses and blind to the skill levels of the participants to protect against analysis bias.\nShapiro-Wilk tests revealed that all data were normally distributed. Differences between task completion time, total path length (TPL), total number of movements (TNoM), number of attempts required to grasp a ball, and quiet eye (QE) variables for each group were analysed using a series of independent group t tests. Differences in the locations (tools or target) fixated upon were subjected to a mixed-design 2 × 2 ANOVA (group × location), with Bonferroni-adjusted post hoc t tests used to follow up significant interaction effects. 
All analyses were performed using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL).", "A total of 25 surgeons volunteered to take part in the study (mean age = 32.0 years; range = 24–49 years). All participants were right-hand dominant and were classified as novice or experienced laparoscopic surgeons according to the number of laparoscopic procedures they had led. Fifteen novices (6 males, 9 females) had performed fewer than 10 procedures and ten experienced operators (9 males, 1 female) had led more than 60 procedures (range = 60–700). Power calculations using mean and standard deviations from previous studies using similar tasks and groupings [2, 19] suggest that groups should consist of at least eight individuals for a one-tailed test with α = 0.05 and power (1 − β) = 0.8.", "Testing took place on a LAP Mentor™ (Simbionix USA Corp., Cleveland, OH) VR laparoscopic surgical simulator, based at the Centre for Innovation and Training in Elective Care, Torbay Hospital. The two-handed manoeuvres task from the basic skills training module was used for this study as recent research has suggested that it discriminates between levels of expertise across a range of objective performance and S–T measures [2]. To complete the task the operator must locate balls within a jelly mass and then place them in an endobag. There are three subcomponents to this task requiring accurate psychomotor control: (1) grasping the jelly and manipulating it to expose a ball, (2) grasping a ball, and (3) placing the ball in the endobag (Fig. 1).Fig. 1Images of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\n\nImages of the LAP Mentor™ environment showing representative examples of the “two-handed manoeuvres” task: grasping and manipulating the jelly to reveal a ball (A), grasping a ball (B), and dropping a ball in the endobag (C)\nParticipants were fitted with an Applied Science Laboratories Mobile Eye gaze registration system (ASL, Bedford, MA), which measures eye-line of gaze using dark pupil tracking [19]. The system incorporates a pair of lightweight glasses fitted with eye and scene cameras and a set of three LEDs that project harmless near infrared (IR) light onto the eye via a reflective monocle (Fig. 2A). Some of this light is reflected by the cornea (corneal reflection) and appears to the eye camera as a triangle of three dots at a fixed distance from each other. The pupil appears black as light does not exit the inside of the eye, enabling the system to register its position and determine its centre. When the eye turns, the centre of the pupil moves relative to the head, however, the corneal reflection remains in the same position. Therefore, by comparing the vector (angle and distance) between the pupil and the cornea, the eye tracking system can compute the angle at which the eye is pointed (Fig. 2B).Fig. 
2The head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\n\nThe head-mounted unit from the ASL Mobile Eye gaze registration system (A) and the software environment (B) showing the pupil, the corneal reflection, and the vector line between the two\nThe system also incorporates a recording device (a modified digital video cassette recorder), which combines the two video streams from the eye and scene cameras at 25 Hz. The recorder is attached to a laptop installed with Eyevision (ASL) software; both recorder and laptop were placed on a table to the side of the participant. By teaching the system how the angles calculated by the eye camera relate to the image from the second camera that is viewing the environment (the scene camera), the eye tracker can compute what the eye is pointed at. A circular cursor, representing 1° of visual angle with a 4.5-mm lens, indicating the location of gaze in a video image of the scene (spatial accuracy of ±0.5° visual angle; 0.1° precision), is viewed in real time and recorded for subsequent offline analyses.", "Participants arrived at the Training Centre individually at prearranged times. They first read an information sheet describing the aims of the study before completing a demographic questionnaire and providing written informed consent of participation. Participants were fitted with the eye tracker and it was calibrated using six visual landmarks on the LAP Mentor display screen. They then performed three consecutive attempts at the two-handed manoeuvres task, as part of a series of activities, before being debriefed and thanked for their participation in the study.", "[SUBTITLE] Complete task [SUBSECTION] Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). 
For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\nTask performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.\n[SUBTITLE] Subcomponent task: grasping a ball [SUBSECTION] In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. 
The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.\nIn order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.", "Task performance was assessed in terms of speed (task completion time), and total path length and number of tool movements were chosen from the S–T interface to replicate Aggarwal et al. [2]. Complete task performance and tool process measures were downloaded directly from the LAP Mentor software environment after each trial. The software provided path length and number of movement data for left and right tools and these were aggregated to provide a composite score.\nThe percentage of time spent fixating on important locations was calculated to provide an overall measure of how the participants attended to the two-dimensional environment (S–M interface) (as in Wilson et al. [19]). 
The target of interest changes depending on the particular subcomponent of the task being undertaken, i.e., the jelly to be moved in phase 1, the ball to be lifted in phase 2, and the endobag in which to place the ball in phase 3 (Fig. 1). For simpler “pointing” tasks in laparoscopic environments, it has been demonstrated that novices tend to switch gaze between tool and target locations in order to determine the relative position of both, whereas experts adopt a more efficient strategy, fixating almost exclusively on the target location [19, 21]. We were interested in determining if similar gaze strategies were adopted for this grasping task which places greater demands on depth perception than pointing.\nThe gaze data were analysed in a frame-by-frame manner using GazeTracker (Eye Response Technologies, Charlottesville, VA, USA) video analysis software. A fixation was defined as a gaze of long enough duration to allow information processing (≥120 ms) to a single location (within 1° visual angle). For each subcomponent of the task, areas of interest (“lookzones”) were created around the relevant subtask target (jelly mould, ball, or endobag) and each tool. These were maintained in place by the experimenter as the video progressed at 25 Hz (40 frames a second). The software then provided information regarding the duration and frequency of gazes occurring within each area of interest for the duration of the trial.", "In order to provide a more fine-grained analysis of the effects of perceptual constraints on psychomotor control, each attempt at grasping a ball (phase 2) was investigated via video footage analyses. This additional level of analysis sought to answer calls for research examining simulator validity evidence to include outcome measures based on decisive actions during procedures [7]. Ball grasp attempts were selected as they were most sensitive to altered depth perception effects due to the increased degree of accuracy required; the ball diameter is considerably less than either the jelly mould or endobag aperture diameter. The number of attempts required to successfully grasp a ball was defined as the performance measure for this subcomponent, and the quiet eye (QE) was adopted as a specific measure to reflect the spatial and temporal coordination between gaze and motor control [20, 22].\nThe QE has been shown to underlie higher levels of skill and performance in a wide range of aiming and interceptive skills (see [22] for a review). It is defined as the duration of the final fixation toward a relevant target prior to the execution of the critical phase of movement and has been accepted as a measure of optimal psychomotor control. It is postulated that the QE allows for a period of cognitive preprogramming of movement parameters while minimizing distraction from other environmental or internal cues [20]. The critical movement for this specific task was considered as the arrival of the tool within “2 gaze cursors” distance (i.e., 2°) of the ball. The QE is therefore operationally defined as the final fixation on the ball prior to the arrival of the tool tip within 2° of visual angle of the target.", "The first trial was considered a familiarization attempt for all participants, providing an insight into the testing protocol while limiting additional learning opportunities prior to testing. Data from the subsequent two trials were averaged to provide a mean value for each variable for each participant to be used for subsequent analyses. 
The researcher analysing the gaze data was experienced in performing such analyses and blind to the skill levels of the participants to protect against analysis bias.\nShapiro-Wilk tests revealed that all data were normally distributed. Differences between task completion time, total path length (TPL), total number of movements (TNoM), number of attempts required to grasp a ball, and quiet eye (QE) variables for each group were analysed using a series of independent group t tests. Differences in the locations (tools or target) fixated upon were subjected to a mixed-design 2 × 2 ANOVA (group × location), with Bonferroni-adjusted post hoc t tests used to follow up significant interaction effects. All analyses were performed using SPSS 15.0 for Windows (SPSS Inc., Chicago, IL).", "Due to the corruption of a data storage device, performance and S–T data from the LAP Mentor for nine participants (2 experts and 7 novices) were lost. While the gaze and subcomponent task data are complete for all 25 participants, the analysis of the LAP Mentor data involves comparisons between eight novices and eight experienced operators. However, these reduced numbers still provide sufficient statistical power [19].\n[SUBTITLE] Complete task: performance [SUBSECTION] Experienced operators completed the task (9 balls) significantly more quickly than novices (t\n14 = 3.02, p < 0.010; see Table 1).Table 1Mean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor)ParameterNoviceExperiencedCompletion time (s)203.8 ± 76.5117.6 ± 25.7Total NoM265.7 ± 97.9139.3 ± 25.9Total PL (cm)726.1 ± 235.6438.9 ± 38.0\nNoM number of movements, PL path length\n\nMean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor)\n\nNoM number of movements, PL path length\nExperienced operators completed the task (9 balls) significantly more quickly than novices (t\n14 = 3.02, p < 0.010; see Table 1).Table 1Mean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor)ParameterNoviceExperiencedCompletion time (s)203.8 ± 76.5117.6 ± 25.7Total NoM265.7 ± 97.9139.3 ± 25.9Total PL (cm)726.1 ± 235.6438.9 ± 38.0\nNoM number of movements, PL path length\n\nMean (±SD) performance and S–T process measures for novice and experienced groups (from LAP Mentor)\n\nNoM number of movements, PL path length\n[SUBTITLE] Complete task: tool movements (S–T interface) [SUBSECTION] Experienced operators made significantly fewer total movements (t\n14 = 3.53, p < 0.005) and had significantly shorter total tool path lengths (t\n14 = 3.40, p < 0.005) than novices (see Table 1).\nExperienced operators made significantly fewer total movements (t\n14 = 3.53, p < 0.005) and had significantly shorter total tool path lengths (t\n14 = 3.40, p < 0.005) than novices (see Table 1).\n[SUBTITLE] Complete task: gaze strategy (S–M interface) [SUBSECTION] The ANOVA on the percentage time spent fixating on each gaze location revealed a significant main effect for location (F\n1,23 = 27.2, p < 0.001) and no significant main effect for ability level (F\n1,23 = 3.4, p = 0.081). These results were qualified by a significant interaction effect (F\n1,23 = 13.2, p < 0.005). As Fig. 3 demonstrates, experts spent significantly more time fixating on the relevant target (jelly, ball, or endobag) than their novice counterparts (p < 0.005), while novices spent significantly more time tracking the tools than their expert counterparts (p < 0.005). 
Novices spent similar amounts of time fixating on the target ball and tracking the tools (p = 0.133), while experts spent significantly more time fixating the target balls compared with tool tracking (p < 0.001).\nFig. 3 The percentage of total fixation duration to target (jelly, ball, or endobag) and tool for novice and experienced surgeons (±SEM)\n[SUBTITLE] Subcomponent task: eye-hand coordination [SUBSECTION] Experienced operators had significantly longer quiet eye (QE) durations on the target ball (t23 = 3.49, p < 0.005) and made significantly fewer grasp attempts (t23 = 2.92, p < 0.010) than novice operators (Table 2).\nTable 2. Mean (±SD) quiet eye duration and number of ball grasp attempts during phase 2 of the task, for novice and experienced groups:\nQuiet eye (ms): novice 597.3 ± 194.6, experienced 1123.4 ± 539.1\nNumber of grasp attempts: novice 3.3 ± 0.6, experienced 2.5 ± 0.7", "Experienced operators completed the task (9 balls) significantly more quickly than novices (t14 = 3.02, p < 0.010; see Table 1).", "Experienced operators made significantly fewer total movements (t14 = 3.53, p < 0.005) and had significantly shorter total tool path lengths (t14 = 3.40, p < 0.005) than novices (see Table 1).", "The ANOVA on the percentage time spent fixating on each gaze location revealed a significant main effect for location (F1,23 = 27.2, p < 0.001) and no significant main effect for ability level (F1,23 = 3.4, p = 0.081). These results were qualified by a significant interaction effect (F1,23 = 13.2, p < 0.005). As Fig. 3 demonstrates, experts spent significantly more time fixating on the relevant target (jelly, ball, or endobag) than their novice counterparts (p < 0.005), while novices spent significantly more time tracking the tools than their expert counterparts (p < 0.005). Novices spent similar amounts of time fixating on the target ball and tracking the tools (p = 0.133), while experts spent significantly more time fixating the target balls compared with tool tracking (p < 0.001).", "Experienced operators had significantly longer quiet eye (QE) durations on the target ball (t23 = 3.49, p < 0.005) and made significantly fewer grasp attempts (t23 = 2.92, p < 0.010) than novice operators (Table 2).", "The aim of this research was to explore the coordination of eye and tool movements for a laparoscopic training task requiring coordinated and integrated hand movements. This study therefore extends recent research examining separate measures from the S–M (eye movement) and S–T (tool movement) interface for a more basic task [19]. The primary objective was to gain further insight into the strategies used by experienced and novice operators to overcome the perceptual constraints imposed by the laparoscopic environment. As the goal of simulated training is arguably the creation of a “pretrained novice,” prepared for the operating room with reasonably automatic basic psychomotor and visual-spatial laparoscopic skills [6, 10], research examining indices related to these skills is clearly important.\nThe performance results are in accord with other recent studies and provide support for the discriminatory ability of the two-handed manoeuvres task [2, 12]: experts were faster in completing the task and had more efficient tool movements than novices (Table 1). While it is often difficult to compare data between studies directly because of differences in procedures and measures reported [7], these results do add to the growing literature base supporting the evidence for validity of some of the LAP Mentor tasks [2, 10–14].\nThe feedback provided by the LAP Mentor software therefore allows the performance advantage of experts to be characterised and criterion scores for trainee performance to be set [2]. However, it is difficult to get an accurate conceptualisation of the perceptual processes underlying task completion without examining eye movements and indices related to eye-hand coordination. 
Our basic measure of gaze strategy (where the operators looked during task completion) revealed that experts generally used a target-focused gaze strategy during all three phases of the task, seldom needing to focus on the tool position. In comparison, novices adopted a switching strategy, focusing equally on both target and tool locations [19, 21] (Fig. 3). What does this reveal about differences in the perceptual strategies of both levels of operator?\nSkilled psychomotor behaviour involves the ability to predict the consequences of one’s actions and implement mapping rules relating motor and sensory signals [23]. Furthermore, eye movements and the gaze system that controls them play a key role in planning and controlling such precision motor actions [24]. Research examining skilful manual actions (e.g., pointing, grasping) has revealed that goal-directed hand movements are externally driven by target position and do not require visual feedback from the moving hand [25, 26]. Peripheral vision appears to provide sufficient information to refine hand-movement control in this familiar and “natural” environment. This consistent finding can be summarised by the phrase “Keep your eyes on the prize!”\nHowever, the laparoscopic environment is not natural, raising issues of depth perception, elongated tool use, the fulcrum effect, and limited degrees of freedom. Learning therefore involves the adaptation of previously acquired basic sensorimotor rules for manual reaching and grasping [27]. Previous research examining the learning of novel psychomotor tasks has demonstrated the important role of foveal vision and gaze shifts in learning novel mappings between hand motor commands and desirable sensory outcomes [23]. The results we have presented here for laparoscopic surgery reveal similar perceptual processes: Novices switch their attention between targets and tools as they develop the novel mappings required to successfully perform, whereas experienced operators no longer need such exploration and rely on the efficient, target-focused strategy used for less complex environments.\nThe results for the subcomponent of the task (picking up balls) revealed that it is not only the location of the fixation that is important, but also its timing. Approximately half of the experienced operators’ fixations were not to the target (Fig. 3); however, they utilised a target-focused gaze when it was most needed (the quiet eye, QE). The experienced operators’ QE periods were nearly twice that of the novice operators (Table 2). This steady gaze period appears to have helped guide these precision grasping movements, culminating in fewer unsuccessful attempts (Table 2). These results are therefore supportive of those examining proficiency differences in QE and performance in other motor tasks [22] and suggest that such measures of psychomotor control require further attention in the laparoscopic environment.\nThe implications of the current research findings extend beyond helping to clarify the processes underlying the skill advantage of expert laparoscopic operators, implying implicitly that these eye movement patterns must be learned and are made more efficient and effective through practice [23]. Given the importance of gaze information in planning and controlling psychomotor skills [24], research attention might profitably be applied to having novices replicate the gaze patterns and psychomotor control of experienced laparoscopic performers [17, 19]. 
In the sports literature, for example, quiet eye training programmes have successfully expedited novice performance beyond that achieved by providing technical instructions related to movement parameters [28, 29].\nWe suggest that this advantage may be due to the benefits of learning the skills in an implicit manner, with little conscious awareness of how the skill is executed [30]. By focusing attention externally on relevant targets via gaze control rather than on the movements of the instruments, explicit rules are less likely to be accrued during learning. Masters et al. [31] previously demonstrated that by guiding novices with target information related to suture points, a suturing task could be learned in an implicit fashion. This may be important for subsequent operative performance as the accrual of explicit rules during learning is implicated in skill breakdown under conditions of stress, multitasking, and fatigue [31].\nTo conclude, the current study has further illuminated how surgeons utilize visual perceptual information to plan and control tool movements in a virtual reality laparoscopic environment. Performance results supported the evidence for validity of the two-handed manoeuvres task from the LAP Mentor basic skills training module. Results also supported previous findings from other domains that have revealed differences in the gaze strategies and psychomotor control (quiet eye) of learners and more experienced performers [20, 23]. Given recent calls for skills training programmes to be based on theoretical frameworks [2, 5, 8], future research should seek to test the utility of gaze training programmes to expedite basic surgical skill learning [17]." ]
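As a rough illustration of the analysis plan described in the data-analysis paragraph above (independent-groups t tests on the performance and QE measures, and Bonferroni-adjusted follow-up t tests for the group × location interaction), the following sketch uses SciPy and pandas rather than the SPSS 15.0 package actually used. The data-frame layout and column names are assumptions, and the omnibus 2 × 2 mixed ANOVA itself is only noted in a comment.

```python
# Minimal sketch of the group comparisons described above, assuming a long-format
# pandas DataFrame with one row per participant x gaze location. Column names
# ("group", "subject", "location", "pct_time") and the per-participant measure
# columns are illustrative assumptions; the original analyses were run in SPSS.
import pandas as pd
from scipy import stats


def independent_t(df, column):
    """Expert vs. novice independent-groups t test on a per-participant measure."""
    nov = df.loc[df.group == "novice", column]
    exp = df.loc[df.group == "experienced", column]
    return stats.ttest_ind(exp, nov)


def interaction_followups(gaze_long, alpha=0.05):
    """Bonferroni-adjusted follow-up t tests for a group x location interaction:
    between-group tests at each location, within-group tests across locations."""
    tests = {}
    for loc in ("target", "tool"):             # between-group, per location
        sub = gaze_long[gaze_long.location == loc]
        tests[f"novice vs experienced @ {loc}"] = stats.ttest_ind(
            sub.loc[sub.group == "experienced", "pct_time"],
            sub.loc[sub.group == "novice", "pct_time"])
    for grp in ("novice", "experienced"):      # within-group, target vs tool
        wide = gaze_long[gaze_long.group == grp].pivot(
            index="subject", columns="location", values="pct_time")
        tests[f"target vs tool @ {grp}"] = stats.ttest_rel(wide["target"], wide["tool"])
    adjusted_alpha = alpha / len(tests)        # Bonferroni correction
    return {name: (res.statistic, res.pvalue, res.pvalue < adjusted_alpha)
            for name, res in tests.items()}

# The omnibus 2 x 2 mixed ANOVA itself would need a dedicated routine
# (e.g., a mixed-model or repeated-measures package); only the follow-up
# contrasts described in the text are sketched here.
```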
[ "materials|methods", null, null, null, null, null, null, null, "results", null, null, null, null, "discussion" ]
[ "Psychomotor control", "Eye-hand coordination", "Quiet eye", "Gaze strategy", "Perception" ]
Repair of parastomal hernias with biologic grafts: a systematic review.
21360207
Biologic grafts are increasingly used instead of synthetic mesh for parastomal hernia repair due to concerns of synthetic mesh-related complications. This systematic review was designed to evaluate the use of these collagen-based scaffolds for the repair of parastomal hernias.
BACKGROUND
Studies were retrieved after searching the electronic databases MEDLINE, EMBASE and Cochrane CENTRAL. The search terms 'paracolostomy', 'paraileostomy', 'parastomal', 'colostomy', 'ileostomy', 'hernia', 'defect', 'closure', 'repair' and 'reconstruction' were used. Selection of studies and assessment of methodological quality were performed with a modified MINORS index. All reports on repair of parastomal hernias using a collagen-based biologic scaffold to reinforce or bridge the defect were included. Outcomes were recurrence rate, mortality and morbidity.
METHODS
Four retrospective studies with a combined enrolment of 57 patients were included. Recurrence occurred in 15.7% (95% confidence interval [CI] 7.8-25.9) of patients and wound-related complications in 26.2% (95% CI 14.7-39.5). No mortality or graft infections were reported.
RESULTS
The use of reinforcing or bridging biologic grafts during parastomal hernia repair results in acceptable rates of recurrence and complications. However, given the similar rates of recurrence and complications achieved using synthetic mesh in this scenario, the evidence does not support use of biologic grafts.
CONCLUSIONS
[ "Hernia, Abdominal", "Humans", "Laparoscopy", "Plastic Surgery Procedures", "Surgical Stomas", "Tissue Scaffolds", "Transplantation, Heterologous", "Transplantation, Homologous", "Treatment Outcome" ]
3116129
Introduction
Parastomal herniation is a common complication following creation of an ileostomy or colostomy, with observed rates of up to 28% and 48%, respectively.1 Besides risk of incarceration and stenosis of the bowel, parastomal herniation can cause pain, discomfort and an ill-fitting pouching system that in turn may cause leakage and skin excoriation. Needless to say, body image is adversely affected in patients that might already be experiencing social problems associated with the presence of a stoma.2 Surgical treatment modalities available are relocation of the stoma and repair of the defect using either direct suture repair, or bridging or reinforcement with prostheses. Relocation of the stoma does not address tissue weakness secondary to systemic risk factors and, just like direct suture repair, often results in high recurrence rates.3,4 Since the introduction of synthetic mesh to reinforce or bridge the defect, this procedure has been regarded as the best possible care for parastomal herniation, showing lower recurrence rates.1,5 Its prophylactic use at the time of initial stoma creation is now often propagated to prevent future herniation.5,6 At the same time, reservations have arisen with respect to the implantation of synthetic mesh in close proximity to bowel and stoma due to risk of erosion and fistula formation.7 Also, dense adhesions may complicate future abdominal surgery.8 Besides these concerns, there is the universal fear of infection when implanting foreign body material, especially in contaminated fields. Collagen-based biologic grafts have been produced since the 1980s.9 These prostheses consist of an acellular collagen matrix that is slowly degraded and replaced by fibro-collagenous tissue of the host. Their properties depend on the species and type of tissue that the material is extracted from, the processing methods (including decellularisation and sterilisation), and whether or not they are intentionally cross-linked. Biologic grafts used for incisional hernia repair are derived from either human dermis, porcine dermis, porcine small intestinal submucosa, or bovine pericardium. During processing, the materials are made functionally acellular to prevent a foreign body response, while still maintaining their extracellular collagenous structure that allows for the host tissue ingrowth. Sterilisation of the materials by ethylene oxide gas or irradiation aims at making the final product pathogen free. Some products receive additional cross-linking of the collagen matrix to control or reduce the enzymatic degradation of the graft. This should give the host more time to deposit fibro-collagenous tissue and remodel the prosthesis into strong native tissue. Due to their bio-compatibility resulting in rapid vascularisation and migration of host (immune) cells, it is thought that biologic prostheses are less prone to infection than synthetic grafts. Moreover, they are soft and pliable which potentially decreases the risk of discomfort and erosion into the bowel. However, given the high financial costs of biologic grafts, proper evidence of more beneficial outcomes or cost savings in the long run are paramount to support their use. This systematic review aims to evaluate the use of these acellular collagen-based scaffolds for the repair of parastomal hernias, focusing on recurrence and complication rates.
null
null
Results
A flowchart overview of the search is depicted in Fig. 1. The search strategy yielded 333 titles and abstracts. After screening, 317 records were excluded, leaving 16 articles to be retrieved and assessed for eligibility. Six of these were excluded after assessment,12–17 leaving a total of 10 articles that reported on the repair of parastomal hernias with biologic prostheses. After subjecting these to the modified MINORS tool, another six were excluded due to too small sample sizes18–22 and inadequate reporting on surgical technique.23 This left four studies to be included in the systematic review.24–27\nFig. 1 Flowchart of search strategy\n[SUBTITLE] Findings of Systematic Review [SUBSECTION] All included studies were retrospective with a combined enrolment of 57 patients (range 11–20). The definition of a recurrence was not given by any author. Follow-up ranged from 8.1 to 50.2 months, and was done by clinical examination in three25–27 and also by CT imaging in one.26 One study was unclear as to how follow-up was performed.24 No mortality was reported. Study characteristics and outcomes including weighted pooled rates of recurrence and wound-related complications are shown in Table 2. The weighted pooled proportion of recurrences was 15.7% (95% CI 7.8–25.9; Fig. 2). No cases of infected grafts were reported. Araujo et al. only reported on infection (which was absent) and therefore their data were not included in the calculation of wound-related complications. Various surgical techniques were used, including onlay, inlay, and underlay (pre- and intraperitoneal) placement of the biologic graft. Both open and laparoscopic procedures were performed. Biologic grafts used were products derived from human acellular dermis (Alloderm®), bovine pericardium (Peri-Guard®) and porcine small intestinal submucosa (Surgisis®). Characteristics of the biologic grafts used in the included and excluded studies are given in Table 3.\nTable 2. Study characteristics and recurrence rates of studies included in the systematic review (wound-related complicationsb):\nAraujo et al.24 (2005): 13 patients, MINORS 10, Peri-Guard, onlay; wound complications n/a; recurrence 1 (7.7%); follow-up 50.2 months (range n/a)a\nAycock et al.25 (2007): 11 patients, MINORS 9, Alloderm, inlay (n = 8) and onlay (n = 3); wound complications 2 (18.2%); recurrence 3 (27.3%); follow-up 8.1 months (1–21)\nTaner et al.26 (2009): 13 patients, MINORS 9, Alloderm, under plus onlay sandwich; wound complications 5 (38.5%); recurrence 2 (15%); follow-up 9 months (4–16)\nEllis27 (2010): 20 patients, MINORS 12, Surgisis, intraperitoneal underlay (Sugarbaker); wound complications 4 (20.0%); recurrence 2 (10%); follow-up 18 months (6–38)\nWeighted pooled %c (95% CI): wound complications 26.2% (14.7–39.5); recurrence 15.7% (7.8–25.9)\naThis follow-up is that of a larger group of which these patients were part. bComplications: wound infection (3),25,26 seroma formation (6),26,27 incisional separation (2).26 cUsing a fixed-effects (inverse variance) model (a minimal computational sketch of this pooling follows the Results text).\nFig. 2 Weighted pooled proportion (fixed-effects model; Cochran's Q = 1.917, p = 0.5899) of recurrences after parastomal hernia repair using biologic grafts\nTable 3. Characteristics and costs of biologic and synthetic prostheses used for parastomal hernia repair:\nAlloderm: human dermis; no additional cross-linking; refrigeration, rehydration; $35.31 per cm2a\nPermacol: porcine dermis; cross-linked (HMDI); no preparation; $18.97 per cm2\nSurgisis: porcine SIS; no cross-linking; rehydration; $20.00 per cm2\nCollamend: porcine dermis; cross-linked (EDC); rehydration; $18.88 per cm2\nPeri-Guard: bovine pericardium; cross-linked (glutaraldehyde); rehydration; $3.91 per cm2\nVeritas: bovine pericardium; no cross-linking; no preparation; $22.02 per cm2\nPolypropylene/e-PTFE/composite: synthetic; no preparation; $3.65 per cm2\naBased on sheet sizes sufficient for parastomal hernia repair, excluding account discount. Manufacturers and distributors were contacted directly via telephone.\nSIS small intestinal submucosa; HMDI hexamethylene diisocyanate; EDC 1-ethyl-(3-dimethylaminopropyl) carbodiimide hydrochloride; Alloderm LifeCell Corp., Branchburg, NJ, USA; Permacol Tissue Science Laboratories, Aldershot, UK; Surgisis Cook Surgical, Bloomington, IN, USA; Collamend Bard Inc., Warwick, RI, USA; Xenmatrix Brennen Medical Inc., St. Paul, MN, USA; Veritas, Peri-Guard Synovis Surgical Innovations, St. Paul, MN, USA\n[SUBTITLE] Studies Excluded From Systematic Review [SUBSECTION] Six reports on the use of biologic grafts for the repair of parastomal hernias were excluded after subjecting them to the modified MINORS tool, including retrospective studies,20,23 case reports19,21 and case series18,22 (Table 4). Two case reports and two case series described the use of biologic grafts for the repair of parastomal hernia. 
Greenstein and Aldoroty19 reported on a patient with a history of ulcerative colitis and four ileostomy revisions who presented with unremitting obstructive symptoms. An incarcerated parastomal hernia confirmed by CT was repaired using cross-linked porcine dermis (Collamend®) in a retromuscular fashion. The patient regained ileostomy function within a few days and when seen at 18 months was pain free with no evidence of graft infection, hernia recurrence, ileostomy malfunction or obstruction. Lo Menzo et al.21 reported on a patient with a history of abdominoperineal resection for rectal cancer who presented with a three-time recurrent parastomal hernia, for which an expanded polytetrafluoroethylene mesh had been used for the last repair using the keyhole technique. The Sugarbaker technique28 was employed using bovine pericardium (Veritas®). Postoperatively, a seroma developed which resolved spontaneously; at 17-month follow-up there was no evidence of recurrence, and the patient was pain free and satisfied with the cosmetic result. In a case series of three patients, Kish et al.22 reported on the primary repair of parastomal hernia using human acellular dermis (Alloderm) as onlay reinforcement. Two patients were followed for 6 months and 1 year, respectively, and remained hernia free. One patient presented 8 months later with symptoms of intestinal obstruction treated conservatively, and subsequently returned 3 months later with intestinal obstruction and a recurrent parastomal hernia that necessitated an operation for relocation of the stoma and repeat hernia repair. Inan et al.18 reported on two patients, one with a history of proctectomy after severe radiation proctitis presenting with discomfort and obstructive episodes, the other presenting with a symptomatic hernia 18 years after abdominoperineal resection. Both were repaired laparoscopically using cross-linked porcine dermis (Permacol®), and at 9 and 3 months postoperatively there was no evidence of recurrence or mesh-related complications.\nTable 4. Study characteristics and recurrence rates of studies excluded from the systematic review (wound-related complicationsb):\nKish et al.22 (2005): 3 patients, Alloderm, onlay; wound complications n/a; recurrence 1 (33.3%); follow-up 6–12 months\nInan18 (2007): 2 patients, Permacol, laparoscopic (method not specified); wound complications n/a; recurrence 0; follow-up 6 months (3–9)\nGreenstein & Aldoroty19 (2008): 1 patient, Collamend, retromuscular/sublay; wound complications 0; recurrence 0; follow-up 18 months\nFranklin et al.20 (2008): 2 patients, Surgisis, intraperitoneal onlay mesh (laparoscopic); wound complications n/a; recurrence 0; follow-up n/a\nLo Menzo et al.21 (2008): 1 patient, Veritas, intraperitoneal (laparoscopic Sugarbaker); wound complications 1 (100%); recurrence 0; follow-up 17 months\nLoganathan et al.23 (2010): 3 patients, Permacol, technique n/a; wound complications 2 (66%); recurrence 1 (33%); follow-up 12 months (3–62)a\naThis follow-up is that of a larger group of which these patients were part. bComplications: seroma formation (1),21 ischaemic ileostomy and subsequent fistula (1),23 fistula (1)23\nTwo retrospective studies on the use of cross-linked porcine dermis (Permacol) for various types of hernia repair in complex, infected or potentially contaminated settings included six patients undergoing parastomal hernia repair. Of a total of 133 procedures, Franklin et al.20 repaired parastomal hernia using intraperitoneal onlay mesh in two patients, with no recurrences; follow-up ranged from 1 to 78 months and was performed by clinical examination. Loganathan et al.23 reported on repair of four parastomal hernias, one of which underwent reversal of the colostomy at the time of the hernia repair. Of the other three patients, one who had had six previous attempts at hernia repair experienced a recurrence. This patient developed an ischaemic end ileostomy which subsequently developed a localised perforation that manifested as a fistula. Another patient also developed a fistula. Cross-linked porcine dermis (Permacol) was placed as inlay or onlay. Median follow-up of the complete series was 377 days (range 85–1,905 days), performed by clinical examination.
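The pooled estimates reported above (recurrence 15.7%, 95% CI 7.8–25.9; Cochran's Q = 1.917) were obtained in StatsDirect using a fixed-effects (inverse variance) model. The sketch below is a minimal re-implementation of that kind of pooling using a Freeman-Tukey double-arcsine transformation, a common choice for single-arm proportions; because the exact transformation and back-transformation in StatsDirect may differ, the figures it produces are close to, but not necessarily identical with, the published values.

```python
# Minimal sketch of a fixed-effects (inverse-variance) pooled proportion with a
# 95% CI and Cochran's Q, using a Freeman-Tukey double-arcsine transformation.
# Illustrative re-implementation only; the review used StatsDirect, so exact
# figures may differ slightly from the published 15.7% (7.8-25.9), Q = 1.917.
import math


def pooled_proportion(studies, z=1.96):
    """studies: list of (events, total) tuples. Returns (estimate, (lo, hi), Q)."""
    ts, ws = [], []
    for r, n in studies:
        # half double-arcsine transform; approximate variance 1 / (4n + 2)
        t = 0.5 * (math.asin(math.sqrt(r / (n + 1))) +
                   math.asin(math.sqrt((r + 1) / (n + 1))))
        ts.append(t)
        ws.append(4 * n + 2)                       # inverse-variance weight
    pooled_t = sum(w * t for w, t in zip(ws, ts)) / sum(ws)
    se = math.sqrt(1.0 / sum(ws))
    q = sum(w * (t - pooled_t) ** 2 for w, t in zip(ws, ts))   # Cochran's Q
    back = lambda t: math.sin(max(t, 0.0)) ** 2               # simple back-transform
    ci = (back(pooled_t - z * se), back(pooled_t + z * se))
    return back(pooled_t), ci, q


# Recurrences / patients for the four included studies (Table 2):
recurrences = [(1, 13), (3, 11), (2, 13), (2, 20)]
print(pooled_proportion(recurrences))
# roughly (0.157, (0.076, 0.260), 1.84); published: 15.7% (7.8-25.9), Q = 1.917
```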
null
null
[ "Search Methods for Study Identification", "Assessment of Study Quality", "Data Extraction", "Findings of Systematic Review", "Studies Excluded From Systematic Review" ]
[ "Studies were identified using the electronic databases MEDLINE (including in-process and other non-indexed citations, 1950–present), EMBASE (1980–present) and the Cochrane Central Register of Controlled Trials. Search terms used were: ‘parastomal’, ‘paracolostomy’, ‘paraileostomy’, ‘stoma’, ‘hernia’, ‘defect’ and ‘repair’. Terms were searched for as free text and where applicable were also mapped to MeSH terms. Full-text articles retrieved for evaluation were scanned for other relevant references. No limits were set on language or publication status. Titles and abstracts were screened for eligibility and full-text articles were retrieved. The last search was performed on 13 September 2010. All reports on repair of parastomal hernias using an acellular collagen-based biologic scaffold as the sole material to reinforce or bridge the defect were included. All other types of repair were excluded.", "All studies selected were subjected to a modified version of the Methodological Index for Non-Randomised Studies (MINORS) tool to evaluate their methodological quality (Table 1). This instrument was constructed and validated for appraisal of non-randomised trials in surgery.10 Studies were scored independently by two authors (NJS, RPB). This modified version contains six items with a maximum score of two on each, yielding a maximum index of 12. Studies with a total score of less than nine, or no score on item 2, 5 or 6, were excluded from the systematic review (a minimal sketch of this scoring rule is given after these section texts). Disagreement was resolved by discussion and consensus between authors. Also, the diagnostic modality for the primary outcome was determined for every study.\nTable 1. Modified Methodological Index of Non-Randomised Studies (MINORS); each item scored 0, 1 or 2, maximum score 12:\nItem 1, a clearly stated aim: not reported 0; partially reported, no clear aim 1; clear aim 2\nItem 2, minimum of 5 included patients: no 0; yes 2\nItem 3, inclusion of consecutive patients: not reported 0; patients in a certain time period 1; consecutive patients plus characteristics 2\nItem 4, type of stoma specified: not reported 0; reported 2\nItem 5, surgical technique reported: not reported 0; incomplete 1; reported clearly, appropriate to aim 2\nItem 6, report of end points: not reported 0; recurrences only 1; recurrences and postoperative complications 2", "The primary outcome was the rate of parastomal hernia recurrence observed, as defined by the respective authors. Study characteristics (year of publication, no. of patients, surgical technique, follow-up), perioperative (30-day) mortality, and the rates and types of wound-related complications were also noted. The total number of wound-related complications was calculated by adding up all relevant complications, including only the studies with adequate reporting. Weighted pooled proportions with their respective 95% confidence intervals (CI) following the fixed-effects (inverse variance) model were determined for recurrences and wound-related complications using StatsDirect® statistical software.11\n", "All included studies were retrospective with a combined enrolment of 57 patients (range 11–20). The definition of a recurrence was not given by any author. Follow-up ranged from 8.1 to 50.2 months, and was done by clinical examination in three25–27 and also by CT imaging in one.26 One study was unclear as to how follow-up was performed.24 No mortality was reported. Study characteristics and outcomes including weighted pooled rates of recurrence and wound-related complications are shown in Table 2. 
The weighted pooled proportion of recurrences was 15.7% (95% CI 7.8–25.9; Fig. 2). No cases of infected grafts were reported. Araujo et al. only reported on infection (which was absent) and therefore their data were not included in the calculation of wound-related complications. Various surgical techniques were used, including onlay, inlay, and underlay (pre- and intraperitoneal) placement of the biologic graft. Both open and laparoscopic procedures were performed. Biologic grafts used were products derived from human acellular dermis (Alloderm®), bovine pericardium (Peri-Guard®) and porcine small intestinal submucosa (Surgisis®). Characteristics of the biologic grafts used in the included and excluded studies are given in Table 3.\nTable 2Study characteristics and recurrence rates of studies included in systematic reviewReferenceYearNo. of patientsMINORS indexMaterial usedType of repairNo. of wound complications (%) b\nRecurrence (%)Months follow-up (range)Araujo et al.24\n20051310Peri-GuardOnlayn/a1 (7.7)50.2 (n/a)a\nAycock et al.25\n2007119AllodermInlay (n = 8) and onlay (n = 3)2 (18.2)3 (27.3)8.1 (1–21)Taner et al.26\n2009139AllodermUnder + onlay sandwich5 (38.5)2 (15)9 (4–16)Ellis27\n20102012SurgisisIntraperitoneal underlay (Sugarbaker)4 (20.0)2 (10)18 (6–38)Weighted pooled%c (95% CI)–––––26.2% (14.7–39.5)15.7% (7.8–25.9)–\naThis follow-up is that of a larger group of which these patients were part of\nbComplications: wound infection (3),5,26 seroma formation (6),26,27 incisional separation (2)26\n\ncUsing a fixed-effects (inverse variance) model\nFig. 2Weighted pooled proportion (fixed-effects model; Cochran’s Q = 1.917, p = 0.5899) of recurrences after parastomal hernia repair using biologic grafts\nTable 3Characteristics and costs of biologic and synthetic prostheses used for parastomal hernia repairMaterialSourceAdditional cross-linkingPreparationCosts per cm2a\nAllodermHuman dermisNoneRefrigeration, rehydration$ 35.31PermacolPorcine dermisYes; HMDINone$ 18.97SurgisisPorcine SISNoneRehydration$ 20.00CollamendPorcine dermisYes; EDCRehydration$ 18.88Peri-guardBovine pericardiumYes; gluteraldehydeRehydration$ 3.91VeritasBovine pericardiumNoneNone$ 22.02Polypropylene/e-PTFE/Composite–None$ 3.65\naBased on sheet sizes sufficient for parastomal hernia repair, excluding account discount. Manufacturers and distributors were contacted directly via telephone\nSIS small intestinal submucosa; HMDI hexamethylene diisocyanate; EDC 1-ethyl-(3-dimethylaminopropyl) carbodiimide hydrochloride; Alloderm LifeCell Corp., Branchburg, NJ, USA; Permacol Tissue Science Laboratories, Aldershot, UK; Surgisis Cook Surgical, Bloomington, IN, USA; Collamend Bard Inc., Warwick, RI, USA; Xenmatrix Brennen Medical Inc., St. Paul, MN, USA; Veritas, Peri-Guard Synovis Surgical Innovations, St. Paul, MN, USA\n\nStudy characteristics and recurrence rates of studies included in systematic review\n\naThis follow-up is that of a larger group of which these patients were part of\n\nbComplications: wound infection (3),5,26 seroma formation (6),26,27 incisional separation (2)26\n\n\ncUsing a fixed-effects (inverse variance) model\nWeighted pooled proportion (fixed-effects model; Cochran’s Q = 1.917, p = 0.5899) of recurrences after parastomal hernia repair using biologic grafts\nCharacteristics and costs of biologic and synthetic prostheses used for parastomal hernia repair\n\naBased on sheet sizes sufficient for parastomal hernia repair, excluding account discount. 
Manufacturers and distributors were contacted directly via telephone\n\nSIS small intestinal submucosa; HMDI hexamethylene diisocyanate; EDC 1-ethyl-(3-dimethylaminopropyl) carbodiimide hydrochloride; Alloderm LifeCell Corp., Branchburg, NJ, USA; Permacol Tissue Science Laboratories, Aldershot, UK; Surgisis Cook Surgical, Bloomington, IN, USA; Collamend Bard Inc., Warwick, RI, USA; Xenmatrix Brennen Medical Inc., St. Paul, MN, USA; Veritas, Peri-Guard Synovis Surgical Innovations, St. Paul, MN, USA", "Six reports on the use of biologic grafts for the repair of parastomal hernias were excluded after subjecting them to the modified MINORS tool, including retrospective studies,20,23 case reports19,21 and case series18,22 (Table 4). Two case reports and two case series described the use of biologic grafts for the repair of parastomal hernia. Greenstein and Aldoroty19 reported on a patient with a history of ulcerative colitis and four ileostomy revisions that presented with unremitting obstructive symptoms. An incarcerated parastomal hernia confirmed by CT was repaired using cross-linked porcine dermis (Collamend®) in a retromuscular fashion. Patient regained ileostomy function within a few days and when seen at 18 months was pain free with no evidence of graft infection, hernia recurrence, ileostomy malfunction or obstruction. Lo Menzo et al.21 reported on a patient with a history of abdominoperineal resection for rectal cancer that presented with a three-time recurrent parastomal hernia, for which an expanded polytetrafluoroethylene mesh was used for the last repair using the keyhole technique. The Sugarbaker technique28 was employed using bovine pericardium (Veritas®). Postoperatively, a seroma developed which resolved spontaneously; and at 17-month follow-up, there was no evidence of recurrence, the patient was pain free and satisfied with cosmetic results. In a case series of three patients, Kish et al.22 reported on the primary repair of parastomal hernia using human acellular dermis (Alloderm) as onlay reinforcement. Two patients were followed for 6 months and 1 year, respectively, and remained hernia free. One patient presented 8 months later with symptoms of intestinal obstruction treated conservatively. The patient subsequently returned 3 months later with intestinal obstruction and recurrent parastomal hernia that necessitated an operation for relocation of the stoma and repeat hernia repair. Inan et al.18 reported on two patients, one with a history of proctectomy after severe radiation proctitis presenting with discomfort and obstructive episodes, the other presenting with symptomatic hernia 18 years after abdominoperineal resection. Both were repaired laparoscopically using cross-linked porcine dermis (Permacol®), and at 9 and 3 months postoperatively there was no evidence of recurrence or mesh-related complications.\nTable 4Study characteristics and recurrence rates of studies excluded from systematic reviewReferenceYearNo. of patientsMaterial usedType of repairNo. 
of wound complications (%)b\nRecurrence (%)Follow-up (range)Kish et al.22\n20053AllodermOnlayn/a1 (33.3)(6–12)Inan18\n20072PermacolLaparoscopic (method not specified)n/a0 (0)6 (3–9)Greenstein & Aldoroty19\n20081CollamendRetromuscular/sublay0 (0)0 (0)18Franklin et al.20\n20082SurgisisIntraperitoneal onlay mesh (Laparoscopic)n/a0 (0)n/aLo Menzo et al.21\n20081VeritasIntraperitoneal (Laparoscopic Sugarbaker)1 (100)0 (0)17Loganathan et al.23\n20103Permacoln/a2 (66)1 (33)12 (3–62)a\n\naThis follow-up is that of a larger group of which these patients were part of\nbComplications: seroma formation (1),21 ischaemic ileostomy and subsequent fistula (1),23 fistula (1)23\n\n\nStudy characteristics and recurrence rates of studies excluded from systematic review\n\naThis follow-up is that of a larger group of which these patients were part of\n\nbComplications: seroma formation (1),21 ischaemic ileostomy and subsequent fistula (1),23 fistula (1)23\n\nTwo retrospective studies on the use of cross-linked porcine dermis (Permacol) for various types of hernia repair in complex, infected or potentially contaminated settings, included six patients undergoing parastomal hernia repair. Of the total of 133 procedures, Franklin et al.23 repaired parastomal hernia using intraperitoneal onlay mesh in two patients, showing no recurrences.20 Follow-up ranged 1–78 months using clinical examination. Loganathan et al.23 reported on repair of four parastomal hernias, one of which underwent reversal of the colostomy at the time of the hernia repair. Of the other three patients, one that had six previous attempts at hernia repair experienced a recurrence. This patient developed an ischaemic end ileostomy which subsequently developed a localised perforation which manifested as a fistula formation. Another patient also developed a fistula. Cross-linked porcine dermis (Permacol) was placed as inlay or onlay. Median follow-up of the complete series was 377 days (range 85–1,905 days) performed by clinical examination." ]
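Related to the modified MINORS instrument described in these methods (six items scored 0–2, maximum 12; studies excluded if the total is below nine or if item 2, 5 or 6 scores zero), the following is a minimal sketch of that screening rule. The function name and input format are illustrative assumptions.

```python
# Minimal sketch of the modified MINORS screening rule described above:
# six items scored 0-2 (maximum 12); studies scoring < 9 overall, or 0 on
# item 2 (>= 5 patients), item 5 (surgical technique) or item 6 (end points),
# are excluded. Item keys and the return format are illustrative.
MANDATORY_ITEMS = (2, 5, 6)   # 1-based item numbers that must score > 0


def minors_eligible(item_scores, threshold=9):
    """item_scores: sequence of six integers in {0, 1, 2}, items 1..6 in order.
    Returns (eligible, total_score)."""
    if len(item_scores) != 6 or any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("expected six item scores of 0, 1 or 2")
    total = sum(item_scores)
    mandatory_ok = all(item_scores[i - 1] > 0 for i in MANDATORY_ITEMS)
    return total >= threshold and mandatory_ok, total


# Example: scores 2,2,1,2,2,1 give a total of 10 with all mandatory items > 0,
# so the study would be retained for the review.
print(minors_eligible([2, 2, 1, 2, 2, 1]))   # (True, 10)
```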
[ null, null, null, null, null ]
[ "Introduction", "Methods", "Search Methods for Study Identification", "Assessment of Study Quality", "Data Extraction", "Results", "Findings of Systematic Review", "Studies Excluded From Systematic Review", "Discussion" ]
[ "Parastomal herniation is a common complication following creation of an ileostomy or colostomy, with observed rates of up to 28% and 48%, respectively.1 Besides risk of incarceration and stenosis of the bowel, parastomal herniation can cause pain, discomfort and an ill-fitting pouching system that in turn may cause leakage and skin excoriation. Needless to say, body image is adversely affected in patients that might already be experiencing social problems associated with the presence of a stoma.2 Surgical treatment modalities available are relocation of the stoma and repair of the defect using either direct suture repair, or bridging or reinforcement with prostheses. Relocation of the stoma does not address tissue weakness secondary to systemic risk factors and, just like direct suture repair, often results in high recurrence rates.3,4 Since the introduction of synthetic mesh to reinforce or bridge the defect, this procedure has been regarded as the best possible care for parastomal herniation, showing lower recurrence rates.1,5 Its prophylactic use at the time of initial stoma creation is now often propagated to prevent future herniation.5,6 At the same time, reservations have arisen with respect to the implantation of synthetic mesh in close proximity to bowel and stoma due to risk of erosion and fistula formation.7 Also, dense adhesions may complicate future abdominal surgery.8 Besides these concerns, there is the universal fear of infection when implanting foreign body material, especially in contaminated fields.\nCollagen-based biologic grafts have been produced since the 1980s.9 These prostheses consist of an acellular collagen matrix that is slowly degraded and replaced by fibro-collagenous tissue of the host. Their properties depend on the species and type of tissue that the material is extracted from, the processing methods (including decellularisation and sterilisation), and whether or not they are intentionally cross-linked. Biologic grafts used for incisional hernia repair are derived from either human dermis, porcine dermis, porcine small intestinal submucosa, or bovine pericardium. During processing, the materials are made functionally acellular to prevent a foreign body response, while still maintaining their extracellular collagenous structure that allows for the host tissue ingrowth. Sterilisation of the materials by ethylene oxide gas or irradiation aims at making the final product pathogen free. Some products receive additional cross-linking of the collagen matrix to control or reduce the enzymatic degradation of the graft. This should give the host more time to deposit fibro-collagenous tissue and remodel the prosthesis into strong native tissue. Due to their bio-compatibility resulting in rapid vascularisation and migration of host (immune) cells, it is thought that biologic prostheses are less prone to infection than synthetic grafts. Moreover, they are soft and pliable which potentially decreases the risk of discomfort and erosion into the bowel. However, given the high financial costs of biologic grafts, proper evidence of more beneficial outcomes or cost savings in the long run are paramount to support their use. 
This systematic review aims to evaluate the use of these acellular collagen-based scaffolds for the repair of parastomal hernias, focusing on recurrence and complication rates.", "[SUBTITLE] Search Methods for Study Identification [SUBSECTION] Studies were identified using the electronic databases MEDLINE (including in-process and other non-indexed citations, 1950–present), EMBASE (1980–present) and the Cochrane Central Register of Controlled Trials. Search terms used were: ‘parastomal’, ‘paracolostomy’, ‘paraileostomy’, ‘stoma’, ‘hernia’, ‘defect’ and ‘repair’. Terms were searched for as free text and where applicable were also mapped to MeSH terms. Full-text articles retrieved for evaluation were scanned for other relevant references. No limits were set on language or publication status. Titles and abstracts were screened for eligibility and full-text articles were retrieved. The last search was performed on 13 September 2010. All reports on repair of parastomal hernias using a acellular collagen-based biologic scaffold as sole material to reinforce or bridge the defect were included. All other types of repair were excluded.\nStudies were identified using the electronic databases MEDLINE (including in-process and other non-indexed citations, 1950–present), EMBASE (1980–present) and the Cochrane Central Register of Controlled Trials. Search terms used were: ‘parastomal’, ‘paracolostomy’, ‘paraileostomy’, ‘stoma’, ‘hernia’, ‘defect’ and ‘repair’. Terms were searched for as free text and where applicable were also mapped to MeSH terms. Full-text articles retrieved for evaluation were scanned for other relevant references. No limits were set on language or publication status. Titles and abstracts were screened for eligibility and full-text articles were retrieved. The last search was performed on 13 September 2010. All reports on repair of parastomal hernias using a acellular collagen-based biologic scaffold as sole material to reinforce or bridge the defect were included. All other types of repair were excluded.\n[SUBTITLE] Assessment of Study Quality [SUBSECTION] All studies selected were subjected to a modified version of the Methodological Index for Non-Randomised Studies (MINORS) tool to evaluate their methodological quality (Table 1). This instrument was constructed and validated for appraisal of non-randomised trials in surgery.10 Studies were scored independently by two authors (NJS, RPB). This modified version contains six items with a maximum score of two on each, yielding a maximum index of 12. Studies with a total score less than nine, or no score on item 2, 5 or 6 were excluded from systematic review. Disagreement was resolved by discussion and consensus between authors. 
Also, the diagnostic modality for the primary outcome was determined for every study.\nTable 1Modified Methodological Index of Non-Randomised Studies (MINORS)ItemCriteriaOptionScore1A clearly stated aimNot reported0Partially reported, no clear aim1Clear aim22Minimum of 5 included patientsNo0Yes23Inclusion of consecutive patientsNot reported0Patients in a certain time period1Consecutive patients + characteristics24Type of stoma specifiedNot reported0Reported25Surgical technique reportedNot reported0Incomplete1Reported clearly, appropriate to aim26Report of end pointsNot reported0Recurrences only1Recurrences and postoperative complications2Maximum score:12\n\nModified Methodological Index of Non-Randomised Studies (MINORS)\nAll studies selected were subjected to a modified version of the Methodological Index for Non-Randomised Studies (MINORS) tool to evaluate their methodological quality (Table 1). This instrument was constructed and validated for appraisal of non-randomised trials in surgery.10 Studies were scored independently by two authors (NJS, RPB). This modified version contains six items with a maximum score of two on each, yielding a maximum index of 12. Studies with a total score less than nine, or no score on item 2, 5 or 6 were excluded from systematic review. Disagreement was resolved by discussion and consensus between authors. Also, the diagnostic modality for the primary outcome was determined for every study.\nTable 1Modified Methodological Index of Non-Randomised Studies (MINORS)ItemCriteriaOptionScore1A clearly stated aimNot reported0Partially reported, no clear aim1Clear aim22Minimum of 5 included patientsNo0Yes23Inclusion of consecutive patientsNot reported0Patients in a certain time period1Consecutive patients + characteristics24Type of stoma specifiedNot reported0Reported25Surgical technique reportedNot reported0Incomplete1Reported clearly, appropriate to aim26Report of end pointsNot reported0Recurrences only1Recurrences and postoperative complications2Maximum score:12\n\nModified Methodological Index of Non-Randomised Studies (MINORS)\n[SUBTITLE] Data Extraction [SUBSECTION] The primary outcome was the rate of parastomal hernia recurrence observed, as defined by the respective authors. Study characteristics (year of publication, no. of patients, surgical technique, follow-up), perioperative (30 days) mortality and rates and type of wound-related complications were also noted. Total amount of wound-related complications were calculated by adding up all relevant complications, including only the studies with adequate reporting. Weighted pooled proportions with their respective 95% confidence intervals (CI) following the fixed-effects (inverse variance) model were determined for recurrences and wound-related complications using StatsDirect® statistical software.11\n\nThe primary outcome was the rate of parastomal hernia recurrence observed, as defined by the respective authors. Study characteristics (year of publication, no. of patients, surgical technique, follow-up), perioperative (30 days) mortality and rates and type of wound-related complications were also noted. Total amount of wound-related complications were calculated by adding up all relevant complications, including only the studies with adequate reporting. 
"A flowchart overview of the search is depicted in Fig. 1. The search strategy yielded 333 titles and abstracts. After screening, 317 records were excluded leaving 16 articles to be retrieved and assessed for eligibility.
Six of these were excluded after assessment,12–17 leaving a total of 10 articles that reported on the repair of parastomal hernias with biologic prostheses. After subjecting these to the modified MINORS tool, another six were excluded because of sample sizes that were too small18–22 or inadequate reporting of the surgical technique.23 This left four studies to be included in the systematic review.24–27

Fig. 1 Flowchart of search strategy

[SUBTITLE] Findings of Systematic Review [SUBSECTION] All included studies were retrospective with a combined enrolment of 57 patients (range 11–20). The definition of a recurrence was not given by any author. Follow-up ranged from 8.1 to 50.2 months and was done by clinical examination in three25–27 and also by CT imaging in one.26 One study was unclear as to how follow-up was performed.24 No mortality was reported. Study characteristics and outcomes, including weighted pooled rates of recurrence and wound-related complications, are shown in Table 2. The weighted pooled proportion of recurrences was 15.7% (95% CI 7.8–25.9; Fig. 2). No cases of infected grafts were reported. Araujo et al. only reported on infection (which was absent) and therefore their data were not included in the calculation of wound-related complications. Various surgical techniques were used, including onlay, inlay, and underlay (pre- and intraperitoneal) placement of the biologic graft. Both open and laparoscopic procedures were performed. Biologic grafts used were products derived from human acellular dermis (Alloderm®), bovine pericardium (Peri-Guard®) and porcine small intestinal submucosa (Surgisis®). Characteristics of the biologic grafts used in the included and excluded studies are given in Table 3.

Table 2 Study characteristics and recurrence rates of studies included in systematic review
| Reference | Year | No. of patients | MINORS index | Material used | Type of repair | No. of wound complications (%)b | Recurrence (%) | Months follow-up (range) |
| Araujo et al.24 | 2005 | 13 | 10 | Peri-Guard | Onlay | n/a | 1 (7.7) | 50.2 (n/a)a |
| Aycock et al.25 | 2007 | 11 | 9 | Alloderm | Inlay (n = 8) and onlay (n = 3) | 2 (18.2) | 3 (27.3) | 8.1 (1–21) |
| Taner et al.26 | 2009 | 13 | 9 | Alloderm | Under + onlay sandwich | 5 (38.5) | 2 (15) | 9 (4–16) |
| Ellis27 | 2010 | 20 | 12 | Surgisis | Intraperitoneal underlay (Sugarbaker) | 4 (20.0) | 2 (10) | 18 (6–38) |
| Weighted pooled %c (95% CI) | – | – | – | – | – | 26.2% (14.7–39.5) | 15.7% (7.8–25.9) | – |
a This follow-up is that of a larger group of which these patients were part
b Complications: wound infection (3),5,26 seroma formation (6),26,27 incisional separation (2)26
c Using a fixed-effects (inverse variance) model

Fig. 2 Weighted pooled proportion (fixed-effects model; Cochran’s Q = 1.917, p = 0.5899) of recurrences after parastomal hernia repair using biologic grafts

Table 3 Characteristics and costs of biologic and synthetic prostheses used for parastomal hernia repair
| Material | Source | Additional cross-linking | Preparation | Costs per cm2 a |
| Alloderm | Human dermis | None | Refrigeration, rehydration | $35.31 |
| Permacol | Porcine dermis | Yes; HMDI | None | $18.97 |
| Surgisis | Porcine SIS | None | Rehydration | $20.00 |
| Collamend | Porcine dermis | Yes; EDC | Rehydration | $18.88 |
| Peri-Guard | Bovine pericardium | Yes; glutaraldehyde | Rehydration | $3.91 |
| Veritas | Bovine pericardium | None | None | $22.02 |
| Polypropylene/e-PTFE/Composite | – | – | None | $3.65 |
a Based on sheet sizes sufficient for parastomal hernia repair, excluding account discounts. Manufacturers and distributors were contacted directly via telephone
SIS small intestinal submucosa; HMDI hexamethylene diisocyanate; EDC 1-ethyl-(3-dimethylaminopropyl) carbodiimide hydrochloride; Alloderm LifeCell Corp., Branchburg, NJ, USA; Permacol Tissue Science Laboratories, Aldershot, UK; Surgisis Cook Surgical, Bloomington, IN, USA; Collamend Bard Inc., Warwick, RI, USA; Xenmatrix Brennen Medical Inc., St. Paul, MN, USA; Veritas, Peri-Guard Synovis Surgical Innovations, St. Paul, MN, USA

[SUBTITLE] Studies Excluded From Systematic Review [SUBSECTION] Six reports on the use of biologic grafts for the repair of parastomal hernias were excluded after subjecting them to the modified MINORS tool, including retrospective studies,20,23 case reports19,21 and case series18,22 (Table 4). Two case reports and two case series described the use of biologic grafts for the repair of parastomal hernia.
Greenstein and Aldoroty19 reported on a patient with a history of ulcerative colitis and four ileostomy revisions who presented with unremitting obstructive symptoms. An incarcerated parastomal hernia confirmed by CT was repaired using cross-linked porcine dermis (Collamend®) in a retromuscular fashion. The patient regained ileostomy function within a few days and, when seen at 18 months, was pain free with no evidence of graft infection, hernia recurrence, ileostomy malfunction or obstruction. Lo Menzo et al.21 reported on a patient with a history of abdominoperineal resection for rectal cancer who presented with a three-time recurrent parastomal hernia, for which an expanded polytetrafluoroethylene mesh had been used at the last repair using the keyhole technique. The Sugarbaker technique28 was employed using bovine pericardium (Veritas®). Postoperatively, a seroma developed which resolved spontaneously; at 17-month follow-up there was no evidence of recurrence, and the patient was pain free and satisfied with the cosmetic result. In a case series of three patients, Kish et al.22 reported on the primary repair of parastomal hernia using human acellular dermis (Alloderm) as onlay reinforcement. Two patients were followed for 6 months and 1 year, respectively, and remained hernia free. One patient presented 8 months later with symptoms of intestinal obstruction treated conservatively. The patient subsequently returned 3 months later with intestinal obstruction and a recurrent parastomal hernia that necessitated an operation for relocation of the stoma and repeat hernia repair. Inan et al.18 reported on two patients, one with a history of proctectomy after severe radiation proctitis presenting with discomfort and obstructive episodes, the other presenting with a symptomatic hernia 18 years after abdominoperineal resection. Both were repaired laparoscopically using cross-linked porcine dermis (Permacol®), and at 9 and 3 months postoperatively there was no evidence of recurrence or mesh-related complications.

Table 4 Study characteristics and recurrence rates of studies excluded from systematic review
| Reference | Year | No. of patients | Material used | Type of repair | No. of wound complications (%)b | Recurrence (%) | Follow-up (range) |
| Kish et al.22 | 2005 | 3 | Alloderm | Onlay | n/a | 1 (33.3) | (6–12) |
| Inan18 | 2007 | 2 | Permacol | Laparoscopic (method not specified) | n/a | 0 (0) | 6 (3–9) |
| Greenstein & Aldoroty19 | 2008 | 1 | Collamend | Retromuscular/sublay | 0 (0) | 0 (0) | 18 |
| Franklin et al.20 | 2008 | 2 | Surgisis | Intraperitoneal onlay mesh (laparoscopic) | n/a | 0 (0) | n/a |
| Lo Menzo et al.21 | 2008 | 1 | Veritas | Intraperitoneal (laparoscopic Sugarbaker) | 1 (100) | 0 (0) | 17 |
| Loganathan et al.23 | 2010 | 3 | Permacol | n/a | 2 (66) | 1 (33) | 12 (3–62)a |
a This follow-up is that of a larger group of which these patients were part
b Complications: seroma formation (1),21 ischaemic ileostomy and subsequent fistula (1),23 fistula (1)23

Two retrospective studies on the use of cross-linked porcine dermis (Permacol) for various types of hernia repair in complex, infected or potentially contaminated settings included six patients undergoing parastomal hernia repair. Of a total of 133 procedures, Franklin et al.20 repaired parastomal hernias using intraperitoneal onlay mesh in two patients, with no recurrences. Follow-up ranged from 1 to 78 months and was performed by clinical examination. Loganathan et al.23 reported on the repair of four parastomal hernias, one of which underwent reversal of the colostomy at the time of the hernia repair. Of the other three patients, one who had had six previous attempts at hernia repair experienced a recurrence. This patient developed an ischaemic end ileostomy which subsequently developed a localised perforation that manifested as a fistula. Another patient also developed a fistula. The cross-linked porcine dermis (Permacol) was placed as inlay or onlay. Median follow-up of the complete series was 377 days (range 85–1,905 days), performed by clinical examination.", 
"The current systematic review evaluated the use of biologic grafts for parastomal hernia repair and found acceptable recurrence rates, with a pooled rate of 15.7% (95% CI 7.8–25.9). Wound-related complications were reported in 26.2% (95% CI 14.7–39.5). Given the current evidence, biologic grafts do not provide a superior alternative to other surgical options.

In their review of parastomal hernia from 2003, Carne et al.1 shed some light on the outcomes of different techniques of parastomal hernia repair. In studies using synthetic meshes (intraperitoneal, preperitoneal and fascial onlay), the overall recurrence rate was 6/77 (7.8%). Infection is uncommon and only infrequently requires removal of the mesh. A search of the literature published since then reveals reherniation occurring in 62/371 (16.7%) patients.29–42 As found by Carne et al., complications were low, with mesh infection reported in 15/460 (3%) of patients. In the current systematic review of parastomal hernia repair using biologic grafts, rates of recurrence ranged from 7.7% to 27.3%, with a weighted pooled average of 15.7% (95% CI 7.8–25.9). Graft infection was zero, and other wound-related complications, including wound infection, occurred in 26.2% (95% CI 14.7–39.5). Thus, these rates are very similar to those found for synthetic mesh. Notably, even the risk of mesh infection appears to be low when a synthetic graft is implanted. Given the current evidence, it cannot be concluded that biologic prostheses are preferable to synthetic mesh for reducing the rates of immediate or long-term complications. Moreover, biologic grafts are very expensive compared to synthetic mesh (Table 3), which further argues against their superiority over synthetic mesh in providing not only effective but also efficient and cost-effective healthcare. With limited financial resources, careful consideration must be given when choosing which materials to use.

It is well established that parastomal hernias can occur after long periods of time. Also, in the long run, the risk of infection may remain higher for non-absorbable synthetic meshes than for degradable biologic grafts because of the prolonged presence of foreign body material. Studies with longer follow-up are therefore imperative to yield more reliable rates of recurrence and late complications for both of these treatment modalities.
The results of this systematic review were limited by typical sources of potential bias, including the lack of uniformity between studies in the definition and reporting of outcomes and patient characteristics.

Given the scarcity of relevant studies, combined with the variety of biologic grafts used, it is impossible to make a direct comparison between the different products or types of material. The same applies to the surgical technique used (i.e. the type of prosthetic placement), which is also of relevance for outcome. With synthetic meshes, average rates of recurrence after sublay mesh (5.7%)34,39 and intraperitoneal mesh (11.1%)32,33 are lower than after onlay mesh (22.8%)29–31 or laparoscopically placed intraperitoneal mesh (16.6%).35–38,40–42 Onlay placement requires extensive dissection of subcutaneous tissue, which predisposes to haematoma and seroma formation and may disrupt skin vascularisation, leading to impaired wound healing. Moreover, because of its anatomical position, intra-abdominal pressure may lead to lateral detachment of the graft, resulting in its higher recurrence rates. On the other hand, sublay and underlay techniques theoretically benefit from intra-abdominal pressure, which may help to keep the graft in place. Concerning complications, sublay placement again theoretically seems the most advantageous of the techniques, resulting in the least contact between mesh and bowel.

Besides its use for the repair of parastomal hernia, there has been much debate about the effectiveness of the prophylactic placement of a reinforcing prosthesis at the time of initial stoma formation. In a recent systematic review of the use of a mesh to prevent parastomal hernia, Tam et al.6 made a strong case for the use of prophylactic mesh at the time of initial stoma formation, showing an overall recurrence rate of 15.4%, compared to 55.2% in patients who received a conventional stoma. Their meta-analysis performed on three randomised controlled trials yielded similar results. Complications were very low and did not differ between the two groups. To date, only one study can be identified that used a biologic graft for this purpose.17 Hammond et al. compared the prophylactic use of cross-linked porcine dermis (Permacol) to conventional stoma formation. After a median follow-up of only 6.5 months, the conventional group had a recurrence rate of 33.3%, while the prophylactic group showed no recurrences. No complications were observed. Given the very low rate of complications associated with prophylactic synthetic mesh placement, there is as yet no support for the use of biologic grafts instead of synthetic ones in this surgical scenario.

As mentioned earlier, when studying rates of hernia recurrence, a properly defined outcome measure, in addition to appropriate follow-up, is essential to create uniform and comparable findings. None of the studies in the current review provided a proper definition of a recurrence. Most studies used clinical examination to detect hernias, and one study also used CT imaging in all patients.26 Here, the two patients who had radiologic evidence of a recurrence continued to be asymptomatic at 385 and 509 days of follow-up, respectively, requiring no revision of their repair. Another study, which was excluded from this review because of the prophylactic placement of a biologic graft, also used CT imaging in all patients to determine hernia occurrence.16 Similarly, the only two occurrences were found on CT scan and were small asymptomatic hernias. If these studies had used only clinical examination, it is conceivable that these asymptomatic patients might not have been found to have a recurrence. Most recently, Gurmu et al. examined the inter-observer reliability of clinical examination of parastomal hernia in three hospitals.43 This appeared to be low, with kappa values ranging between 0.29 and 0.73. The correlation between CT and patient-reported complaints using a colostomy questionnaire was also low, revealing a kappa of 0.45. Even though underestimation of the rates of (minor) parastomal hernias may well be very common, its clinical relevance in asymptomatic and satisfied patients is only manifest in an increased risk of complications due to the hernia, such as incarceration and stenosis of bowel. It is hard to estimate these risks in patients with asymptomatic or small hernias, but given the small number of recurrences and long-term complications in the studies discussed in this review and in the literature, they do not seem to give cause for concern." ]
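The discussion above cites kappa values for inter-observer agreement on clinical examination. As a minimal illustration of how a Cohen's kappa such as those figures is computed, the sketch below uses an invented 2×2 agreement table; the counts are hypothetical and are not data from Gurmu et al.

```python
# Illustrative only: Cohen's kappa from two observers' hernia/no-hernia ratings.
# The counts are invented for demonstration, not data from any cited study.
def cohens_kappa(a, b, c, d):
    """2x2 agreement table: a = both positive, b = obs1+/obs2-, c = obs1-/obs2+, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement from the marginal rating frequencies of each observer
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

print(round(cohens_kappa(a=14, b=6, c=4, d=26), 2))  # about 0.58: moderate agreement on these made-up counts
```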
[ "introduction", "materials|methods", null, null, null, "results", null, null, "discussion" ]
[ "Biologic graft", "Allograft", "Xenograft", "Parastomal hernia" ]
Job resources and matching active coping styles as moderators of the longitudinal relation between job demands and job strain.
21360254
Only a few longitudinal studies have examined whether job resources must be matched to job demands in order to show stress-buffering effects of job resources (the matching hypothesis), and there are no empirical studies that have examined whether matching personal characteristics moderate the stress-buffering effect of job resources.
BACKGROUND
The study group consisted of 317 beginning teachers from Belgium. The two-wave survey data with a 1-year time lag were analyzed by means of structural equation modeling and multiple group analyses.
METHOD
Data did not support the matching hypothesis. In addition, no support was found for the moderating effect of specific active coping styles, irrespective of the level of match.
RESULTS
To show stress-buffering effects of job resources, it seems to make no difference whether or not specific types of job demands and job resources are matched, and whether or not individual differences in specific active coping styles are taken into account.
CONCLUSION
[ "Adaptation, Psychological", "Adult", "Belgium", "Cognition", "Emotions", "Employment", "Faculty", "Female", "Humans", "Logistic Models", "Male", "Stress, Psychological", "Surveys and Questionnaires", "Work", "Workload", "Young Adult" ]
3212693
Introduction
Research in job stress has concentrated on identifying characteristics of the work environment that may relate to worker’s health and well-being. One dominant approach in this domain proposes that health and well-being can be explained by two distinct classes of job characteristics: job demands and job resources [1]. Job demands are work-related tasks that require effort, and vary from solving complex problems to dealing with aggressive clients or lifting heavy objects. Job resources on the other hand, are work-related assets that can be employed to deal with job demands. Examples of job resources are job autonomy, emotional support from colleagues, and technical equipment. Several researchers have pointed out the stress-buffering effect of job resources on the relation between job demands and job strain (e.g., [2–4]). Specifically, it has been proposed that high job demands will have a deleterious impact on worker’s health and well-being unless workers have sufficient job resources to deal with their demanding job. Job resources may be particularly likely to operate as a stress buffer if they are matched to job demands. That is, if workers use job resources that belong to the same domain of functioning as the type of job demands they need to deal with (e.g., [5, 6]). This idea of match is often referred to as the ‘matching hypothesis’ [7] and is accompanied by a sound body of empirical evidence (cf. [8]). However, to this very moment, the matching hypothesis has mainly been tested in cross-sectional studies [8]. Because cross-sectional designs are not well-suited to make causal inferences about the relation between demands, resources, and strain [9], it seems of great interest to extend the number of longitudinal studies on the matching hypothesis. Therefore, the first aim of the current study is to examine the matching hypothesis with respect to the longitudinal relation between job demands, job resources, and job strain. In this study, we will use a time lag of 1 year to control for time-variant effects (e.g., seasonal fluctuations) that might be present when using time lags shorter than 1 year (cf. [10, 11]). Moreover, compared to time lags of more than 1 year (i.e., 2 or 3 years), a 1-year time lag has proven to be most appropriate for demonstrating longitudinal stressor–strain relations [12]. In addition to the match between job demands and job resources, stress-buffering effects of job resources may also depend on worker’s personal characteristics. Specifically, it has been argued that personal characteristics are likely to moderate the linkage between job conditions and job strain [13]. An individual characteristic that could particularly moderate the stress-buffering effect of job resources is worker’s active coping style. Active coping style can be defined as a persistent tendency to actively manage critical events that pose a challenge, threat, harm, loss, or benefit to a person (cf. [14, 15]). If we translate this definition to work settings, it follows that workers with a high active coping style are more inclined to actively cope with job demands than workers with a low active coping style (cf. [16–19]). Because active coping behavior in demanding situations at work implies the investment of job resources, it seems reasonable to assume that differences in active coping style will have a different impact on the activation of job resources in stressful situations at work. 
That is, in case of high job demands, workers with a high active coping style may be more likely to activate job resources than workers with a low active coping style (cf. [20]). For instance, workers with a high active coping style may be more likely to consult an expert in the field, ask for emotional support, or employ technical equipment. Because workers who activate job resources are generally more likely to benefit from the stress-buffering effect of job resources than workers who do not use job resources, individual differences in active coping style should be expressed in the number of stress-buffering effects of job resources that are found for the individuals involved. In a cross-sectional study by de Rijk et al. [18], it was indeed shown that high (vs. low) active coping style has a synergistic effect on the stress-buffering effect of job resources. The second aim of the current study was to examine the moderating effect of worker’s active coping style on the lagged stress-buffering effect of job resources. Several researchers have suggested that the moderating effect of active coping style will be stronger if the nature of coping is specific to job resources (cf. [21, 22]). In other words, to show stronger moderating effects of active coping style on the stress-buffering effect of job resources, active coping style should belong to the same domain of functioning as job resources. To the best of our knowledge, the moderating effect of specific, corresponding types of active coping styles has not been tested yet. Therefore, the third aim of the current study is to examine the moderating effect of matching active coping styles with respect to the longitudinal relation between job demands, job resources, and job strain. [SUBTITLE] Matching Hypothesis [SUBSECTION] According to the matching hypothesis, specific types of job resources should be matched to specific types of job demands to show stress-buffering effects of job resources (e.g., [5, 23]). Generally speaking, three specific types of job demands and job resources can be distinguished: cognitive, emotional, and physical demands and resources [24, 25]. When the matching hypothesis is applied to the longitudinal relation between job demands, job resources, and job strain, it follows that workers who are faced with high cognitive job demands (e.g., solving complex problems) at a certain moment in time, are least likely to experience job strain (e.g., mental fatigue) 1 year later if they have sufficient cognitive job resources (e.g., information from handbooks) to deal with their cognitively demanding job. Similarly, workers who are confronted with high emotional job demands (e.g., feeling threatened by aggressive patients) at a certain moment in time, are least likely to experience job strain (e.g., emotional exhaustion) 1 year later if they have sufficient emotional job resources (e.g., a listening ear from colleagues) to deal with their emotionally demanding job. Finally, if workers are faced with high physical job demands (e.g., moving heavy objects) at a certain moment in time, they are least likely to experience job strain (e.g., back pain) 1 year later if they have sufficient physical job resources (e.g., a trolley) to deal with their physically demanding job [25, 26]. 
This brings us to the following hypothesis:

Hypothesis 1: Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.

[SUBTITLE] Matching Active Coping Styles [SUBSECTION] As explained before, workers with a high active coping style are more likely to activate job resources in demanding situations at work and may, hence, experience less job strain 1 year later than workers with a low active coping style (cf. [18, 27]). However, stress-buffering effects of job resources may sometimes occur less often than would be expected on the basis of resource accessibility and workers’ active coping style (i.e., high vs. low). More specifically, according to Warr’s [28] concept of fit, workers with certain personal characteristics seek out and respond to jobs that offer more of these characteristics. If we apply this concept of fit to the current setting (i.e., workers who activate job resources to actively cope with job demands), it is plausible that workers will only activate available job resources if they have a personal characteristic (i.e., a high active coping style) that corresponds to the type of job resources concerned. In other words, the nature of coping may need to be specific to job resources to optimize the synergistic effect of a high active coping style (cf. [21, 22]). In line with the distinction made by Greenglass, Schwarzer, Jakubiec, Fiksenbaum, and Taubert [29], we defined three types of active coping styles: cognitive, emotional, and physical active coping styles. Each specific type of active coping style reflects the extent to which workers are likely to activate specific, corresponding types of job resources to actively cope with job demands (cf. [28]). For instance, workers with a high cognitive active coping style are more likely to use cognitive job resources than workers with a low cognitive active coping style. In a similar vein, workers with a high emotional active coping style are more likely to use emotional job resources than workers with a low emotional active coping style, whereas workers with a high physical active coping style are more likely to use physical job resources than workers with a low physical active coping style. Though some workers may score high on all three types of active coping styles, others may only score high on one or two specific types of active coping styles and may therefore only use job resources from one or two specific domains (e.g., cognitive or physical job resources) to actively cope with job demands. For this latter group of workers, stress-buffering effects of job resources from the third domain (i.e., emotional job resources) are less likely to occur. This brings us to the following hypotheses:

Hypothesis 2: Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.

Hypothesis 3: The synergistic effects of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.
null
null
Results
Testing hypothesis 1, results in Table 2 showed two significant two-way interactions between matching job demands and job resources in the prediction of job strain 1 year later. One two-way interaction showed a stress-buffering effect. Specifically, it was shown that a combination of high physical demands and high physical resources resulted in less emotional exhaustion 1 year later than a combination of high physical demands and low physical resources (t = −3.15, p < .01). The other two-way interaction was not in the predicted direction. That is, a reversed interaction effect was found in which a combination of high cognitive demands and high cognitive resources led to more cognitive strain 1 year later than a combination of high cognitive demands and low cognitive resources (t = 2.25, p < .05). Table 2Lagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of matching job demands and job resources for the total sample (N = 317)Cognitive strain T2Emotional exhaustion T2Physical complaints T2 B SE t B SE t B SE t Control variables Gender−0.190.08−2.27*0.040.120.340.110.081.28 Age−0.000.01−0.180.000.010.350.000.010.33T1 outcome variables Cognitive strain T10.500.0510.07**0.040.070.63−0.030.05−0.54 Emotional exhaustion T1−0.020.04−0.580.490.059.09**0.130.043.33** Physical complaints T1−0.050.05−0.990.120.071.710.430.058.40**Demands and resources Cognitive demands−0.000.04−0.110.120.052.37*−0.010.04−0.24 Emotional demands0.030.040.850.060.051.150.000.040.00 Physical demands−0.020.03−0.74−0.010.05−0.240.020.030.54 Cognitive resources0.040.041.11−0.010.06−0.190.030.040.88 Emotional resources−0.050.04−1.21−0.020.06−0.38−0.040.04−1.14 Physical resources−0.090.04−2.37*−0.090.05−1.67−0.040.04−1.07Moderating terms Cognitive demands × cognitive resources0.080.032.25*0.030.050.710.060.031.89 Emotional demands × emotional resources−0.000.03−0.110.020.040.480.030.030.88 Physical demands × physical resources−0.020.03−0.76−0.130.04−3.15**−0.010.03−0.34 B unstandardized coefficient, SE standard error, t t-statistic, T1 time 1, T2 time 2*p < .05 **p < .01 Lagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of matching job demands and job resources for the total sample (N = 317) B unstandardized coefficient, SE standard error, t t-statistic, T1 time 1, T2 time 2 *p < .05 **p < .01 In addition to the significant two-way interactions between matching demands and resources, one significant two-way interaction was found between non-matching demands and resources. More specifically, as shown in Table 3, a combination of high emotional demands and high physical resources resulted in less emotional exhaustion 1 year later than a combination of high emotional demands and low physical resources (t = −2.25, p < .05). 
In addition to the significant two-way interactions between matching demands and resources, one significant two-way interaction was found between non-matching demands and resources. More specifically, as shown in Table 3, a combination of high emotional demands and high physical resources resulted in less emotional exhaustion 1 year later than a combination of high emotional demands and low physical resources (t = −2.25, p < .05).

Table 3. Lagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of non-matching job demands and job resources for the total sample (N = 317)

| Predictor | Cognitive strain T2: B (SE) t | Emotional exhaustion T2: B (SE) t | Physical complaints T2: B (SE) t |
|---|---|---|---|
| Control variables | | | |
| Gender | −0.19 (0.08) −2.30* | 0.09 (0.12) 0.74 | 0.09 (0.08) 1.09 |
| Age | −0.00 (0.01) −0.68 | 0.00 (0.01) 0.12 | 0.00 (0.01) 0.23 |
| T1 outcome variables | | | |
| Cognitive strain T1 | 0.48 (0.05) 9.67** | 0.03 (0.07) 0.45 | −0.05 (0.05) −1.05 |
| Emotional exhaustion T1 | −0.02 (0.04) −0.58 | 0.49 (0.05) 9.00** | 0.12 (0.04) 3.19** |
| Physical complaints T1 | −0.04 (0.05) −0.82 | 0.11 (0.07) 1.49 | 0.44 (0.05) 8.61** |
| Demands and resources | | | |
| Cognitive demands | −0.02 (0.04) −0.65 | 0.11 (0.05) 2.07* | −0.01 (0.04) −0.28 |
| Emotional demands | 0.04 (0.04) 1.12 | 0.06 (0.05) 1.22 | 0.00 (0.04) −0.12 |
| Physical demands | −0.03 (0.03) −0.89 | 0.00 (0.05) −0.06 | 0.01 (0.03) 0.45 |
| Cognitive resources | 0.04 (0.04) 1.10 | −0.01 (0.06) −0.25 | 0.03 (0.04) 0.71 |
| Emotional resources | −0.05 (0.04) −1.36 | −0.04 (0.06) −0.69 | −0.05 (0.04) −1.32 |
| Physical resources | −0.11 (0.04) −2.88** | −0.08 (0.05) −1.41 | −0.05 (0.04) −1.39 |
| Moderating terms | | | |
| Cognitive demands × emotional resources | 0.01 (0.03) 0.42 | 0.04 (0.05) 0.85 | −0.02 (0.03) −0.48 |
| Cognitive demands × physical resources | 0.05 (0.04) 1.23 | 0.02 (0.06) 0.43 | 0.07 (0.04) 1.90 |
| Emotional demands × cognitive resources | 0.02 (0.03) 0.54 | 0.02 (0.05) 0.45 | 0.06 (0.03) 1.74 |
| Emotional demands × physical resources | −0.05 (0.03) −1.32 | −0.11 (0.05) −2.25* | −0.02 (0.03) −0.49 |
| Physical demands × cognitive resources | 0.04 (0.04) 1.11 | −0.04 (0.05) −0.83 | −0.03 (0.04) −0.75 |
| Physical demands × emotional resources | 0.06 (0.03) 1.66 | 0.04 (0.05) 0.83 | 0.03 (0.03) 0.97 |

Note: B = unstandardized coefficient, SE = standard error, t = t-statistic, T1 = time 1, T2 = time 2. *p < .05; **p < .01.

To summarize, one out of nine (11.1%) tested two-way interactions between matching demands and resources, and one out of 18 (5.6%) tested two-way interactions between non-matching demands and resources showed a lagged stress-buffering effect of job resources. To determine whether these percentages were significantly different from each other, a z test was conducted [41]. Contrary to hypothesis 1, results of the z test revealed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of job demands and job resources as in case of a non-match between specific types of job demands and job resources (z = 0.26; p = .80).

Testing hypothesis 2, results of the multiple group analyses showed that for each type of active coping style (i.e., cognitive, emotional, and physical active coping style), lagged moderating effects of job resources were equally likely to be found for teachers with a low active coping style as for teachers with a high active coping style. Specifically, testing moderating terms of matching demands and resources, no differences were found between workers with a low or high cognitive active coping style (Δpooled χ² = 15.77, df = 27, p = .96), workers with a low or high emotional active coping style (Δpooled χ² = 0.95, df = 27, p = 1.00), and workers with a low or high physical active coping style (Δpooled χ² = 4.10, df = 27, p = 1.00).
Similarly, testing moderating terms of non-matching demands and resources, no differences were found between workers with a low or high cognitive active coping style (Δpooled χ² = 23.49, df = 36, p = .95), workers with a low or high emotional active coping style (Δpooled χ² = 1.81, df = 36, p = 1.00), and workers with a low or high physical active coping style (Δpooled χ² = 4.79, df = 36, p = 1.00). As we did not find any evidence for hypothesis 2, there was no statistical rationale for testing hypothesis 3.
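For readers who want to retrace the two significance checks used in this Results section, the sketch below computes a pooled two-proportion z test for the share of buffering interactions and a chi-square difference test for one of the multiple-group comparisons. Both are generic stand-ins under stated assumptions rather than a reproduction of the original analysis; in particular, the exact z procedure of reference [41] may differ, so the computed z need not match the reported z = 0.26 exactly.

```python
# Minimal sketch of the two significance checks reported above, taking the
# counts and chi-square values quoted in the text at face value.
from scipy.stats import chi2
from statsmodels.stats.proportion import proportions_ztest

# (a) Share of buffering interactions: 1 of 9 matching vs. 1 of 18 non-matching pairs.
# This is a standard pooled two-proportion z test, used here as a stand-in for [41].
z_stat, p_value = proportions_ztest(count=[1, 1], nobs=[9, 18])
print(f"two-proportion z test: z = {z_stat:.2f}, p = {p_value:.2f}")

# (b) Chi-square difference test for one multiple-group comparison, e.g. low vs.
# high cognitive active coping style with matching moderating terms. The
# unconstrained model is saturated (pooled chi-square = 0), so the difference
# equals the constrained model's pooled chi-square.
delta_chi2, delta_df = 15.77, 27
p_diff = chi2.sf(delta_chi2, delta_df)
print(f"chi-square difference: delta chi2({delta_df}) = {delta_chi2}, p = {p_diff:.2f}")
# A non-significant difference means constraining the paths to be equal across
# the low and high coping-style groups does not worsen model fit, i.e., no
# moderating effect of that coping style.
```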
null
null
[ "Matching Hypothesis", "Matching Active Coping Styles", "Design", "Participants", "Measures", "Job demands and job resources", "Active coping styles", "Strain", "Data Analysis", "Matching Hypothesis", "Matching Active Coping Styles", "Study Limitations and Implications" ]
[ "According to the matching hypothesis, specific types of job resources should be matched to specific types of job demands to show stress-buffering effects of job resources (e.g., [5, 23]). Generally speaking, three specific types of job demands and job resources can be distinguished: cognitive, emotional, and physical demands and resources [24, 25]. When the matching hypothesis is applied to the longitudinal relation between job demands, job resources, and job strain, it follows that workers who are faced with high cognitive job demands (e.g., solving complex problems) at a certain moment in time, are least likely to experience job strain (e.g., mental fatigue) 1 year later if they have sufficient cognitive job resources (e.g., information from handbooks) to deal with their cognitively demanding job. Similarly, workers who are confronted with high emotional job demands (e.g., feeling threatened by aggressive patients) at a certain moment in time, are least likely to experience job strain (e.g., emotional exhaustion) 1 year later if they have sufficient emotional job resources (e.g., a listening ear from colleagues) to deal with their emotionally demanding job. Finally, if workers are faced with high physical job demands (e.g., moving heavy objects) at a certain moment in time, they are least likely to experience job strain (e.g., back pain) 1 year later if they have sufficient physical job resources (e.g., a trolley) to deal with their physically demanding job [25, 26]. This brings us to the following hypothesis:\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.", "As explained before, workers with a high active coping style are more likely to activate job resources in demanding situations at work and may, hence, experience less job strain 1 year later than workers with a low active coping style (cf. [18, 27]). However, sometimes, stress-buffering effects of job resources might occur less often than what could have been expected on the basis of resource accessibility and workers’ active coping style (i.e., high vs. low). More specifically, according to Warr’s [28] concept of fit, workers with certain personal characteristics seek out and respond to jobs that offer more of these characteristics. If we apply this concept of fit to the current setting (i.e., workers who activate job resources to actively cope with job demands), it is plausible that workers will only activate available job resources if they have a personal characteristic (i.e., a high active coping style) that corresponds to the type of job resources concerned. In other words, the nature of coping may need to be specific to job resources to optimize the synergistic effect of high active coping style (cf. [21, 22]).\nIn line with the distinction made by Greenglass, Schwarzer, Jakubiec, Fiksenbaum, and Taubert [29], we defined three types of active coping styles: cognitive, emotional, and physical active coping styles. 
Each specific type of active coping style reflects the extent to which workers are likely to activate specific, corresponding types of job resources to actively cope with job demands (cf. [28]). For instance, workers with a high cognitive active coping style are more likely to use cognitive job resources than workers with a low cognitive active coping style. In a similar vein, workers with a high emotional active coping style are more likely to use emotional job resources than workers with a low emotional active coping style, whereas workers with a high physical active coping style are more likely to use physical job resources than workers with a low physical active coping style. Though some workers may score high on all three types of active coping styles, others may only score high on one or two specific types of active coping styles and may therefore only use job resources from one or two specific domains (e.g., cognitive or physical job resources) to actively cope with job demands. For this latter group of workers, stress-buffering effects of job resources from the third domain (i.e., emotional job resources) are less likely to occur. This brings us to the following hypotheses:\nHypothesis 2Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.Hypothesis 3The synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.\nThe synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.", "Data were collected among graduates from eight Belgian teacher training colleges and were obtained by a questionnaire survey that was conducted at the end of 2004 (time 1), the end of 2005 (time 2), and the end of 2006 (time 3). Questionnaires were sent to the workers’ home addresses. Respondents participated on a voluntary basis and signed an informed consent at each measurement. Because the strain measures at time 3 differed from the strain measures at time 1 and time 2, it was decided to test our hypotheses by means of the first and second wave of the study. Active coping style, however, was only measured during the third and final wave of the study (the idea to examine the synergistic effect of matching active coping styles originated after the second wave of data collection), so that the final study group consisted of teachers who participated in all three waves. Because active coping style was not measured at time 1, this person variable could not be examined as a continuous moderator and had to be examined as a dichotomous moderator instead (see data analysis).", "The study group consisted of 317 teacher training graduates who worked as a teacher between 2004 and 2006. 
Of all graduates who were invited to participate in the study at time 1 (N = 7,092), 2,527 returned the questionnaire (response rate 35.6%). This moderate response rate can be attributed to undeliverable postal addresses, unemployment, and the length of the questionnaire. From the initial group of respondents, we selected the graduates who were currently working as a teacher (N = 1,116). At time 2, 443 out of 1,116 graduates returned the questionnaire and were still working as a teacher. The final group consisted of 317 out of 443 graduates who were still working as a teacher when they filled out the active coping style scales at time 3.\nThe mean age in the study group was 26.4 years (SD = 5.6) and 78.5% were female. On average, participants had 4.1 years (SD = 1.8) teaching experience, and 88.6% worked full-time [i.e., 20 to 30 teaching units (1 unit = 50 min direct contact with pupils) per week]. Of all participants, 33.8% worked in primary education (28.4% regular education and 5.4% special education), 56.6% worked in secondary education (52.4% regular education and 4.2% special education), and 9.6% worked in other types of education.\nA comparison of drop-outs [i.e., no participation at time 2 (‘group A’) or at time 3 (‘group B’)] with continuous participants [i.e., participation at time 1 and time 2 (‘group C’), or at time 1, time 2, and time 3 (‘group D’)] showed that the data did not appear to suffer from serious selection problems. Specifically, using multiple logistic regression in which a dichotomous variable distinguishing participants who remained in the study from those who dropped out was included as the dependent variable (cf. [30]), attrition effects were found for cognitive job demands at time 1 (‘group A’ vs. ‘group C’, and ‘group A’ vs. ‘group D’), and for physical job resources at time 1, and physical complaints at time 2 (‘group B’ vs. ‘group D’). However, inspection of the respective mean scores revealed no healthy worker effect [31]. That is, it was shown that ‘group A’ (M = 3.88) experienced less cognitive demands at time 1 than ‘group C’ (M = 3.98) and ‘group D’ (M = 3.99). Further, ‘group B’ indicated that they had more physical job resources at time 1 (M = 3.44) and less physical complaints at time 2 (M = 1.66) than ‘group D’ (M = 3.24 and M = 1.73, respectively).", "Independent variables included in this study were cognitive, emotional, and physical job demands (time 1), corresponding job resources (time 1), and corresponding active coping styles (time 3). Dependent variables were chosen from the same domain as demands, resources, and active coping styles, resulting in cognitive, emotional, and physical strain measured at time 1 and time 2. Table 1 shows the psychometric properties of the measures included as well as their zero-order correlations.\nTable 1Number of items, Cronbach’s alphas, test–retest reliabilities, and Pearson correlations (N = 317)MeasureItems\nα\n1\n\nα\n2\n\nr\nt\n1234567891011121314151. Cognitive demands40.750.730.52**−2. Emotional demands60.800.840.44**0.26**−3. Physical demands50.840.900.71**0.030.29**−4. Cognitive resources50.570.600.40**0.03−0.16**−0.07−5. Emotional resources50.780.780.47**−0.06−0.25**−0.050.51**−6. Physical resources50.740.810.37**−0.08−0.19**−0.16**0.46**0.33**−7. Cognitive active coping style110.830.21**0.11−0.06−0.04−0.070.01−8. Emotional active coping style50.840.00−0.12*0.070.18**0.30**0.11−0.06−9. Physical active coping style40.76−0.05−0.13*−0.14*0.21**0.21**0.27**0.040.44**−10. 
Cognitive strain T130.840.56**−0.020.13*0.04−0.28**−0.35**−0.19**−0.01−0.14*−0.09−11. Emotional exhaustion T180.840.61**0.33**0.31**0.13*−0.26**−0.35**−0.16**0.12*−0.19**−0.17**0.16**−12. Physical complaints T140.610.64**0.060.15**0.26**−0.09−0.15**−0.24**−0.01−0.10−0.070.060.23**−13. Cognitive strain T230.80−0.020.07−0.00−0.16**−0.25**−0.18**−0.08−0.19**−0.18**0.56**0.070.03−14. Emotional exhaustion T280.870.25**0.22**0.11−0.19**−0.27**−0.17**0.16**−0.25**−0.19**0.16**0.61**0.23**0.21**−15. Physical complaints T240.640.060.100.17**−0.10−0.16**−0.16**0.05−0.13*−0.060.060.26**0.64**0.020.34**−\nT1 time 1, T2 time 2*p < .05 **p < .01\n\nNumber of items, Cronbach’s alphas, test–retest reliabilities, and Pearson correlations (N = 317)\n\nT1 time 1, T2 time 2\n*p < .05 **p < .01\n[SUBTITLE] Job demands and job resources [SUBSECTION] Cognitive, emotional, and physical job demands and job resources were assessed with the DISC questionnaire (DISQ 1.1, [32]), which has been used in other studies as well (e.g., [33, 34]). Items were scored on a 5-point frequency scale, ranging from 1 (never or very rarely) to 5 (very often or always).\nThe cognitive, emotional, and physical demands scales were measured with four, six, and five items, respectively. Example items of the respective demands scales are successively ‘worker X will need to display high levels of concentration and precision at work’, ‘worker X will have to display emotions (e.g., towards clients, colleagues, or supervisors) that are inconsistent with his/her current feelings’, and ‘worker X will have to lift or move heavy persons or objects (more than 10 kg)’.\nThe cognitive, emotional, and physical resources scales were measured with five items each. Example items of the respective resources scales are successively ‘worker X would have the opportunity to take a break when tasks require a lot of concentration’, ‘other people (e.g., clients, colleagues, or supervisors) would be a listening ear for worker X when he/she has faced a threatening situation’, and ‘worker X would receive help from others (e.g., clients, colleagues, or supervisors) in lifting or moving heavy persons or objects’.\nCognitive, emotional, and physical job demands and job resources were assessed with the DISC questionnaire (DISQ 1.1, [32]), which has been used in other studies as well (e.g., [33, 34]). Items were scored on a 5-point frequency scale, ranging from 1 (never or very rarely) to 5 (very often or always).\nThe cognitive, emotional, and physical demands scales were measured with four, six, and five items, respectively. Example items of the respective demands scales are successively ‘worker X will need to display high levels of concentration and precision at work’, ‘worker X will have to display emotions (e.g., towards clients, colleagues, or supervisors) that are inconsistent with his/her current feelings’, and ‘worker X will have to lift or move heavy persons or objects (more than 10 kg)’.\nThe cognitive, emotional, and physical resources scales were measured with five items each. 
Example items of the respective resources scales are successively ‘worker X would have the opportunity to take a break when tasks require a lot of concentration’, ‘other people (e.g., clients, colleagues, or supervisors) would be a listening ear for worker X when he/she has faced a threatening situation’, and ‘worker X would receive help from others (e.g., clients, colleagues, or supervisors) in lifting or moving heavy persons or objects’.\n[SUBTITLE] Active coping styles [SUBSECTION] Items assessing the three specific types of active coping styles were scored on a four-point agreement scale, ranging from 1 (totally disagree) to 4 (totally agree). Cognitive active coping style was measured with 11 items derived from the Reflective Coping Scale [29]. An example item is ‘I tackle a problem by thinking about realistic alternatives’. Emotional active coping style was measured with five items derived from the Emotional Support Seeking Scale [29]. An example item is ‘If I am depressed at work, I make an appeal to others (e.g., colleagues, supervisors, or clients) to help me feel better’. Physical active coping style was measured with four items based on the Instrumental Support Seeking Scale [29]. An example item is ‘If my job requires many or sustained physical efforts, I ask help from others (e.g., colleagues or supervisor)’.\nItems assessing the three specific types of active coping styles were scored on a four-point agreement scale, ranging from 1 (totally disagree) to 4 (totally agree). Cognitive active coping style was measured with 11 items derived from the Reflective Coping Scale [29]. An example item is ‘I tackle a problem by thinking about realistic alternatives’. Emotional active coping style was measured with five items derived from the Emotional Support Seeking Scale [29]. An example item is ‘If I am depressed at work, I make an appeal to others (e.g., colleagues, supervisors, or clients) to help me feel better’. Physical active coping style was measured with four items based on the Instrumental Support Seeking Scale [29]. An example item is ‘If my job requires many or sustained physical efforts, I ask help from others (e.g., colleagues or supervisor)’.\n[SUBTITLE] Strain [SUBSECTION] Cognitive strain was defined as the lack of active learning, that is, the degree workers are enabled and stimulated to acquire new knowledge and skills. This cognitive construct was measured with three items that were derived from a scale developed by Taris, Kompier, de Lange, Schaufeli, and Schreurs [35]. An example item is ‘In my job I can develop myself’. Items were scored on a four-point frequency scale, ranging from 1 [(almost) never] to 4 [(nearly) always]. To assist in the interpretation of the results, the signs of the respective parameter estimates have been reversed, such that high levels of active learning reflect cognitive strain. Emotional strain was assessed by an index of emotional exhaustion, which can be defined as a feeling of being emotionally worn out. This construct was measured with eight items derived from the Utrecht Burnout Scale that has been particularly designed for teachers [36]. An example item is ‘I feel emotionally drained from my work’. Items were scored on a seven-point frequency scale, ranging from 0 (never) to 6 (always). Physical strain was assessed by an index of physical complaints. 
Physical complaints refer to neck, shoulder, back, and limbs problems in the last 6 months and were measured with four items derived from a scale developed by Hildebrandt and Douwes [37]. An example item is ‘During the past 6 months, did you have trouble with your low back?’. The possible responses were 1 (no), 2 (sometimes), and 3 (yes).\nCognitive strain was defined as the lack of active learning, that is, the degree workers are enabled and stimulated to acquire new knowledge and skills. This cognitive construct was measured with three items that were derived from a scale developed by Taris, Kompier, de Lange, Schaufeli, and Schreurs [35]. An example item is ‘In my job I can develop myself’. Items were scored on a four-point frequency scale, ranging from 1 [(almost) never] to 4 [(nearly) always]. To assist in the interpretation of the results, the signs of the respective parameter estimates have been reversed, such that high levels of active learning reflect cognitive strain. Emotional strain was assessed by an index of emotional exhaustion, which can be defined as a feeling of being emotionally worn out. This construct was measured with eight items derived from the Utrecht Burnout Scale that has been particularly designed for teachers [36]. An example item is ‘I feel emotionally drained from my work’. Items were scored on a seven-point frequency scale, ranging from 0 (never) to 6 (always). Physical strain was assessed by an index of physical complaints. Physical complaints refer to neck, shoulder, back, and limbs problems in the last 6 months and were measured with four items derived from a scale developed by Hildebrandt and Douwes [37]. An example item is ‘During the past 6 months, did you have trouble with your low back?’. The possible responses were 1 (no), 2 (sometimes), and 3 (yes).", "Cognitive, emotional, and physical job demands and job resources were assessed with the DISC questionnaire (DISQ 1.1, [32]), which has been used in other studies as well (e.g., [33, 34]). Items were scored on a 5-point frequency scale, ranging from 1 (never or very rarely) to 5 (very often or always).\nThe cognitive, emotional, and physical demands scales were measured with four, six, and five items, respectively. Example items of the respective demands scales are successively ‘worker X will need to display high levels of concentration and precision at work’, ‘worker X will have to display emotions (e.g., towards clients, colleagues, or supervisors) that are inconsistent with his/her current feelings’, and ‘worker X will have to lift or move heavy persons or objects (more than 10 kg)’.\nThe cognitive, emotional, and physical resources scales were measured with five items each. Example items of the respective resources scales are successively ‘worker X would have the opportunity to take a break when tasks require a lot of concentration’, ‘other people (e.g., clients, colleagues, or supervisors) would be a listening ear for worker X when he/she has faced a threatening situation’, and ‘worker X would receive help from others (e.g., clients, colleagues, or supervisors) in lifting or moving heavy persons or objects’.", "Items assessing the three specific types of active coping styles were scored on a four-point agreement scale, ranging from 1 (totally disagree) to 4 (totally agree). Cognitive active coping style was measured with 11 items derived from the Reflective Coping Scale [29]. An example item is ‘I tackle a problem by thinking about realistic alternatives’. 
Emotional active coping style was measured with five items derived from the Emotional Support Seeking Scale [29]. An example item is ‘If I am depressed at work, I make an appeal to others (e.g., colleagues, supervisors, or clients) to help me feel better’. Physical active coping style was measured with four items based on the Instrumental Support Seeking Scale [29]. An example item is ‘If my job requires many or sustained physical efforts, I ask help from others (e.g., colleagues or supervisor)’.", "Cognitive strain was defined as the lack of active learning, that is, the degree workers are enabled and stimulated to acquire new knowledge and skills. This cognitive construct was measured with three items that were derived from a scale developed by Taris, Kompier, de Lange, Schaufeli, and Schreurs [35]. An example item is ‘In my job I can develop myself’. Items were scored on a four-point frequency scale, ranging from 1 [(almost) never] to 4 [(nearly) always]. To assist in the interpretation of the results, the signs of the respective parameter estimates have been reversed, such that high levels of active learning reflect cognitive strain. Emotional strain was assessed by an index of emotional exhaustion, which can be defined as a feeling of being emotionally worn out. This construct was measured with eight items derived from the Utrecht Burnout Scale that has been particularly designed for teachers [36]. An example item is ‘I feel emotionally drained from my work’. Items were scored on a seven-point frequency scale, ranging from 0 (never) to 6 (always). Physical strain was assessed by an index of physical complaints. Physical complaints refer to neck, shoulder, back, and limbs problems in the last 6 months and were measured with four items derived from a scale developed by Hildebrandt and Douwes [37]. An example item is ‘During the past 6 months, did you have trouble with your low back?’. The possible responses were 1 (no), 2 (sometimes), and 3 (yes).", "We applied structural equation modeling (SEM) using LISREL 8.50 [38] to test for stress-buffering effects of job resources on the longitudinal relation between job demands and job strain. In addition, because active coping style was measured at time 3, multiple group analyses were used to test whether (a) any differences could be observed between workers with a low specific active coping style and workers with a high specific active coping style, and (b) the nature of coping must be specific to job resources to optimize the synergistic effect of high specific active coping styles. Specifically, three pairs of subgroups were created by dividing scores on each type of active coping style, using median split. Workers were categorized based on their score as either having a low specific active coping style (low score) or a high specific active coping style (high score) (cf. [39]). 
This resulted in different subgroups (i.e., two per coping style) of workers having a low cognitive active coping style (N = 164) and workers having a high cognitive active coping style (N = 143), workers having a low emotional active coping style (N = 147) and workers having a high emotional active coping style (N = 169), and workers having a low physical active coping style (N = 172) and workers having a high physical active coping style (N = 140).\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain were tested by examining multiplicative interaction terms between job demands and job resources (i.e., demands × resources) at time 1 in the prediction of job strain at time 2. Because of the large number of interaction terms (nine in total), stress-buffering effects of job resources were tested by means of two separate analyses including either interaction terms between matching demands and resources, or interaction terms between non-matching demands and resources (see [26]). These two separate analyses were conducted in each subgroup, resulting in 12 SEM analyses in which we also controlled for age and gender, as well as for cognitive strain, emotional exhaustion, and physical complaints at time 1.\nAccording to Jaccard and Wan [40], multiple group analyses can be conducted for each pair of subgroups (i.e., low vs. high cognitive active coping style, low vs. high emotional active coping style, and low vs. high physical active coping style) by first estimating the parameters of the main terms and moderating terms in the different groups with no across-group constraints imposed (i.e., the main terms and interaction terms of both groups are assumed to be unequal). If the pooled chi-square of a particular pair of subgroups is non-significant, the parameters can be reestimated with across-group constraints imposed on all main terms and moderating terms (i.e., the main terms and interaction terms of both groups are assumed to be equal). A moderating effect of a particular type of active coping style is present if the pooled chi-square of the constrained model is significantly higher than the pooled chi-square of the unconstrained model. Because the residuals among our outcome variables at time 2 were allowed to correlate, the unconstrained models were fully saturated resulting in three pooled chi-squares of zero (which is non-significant). Next, we reestimated the parameters with across-group constraints imposed on all main terms and moderating terms, and calculated whether the pooled chi-squares of the constrained models significantly differed from zero (i.e., the pooled chi-square of the unconstrained models).", "Contrary to the matching hypothesis (hypothesis 1), results showed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of demands and resources as in case of a non-match between specific types of demands and resources. In addition, lagged stress-buffering effects of job resources were only found if physical resources were involved, whereas no effects were found containing a cognitive component (i.e., cognitive demands, cognitive resources, or cognitive strain). This study is therefore somewhat inconsistent with other longitudinal studies on the relation between demands, resources, and strain, which showed much stronger evidence for the matching hypothesis [26, 42]. 
One explanation for the current findings may be that our study group consisted of beginning teachers who still needed to develop adequate coping strategies to deal with high job demands (cf. [43]). That is, the beginning teachers in our study group may still have needed to learn what kind of job resources they had to employ to realize an optimal match between job demands and job resources. In any case, to put the current findings in the right perspective, more longitudinal studies on the matching hypothesis are badly needed.", "Contrary to hypothesis 2, results revealed that neither type of active coping style interacted with job resources to moderate the longitudinal relation between job demands and job strain. Because lagged moderating effects of job resources were equally likely to be found for teachers with a low specific active coping style as for teachers with a high specific active coping style, there was no statistical rationale for testing hypothesis 3.\nOne explanation why lagged moderating effects of job resources were equally likely to be found for workers with a low specific active coping style as for workers with a high specific active coping style, could be that job characteristics (i.e., demands and resources) are of more importance to the job stress process than personal characteristics (i.e., specific active coping styles). Though it has been argued that personal characteristics are particularly likely to moderate the linkages between job conditions and strain [13], moderating effects of coping style have not always been demonstrated (e.g., [44]). An alternative explanation may be that the mere perception that one has sufficient job resources to cope with job stressors (e.g., colleagues who can provide support) may already offset the impact of job demands (cf. [5]). Perhaps workers with a low specific active coping style did not necessarily need to activate available job resources in order to mitigate (or prevent) the adverse impact of high job demands on their health and well-being 1 year later. In addition, because active coping style was examined as a dichotomous moderator, power problems might explain why specific active coping styles did not make a significant contribution to the prediction of job strain. Finally, a better focus for research might be specific active coping behaviors as they occur in specific demanding episodes at work. Because measurements of coping close to when coping happens provide some of the most accurate assessments of coping [45], future research could use hourly and daily reports of coping to examine whether specific active coping behaviors interact with specific job resources to buffer the relation between specific demanding episodes at work and job strain. If such effects are found, the current lack of support for interactions between specific demands, resources, and active coping could be explained by the measures used in this study to assess coping.", "A key limitation of the current study is that it included a homogeneous group of beginning teachers, which—given the moderate response rate—might not be representative for the population of teacher training graduates who were invited to participate in the study. Because this group poses questions about the study’s generalizability to the teaching profession in general as well as other service jobs, future research could focus on more heterogeneous groups. 
A second limitation of this study is that some findings may not be fully reliable due to the somewhat lower alpha (57) of the cognitive resource scale.\nFrom a theoretical point of view, the current findings suggest that, in order to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it makes no difference whether or not specific types of job resources are matched to specific types of job demands. In addition, the findings emphasize the importance of job rather than personal characteristics [46]. Specifically, results showed that for each type of active coping style, two-way interactions between specific types of job demands and job resources had similar lagged effects on job strain for workers with a low specific active coping style as for workers with a high specific active coping style. Hence, to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it seems to make no difference whether or not individual differences in specific active coping styles are taken into account.\nThe current findings could be of importance to educational practice as well, as there is a high attrition rate, especially among beginning teachers [47]. Those who leave the teaching profession usually do so within the first 5 years [48]. Further, school teaching is generally regarded as a highly stressful profession [49]. Burnout, for instance, is a major problem among teachers, and may seriously affect the achievement of educational goals even before attrition is at stake [50, 51]. In this study, it was shown that the adverse lagged effects of physical and emotional job demands on emotional exhaustion can both be diminished by physical job resources. However, given the existing body of empirical evidence for the matching hypothesis [8] and the fact that the current findings did not suggest that non-matching job resources are more functional stress buffers than matching job resources, it is recommended that employers do not only give priority to physical job resources to arm teachers against these job demands, but also try to make matching emotional job resources easily accessible to all workers. For instance, when teachers need to deal with job-inherent emotions (e.g., being angry with rude pupils) and/or organizationally desired emotions (e.g., staying calm in front of a class), employers could provide emotional support, or stimulate emotional support among colleagues (e.g., a listening ear during breaks or work meetings). In addition to job redesign interventions in personnel selection, teachers could be selected based on personal characteristics that strengthen their immunity to job strain. The current findings suggest, however, that there is no need to address teachers’ specific active coping styles, as these personal characteristics do not seem to affect the investment of available job resources during stressful situations at work.\nTo conclude, results in this longitudinal survey study did support neither the matching hypothesis, nor the moderating effect of specific (matching) active coping styles on the stress-buffering effect of job resources. 
However, since the results were somewhat inconsistent with previous findings on the matching hypothesis [26, 42], and previous research has shown mixed results with respect to the moderating effect of coping (see e.g., [18, 44]), one should be cautious drawing any firm, generalizable conclusions with respect to the matching hypothesis and the moderating effect of specific (matching) active coping styles. Therefore, further longitudinal research among both beginning and experienced teachers as well as in multi-occupation groups is highly recommended." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Matching Hypothesis", "Matching Active Coping Styles", "Method", "Design", "Participants", "Measures", "Job demands and job resources", "Active coping styles", "Strain", "Data Analysis", "Results", "Discussion", "Matching Hypothesis", "Matching Active Coping Styles", "Study Limitations and Implications" ]
[ "Research in job stress has concentrated on identifying characteristics of the work environment that may relate to worker’s health and well-being. One dominant approach in this domain proposes that health and well-being can be explained by two distinct classes of job characteristics: job demands and job resources [1]. Job demands are work-related tasks that require effort, and vary from solving complex problems to dealing with aggressive clients or lifting heavy objects. Job resources on the other hand, are work-related assets that can be employed to deal with job demands. Examples of job resources are job autonomy, emotional support from colleagues, and technical equipment.\nSeveral researchers have pointed out the stress-buffering effect of job resources on the relation between job demands and job strain (e.g., [2–4]). Specifically, it has been proposed that high job demands will have a deleterious impact on worker’s health and well-being unless workers have sufficient job resources to deal with their demanding job. Job resources may be particularly likely to operate as a stress buffer if they are matched to job demands. That is, if workers use job resources that belong to the same domain of functioning as the type of job demands they need to deal with (e.g., [5, 6]). This idea of match is often referred to as the ‘matching hypothesis’ [7] and is accompanied by a sound body of empirical evidence (cf. [8]). However, to this very moment, the matching hypothesis has mainly been tested in cross-sectional studies [8]. Because cross-sectional designs are not well-suited to make causal inferences about the relation between demands, resources, and strain [9], it seems of great interest to extend the number of longitudinal studies on the matching hypothesis. Therefore, the first aim of the current study is to examine the matching hypothesis with respect to the longitudinal relation between job demands, job resources, and job strain. In this study, we will use a time lag of 1 year to control for time-variant effects (e.g., seasonal fluctuations) that might be present when using time lags shorter than 1 year (cf. [10, 11]). Moreover, compared to time lags of more than 1 year (i.e., 2 or 3 years), a 1-year time lag has proven to be most appropriate for demonstrating longitudinal stressor–strain relations [12].\nIn addition to the match between job demands and job resources, stress-buffering effects of job resources may also depend on worker’s personal characteristics. Specifically, it has been argued that personal characteristics are likely to moderate the linkage between job conditions and job strain [13]. An individual characteristic that could particularly moderate the stress-buffering effect of job resources is worker’s active coping style. Active coping style can be defined as a persistent tendency to actively manage critical events that pose a challenge, threat, harm, loss, or benefit to a person (cf. [14, 15]). If we translate this definition to work settings, it follows that workers with a high active coping style are more inclined to actively cope with job demands than workers with a low active coping style (cf. [16–19]). Because active coping behavior in demanding situations at work implies the investment of job resources, it seems reasonable to assume that differences in active coping style will have a different impact on the activation of job resources in stressful situations at work. 
That is, in case of high job demands, workers with a high active coping style may be more likely to activate job resources than workers with a low active coping style (cf. [20]). For instance, workers with a high active coping style may be more likely to consult an expert in the field, ask for emotional support, or employ technical equipment. Because workers who activate job resources are generally more likely to benefit from the stress-buffering effect of job resources than workers who do not use job resources, individual differences in active coping style should be expressed in the number of stress-buffering effects of job resources that are found for the individuals involved. In a cross-sectional study by de Rijk et al. [18], it was indeed shown that high (vs. low) active coping style has a synergistic effect on the stress-buffering effect of job resources. The second aim of the current study was to examine the moderating effect of worker’s active coping style on the lagged stress-buffering effect of job resources.\nSeveral researchers have suggested that the moderating effect of active coping style will be stronger if the nature of coping is specific to job resources (cf. [21, 22]). In other words, to show stronger moderating effects of active coping style on the stress-buffering effect of job resources, active coping style should belong to the same domain of functioning as job resources. To the best of our knowledge, the moderating effect of specific, corresponding types of active coping styles has not been tested yet. Therefore, the third aim of the current study is to examine the moderating effect of matching active coping styles with respect to the longitudinal relation between job demands, job resources, and job strain.\n[SUBTITLE] Matching Hypothesis [SUBSECTION] According to the matching hypothesis, specific types of job resources should be matched to specific types of job demands to show stress-buffering effects of job resources (e.g., [5, 23]). Generally speaking, three specific types of job demands and job resources can be distinguished: cognitive, emotional, and physical demands and resources [24, 25]. When the matching hypothesis is applied to the longitudinal relation between job demands, job resources, and job strain, it follows that workers who are faced with high cognitive job demands (e.g., solving complex problems) at a certain moment in time, are least likely to experience job strain (e.g., mental fatigue) 1 year later if they have sufficient cognitive job resources (e.g., information from handbooks) to deal with their cognitively demanding job. Similarly, workers who are confronted with high emotional job demands (e.g., feeling threatened by aggressive patients) at a certain moment in time, are least likely to experience job strain (e.g., emotional exhaustion) 1 year later if they have sufficient emotional job resources (e.g., a listening ear from colleagues) to deal with their emotionally demanding job. Finally, if workers are faced with high physical job demands (e.g., moving heavy objects) at a certain moment in time, they are least likely to experience job strain (e.g., back pain) 1 year later if they have sufficient physical job resources (e.g., a trolley) to deal with their physically demanding job [25, 26]. 
This brings us to the following hypothesis:\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\nAccording to the matching hypothesis, specific types of job resources should be matched to specific types of job demands to show stress-buffering effects of job resources (e.g., [5, 23]). Generally speaking, three specific types of job demands and job resources can be distinguished: cognitive, emotional, and physical demands and resources [24, 25]. When the matching hypothesis is applied to the longitudinal relation between job demands, job resources, and job strain, it follows that workers who are faced with high cognitive job demands (e.g., solving complex problems) at a certain moment in time, are least likely to experience job strain (e.g., mental fatigue) 1 year later if they have sufficient cognitive job resources (e.g., information from handbooks) to deal with their cognitively demanding job. Similarly, workers who are confronted with high emotional job demands (e.g., feeling threatened by aggressive patients) at a certain moment in time, are least likely to experience job strain (e.g., emotional exhaustion) 1 year later if they have sufficient emotional job resources (e.g., a listening ear from colleagues) to deal with their emotionally demanding job. Finally, if workers are faced with high physical job demands (e.g., moving heavy objects) at a certain moment in time, they are least likely to experience job strain (e.g., back pain) 1 year later if they have sufficient physical job resources (e.g., a trolley) to deal with their physically demanding job [25, 26]. This brings us to the following hypothesis:\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\n[SUBTITLE] Matching Active Coping Styles [SUBSECTION] As explained before, workers with a high active coping style are more likely to activate job resources in demanding situations at work and may, hence, experience less job strain 1 year later than workers with a low active coping style (cf. [18, 27]). However, sometimes, stress-buffering effects of job resources might occur less often than what could have been expected on the basis of resource accessibility and workers’ active coping style (i.e., high vs. low). More specifically, according to Warr’s [28] concept of fit, workers with certain personal characteristics seek out and respond to jobs that offer more of these characteristics. 
If we apply this concept of fit to the current setting (i.e., workers who activate job resources to actively cope with job demands), it is plausible that workers will only activate available job resources if they have a personal characteristic (i.e., a high active coping style) that corresponds to the type of job resources concerned. In other words, the nature of coping may need to be specific to job resources to optimize the synergistic effect of high active coping style (cf. [21, 22]).\nIn line with the distinction made by Greenglass, Schwarzer, Jakubiec, Fiksenbaum, and Taubert [29], we defined three types of active coping styles: cognitive, emotional, and physical active coping styles. Each specific type of active coping style reflects the extent to which workers are likely to activate specific, corresponding types of job resources to actively cope with job demands (cf. [28]). For instance, workers with a high cognitive active coping style are more likely to use cognitive job resources than workers with a low cognitive active coping style. In a similar vein, workers with a high emotional active coping style are more likely to use emotional job resources than workers with a low emotional active coping style, whereas workers with a high physical active coping style are more likely to use physical job resources than workers with a low physical active coping style. Though some workers may score high on all three types of active coping styles, others may only score high on one or two specific types of active coping styles and may therefore only use job resources from one or two specific domains (e.g., cognitive or physical job resources) to actively cope with job demands. For this latter group of workers, stress-buffering effects of job resources from the third domain (i.e., emotional job resources) are less likely to occur. This brings us to the following hypotheses:\nHypothesis 2Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.Hypothesis 3The synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.\nThe synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.\nAs explained before, workers with a high active coping style are more likely to activate job resources in demanding situations at work and may, hence, experience less job strain 1 year later than workers with a low active coping style (cf. [18, 27]). However, sometimes, stress-buffering effects of job resources might occur less often than what could have been expected on the basis of resource accessibility and workers’ active coping style (i.e., high vs. low). 
More specifically, according to Warr’s [28] concept of fit, workers with certain personal characteristics seek out and respond to jobs that offer more of these characteristics. If we apply this concept of fit to the current setting (i.e., workers who activate job resources to actively cope with job demands), it is plausible that workers will only activate available job resources if they have a personal characteristic (i.e., a high active coping style) that corresponds to the type of job resources concerned. In other words, the nature of coping may need to be specific to job resources to optimize the synergistic effect of high active coping style (cf. [21, 22]).\nIn line with the distinction made by Greenglass, Schwarzer, Jakubiec, Fiksenbaum, and Taubert [29], we defined three types of active coping styles: cognitive, emotional, and physical active coping styles. Each specific type of active coping style reflects the extent to which workers are likely to activate specific, corresponding types of job resources to actively cope with job demands (cf. [28]). For instance, workers with a high cognitive active coping style are more likely to use cognitive job resources than workers with a low cognitive active coping style. In a similar vein, workers with a high emotional active coping style are more likely to use emotional job resources than workers with a low emotional active coping style, whereas workers with a high physical active coping style are more likely to use physical job resources than workers with a low physical active coping style. Though some workers may score high on all three types of active coping styles, others may only score high on one or two specific types of active coping styles and may therefore only use job resources from one or two specific domains (e.g., cognitive or physical job resources) to actively cope with job demands. For this latter group of workers, stress-buffering effects of job resources from the third domain (i.e., emotional job resources) are less likely to occur. This brings us to the following hypotheses:\nHypothesis 2Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.Hypothesis 3The synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if workers have a high specific active coping style than if workers have a low specific active coping style.\nThe synergistic effect of high specific active coping styles are more likely to occur if there is a match between specific types of job resources and specific types of active coping styles than if there is a non-match between specific types of job resources and specific types of active coping styles.", "According to the matching hypothesis, specific types of job resources should be matched to specific types of job demands to show stress-buffering effects of job resources (e.g., [5, 23]). Generally speaking, three specific types of job demands and job resources can be distinguished: cognitive, emotional, and physical demands and resources [24, 25]. 
When the matching hypothesis is applied to the longitudinal relation between job demands, job resources, and job strain, it follows that workers who are faced with high cognitive job demands (e.g., solving complex problems) at a certain moment in time, are least likely to experience job strain (e.g., mental fatigue) 1 year later if they have sufficient cognitive job resources (e.g., information from handbooks) to deal with their cognitively demanding job. Similarly, workers who are confronted with high emotional job demands (e.g., feeling threatened by aggressive patients) at a certain moment in time, are least likely to experience job strain (e.g., emotional exhaustion) 1 year later if they have sufficient emotional job resources (e.g., a listening ear from colleagues) to deal with their emotionally demanding job. Finally, if workers are faced with high physical job demands (e.g., moving heavy objects) at a certain moment in time, they are least likely to experience job strain (e.g., back pain) 1 year later if they have sufficient physical job resources (e.g., a trolley) to deal with their physically demanding job [25, 26]. This brings us to the following hypothesis:\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.\n\nStress-buffering effects of job resources on the longitudinal relation between job demands and job strain are more likely to occur if there is a match between specific types of job demands and job resources than if there is a non-match between specific types of job demands and job resources.", "As explained before, workers with a high active coping style are more likely to activate job resources in demanding situations at work and may, hence, experience less job strain 1 year later than workers with a low active coping style (cf. [18, 27]). However, sometimes, stress-buffering effects of job resources might occur less often than what could have been expected on the basis of resource accessibility and workers’ active coping style (i.e., high vs. low). More specifically, according to Warr’s [28] concept of fit, workers with certain personal characteristics seek out and respond to jobs that offer more of these characteristics. If we apply this concept of fit to the current setting (i.e., workers who activate job resources to actively cope with job demands), it is plausible that workers will only activate available job resources if they have a personal characteristic (i.e., a high active coping style) that corresponds to the type of job resources concerned. In other words, the nature of coping may need to be specific to job resources to optimize the synergistic effect of high active coping style (cf. [21, 22]).\nIn line with the distinction made by Greenglass, Schwarzer, Jakubiec, Fiksenbaum, and Taubert [29], we defined three types of active coping styles: cognitive, emotional, and physical active coping styles. Each specific type of active coping style reflects the extent to which workers are likely to activate specific, corresponding types of job resources to actively cope with job demands (cf. [28]). For instance, workers with a high cognitive active coping style are more likely to use cognitive job resources than workers with a low cognitive active coping style. 
Methods

Design

Data were collected among graduates from eight Belgian teacher training colleges and were obtained by a questionnaire survey that was conducted at the end of 2004 (time 1), the end of 2005 (time 2), and the end of 2006 (time 3). Questionnaires were sent to the workers’ home addresses. Respondents participated on a voluntary basis and signed an informed consent at each measurement. Because the strain measures at time 3 differed from the strain measures at time 1 and time 2, it was decided to test our hypotheses by means of the first and second wave of the study. Active coping style, however, was only measured during the third and final wave of the study (the idea to examine the synergistic effect of matching active coping styles originated after the second wave of data collection), so the final study group consisted of teachers who participated in all three waves. Because active coping style was not measured at time 1, this person variable could not be examined as a continuous moderator and had to be examined as a dichotomous moderator instead (see Data Analysis).
Participants

The study group consisted of 317 teacher training graduates who worked as a teacher between 2004 and 2006. Of all graduates who were invited to participate in the study at time 1 (N = 7,092), 2,527 returned the questionnaire (response rate 35.6%). This moderate response rate can be attributed to undeliverable postal addresses, unemployment, and the length of the questionnaire. From the initial group of respondents, we selected the graduates who were currently working as a teacher (N = 1,116). At time 2, 443 out of 1,116 graduates returned the questionnaire and were still working as a teacher. The final group consisted of 317 out of 443 graduates who were still working as a teacher when they filled out the active coping style scales at time 3.

The mean age in the study group was 26.4 years (SD = 5.6) and 78.5% were female. On average, participants had 4.1 years (SD = 1.8) of teaching experience, and 88.6% worked full-time [i.e., 20 to 30 teaching units (1 unit = 50 min of direct contact with pupils) per week]. Of all participants, 33.8% worked in primary education (28.4% regular education and 5.4% special education), 56.6% worked in secondary education (52.4% regular education and 4.2% special education), and 9.6% worked in other types of education.

A comparison of drop-outs [i.e., no participation at time 2 (‘group A’) or at time 3 (‘group B’)] with continuous participants [i.e., participation at time 1 and time 2 (‘group C’), or at time 1, time 2, and time 3 (‘group D’)] showed that the data did not appear to suffer from serious selection problems. Specifically, using multiple logistic regression in which a dichotomous variable distinguishing participants who remained in the study from those who dropped out was included as the dependent variable (cf. [30]), attrition effects were found for cognitive job demands at time 1 (‘group A’ vs. ‘group C’, and ‘group A’ vs. ‘group D’), and for physical job resources at time 1 and physical complaints at time 2 (‘group B’ vs. ‘group D’). However, inspection of the respective mean scores revealed no healthy worker effect [31]. That is, ‘group A’ (M = 3.88) reported lower cognitive demands at time 1 than ‘group C’ (M = 3.98) and ‘group D’ (M = 3.99). Further, ‘group B’ indicated that they had more physical job resources at time 1 (M = 3.44) and fewer physical complaints at time 2 (M = 1.66) than ‘group D’ (M = 3.24 and M = 1.73, respectively).
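The drop-out analysis described above can be illustrated with a short sketch. This is a hedged illustration only: the data frame, the column names (e.g., dropped_out, cog_demands_t1), and the model specification are hypothetical and are not taken from the original study materials; the sketch merely shows how a dichotomous drop-out indicator can be regressed on the time 1 study variables.

```python
# Minimal sketch (hypothetical column names) of a selective-attrition check:
# a dichotomous drop-out indicator is regressed on the time 1 study variables.
import pandas as pd
import statsmodels.formula.api as smf

def attrition_check(df: pd.DataFrame):
    """Logistic regression of drop-out status on time 1 demands and resources."""
    model = smf.logit(
        "dropped_out ~ cog_demands_t1 + emo_demands_t1 + phys_demands_t1 + "
        "cog_resources_t1 + emo_resources_t1 + phys_resources_t1",
        data=df,
    ).fit(disp=False)
    return model.summary()

# Follow-up as in the text: compare mean scores of stayers and drop-outs on any
# variable showing an attrition effect, e.g.
# df.groupby("dropped_out")["cog_demands_t1"].mean()
```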
Measures

Independent variables included in this study were cognitive, emotional, and physical job demands (time 1), the corresponding job resources (time 1), and the corresponding active coping styles (time 3). Dependent variables were chosen from the same domains as the demands, resources, and active coping styles, resulting in cognitive, emotional, and physical strain measured at time 1 and time 2. Table 1 shows the psychometric properties of the measures included as well as their zero-order correlations.

Table 1. Number of items, Cronbach’s alphas, test–retest reliabilities, and Pearson correlations (N = 317)

| Measure | Items | α1 | α2 | rt | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1. Cognitive demands | 4 | 0.75 | 0.73 | 0.52** | – |
| 2. Emotional demands | 6 | 0.80 | 0.84 | 0.44** | 0.26** | – |
| 3. Physical demands | 5 | 0.84 | 0.90 | 0.71** | 0.03 | 0.29** | – |
| 4. Cognitive resources | 5 | 0.57 | 0.60 | 0.40** | 0.03 | −0.16** | −0.07 | – |
| 5. Emotional resources | 5 | 0.78 | 0.78 | 0.47** | −0.06 | −0.25** | −0.05 | 0.51** | – |
| 6. Physical resources | 5 | 0.74 | 0.81 | 0.37** | −0.08 | −0.19** | −0.16** | 0.46** | 0.33** | – |
| 7. Cognitive active coping style | 11 | 0.83 | | | 0.21** | 0.11 | −0.06 | −0.04 | −0.07 | 0.01 | – |
| 8. Emotional active coping style | 5 | 0.84 | | | 0.00 | −0.12* | 0.07 | 0.18** | 0.30** | 0.11 | −0.06 | – |
| 9. Physical active coping style | 4 | 0.76 | | | −0.05 | −0.13* | −0.14* | 0.21** | 0.21** | 0.27** | 0.04 | 0.44** | – |
| 10. Cognitive strain T1 | 3 | 0.84 | | 0.56** | −0.02 | 0.13* | 0.04 | −0.28** | −0.35** | −0.19** | −0.01 | −0.14* | −0.09 | – |
| 11. Emotional exhaustion T1 | 8 | 0.84 | | 0.61** | 0.33** | 0.31** | 0.13* | −0.26** | −0.35** | −0.16** | 0.12* | −0.19** | −0.17** | 0.16** | – |
| 12. Physical complaints T1 | 4 | 0.61 | | 0.64** | 0.06 | 0.15** | 0.26** | −0.09 | −0.15** | −0.24** | −0.01 | −0.10 | −0.07 | 0.06 | 0.23** | – |
| 13. Cognitive strain T2 | 3 | | 0.80 | | −0.02 | 0.07 | −0.00 | −0.16** | −0.25** | −0.18** | −0.08 | −0.19** | −0.18** | 0.56** | 0.07 | 0.03 | – |
| 14. Emotional exhaustion T2 | 8 | | 0.87 | | 0.25** | 0.22** | 0.11 | −0.19** | −0.27** | −0.17** | 0.16** | −0.25** | −0.19** | 0.16** | 0.61** | 0.23** | 0.21** | – |
| 15. Physical complaints T2 | 4 | | 0.64 | | 0.06 | 0.10 | 0.17** | −0.10 | −0.16** | −0.16** | 0.05 | −0.13* | −0.06 | 0.06 | 0.26** | 0.64** | 0.02 | 0.34** | – |

α1, α2 = Cronbach’s alpha at time 1 and time 2 (the active coping styles were measured once, at time 3, so their single alpha is listed in the first alpha column); rt = test–retest reliability; T1 = time 1, T2 = time 2. *p < .05, **p < .01.

Job demands and job resources

Cognitive, emotional, and physical job demands and job resources were assessed with the DISC questionnaire (DISQ 1.1, [32]), which has been used in other studies as well (e.g., [33, 34]). Items were scored on a 5-point frequency scale, ranging from 1 (never or very rarely) to 5 (very often or always).

The cognitive, emotional, and physical demands scales were measured with four, six, and five items, respectively. Example items of the respective demands scales are, successively, ‘worker X will need to display high levels of concentration and precision at work’, ‘worker X will have to display emotions (e.g., towards clients, colleagues, or supervisors) that are inconsistent with his/her current feelings’, and ‘worker X will have to lift or move heavy persons or objects (more than 10 kg)’.

The cognitive, emotional, and physical resources scales were measured with five items each.
Example items of the respective resources scales are, successively, ‘worker X would have the opportunity to take a break when tasks require a lot of concentration’, ‘other people (e.g., clients, colleagues, or supervisors) would be a listening ear for worker X when he/she has faced a threatening situation’, and ‘worker X would receive help from others (e.g., clients, colleagues, or supervisors) in lifting or moving heavy persons or objects’.

Active coping styles

Items assessing the three specific types of active coping styles were scored on a four-point agreement scale, ranging from 1 (totally disagree) to 4 (totally agree). Cognitive active coping style was measured with 11 items derived from the Reflective Coping Scale [29]. An example item is ‘I tackle a problem by thinking about realistic alternatives’. Emotional active coping style was measured with five items derived from the Emotional Support Seeking Scale [29]. An example item is ‘If I am depressed at work, I make an appeal to others (e.g., colleagues, supervisors, or clients) to help me feel better’. Physical active coping style was measured with four items based on the Instrumental Support Seeking Scale [29]. An example item is ‘If my job requires many or sustained physical efforts, I ask help from others (e.g., colleagues or supervisor)’.

Strain

Cognitive strain was defined as a lack of active learning, that is, the degree to which workers are enabled and stimulated to acquire new knowledge and skills. This cognitive construct was measured with three items derived from a scale developed by Taris, Kompier, de Lange, Schaufeli, and Schreurs [35]. An example item is ‘In my job I can develop myself’. Items were scored on a four-point frequency scale, ranging from 1 [(almost) never] to 4 [(nearly) always]. To assist in the interpretation of the results, the signs of the respective parameter estimates were reversed, so that the estimates can be read as effects on cognitive strain (i.e., a lack of active learning). Emotional strain was assessed by an index of emotional exhaustion, which can be defined as a feeling of being emotionally worn out. This construct was measured with eight items derived from the Utrecht Burnout Scale, which was designed particularly for teachers [36]. An example item is ‘I feel emotionally drained from my work’. Items were scored on a seven-point frequency scale, ranging from 0 (never) to 6 (always). Physical strain was assessed by an index of physical complaints.
Physical complaints refer to neck, shoulder, back, and limb problems in the last 6 months and were measured with four items derived from a scale developed by Hildebrandt and Douwes [37]. An example item is ‘During the past 6 months, did you have trouble with your low back?’. The possible responses were 1 (no), 2 (sometimes), and 3 (yes).
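Table 1 reports Cronbach’s alphas for each of the scales above. Purely as an illustration (this is not code from the study, and the item column names are hypothetical), internal consistency for any of these scales could be computed from item-level data along the following lines.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a DataFrame whose columns are the items of one scale."""
    items = items.dropna()
    k = items.shape[1]                              # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)      # variance of each separate item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example with hypothetical column names for the five emotional resources items:
# alpha_emo_res = cronbach_alpha(df[["emo_res_1", "emo_res_2", "emo_res_3",
#                                    "emo_res_4", "emo_res_5"]])
```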
Data Analysis

We applied structural equation modeling (SEM) using LISREL 8.50 [38] to test for stress-buffering effects of job resources on the longitudinal relation between job demands and job strain. In addition, because active coping style was measured at time 3, multiple group analyses were used to test whether (a) any differences could be observed between workers with a low specific active coping style and workers with a high specific active coping style, and (b) the nature of coping must be specific to job resources to optimize the synergistic effect of high specific active coping styles. Specifically, three pairs of subgroups were created by dividing the scores on each type of active coping style using a median split. Workers were categorized on the basis of their score as either having a low specific active coping style (low score) or a high specific active coping style (high score) (cf. [39]). This resulted in two subgroups per coping style: workers with a low cognitive active coping style (N = 164) and workers with a high cognitive active coping style (N = 143), workers with a low emotional active coping style (N = 147) and workers with a high emotional active coping style (N = 169), and workers with a low physical active coping style (N = 172) and workers with a high physical active coping style (N = 140).

Stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were tested by examining multiplicative interaction terms between job demands and job resources (i.e., demands × resources) at time 1 in the prediction of job strain at time 2. Because of the large number of interaction terms (nine in total), stress-buffering effects of job resources were tested by means of two separate analyses including either interaction terms between matching demands and resources, or interaction terms between non-matching demands and resources (see [26]). These two separate analyses were conducted in each subgroup, resulting in 12 SEM analyses in which we also controlled for age and gender, as well as for cognitive strain, emotional exhaustion, and physical complaints at time 1.
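The two building blocks described above, median splits on the coping styles and demands × resources product terms in a lagged model, can be sketched as follows. This is a simplified, hypothetical illustration only: the original models were estimated as path models in LISREL, all column names below are invented, and the mean-centring of predictors before forming product terms is a common convention rather than a step reported by the authors.

```python
import pandas as pd
import statsmodels.formula.api as smf

def add_groups_and_interactions(df: pd.DataFrame) -> pd.DataFrame:
    """Median-split the coping styles and build matching demands x resources terms."""
    df = df.copy()
    # Median split on each active coping style -> low (0) vs. high (1) subgroup.
    for style in ["cog_coping", "emo_coping", "phys_coping"]:
        df[style + "_high"] = (df[style] > df[style].median()).astype(int)
    # Centre demands and resources, then form the three matching product terms.
    pairs = [("cog_demands_t1", "cog_resources_t1"),
             ("emo_demands_t1", "emo_resources_t1"),
             ("phys_demands_t1", "phys_resources_t1")]
    for demand, resource in pairs:
        for var in (demand, resource):
            df[var + "_c"] = df[var] - df[var].mean()
        df[demand + "_x_" + resource] = df[demand + "_c"] * df[resource + "_c"]
    return df

def lagged_model(df: pd.DataFrame):
    """OLS stand-in for one lagged model: emotional exhaustion at time 2."""
    formula = (
        "emo_exhaustion_t2 ~ gender + age"
        " + cog_strain_t1 + emo_exhaustion_t1 + phys_complaints_t1"
        " + cog_demands_t1_c + emo_demands_t1_c + phys_demands_t1_c"
        " + cog_resources_t1_c + emo_resources_t1_c + phys_resources_t1_c"
        " + cog_demands_t1_x_cog_resources_t1"
        " + emo_demands_t1_x_emo_resources_t1"
        " + phys_demands_t1_x_phys_resources_t1"
    )
    return smf.ols(formula, data=df).fit()

# Running lagged_model separately in df[df["emo_coping_high"] == 1] and
# df[df["emo_coping_high"] == 0] mirrors the subgroup logic of the multiple group analyses.
```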
According to Jaccard and Wan [40], multiple group analyses can be conducted for each pair of subgroups (i.e., low vs. high cognitive active coping style, low vs. high emotional active coping style, and low vs. high physical active coping style) by first estimating the parameters of the main terms and moderating terms in the different groups with no across-group constraints imposed (i.e., the main terms and interaction terms of both groups are assumed to be unequal). If the pooled chi-square of a particular pair of subgroups is non-significant, the parameters can be re-estimated with across-group constraints imposed on all main terms and moderating terms (i.e., the main terms and interaction terms of both groups are assumed to be equal). A moderating effect of a particular type of active coping style is present if the pooled chi-square of the constrained model is significantly higher than the pooled chi-square of the unconstrained model. Because the residuals among our outcome variables at time 2 were allowed to correlate, the unconstrained models were fully saturated, resulting in three pooled chi-squares of zero (which is non-significant). Next, we re-estimated the parameters with across-group constraints imposed on all main terms and moderating terms, and calculated whether the pooled chi-squares of the constrained models significantly differed from zero (i.e., the pooled chi-square of the unconstrained models).
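The decision rule in this multiple group step amounts to a chi-square difference test: the constrained model’s pooled chi-square (with its degrees of freedom) is compared with the fully saturated, unconstrained model (chi-square of zero). A minimal sketch of that comparison is given below; it only reproduces the significance test, not the LISREL estimation itself, and uses as an example the values reported later for the cognitive active coping subgroups.

```python
from scipy.stats import chi2

def chi_square_difference_p(chisq_constrained: float, df_constrained: int,
                            chisq_unconstrained: float = 0.0,
                            df_unconstrained: int = 0) -> float:
    """p-value of the chi-square difference test between two nested models."""
    delta_chisq = chisq_constrained - chisq_unconstrained
    delta_df = df_constrained - df_unconstrained
    return chi2.sf(delta_chisq, delta_df)

# Low vs. high cognitive active coping style, matching-interaction model
# (values as reported in the Results): delta chi-square = 15.77 with 27 df.
p_value = chi_square_difference_p(15.77, 27)  # approximately .96 -> no moderation
```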
Results

Testing hypothesis 1, the results in Table 2 showed two significant two-way interactions between matching job demands and job resources in the prediction of job strain 1 year later. One two-way interaction showed a stress-buffering effect: a combination of high physical demands and high physical resources resulted in less emotional exhaustion 1 year later than a combination of high physical demands and low physical resources (t = −3.15, p < .01). The other two-way interaction was not in the predicted direction. That is, a reversed interaction effect was found in which a combination of high cognitive demands and high cognitive resources led to more cognitive strain 1 year later than a combination of high cognitive demands and low cognitive resources (t = 2.25, p < .05).

Table 2. Lagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of matching job demands and job resources for the total sample (N = 317). For each predictor, B, SE, and t are given for cognitive strain T2, emotional exhaustion T2, and physical complaints T2, respectively.

| Predictor | B | SE | t | B | SE | t | B | SE | t |
|---|---|---|---|---|---|---|---|---|---|
| Control variables |
| Gender | −0.19 | 0.08 | −2.27* | 0.04 | 0.12 | 0.34 | 0.11 | 0.08 | 1.28 |
| Age | −0.00 | 0.01 | −0.18 | 0.00 | 0.01 | 0.35 | 0.00 | 0.01 | 0.33 |
| T1 outcome variables |
| Cognitive strain T1 | 0.50 | 0.05 | 10.07** | 0.04 | 0.07 | 0.63 | −0.03 | 0.05 | −0.54 |
| Emotional exhaustion T1 | −0.02 | 0.04 | −0.58 | 0.49 | 0.05 | 9.09** | 0.13 | 0.04 | 3.33** |
| Physical complaints T1 | −0.05 | 0.05 | −0.99 | 0.12 | 0.07 | 1.71 | 0.43 | 0.05 | 8.40** |
| Demands and resources |
| Cognitive demands | −0.00 | 0.04 | −0.11 | 0.12 | 0.05 | 2.37* | −0.01 | 0.04 | −0.24 |
| Emotional demands | 0.03 | 0.04 | 0.85 | 0.06 | 0.05 | 1.15 | 0.00 | 0.04 | 0.00 |
| Physical demands | −0.02 | 0.03 | −0.74 | −0.01 | 0.05 | −0.24 | 0.02 | 0.03 | 0.54 |
| Cognitive resources | 0.04 | 0.04 | 1.11 | −0.01 | 0.06 | −0.19 | 0.03 | 0.04 | 0.88 |
| Emotional resources | −0.05 | 0.04 | −1.21 | −0.02 | 0.06 | −0.38 | −0.04 | 0.04 | −1.14 |
| Physical resources | −0.09 | 0.04 | −2.37* | −0.09 | 0.05 | −1.67 | −0.04 | 0.04 | −1.07 |
| Moderating terms |
| Cognitive demands × cognitive resources | 0.08 | 0.03 | 2.25* | 0.03 | 0.05 | 0.71 | 0.06 | 0.03 | 1.89 |
| Emotional demands × emotional resources | −0.00 | 0.03 | −0.11 | 0.02 | 0.04 | 0.48 | 0.03 | 0.03 | 0.88 |
| Physical demands × physical resources | −0.02 | 0.03 | −0.76 | −0.13 | 0.04 | −3.15** | −0.01 | 0.03 | −0.34 |

B = unstandardized coefficient, SE = standard error, t = t-statistic, T1 = time 1, T2 = time 2. *p < .05, **p < .01.
interaction was found between non-matching demands and resources. More specifically, as shown in Table 3, a combination of high emotional demands and high physical resources resulted in less emotional exhaustion 1 year later than a combination of high emotional demands and low physical resources (t = −2.25, p < .05).\nTable 3Lagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of non-matching job demands and job resources for the total sample (N = 317)Cognitive strain T2Emotional exhaustion T2Physical complaints T2\nB\nSE\nt\n\nB\nSE\nt\n\nB\nSE\nt\nControl variables Gender−0.190.08−2.30*0.090.120.740.090.081.09 Age−0.000.01−0.680.000.010.120.000.010.23T1 outcome variables Cognitive strain T10.480.059.67**0.030.070.45−0.050.05−1.05 Emotional exhaustion T1−0.020.04−0.580.490.059.00**0.120.043.19** Physical complaints T1−0.040.05−0.820.110.071.490.440.058.61**Demands and resources Cognitive demands−0.020.04−0.650.110.052.07*−0.010.04−0.28 Emotional demands0.040.041.120.060.051.220.000.04−0.12 Physical demands−0.030.03−0.890.000.05−0.060.010.030.45 Cognitive resources0.040.041.10−0.010.06−0.250.030.040.71 Emotional resources−0.050.04−1.36−0.040.06−0.69−0.050.04−1.32 Physical resources−0.110.04−2.88**−0.080.05−1.41−0.050.04−1.39Moderating terms Cognitive demands × emotional resources0.010.030.420.040.050.85−0.020.03−0.48 Cognitive demands × physical resources0.050.041.230.020.060.430.070.041.90 Emotional demands × cognitive resources0.020.030.540.020.050.450.060.031.74 Emotional demands × physical resources−0.050.03−1.32−0.110.05−2.25*−0.020.03−0.49 Physical demands × cognitive resources0.040.041.11−0.040.05−0.83−0.030.04−0.75 Physical demands × emotional resources0.060.031.660.040.050.830.030.030.97\nB unstandardized coefficient, SE standard error, t t-statistic, T1 time 1, T2 time 2*p < .05 **p < .01\n\nLagged structural equation models of cognitive strain, emotional exhaustion, and physical complaints with moderating terms of non-matching job demands and job resources for the total sample (N = 317)\n\nB unstandardized coefficient, SE standard error, t t-statistic, T1 time 1, T2 time 2\n*p < .05 **p < .01\nTo summarize, one out of nine (11.1%) tested two-way interactions between matching demands and resources, and one out of 18 (5.6%) tested two-way interactions between non-matching demands and resources showed a lagged stress-buffering effect of job resources. To determine whether the percentages were significantly different from each other, a z test was conducted [41]. Contrary to hypothesis 1, results of the z test revealed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of job demands and job resources as in case of a non-match between specific types of job demands and job resources (z = 0.26; p = .80).\nTesting hypothesis 2, results of the multiple group analyses showed that for each type of active coping style (i.e., cognitive, emotional, and physical active coping style), lagged moderating effects of job resources were equally likely to be found for teachers with a low active coping style as for teachers with a high active coping style. 
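As a hedged illustration of the z test for comparing two proportions reported above (1 of 9 matching versus 1 of 18 non-matching significant interactions), the sketch below shows one common pooled-proportion formulation; depending on the exact counts and correction used, the result need not reproduce the reported z = 0.26.

```python
# Rough sketch of a two-proportion z test; this is one standard formula,
# not necessarily the exact procedure used in the cited reference [41].
from math import sqrt
from scipy.stats import norm

def two_proportion_z(successes1, n1, successes2, n2):
    p1, p2 = successes1 / n1, successes2 / n2
    pooled = (successes1 + successes2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_two_sided = 2 * norm.sf(abs(z))
    return z, p_two_sided

# 1 of 9 matching vs. 1 of 18 non-matching significant interaction terms
z, p = two_proportion_z(1, 9, 1, 18)
print(f"z = {z:.2f}, p = {p:.2f}")
```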
Specifically, testing moderating terms of matching demands and resources, no differences were found between workers with a low or high cognitive active coping style (∆pooled χ\n2 = 15.77, df = 27, p = .96), workers with a low or high emotional active coping style (∆pooled χ\n2 = 0.95, df = 27, p = 1.00), and workers with a low or high physical active coping style (∆pooled χ\n2 = 4.10, df = 27, p = 1.00). Similarly, testing moderating terms of non-matching demands and resources, no differences were found between workers with a low or high cognitive active coping style (∆pooled χ\n2 = 23.49, df = 36, p = .95), workers with a low or high emotional active coping style (∆pooled χ\n2 = 1.81, df = 36, p = 1.00), and workers with a low or high physical active coping style (∆pooled χ\n2 = 4.79, df = 36, p = 1.00). As we did not find any evidence for hypothesis 2, there was no statistical rationale for testing hypothesis 3.", "The current study aimed to expand earlier research on job stress by examining whether stress-buffering effects of job resources on the longitudinal relation between job demands and job strain (i.e., stressor–strain relations that developed within 1 year) are more likely to occur if (a) there is a match (rather than a non-match) between specific types of job demands and job resources (hypothesis 1), and (b) workers have a high specific active coping style rather than a low specific active coping style (hypothesis 2). In addition, it was hypothesized that the synergistic effect of high specific active coping styles occurs more often if there is a match (rather than a non-match) between specific types of job resources and specific types of active coping styles (hypothesis 3).\n[SUBTITLE] Matching Hypothesis [SUBSECTION] Contrary to the matching hypothesis (hypothesis 1), results showed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of demands and resources as in case of a non-match between specific types of demands and resources. In addition, lagged stress-buffering effects of job resources were only found if physical resources were involved, whereas no effects were found containing a cognitive component (i.e., cognitive demands, cognitive resources, or cognitive strain). This study is therefore somewhat inconsistent with other longitudinal studies on the relation between demands, resources, and strain, which showed much stronger evidence for the matching hypothesis [26, 42]. One explanation for the current findings may be that our study group consisted of beginning teachers who still needed to develop adequate coping strategies to deal with high job demands (cf. [43]). That is, the beginning teachers in our study group may still have needed to learn what kind of job resources they had to employ to realize an optimal match between job demands and job resources. In any case, to put the current findings in the right perspective, more longitudinal studies on the matching hypothesis are badly needed.\nContrary to the matching hypothesis (hypothesis 1), results showed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of demands and resources as in case of a non-match between specific types of demands and resources. 
In addition, lagged stress-buffering effects of job resources were only found if physical resources were involved, whereas no effects were found containing a cognitive component (i.e., cognitive demands, cognitive resources, or cognitive strain). This study is therefore somewhat inconsistent with other longitudinal studies on the relation between demands, resources, and strain, which showed much stronger evidence for the matching hypothesis [26, 42]. One explanation for the current findings may be that our study group consisted of beginning teachers who still needed to develop adequate coping strategies to deal with high job demands (cf. [43]). That is, the beginning teachers in our study group may still have needed to learn what kind of job resources they had to employ to realize an optimal match between job demands and job resources. In any case, to put the current findings in the right perspective, more longitudinal studies on the matching hypothesis are badly needed.\n[SUBTITLE] Matching Active Coping Styles [SUBSECTION] Contrary to hypothesis 2, results revealed that neither type of active coping style interacted with job resources to moderate the longitudinal relation between job demands and job strain. Because lagged moderating effects of job resources were equally likely to be found for teachers with a low specific active coping style as for teachers with a high specific active coping style, there was no statistical rationale for testing hypothesis 3.\nOne explanation why lagged moderating effects of job resources were equally likely to be found for workers with a low specific active coping style as for workers with a high specific active coping style, could be that job characteristics (i.e., demands and resources) are of more importance to the job stress process than personal characteristics (i.e., specific active coping styles). Though it has been argued that personal characteristics are particularly likely to moderate the linkages between job conditions and strain [13], moderating effects of coping style have not always been demonstrated (e.g., [44]). An alternative explanation may be that the mere perception that one has sufficient job resources to cope with job stressors (e.g., colleagues who can provide support) may already offset the impact of job demands (cf. [5]). Perhaps workers with a low specific active coping style did not necessarily need to activate available job resources in order to mitigate (or prevent) the adverse impact of high job demands on their health and well-being 1 year later. In addition, because active coping style was examined as a dichotomous moderator, power problems might explain why specific active coping styles did not make a significant contribution to the prediction of job strain. Finally, a better focus for research might be specific active coping behaviors as they occur in specific demanding episodes at work. Because measurements of coping close to when coping happens provide some of the most accurate assessments of coping [45], future research could use hourly and daily reports of coping to examine whether specific active coping behaviors interact with specific job resources to buffer the relation between specific demanding episodes at work and job strain. 
If such effects are found, the current lack of support for interactions between specific demands, resources, and active coping could be explained by the measures used in this study to assess coping.\nContrary to hypothesis 2, results revealed that neither type of active coping style interacted with job resources to moderate the longitudinal relation between job demands and job strain. Because lagged moderating effects of job resources were equally likely to be found for teachers with a low specific active coping style as for teachers with a high specific active coping style, there was no statistical rationale for testing hypothesis 3.\nOne explanation why lagged moderating effects of job resources were equally likely to be found for workers with a low specific active coping style as for workers with a high specific active coping style, could be that job characteristics (i.e., demands and resources) are of more importance to the job stress process than personal characteristics (i.e., specific active coping styles). Though it has been argued that personal characteristics are particularly likely to moderate the linkages between job conditions and strain [13], moderating effects of coping style have not always been demonstrated (e.g., [44]). An alternative explanation may be that the mere perception that one has sufficient job resources to cope with job stressors (e.g., colleagues who can provide support) may already offset the impact of job demands (cf. [5]). Perhaps workers with a low specific active coping style did not necessarily need to activate available job resources in order to mitigate (or prevent) the adverse impact of high job demands on their health and well-being 1 year later. In addition, because active coping style was examined as a dichotomous moderator, power problems might explain why specific active coping styles did not make a significant contribution to the prediction of job strain. Finally, a better focus for research might be specific active coping behaviors as they occur in specific demanding episodes at work. Because measurements of coping close to when coping happens provide some of the most accurate assessments of coping [45], future research could use hourly and daily reports of coping to examine whether specific active coping behaviors interact with specific job resources to buffer the relation between specific demanding episodes at work and job strain. If such effects are found, the current lack of support for interactions between specific demands, resources, and active coping could be explained by the measures used in this study to assess coping.\n[SUBTITLE] Study Limitations and Implications [SUBSECTION] A key limitation of the current study is that it included a homogeneous group of beginning teachers, which—given the moderate response rate—might not be representative for the population of teacher training graduates who were invited to participate in the study. Because this group poses questions about the study’s generalizability to the teaching profession in general as well as other service jobs, future research could focus on more heterogeneous groups. 
A second limitation of this study is that some findings may not be fully reliable due to the somewhat lower alpha (57) of the cognitive resource scale.\nFrom a theoretical point of view, the current findings suggest that, in order to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it makes no difference whether or not specific types of job resources are matched to specific types of job demands. In addition, the findings emphasize the importance of job rather than personal characteristics [46]. Specifically, results showed that for each type of active coping style, two-way interactions between specific types of job demands and job resources had similar lagged effects on job strain for workers with a low specific active coping style as for workers with a high specific active coping style. Hence, to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it seems to make no difference whether or not individual differences in specific active coping styles are taken into account.\nThe current findings could be of importance to educational practice as well, as there is a high attrition rate, especially among beginning teachers [47]. Those who leave the teaching profession usually do so within the first 5 years [48]. Further, school teaching is generally regarded as a highly stressful profession [49]. Burnout, for instance, is a major problem among teachers, and may seriously affect the achievement of educational goals even before attrition is at stake [50, 51]. In this study, it was shown that the adverse lagged effects of physical and emotional job demands on emotional exhaustion can both be diminished by physical job resources. However, given the existing body of empirical evidence for the matching hypothesis [8] and the fact that the current findings did not suggest that non-matching job resources are more functional stress buffers than matching job resources, it is recommended that employers do not only give priority to physical job resources to arm teachers against these job demands, but also try to make matching emotional job resources easily accessible to all workers. For instance, when teachers need to deal with job-inherent emotions (e.g., being angry with rude pupils) and/or organizationally desired emotions (e.g., staying calm in front of a class), employers could provide emotional support, or stimulate emotional support among colleagues (e.g., a listening ear during breaks or work meetings). In addition to job redesign interventions in personnel selection, teachers could be selected based on personal characteristics that strengthen their immunity to job strain. The current findings suggest, however, that there is no need to address teachers’ specific active coping styles, as these personal characteristics do not seem to affect the investment of available job resources during stressful situations at work.\nTo conclude, results in this longitudinal survey study did support neither the matching hypothesis, nor the moderating effect of specific (matching) active coping styles on the stress-buffering effect of job resources. 
However, since the results were somewhat inconsistent with previous findings on the matching hypothesis [26, 42], and previous research has shown mixed results with respect to the moderating effect of coping (see e.g., [18, 44]), one should be cautious drawing any firm, generalizable conclusions with respect to the matching hypothesis and the moderating effect of specific (matching) active coping styles. Therefore, further longitudinal research among both beginning and experienced teachers as well as in multi-occupation groups is highly recommended.\nA key limitation of the current study is that it included a homogeneous group of beginning teachers, which—given the moderate response rate—might not be representative for the population of teacher training graduates who were invited to participate in the study. Because this group poses questions about the study’s generalizability to the teaching profession in general as well as other service jobs, future research could focus on more heterogeneous groups. A second limitation of this study is that some findings may not be fully reliable due to the somewhat lower alpha (57) of the cognitive resource scale.\nFrom a theoretical point of view, the current findings suggest that, in order to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it makes no difference whether or not specific types of job resources are matched to specific types of job demands. In addition, the findings emphasize the importance of job rather than personal characteristics [46]. Specifically, results showed that for each type of active coping style, two-way interactions between specific types of job demands and job resources had similar lagged effects on job strain for workers with a low specific active coping style as for workers with a high specific active coping style. Hence, to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it seems to make no difference whether or not individual differences in specific active coping styles are taken into account.\nThe current findings could be of importance to educational practice as well, as there is a high attrition rate, especially among beginning teachers [47]. Those who leave the teaching profession usually do so within the first 5 years [48]. Further, school teaching is generally regarded as a highly stressful profession [49]. Burnout, for instance, is a major problem among teachers, and may seriously affect the achievement of educational goals even before attrition is at stake [50, 51]. In this study, it was shown that the adverse lagged effects of physical and emotional job demands on emotional exhaustion can both be diminished by physical job resources. However, given the existing body of empirical evidence for the matching hypothesis [8] and the fact that the current findings did not suggest that non-matching job resources are more functional stress buffers than matching job resources, it is recommended that employers do not only give priority to physical job resources to arm teachers against these job demands, but also try to make matching emotional job resources easily accessible to all workers. 
For instance, when teachers need to deal with job-inherent emotions (e.g., being angry with rude pupils) and/or organizationally desired emotions (e.g., staying calm in front of a class), employers could provide emotional support, or stimulate emotional support among colleagues (e.g., a listening ear during breaks or work meetings). In addition to job redesign interventions in personnel selection, teachers could be selected based on personal characteristics that strengthen their immunity to job strain. The current findings suggest, however, that there is no need to address teachers’ specific active coping styles, as these personal characteristics do not seem to affect the investment of available job resources during stressful situations at work.\nTo conclude, results in this longitudinal survey study did support neither the matching hypothesis, nor the moderating effect of specific (matching) active coping styles on the stress-buffering effect of job resources. However, since the results were somewhat inconsistent with previous findings on the matching hypothesis [26, 42], and previous research has shown mixed results with respect to the moderating effect of coping (see e.g., [18, 44]), one should be cautious drawing any firm, generalizable conclusions with respect to the matching hypothesis and the moderating effect of specific (matching) active coping styles. Therefore, further longitudinal research among both beginning and experienced teachers as well as in multi-occupation groups is highly recommended.", "Contrary to the matching hypothesis (hypothesis 1), results showed that stress-buffering effects of job resources on the longitudinal relation between job demands and job strain were equally likely to occur in case of a match between specific types of demands and resources as in case of a non-match between specific types of demands and resources. In addition, lagged stress-buffering effects of job resources were only found if physical resources were involved, whereas no effects were found containing a cognitive component (i.e., cognitive demands, cognitive resources, or cognitive strain). This study is therefore somewhat inconsistent with other longitudinal studies on the relation between demands, resources, and strain, which showed much stronger evidence for the matching hypothesis [26, 42]. One explanation for the current findings may be that our study group consisted of beginning teachers who still needed to develop adequate coping strategies to deal with high job demands (cf. [43]). That is, the beginning teachers in our study group may still have needed to learn what kind of job resources they had to employ to realize an optimal match between job demands and job resources. In any case, to put the current findings in the right perspective, more longitudinal studies on the matching hypothesis are badly needed.", "Contrary to hypothesis 2, results revealed that neither type of active coping style interacted with job resources to moderate the longitudinal relation between job demands and job strain. 
Because lagged moderating effects of job resources were equally likely to be found for teachers with a low specific active coping style as for teachers with a high specific active coping style, there was no statistical rationale for testing hypothesis 3.\nOne explanation why lagged moderating effects of job resources were equally likely to be found for workers with a low specific active coping style as for workers with a high specific active coping style, could be that job characteristics (i.e., demands and resources) are of more importance to the job stress process than personal characteristics (i.e., specific active coping styles). Though it has been argued that personal characteristics are particularly likely to moderate the linkages between job conditions and strain [13], moderating effects of coping style have not always been demonstrated (e.g., [44]). An alternative explanation may be that the mere perception that one has sufficient job resources to cope with job stressors (e.g., colleagues who can provide support) may already offset the impact of job demands (cf. [5]). Perhaps workers with a low specific active coping style did not necessarily need to activate available job resources in order to mitigate (or prevent) the adverse impact of high job demands on their health and well-being 1 year later. In addition, because active coping style was examined as a dichotomous moderator, power problems might explain why specific active coping styles did not make a significant contribution to the prediction of job strain. Finally, a better focus for research might be specific active coping behaviors as they occur in specific demanding episodes at work. Because measurements of coping close to when coping happens provide some of the most accurate assessments of coping [45], future research could use hourly and daily reports of coping to examine whether specific active coping behaviors interact with specific job resources to buffer the relation between specific demanding episodes at work and job strain. If such effects are found, the current lack of support for interactions between specific demands, resources, and active coping could be explained by the measures used in this study to assess coping.", "A key limitation of the current study is that it included a homogeneous group of beginning teachers, which—given the moderate response rate—might not be representative for the population of teacher training graduates who were invited to participate in the study. Because this group poses questions about the study’s generalizability to the teaching profession in general as well as other service jobs, future research could focus on more heterogeneous groups. A second limitation of this study is that some findings may not be fully reliable due to the somewhat lower alpha (57) of the cognitive resource scale.\nFrom a theoretical point of view, the current findings suggest that, in order to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it makes no difference whether or not specific types of job resources are matched to specific types of job demands. In addition, the findings emphasize the importance of job rather than personal characteristics [46]. 
Specifically, results showed that for each type of active coping style, two-way interactions between specific types of job demands and job resources had similar lagged effects on job strain for workers with a low specific active coping style as for workers with a high specific active coping style. Hence, to show stress-buffering effects of job resources on the longitudinal relation between job demands and job strain, it seems to make no difference whether or not individual differences in specific active coping styles are taken into account.\nThe current findings could be of importance to educational practice as well, as there is a high attrition rate, especially among beginning teachers [47]. Those who leave the teaching profession usually do so within the first 5 years [48]. Further, school teaching is generally regarded as a highly stressful profession [49]. Burnout, for instance, is a major problem among teachers, and may seriously affect the achievement of educational goals even before attrition is at stake [50, 51]. In this study, it was shown that the adverse lagged effects of physical and emotional job demands on emotional exhaustion can both be diminished by physical job resources. However, given the existing body of empirical evidence for the matching hypothesis [8] and the fact that the current findings did not suggest that non-matching job resources are more functional stress buffers than matching job resources, it is recommended that employers do not only give priority to physical job resources to arm teachers against these job demands, but also try to make matching emotional job resources easily accessible to all workers. For instance, when teachers need to deal with job-inherent emotions (e.g., being angry with rude pupils) and/or organizationally desired emotions (e.g., staying calm in front of a class), employers could provide emotional support, or stimulate emotional support among colleagues (e.g., a listening ear during breaks or work meetings). In addition to job redesign interventions in personnel selection, teachers could be selected based on personal characteristics that strengthen their immunity to job strain. The current findings suggest, however, that there is no need to address teachers’ specific active coping styles, as these personal characteristics do not seem to affect the investment of available job resources during stressful situations at work.\nTo conclude, results in this longitudinal survey study did support neither the matching hypothesis, nor the moderating effect of specific (matching) active coping styles on the stress-buffering effect of job resources. However, since the results were somewhat inconsistent with previous findings on the matching hypothesis [26, 42], and previous research has shown mixed results with respect to the moderating effect of coping (see e.g., [18, 44]), one should be cautious drawing any firm, generalizable conclusions with respect to the matching hypothesis and the moderating effect of specific (matching) active coping styles. Therefore, further longitudinal research among both beginning and experienced teachers as well as in multi-occupation groups is highly recommended." ]
[ "introduction", null, null, "materials|methods", null, null, null, null, null, null, null, "results", "discussion", null, null, null ]
[ "Job demands and job resources", "Active coping styles", "Match", "Job stress", "Burnout", "Teachers" ]
Analysis of outpatient trauma referrals in a sub-Saharan African orthopedic center.
21360308
The purpose of this study was to characterize the orthopedic trauma workload in the Bedford Orthopaedic Centre (BOC), an orthopedic referral hospital in rural South Africa.
BACKGROUND
Demographic data, injury data, and information about initial management were collected for two 6-week periods during both 2008 and 2009 from patients seen in the BOC outpatient department. Two primary outcomes were evaluated: (1) the interval between the initial outside evaluation and the BOC consultation and (2) the presence of established infection at the time of consultation. Secondary outcomes included assessments of the initial management at the referring facility.
METHODS
Most patients were adult men. Almost half were referred from within a radius of 10 km, but more than one-third came from facilities in excess of 50 km away. The most frequent mode of transport was ambulance followed by taxi-van. Fractures accounted for most of the injuries. Motor vehicle accidents and assaults were more prevalent among adults than among children, for whom falls accounted for a large proportion of injuries. Referral was delayed more than 72 h in 41.4% of patients. Established infections were identified in 12.2%. Deficiencies detected during prehospital care were common.
RESULTS
The burden of orthopedic trauma in this rural referral center is sufficient to justify the manpower and resources needed for a major orthopedic trauma center. Because most of the injuries were fractures, efforts should be aimed at improving fracture care. Differences in the mode of injury and in the anatomical sites involved between adults and children highlight the need for focused preventive measures. Reducing both delays in referral and deficiencies in initial management might well reduce the cost and complexity of the definitive treatment required.
CONCLUSION
[ "Accidents, Traffic", "Adolescent", "Adult", "Child", "Emergency Medical Services", "Female", "Fractures, Bone", "Hospitals, Special", "Humans", "Joint Dislocations", "Male", "Referral and Consultation", "Soft Tissue Injuries", "South Africa", "Violence", "Workload", "Young Adult" ]
3071472
Introduction
Increasing attention is being paid worldwide to the public health implications of musculoskeletal injuries and trauma, especially in the developing world. The impact of these injuries is especially severe in those countries where human and material resources are most limited. Data concerning the disease burden is such settings are lacking [1]. The eastern portion of the Eastern Cape Province in South Africa (formerly the Transkei homeland) contains a largely indigent, rural population of three to four million people. The role of injury and violence as a leading cause of death in this region has been well documented by Meel [2–6]. The Eastern Cape Province Department of Health administers a network of clinics and district hospitals that provide care to this population. The Mthatha Hospital Complex in Mthatha provides referral care for this portion of the province as well as primary care for the population in the greater Mthatha area. In recent years, the area around the hospital has experienced an increase in population and a corresponding increase in trauma due to falls, violence, and road traffic accidents. The 180-bed Bedford Orthopaedic Centre (BOC), located 8 km outside of the city, provides the orthopedic care for the Complex. Thus, referrals to the BOC come both from within the Mthatha Hospital Complex and from district hospitals staffed by general medical officers with little or no orthopedic training. The BOC is the primary location where care is given to patients with both orthopedic injuries and musculoskeletal conditions such as tuberculosis, tumors, and arthritis. [SUBTITLE] Previous studies [SUBSECTION] Past efforts to assess the workload at the BOC focused on assessing the number of admissions and outpatient visits. Unpublished data from 1997 through April 20071 indicate that the number of yearly admissions to BOC nearly doubled during this period. Nonetheless, the bed capacity remained stable at 180 beds and the staffing unchanged, resulting in an increased workload and an increased demand on already limited resources and facilities. Paralleling the increase in admissions, there was an approximately 28% increase in the number of patients seen in the outpatient department (OPD)—to more than 18,000 visits per year estimated for 2007. In a report on admissions to the male ward at the BOC during a 4-month period in 2005, Millar and MacConnachie documented a large trauma workload, the prominent role of motor vehicle accidents (MVAs) as a cause of injury, and the frequency of delays in treatment [7]. At present, hospital admissions for patients in the OPD often are delayed owing to the limited number of open beds, with priority given to patients with the most urgent needs. Beds frequently are in short supply because hospitalized patients must wait days or weeks for surgery due to the lack of surgical theater time. It is not uncommon for admissions from the OPD to be deferred more than once for a given patient. Past efforts to assess the workload at the BOC focused on assessing the number of admissions and outpatient visits. Unpublished data from 1997 through April 20071 indicate that the number of yearly admissions to BOC nearly doubled during this period. Nonetheless, the bed capacity remained stable at 180 beds and the staffing unchanged, resulting in an increased workload and an increased demand on already limited resources and facilities. 
Paralleling the increase in admissions, there was an approximately 28% increase in the number of patients seen in the outpatient department (OPD)—to more than 18,000 visits per year estimated for 2007. In a report on admissions to the male ward at the BOC during a 4-month period in 2005, Millar and MacConnachie documented a large trauma workload, the prominent role of motor vehicle accidents (MVAs) as a cause of injury, and the frequency of delays in treatment [7]. At present, hospital admissions for patients in the OPD often are delayed owing to the limited number of open beds, with priority given to patients with the most urgent needs. Beds frequently are in short supply because hospitalized patients must wait days or weeks for surgery due to the lack of surgical theater time. It is not uncommon for admissions from the OPD to be deferred more than once for a given patient. [SUBTITLE] Purpose of the current study [SUBSECTION] Increases in the number of patients seen at this facility have been well documented. However, inadequate prehospital care, delays in referral, and delays in admission may increase the complexity of treatment ultimately undertaken, in turn resulting in increased utilization of personnel and resources. This study is designed to document the demographic characteristics of all trauma patients being treated, the nature of their injuries, and subjective assessment of initial prehospital care given at the referring site. The primary outcomes assessed were (1) a delay of more than 72 h between the initial evaluation and the BOC consultation and (2) the presence of infection at the time of the BOC consultation. Both of these outcomes were thought to reflect potentially modifiable factors that add to the complexity of care and thus to the burden of disease encountered. Increases in the number of patients seen at this facility have been well documented. However, inadequate prehospital care, delays in referral, and delays in admission may increase the complexity of treatment ultimately undertaken, in turn resulting in increased utilization of personnel and resources. This study is designed to document the demographic characteristics of all trauma patients being treated, the nature of their injuries, and subjective assessment of initial prehospital care given at the referring site. The primary outcomes assessed were (1) a delay of more than 72 h between the initial evaluation and the BOC consultation and (2) the presence of infection at the time of the BOC consultation. Both of these outcomes were thought to reflect potentially modifiable factors that add to the complexity of care and thus to the burden of disease encountered.
null
null
null
null
null
null
[ "Previous studies", "Purpose of the current study", "Methods", "Results", "General demographics", "Outcomes assessment", "Primary outcomes", "Secondary outcomes", "Discussion" ]
[ "Past efforts to assess the workload at the BOC focused on assessing the number of admissions and outpatient visits. Unpublished data from 1997 through April 20071 indicate that the number of yearly admissions to BOC nearly doubled during this period. Nonetheless, the bed capacity remained stable at 180 beds and the staffing unchanged, resulting in an increased workload and an increased demand on already limited resources and facilities. Paralleling the increase in admissions, there was an approximately 28% increase in the number of patients seen in the outpatient department (OPD)—to more than 18,000 visits per year estimated for 2007. In a report on admissions to the male ward at the BOC during a 4-month period in 2005, Millar and MacConnachie documented a large trauma workload, the prominent role of motor vehicle accidents (MVAs) as a cause of injury, and the frequency of delays in treatment [7]. At present, hospital admissions for patients in the OPD often are delayed owing to the limited number of open beds, with priority given to patients with the most urgent needs. Beds frequently are in short supply because hospitalized patients must wait days or weeks for surgery due to the lack of surgical theater time. It is not uncommon for admissions from the OPD to be deferred more than once for a given patient.", "Increases in the number of patients seen at this facility have been well documented. However, inadequate prehospital care, delays in referral, and delays in admission may increase the complexity of treatment ultimately undertaken, in turn resulting in increased utilization of personnel and resources. This study is designed to document the demographic characteristics of all trauma patients being treated, the nature of their injuries, and subjective assessment of initial prehospital care given at the referring site. The primary outcomes assessed were (1) a delay of more than 72 h between the initial evaluation and the BOC consultation and (2) the presence of infection at the time of the BOC consultation. Both of these outcomes were thought to reflect potentially modifiable factors that add to the complexity of care and thus to the burden of disease encountered.", "Institutional review board (IRB) approval for the study was obtained from the University of California, San Francisco Committee on Human Research and from the Medical Ethics Committee of the Walter Sisulu University Faculty of Medical Sciences. Data were collected at the BOC during two 6-week periods during July and August of 2008 and 2009. These periods represented average periods of trauma volume at the BOC.\nMedical student researchers collected the data from patients in the BOC outpatient department and from the orthopedic medical officers treating them. When necessary, the patient interviews were facilitated by Xhosa-speaking interpreters. Information from data collection sheets was transferred to spreadsheets, and descriptive statistics were derived from these data.\nThe two primary outcomes—delay in referral of more than 72 h following evaluation at the initial facility and the presence of infection at the time of consultation—were evaluated with respect to demographic and injury data. 
The statistical significance of the association between each outcome and each variable was evaluated with chi-square tests, using, where appropriate, Fisher's exact test and Pearson's test.\nSecondary outcomes that were evaluated were derived from the subjective assessment by the BOC medical staff of specific aspects of the initial care provided prior to BOC consultation. BOC physicians were queried about correctness of the initial diagnosis, the presence of radiographs adequate to determine diagnosis and treatment, adequacy of initial wound management, proper splint immobilization, and a general assessment of the initial management as adequate or inadequate. Finally, treating physicians at BOC were queried as to whether they thought the orthopedic injuries that were being assessed might have been adequately managed at the initial hospital without referral if instruction in the management of simple orthopedic problems had been available. The same demographic and injury variables described previously were statistically evaluated to assess possible relations to these secondary outcomes.", "[SUBTITLE] General demographics [SUBSECTION] Demographic data from the 1055 patients seen in the BOC outpatient department during the 6-week periods in 2008 and in 2009 are presented in Tables 1, 2, 3, 4, and 5. Altogether, 80% of the patients treated were adults (defined in this setting as individuals >14 years of age). Most of the patients seen in both the adult and pediatric groups were male (Table 1). In addition, 56% of the adult patients reported that they were responsible for caring for dependents, either children or elderly relatives.

Table 1 Age and sex data of trauma patients (n = 1040)
Parameter | Adults (age > 14 years) | Pediatric group (age ≤ 14 years) | All patients
Age (years), mean | 40.0 (n = 830) | 8.5 (n = 210) | 33.6
Sex: Male | 500 (60.2%) | 149 (71.0%) | 649 (62.6%)
Sex: Female | 330 (39.8%) | 61 (29.0%) | 391 (37.4%)

Table 2 Mode of transportation to Bedford Orthopaedic Centre from referring facility
Mode | Adults (%) | Pediatric group (%) | All patients (%)
Taxi | 24.7 | 21.0 | 24.3
Ambulance | 66.0 | 71.0 | 66.7
Walk | 0.2 | 0.5 | 0.3
Private car | 8.2 | 2.4 | 6.9
Unknown/other | 1.0 | 5.2 | 1.8

Table 3 Type of presenting orthopedic injury
Injury | Adults (%) | Pediatric group (%) | All patients (%)
Fracture | 79.6 | 87.1 | 80.9
Dislocation | 3.4 | 1.9 | 3.0
Soft tissue | 13.9 | 7.1 | 12.8
Other | 3.1 | 3.8 | 3.2
Open injury (2009 only) | 19.2 | 8.0 | 17.1

Table 4 Site of injury
Site | Adults (%) | Pediatric group (%) | All patients (%)
Upper extremity | 38.4 | 54.8 | 41.7
Lower extremity | 55.8 | 40.5 | 52.7
Multiple sites | 1.9 | 4.3 | 2.5
Spine/head | 3.7 | 0.5 | 3.0
Unknown | 0.1 | 0 | 0.1

Table 5 Mechanism of injury
Mechanism | Adults (%) | Pediatric group (%) | All patients (%)
MVA-related | 24.7 | 14.8 | 22.9
Assault | 18.2 | 0.5 | 14.4
Other (e.g., falls) | 56.4 | 83.3 | 61.8
Unknown | 0.7 | 1.4 | 0.9
MVA, motor vehicle accident

In all, 48.6% of patients were referred from facilities in Mthatha, <10 km away, with most from the academic hospital and from the city's general hospital; 38.6% of patients came from district hospitals >50 km from the BOC; and 12.7% were from hospitals 11–50 km away. The primary mode of transportation was ambulance for two-thirds of patients, with about one-fourth arriving by taxi-van, the primary mode of public transportation in the Eastern Cape (Table 2).
There was no significant difference in the type of transportation used by the adult and pediatric populations.\nCharacteristics of the injuries evaluated are listed in Table 3. Most were fractures, with a somewhat higher prevalence in the pediatric group. Soft tissue injuries occurred with greater frequency in adults (13.9%) than in children (7.1%). Information about whether the presenting injuries were open or closed was not collected during the 2008 session but was collected in 2009. Among the 521 patients evaluated in 2009, injuries were open in 89 (17.1%), with a more than twofold greater prevalence in adults than in children and almost a twofold greater prevalence in males than in females.\nMost of the injuries seen (51.5%) involved the lower extremity, with a greater prevalence of these injuries in adults and females (Table 4). The greater prevalence of upper extremity injuries in the pediatric group likely reflects the frequency of supracondylar humerus fractures due to falls. Accidents involving motor vehicles accounted for 22.9% of the injuries seen in all patients and were more prevalent in adults (Table 5). Injuries resulting from assaults were far more common in adults (18.2%) than in children (0.5%). Other injuries, usually falls, accounted for a larger proportion of injuries encountered in children (83.3%) than in adults (56.4%).\nOnly a small portion of the patients seen at BOC for orthopedic trauma were aware of their human immunodeficiency virus (HIV) status and if they were actively infected with tuberculosis (TB). Patients were not routinely screened for HIV. A total of 11.8% of all patients reported having HIV/acquired immunodeficiency syndrome (AIDS) at the time they were evaluated: 13.7% of the adults and 4.3% of the children. In all, 7.5% of patients had known TB: 9.2% of adults and 1.0% of children.\nAltogether, 19.5% of the patients were referred from five district hospitals, the staffs of which attended a course on the care of orthopedic trauma during the months that separated the data collection periods of 2008 and 2009. In all, 99 patients referred from these hospitals were evaluated in 2008 (prior to the introduction of the course), and 107 were evaluated in 2009 (after the course was given).\nDemographic data from the 1055 patients seen in the BOC outpatient department during the 6-week periods in 2008 and in 2009 are presented in Tables 1, 2, 3, 4, and 5. Altogether, 80% of the patients treated were adults (defined in this setting as individuals >14 years of age). Most of the patients seen in both the adult and pediatric groups were male (Table 1). In addition, 56% of the adult patients reported that they were responsible for caring for dependents, either children or elderly relatives.Table 1Age and sex data of trauma patients (n = 1040)ParameterAdults (age > 14 years)Pediatric group (age ≤ 14 years)All patientsAge (years), mean40.0 (n = 830)8.5 (n = 210)33.6Sex (no.) 
Male500 (60.2%)149 (71.0%)649 (62.6%) Female330 (39.8%)61 (29.0%)391 (37.4%)\nTable 2Mode of transportation to Bedford Orthopaedic Centre from referring facilityModeAdults (%)Pediatric group (%)All patients (%)Taxi24.721.024.3Ambulance66.071.066.7Walk0.20.50.3Private car8.22.46.9Unknown/other1.05.21.8\nTable 3Type of presenting orthopedic injuryInjuryAdults (%)Pediatric group (%)All patients (%)Fracture79.687.180.9Dislocation3.41.93.0Soft tissue13.97.112.8Other3.13.83.2Open injury (2009 only)19.28.017.1\nTable 4Site of injurySiteAdults (%)Pediatric group (%)All patients (%)Upper extremity38.454.841.7Lower extremity55.840.552.7Multiple sites1.94.32.5Spine/head3.70.53.0Unknown0.100.1\nTable 5Mechanism of injuryMechanismAdults (%)Pediatric group (%)All patients (%)MVA-related24.714.822.9Assault18.20.514.4Other (e.g., falls)56.483.361.8Unknown0.71.40.9\nMVA motor vehicle accident\n\nAge and sex data of trauma patients (n = 1040)\nMode of transportation to Bedford Orthopaedic Centre from referring facility\nType of presenting orthopedic injury\nSite of injury\nMechanism of injury\n\nMVA motor vehicle accident\nIn all, 48.6% of patients were referred from facilities in Mthatha, <10 km away, with most from the academic hospital and from the city’s general hospital; 38.6% of patients came from district hospitals >50 km from the BOC; and 12.7% were from hospitals 11–50 km away. The primary mode of transportation was ambulance for two-thirds of patients, with about one-fourth arriving by taxi-van, the primary mode of public transportation in the Eastern Cape (Table 2). There was no significant difference in the type of transportation used by the adult and pediatric populations.\nCharacteristics of the injuries evaluated are listed in Table 3. Most were fractures, with a somewhat higher prevalence in the pediatric group. Soft tissue injuries occurred with greater frequency in adults (13.9%) than in children (7.1%). Information about whether the presenting injuries were open or closed was not collected during the 2008 session but was collected in 2009. Among the 521 patients evaluated in 2009, injuries were open in 89 (17.1%), with a more than twofold greater prevalence in adults than in children and almost a twofold greater prevalence in males than in females.\nMost of the injuries seen (51.5%) involved the lower extremity, with a greater prevalence of these injuries in adults and females (Table 4). The greater prevalence of upper extremity injuries in the pediatric group likely reflects the frequency of supracondylar humerus fractures due to falls. Accidents involving motor vehicles accounted for 22.9% of the injuries seen in all patients and were more prevalent in adults (Table 5). Injuries resulting from assaults were far more common in adults (18.2%) than in children (0.5%). Other injuries, usually falls, accounted for a larger proportion of injuries encountered in children (83.3%) than in adults (56.4%).\nOnly a small portion of the patients seen at BOC for orthopedic trauma were aware of their human immunodeficiency virus (HIV) status and if they were actively infected with tuberculosis (TB). Patients were not routinely screened for HIV. A total of 11.8% of all patients reported having HIV/acquired immunodeficiency syndrome (AIDS) at the time they were evaluated: 13.7% of the adults and 4.3% of the children. 
In all, 7.5% of patients had known TB: 9.2% of adults and 1.0% of children.\nAltogether, 19.5% of the patients were referred from five district hospitals, the staffs of which attended a course on the care of orthopedic trauma during the months that separated the data collection periods of 2008 and 2009. In all, 99 patients referred from these hospitals were evaluated in 2008 (prior to the introduction of the course), and 107 were evaluated in 2009 (after the course was given).\n[SUBTITLE] Outcomes assessment [SUBSECTION] [SUBTITLE] Primary outcomes [SUBSECTION] Primary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation. The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\nPatients were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.

Table 6 Interval between initial evaluation and Bedford Orthopaedic Center visit
Interval | Adults (%) | Pediatric group (%) | All patients (%)
≤72 h | 58.1 | 60.5 | 58.6
>72 h | 41.9 | 39.5 | 41.4

Table 7 Statistical association between primary outcomes and selected variables
Demographic data | Delay in referral ≥ 72 h | Presence of infection
Age | NS (0.583)a | NS (0.323)a
Sex | NS (0.898)a | Sig. (0.011)a
Distance between BOC and referring facility | Sig. (0.002)b | NS (0.579)b
Mode of transportation | Sig. (0.033)b | NS (0.839)b
Data from 2008 or 2009 | Sig. (0.000)a | NS (0.802)a
Presence of HIV/AIDS, TB | NS (0.388)a | NS (0.243)b
Referred from hospital with trauma training | Sig. (0.024)a | NS (1.000)a
Delay in referral | N/A | NS (0.713)a
Type of injury | NS (0.124)b | Sig. (0.000)b
Mechanism of injury | Sig. (0.020)b | Sig. (0.014)b
Site of injury | NS (0.920)b | NS (0.402)b
Open vs. closed (2009 only) | NS (0.690)a | Sig. (0.000)a
Presence of infection | NS (0.713)a | N/A
BOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable
a Fisher's exact test
b Pearson chi-square test

A total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. Factors not associated included age, distance from referral site, mode of transportation, delayed referral of >72 h, and site of injury. Other factors not related were whether the patients were seen in 2008 or 2009, whether they were seen in hospitals where the trauma course was given, and whether TB or HIV/AIDS were known to be present.\n[SUBTITLE] Secondary outcomes [SUBSECTION] The orthopedic medical officers at the BOC judged that the initial care provided at the referring facility in some way was deficient for 36% of the patients: 17% of the patients were thought to have received an incorrect diagnosis at the initial examination site; 11% had inadequate radiographs; and 19% of fractures were judged to have inadequate splint immobilization. Finally, it was estimated that 27% of the referred patients might have been managed without referral at the district hospital if the referring staff had had proper basic instruction in orthopedic care. In fact, data collected from patients referred from the five district hospitals where the staff had been given proper trauma instruction showed that after the course was given to the staff there was no difference in the prevalence of perceived deficiencies in the initial care.\n[SUBTITLE] Primary outcomes [SUBSECTION] Primary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation.
The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\nPatients who were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with the delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.Table 6Interval between initial evaluation and Bedford Orthopaedic Center visitIntervalAdults (%)Pediatric group (%)All patients (%)≤72 h58.160.558.6>72 h41.939.541.4\nTable 7Statistical association between primary outcomes and selected variablesDemographic dataDelay in referral ≥ 72 hPresence of infectionAgeNS (0.583)a\nNS (0.323)a\nSexNS (0.898)a\nSig. (0.011)a\nDistance between BOC and referring facilitySig. (0.002)b\nNS (0.579)b\nMode of transportationSig. (0.033)b\nNS (0.839)b\nData from 2008 or 2009Sig. (0.000)a\nNS (0.802)a\nPresence of HIV/AIDS, TBNS (0.388)a\nNS (0.243)b\nReferred from hospital with trauma trainingSig. (0.024)a\nNS (1.000)a\nDelay in referralN/ANS (0.713)a\nType of injuryNS (0.124)b\nSig. (0.000)b\nMechanism of injurySig (0.020)b\nSig (0.014)b\nSite of injuryNS (0.920)b\nNS (0.402)b\nOpen vs. closed (2009 only)NS (0.690)a\nSig. (0.000)a\nPresence of infectionNS (0.713)a\nN/A\nBOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\naFischer’s exact test\nbPearson chi-square test\n\nInterval between initial evaluation and Bedford Orthopaedic Center visit\nStatistical association between primary outcomes and selected variables\n\nBOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\n\naFischer’s exact test\n\nbPearson chi-square test\nA total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. Factors not associated included age, distance from referral site, mode of transportation, delayed referral of >72 h, and site of injury. 
Other factors not related were whether the patients were seen in 2008 or 2009, whether they were seen in hospitals where the trauma course was given, and whether TB or HIV/AIDS were known to be present.\nPrimary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation. The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\nPatients who were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with the delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.Table 6Interval between initial evaluation and Bedford Orthopaedic Center visitIntervalAdults (%)Pediatric group (%)All patients (%)≤72 h58.160.558.6>72 h41.939.541.4\nTable 7Statistical association between primary outcomes and selected variablesDemographic dataDelay in referral ≥ 72 hPresence of infectionAgeNS (0.583)a\nNS (0.323)a\nSexNS (0.898)a\nSig. (0.011)a\nDistance between BOC and referring facilitySig. (0.002)b\nNS (0.579)b\nMode of transportationSig. (0.033)b\nNS (0.839)b\nData from 2008 or 2009Sig. (0.000)a\nNS (0.802)a\nPresence of HIV/AIDS, TBNS (0.388)a\nNS (0.243)b\nReferred from hospital with trauma trainingSig. (0.024)a\nNS (1.000)a\nDelay in referralN/ANS (0.713)a\nType of injuryNS (0.124)b\nSig. (0.000)b\nMechanism of injurySig (0.020)b\nSig (0.014)b\nSite of injuryNS (0.920)b\nNS (0.402)b\nOpen vs. closed (2009 only)NS (0.690)a\nSig. (0.000)a\nPresence of infectionNS (0.713)a\nN/A\nBOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\naFischer’s exact test\nbPearson chi-square test\n\nInterval between initial evaluation and Bedford Orthopaedic Center visit\nStatistical association between primary outcomes and selected variables\n\nBOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\n\naFischer’s exact test\n\nbPearson chi-square test\nA total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. 
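The 72-h referral-delay outcome and the cross-tabulations summarized in Table 7 can be made concrete with a short worked example. The Python sketch below bins visit dates into ≤72 h versus >72 h groups and tallies them against one candidate factor; the records, field layout, and function are hypothetical illustrations, not the study data or the authors' analysis code.

```python
from datetime import datetime
from collections import Counter

# Hypothetical referral records (illustrative only, not study data):
# (date first seen at referring facility, date seen at BOC, mode of transport)
records = [
    ("2009-07-01", "2009-07-02", "ambulance"),
    ("2009-07-01", "2009-07-06", "taxi"),
    ("2009-07-03", "2009-07-03", "ambulance"),
    ("2009-07-02", "2009-07-07", "taxi"),
]

def delay_group(first_seen: str, boc_seen: str) -> str:
    """Classify a referral as '<=72h' or '>72h' from the two visit dates."""
    fmt = "%Y-%m-%d"
    hours = (datetime.strptime(boc_seen, fmt)
             - datetime.strptime(first_seen, fmt)).total_seconds() / 3600
    return "<=72h" if hours <= 72 else ">72h"

# Cross-tabulate delay group against transport mode; counts like these are
# what feed the significance tests summarized in Table 7.
table = Counter((delay_group(a, b), mode) for a, b, mode in records)
for (group, mode), n in sorted(table.items()):
    print(f"{group:6s} {mode:10s} {n}")
```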
"Demographic data from the 1055 patients seen in the BOC outpatient department during the 6-week periods in 2008 and in 2009 are presented in Tables 1, 2, 3, 4, and 5. Altogether, 80% of the patients treated were adults (defined in this setting as individuals >14 years of age). Most of the patients seen in both the adult and pediatric groups were male (Table 1). In addition, 56% of the adult patients reported that they were responsible for caring for dependents, either children or elderly relatives.\n
Table 1. Age and sex data of trauma patients (n = 1040)\n
Parameter | Adults (age > 14 years) | Pediatric group (age ≤ 14 years) | All patients\n
Age (years), mean | 40.0 (n = 830) | 8.5 (n = 210) | 33.6\n
Sex (no.):
Male | 500 (60.2%) | 149 (71.0%) | 649 (62.6%)\n
Female | 330 (39.8%) | 61 (29.0%) | 391 (37.4%)\n
Table 2. Mode of transportation to Bedford Orthopaedic Centre from referring facility\n
Mode | Adults (%) | Pediatric group (%) | All patients (%)\n
Taxi | 24.7 | 21.0 | 24.3\n
Ambulance | 66.0 | 71.0 | 66.7\n
Walk | 0.2 | 0.5 | 0.3\n
Private car | 8.2 | 2.4 | 6.9\n
Unknown/other | 1.0 | 5.2 | 1.8\n
Table 3. Type of presenting orthopedic injury\n
Injury | Adults (%) | Pediatric group (%) | All patients (%)\n
Fracture | 79.6 | 87.1 | 80.9\n
Dislocation | 3.4 | 1.9 | 3.0\n
Soft tissue | 13.9 | 7.1 | 12.8\n
Other | 3.1 | 3.8 | 3.2\n
Open injury (2009 only) | 19.2 | 8.0 | 17.1\n
Table 4. Site of injury\n
Site | Adults (%) | Pediatric group (%) | All patients (%)\n
Upper extremity | 38.4 | 54.8 | 41.7\n
Lower extremity | 55.8 | 40.5 | 52.7\n
Multiple sites | 1.9 | 4.3 | 2.5\n
Spine/head | 3.7 | 0.5 | 3.0\n
Unknown | 0.1 | 0 | 0.1\n
Table 5. Mechanism of injury\n
Mechanism | Adults (%) | Pediatric group (%) | All patients (%)\n
MVA-related | 24.7 | 14.8 | 22.9\n
Assault | 18.2 | 0.5 | 14.4\n
Other (e.g., falls) | 56.4 | 83.3 | 61.8\n
Unknown | 0.7 | 1.4 | 0.9\n
MVA motor vehicle accident\n
In all, 48.6% of patients were referred from facilities in Mthatha, <10 km away, with most from the academic hospital and from the city's general hospital; 38.6% of patients came from district hospitals >50 km from the BOC; and 12.7% were from hospitals 11–50 km away. The primary mode of transportation was ambulance for two-thirds of patients, with about one-fourth arriving by taxi-van, the primary mode of public transportation in the Eastern Cape (Table 2). There was no significant difference in the type of transportation used by the adult and pediatric populations.\n
Characteristics of the injuries evaluated are listed in Table 3. Most were fractures, with a somewhat higher prevalence in the pediatric group. Soft tissue injuries occurred with greater frequency in adults (13.9%) than in children (7.1%). Information about whether the presenting injuries were open or closed was not collected during the 2008 session but was collected in 2009. Among the 521 patients evaluated in 2009, injuries were open in 89 (17.1%), with a more than twofold greater prevalence in adults than in children and almost a twofold greater prevalence in males than in females.\n
Most of the injuries seen (51.5%) involved the lower extremity, with a greater prevalence of these injuries in adults and females (Table 4). The greater prevalence of upper extremity injuries in the pediatric group likely reflects the frequency of supracondylar humerus fractures due to falls. Accidents involving motor vehicles accounted for 22.9% of the injuries seen in all patients and were more prevalent in adults (Table 5). Injuries resulting from assaults were far more common in adults (18.2%) than in children (0.5%). Other injuries, usually falls, accounted for a larger proportion of injuries encountered in children (83.3%) than in adults (56.4%).\n
Only a small proportion of the patients seen at BOC for orthopedic trauma were aware of their human immunodeficiency virus (HIV) status or of whether they were actively infected with tuberculosis (TB). Patients were not routinely screened for HIV. A total of 11.8% of all patients reported having HIV/acquired immunodeficiency syndrome (AIDS) at the time they were evaluated: 13.7% of the adults and 4.3% of the children.
In all, 7.5% of patients had known TB: 9.2% of adults and 1.0% of children.\n
Altogether, 19.5% of the patients were referred from five district hospitals, the staffs of which attended a course on the care of orthopedic trauma during the months that separated the data collection periods of 2008 and 2009. In all, 99 patients referred from these hospitals were evaluated in 2008 (prior to the introduction of the course), and 107 were evaluated in 2009 (after the course was given).",
"[SUBTITLE] Primary outcomes [SUBSECTION] Primary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation. The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\n
Patients were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.\n
Table 6. Interval between initial evaluation and Bedford Orthopaedic Center visit\n
Interval | Adults (%) | Pediatric group (%) | All patients (%)\n
≤72 h | 58.1 | 60.5 | 58.6\n
>72 h | 41.9 | 39.5 | 41.4\n
Table 7. Statistical association between primary outcomes and selected variables\n
Variable | Delay in referral ≥ 72 h | Presence of infection\n
Age | NS (0.583)a | NS (0.323)a\n
Sex | NS (0.898)a | Sig. (0.011)a\n
Distance between BOC and referring facility | Sig. (0.002)b | NS (0.579)b\n
Mode of transportation | Sig. (0.033)b | NS (0.839)b\n
Data from 2008 or 2009 | Sig. (0.000)a | NS (0.802)a\n
Presence of HIV/AIDS, TB | NS (0.388)a | NS (0.243)b\n
Referred from hospital with trauma training | Sig. (0.024)a | NS (1.000)a\n
Delay in referral | N/A | NS (0.713)a\n
Type of injury | NS (0.124)b | Sig. (0.000)b\n
Mechanism of injury | Sig. (0.020)b | Sig. (0.014)b\n
Site of injury | NS (0.920)b | NS (0.402)b\n
Open vs. closed (2009 only) | NS (0.690)a | Sig. (0.000)a\n
Presence of infection | NS (0.713)a | N/A\n
BOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\n
a Fisher's exact test\n
b Pearson chi-square test\n
A total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. Factors not associated included age, distance from referral site, mode of transportation, delayed referral of >72 h, and site of injury. Other factors not related were whether the patients were seen in 2008 or 2009, whether they were seen in hospitals where the trauma course was given, and whether TB or HIV/AIDS were known to be present.\n
[SUBTITLE] Secondary outcomes [SUBSECTION] The orthopedic medical officers at the BOC judged that the initial care provided at the referring facility was in some way deficient for 36% of the patients: 17% of the patients were thought to have received an incorrect diagnosis at the initial examination site; 11% had inadequate radiographs; and 19% of fractures were judged to have inadequate splint immobilization. Finally, it was estimated that 27% of the referred patients might have been managed without referral at the district hospital if the referring staff had had proper basic instruction in orthopedic care. In fact, data collected from patients referred from the five district hospitals where the staff had been given proper trauma instruction showed that, after the course was given, there was no difference in the prevalence of perceived deficiencies in the initial care.",
"Primary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation. The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\n
Patients were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.\n
Table 6. Interval between initial evaluation and Bedford Orthopaedic Center visit\n
Interval | Adults (%) | Pediatric group (%) | All patients (%)\n
≤72 h | 58.1 | 60.5 | 58.6\n
>72 h | 41.9 | 39.5 | 41.4\n
Table 7. Statistical association between primary outcomes and selected variables\n
Variable | Delay in referral ≥ 72 h | Presence of infection\n
Age | NS (0.583)a | NS (0.323)a\n
Sex | NS (0.898)a | Sig. (0.011)a\n
Distance between BOC and referring facility | Sig. (0.002)b | NS (0.579)b\n
Mode of transportation | Sig. (0.033)b | NS (0.839)b\n
Data from 2008 or 2009 | Sig. (0.000)a | NS (0.802)a\n
Presence of HIV/AIDS, TB | NS (0.388)a | NS (0.243)b\n
Referred from hospital with trauma training | Sig. (0.024)a | NS (1.000)a\n
Delay in referral | N/A | NS (0.713)a\n
Type of injury | NS (0.124)b | Sig. (0.000)b\n
Mechanism of injury | Sig. (0.020)b | Sig. (0.014)b\n
Site of injury | NS (0.920)b | NS (0.402)b\n
Open vs. closed (2009 only) | NS (0.690)a | Sig. (0.000)a\n
Presence of infection | NS (0.713)a | N/A\n
BOC Bedford Orthopaedic Center, HIV/AIDS human immunodeficiency virus/acquired immunodeficiency syndrome, TB tuberculosis, NS not significant, Sig significant, N/A not applicable\n
a Fisher's exact test\n
b Pearson chi-square test\n
A total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. Factors not associated included age, distance from referral site, mode of transportation, delayed referral of >72 h, and site of injury.
Other factors not related were whether the patients were seen in 2008 or 2009, whether they were seen in hospitals where the trauma course was given, and whether TB or HIV/AIDS were known to be present.", "The orthopedic medical officers at the BOC judged that the initial care provided at the referring facility in some way was deficient for 36% of the patients: 17% of the patients were thought to have received an incorrect diagnoses at the initial examination site; 11% had inadequate radiographs; and 19% of fractures were judged to have inadequate splint immobilization. Finally, it was estimated that 27% of the referred patients might have been managed without referral at the district hospital if the referring staff had had proper basic instruction in orthopedic care. In fact, data collected from patients referred from the five district hospitals where the staff had been given proper trauma instruction showed that after the course was given to the staff there was no difference in the prevalence of perceived deficiencies in the initial care.", "This study confirms the clinical impression that the burden of orthopedic trauma addressed in the BOC outpatient department is substantial and, furthermore, that fractures are the most frequently diagnosed injuries. A logical first step in managing a heavy trauma workload in resource-constrained settings is to document the number of patients and the types of injury being seen. Such information is essential for governmental agencies to identify where the greatest health care needs lie and to prioritize scarce resources. To offer adequate treatment to large numbers of patients with fractures and related injuries, resources commensurate with the workload are required, as recommended by the World Health Organization (WHO) in Guidelines for Essential Trauma Care [8, 9]. Such resources include emergency transportation, adequately trained first responders, and properly equipped outpatient and inpatient facilities. Because surgical treatment is the standard of care for most of these injuries, also needed are adequate operative facilities, staffing, supplies, equipment, and orthopedic implants [10]. Data from this study provide a starting point for applying resources effectively.\nIt is well understood that delays in definitive treatment of fractures may result in the need for more complex treatment and to poor outcomes. A prior study at the BOC revealed that treatment delays of >7 days occurred in 25% of male inpatients [7]. In the present study, delayed referral of >72 h was noted in 41.4% of all trauma patients seen in the outpatient department. Analysis of factors possibly associated with such delays showed a statistical relation to the distance traveled to the BOC, the type of transportation used, and the year in which data were gathered. The reduced prevalence of referral delays between 2008 and 2009 may be explained in large part by the improvements in the ambulance services that were instituted between these data collection periods. The use of crowded and dangerous taxi-vans, which is the only other alternative for most indigent patients, is particularly cumbersome and expensive when traveling long distances to the BOC. The findings in this study highlight the importance of creating and maintaining adequate transport systems for injured patients, especially for those living in distant communities.\nA total of 12.2% of patients seen in the BOC OPD were diagnosed as having established infections. 
The infections were more common in adult men and, not surprisingly, were significantly associated with open injuries. Treatment of infected, open injuries frequently requires more intensive management with prolonged inpatient hospital stays, multiple surgical procedures, and prolonged intravenous antibiotic treatment. The reported incidence of infection following open fractures in developed countries, where various standard care protocols are used, is in the range of 2.3% to 7.3% [11, 12]. To minimize the risk of infection in the setting of rural South Africa, it is important to institute measures that promote adequate débridement of open fractures and prompt administration of intravenous antibiotics during the initial treatment at the district hospitals.\nThis present study documents the prevalence of perceived deficiencies in initial care at the district hospitals from which BOC patients were referred. It is recognized that such perceived deficiencies may have multiple causes and that their recognition may have been affected by the bias of caregivers at the BOC. However, it is noteworthy that these observations were made by local practitioners within the context of established standards of orthopedic care in this setting.\nInjuries that are not properly managed at the outset may require more complex, time-consuming, and costly definitive care later. With the goal of reducing the prevalence of problems—such as missed diagnoses, inadequate radiographic studies, missed neurovascular injuries, inadequately managed open injuries—a course on the care of acute orthopedic injuries was given at 5 of the 23 district hospitals between the 2008 and 2009 data collection periods. The goal of this instruction was similar to that advocated in the WHO’s Integrated Management for Emergency and Essential Surgical Care (IMEESC) program [9, 13]. Of interest is the finding of the present study that referral delays in 2009 were reduced following the course. Improvements in ambulance transportation, as well as the information imparted during the courses in these hospitals, may explain why referral delays were less frequent during the second data collection period. Of note, however, is the fact that the prevalence of reported deficiencies in initial care of patients referred from those hospitals did not decrease where the courses were given. A possible explanation for this finding may be that many of the medical officers who had received the instruction at the district hospitals had moved away from the subject hospitals during the months after the courses were given. Thus, many of the patients seen during the second data collection period may not have been treated by physicians who had undergone the trauma training. This underscores the challenges in creating sustainable and measurable changes in clinical practice using educational interventions of the type described.\nWhat are the next steps? Continuing educational efforts at the district hospital level is important, but such efforts must be ongoing and targeted especially to newly arrived physicians. Now that the workload has been documented at the BOC, an important challenge lies in improving orthopedic care in the hospital, especially surgical intervention. Measuring the efficiency with which patients are managed at the BOC itself may provide further insight into whether available resources are adequate and are being properly utilized. 
As alluded to previously, problems caused by delays in referral are compounded by delays in getting patients admitted to the BOC and by further delays in getting inpatients to surgery in a timely fashion. The next challenge at the BOC is to gather data on the number of patients whose treatment is delayed due to lack of hospital beds and the number of hospitalized patients who experience delays in getting to theater for their needed surgery. Data on the causes of delay in the surgical theaters are of importance. Are there sufficient theaters? Is the nursing staffing adequate? Are there sufficient anesthetists and surgeons? Are there sufficient supplies, equipment, and surgical implants? Further data collection aimed at answering these questions is needed to focus interventions designed to improve both the prehospital care and definitive management of patients with orthopedic trauma in this region." ]
[ null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Previous studies", "Purpose of the current study", "Methods", "Results", "General demographics", "Outcomes assessment", "Primary outcomes", "Secondary outcomes", "Discussion" ]
[ "Increasing attention is being paid worldwide to the public health implications of musculoskeletal injuries and trauma, especially in the developing world. The impact of these injuries is especially severe in those countries where human and material resources are most limited. Data concerning the disease burden in such settings are lacking [1]. The eastern portion of the Eastern Cape Province in South Africa (formerly the Transkei homeland) contains a largely indigent, rural population of three to four million people. The role of injury and violence as a leading cause of death in this region has been well documented by Meel [2–6]. The Eastern Cape Province Department of Health administers a network of clinics and district hospitals that provide care to this population. The Mthatha Hospital Complex in Mthatha provides referral care for this portion of the province as well as primary care for the population in the greater Mthatha area. In recent years, the area around the hospital has experienced an increase in population and a corresponding increase in trauma due to falls, violence, and road traffic accidents. The 180-bed Bedford Orthopaedic Centre (BOC), located 8 km outside of the city, provides the orthopedic care for the Complex. Thus, referrals to the BOC come both from within the Mthatha Hospital Complex and from district hospitals staffed by general medical officers with little or no orthopedic training. The BOC is the primary location where care is given to patients with both orthopedic injuries and musculoskeletal conditions such as tuberculosis, tumors, and arthritis.\n
[SUBTITLE] Previous studies [SUBSECTION] Past efforts to assess the workload at the BOC focused on the number of admissions and outpatient visits. Unpublished data from 1997 through April 2007 indicate that the number of yearly admissions to BOC nearly doubled during this period. Nonetheless, the bed capacity remained stable at 180 beds and the staffing unchanged, resulting in an increased workload and an increased demand on already limited resources and facilities. Paralleling the increase in admissions, there was an approximately 28% increase in the number of patients seen in the outpatient department (OPD)—to more than 18,000 visits per year estimated for 2007. In a report on admissions to the male ward at the BOC during a 4-month period in 2005, Millar and MacConnachie documented a large trauma workload, the prominent role of motor vehicle accidents (MVAs) as a cause of injury, and the frequency of delays in treatment [7]. At present, hospital admissions for patients in the OPD often are delayed owing to the limited number of open beds, with priority given to patients with the most urgent needs. Beds frequently are in short supply because hospitalized patients must wait days or weeks for surgery due to the lack of surgical theater time. It is not uncommon for admissions from the OPD to be deferred more than once for a given patient.\n
[SUBTITLE] Purpose of the current study [SUBSECTION] Increases in the number of patients seen at this facility have been well documented. However, inadequate prehospital care, delays in referral, and delays in admission may increase the complexity of treatment ultimately undertaken, in turn resulting in increased utilization of personnel and resources. This study is designed to document the demographic characteristics of all trauma patients being treated, the nature of their injuries, and a subjective assessment of initial prehospital care given at the referring site. The primary outcomes assessed were (1) a delay of more than 72 h between the initial evaluation and the BOC consultation and (2) the presence of infection at the time of the BOC consultation. Both of these outcomes were thought to reflect potentially modifiable factors that add to the complexity of care and thus to the burden of disease encountered.",
"Past efforts to assess the workload at the BOC focused on the number of admissions and outpatient visits. Unpublished data from 1997 through April 2007 indicate that the number of yearly admissions to BOC nearly doubled during this period. Nonetheless, the bed capacity remained stable at 180 beds and the staffing unchanged, resulting in an increased workload and an increased demand on already limited resources and facilities. Paralleling the increase in admissions, there was an approximately 28% increase in the number of patients seen in the outpatient department (OPD)—to more than 18,000 visits per year estimated for 2007.
In a report on admissions to the male ward at the BOC during a 4-month period in 2005, Millar and MacConnachie documented a large trauma workload, the prominent role of motor vehicle accidents (MVAs) as a cause of injury, and the frequency of delays in treatment [7]. At present, hospital admissions for patients in the OPD often are delayed owing to the limited number of open beds, with priority given to patients with the most urgent needs. Beds frequently are in short supply because hospitalized patients must wait days or weeks for surgery due to the lack of surgical theater time. It is not uncommon for admissions from the OPD to be deferred more than once for a given patient.", "Increases in the number of patients seen at this facility have been well documented. However, inadequate prehospital care, delays in referral, and delays in admission may increase the complexity of treatment ultimately undertaken, in turn resulting in increased utilization of personnel and resources. This study is designed to document the demographic characteristics of all trauma patients being treated, the nature of their injuries, and subjective assessment of initial prehospital care given at the referring site. The primary outcomes assessed were (1) a delay of more than 72 h between the initial evaluation and the BOC consultation and (2) the presence of infection at the time of the BOC consultation. Both of these outcomes were thought to reflect potentially modifiable factors that add to the complexity of care and thus to the burden of disease encountered.", "Institutional review board (IRB) approval for the study was obtained from the University of California, San Francisco Committee on Human Research and from the Medical Ethics Committee of the Walter Sisulu University Faculty of Medical Sciences. Data were collected at the BOC during two 6-week periods during July and August of 2008 and 2009. These periods represented average periods of trauma volume at the BOC.\nMedical student researchers collected the data from patients in the BOC outpatient department and from the orthopedic medical officers treating them. When necessary, the patient interviews were facilitated by Xhosa-speaking interpreters. Information from data collection sheets was transferred to spreadsheets, and descriptive statistics were derived from these data.\nThe two primary outcomes—delay in referral of more than 72 h following evaluation at the initial facility and the presence of infection at the time of consultation—were evaluated with respect to demographic and injury data. The statistical significance of the association between each outcome and each variable was evaluated with chi-square tests, using, where appropriate, Fisher’s exact test and Pearson’s test.\nSecondary outcomes that were evaluated were derived from the subjective assessment by the BOC medical staff of specific aspects of the initial care provided prior to BOC consultation. BOC physicians were queried about correctness of the initial diagnosis, the presence of radiographs adequate to determine diagnosis and treatment, adequacy of initial wound management, proper splint immobilization, and a general assessment of the initial management as adequate or inadequate. Finally, treating physicians at BOC were queried as to whether they thought the orthopedic injuries that were being assessed might have been adequately managed at the initial hospital without referral if instruction in the management of simple orthopedic problems had been available. 
The same demographic and injury variables described previously were statistically evaluated to assess possible relations to these secondary outcomes.",
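To make the statistical testing described in the methods concrete, the sketch below applies a chi-square test and Fisher's exact test to a single 2 × 2 contingency table, mirroring the choice between tests marked by the a/b footnotes of Table 7. It assumes SciPy is available, and the cell counts are invented placeholders rather than the study data.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2 x 2 table: rows = open vs. closed injury,
# columns = infection present vs. absent (illustrative counts only).
table = [[12, 77],    # open injuries:   infected, not infected
         [20, 412]]   # closed injuries: infected, not infected

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

# Fisher's exact test is usually preferred when an expected cell count is
# small (a common rule of thumb is < 5); otherwise the chi-square p-value
# is reported.
use_fisher = (expected < 5).any()
print(f"chi-square p = {p_chi2:.3f}, Fisher's exact p = {p_fisher:.3f}, "
      f"small expected cells: {bool(use_fisher)}")
```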
Accidents involving motor vehicles accounted for 22.9% of the injuries seen in all patients and were more prevalent in adults (Table 5). Injuries resulting from assaults were far more common in adults (18.2%) than in children (0.5%). Other injuries, usually falls, accounted for a larger proportion of injuries encountered in children (83.3%) than in adults (56.4%).\nOnly a small portion of the patients seen at BOC for orthopedic trauma were aware of their human immunodeficiency virus (HIV) status and if they were actively infected with tuberculosis (TB). Patients were not routinely screened for HIV. A total of 11.8% of all patients reported having HIV/acquired immunodeficiency syndrome (AIDS) at the time they were evaluated: 13.7% of the adults and 4.3% of the children. In all, 7.5% of patients had known TB: 9.2% of adults and 1.0% of children.\nAltogether, 19.5% of the patients were referred from five district hospitals, the staffs of which attended a course on the care of orthopedic trauma during the months that separated the data collection periods of 2008 and 2009. In all, 99 patients referred from these hospitals were evaluated in 2008 (prior to the introduction of the course), and 107 were evaluated in 2009 (after the course was given).\nDemographic data from the 1055 patients seen in the BOC outpatient department during the 6-week periods in 2008 and in 2009 are presented in Tables 1, 2, 3, 4, and 5. Altogether, 80% of the patients treated were adults (defined in this setting as individuals >14 years of age). Most of the patients seen in both the adult and pediatric groups were male (Table 1). In addition, 56% of the adult patients reported that they were responsible for caring for dependents, either children or elderly relatives.Table 1Age and sex data of trauma patients (n = 1040)ParameterAdults (age > 14 years)Pediatric group (age ≤ 14 years)All patientsAge (years), mean40.0 (n = 830)8.5 (n = 210)33.6Sex (no.) Male500 (60.2%)149 (71.0%)649 (62.6%) Female330 (39.8%)61 (29.0%)391 (37.4%)\nTable 2Mode of transportation to Bedford Orthopaedic Centre from referring facilityModeAdults (%)Pediatric group (%)All patients (%)Taxi24.721.024.3Ambulance66.071.066.7Walk0.20.50.3Private car8.22.46.9Unknown/other1.05.21.8\nTable 3Type of presenting orthopedic injuryInjuryAdults (%)Pediatric group (%)All patients (%)Fracture79.687.180.9Dislocation3.41.93.0Soft tissue13.97.112.8Other3.13.83.2Open injury (2009 only)19.28.017.1\nTable 4Site of injurySiteAdults (%)Pediatric group (%)All patients (%)Upper extremity38.454.841.7Lower extremity55.840.552.7Multiple sites1.94.32.5Spine/head3.70.53.0Unknown0.100.1\nTable 5Mechanism of injuryMechanismAdults (%)Pediatric group (%)All patients (%)MVA-related24.714.822.9Assault18.20.514.4Other (e.g., falls)56.483.361.8Unknown0.71.40.9\nMVA motor vehicle accident\n\nAge and sex data of trauma patients (n = 1040)\nMode of transportation to Bedford Orthopaedic Centre from referring facility\nType of presenting orthopedic injury\nSite of injury\nMechanism of injury\n\nMVA motor vehicle accident\nIn all, 48.6% of patients were referred from facilities in Mthatha, <10 km away, with most from the academic hospital and from the city’s general hospital; 38.6% of patients came from district hospitals >50 km from the BOC; and 12.7% were from hospitals 11–50 km away. The primary mode of transportation was ambulance for two-thirds of patients, with about one-fourth arriving by taxi-van, the primary mode of public transportation in the Eastern Cape (Table 2). 
There was no significant difference in the type of transportation used by the adult and pediatric populations.\nCharacteristics of the injuries evaluated are listed in Table 3. Most were fractures, with a somewhat higher prevalence in the pediatric group. Soft tissue injuries occurred with greater frequency in adults (13.9%) than in children (7.1%). Information about whether the presenting injuries were open or closed was not collected during the 2008 session but was collected in 2009. Among the 521 patients evaluated in 2009, injuries were open in 89 (17.1%), with a more than twofold greater prevalence in adults than in children and almost a twofold greater prevalence in males than in females.\nMost of the injuries seen (51.5%) involved the lower extremity, with a greater prevalence of these injuries in adults and females (Table 4). The greater prevalence of upper extremity injuries in the pediatric group likely reflects the frequency of supracondylar humerus fractures due to falls. Accidents involving motor vehicles accounted for 22.9% of the injuries seen in all patients and were more prevalent in adults (Table 5). Injuries resulting from assaults were far more common in adults (18.2%) than in children (0.5%). Other injuries, usually falls, accounted for a larger proportion of injuries encountered in children (83.3%) than in adults (56.4%).\nOnly a small portion of the patients seen at BOC for orthopedic trauma were aware of their human immunodeficiency virus (HIV) status and if they were actively infected with tuberculosis (TB). Patients were not routinely screened for HIV. A total of 11.8% of all patients reported having HIV/acquired immunodeficiency syndrome (AIDS) at the time they were evaluated: 13.7% of the adults and 4.3% of the children. In all, 7.5% of patients had known TB: 9.2% of adults and 1.0% of children.\nAltogether, 19.5% of the patients were referred from five district hospitals, the staffs of which attended a course on the care of orthopedic trauma during the months that separated the data collection periods of 2008 and 2009. In all, 99 patients referred from these hospitals were evaluated in 2008 (prior to the introduction of the course), and 107 were evaluated in 2009 (after the course was given).\n[SUBTITLE] Outcomes assessment [SUBSECTION] [SUBTITLE] Primary outcomes [SUBSECTION] Primary outcomes assessed as part of this study include both delay in referral and the presence of infection at the initial BOC consultation. The interval between the initial evaluation at the referring facility and the initial evaluation at BOC was recorded for all 1055 patients. Information on the presence of infection at the time of consultation was reported for 650 of the 1055 patients during the two data collection periods. Univariate analysis with chi-square tests determined which demographic factors and injury data appeared to be related to each of these primary outcomes.\nPatients who were divided into two groups depending on whether the BOC visit had occurred within 72 h of the initial visit or was delayed more than 72 h (Table 6). Factors that correlated statistically with the delayed referral included mode of transportation and distance from the referral site (Table 7). Also found to be related were whether the patients were seen in 2008 or 2009 and whether they were seen in a hospital where the trauma course was given. Of importance is the fact that the ambulance service improved between 2008 and 2009. 
Factors that did not appear to be related to delayed referral included age, sex, site of injury, type of injury, open versus closed injury, presence of infection, and whether HIV/AIDS or TB were known to be present.

Table 6. Interval between initial evaluation and Bedford Orthopaedic Centre visit
Interval | Adults (%) | Pediatric group (%) | All patients (%)
≤72 h | 58.1 | 60.5 | 58.6
>72 h | 41.9 | 39.5 | 41.4

Table 7. Statistical association between primary outcomes and selected variables
Variable | Delay in referral > 72 h | Presence of infection
Age | NS (0.583)a | NS (0.323)a
Sex | NS (0.898)a | Sig. (0.011)a
Distance between BOC and referring facility | Sig. (0.002)b | NS (0.579)b
Mode of transportation | Sig. (0.033)b | NS (0.839)b
Data from 2008 or 2009 | Sig. (0.000)a | NS (0.802)a
Presence of HIV/AIDS, TB | NS (0.388)a | NS (0.243)b
Referred from hospital with trauma training | Sig. (0.024)a | NS (1.000)a
Delay in referral | N/A | NS (0.713)a
Type of injury | NS (0.124)b | Sig. (0.000)b
Mechanism of injury | Sig. (0.020)b | Sig. (0.014)b
Site of injury | NS (0.920)b | NS (0.402)b
Open vs. closed (2009 only) | NS (0.690)a | Sig. (0.000)a
Presence of infection | NS (0.713)a | N/A
BOC, Bedford Orthopaedic Centre; HIV/AIDS, human immunodeficiency virus/acquired immunodeficiency syndrome; TB, tuberculosis; NS, not significant; Sig., significant; N/A, not applicable
a Fisher's exact test; b Pearson chi-square test

A total of 12.2% of these patients were reported to have an established infection at the time of the initial evaluation at BOC. Infections were noted in 11.4% of patients in the pediatric group and in 16.4% of adults (Table 7); 15.2% of males and 8.4% of females were diagnosed with infections. Factors associated with the presence of infection included sex, mechanism and type of injury, and whether the injury was open or closed. Factors not associated included age, distance from referral site, mode of transportation, delayed referral of >72 h, and site of injury. Other factors not related were whether the patients were seen in 2008 or 2009, whether they were seen in hospitals where the trauma course was given, and whether TB or HIV/AIDS were known to be present.
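The univariate associations summarized in Table 7 rest on standard contingency-table tests (Pearson chi-square or Fisher's exact test, as indicated by the table footnotes). As a purely illustrative sketch, and not the authors' analysis code, such comparisons could be run along the following lines in Python; the input file and the column names (delay_over_72h, infection, sex, transport_mode) are hypothetical.

```python
# Illustrative sketch of the univariate tests reported in Table 7.
# The input file and the column names are hypothetical, not from the study.
import pandas as pd
from scipy.stats import chi2_contingency, fisher_exact

df = pd.read_csv("boc_opd_patients.csv")  # hypothetical extract of the OPD register

def pearson_chi2(outcome, factor):
    """Pearson chi-square test on the outcome-by-factor contingency table."""
    table = pd.crosstab(df[factor], df[outcome])
    chi2, p, dof, _ = chi2_contingency(table)
    return p

def fisher(outcome, factor):
    """Fisher's exact test; requires a 2x2 table (both variables binary)."""
    table = pd.crosstab(df[factor], df[outcome])
    _, p = fisher_exact(table.values)
    return p

# Examples mirroring two rows of Table 7
p_delay_by_transport = pearson_chi2("delay_over_72h", "transport_mode")
p_infection_by_sex = fisher("infection", "sex")
print(f"delay vs transport: p = {p_delay_by_transport:.3f}")
print(f"infection vs sex:   p = {p_infection_by_sex:.3f}")
```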
[SUBTITLE] Secondary outcomes [SUBSECTION] The orthopedic medical officers at the BOC judged that the initial care provided at the referring facility was in some way deficient for 36% of the patients: 17% of the patients were thought to have received an incorrect diagnosis at the initial examination site; 11% had inadequate radiographs; and 19% of fractures were judged to have had inadequate splint immobilization. Finally, it was estimated that 27% of the referred patients might have been managed without referral at the district hospital if the referring staff had had proper basic instruction in orthopedic care. In fact, data collected from patients referred from the five district hospitals where the staff had been given trauma instruction showed that, after the course was given, there was no difference in the prevalence of perceived deficiencies in the initial care.
This study confirms the clinical impression that the burden of orthopedic trauma addressed in the BOC outpatient department is substantial and, furthermore, that fractures are the most frequently diagnosed injuries. A logical first step in managing a heavy trauma workload in resource-constrained settings is to document the number of patients and the types of injury being seen. Such information is essential for governmental agencies to identify where the greatest health care needs lie and to prioritize scarce resources. To offer adequate treatment to large numbers of patients with fractures and related injuries, resources commensurate with the workload are required, as recommended by the World Health Organization (WHO) in Guidelines for Essential Trauma Care [8, 9]. Such resources include emergency transportation, adequately trained first responders, and properly equipped outpatient and inpatient facilities. Because surgical treatment is the standard of care for most of these injuries, also needed are adequate operative facilities, staffing, supplies, equipment, and orthopedic implants [10]. Data from this study provide a starting point for applying resources effectively.

It is well understood that delays in definitive treatment of fractures may result in the need for more complex treatment and lead to poor outcomes. A prior study at the BOC revealed that treatment delays of >7 days occurred in 25% of male inpatients [7]. In the present study, delayed referral of >72 h was noted in 41.4% of all trauma patients seen in the outpatient department. Analysis of factors possibly associated with such delays showed a statistical relation to the distance traveled to the BOC, the type of transportation used, and the year in which data were gathered. The reduced prevalence of referral delays between 2008 and 2009 may be explained in large part by the improvements in the ambulance services that were instituted between these data collection periods. The use of crowded and dangerous taxi-vans, which is the only other alternative for most indigent patients, is particularly cumbersome and expensive when traveling long distances to the BOC. The findings in this study highlight the importance of creating and maintaining adequate transport systems for injured patients, especially for those living in distant communities.

A total of 12.2% of patients seen in the BOC OPD were diagnosed as having established infections.
The infections were more common in adult men and, not surprisingly, were significantly associated with open injuries. Treatment of infected, open injuries frequently requires more intensive management with prolonged inpatient hospital stays, multiple surgical procedures, and prolonged intravenous antibiotic treatment. The reported incidence of infection following open fractures in developed countries, where various standard care protocols are used, is in the range of 2.3% to 7.3% [11, 12]. To minimize the risk of infection in the setting of rural South Africa, it is important to institute measures that promote adequate débridement of open fractures and prompt administration of intravenous antibiotics during the initial treatment at the district hospitals.

The present study documents the prevalence of perceived deficiencies in initial care at the district hospitals from which BOC patients were referred. It is recognized that such perceived deficiencies may have multiple causes and that their recognition may have been affected by the bias of caregivers at the BOC. However, it is noteworthy that these observations were made by local practitioners within the context of established standards of orthopedic care in this setting.

Injuries that are not properly managed at the outset may require more complex, time-consuming, and costly definitive care later. With the goal of reducing the prevalence of problems such as missed diagnoses, inadequate radiographic studies, missed neurovascular injuries, and inadequately managed open injuries, a course on the care of acute orthopedic injuries was given at 5 of the 23 district hospitals between the 2008 and 2009 data collection periods. The goal of this instruction was similar to that advocated in the WHO's Integrated Management for Emergency and Essential Surgical Care (IMEESC) program [9, 13]. Of interest is the finding of the present study that referral delays in 2009 were reduced following the course. Improvements in ambulance transportation, as well as the information imparted during the courses in these hospitals, may explain why referral delays were less frequent during the second data collection period. Of note, however, is the fact that the prevalence of reported deficiencies in initial care of patients referred from those hospitals did not decrease where the courses were given. A possible explanation for this finding may be that many of the medical officers who had received the instruction at the district hospitals had moved away from the subject hospitals during the months after the courses were given. Thus, many of the patients seen during the second data collection period may not have been treated by physicians who had undergone the trauma training. This underscores the challenges in creating sustainable and measurable changes in clinical practice using educational interventions of the type described.

What are the next steps? Continuing educational efforts at the district hospital level is important, but such efforts must be ongoing and targeted especially to newly arrived physicians. Now that the workload has been documented at the BOC, an important challenge lies in improving orthopedic care in the hospital, especially surgical intervention. Measuring the efficiency with which patients are managed at the BOC itself may provide further insight into whether available resources are adequate and are being properly utilized.
As alluded to previously, problems caused by delays in referral are compounded by delays in getting patients admitted to the BOC and by further delays in getting inpatients to surgery in a timely fashion. The next challenge at the BOC is to gather data on the number of patients whose treatment is delayed due to lack of hospital beds and the number of hospitalized patients who experience delays in getting to theater for their needed surgery. Data on the causes of delay in the surgical theaters are of importance. Are there sufficient theaters? Is the nursing staffing adequate? Are there sufficient anesthetists and surgeons? Are there sufficient supplies, equipment, and surgical implants? Further data collection aimed at answering these questions is needed to focus interventions designed to improve both the prehospital care and definitive management of patients with orthopedic trauma in this region." ]
[ "introduction", null, null, null, null, null, null, null, null, null ]
[]
Geographical information system and environmental epidemiology: a cross-sectional spatial analysis of the effects of traffic-related air pollution on population respiratory health.
21362158
Traffic-related air pollution is a potential risk factor for human respiratory health. A Geographical Information System (GIS) approach was used to examine whether distance from a main road (the Tosco-Romagnola road) affected respiratory health status.
BACKGROUND
We used data collected during an epidemiological survey performed in the Pisa-Cascina area (central Italy) in the period 1991-93. A total of 2841 subjects participated in the survey and filled out a standardized questionnaire on health status, socio-demographic information, and personal habits. A variable proportion of subjects performed lung function and allergy tests. Highly exposed subjects were defined as those living within 100 m of the main road, moderately exposed as those living between 100 and 250 m from the road, and unexposed as those living between 250 and 800 m from the road. Statistical analyses were conducted to compare the risks for respiratory symptoms and diseases between exposed and unexposed. All analyses were stratified by gender.
METHODS
The study comprised 2062 subjects: mean age was 45.9 years for men and 48.9 years for women. Compared to subjects living between 250 m and 800 m from the main road, subjects living within 100 m of the main road had increased adjusted risks for persistent wheeze (OR = 1.76, 95% CI = 1.08-2.87), COPD diagnosis (OR = 1.80, 95% CI = 1.03-3.08), and reduced FEV1/FVC ratio (OR = 2.07, 95% CI = 1.11-3.87) among males, and for dyspnea (OR = 1.61, 95% CI = 1.13-2.27), positivity to skin prick test (OR = 1.83, 95% CI = 1.11-3.00), asthma diagnosis (OR = 1.68, 95% CI = 0.97-2.88) and attacks of shortness of breath with wheeze (OR = 1.67, 95% CI = 0.98-2.84) among females.
RESULTS
This study points out the potential effects of traffic-related air pollution on respiratory health status, including lung function impairment. It also highlights the added value of GIS in environmental health research.
CONCLUSION
[ "Adolescent", "Adult", "Aged", "Aged, 80 and over", "Air Pollutants", "Child", "Child, Preschool", "Cross-Sectional Studies", "Environmental Health", "Female", "Geographic Information Systems", "Health Status", "Health Surveys", "Humans", "Infant", "Italy", "Male", "Middle Aged", "Motor Vehicles", "Particulate Matter", "Respiratory Function Tests", "Respiratory Tract Diseases", "Risk Assessment", "Risk Factors", "Surveys and Questionnaires", "Young Adult" ]
3056754
null
null
Methods
[SUBTITLE] Setting and study population [SUBSECTION] Since 1980 the Pulmonary Environmental Epidemiology Unit of the Institute of Clinical Physiology of the Italian National Research Council (CNR) has performed epidemiological surveys to assess the effects of outdoor [29-32] and indoor [33,34] air pollution on human health. From 1991 to 1993 a survey was conducted in the area of Pisa (Tuscany), along a main road, the Tosco-Romagnola road, connecting Pisa to Florence and characterized by high traffic volume (mean daily values ~14700 vehicles, measured during the hours 07.00-21.00). Figure 1 shows a map of the study area, representing the boundaries of the two municipalities involved (Pisa and Cascina), the main road, and the secondary streets. As shown in the map, the central and eastern side of the Tosco-Romagnola road had the typical characteristics of a suburban/rural area, with sparse buildings and intersections with very small streets, suggesting a major role of the Tosco-Romagnola road in air pollutant emissions. The last western part of the Tosco-Romagnola road, which enters the urban area of Pisa, has different characteristics with regard to other main roads, other air pollution sources, and other types of buildings.

Figure 1. Study area and geocoding of population sample. Map representing subjects geocoded at their own home residence addresses along the major road, the Tosco-Romagnola road. Each dot on the map represents a subject's home. The map indicates the road and railway network and buildings in the two municipalities.

Annual concentrations of total suspended particulates (TSP) were provided by the Pisa Province Unit of the Environmental Protection Agency, along with integrated measurements of sulfur dioxide (SO2): annual means were 24 μg/m3 for SO2 and 99 μg/m3 for TSP for the entire area around the main road. Subjects participating in the survey (n = 2841) were sampled using a multistage stratified family-cluster design. They were investigated with a protocol including the CNR questionnaire on respiratory symptoms/diseases and risk factors, lung function tests, skin prick tests, and blood samples for immunoglobulin E (IgE) determination [35].

[SUBTITLE] Exposure assessment [SUBSECTION] Subjects were integrated into a Geographical Information System. Geocoding was done using home residence addresses. For the geocoding, we used cartographic data provided by the GIS Service of the Pisa and Cascina municipalities: buildings, streets, topography, population addresses, and house numbers. We applied the address geocoding techniques provided by ArcMap 8.2 (ESRI): a file extracted from the epidemiological questionnaire containing participants' addresses (street names and house numbers) was matched with the vector data. The cartographic data provided by the Pisa and Cascina municipalities contained the exact location of house numbers, as in real life; a direct inspection was performed in case of ambiguity or uncertainty. Each subject is shown on the map as a precise mark corresponding to his/her home address, identified by street name and house number.

As described in the previous section, in order to minimize the effects of other air pollution sources (industries, other main roads), mainly on the last western part of the Tosco-Romagnola road, only subjects living within 800 m of the main road (n = 2062, i.e. 73% of the total sample) were selected: this cut-off also permitted us to exclude the more rural area of the central-eastern side, characterized by different kinds of buildings (villas, isolated houses) and by a different socio-economic status (Figure 1). It is important to underline that, for the central and eastern part of the road, more than 90% of our sample lived within 800 m of the road.

Distances of houses from the main road (the Tosco-Romagnola road) were used to assess traffic-related pollution exposure. Using GIS buffering and overlaying functionalities, we classified the population sample into three groups (see Figure 2): highly exposed (people living within 100 m of the main road), moderately exposed (people living between 100 m and 250 m from the main road) and unexposed subjects (people living between 250 m and 800 m from the main road). These cut-off values were selected based on the results of previous studies showing increased exposure and risk of respiratory symptoms within short distances from the roads [9-13,23].

Figure 2. Classification of subjects based on the distance of each home from the main road. Zoomed map representing the classification of subjects according to the distance of each home from the main road. Highly exposed subjects are those living in the buffer area 0-100 m from the road, moderately exposed subjects are those living in the buffer area 100-250 m, and unexposed subjects are those living between 250 and 800 m from the road.
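The distance-based exposure classification described above (buffers of 0-100 m, 100-250 m and 250-800 m around the road) was carried out in ArcMap 8.2, but the same idea can be sketched with open-source tools. The snippet below is only an illustration under assumed inputs: the shapefile names, the column names and the use of GeoPandas are not part of the original study, and both layers are assumed to share a projected coordinate reference system in metres.

```python
# Sketch of distance-based exposure classification (not the original ArcMap workflow).
# File names, layer contents and column names are hypothetical.
import geopandas as gpd

homes = gpd.read_file("geocoded_homes.shp")   # one point per subject's residence
road = gpd.read_file("tosco_romagnola.shp")   # the main road as a line layer
assert homes.crs == road.crs                  # both layers in a metric projected CRS

road_geom = road.geometry.unary_union          # single geometry for distance queries
homes["dist_m"] = homes.geometry.distance(road_geom)

def classify(d):
    if d <= 100:
        return "highly exposed"
    elif d <= 250:
        return "moderately exposed"
    elif d <= 800:
        return "unexposed (250-800 m)"
    return "excluded (>800 m)"

homes["exposure"] = homes["dist_m"].apply(classify)
print(homes["exposure"].value_counts())
```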
[SUBTITLE] Health outcomes assessment [SUBSECTION] Subjects filled out a CNR standardized interviewer-administered questionnaire. This was the Italian version of the National Heart, Lung, and Blood Institute (NHLBI, USA) questionnaire, including more than 70 questions on demographic aspects, general health status, lifestyle and potential risk factors (smoking habits, passive smoking exposure, occupational exposure, indoor exposure) [30].
The following respiratory symptoms/diseases were considered in the analyses:
• chronic cough (or phlegm): cough (or phlegm) apart from common colds for at least three months of the year for at least two years
• attacks of shortness of breath with wheeze: any attack of shortness of breath with wheeze, apart from common colds
• persistent wheeze: wheeze for at least six months of the year, apart from common colds
• dyspnea grade I+: shortness of breath when hurrying on level ground or walking up a slight hill (grade I dyspnea) or when walking on level ground with persons of the same age (grade II+ dyspnea)
• chronic obstructive pulmonary disease (COPD): reported diagnosis of emphysema or chronic bronchitis
• allergy symptoms: hay fever or any other condition making the nose runny or stuffy apart from common colds, eye redness, itching, burning and eczema
• asthma: reported diagnosis.
In addition, the investigated subjects underwent skin-prick tests for common airborne allergens, serum IgE determination, lung function tests and a nonspecific bronchial challenge test with methacholine. A skin-prick test result was considered positive if it yielded at least one wheal with a mean diameter of at least 3 mm (skin test_3 mm pos) or 5 mm (skin test_5 mm pos), after subtracting the diameter of the negative control reaction. Total serum immunoglobulin E (IgE) was measured and transformed into base-10 logarithm values (IgE_log) to obtain a normal distribution. Lung function tests comprised slow vital capacity, CO single-breath diffusing capacity (DLCO) and forced expirograms. Spirometry parameters were all expressed as % predicted [36], with the exception of the ratio between forced expiratory volume in the first second and forced vital capacity (FEV1/FVC) and the ratio between FEV1 and vital capacity (FEV1/VC), which were expressed as percentages of observed values. The results of the nonspecific bronchial challenge test with methacholine were expressed using a continuous variable characterizing bronchial reactivity, the slope of the dose-response curve; the slope was transformed using the natural logarithm (slope_ln) because the data distribution was highly skewed, and a small constant (+2.57) was added to allow logarithmic transformation of negative and zero values. A variable proportion of subjects agreed to perform these tests: skin prick tests were performed for 1608 subjects (78%), serum IgE determination for 1409 subjects (68%), lung function tests for 1402 (68%) and bronchial responsiveness to methacholine challenge for 859 (42%). Subjects involved in lung function and allergy tests, compared to those not involved, were more likely to be men, smokers, young/adult, exposed to passive smoking and with high education levels (data not shown).
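As a concrete illustration of the two transformations just described, the short sketch below applies the base-10 logarithm to total IgE and the natural logarithm (after adding the +2.57 constant) to the methacholine dose-response slope; the example values and column names are assumptions, not the study data.

import numpy as np
import pandas as pd

# Illustrative values only; 'ige_total' and 'mch_slope' are assumed column names.
df = pd.DataFrame({
    "ige_total": [12.0, 150.0, 480.0, 35.0],   # total serum IgE
    "mch_slope": [-1.8, 0.0, 3.4, 12.6],       # slope of the methacholine dose-response curve
})

# Total IgE: base-10 logarithm to approximate a normal distribution.
df["IgE_log"] = np.log10(df["ige_total"])

# Bronchial reactivity: natural logarithm of the slope after adding the small
# constant (+2.57) so that zero and negative slopes can be transformed.
df["slope_ln"] = np.log(df["mch_slope"] + 2.57)
print(df)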
[SUBTITLE] Potential confounders [SUBSECTION] The following potential confounders, collected through the questionnaire, were considered:
• age group: < 25, 25-64, > 64 years. These groups were chosen to allow comparisons with our previous studies [29]; moreover, these cut-offs identify the most susceptible categories, i.e. the young and the elderly
• smoking habits: non-smokers (those who had never smoked any kind of tobacco regularly); smokers (those who currently smoked at least one cigarette daily); ex-smokers (those who had smoked regularly in the past until six months or more before the examination, but did not smoke at the time of the examination)
• passive smoking exposure: exposure to the smoke of other people
• educational level: low (no education/primary school); medium (secondary school); high (high school/university)
• work position: manager/white collar, blue collar/farmer, merchant/craftsman and unemployed
• occupational exposure: exposure to fumes, gases, dust or chemicals in the working environment
• number of hours spent at home (home residence exposure): at least 15 h; less than 15 h
• time of residence: at least five years; less than five years
• type of self-reported environmental exposure: traffic; other exposure.
[SUBTITLE] Statistical analysis [SUBSECTION] The chi-square test was used to compare symptom/disease prevalence rates between exposed and unexposed subjects with respect to traffic-related pollution exposure. Separate analyses were performed for the two sexes. Objective test variables were analyzed both as continuous and as categorical variables. Comparison of the adjusted mean values of functional and allergologic parameters (IgE determination, bronchial reactivity and spirometry) among the three exposure classes was performed by analysis of variance (ANOVA). For lung function tests, mean values were adjusted for the effects of age and smoking habits; for the bronchial reactivity parameter, mean values were adjusted for age, smoking habits and predicted FEV1%. A post hoc test (Tukey test) was applied to all pairwise comparisons of the ANOVA results in order to identify which means differed significantly from the others. In addition, continuous variables were dichotomized and analyzed by chi-square: IgE and bronchial reactivity results were dichotomized at the 75th percentile (1.83 for the logarithm of IgE, IgE_log, and 2.22 for the logarithm of the slope of the dose-response bronchial reactivity curve, slope_ln); airway obstruction was defined as an observed FEV1/FVC% of less than 70%.
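The covariate-adjusted comparison of means and the Tukey post hoc testing described above could be sketched as follows with statsmodels; the synthetic data, the variable names and the use of Tukey's HSD on the raw outcome (rather than on covariate-adjusted means) are simplifying assumptions, not the original analysis code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Synthetic example data (illustrative only): observed FEV1/FVC%, exposure class,
# age and smoking habits for 300 hypothetical subjects.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "exposure": rng.choice(["0-100 m", "100-250 m", "250-800 m"], size=n),
    "age": rng.integers(8, 90, size=n),
    "smoking": rng.choice(["never", "ex", "current"], size=n),
})
df["fev1_fvc"] = 82 - 0.08 * df["age"] + rng.normal(0, 5, size=n)

# ANOVA of FEV1/FVC% across exposure classes, adjusted for age and smoking habits.
model = smf.ols("fev1_fvc ~ C(exposure) + age + C(smoking)", data=df).fit()
print(anova_lm(model, typ=2))

# Post hoc pairwise comparisons between exposure classes (Tukey HSD on the raw outcome).
tukey = pairwise_tukeyhsd(endog=df["fev1_fvc"], groups=df["exposure"], alpha=0.05)
print(tukey.summary())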
We applied multiple logistic regression models to assess the association between health outcomes and traffic-related pollution exposure, taking into account the role of independent risk factors. Odds ratios (OR) were stratified by sex and adjusted for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence. A p-value of less than 0.05 was considered statistically significant.
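A minimal sketch of such sex-stratified, confounder-adjusted logistic models is given below, assuming synthetic data and illustrative variable names (the statistical software used in the original analysis is not specified in the text).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic example data (illustrative only).
rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "sex": rng.choice(["M", "F"], size=n),
    "exposure": rng.choice(["0-100 m", "100-250 m", "250-800 m"], size=n),
    "age": rng.integers(8, 90, size=n),
    "education": rng.choice(["low", "medium", "high"], size=n),
    "smoking": rng.choice(["never", "ex", "current"], size=n),
    "work_position": rng.choice(["white_collar", "blue_collar", "merchant", "unemployed"], size=n),
    "passive_smoke": rng.integers(0, 2, size=n),
    "occup_exposure": rng.integers(0, 2, size=n),
    "hours_home_ge15": rng.integers(0, 2, size=n),
    "residence_ge5y": rng.integers(0, 2, size=n),
    "persistent_wheeze": rng.integers(0, 2, size=n),   # placeholder outcome
})

# Exposure classes with the 250-800 m group as the reference category.
formula = ("persistent_wheeze ~ C(exposure, Treatment(reference='250-800 m'))"
           " + age + C(education) + C(smoking) + C(work_position)"
           " + passive_smoke + occup_exposure + hours_home_ge15 + residence_ge5y")

# Separate (sex-stratified) logistic models; adjusted ORs with 95% CIs.
for sex, sub in df.groupby("sex"):
    fit = smf.logit(formula, data=sub).fit(disp=False)
    ors = np.exp(fit.params).rename("OR")
    ci = np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
    print(sex)
    print(pd.concat([ors, ci], axis=1))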
null
null
null
null
[ "Background", "Setting and study population", "Exposure assessment", "Health outcomes assessment", "Potential confounders", "Statistical analysis", "Results", "Discussion", "Health issues", "Gender stratification", "Advantages and disadvantages of study design", "Geographic issues", "Conclusion", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "In recent years, despite significant improvements in fuel and engine technology, emissions from traffic have become a major source of air pollution, mainly in urban areas. In Europe, exhaust from motor vehicle traffic is considered to be the most significant source of nitrogen oxides (NOx), carbon monoxide (CO) and non-methane volatile organic compounds (NMVOCs), as well as the second most important source of particulate emissions [1].\nStudies of long-term exposure to air pollution suggest an increased risk of chronic respiratory illnesses [2-4]. Short-term exposures to high concentrations have been associated with higher prevalence rates of bronchitis, asthma and respiratory symptoms [5-8].\nIn urban environments, mainly in areas where population and traffic density are relatively high, human exposure to hazardous substances is expected to be relevant. This is often the case near busy traffic axes in city centers where urban topography and microclimate may contribute to poor dispersion conditions. There is growing epidemiological evidence of increased frequency of respiratory symptoms among people living close to major roads [9-12]. Few studies have focused on the sex-specific associations between exposure to urban air pollution and respiratory health [13].\nAdvances in Geographical Information System (GIS) technology, along with the increasing availability of geographical data, have provided new opportunities for environmental epidemiologists to study associations between environmental exposure and spatial distribution of disease. GIS permits spatial linking of different types of data (e.g., residential addresses, environmental exposure levels, demographic information), as well as automated address matching, buffer analysis, spatial query and polygon overlay analysis [14,15].\nA number of methods for exposure assessment have been developed. Researchers have generally used self-reported or measured traffic density, and self-reported or measured distance of the home from the nearest street [16-19]. Other teams have used both sets of information, traffic density and distance [20-23]. Traffic air pollutant dispersion models and land use regression (LUR) models have been developed to improve the estimation of exposure levels [15,24-26], and they have sometimes been used in epidemiology [14,27,28].\nThe aim of our study was to evaluate the sex-specific associations between living near a main road (assessed through a GIS-based methodology) and respiratory health status in a general population sample.", "Since 1980 the Pulmonary Environmental Epidemiology Unit of the Institute of Clinical Physiology of the Italian National Research Council (CNR) has performed epidemiological surveys to assess the effects of outdoor [29-32] and indoor [33,34] air pollution on human health.\nFrom 1991 to 1993 a survey was conducted in the area of Pisa (Tuscany), along a main road, called the Tosco-Romagnola road, connecting Pisa to Florence and characterized by high traffic volume (mean daily values ~14700 vehicles, measured during the hours 07.00-21.00). Figure 1 shows a map of the study area, representing the boundaries of the two municipalities involved (Pisa and Cascina), the main road, and the secondary streets. As shown in the map, the central and eastern side of the Tosco-Romagnola road had typical characteristics of a suburban/rural area with sparse buildings and intersections with very small streets, suggesting a major role of the Tosco-Romagnola road in air pollutant emissions. 
The last western part of the Tosco-Romagnola road, which enters the urban area of Pisa, has different characteristics with regard to other main roads, other air pollution sources, and other types of buildings.\nStudy area and geocoding of population sample. Map representing subjects geocoding in their own home residence address, along the major road, the Tosco-Romagnola road. Each dot on the map represents each subject's home. The map indicates the road and railway network and buildings in the two municipalities.\nAnnual concentrations of total suspended particulates (TSP) were provided by the Pisa Province Unit of the Environmental Protection Agency, along with integrated measurements of sulfur dioxide (SO2): annual means were 24 μg/m3 for SO2 and 99 μg/m3 for TSP, for the entire area around the main road.\nSubjects participating in the survey (n = 2841) were sampled using a multistage stratified family-cluster design. They were investigated with a protocol including: the CNR questionnaire on respiratory symptoms/diseases and risk factors, lung function tests, skin prick tests, and blood samples for immunoglobulin E (IgE) determination [35].", "Subjects were integrated in a Geographical Information System. Geocoding was done using home residence addresses. For the subjects geocoding, we used cartographic data provided by the GIS Service of Pisa and Cascina municipalities: buildings, streets, topography, population addresses, and house numbers. We applied addresses geocoding techniques provided by ArcMap 8.2 (ESRI): a file extracted from the epidemiological questionnaire containing participants' addresses (street names and house numbers) was matched with vector data. Cartographic data provided by Pisa and Cascina municipalities contained the exact location of house numbers, as in real life; a direct inspection was performed in case of ambiguity or uncertainty. Each subject is shown on a map as a precise mark corresponding to his/her home address, identified by street name and house number.\nAs described in the previous section, in order to minimize the effects of other air pollution sources (industries, other main roads), mainly on the last western part of the Tosco-Romagnola road, only subjects living within 800 m of the main road (n = 2062, i.e. 73% of the total sample) were selected: this cut-off permitted us to also exclude the more rural area of the central-eastern side, characterized by a different kind of buildings (villas, isolated houses) and by different socio-economic status (Figure 1). It is important to underline that for the central and eastern part of the road more than 90% of our sample lived within 800 m of the road.\nDistances of houses from the main road (the Tosco-Romagnola road) were used to assess traffic-related pollution exposure. Using GIS buffering and overlaying functionalities, we classified the population sample in three groups (see Figure 2): highly exposed (people living within 100 m of the main road), moderately exposed (people living between 100 m and 250 m from the main road) and unexposed subjects (people living between 250 m and 800 m from the main road). These cut-off values were selected based on the results of previous studies showing increased exposure and risk of respiratory symptoms within short distances from the roads [9-13,23].\nClassification of subjects based on the distance of each home from the main road. Zoomed map representing the classification of subjects according to the distance of each home from the main road. 
Highly exposed subjects are those living in the buffer area 0-100 m from the road, moderately exposed subjects living in the buffer area 100-250 m and unexposed are those living between 250 and 800 m from the road.", "Subjects filled out a CNR standardized interviewer-administered questionnaire. This was the Italian version of the National Heart Blood and Lung Institute (NHBLI, USA) questionnaire including more than 70 questions on demographic aspects, general health status, lifestyle, potential risk factors (smoking habits, passive smoking exposure, occupational exposure, indoor exposure) [30].\nThe following respiratory symptoms/diseases were considered for the analyses:\n• chronic cough (or phlegm): cough (or phlegm) apart from common colds for at least three months of the year for at least two years\n• attacks of shortness of breath with wheeze: any attack of shortness of breath with wheeze, apart from common colds\n• persistent wheeze: wheeze, for at least six months of the year, apart from common colds\n• dyspnea I+ grade: shortness of breath when hurrying on level ground or walking up a slight hill (I grade dyspnea) or when walking on level ground with persons of the same age (II+ grade dyspnea)\n• chronic obstructive pulmonary disease (COPD): reported diagnosis of emphysema or chronic bronchitis\n• allergy symptoms: hay fever or any other condition making the nose runny or stuffy, apart from common colds, eye redness, itching, burning and eczema\n• asthma: reported diagnosis.\nIn addition, the investigated subjects performed skin-prick tests for common airborne allergens, serum IgE determination, lung function tests and nonspecific bronchial challenge test with methacholine.\nA skin-prick test result was considered positive if it yielded at least one wheal with a mean diameter of at least 3 mm (skin test_3 mm pos) or 5 mm (skin test_5 mm pos), after subtracting the diameter of the negative control reaction.\nTotal serum immunoglobulin E (IgE) was measured and transformed in logarithm10 values (IgE_log) to obtain a normal distribution.\nLung function tests were carried out: slow vital capacity, CO single breath diffusing capacity (DLCO) and forced expirograms. Values for spirometry parameters were all expressed in % predicted [36], with the exception of the ratio between forced expiratory volume in the first second and forced vital capacity (FEV1/FVC) and of the ratio between FEV1 and vital capacity (FEV1/VC), which were expressed in percentage of observed values.\nThe results of the non-specific bronchial challenge test with methacholine were expressed using a continuous variable to characterize bronchial reactivity, the slope of the dose-response curve; the slope was transformed using the natural logarithm (slope_ln) because the data distribution was highly skewed, and a small constant (+ 2.57) was added to allow logarithmic transformation of negative and zero values.\nA variable proportion of subjects agreed to perform these tests. Skin prick tests were performed for 1608 subjects (78%); serum IgE determination for 1409 subjects (68%); lung function tests for 1402 (68%); and bronchial responsiveness to methacholine challenge for 859 (42%). Subjects involved in lung function and allergy tests, compared to those not involved, were more likely men, smokers, young/adults, exposed to passive smoking and with high education levels (data not shown).", "The following potential confounders, collected through questionnaire, were considered:\n• age groups: < 25, 25-64, > 64 years. 
These groups were chosen in order to make possible comparisons with our previous studies [29]. Moreover, these cut-offs allow us to identify the most susceptible categories, i.e. the young people and the elderly\n• smoking habits: non-smokers (those who had never smoked any kind of tobacco regularly); smokers (those who currently smoked at least one cigarette daily); ex-smokers (those who had smoked regularly in the past until six months or more before the examination, but did not smoke at the moment of the examination)\n• passive smoking exposure: exposure to the smoke from other people\n• educational level: low (no education/primary school); medium (secondary school); high (high school/university)\n• work position: manager/white collar, blue collar/farmer, merchant/craftsman and unemployed\n• occupational exposure: exposure to fumes, gases, dust or chemicals in working environments\n• number of hours spent at home (home residence exposure): more than or equal to 15 h; less than 15 h\n• time of residence: more than or equal to five years; less than five years\n• type of self-reported environmental exposure: traffic; other exposure.", "Chi-square test was used to compare symptoms/diseases prevalence rates between exposed and unexposed subjects regarding traffic-related pollution exposure. Separate analyses were performed for both sexes.\nObjective test variables were analyzed either as continuous or categorical variables. Comparison of adjusted mean values of functional and allergologic parameters (IgE determination, bronchial reactivity and spirometry) among the three exposure classes was performed by analysis of variance (ANOVA). For lung function tests, the mean values were adjusted for the effects of age and smoking habits; for bronchial reactivity parameters, mean values were adjusted for age, smoking habits and predicted FEV1%. Post hoc test (Tukey test) was applied to perform all pair-wise comparisons of the ANOVA results in order to identify which means were significantly different from the others.\nIn addition, continuous variables were dichotomized and analyzed by chi-square: IgE and bronchial reactivity results were dichotomized through the 75th percentile: 1.83 for the logarithm of IgE (IgE_log) and 2.22 for the logarithm of the slope of the dose-response bronchial reactivity curve (slope_ln); airway obstruction was defined as having an observed FEV1/FVC% less than 70%.\nWe applied multiple logistic regression models to assess the association between health outcomes and traffic-related pollution exposure taking into account the role of the independent risk factors. Odds ratios (OR) were stratified by sex and adjusted for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nA p-value less than 0.05 was considered statistically significant.", "The study included 2062 subjects: mean age was 45.9 years for men (range 8-93 years) and 48.9 years for women (range 8-97 years). Children (0-14 years) comprised 5% of the study sample. General characteristics of the population sample are reported in Table 1, for men and women. A different distribution of potential confounding factors was observed between genders: females were older than males, current and previous smoking was more frequent in males than in females. Men had a higher education level and socio-economic status, but also a higher frequency of occupational exposure. 
Women tended to spend more time at home than men, as well as to perceiving vehicular traffic in the street of residence more frequently. Over 85% of the population had been living in the same house for more than five years.\nGeneral characteristics of the population sample by gender.\n*p-values from Pearson chi-square test\nThe average (± standard deviation) distance of subjects' residences from the main road was 239 ± 189 m (median 200 m; minimum 1.5 m; maximum 785 m). Table 2 reports general characteristics of the population when stratified by the three distance classes and by sex. Among females, the elderly tended to live closer to the main road than younger people; females living within 100 m of the road tended to have lower socio-economic status and less passive smoking exposure and occupational exposure than females living farther away. Variables about self-reported perception of environmental exposure were correlated to distance classes used to define traffic-related exposure in both males and females, with subjects living within 100 m from the main road showing the highest self-reported exposure to traffic.\nDistribution of confounding factors by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) from Pearson chi-square test; comparison between subjects living at different distances from the main road, separately in males and in females\nAs regards symptoms/diseases, persistent wheeze and COPD showed significantly higher prevalence rates in males living within 100 m of the main road; attacks of shortness of breath with wheeze, dyspnea and asthma showed significantly higher prevalence rates in females living within 100 m of the main road (Table 3).\nPrevalence rates of symptoms/diseases by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) from Pearson chi-square test; comparison between subjects living at different distances from the main road, separately in males and in females\nResults of the comparison of adjusted mean values of functional and allergologic parameters among the exposure classes stratified by sex are reported in Table 4. Significantly lower FEV1/VC% and FEV1/FVC% values were observed in exposed males. The Tukey test highlighted that all significant p-values were associated with differences between subjects living within 100 m of the road (the highly exposed class) and subjects living between 250 m and 800 m from the road (unexposed). There were no significant differences among groups for the logarithm of serum IgE values, nor for the logarithm of the slope of the bronchial reactivity dose-response curve, though for the latter there was a trend between the three exposure classes.\nComparison of adjusted mean values of functional and allergologic parameters by the distance classes in males and females.\n§ Values are expressed in % predicted, with the exception of FEV1/FVC% and FEV1/VC% which are expressed in % observed\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) by analyses of variance; comparison between subjects living at different distances from the main road, separately in males and in females\nFor all parameters mean values are adjusted for the effects of age and smoking habits, with the exception of the bronchial reactivity parameter where mean values are adjusted for age, smoking habits and predicted %FEV1\nTable 5 reports chi-square results for dichotomized test outcomes stratified by sex. 
Significantly higher values were shown in exposed subjects for observed FEV1/FVC% <70% (in males) and for skin test ≥5 mm positivity (in females). Although it was quite weak and not statistically significant, a trend could also be highlighted for skin prick test ≥3 mm positivity in females and bronchial reactivity in males.\nComparison of prevalence rates of tests variables by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline); comparison between subjects living at different distances from the main road, separately in males and in females\nTable 6 shows the statistically significant results (OR and 95% confidence intervals-CI) obtained from the multiple logistic regression models stratified by sex.\nEffects of distance of residence to main road on respiratory symptoms/diseases and dichotomized test outcomes: OR† and 95% CI.\n† OR adjusted for age, educational level, smoking habits, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence, calculated with subjects living between 250-800 m as the reference group\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline)\nCompared to subjects living between 250 m and 800 m from the road, there were increased risks among males living within 100 m of the main road for persistent wheeze, COPD, and observed FEV1/FVC% < 70%; among males living between 100-250 m from the road, there were significantly increased risks for FEV1/FVC% < 70% and FEV1/VC% < 70%. A borderline significance was observed in men living between 100-250 m from the road for persistent wheeze. Increased risks were shown for dyspnea and for positivity to skin prick test ≥ 5 mm in females living within 100 m of the main road. Borderline effect estimates were observed for asthma, attacks of shortness of breath with wheeze in females living within 100 m of the road and for dyspnea in females living between 100-250 m from the road.\nWith regard to the estimated effects of potential confounders, our results confirmed what is well documented in the scientific literatures: risks for respiratory diseases were closely associated with age, smoking habits and low education levels (data not shown).", "[SUBTITLE] Health issues [SUBSECTION] Our study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. 
Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. [37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. [12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. [43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. 
For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. Anyway, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), might suggest that respiratory health impairments due to vehicular traffic exposure also occur at a distance greater than 100 m, as reported in other studies which have shown the negative effects of living near a busy road until a distance of 500 m [40,41].\nAs regards the atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that the children living near busy streets had significantly higher risk for allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46] the adjusted odds of skin-prick test positivity were significantly higher than one in concurrence with elevated PM2.5 concentrations in the proximity of the houses where the children lived.\nOur study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. [37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. [12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). 
Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. [43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. Anyway, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), might suggest that respiratory health impairments due to vehicular traffic exposure also occur at a distance greater than 100 m, as reported in other studies which have shown the negative effects of living near a busy road until a distance of 500 m [40,41].\nAs regards the atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that the children living near busy streets had significantly higher risk for allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46] the adjusted odds of skin-prick test positivity were significantly higher than one in concurrence with elevated PM2.5 concentrations in the proximity of the houses where the children lived.\n[SUBTITLE] Gender stratification [SUBSECTION] Although there is a growing epidemiological evidence of various associations between air pollution and respiratory health for males and females, few studies reported results stratified by gender in adults. Airway behaviour is influenced by sex-related (biological) and gender-related (socio-cultural) determinants; these aspects can interact to several degrees and directions with environmental exposures, differently in women and men. 
There may also be sex-based differences in susceptibility to the same environmental exposures [13]. These features can explain the different associations between sex and traffic air pollution found in our study.\nOur approach focusing on the sex-specific effects pattern was also justified by the clear diversification between genders in the distribution of many confounding factors included in the analyses. Due to their prevalent occupation (housewives), females resulted more exposed to home residence environmental conditions; while men reported a greater risk for occupational and tobacco exposures.\nAlthough there is a growing epidemiological evidence of various associations between air pollution and respiratory health for males and females, few studies reported results stratified by gender in adults. Airway behaviour is influenced by sex-related (biological) and gender-related (socio-cultural) determinants; these aspects can interact to several degrees and directions with environmental exposures, differently in women and men. There may also be sex-based differences in susceptibility to the same environmental exposures [13]. These features can explain the different associations between sex and traffic air pollution found in our study.\nOur approach focusing on the sex-specific effects pattern was also justified by the clear diversification between genders in the distribution of many confounding factors included in the analyses. Due to their prevalent occupation (housewives), females resulted more exposed to home residence environmental conditions; while men reported a greater risk for occupational and tobacco exposures.\n[SUBTITLE] Advantages and disadvantages of study design [SUBSECTION] The main strengths of this study were the large sample size, the standard protocols, which already had passed the scrutiny of independent reviewers, and the multi-faceted aspects collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.\nThe main strengths of this study were the large sample size, the standard protocols, which already had passed the scrutiny of independent reviewers, and the multi-faceted aspects collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. 
lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.\n[SUBTITLE] Geographic issues [SUBSECTION] Address geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.\nAddress geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. 
Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.", "Our study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. [37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. [12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. 
Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. [43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. Anyway, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), might suggest that respiratory health impairments due to vehicular traffic exposure also occur at a distance greater than 100 m, as reported in other studies which have shown the negative effects of living near a busy road until a distance of 500 m [40,41].\nAs regards the atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that the children living near busy streets had significantly higher risk for allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46] the adjusted odds of skin-prick test positivity were significantly higher than one in concurrence with elevated PM2.5 concentrations in the proximity of the houses where the children lived.", "Although there is a growing epidemiological evidence of various associations between air pollution and respiratory health for males and females, few studies reported results stratified by gender in adults. Airway behaviour is influenced by sex-related (biological) and gender-related (socio-cultural) determinants; these aspects can interact to several degrees and directions with environmental exposures, differently in women and men. There may also be sex-based differences in susceptibility to the same environmental exposures [13]. These features can explain the different associations between sex and traffic air pollution found in our study.\nOur approach focusing on the sex-specific effects pattern was also justified by the clear diversification between genders in the distribution of many confounding factors included in the analyses. Due to their prevalent occupation (housewives), females resulted more exposed to home residence environmental conditions; while men reported a greater risk for occupational and tobacco exposures.", "The main strengths of this study were the large sample size, the standard protocols, which already had passed the scrutiny of independent reviewers, and the multi-faceted aspects collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. 
However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.", "Address geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.", "Our study points out that living in the proximity of the main road (within 100 m), as assessed by GIS technology, is associated with chronic respiratory problems, evaluated through both subjective (questionnaire) and objective (lung function and allergy tests) methods. 
In particular, living within 100 m of the main road was associated with higher risks for reporting persistent wheeze, COPD, and airway obstruction in males, as well as with higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick tests in females.\nIn addition, our study highlights the added value of close collaboration among researchers with different expertise, such as epidemiology and geographical information system science, when conducting environmental epidemiology studies.", "GIS: Geographical information system; NOx: Nitrogen oxides; CO: Carbon monoxide; NMVOCs: Non-methane volatile organic compounds; LUR model: Land use regression model; COPD: Chronic obstructive pulmonary disease; DLCO: CO single breath diffusing capacity; FEV1: Forced expiratory volume in the first second; FVC: Forced vital capacity; VC: Vital capacity; OR: Odds ratio; 95% CI: 95% Confidence intervals.", "The authors declare that they have no competing interests.", "Each author contributed to the conception and design of the work, the acquisition of data, or the analysis of the data in a manner substantial enough to take public responsibility for it. The authors believe the manuscript represents valid work. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Setting and study population", "Exposure assessment", "Health outcomes assessment", "Potential confounders", "Statistical analysis", "Results", "Discussion", "Health issues", "Gender stratification", "Advantages and disadvantages of study design", "Geographic issues", "Conclusion", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "In recent years, despite significant improvements in fuel and engine technology, emissions from traffic have become a major source of air pollution, mainly in urban areas. In Europe, exhaust from motor vehicle traffic is considered to be the most significant source of nitrogen oxides (NOx), carbon monoxide (CO) and non-methane volatile organic compounds (NMVOCs), as well as the second most important source of particulate emissions [1].\nStudies of long-term exposure to air pollution suggest an increased risk of chronic respiratory illnesses [2-4]. Short-term exposures to high concentrations have been associated with higher prevalence rates of bronchitis, asthma and respiratory symptoms [5-8].\nIn urban environments, mainly in areas where population and traffic density are relatively high, human exposure to hazardous substances is expected to be relevant. This is often the case near busy traffic axes in city centers where urban topography and microclimate may contribute to poor dispersion conditions. There is growing epidemiological evidence of increased frequency of respiratory symptoms among people living close to major roads [9-12]. Few studies have focused on the sex-specific associations between exposure to urban air pollution and respiratory health [13].\nAdvances in Geographical Information System (GIS) technology, along with the increasing availability of geographical data, have provided new opportunities for environmental epidemiologists to study associations between environmental exposure and spatial distribution of disease. GIS permits spatial linking of different types of data (e.g., residential addresses, environmental exposure levels, demographic information), as well as automated address matching, buffer analysis, spatial query and polygon overlay analysis [14,15].\nA number of methods for exposure assessment have been developed. Researchers have generally used self-reported or measured traffic density, and self-reported or measured distance of the home from the nearest street [16-19]. Other teams have used both sets of information, traffic density and distance [20-23]. Traffic air pollutant dispersion models and land use regression (LUR) models have been developed to improve the estimation of exposure levels [15,24-26], and they have sometimes been used in epidemiology [14,27,28].\nThe aim of our study was to evaluate the sex-specific associations between living near a main road (assessed through a GIS-based methodology) and respiratory health status in a general population sample.", "[SUBTITLE] Setting and study population [SUBSECTION] Since 1980 the Pulmonary Environmental Epidemiology Unit of the Institute of Clinical Physiology of the Italian National Research Council (CNR) has performed epidemiological surveys to assess the effects of outdoor [29-32] and indoor [33,34] air pollution on human health.\nFrom 1991 to 1993 a survey was conducted in the area of Pisa (Tuscany), along a main road, called the Tosco-Romagnola road, connecting Pisa to Florence and characterized by high traffic volume (mean daily values ~14700 vehicles, measured during the hours 07.00-21.00). Figure 1 shows a map of the study area, representing the boundaries of the two municipalities involved (Pisa and Cascina), the main road, and the secondary streets. 
As shown in the map, the central and eastern side of the Tosco-Romagnola road had typical characteristics of a suburban/rural area with sparse buildings and intersections with very small streets, suggesting a major role of the Tosco-Romagnola road in air pollutant emissions. The last western part of the Tosco-Romagnola road, which enters the urban area of Pisa, has different characteristics with regard to other main roads, other air pollution sources, and other types of buildings.\nStudy area and geocoding of population sample. Map representing subjects geocoding in their own home residence address, along the major road, the Tosco-Romagnola road. Each dot on the map represents each subject's home. The map indicates the road and railway network and buildings in the two municipalities.\nAnnual concentrations of total suspended particulates (TSP) were provided by the Pisa Province Unit of the Environmental Protection Agency, along with integrated measurements of sulfur dioxide (SO2): annual means were 24 μg/m3 for SO2 and 99 μg/m3 for TSP, for the entire area around the main road.\nSubjects participating in the survey (n = 2841) were sampled using a multistage stratified family-cluster design. They were investigated with a protocol including: the CNR questionnaire on respiratory symptoms/diseases and risk factors, lung function tests, skin prick tests, and blood samples for immunoglobulin E (IgE) determination [35].\nSince 1980 the Pulmonary Environmental Epidemiology Unit of the Institute of Clinical Physiology of the Italian National Research Council (CNR) has performed epidemiological surveys to assess the effects of outdoor [29-32] and indoor [33,34] air pollution on human health.\nFrom 1991 to 1993 a survey was conducted in the area of Pisa (Tuscany), along a main road, called the Tosco-Romagnola road, connecting Pisa to Florence and characterized by high traffic volume (mean daily values ~14700 vehicles, measured during the hours 07.00-21.00). Figure 1 shows a map of the study area, representing the boundaries of the two municipalities involved (Pisa and Cascina), the main road, and the secondary streets. As shown in the map, the central and eastern side of the Tosco-Romagnola road had typical characteristics of a suburban/rural area with sparse buildings and intersections with very small streets, suggesting a major role of the Tosco-Romagnola road in air pollutant emissions. The last western part of the Tosco-Romagnola road, which enters the urban area of Pisa, has different characteristics with regard to other main roads, other air pollution sources, and other types of buildings.\nStudy area and geocoding of population sample. Map representing subjects geocoding in their own home residence address, along the major road, the Tosco-Romagnola road. Each dot on the map represents each subject's home. The map indicates the road and railway network and buildings in the two municipalities.\nAnnual concentrations of total suspended particulates (TSP) were provided by the Pisa Province Unit of the Environmental Protection Agency, along with integrated measurements of sulfur dioxide (SO2): annual means were 24 μg/m3 for SO2 and 99 μg/m3 for TSP, for the entire area around the main road.\nSubjects participating in the survey (n = 2841) were sampled using a multistage stratified family-cluster design. 
They were investigated with a protocol including: the CNR questionnaire on respiratory symptoms/diseases and risk factors, lung function tests, skin prick tests, and blood samples for immunoglobulin E (IgE) determination [35].\n[SUBTITLE] Exposure assessment [SUBSECTION] Subjects were integrated in a Geographical Information System. Geocoding was done using home residence addresses. For the subjects geocoding, we used cartographic data provided by the GIS Service of Pisa and Cascina municipalities: buildings, streets, topography, population addresses, and house numbers. We applied addresses geocoding techniques provided by ArcMap 8.2 (ESRI): a file extracted from the epidemiological questionnaire containing participants' addresses (street names and house numbers) was matched with vector data. Cartographic data provided by Pisa and Cascina municipalities contained the exact location of house numbers, as in real life; a direct inspection was performed in case of ambiguity or uncertainty. Each subject is shown on a map as a precise mark corresponding to his/her home address, identified by street name and house number.\nAs described in the previous section, in order to minimize the effects of other air pollution sources (industries, other main roads), mainly on the last western part of the Tosco-Romagnola road, only subjects living within 800 m of the main road (n = 2062, i.e. 73% of the total sample) were selected: this cut-off permitted us to also exclude the more rural area of the central-eastern side, characterized by a different kind of buildings (villas, isolated houses) and by different socio-economic status (Figure 1). It is important to underline that for the central and eastern part of the road more than 90% of our sample lived within 800 m of the road.\nDistances of houses from the main road (the Tosco-Romagnola road) were used to assess traffic-related pollution exposure. Using GIS buffering and overlaying functionalities, we classified the population sample in three groups (see Figure 2): highly exposed (people living within 100 m of the main road), moderately exposed (people living between 100 m and 250 m from the main road) and unexposed subjects (people living between 250 m and 800 m from the main road). These cut-off values were selected based on the results of previous studies showing increased exposure and risk of respiratory symptoms within short distances from the roads [9-13,23].\nClassification of subjects based on the distance of each home from the main road. Zoomed map representing the classification of subjects according to the distance of each home from the main road. Highly exposed subjects are those living in the buffer area 0-100 m from the road, moderately exposed subjects living in the buffer area 100-250 m and unexposed are those living between 250 and 800 m from the road.\nSubjects were integrated in a Geographical Information System. Geocoding was done using home residence addresses. For the subjects geocoding, we used cartographic data provided by the GIS Service of Pisa and Cascina municipalities: buildings, streets, topography, population addresses, and house numbers. We applied addresses geocoding techniques provided by ArcMap 8.2 (ESRI): a file extracted from the epidemiological questionnaire containing participants' addresses (street names and house numbers) was matched with vector data. 
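As an aside on the geocoding and distance-based classification just described, the following sketch shows how the three exposure classes could be derived with open-source GIS tools. It is illustrative only and is not the authors' actual workflow (which used ArcMap 8.2); the file names, column names, projection choice and use of the geopandas library are assumptions made for the example.

```python
# Illustrative sketch of the distance-based exposure classification
# (hypothetical file and column names; not the study's actual ArcMap workflow).
import geopandas as gpd
import pandas as pd

homes = gpd.read_file("subjects_geocoded.shp")   # one point per geocoded home address
road = gpd.read_file("tosco_romagnola.shp")      # main-road centreline

# Work in a metric CRS so distances come out in metres (EPSG:3003 is an assumption)
homes = homes.to_crs(epsg=3003)
road_geom = road.to_crs(epsg=3003).unary_union

# Distance from each home to the nearest point of the main road
homes["dist_m"] = homes.geometry.distance(road_geom)

# Keep subjects within 800 m of the road and assign the three classes used in the paper
homes = homes[homes["dist_m"] <= 800].copy()
homes["exposure"] = pd.cut(
    homes["dist_m"],
    bins=[0, 100, 250, 800],
    labels=["high (0-100 m)", "moderate (100-250 m)", "unexposed (250-800 m)"],
    include_lowest=True,
)
print(homes["exposure"].value_counts())
```

The buffer-and-overlay operations used in the study achieve the same grouping; computing point-to-line distances directly, as above, is simply a compact equivalent for this sketch.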
Cartographic data provided by Pisa and Cascina municipalities contained the exact location of house numbers, as in real life; a direct inspection was performed in case of ambiguity or uncertainty. Each subject is shown on a map as a precise mark corresponding to his/her home address, identified by street name and house number.\nAs described in the previous section, in order to minimize the effects of other air pollution sources (industries, other main roads), mainly on the last western part of the Tosco-Romagnola road, only subjects living within 800 m of the main road (n = 2062, i.e. 73% of the total sample) were selected: this cut-off permitted us to also exclude the more rural area of the central-eastern side, characterized by a different kind of buildings (villas, isolated houses) and by different socio-economic status (Figure 1). It is important to underline that for the central and eastern part of the road more than 90% of our sample lived within 800 m of the road.\nDistances of houses from the main road (the Tosco-Romagnola road) were used to assess traffic-related pollution exposure. Using GIS buffering and overlaying functionalities, we classified the population sample in three groups (see Figure 2): highly exposed (people living within 100 m of the main road), moderately exposed (people living between 100 m and 250 m from the main road) and unexposed subjects (people living between 250 m and 800 m from the main road). These cut-off values were selected based on the results of previous studies showing increased exposure and risk of respiratory symptoms within short distances from the roads [9-13,23].\nClassification of subjects based on the distance of each home from the main road. Zoomed map representing the classification of subjects according to the distance of each home from the main road. Highly exposed subjects are those living in the buffer area 0-100 m from the road, moderately exposed subjects living in the buffer area 100-250 m and unexposed are those living between 250 and 800 m from the road.\n[SUBTITLE] Health outcomes assessment [SUBSECTION] Subjects filled out a CNR standardized interviewer-administered questionnaire. 
This was the Italian version of the National Heart Blood and Lung Institute (NHBLI, USA) questionnaire including more than 70 questions on demographic aspects, general health status, lifestyle, potential risk factors (smoking habits, passive smoking exposure, occupational exposure, indoor exposure) [30].\nThe following respiratory symptoms/diseases were considered for the analyses:\n• chronic cough (or phlegm): cough (or phlegm) apart from common colds for at least three months of the year for at least two years\n• attacks of shortness of breath with wheeze: any attack of shortness of breath with wheeze, apart from common colds\n• persistent wheeze: wheeze, for at least six months of the year, apart from common colds\n• dyspnea I+ grade: shortness of breath when hurrying on level ground or walking up a slight hill (I grade dyspnea) or when walking on level ground with persons of the same age (II+ grade dyspnea)\n• chronic obstructive pulmonary disease (COPD): reported diagnosis of emphysema or chronic bronchitis\n• allergy symptoms: hay fever or any other condition making the nose runny or stuffy, apart from common colds, eye redness, itching, burning and eczema\n• asthma: reported diagnosis.\nIn addition, the investigated subjects performed skin-prick tests for common airborne allergens, serum IgE determination, lung function tests and nonspecific bronchial challenge test with methacholine.\nA skin-prick test result was considered positive if it yielded at least one wheal with a mean diameter of at least 3 mm (skin test_3 mm pos) or 5 mm (skin test_5 mm pos), after subtracting the diameter of the negative control reaction.\nTotal serum immunoglobulin E (IgE) was measured and transformed in logarithm10 values (IgE_log) to obtain a normal distribution.\nLung function tests were carried out: slow vital capacity, CO single breath diffusing capacity (DLCO) and forced expirograms. Values for spirometry parameters were all expressed in % predicted [36], with the exception of the ratio between forced expiratory volume in the first second and forced vital capacity (FEV1/FVC) and of the ratio between FEV1 and vital capacity (FEV1/VC), which were expressed in percentage of observed values.\nThe results of the non-specific bronchial challenge test with methacholine were expressed using a continuous variable to characterize bronchial reactivity, the slope of the dose-response curve; the slope was transformed using the natural logarithm (slope_ln) because the data distribution was highly skewed, and a small constant (+ 2.57) was added to allow logarithmic transformation of negative and zero values.\nA variable proportion of subjects agreed to perform these tests. Skin prick tests were performed for 1608 subjects (78%); serum IgE determination for 1409 subjects (68%); lung function tests for 1402 (68%); and bronchial responsiveness to methacholine challenge for 859 (42%). Subjects involved in lung function and allergy tests, compared to those not involved, were more likely men, smokers, young/adults, exposed to passive smoking and with high education levels (data not shown).\nSubjects filled out a CNR standardized interviewer-administered questionnaire. 
This was the Italian version of the National Heart Blood and Lung Institute (NHBLI, USA) questionnaire including more than 70 questions on demographic aspects, general health status, lifestyle, potential risk factors (smoking habits, passive smoking exposure, occupational exposure, indoor exposure) [30].\nThe following respiratory symptoms/diseases were considered for the analyses:\n• chronic cough (or phlegm): cough (or phlegm) apart from common colds for at least three months of the year for at least two years\n• attacks of shortness of breath with wheeze: any attack of shortness of breath with wheeze, apart from common colds\n• persistent wheeze: wheeze, for at least six months of the year, apart from common colds\n• dyspnea I+ grade: shortness of breath when hurrying on level ground or walking up a slight hill (I grade dyspnea) or when walking on level ground with persons of the same age (II+ grade dyspnea)\n• chronic obstructive pulmonary disease (COPD): reported diagnosis of emphysema or chronic bronchitis\n• allergy symptoms: hay fever or any other condition making the nose runny or stuffy, apart from common colds, eye redness, itching, burning and eczema\n• asthma: reported diagnosis.\nIn addition, the investigated subjects performed skin-prick tests for common airborne allergens, serum IgE determination, lung function tests and nonspecific bronchial challenge test with methacholine.\nA skin-prick test result was considered positive if it yielded at least one wheal with a mean diameter of at least 3 mm (skin test_3 mm pos) or 5 mm (skin test_5 mm pos), after subtracting the diameter of the negative control reaction.\nTotal serum immunoglobulin E (IgE) was measured and transformed in logarithm10 values (IgE_log) to obtain a normal distribution.\nLung function tests were carried out: slow vital capacity, CO single breath diffusing capacity (DLCO) and forced expirograms. Values for spirometry parameters were all expressed in % predicted [36], with the exception of the ratio between forced expiratory volume in the first second and forced vital capacity (FEV1/FVC) and of the ratio between FEV1 and vital capacity (FEV1/VC), which were expressed in percentage of observed values.\nThe results of the non-specific bronchial challenge test with methacholine were expressed using a continuous variable to characterize bronchial reactivity, the slope of the dose-response curve; the slope was transformed using the natural logarithm (slope_ln) because the data distribution was highly skewed, and a small constant (+ 2.57) was added to allow logarithmic transformation of negative and zero values.\nA variable proportion of subjects agreed to perform these tests. Skin prick tests were performed for 1608 subjects (78%); serum IgE determination for 1409 subjects (68%); lung function tests for 1402 (68%); and bronchial responsiveness to methacholine challenge for 859 (42%). Subjects involved in lung function and allergy tests, compared to those not involved, were more likely men, smokers, young/adults, exposed to passive smoking and with high education levels (data not shown).\n[SUBTITLE] Potential confounders [SUBSECTION] The following potential confounders, collected through questionnaire, were considered:\n• age groups: < 25, 25-64, > 64 years. These groups were chosen in order to make possible comparisons with our previous studies [29]. Moreover, these cut-offs allow us to identify the most susceptible categories, i.e. 
the young people and the elderly\n• smoking habits: non-smokers (those who had never smoked any kind of tobacco regularly); smokers (those who currently smoked at least one cigarette daily); ex-smokers (those who had smoked regularly in the past until six months or more before the examination, but did not smoke at the moment of the examination)\n• passive smoking exposure: exposure to the smoke from other people\n• educational level: low (no education/primary school); medium (secondary school); high (high school/university)\n• work position: manager/white collar, blue collar/farmer, merchant/craftsman and unemployed\n• occupational exposure: exposure to fumes, gases, dust or chemicals in working environments\n• number of hours spent at home (home residence exposure): more than or equal to 15 h; less than 15 h\n• time of residence: more than or equal to five years; less than five years\n• type of self-reported environmental exposure: traffic; other exposure.\nThe following potential confounders, collected through questionnaire, were considered:\n• age groups: < 25, 25-64, > 64 years. These groups were chosen in order to make possible comparisons with our previous studies [29]. Moreover, these cut-offs allow us to identify the most susceptible categories, i.e. the young people and the elderly\n• smoking habits: non-smokers (those who had never smoked any kind of tobacco regularly); smokers (those who currently smoked at least one cigarette daily); ex-smokers (those who had smoked regularly in the past until six months or more before the examination, but did not smoke at the moment of the examination)\n• passive smoking exposure: exposure to the smoke from other people\n• educational level: low (no education/primary school); medium (secondary school); high (high school/university)\n• work position: manager/white collar, blue collar/farmer, merchant/craftsman and unemployed\n• occupational exposure: exposure to fumes, gases, dust or chemicals in working environments\n• number of hours spent at home (home residence exposure): more than or equal to 15 h; less than 15 h\n• time of residence: more than or equal to five years; less than five years\n• type of self-reported environmental exposure: traffic; other exposure.\n[SUBTITLE] Statistical analysis [SUBSECTION] Chi-square test was used to compare symptoms/diseases prevalence rates between exposed and unexposed subjects regarding traffic-related pollution exposure. Separate analyses were performed for both sexes.\nObjective test variables were analyzed either as continuous or categorical variables. Comparison of adjusted mean values of functional and allergologic parameters (IgE determination, bronchial reactivity and spirometry) among the three exposure classes was performed by analysis of variance (ANOVA). For lung function tests, the mean values were adjusted for the effects of age and smoking habits; for bronchial reactivity parameters, mean values were adjusted for age, smoking habits and predicted FEV1%. 
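A minimal sketch of such a covariate-adjusted comparison of group means is given below. It assumes a hypothetical analysis file with columns named fev1_fvc (observed FEV1/FVC%), exposure, age and smoking; the file, the column names and the use of statsmodels are illustrative assumptions, not the authors' code.

```python
# Covariate-adjusted comparison of mean FEV1/FVC% across the three exposure classes
# (hypothetical column names; illustrative only).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")  # hypothetical analysis file

# Linear model: exposure class plus the adjustment covariates (age, smoking habits)
model = smf.ols("fev1_fvc ~ C(exposure) + age + C(smoking)", data=df).fit()

# Type-II ANOVA table: do adjusted means differ across exposure classes?
print(sm.stats.anova_lm(model, typ=2))

# Adjusted differences between classes can be read from the C(exposure) coefficients
print(model.params.filter(like="C(exposure)"))
```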
Post hoc test (Tukey test) was applied to perform all pair-wise comparisons of the ANOVA results in order to identify which means were significantly different from the others.\nIn addition, continuous variables were dichotomized and analyzed by chi-square: IgE and bronchial reactivity results were dichotomized through the 75th percentile: 1.83 for the logarithm of IgE (IgE_log) and 2.22 for the logarithm of the slope of the dose-response bronchial reactivity curve (slope_ln); airway obstruction was defined as having an observed FEV1/FVC% less than 70%.\nWe applied multiple logistic regression models to assess the association between health outcomes and traffic-related pollution exposure taking into account the role of the independent risk factors. Odds ratios (OR) were stratified by sex and adjusted for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nA p-value less than 0.05 was considered statistically significant.\nChi-square test was used to compare symptoms/diseases prevalence rates between exposed and unexposed subjects regarding traffic-related pollution exposure. Separate analyses were performed for both sexes.\nObjective test variables were analyzed either as continuous or categorical variables. Comparison of adjusted mean values of functional and allergologic parameters (IgE determination, bronchial reactivity and spirometry) among the three exposure classes was performed by analysis of variance (ANOVA). For lung function tests, the mean values were adjusted for the effects of age and smoking habits; for bronchial reactivity parameters, mean values were adjusted for age, smoking habits and predicted FEV1%. Post hoc test (Tukey test) was applied to perform all pair-wise comparisons of the ANOVA results in order to identify which means were significantly different from the others.\nIn addition, continuous variables were dichotomized and analyzed by chi-square: IgE and bronchial reactivity results were dichotomized through the 75th percentile: 1.83 for the logarithm of IgE (IgE_log) and 2.22 for the logarithm of the slope of the dose-response bronchial reactivity curve (slope_ln); airway obstruction was defined as having an observed FEV1/FVC% less than 70%.\nWe applied multiple logistic regression models to assess the association between health outcomes and traffic-related pollution exposure taking into account the role of the independent risk factors. Odds ratios (OR) were stratified by sex and adjusted for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nA p-value less than 0.05 was considered statistically significant.", "Since 1980 the Pulmonary Environmental Epidemiology Unit of the Institute of Clinical Physiology of the Italian National Research Council (CNR) has performed epidemiological surveys to assess the effects of outdoor [29-32] and indoor [33,34] air pollution on human health.\nFrom 1991 to 1993 a survey was conducted in the area of Pisa (Tuscany), along a main road, called the Tosco-Romagnola road, connecting Pisa to Florence and characterized by high traffic volume (mean daily values ~14700 vehicles, measured during the hours 07.00-21.00). Figure 1 shows a map of the study area, representing the boundaries of the two municipalities involved (Pisa and Cascina), the main road, and the secondary streets. 
As shown in the map, the central and eastern side of the Tosco-Romagnola road had typical characteristics of a suburban/rural area with sparse buildings and intersections with very small streets, suggesting a major role of the Tosco-Romagnola road in air pollutant emissions. The last western part of the Tosco-Romagnola road, which enters the urban area of Pisa, has different characteristics with regard to other main roads, other air pollution sources, and other types of buildings.\nStudy area and geocoding of population sample. Map representing subjects geocoding in their own home residence address, along the major road, the Tosco-Romagnola road. Each dot on the map represents each subject's home. The map indicates the road and railway network and buildings in the two municipalities.\nAnnual concentrations of total suspended particulates (TSP) were provided by the Pisa Province Unit of the Environmental Protection Agency, along with integrated measurements of sulfur dioxide (SO2): annual means were 24 μg/m3 for SO2 and 99 μg/m3 for TSP, for the entire area around the main road.\nSubjects participating in the survey (n = 2841) were sampled using a multistage stratified family-cluster design. They were investigated with a protocol including: the CNR questionnaire on respiratory symptoms/diseases and risk factors, lung function tests, skin prick tests, and blood samples for immunoglobulin E (IgE) determination [35].", "Subjects were integrated in a Geographical Information System. Geocoding was done using home residence addresses. For the subjects geocoding, we used cartographic data provided by the GIS Service of Pisa and Cascina municipalities: buildings, streets, topography, population addresses, and house numbers. We applied addresses geocoding techniques provided by ArcMap 8.2 (ESRI): a file extracted from the epidemiological questionnaire containing participants' addresses (street names and house numbers) was matched with vector data. Cartographic data provided by Pisa and Cascina municipalities contained the exact location of house numbers, as in real life; a direct inspection was performed in case of ambiguity or uncertainty. Each subject is shown on a map as a precise mark corresponding to his/her home address, identified by street name and house number.\nAs described in the previous section, in order to minimize the effects of other air pollution sources (industries, other main roads), mainly on the last western part of the Tosco-Romagnola road, only subjects living within 800 m of the main road (n = 2062, i.e. 73% of the total sample) were selected: this cut-off permitted us to also exclude the more rural area of the central-eastern side, characterized by a different kind of buildings (villas, isolated houses) and by different socio-economic status (Figure 1). It is important to underline that for the central and eastern part of the road more than 90% of our sample lived within 800 m of the road.\nDistances of houses from the main road (the Tosco-Romagnola road) were used to assess traffic-related pollution exposure. Using GIS buffering and overlaying functionalities, we classified the population sample in three groups (see Figure 2): highly exposed (people living within 100 m of the main road), moderately exposed (people living between 100 m and 250 m from the main road) and unexposed subjects (people living between 250 m and 800 m from the main road). 
These cut-off values were selected based on the results of previous studies showing increased exposure and risk of respiratory symptoms within short distances from the roads [9-13,23].\nClassification of subjects based on the distance of each home from the main road. Zoomed map representing the classification of subjects according to the distance of each home from the main road. Highly exposed subjects are those living in the buffer area 0-100 m from the road, moderately exposed subjects living in the buffer area 100-250 m and unexposed are those living between 250 and 800 m from the road.", "Subjects filled out a CNR standardized interviewer-administered questionnaire. This was the Italian version of the National Heart Blood and Lung Institute (NHBLI, USA) questionnaire including more than 70 questions on demographic aspects, general health status, lifestyle, potential risk factors (smoking habits, passive smoking exposure, occupational exposure, indoor exposure) [30].\nThe following respiratory symptoms/diseases were considered for the analyses:\n• chronic cough (or phlegm): cough (or phlegm) apart from common colds for at least three months of the year for at least two years\n• attacks of shortness of breath with wheeze: any attack of shortness of breath with wheeze, apart from common colds\n• persistent wheeze: wheeze, for at least six months of the year, apart from common colds\n• dyspnea I+ grade: shortness of breath when hurrying on level ground or walking up a slight hill (I grade dyspnea) or when walking on level ground with persons of the same age (II+ grade dyspnea)\n• chronic obstructive pulmonary disease (COPD): reported diagnosis of emphysema or chronic bronchitis\n• allergy symptoms: hay fever or any other condition making the nose runny or stuffy, apart from common colds, eye redness, itching, burning and eczema\n• asthma: reported diagnosis.\nIn addition, the investigated subjects performed skin-prick tests for common airborne allergens, serum IgE determination, lung function tests and nonspecific bronchial challenge test with methacholine.\nA skin-prick test result was considered positive if it yielded at least one wheal with a mean diameter of at least 3 mm (skin test_3 mm pos) or 5 mm (skin test_5 mm pos), after subtracting the diameter of the negative control reaction.\nTotal serum immunoglobulin E (IgE) was measured and transformed in logarithm10 values (IgE_log) to obtain a normal distribution.\nLung function tests were carried out: slow vital capacity, CO single breath diffusing capacity (DLCO) and forced expirograms. Values for spirometry parameters were all expressed in % predicted [36], with the exception of the ratio between forced expiratory volume in the first second and forced vital capacity (FEV1/FVC) and of the ratio between FEV1 and vital capacity (FEV1/VC), which were expressed in percentage of observed values.\nThe results of the non-specific bronchial challenge test with methacholine were expressed using a continuous variable to characterize bronchial reactivity, the slope of the dose-response curve; the slope was transformed using the natural logarithm (slope_ln) because the data distribution was highly skewed, and a small constant (+ 2.57) was added to allow logarithmic transformation of negative and zero values.\nA variable proportion of subjects agreed to perform these tests. 
Skin prick tests were performed for 1608 subjects (78%); serum IgE determination for 1409 subjects (68%); lung function tests for 1402 (68%); and bronchial responsiveness to methacholine challenge for 859 (42%). Subjects involved in lung function and allergy tests, compared to those not involved, were more likely to be men, smokers, young or middle-aged adults, exposed to passive smoking and with high education levels (data not shown).", "The following potential confounders, collected through questionnaire, were considered:\n• age groups: < 25, 25-64, > 64 years. These groups were chosen to allow comparisons with our previous studies [29]. Moreover, these cut-offs allow us to identify the most susceptible categories, i.e. young people and the elderly\n• smoking habits: non-smokers (those who had never smoked any kind of tobacco regularly); smokers (those who currently smoked at least one cigarette daily); ex-smokers (those who had smoked regularly in the past but had stopped at least six months before the examination)\n• passive smoking exposure: exposure to the smoke from other people\n• educational level: low (no education/primary school); medium (secondary school); high (high school/university)\n• work position: manager/white collar, blue collar/farmer, merchant/craftsman and unemployed\n• occupational exposure: exposure to fumes, gases, dust or chemicals in working environments\n• number of hours spent at home (home residence exposure): more than or equal to 15 h; less than 15 h\n• time of residence: more than or equal to five years; less than five years\n• type of self-reported environmental exposure: traffic; other exposure.", "Chi-square test was used to compare symptoms/diseases prevalence rates between exposed and unexposed subjects regarding traffic-related pollution exposure. Separate analyses were performed for both sexes.\nObjective test variables were analyzed either as continuous or categorical variables. Comparison of adjusted mean values of functional and allergologic parameters (IgE determination, bronchial reactivity and spirometry) among the three exposure classes was performed by analysis of variance (ANOVA). For lung function tests, the mean values were adjusted for the effects of age and smoking habits; for bronchial reactivity parameters, mean values were adjusted for age, smoking habits and predicted FEV1%. A post hoc test (Tukey test) was applied to perform all pair-wise comparisons of the ANOVA results in order to identify which means were significantly different from the others.\nIn addition, continuous variables were dichotomized and analyzed by chi-square: IgE and bronchial reactivity results were dichotomized at the 75th percentile: 1.83 for the logarithm of IgE (IgE_log) and 2.22 for the logarithm of the slope of the dose-response bronchial reactivity curve (slope_ln); airway obstruction was defined as having an observed FEV1/FVC% less than 70%.\nWe applied multiple logistic regression models to assess the association between health outcomes and traffic-related pollution exposure, taking into account the role of the independent risk factors. 
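The derived outcome variables and the adjusted models described in this subsection could be coded along the lines of the sketch below; the stratification by sex and the full covariate list are described immediately below. The data file, column names and use of statsmodels are assumptions made for illustration only, not the study's actual code.

```python
# Sketch of the derived outcomes and an adjusted, sex-stratified logistic model
# (hypothetical column names; not the study's actual code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")                      # hypothetical analysis file

# Derived variables as defined in the text
df["ige_log"] = np.log10(df["ige"])                      # log10 of total serum IgE
df["slope_ln"] = np.log(df["slope"] + 2.57)              # ln of reactivity slope + constant
df["obstruction"] = (df["fev1_fvc"] < 70).astype(int)    # observed FEV1/FVC% < 70%
df["ige_high"] = (df["ige_log"] > df["ige_log"].quantile(0.75)).astype(int)

# Sex-stratified logistic regression adjusted for the listed confounders
males = df[df["sex"] == "M"]
fit = smf.logit(
    "obstruction ~ C(exposure) + C(age_group) + C(education) + C(smoking)"
    " + passive_smoke + occupational_exp + C(work_position) + hours_home_15h + residence_5y",
    data=males,
).fit()
print(np.exp(fit.params))       # odds ratios for each model term
print(np.exp(fit.conf_int()))   # 95% confidence intervals for the odds ratios
```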
Odds ratios (OR) were stratified by sex and adjusted for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nA p-value less than 0.05 was considered statistically significant.", "The study included 2062 subjects: mean age was 45.9 years for men (range 8-93 years) and 48.9 years for women (range 8-97 years). Children (0-14 years) comprised 5% of the study sample. General characteristics of the population sample are reported in Table 1, for men and women. A different distribution of potential confounding factors was observed between genders: females were older than males, current and previous smoking was more frequent in males than in females. Men had a higher education level and socio-economic status, but also a higher frequency of occupational exposure. Women tended to spend more time at home than men, as well as to perceiving vehicular traffic in the street of residence more frequently. Over 85% of the population had been living in the same house for more than five years.\nGeneral characteristics of the population sample by gender.\n*p-values from Pearson chi-square test\nThe average (± standard deviation) distance of subjects' residences from the main road was 239 ± 189 m (median 200 m; minimum 1.5 m; maximum 785 m). Table 2 reports general characteristics of the population when stratified by the three distance classes and by sex. Among females, the elderly tended to live closer to the main road than younger people; females living within 100 m of the road tended to have lower socio-economic status and less passive smoking exposure and occupational exposure than females living farther away. Variables about self-reported perception of environmental exposure were correlated to distance classes used to define traffic-related exposure in both males and females, with subjects living within 100 m from the main road showing the highest self-reported exposure to traffic.\nDistribution of confounding factors by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) from Pearson chi-square test; comparison between subjects living at different distances from the main road, separately in males and in females\nAs regards symptoms/diseases, persistent wheeze and COPD showed significantly higher prevalence rates in males living within 100 m of the main road; attacks of shortness of breath with wheeze, dyspnea and asthma showed significantly higher prevalence rates in females living within 100 m of the main road (Table 3).\nPrevalence rates of symptoms/diseases by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) from Pearson chi-square test; comparison between subjects living at different distances from the main road, separately in males and in females\nResults of the comparison of adjusted mean values of functional and allergologic parameters among the exposure classes stratified by sex are reported in Table 4. Significantly lower FEV1/VC% and FEV1/FVC% values were observed in exposed males. The Tukey test highlighted that all significant p-values were associated with differences between subjects living within 100 m of the road (the highly exposed class) and subjects living between 250 m and 800 m from the road (unexposed). 
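For reference, the pairwise post-hoc comparison referred to here can be illustrated with the short sketch below (hypothetical file and column names). Note that this simple call compares raw group means, whereas the analysis reported above used means adjusted for age and smoking habits, so it is only an approximation of the published procedure.

```python
# Illustrative Tukey HSD pairwise comparison of FEV1/FVC% across exposure classes
# (raw, unadjusted means; hypothetical column names).
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("survey_data.csv")               # hypothetical analysis file
sub = df.dropna(subset=["fev1_fvc", "exposure"])  # complete cases only
tukey = pairwise_tukeyhsd(endog=sub["fev1_fvc"], groups=sub["exposure"], alpha=0.05)
print(tukey.summary())
```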
There were no significant differences among groups for the logarithm of serum IgE values, nor for the logarithm of the slope of the bronchial reactivity dose-response curve, though for the latter there was a trend between the three exposure classes.\nComparison of adjusted mean values of functional and allergologic parameters by the distance classes in males and females.\n§ Values are expressed in % predicted, with the exception of FEV1/FVC% and FEV1/VC% which are expressed in % observed\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline) by analyses of variance; comparison between subjects living at different distances from the main road, separately in males and in females\nFor all parameters mean values are adjusted for the effects of age and smoking habits, with the exception of the bronchial reactivity parameter where mean values are adjusted for age, smoking habits and predicted %FEV1\nTable 5 reports chi-square results for dichotomized test outcomes stratified by sex. Significantly higher values were shown in exposed subjects for observed FEV1/FVC% <70% (in males) and for skin test ≥5 mm positivity (in females). Although it was quite weak and not statistically significant, a trend could also be highlighted for skin prick test ≥3 mm positivity in females and bronchial reactivity in males.\nComparison of prevalence rates of tests variables by the distance classes in males and females.\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline); comparison between subjects living at different distances from the main road, separately in males and in females\nTable 6 shows the statistically significant results (OR and 95% confidence intervals-CI) obtained from the multiple logistic regression models stratified by sex.\nEffects of distance of residence to main road on respiratory symptoms/diseases and dichotomized test outcomes: OR† and 95% CI.\n† OR adjusted for age, educational level, smoking habits, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence, calculated with subjects living between 250-800 m as the reference group\n*** p < 0.001, ** p < 0.01, * p < 0.05, # 0.05 < p < 0.1 (borderline)\nCompared to subjects living between 250 m and 800 m from the road, there were increased risks among males living within 100 m of the main road for persistent wheeze, COPD, and observed FEV1/FVC% < 70%; among males living between 100-250 m from the road, there were significantly increased risks for FEV1/FVC% < 70% and FEV1/VC% < 70%. A borderline significance was observed in men living between 100-250 m from the road for persistent wheeze. Increased risks were shown for dyspnea and for positivity to skin prick test ≥ 5 mm in females living within 100 m of the main road. Borderline effect estimates were observed for asthma, attacks of shortness of breath with wheeze in females living within 100 m of the road and for dyspnea in females living between 100-250 m from the road.\nWith regard to the estimated effects of potential confounders, our results confirmed what is well documented in the scientific literatures: risks for respiratory diseases were closely associated with age, smoking habits and low education levels (data not shown).", "[SUBTITLE] Health issues [SUBSECTION] Our study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. 
This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. [37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. [12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. 
[43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. Anyway, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), might suggest that respiratory health impairments due to vehicular traffic exposure also occur at a distance greater than 100 m, as reported in other studies which have shown the negative effects of living near a busy road until a distance of 500 m [40,41].\nAs regards the atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that the children living near busy streets had significantly higher risk for allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46] the adjusted odds of skin-prick test positivity were significantly higher than one in concurrence with elevated PM2.5 concentrations in the proximity of the houses where the children lived.\nOur study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. 
[37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. [12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. [43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. 
In any case, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m) might suggest that respiratory health impairments due to vehicular traffic exposure also occur at distances greater than 100 m, as reported in other studies which have shown negative effects of living near a busy road up to a distance of 500 m [40,41].\nAs regards atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that children living near busy streets had a significantly higher risk of allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46], the adjusted odds of skin-prick test positivity were significantly greater than one in association with elevated PM2.5 concentrations near the children's homes.\n[SUBTITLE] Gender stratification [SUBSECTION] Although there is growing epidemiological evidence of associations between air pollution and respiratory health in both males and females, few studies have reported results stratified by gender in adults. Airway behaviour is influenced by sex-related (biological) and gender-related (socio-cultural) determinants; these aspects can interact with environmental exposures to varying degrees and in different directions in women and men. There may also be sex-based differences in susceptibility to the same environmental exposures [13]. These features can explain the sex-specific associations with traffic-related air pollution found in our study.\nOur focus on sex-specific effect patterns was also justified by the clearly different distribution between genders of many of the confounding factors included in the analyses. Because of their prevalent occupation (housewives), females were more exposed to home residence environmental conditions, while men reported greater occupational and tobacco exposures.\n[SUBTITLE] Advantages and disadvantages of study design [SUBSECTION] The main strengths of this study were the large sample size, the standardized protocols, which had already passed the scrutiny of independent reviewers, and the multi-faceted information collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. 
lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.\nThe main strengths of this study were the large sample size, the standard protocols, which already had passed the scrutiny of independent reviewers, and the multi-faceted aspects collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.\n[SUBTITLE] Geographic issues [SUBSECTION] Address geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. 
Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.\nAddress geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.", "Our study indicates respiratory health risks for people living in the proximity of a main road. In our study we used subjects' residence as a proxy for environmental exposure. This means that subjects living at the same distance from the main road are assumed to experience the same exposure levels. Since personal exposure can be influenced by many different factors related to each subject's life style, personal habits and exposure to other air pollution sources, we included in our analyses the effects of these confounding factors. Multiple logistic regression models were fitted to adjust for the effects of age, education, smoking, passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.\nAfter adjustment for such potential confounders, subjects with a residential address within 100 m of the main road had higher risks for reporting persistent wheeze, COPD, and for having airway obstruction in males, as well as higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick test in females.\nOur results are generally consistent with those reported by other authors who have analyzed the effects of traffic-related air pollution exposure on respiratory health status in adults.\nSignificantly higher risks for respiratory symptoms/diagnosis were reported in subjects living within 100 m of a major road by Lindgren et al. (Sweden) [37], Schikowski et al. (Germany) [12] and Cesaroni et al. (Italy) [38]: OR = 1.40 (95% CI 1.04-1.89) for asthma diagnosis, OR = 1.29 (95% CI 1.00-1.67) for asthma symptoms, OR = 1.64 (95% CI 1.11-2.41) for COPD diagnosis and OR = 1.53 (95% CI 1.10-2.13) for chronic bronchitis symptoms by Lindgren et al. [37]; OR = 1.24 (95% CI 1.03-1.49) for frequent cough by Schikowski et al. 
[12]; OR = 1.26 (95% CI 1.03-1.54) for rhinitis by Cesaroni et al. [38].\nA narrower exposure cut-off (75 m) was defined by Mc Connell et al. [18] in Southern Californian schoolchildren (aged 5-7 years): significant associations were found for lifetime asthma (OR = 1.29 95% CI 1.01-1.86), current asthma (OR = 1.50 95% CI 1.16-1.95) and wheezes (OR = 1.40 95% CI 1.09-1.78). Effects of residential proximity to roadways were greater in females, as in our study.\nSignificantly higher risks for wheezes and phlegm were reported in adults living within 50 m of a major road by Garshick et al. (USA) [9] and within 20 m by Bayer-Oglesby et al. (Switzerland) [39]: OR = 1.30 (95% CI 1.00-1.70) for persistent wheezes and OR = 1.15 (95% CI 1.00-1.31) for regular phlegm, respectively.\nIn our study we also found elevated risks for airway obstruction in males living within 100 m of the main road, as well as between 100 m and 250 m from the road.\nThe study by Kan et al. [40] in the USA provided evidence that lung function, as measured by FEV1 and FVC, is reduced in adults living within 150 m of major roads, especially among women. In contrast to our results, they did not find a significant association between FEV1/FVC ratio and indicators of traffic exposure.\nAdverse effects of traffic-related exposure on lung function have also been highlighted in other studies [41-43]. Gauderman et al. reported a reduced lung development in Californian children, with a not significant larger effect in boys than in girls [41]. Reduced lung function was reported by Forbes et al. [42] in English adults and by Abbey et al. [43] in Californian adults, with a larger effect in males.\nWith regard to the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), this might be due to higher values for some confounding factors; although the estimates are adjusted for these factors, they still probably could have some residual influences. Furthermore, a few factors, unconsidered in the present analyses, might have influenced these results. For example, subjects living within 100-250 m of the road had an higher prevalence of childhood respiratory troubles (chest cold, pertussis and bronchitis) (data not shown); in a previous study we had shown that subjects with childhood respiratory troubles had the lowest lung function values regardless of smoking habits [44]. 
Anyway, the highest values of airway obstruction observed in the intermediate exposure class (100-250 m), might suggest that respiratory health impairments due to vehicular traffic exposure also occur at a distance greater than 100 m, as reported in other studies which have shown the negative effects of living near a busy road until a distance of 500 m [40,41].\nAs regards the atopic status, we found elevated risks for skin test ≥ 5 mm positivity in females living within 100 m of the main road.\nA recent study on a very large sample of German children showed that the children living near busy streets had significantly higher risk for allergic sensitization (OR = 1.30, 95% CI 1.02-1.66) [45].\nIn the large population-based sample of 5338 schoolchildren of the French Six City Study [46] the adjusted odds of skin-prick test positivity were significantly higher than one in concurrence with elevated PM2.5 concentrations in the proximity of the houses where the children lived.", "Although there is a growing epidemiological evidence of various associations between air pollution and respiratory health for males and females, few studies reported results stratified by gender in adults. Airway behaviour is influenced by sex-related (biological) and gender-related (socio-cultural) determinants; these aspects can interact to several degrees and directions with environmental exposures, differently in women and men. There may also be sex-based differences in susceptibility to the same environmental exposures [13]. These features can explain the different associations between sex and traffic air pollution found in our study.\nOur approach focusing on the sex-specific effects pattern was also justified by the clear diversification between genders in the distribution of many confounding factors included in the analyses. Due to their prevalent occupation (housewives), females resulted more exposed to home residence environmental conditions; while men reported a greater risk for occupational and tobacco exposures.", "The main strengths of this study were the large sample size, the standard protocols, which already had passed the scrutiny of independent reviewers, and the multi-faceted aspects collected by means of the questionnaire. We also used quantitative respiratory and allergological outcomes (i.e. lung function, skin-prick test, IgE and bronchial hyper-responsiveness), which are not affected by the potential bias linked to the use of the questionnaire (recall bias).\nAs in any epidemiological study, residual confounding is still possible. However, we adjusted for known and potential confounders including demographic characteristics, personal socioeconomic status, lifestyle, work-related features, and cigarette smoking.\nAnother limitation was the cross-sectional nature of the study; we had no information about disease onset, making it hard to establish a temporal relationship between cause and effect. 
However, since asthma and COPD are known to be exacerbated by traffic-related air pollution, diseased subjects may have been more likely to move away from traffic, rather than towards it, and so a migrational bias would mainly be expected to dilute the effects.\nWe used a relatively simple proxy for exposure to traffic-related air pollution (distance to major roads): due to the large amount of input data required, we were not able to implement more sophisticated approaches; however, we found the distance method useful for an initial assessment of a potential environmental health hazard.", "Address geocoding can also introduce bias and errors [47-50] with potential effects on the results of epidemiological studies [51-53].\nAddress matching can be hindered by several factors, such as incomplete or inaccurate information in the address files, lack of standardization of street addresses, and lack of assignment of house numbers, especially in rural areas [50-52]. We succeeded in matching almost 100% of our sample, after a considerable effort to overcome the above-mentioned problems.\nEven if a match occurs, house numbering will not always provide the exact location since house numbers are assigned with no reference to the distance from the beginning of the street segment. Therefore, it is difficult to geocode an address unless the location of house numbers is identified on the map one by one.\nCartographic data provided by Pisa and Cascina municipalities contain the exact correspondence of house numbers to related buildings; a direct inspection was performed in case of ambiguity or uncertainty. Consequently, our study relied upon a detailed and precise geocoding of residential data, which is relatively infrequent for Italian public administrations.", "Our study points out that living in the proximity of the main road (within 100 m), as assessed by GIS technology, is associated with chronic respiratory problems, evaluated through both subjective (questionnaire) and objective (lung function and allergy tests) methods. In particular, living within 100 m of the main road was associated with higher risks for reporting persistent wheeze, COPD, and airway obstruction in males, as well as with higher risks for asthma, attacks of shortness of breath with wheezing, dyspnea and positivity to skin-prick tests in females.\nIn addition, our study highlights the added value of close collaboration among researchers with different expertise, such as epidemiology and geographical information system science, when conducting environmental epidemiology studies.", "GIS: Geographical information system; NOx: Nitrogen oxides; CO: Carbon monoxide; NMVOCs: Non-methane volatile organic compounds; LUR model: Land use regression model; COPD: Chronic obstructive pulmonary disease; DLCO: CO single breath diffusing capacity; FEV1: Forced expiratory volume in the first second; FVC: Forced vital capacity; VC: Vital capacity; OR: Odds ratio; 95% CI: 95% Confidence intervals.", "The authors declare that they have no competing interests.", "Each author contributed to the conception and design of the work, the acquisition of data, or the analysis of the data in a manner substantial enough to take public responsibility for it. The authors believe the manuscript represents valid work. All authors read and approved the final manuscript." ]
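The exposure metric and the regression analysis described in the sections above can be illustrated with a short sketch. Assuming geocoded residence coordinates in a projected (metric) coordinate system and a main-road centreline, the Python example below assigns the three distance bands used in the study (<100 m, 100-250 m, >250 m), fits a logistic regression adjusted for a few of the confounders mentioned (age, smoking, education), and converts the coefficients to odds ratios with 95% confidence intervals. The road geometry, column names, confounder subset and simulated data are hypothetical placeholders, not the authors' data or code.

```python
# Illustrative sketch only (not the authors' code or data): assign the
# distance-to-main-road exposure bands used in the study and estimate
# confounder-adjusted odds ratios from a logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from shapely.geometry import Point, LineString

# Hypothetical main-road centreline in a projected (metric) coordinate system.
main_road = LineString([(0, 0), (5000, 200)])

def exposure_band(x_m: float, y_m: float) -> str:
    """Classify a geocoded residence by its distance to the main road."""
    d = Point(x_m, y_m).distance(main_road)
    if d < 100:
        return "<100 m"
    if d <= 250:
        return "100-250 m"
    return ">250 m"

# Hypothetical subject table: geocoded coordinates, a binary outcome
# (e.g. persistent wheeze) and a few of the confounders named in the text.
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "x": rng.uniform(0, 5000, n),
    "y": rng.uniform(-1500, 1500, n),
    "wheeze": rng.binomial(1, 0.15, n),
    "age": rng.uniform(25, 75, n),
    "smoker": rng.binomial(1, 0.3, n),
    "education_years": rng.integers(5, 18, n),
})
df["band"] = [exposure_band(x, y) for x, y in zip(df["x"], df["y"])]

# Adjusted logistic regression; subjects living >250 m from the road are the
# reference category. The study reported results separately for males and
# females, so in practice this model would be fitted per stratum.
model = smf.logit(
    "wheeze ~ C(band, Treatment(reference='>250 m')) + age + smoker + education_years",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals, as quoted for the cited studies.
ci = model.conf_int()
print(pd.DataFrame({"OR": np.exp(model.params),
                    "lower": np.exp(ci[0]),
                    "upper": np.exp(ci[1])}))
```

In the study itself the coordinates came from the detailed municipal geocoding described above, and the fitted models additionally adjusted for passive smoking exposure, occupational exposure, working position, number of hours spent at home and time of residence.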
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
"Did the trial kill the intervention?" experiences from the development, implementation and evaluation of a complex intervention.
21362159
The development, implementation and evaluation of any new health intervention are complex. This paper uses experiences from the design, implementation and evaluation of a rehabilitation programme to shed light on, and prompt discussion around, some of the complexities involved in such an undertaking.
BACKGROUND
Semi-structured interviews were conducted with 15 trial participants and five members of staff at the conclusion of a trial evaluating a rehabilitation programme aimed at promoting recovery after stem cell transplantation.
METHODS
This study identified a number of challenges relating to the development and evaluation of complex interventions. The difficulty of providing a standardised intervention that was acceptable to patients was highlighted in the participant interviews. Trial participants and some members of staff found the concepts of equipoise and randomisation challenging, and there was discord between the psychosocial nature of the intervention and the predominant biomedical culture in which the research took place.
RESULTS
A lack of scientific evidence as to the efficacy of an intervention does not preclude staff and patients from holding strong views about its benefits. The evaluation of complex interventions should, where possible, facilitate, not restrict, that complexity. Within the local environment where the trial is conducted, acquiescence from those in positions of authority is insufficient; commitment to the trial is required.
CONCLUSIONS
[ "Disclosure", "Evidence-Based Practice", "Female", "Humans", "Interviews as Topic", "Male", "Medical Staff", "Middle Aged", "Randomized Controlled Trials as Topic", "Rehabilitation", "Research Design", "Stem Cell Transplantation", "Treatment Outcome" ]
3056847
null
null
Methods
The evaluation had a mixed-methods design, with a qualitative interview study following the completion of the randomised controlled trial. A full description of the trial and the quantitative results are reported elsewhere [8], but brief details are provided here to give context to the qualitative study that is the focus of this paper. The trial design chosen was a two-arm parallel study comparing structured rehabilitation in a hospital setting led by a team of health professionals (HPL programme) with a home-based, self-managed rehabilitation programme (SM programme). Participants were aged 18 years and over, and had been treated in the previous six to eight weeks with an autologous or allogeneic stem cell transplant. The primary outcome was change in physical functioning at six months. Potential participants were initially approached by a healthcare professional and then contacted by a member of the research team (LB); those willing to take part were then asked to provide formal written consent. Following consent and baseline data collection, LB informed participants of their allocation after telephoning an individual independent of the study team who held the randomisation lists. Participants randomised to the HPL programme were asked to attend the hospital once a week for 10 weeks to take part in a group session consisting of a tailored exercise programme, and a relaxation and information support session. Participants randomised to the SM programme were given an information pack that contained a home-based exercise programme and access to all the information and relaxation exercises provided on the HPL programme, this being a slight enhancement of standard care, in which rehabilitation was provided in a less structured and more ad hoc manner. Patients could not access the intervention outside of the trial. During a 14-month recruitment period, 144 potential participants were approached about the trial and 61 (42%) consented to take part. Common reasons for declining to take part were distance from home to hospital, and that the programme did not seem relevant. Fifty-eight people were randomised and 46 participants were followed up at six months. The main reasons for dropout were disease relapse and death.
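The allocation procedure described above, in which the researcher telephoned an individual independent of the study team who held the randomisation lists only after consent and baseline data collection, is a standard way of preserving allocation concealment. The paper does not state how the lists themselves were generated, so the sketch below (Python) simply illustrates one common option, a randomly permuted block list for a two-arm parallel trial; the block size and seed are arbitrary assumptions.

```python
# Illustrative sketch only: the paper says randomisation lists were held by
# someone independent of the study team, but not how they were built. This
# shows one common approach (randomly permuted blocks) for a two-arm trial
# such as the HPL vs SM comparison described above.
import random

def permuted_block_list(n_participants: int, block_size: int = 4,
                        arms=("HPL", "SM"), seed: int = 2024) -> list:
    """Return an allocation list with equal numbers of each arm in every block."""
    assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_participants]

# e.g. a list long enough for the 58 participants who were randomised; only
# the independent holder of the list would see it, with each allocation
# revealed by telephone after consent and baseline data collection.
allocation_list = permuted_block_list(58)
print(allocation_list[:8])
```

Blocked allocation of this kind keeps the HPL and SM arms roughly balanced as recruitment proceeds, which matters in a trial that recruited 58 participants over a 14-month period.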
null
null
null
null
[ "Background", "Qualitative interviews", "Patient interviews", "Staff interviews", "Analysis", "Results", "Patients' perceptions of the rehabilitation programmes", "Staff and patient perceptions of involvement in a trial", "The Impact of Organisational Culture", "Discussion", "Strengths and limitations", "Transferability", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The development of the randomised controlled trial has radically altered the way in which medical therapies are developed, tested and administered. Since 1947 when the Medical Research Council initiated what is generally considered to be the first randomised and blinded clinical trial [1,2] the principles of randomisation and control have moved from being controversial novelties to expected normalities. In the 1990's the broadening of the concept of evidence based medicine towards evidence based practice reflected a growing recognition of the need for decisions about health care interventions to be based on evidence of effectiveness.\nHowever, there are obvious differences between the evaluation of a new drug and, for example, the evaluation of an intervention to promote recovery after cancer treatment and it is not always possible to simply extend the randomised controlled trial design. In acknowledgment of this the Medical Research Council developed in 2000 [3] and then revised in 2008 [4] guidance for the development and evaluation of complex interventions. The MRC emphasise the need for robust and rigorous evaluation of complex interventions, promoting the use of experimental methods, but providing information on some of the alternatives to the conventional randomised controlled trial and highlighting situations in which a trial is impractical or undesirable [4].\nGiven the current financial imperative for interventions to be of proven benefit in order to compete for finite resources, the focus on patient centred care and the undisputed value of the randomised controlled trial it is likely that the number of trials of complex interventions will increase considerably. With this in mind we wanted to provide comment on one randomised controlled trial of a complex intervention which was recently conducted in order to explore some of the acknowledged and hidden complexities of this form of research.\nThis paper reports findings from a qualitative study of the experiences of the development, implementation and evaluation of a rehabilitation programme following stem cell transplantation in a regional haematology unit. A number of staff working on the unit had identified a need for more structured rehabilitation that might include not only support for patients' physical problems but also would address some of the perceived social and psychological needs of these patients. A programme of rehabilitation based on evidence from both the cancer and cardiac rehabilitation literature (for example [5-7]) was put together by a small group of nursing and physiotherapy staff working in collaboration with the rest of the clinical team and patients who had previously undergone stem cell transplant. The programme was piloted by these staff members who felt it was a viable model of routine service delivery and observed positive effects among the small number of patients who undertook the pilot programme. Since these results were based on a small, uncontrolled sample and conducted by those who had developed the intervention, the possibility of bias is a legitimate concern. At this stage an external research team was appointed to conduct an independent and definitive study that attempted to answer whether or not the programme was effective in improving patient outcomes. 
This paper aims to shed light on, and prompt discussion around, some of the complexities involved in undertaking a randomised controlled trial of two forms of rehabilitation (healthcare professional led and self-managed).", "Individual qualitative interviews to gather the rehabilitation and trial experiences of those who took part in the trial were included in the original study design. The study design was revised to incorporate further individual interviews with staff members in order to explore their perspectives on the challenges that arose during the trial. The interviews were all conducted by the same researcher (LB) who had managed the trial. The researcher had a background in nursing but did not have any clinical role on the unit where the trial took place. The study was approved by the North Nottinghamshire Local Research Ethics Committee (LREC reference number 05/Q2402/51).\n[SUBTITLE] Patient interviews [SUBSECTION] Interviews were conducted with 15 patients from within the sample of 58 participants who took part in the trial (see table 1). Participants were interviewed shortly after the end of their involvement in the trial and interviews explored patients' experiences of rehabilitation and of participation in a randomised controlled trial.\nCharacteristics of interviewees\nMaximum variation sampling was used to ensure that the sample included women and men, individuals from both arms of the trial, and across age groups. All participants approached to be interviewed agreed to participate. The interviews lasted between one to one and a half hours and were conducted at a location convenient for the participant. This tended to be in a quiet side room at the hospital or in the participant's own home. Topic guides covered life after transplantation, experiences of rehabilitation, the acceptability of randomisation, the acceptability of the evaluation tools and overall trial experience. The schedules served as a guide for the interview structure and content but did not stipulate exact phrasing of questions and prompts, or sequence of areas of enquiry.\n[SUBTITLE] Staff interviews [SUBSECTION] Interviews were also conducted with five members of staff, see table 1, after completion of the trial. All three staff members who had been instrumental in designing and implementing the health profession led programme were interviewed. 
Two other individuals were also interviewed and whilst they had no direct involvement in the design or delivery of the healthcare professional led programme, it was felt that these individuals were responsible for setting the priorities and ethos of the transplant unit. All staff members approached to be interviewed agreed to participate. The staff interviews explored attitudes and beliefs about the rehabilitation programme and about the appropriateness of evaluating it using a randomised controlled trial. Interviews with members of staff who had been involved in running the rehabilitation programme explored the experience of providing the rehabilitation programme within the context of a randomised controlled trial. The interviews were conducted during the working day in a location convenient for the staff member, typically their own office or a private meeting room. Topic guides were used to provide a structure for the interviews.\nInterviews were also conducted with five members of staff, see table 1, after completion of the trial. All three staff members who had been instrumental in designing and implementing the health profession led programme were interviewed. Two other individuals were also interviewed and whilst they had no direct involvement in the design or delivery of the healthcare professional led programme, it was felt that these individuals were responsible for setting the priorities and ethos of the transplant unit. All staff members approached to be interviewed agreed to participate. The staff interviews explored attitudes and beliefs about the rehabilitation programme and about the appropriateness of evaluating it using a randomised controlled trial. Interviews with members of staff who had been involved in running the rehabilitation programme explored the experience of providing the rehabilitation programme within the context of a randomised controlled trial. The interviews were conducted during the working day in a location convenient for the staff member, typically their own office or a private meeting room. Topic guides were used to provide a structure for the interviews.\n[SUBTITLE] Analysis [SUBSECTION] The analysis of the qualitative data was guided by the need to have a more contextual and enhanced [9] understanding of the trial. All interviews were audio recorded, transcribed verbatim and then analysed using NVivo 8. A thematic content approach to data analysis was used. Following transcription, the interview transcripts were checked for accuracy, read for context and then coded. Individual codes were then collapsed into broader categories that could be generally divided between the two main areas of enquiry (rehabilitation and evaluation) that were the focus of this qualitative study. Within these two areas, themes could be distinguished that helped to explain both commonalities and differences in patient and staff perspectives. The analysis was conducted by two of the authors (LB and AA) and corroborated by the third (KC). In the findings section quotes from trial participants are identified with a P followed by a study number and then the abbreviation HPL denoting that they were allocated to the healthcare professional led programme or SM denoting that they were allocated to the self-managed programme. Quotes from staff are identified with an S followed by a study number.\nThe analysis of the qualitative data was guided by the need to have a more contextual and enhanced [9] understanding of the trial. 
All interviews were audio recorded, transcribed verbatim and then analysed using NVivo 8. A thematic content approach to data analysis was used. Following transcription, the interview transcripts were checked for accuracy, read for context and then coded. Individual codes were then collapsed into broader categories that could be generally divided between the two main areas of enquiry (rehabilitation and evaluation) that were the focus of this qualitative study. Within these two areas, themes could be distinguished that helped to explain both commonalities and differences in patient and staff perspectives. The analysis was conducted by two of the authors (LB and AA) and corroborated by the third (KC). In the findings section quotes from trial participants are identified with a P followed by a study number and then the abbreviation HPL denoting that they were allocated to the healthcare professional led programme or SM denoting that they were allocated to the self-managed programme. Quotes from staff are identified with an S followed by a study number.", "Interviews were conducted with 15 patients from within the sample of 58 participants who took part in the trial (see table 1). Participants were interviewed shortly after the end of their involvement in the trial and interviews explored patients' experiences of rehabilitation and of participation in a randomised controlled trial.\nCharacteristics of interviewees\nMaximum variation sampling was used to ensure that the sample included women and men, individuals from both arms of the trial, and across age groups. All participants approached to be interviewed agreed to participate. The interviews lasted between one to one and a half hours and were conducted at a location convenient for the participant. This tended to be in a quiet side room at the hospital or in the participant's own home. Topic guides covered life after transplantation, experiences of rehabilitation, the acceptability of randomisation, the acceptability of the evaluation tools and overall trial experience. The schedules served as a guide for the interview structure and content but did not stipulate exact phrasing of questions and prompts, or sequence of areas of enquiry.", "Interviews were also conducted with five members of staff, see table 1, after completion of the trial. All three staff members who had been instrumental in designing and implementing the health profession led programme were interviewed. Two other individuals were also interviewed and whilst they had no direct involvement in the design or delivery of the healthcare professional led programme, it was felt that these individuals were responsible for setting the priorities and ethos of the transplant unit. All staff members approached to be interviewed agreed to participate. The staff interviews explored attitudes and beliefs about the rehabilitation programme and about the appropriateness of evaluating it using a randomised controlled trial. Interviews with members of staff who had been involved in running the rehabilitation programme explored the experience of providing the rehabilitation programme within the context of a randomised controlled trial. The interviews were conducted during the working day in a location convenient for the staff member, typically their own office or a private meeting room. Topic guides were used to provide a structure for the interviews.", "The analysis of the qualitative data was guided by the need to have a more contextual and enhanced [9] understanding of the trial. 
All interviews were audio recorded, transcribed verbatim and then analysed using NVivo 8. A thematic content approach to data analysis was used. Following transcription, the interview transcripts were checked for accuracy, read for context and then coded. Individual codes were then collapsed into broader categories that could be generally divided between the two main areas of enquiry (rehabilitation and evaluation) that were the focus of this qualitative study. Within these two areas, themes could be distinguished that helped to explain both commonalities and differences in patient and staff perspectives. The analysis was conducted by two of the authors (LB and AA) and corroborated by the third (KC). In the findings section quotes from trial participants are identified with a P followed by a study number and then the abbreviation HPL denoting that they were allocated to the healthcare professional led programme or SM denoting that they were allocated to the self-managed programme. Quotes from staff are identified with an S followed by a study number.", "[SUBTITLE] Patients' perceptions of the rehabilitation programmes [SUBSECTION] Despite being developed by staff in collaboration with patients and having undergone a pilot phase to test feasibility and acceptability, views of the healthcare professional led rehabilitation programme were mixed. Most of those interviewed who had taken part in the HPL programme were positive about the exercise component of the programme. Many expressed some degree of exhaustion after exercising, but this was temporary and relieved by resting. However two participants reported finding the exercises too exhausting and one commented:\nP123 (HPL) I personally felt that it had perhaps done me more harm than good. And I perhaps thought it was a bit early because I wasn't coping with it very well.\nIt has been argued that bone marrow transplantation is one of the most stressful treatments in cancer care [10]. It was for this reason that a relaxation component was included as part of the HPL programme. However this was not a need perceived by all participants.\nP117(SM) I tend to just take every day as it comes... I've gone past that stage of worrying about things.\nSome participants clearly felt uncomfortable with the relaxation sessions and one suggested that taking part was like being \"subjected to sort of séances\" (P123HPL). This participant who had not found the relaxation sessions helpful said \"I think a couple of [group members] were nodding off. I thought for god's sake try and keep awake\" (P123) indicating that, for this participant, falling asleep would have been undesirable and unacceptable. However, in contrast another participant commented that \"I love doing that kind of thing, I could kind of fall asleep\" (P326). While the relaxation component was not universally popular some participants indicated that relaxation sessions had been useful and requested assistance with repeating the exercises at home. These were often the participants who, prior to the intervention, had been noticeably anxious or who had found the transplant process to be traumatic.\nIn the interviews participants were given the opportunity to comment on each of the information sessions provided as part of the HPL programme. 
Participants' reflections on the sessions were generally positive but suggested that information was sometimes repetitive, simply stating common sense, or given too late in the transplant process.\nP315 (HPL) But a lot of it was repetition and there was a bit of déjà vu. I'd heard and seen a lot of it before.\nThe participants felt that recovery after stem cell transplantation was a highly individual process. One example of this was the reaction of different participants to one of the information components which was led by a hospital chaplain which looked at life after transplant. Participants' feelings about the session varied considerably.\nP326 (HPL): the guy who came from the church he was quite interesting, talking about death and how people feel about dying and how people think, that was interesting. I can remember others but I think he was the most interesting one.\nP320 (HPL): We had the vicar and he was very nice, but for me he was a bit intrusive and that was no fault of his that was just you know the way that I felt about it.\nSeveral participants allocated to the self managed programme said that they had never attempted the exercises while others indicated that they gave the exercises a go but quickly lost interest. Reasons for this included a lack of motivation to excercise, or a preference for other activities such as gardening or walking the dog. Those participants who reported not completing the exercises said that they believed that their motivation would have been increased if they had been allocated to the HPL programme. Participants attributed this to two factors: the support provided by a group environment and a feeling of greater confidence exercising where there were professionals available to prevent injury. In contrast to this three participants suggested that they found the programme appropriate and helpful and that they had consistently completed the programme several times a week for a period of many weeks. The differing reactions of individuals to the exercises appeared to relate to several factors. The exercises were completed by those with a high level of motivation to return to a previous state of fitness and by those who had previous direct (for example army training) or indirect (for example a close relative with experience of circuit training) experience of exercise training.\nP311(SM) I found them very good... I first looked at them and thought this looks too easy, but, the situation I was in at the time when I came out of hospital, nothing was easy so they were very good overall. The balance was excellent.\nBetween and within the two study groups, opinions on the value of the intervention received were strongly divided. This illustrates some of the difficulties inherent in trying to develop rehabilitation interventions of this sort. For many participants, initial enthusiasm was occasionally followed by a need for greater flexibility. This was particularly a problem for the 'group' nature of the health profession led programme, which, by definition was designed for a 'typical' recovery trajectory that did not always suit individuals with varying needs that fluctuated over time. 
Conversely, for those allocated to the self-managed care group, a lack of contact with others in the early stages of their rehabilitation meant those with less personal support may have struggled to motivate themselves.\nDespite being developed by staff in collaboration with patients and having undergone a pilot phase to test feasibility and acceptability, views of the healthcare professional led rehabilitation programme were mixed. Most of those interviewed who had taken part in the HPL programme were positive about the exercise component of the programme. Many expressed some degree of exhaustion after exercising, but this was temporary and relieved by resting. However two participants reported finding the exercises too exhausting and one commented:\nP123 (HPL) I personally felt that it had perhaps done me more harm than good. And I perhaps thought it was a bit early because I wasn't coping with it very well.\nIt has been argued that bone marrow transplantation is one of the most stressful treatments in cancer care [10]. It was for this reason that a relaxation component was included as part of the HPL programme. However this was not a need perceived by all participants.\nP117(SM) I tend to just take every day as it comes... I've gone past that stage of worrying about things.\nSome participants clearly felt uncomfortable with the relaxation sessions and one suggested that taking part was like being \"subjected to sort of séances\" (P123HPL). This participant who had not found the relaxation sessions helpful said \"I think a couple of [group members] were nodding off. I thought for god's sake try and keep awake\" (P123) indicating that, for this participant, falling asleep would have been undesirable and unacceptable. However, in contrast another participant commented that \"I love doing that kind of thing, I could kind of fall asleep\" (P326). While the relaxation component was not universally popular some participants indicated that relaxation sessions had been useful and requested assistance with repeating the exercises at home. These were often the participants who, prior to the intervention, had been noticeably anxious or who had found the transplant process to be traumatic.\nIn the interviews participants were given the opportunity to comment on each of the information sessions provided as part of the HPL programme. Participants' reflections on the sessions were generally positive but suggested that information was sometimes repetitive, simply stating common sense, or given too late in the transplant process.\nP315 (HPL) But a lot of it was repetition and there was a bit of déjà vu. I'd heard and seen a lot of it before.\nThe participants felt that recovery after stem cell transplantation was a highly individual process. One example of this was the reaction of different participants to one of the information components which was led by a hospital chaplain which looked at life after transplant. Participants' feelings about the session varied considerably.\nP326 (HPL): the guy who came from the church he was quite interesting, talking about death and how people feel about dying and how people think, that was interesting. 
I can remember others but I think he was the most interesting one.\nP320 (HPL): We had the vicar and he was very nice, but for me he was a bit intrusive and that was no fault of his that was just you know the way that I felt about it.\nSeveral participants allocated to the self managed programme said that they had never attempted the exercises while others indicated that they gave the exercises a go but quickly lost interest. Reasons for this included a lack of motivation to excercise, or a preference for other activities such as gardening or walking the dog. Those participants who reported not completing the exercises said that they believed that their motivation would have been increased if they had been allocated to the HPL programme. Participants attributed this to two factors: the support provided by a group environment and a feeling of greater confidence exercising where there were professionals available to prevent injury. In contrast to this three participants suggested that they found the programme appropriate and helpful and that they had consistently completed the programme several times a week for a period of many weeks. The differing reactions of individuals to the exercises appeared to relate to several factors. The exercises were completed by those with a high level of motivation to return to a previous state of fitness and by those who had previous direct (for example army training) or indirect (for example a close relative with experience of circuit training) experience of exercise training.\nP311(SM) I found them very good... I first looked at them and thought this looks too easy, but, the situation I was in at the time when I came out of hospital, nothing was easy so they were very good overall. The balance was excellent.\nBetween and within the two study groups, opinions on the value of the intervention received were strongly divided. This illustrates some of the difficulties inherent in trying to develop rehabilitation interventions of this sort. For many participants, initial enthusiasm was occasionally followed by a need for greater flexibility. This was particularly a problem for the 'group' nature of the health profession led programme, which, by definition was designed for a 'typical' recovery trajectory that did not always suit individuals with varying needs that fluctuated over time. Conversely, for those allocated to the self-managed care group, a lack of contact with others in the early stages of their rehabilitation meant those with less personal support may have struggled to motivate themselves.\n[SUBTITLE] Staff and patient perceptions of involvement in a trial [SUBSECTION] The understanding of the rationale for conducting a randomised controlled trial varied between the staff members interviewed. Some perceived a robust evaluation to be the right thing to do in a climate of limited financial resources. Others implied that they were involved not because they wanted to find out if the intervention worked but because they wanted to prove that it worked. This is a subtle but important difference in emphasis which had a number of implications on how staff felt about the trial.\nS4 It [the trial] was the right thing to try to do, yes, I think that's right because clearly it [the intervention] involves expense, and the question is how much value it had.\nS3: we've had to go through that process [the RCT] and I now appreciate that. 
Before I resented it [the need to test a new intervention].\nSuch a belief in the inherent value of the healthcare professional led programme led these staff to experience disappointment over the outcome of the randomisation process in particular cases. Staff involved in delivering the HPL programme reported finding it very difficult when participants that they perceived would particularly benefit from the intervention were allocated to the self-managed programme.\nS1 there were certain patients that we saw that, you know, you were desperate for them to get the hospital led programme.\nThe impact of this was that some members of staff found both the trial experience and recruiting patients burdensome and attributed problems with delivering the HPL programme to the fact that a trial was being conducted. One member of staff perceived the trial to place a series of hurdles that patients had to overcome (such as consent to trial participation and randomisation) to access the HPL programme. As a result, disappointing trial recruitment meant that fewer patients than anticipated were participating in the HPL at any one time, meaning a small group size changed the very nature of the programme; the staff member suggesting that \"the trial had killed the intervention\" (S2).\nA number of participants also held strong opinions with regards to the two programmes. Several participants expressed their conviction that the HPL programme was superior.\nP315 (HPL): I was on the right arm of the trial. It did things better for me.\nP318 (SM): It was fair as you did, you know, being picked out I suppose it is fair. It's just that I were picked out for the wrong one.\nHowever not all participants felt that the HPL programme was superior. One participant was allocated the HPL programme but chose never to attend commenting afterwards that they should have consented to take part on the condition that they were allocated to the SM programme.\nThe comments of both staff and patients highlight that the rigour of trial design can easily cause confusion and or dissatisfaction. That a member of staff commented that 'the trial killed the intervention' is particularly interesting. Their perspective was that since the pilot phase had in their opinion been a success it must have been as a result of the trial that the difficulties arose. However it could be argued that the relatively low recruitment and the level of trial attrition were in fact related primarily to the intervention rather than the trial. More robust piloting of both the intervention and of the recruitment and randomisation process may have identified more accurately where the problems lay.\nThe understanding of the rationale for conducting a randomised controlled trial varied between the staff members interviewed. Some perceived a robust evaluation to be the right thing to do in a climate of limited financial resources. Others implied that they were involved not because they wanted to find out if the intervention worked but because they wanted to prove that it worked. This is a subtle but important difference in emphasis which had a number of implications on how staff felt about the trial.\nS4 It [the trial] was the right thing to try to do, yes, I think that's right because clearly it [the intervention] involves expense, and the question is how much value it had.\nS3: we've had to go through that process [the RCT] and I now appreciate that. 
Before I resented it [the need to test a new intervention].\nSuch a belief in the inherent value of the healthcare professional led programme led these staff to experience disappointment over the outcome of the randomisation process in particular cases. Staff involved in delivering the HPL programme reported finding it very difficult when participants that they perceived would particularly benefit from the intervention were allocated to the self-managed programme.\nS1 there were certain patients that we saw that, you know, you were desperate for them to get the hospital led programme.\nThe impact of this was that some members of staff found both the trial experience and recruiting patients burdensome and attributed problems with delivering the HPL programme to the fact that a trial was being conducted. One member of staff perceived the trial to place a series of hurdles that patients had to overcome (such as consent to trial participation and randomisation) to access the HPL programme. As a result, disappointing trial recruitment meant that fewer patients than anticipated were participating in the HPL at any one time, meaning a small group size changed the very nature of the programme; the staff member suggesting that \"the trial had killed the intervention\" (S2).\nA number of participants also held strong opinions with regards to the two programmes. Several participants expressed their conviction that the HPL programme was superior.\nP315 (HPL): I was on the right arm of the trial. It did things better for me.\nP318 (SM): It was fair as you did, you know, being picked out I suppose it is fair. It's just that I were picked out for the wrong one.\nHowever not all participants felt that the HPL programme was superior. One participant was allocated the HPL programme but chose never to attend commenting afterwards that they should have consented to take part on the condition that they were allocated to the SM programme.\nThe comments of both staff and patients highlight that the rigour of trial design can easily cause confusion and or dissatisfaction. That a member of staff commented that 'the trial killed the intervention' is particularly interesting. Their perspective was that since the pilot phase had in their opinion been a success it must have been as a result of the trial that the difficulties arose. However it could be argued that the relatively low recruitment and the level of trial attrition were in fact related primarily to the intervention rather than the trial. More robust piloting of both the intervention and of the recruitment and randomisation process may have identified more accurately where the problems lay.\n[SUBTITLE] The Impact of Organisational Culture [SUBSECTION] It was consistently recognised by staff that due to the nature of the conditions treated on the unit where the research took place, a highly 'technical' culture existed. It was suggested that this narrow focus on the biological needs of patients resulted in an undervaluing of the psychosocial aspects of health.\nS5: \"Some of the medics are focused on getting people in and getting people out, and therefore spending half an hour talking to somebody about psychological issues, um, isn't high on their cards.\"\nS4: \"Well there probably is [a need for emotional rehabilitation] although I don't know that we cater for that\".\nThis, it was argued, had important consequences for how the service was delivered, the implementation of the intervention, and the success of the trial. 
It was suggested that this technical perspective somehow 'rubbed off' on patients resulting in them experiencing a narrowing of their own concept of health.\nS2: \"And I think they (patients) are probably focusing very much on blood counts and whether they've got GVHD [Graft Versus Host Disease] and if they're doing all right, and actually rehab is like one extra thing to them, it's like, almost like they've closed down the shutters, they can't take on any more, and they're like, no actually I'm fine, I'll be all right, I'll sort it out, and it'll be okay\".\nIt was felt that despite the fact that the HPL programme was developed in response to the lack of emphasis on psychosocial care, the programme and its evaluation were potentially undermined by the reluctance of some staff to prioritise psychosocial care. Since some members of the medical team did not place a high priority on psychosocial support they likewise did not prioritise a trial which was attempting to evaluate a biopsychosocial intervention. This issue was a particular problem since it was felt that patients were more likely to take part in the trial if it was mentioned to them by a member of medical staff. Without the support of the whole medical team there was not the momentum to promote and maintain trial recruitment. Furthermore, for one member of staff, the trial put into sharp focus the dynamics of the professional hierarchy that was embedded in the local organisational culture.\nS3: \"these patients, everything they do, from, even sometimes getting out of bed, they will say well we'll do it if the doctors say we'll do it, because everything's so medically controlled, (...) patients do what the doctors say...\"\nS2: \"I've learned a lot, in terms of the sort of power relations and how to get things done or not done. And I would have thought before the trial, I was quite influential, and actually, when it actually comes to it, you realise you're not that influential at all really.\"\nIt was consistently recognised by staff that due to the nature of the conditions treated on the unit where the research took place, a highly 'technical' culture existed. It was suggested that this narrow focus on the biological needs of patients resulted in an undervaluing of the psychosocial aspects of health.\nS5: \"Some of the medics are focused on getting people in and getting people out, and therefore spending half an hour talking to somebody about psychological issues, um, isn't high on their cards.\"\nS4: \"Well there probably is [a need for emotional rehabilitation] although I don't know that we cater for that\".\nThis, it was argued, had important consequences for how the service was delivered, the implementation of the intervention, and the success of the trial. 
It was suggested that this technical perspective somehow 'rubbed off' on patients resulting in them experiencing a narrowing of their own concept of health.\nS2: \"And I think they (patients) are probably focusing very much on blood counts and whether they've got GVHD [Graft Versus Host Disease] and if they're doing all right, and actually rehab is like one extra thing to them, it's like, almost like they've closed down the shutters, they can't take on any more, and they're like, no actually I'm fine, I'll be all right, I'll sort it out, and it'll be okay\".\nIt was felt that despite the fact that the HPL programme was developed in response to the lack of emphasis on psychosocial care, the programme and its evaluation were potentially undermined by the reluctance of some staff to prioritise psychosocial care. Since some members of the medical team did not place a high priority on psychosocial support they likewise did not prioritise a trial which was attempting to evaluate a biopsychosocial intervention. This issue was a particular problem since it was felt that patients were more likely to take part in the trial if it was mentioned to them by a member of medical staff. Without the support of the whole medical team there was not the momentum to promote and maintain trial recruitment. Furthermore, for one member of staff, the trial put into sharp focus the dynamics of the professional hierarchy that was embedded in the local organisational culture.\nS3: \"these patients, everything they do, from, even sometimes getting out of bed, they will say well we'll do it if the doctors say we'll do it, because everything's so medically controlled, (...) patients do what the doctors say...\"\nS2: \"I've learned a lot, in terms of the sort of power relations and how to get things done or not done. And I would have thought before the trial, I was quite influential, and actually, when it actually comes to it, you realise you're not that influential at all really.\"", "Despite being developed by staff in collaboration with patients and having undergone a pilot phase to test feasibility and acceptability, views of the healthcare professional led rehabilitation programme were mixed. Most of those interviewed who had taken part in the HPL programme were positive about the exercise component of the programme. Many expressed some degree of exhaustion after exercising, but this was temporary and relieved by resting. However two participants reported finding the exercises too exhausting and one commented:\nP123 (HPL) I personally felt that it had perhaps done me more harm than good. And I perhaps thought it was a bit early because I wasn't coping with it very well.\nIt has been argued that bone marrow transplantation is one of the most stressful treatments in cancer care [10]. It was for this reason that a relaxation component was included as part of the HPL programme. However this was not a need perceived by all participants.\nP117(SM) I tend to just take every day as it comes... I've gone past that stage of worrying about things.\nSome participants clearly felt uncomfortable with the relaxation sessions and one suggested that taking part was like being \"subjected to sort of séances\" (P123HPL). This participant who had not found the relaxation sessions helpful said \"I think a couple of [group members] were nodding off. I thought for god's sake try and keep awake\" (P123) indicating that, for this participant, falling asleep would have been undesirable and unacceptable. 
However, in contrast another participant commented that \"I love doing that kind of thing, I could kind of fall asleep\" (P326). While the relaxation component was not universally popular some participants indicated that relaxation sessions had been useful and requested assistance with repeating the exercises at home. These were often the participants who, prior to the intervention, had been noticeably anxious or who had found the transplant process to be traumatic.\nIn the interviews participants were given the opportunity to comment on each of the information sessions provided as part of the HPL programme. Participants' reflections on the sessions were generally positive but suggested that information was sometimes repetitive, simply stating common sense, or given too late in the transplant process.\nP315 (HPL) But a lot of it was repetition and there was a bit of déjà vu. I'd heard and seen a lot of it before.\nThe participants felt that recovery after stem cell transplantation was a highly individual process. One example of this was the reaction of different participants to one of the information components which was led by a hospital chaplain which looked at life after transplant. Participants' feelings about the session varied considerably.\nP326 (HPL): the guy who came from the church he was quite interesting, talking about death and how people feel about dying and how people think, that was interesting. I can remember others but I think he was the most interesting one.\nP320 (HPL): We had the vicar and he was very nice, but for me he was a bit intrusive and that was no fault of his that was just you know the way that I felt about it.\nSeveral participants allocated to the self managed programme said that they had never attempted the exercises while others indicated that they gave the exercises a go but quickly lost interest. Reasons for this included a lack of motivation to exercise, or a preference for other activities such as gardening or walking the dog. Those participants who reported not completing the exercises said that they believed that their motivation would have been increased if they had been allocated to the HPL programme. Participants attributed this to two factors: the support provided by a group environment and a feeling of greater confidence exercising where there were professionals available to prevent injury. In contrast to this three participants suggested that they found the programme appropriate and helpful and that they had consistently completed the programme several times a week for a period of many weeks. The differing reactions of individuals to the exercises appeared to relate to several factors. The exercises were completed by those with a high level of motivation to return to a previous state of fitness and by those who had previous direct (for example army training) or indirect (for example a close relative with experience of circuit training) experience of exercise training.\nP311(SM) I found them very good... I first looked at them and thought this looks too easy, but, the situation I was in at the time when I came out of hospital, nothing was easy so they were very good overall. The balance was excellent.\nBetween and within the two study groups, opinions on the value of the intervention received were strongly divided. This illustrates some of the difficulties inherent in trying to develop rehabilitation interventions of this sort. For many participants, initial enthusiasm was occasionally followed by a need for greater flexibility. 
This was particularly a problem for the 'group' nature of the health profession led programme, which, by definition was designed for a 'typical' recovery trajectory that did not always suit individuals with varying needs that fluctuated over time. Conversely, for those allocated to the self-managed care group, a lack of contact with others in the early stages of their rehabilitation meant those with less personal support may have struggled to motivate themselves.", "The understanding of the rationale for conducting a randomised controlled trial varied between the staff members interviewed. Some perceived a robust evaluation to be the right thing to do in a climate of limited financial resources. Others implied that they were involved not because they wanted to find out if the intervention worked but because they wanted to prove that it worked. This is a subtle but important difference in emphasis which had a number of implications on how staff felt about the trial.\nS4 It [the trial] was the right thing to try to do, yes, I think that's right because clearly it [the intervention] involves expense, and the question is how much value it had.\nS3: we've had to go through that process [the RCT] and I now appreciate that. Before I resented it [the need to test a new intervention].\nSuch a belief in the inherent value of the healthcare professional led programme led these staff to experience disappointment over the outcome of the randomisation process in particular cases. Staff involved in delivering the HPL programme reported finding it very difficult when participants that they perceived would particularly benefit from the intervention were allocated to the self-managed programme.\nS1 there were certain patients that we saw that, you know, you were desperate for them to get the hospital led programme.\nThe impact of this was that some members of staff found both the trial experience and recruiting patients burdensome and attributed problems with delivering the HPL programme to the fact that a trial was being conducted. One member of staff perceived the trial to place a series of hurdles that patients had to overcome (such as consent to trial participation and randomisation) to access the HPL programme. As a result, disappointing trial recruitment meant that fewer patients than anticipated were participating in the HPL at any one time, meaning a small group size changed the very nature of the programme; the staff member suggesting that \"the trial had killed the intervention\" (S2).\nA number of participants also held strong opinions with regards to the two programmes. Several participants expressed their conviction that the HPL programme was superior.\nP315 (HPL): I was on the right arm of the trial. It did things better for me.\nP318 (SM): It was fair as you did, you know, being picked out I suppose it is fair. It's just that I were picked out for the wrong one.\nHowever not all participants felt that the HPL programme was superior. One participant was allocated the HPL programme but chose never to attend commenting afterwards that they should have consented to take part on the condition that they were allocated to the SM programme.\nThe comments of both staff and patients highlight that the rigour of trial design can easily cause confusion and or dissatisfaction. That a member of staff commented that 'the trial killed the intervention' is particularly interesting. 
Their perspective was that since the pilot phase had in their opinion been a success it must have been as a result of the trial that the difficulties arose. However it could be argued that the relatively low recruitment and the level of trial attrition were in fact related primarily to the intervention rather than the trial. More robust piloting of both the intervention and of the recruitment and randomisation process may have identified more accurately where the problems lay.", "It was consistently recognised by staff that due to the nature of the conditions treated on the unit where the research took place, a highly 'technical' culture existed. It was suggested that this narrow focus on the biological needs of patients resulted in an undervaluing of the psychosocial aspects of health.\nS5: \"Some of the medics are focused on getting people in and getting people out, and therefore spending half an hour talking to somebody about psychological issues, um, isn't high on their cards.\"\nS4: \"Well there probably is [a need for emotional rehabilitation] although I don't know that we cater for that\".\nThis, it was argued, had important consequences for how the service was delivered, the implementation of the intervention, and the success of the trial. It was suggested that this technical perspective somehow 'rubbed off' on patients resulting in them experiencing a narrowing of their own concept of health.\nS2: \"And I think they (patients) are probably focusing very much on blood counts and whether they've got GVHD [Graft Versus Host Disease] and if they're doing all right, and actually rehab is like one extra thing to them, it's like, almost like they've closed down the shutters, they can't take on any more, and they're like, no actually I'm fine, I'll be all right, I'll sort it out, and it'll be okay\".\nIt was felt that despite the fact that the HPL programme was developed in response to the lack of emphasis on psychosocial care, the programme and its evaluation were potentially undermined by the reluctance of some staff to prioritise psychosocial care. Since some members of the medical team did not place a high priority on psychosocial support they likewise did not prioritise a trial which was attempting to evaluate a biopsychosocial intervention. This issue was a particular problem since it was felt that patients were more likely to take part in the trial if it was mentioned to them by a member of medical staff. Without the support of the whole medical team there was not the momentum to promote and maintain trial recruitment. Furthermore, for one member of staff, the trial put into sharp focus the dynamics of the professional hierarchy that was embedded in the local organisational culture.\nS3: \"these patients, everything they do, from, even sometimes getting out of bed, they will say well we'll do it if the doctors say we'll do it, because everything's so medically controlled, (...) patients do what the doctors say...\"\nS2: \"I've learned a lot, in terms of the sort of power relations and how to get things done or not done. And I would have thought before the trial, I was quite influential, and actually, when it actually comes to it, you realise you're not that influential at all really.\"", "This qualitative interview study has provided an insight into both the way the two rehabilitation programmes were experienced and the reality of conducting a randomised controlled trial in a health service setting. 
Although there are other good examples of this [11,12] in the main, trial reports of the context of the intervention(s) and the evaluation are limited. The artificiality of the experimental process has been highlighted [13].\nThe data in this study highlighted the numerous practical difficulties involved in trying to develop and evaluate a complex intervention and that compromise is often required between the optimum research design and the practicalities of delivering health care in the real world. The patient data in this study illustrated how difficult it is to develop and standardise a complex rehabilitation intervention so that it is acceptable to patients with different needs and preferences. Hawe et al. [14] suggest that too often a complex intervention is reduced to its constituent parts in order for it to fulfil the strict requirements of a randomised controlled trial. In effect this results in a complex intervention being reduced to a series of simple interventions and in doing so fails to acknowledge that a complex intervention has the potential to be more than the sum of its parts [14]. Hawe et al. suggest that inconclusive trials could be avoided if standardisation of the function of an intervention rather than of its form was more widely utilised. They suggest that this would allow for context level adaption and enable tailoring of the intervention to the local environment, which would potentially improve efficacy. In our own study, a trial could have evaluated whether having access to a patient centred rehabilitation service improved patient outcomes as opposed to testing whether a defined set of rehabilitation interventions improved patient outcomes. The service could have included a core set of interventions which were selected and delivered in response to individual patient need.\nThis study found that some participants and staff felt a sense of misgiving over the use of a randomised controlled trial design to evaluate the rehabilitation programmes. Many patients and staff had clear preferences and this meant that the concepts of equipoise and randomisation were contentious. The fact that patients and members of the general public find the concept of randomisation and equipoise perplexing is widely acknowledged [15-17] and advice exists [18] which aims to assist in explaining this concept to potential trial participants. Whilst the benefit of the programme was unproven, some staff held strong personal beliefs about its efficacy. It has been suggested that only 25% of medical staff can envisage themselves being in personal equipoise and only 18% thought that their patients could be in this state [19]. This raises a number of practical and ethical issues. Collective equipoise may be the justification for randomisation but staff who hold strong views on the likely superiority of the effectiveness of one treatment will have difficulty with seeing patients randomised, while potential participants with clear preferences are unlikely to agree to randomisation. Although this is a potential problem in all trials, it is often magnified in the evaluation of complex interventions due to the necessary drive and determination of individuals to bring about their development. 
It is unrealistic to expect staff to take a neutral position in relation to something that they have been instrumental in creating.\nAnalysis of this qualitative data has highlighted that difficulties in conducting the trial stemmed not just from the challenges associated with the complexity of the intervention but also from the complexity of the local organisational culture. The importance of trial context has been acknowledged particularly in relation to complex interventions [3,20]. The impact that context and culture can have on a trial has been highlighted in seminal sociological works by Oakley [21] and Fox [22] though trialists have perhaps failed to embrace this literature in a way that can fundamentally change the approach taken to designing and implementing clinical trials. Each person involved in a trial, whether they are a participant, health professional or researcher, is influenced by their own beliefs, attitudes and experiences which consciously or unconsciously affect the way in which they engage with the research process. This creates cultural expectations, both positive and negative, which affect how other people engage with the trial. This study found that organisational culture and underlying assumptions were rarely acknowledged and that they only become apparent if they were challenged in some way. Furthermore since those who determined the cultural norms and ethos of the research environment were ambivalent about the trial there was insufficient commitment to support the considerable energy, time and resource demands required for the trial to be a success. The disparity in influence that members of different healthcare professions exert has been well documented [23,24] and here, the effects of this were seen beyond healthcare delivery and into how research agendas are set and enacted.\n[SUBTITLE] Strengths and limitations [SUBSECTION] Our sampling of trial participants and staff involved in the delivery of the interventions have allowed both perspectives to be explored. Similarly, although we acknowledge that our sample of staff is relatively small, we have included accounts of both staff directly involved in the trial and those only indirectly involved. In addition the challenges faced by both participants and healthcare staff that were identified in these interviews appear to have 'relevance' [25] to trials of complex healthcare interventions beyond this particular trial in this particular setting. A limitation of the study is the lack of a voice for those who declined to take part in the trial itself. For ethical and practical reasons we were not able to approach any individual who had already declined to take part in the trial. Another limitation of this study is that the interviews were conducted by LB who was closely associated with the trial by both patients and staff. This may have constrained interviewees' views on either the intervention(s) or the trial.\n[SUBTITLE] Transferability [SUBSECTION] Many of the issues in this paper relate to a particular intervention, the way it was evaluated and the environment in which this took place. Despite this, many of the issues discussed are pertinent to the evaluation of complex health service interventions more generally. By definition complex interventions are difficult to standardise, blind and regulate. Furthermore they are far more likely to be developed as a result of the determination and dedication of clinical staff, and need to be tested within the competing pressures of the clinical environment.", "Our sampling of trial participants and staff involved in the delivery of the interventions have allowed both perspectives to be explored. Similarly, although we acknowledge that our sample of staff is relatively small, we have included accounts of both staff directly involved in the trial and those only indirectly involved. In addition the challenges faced by both participants and healthcare staff that were identified in these interviews appear to have 'relevance' [25] to trials of complex healthcare interventions beyond this particular trial in this particular setting. A limitation of the study is the lack of a voice for those who declined to take part in the trial itself. For ethical and practical reasons we were not able to approach any individual who had already declined to take part in the trial. Another limitation of this study is that the interviews were conducted by LB who was closely associated with the trial by both patients and staff. This may have constrained interviewees' views on either the intervention(s) or the trial.
Although MRC guidance on evaluating complex interventions [3] provides a useful framework for the successful conduct of trials of this sort, the evolution of interventions in healthcare and their subsequent evaluation does not always proceed in clearly defined stages. Within this more fluid process, two elements appear crucial. First, pilot studies should test the process of recruitment and randomisation, examining the views and experiences of staff and participants on this element of the research process. The reality of randomising a service is often far more difficult for healthcare providers than they anticipate. A more formal pilot study is unlikely to be conducted without funding, but it has the advantage of greater separation between the design and testing of the intervention. Secondly, an understanding of the way a particular service operates is not enough, an insight into the organisational culture is necessary. Findings from this study strongly suggest that within the local environment where the trial is conducted, acquiescence from those in positions of authority is insufficient; commitment to the trial is required.", "The authors declare that they have no competing interests.", "LB, AA and KC conceived and designed the study. LB conducted the research interviews. LB and AA analysed the data. All authors helped to draft the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/11/24/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Qualitative interviews", "Patient interviews", "Staff interviews", "Analysis", "Results", "Patients' perceptions of the rehabilitation programmes", "Staff and patient perceptions of involvement in a trial", "The Impact of Organisational Culture", "Discussion", "Strengths and limitations", "Transferability", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The development of the randomised controlled trial has radically altered the way in which medical therapies are developed, tested and administered. Since 1947 when the Medical Research Council initiated what is generally considered to be the first randomised and blinded clinical trial [1,2] the principles of randomisation and control have moved from being controversial novelties to expected normalities. In the 1990's the broadening of the concept of evidence based medicine towards evidence based practice reflected a growing recognition of the need for decisions about health care interventions to be based on evidence of effectiveness.\nHowever, there are obvious differences between the evaluation of a new drug and, for example, the evaluation of an intervention to promote recovery after cancer treatment and it is not always possible to simply extend the randomised controlled trial design. In acknowledgment of this the Medical Research Council developed in 2000 [3] and then revised in 2008 [4] guidance for the development and evaluation of complex interventions. The MRC emphasise the need for robust and rigorous evaluation of complex interventions, promoting the use of experimental methods, but providing information on some of the alternatives to the conventional randomised controlled trial and highlighting situations in which a trial is impractical or undesirable [4].\nGiven the current financial imperative for interventions to be of proven benefit in order to compete for finite resources, the focus on patient centred care and the undisputed value of the randomised controlled trial it is likely that the number of trials of complex interventions will increase considerably. With this in mind we wanted to provide comment on one randomised controlled trial of a complex intervention which was recently conducted in order to explore some of the acknowledged and hidden complexities of this form of research.\nThis paper reports findings from a qualitative study of the experiences of the development, implementation and evaluation of a rehabilitation programme following stem cell transplantation in a regional haematology unit. A number of staff working on the unit had identified a need for more structured rehabilitation that might include not only support for patients' physical problems but also would address some of the perceived social and psychological needs of these patients. A programme of rehabilitation based on evidence from both the cancer and cardiac rehabilitation literature (for example [5-7]) was put together by a small group of nursing and physiotherapy staff working in collaboration with the rest of the clinical team and patients who had previously undergone stem cell transplant. The programme was piloted by these staff members who felt it was a viable model of routine service delivery and observed positive effects among the small number of patients who undertook the pilot programme. Since these results were based on a small, uncontrolled sample and conducted by those who had developed the intervention, the possibility of bias is a legitimate concern. At this stage an external research team was appointed to conduct an independent and definitive study that attempted to answer whether or not the programme was effective in improving patient outcomes. 
This paper aims to shed light on, and prompt discussion around, some of the complexities involved in undertaking a randomised controlled trial of two forms of rehabilitation (healthcare professional led and self-managed).", "The design of the evaluation was mixed-methods with a qualitative interview study following the completion of the randomised controlled trial. A full description of the trial and the quantitative results are reported elsewhere [8] but brief details are provided here to give context to the qualitative study that is the focus of this paper.\nThe trial design chosen was a two-arm parallel study comparing structured rehabilitation in a hospital setting led by a team of health professionals (HPL programme) with a home-based, self-managed rehabilitation programme (SM programme). Participants were aged 18 years and over, and had been treated in the previous six to eight weeks with an autologous or allogenic stem cell transplant. The primary outcome was change in physical functioning at six months. Potential participants were initially approached by a healthcare professional and then contacted by a member of the research team (LB) and at this point if potential participants were happy to take part they were asked to provide formal written consent. Following consent and baseline data collection, LB informed participants of their allocation after telephoning an individual independent to the study team who held the randomisation lists.\nParticipants randomised to the HPL programme were asked to attend the hospital once a week for 10 weeks to take part in a group session consisting of a tailored exercise programme, and a relaxation and information support session. Participants randomised to the SM programme were given an information pack that contained a home-based exercise programme and access to all the information and relaxation exercises provided on the HPL programme, this being a slight enhancement of the standard care where rehabilitation was provided in a less structured and more ad hoc manner. Patients could not access the intervention outside of the trial.\nDuring a 14-month recruitment period 144 potential participants were approached about the trial and 61 (42%) consented to take part. Common reasons for declining to take part were distance from home to hospital, and that the programme did not seem relevant. Fifty-eight people were randomised and 46 participants were followed-up at six months. The main reasons for drop out were disease relapse and death.", "Individual qualitative interviews to gather the rehabilitation and trial experiences of those who took part in the trial were included in the original study design. The study design was revised to incorporate further individual interviews with staff members in order to explore their perspectives on the challenges that arose during the trial. The interviews were all conducted by the same researcher (LB) as had managed the trial. The researcher had a background in nursing but did not have any clinical role on the unit where the trial took place. The study was approved by the North Nottinghamshire Local Research Ethics Comittee (LREC reference number 05/Q2402/51).\n[SUBTITLE] Patient interviews [SUBSECTION] Interviews were conducted with 15 patients from within the sample of 58 participants who took part in the trial (see table 1). 
Participants were interviewed shortly after the end of their involvement in the trial and interviews explored patients' experiences of rehabilitation and of participation in a randomised controlled trial.\nCharacteristics of interviewees\nMaximum variation sampling was used to ensure that the sample included women and men, individuals from both arms of the trial, and across age groups. All participants approached to be interviewed agreed to participate. The interviews lasted between one to one and a half hours and were conducted at a location convenient for the participant. This tended to be in a quiet side room at the hospital or in the participant's own home. Topic guides covered life after transplantation, experiences of rehabilitation, the acceptability of randomisation, the acceptability of the evaluation tools and overall trial experience. The schedules served as a guide for the interview structure and content but did not stipulate exact phrasing of questions and prompts, or sequence of areas of enquiry.\n[SUBTITLE] Staff interviews [SUBSECTION] Interviews were also conducted with five members of staff, see table 1, after completion of the trial. All three staff members who had been instrumental in designing and implementing the health profession led programme were interviewed. Two other individuals were also interviewed and whilst they had no direct involvement in the design or delivery of the healthcare professional led programme, it was felt that these individuals were responsible for setting the priorities and ethos of the transplant unit. All staff members approached to be interviewed agreed to participate. The staff interviews explored attitudes and beliefs about the rehabilitation programme and about the appropriateness of evaluating it using a randomised controlled trial. Interviews with members of staff who had been involved in running the rehabilitation programme explored the experience of providing the rehabilitation programme within the context of a randomised controlled trial. The interviews were conducted during the working day in a location convenient for the staff member, typically their own office or a private meeting room. Topic guides were used to provide a structure for the interviews.\n[SUBTITLE] Analysis [SUBSECTION] The analysis of the qualitative data was guided by the need to have a more contextual and enhanced [9] understanding of the trial. All interviews were audio recorded, transcribed verbatim and then analysed using NVivo 8. A thematic content approach to data analysis was used. Following transcription, the interview transcripts were checked for accuracy, read for context and then coded. Individual codes were then collapsed into broader categories that could be generally divided between the two main areas of enquiry (rehabilitation and evaluation) that were the focus of this qualitative study. Within these two areas, themes could be distinguished that helped to explain both commonalities and differences in patient and staff perspectives. The analysis was conducted by two of the authors (LB and AA) and corroborated by the third (KC). In the findings section quotes from trial participants are identified with a P followed by a study number and then the abbreviation HPL denoting that they were allocated to the healthcare professional led programme or SM denoting that they were allocated to the self-managed programme. 
Quotes from staff are identified with an S followed by a study number.", "Interviews were conducted with 15 patients from within the sample of 58 participants who took part in the trial (see table 1). Participants were interviewed shortly after the end of their involvement in the trial and interviews explored patients' experiences of rehabilitation and of participation in a randomised controlled trial.\nCharacteristics of interviewees\nMaximum variation sampling was used to ensure that the sample included women and men, individuals from both arms of the trial, and across age groups. All participants approached to be interviewed agreed to participate. The interviews lasted between one to one and a half hours and were conducted at a location convenient for the participant. This tended to be in a quiet side room at the hospital or in the participant's own home. Topic guides covered life after transplantation, experiences of rehabilitation, the acceptability of randomisation, the acceptability of the evaluation tools and overall trial experience. The schedules served as a guide for the interview structure and content but did not stipulate exact phrasing of questions and prompts, or sequence of areas of enquiry.", "Interviews were also conducted with five members of staff, see table 1, after completion of the trial. All three staff members who had been instrumental in designing and implementing the health profession led programme were interviewed. Two other individuals were also interviewed and whilst they had no direct involvement in the design or delivery of the healthcare professional led programme, it was felt that these individuals were responsible for setting the priorities and ethos of the transplant unit. All staff members approached to be interviewed agreed to participate. The staff interviews explored attitudes and beliefs about the rehabilitation programme and about the appropriateness of evaluating it using a randomised controlled trial. Interviews with members of staff who had been involved in running the rehabilitation programme explored the experience of providing the rehabilitation programme within the context of a randomised controlled trial. The interviews were conducted during the working day in a location convenient for the staff member, typically their own office or a private meeting room. Topic guides were used to provide a structure for the interviews.", "The analysis of the qualitative data was guided by the need to have a more contextual and enhanced [9] understanding of the trial. All interviews were audio recorded, transcribed verbatim and then analysed using NVivo 8. A thematic content approach to data analysis was used. Following transcription, the interview transcripts were checked for accuracy, read for context and then coded. Individual codes were then collapsed into broader categories that could be generally divided between the two main areas of enquiry (rehabilitation and evaluation) that were the focus of this qualitative study. Within these two areas, themes could be distinguished that helped to explain both commonalities and differences in patient and staff perspectives. The analysis was conducted by two of the authors (LB and AA) and corroborated by the third (KC). In the findings section quotes from trial participants are identified with a P followed by a study number and then the abbreviation HPL denoting that they were allocated to the healthcare professional led programme or SM denoting that they were allocated to the self-managed programme. 
Quotes from staff are identified with an S followed by a study number.", "[SUBTITLE] Patients' perceptions of the rehabilitation programmes [SUBSECTION] Despite being developed by staff in collaboration with patients and having undergone a pilot phase to test feasibility and acceptability, views of the healthcare professional led rehabilitation programme were mixed. Most of those interviewed who had taken part in the HPL programme were positive about the exercise component of the programme. Many expressed some degree of exhaustion after exercising, but this was temporary and relieved by resting. However two participants reported finding the exercises too exhausting and one commented:\nP123 (HPL) I personally felt that it had perhaps done me more harm than good. And I perhaps thought it was a bit early because I wasn't coping with it very well.\nIt has been argued that bone marrow transplantation is one of the most stressful treatments in cancer care [10]. It was for this reason that a relaxation component was included as part of the HPL programme. However this was not a need perceived by all participants.\nP117(SM) I tend to just take every day as it comes... I've gone past that stage of worrying about things.\nSome participants clearly felt uncomfortable with the relaxation sessions and one suggested that taking part was like being \"subjected to sort of séances\" (P123HPL). This participant who had not found the relaxation sessions helpful said \"I think a couple of [group members] were nodding off. I thought for god's sake try and keep awake\" (P123) indicating that, for this participant, falling asleep would have been undesirable and unacceptable. However, in contrast another participant commented that \"I love doing that kind of thing, I could kind of fall asleep\" (P326). While the relaxation component was not universally popular some participants indicated that relaxation sessions had been useful and requested assistance with repeating the exercises at home. These were often the participants who, prior to the intervention, had been noticeably anxious or who had found the transplant process to be traumatic.\nIn the interviews participants were given the opportunity to comment on each of the information sessions provided as part of the HPL programme. Participants' reflections on the sessions were generally positive but suggested that information was sometimes repetitive, simply stating common sense, or given too late in the transplant process.\nP315 (HPL) But a lot of it was repetition and there was a bit of déjà vu. I'd heard and seen a lot of it before.\nThe participants felt that recovery after stem cell transplantation was a highly individual process. One example of this was the reaction of different participants to one of the information components which was led by a hospital chaplain which looked at life after transplant. Participants' feelings about the session varied considerably.\nP326 (HPL): the guy who came from the church he was quite interesting, talking about death and how people feel about dying and how people think, that was interesting. I can remember others but I think he was the most interesting one.\nP320 (HPL): We had the vicar and he was very nice, but for me he was a bit intrusive and that was no fault of his that was just you know the way that I felt about it.\nSeveral participants allocated to the self managed programme said that they had never attempted the exercises while others indicated that they gave the exercises a go but quickly lost interest. 
Reasons for this included a lack of motivation to exercise, or a preference for other activities such as gardening or walking the dog. Those participants who reported not completing the exercises said that they believed that their motivation would have been increased if they had been allocated to the HPL programme. Participants attributed this to two factors: the support provided by a group environment and a feeling of greater confidence exercising where there were professionals available to prevent injury. In contrast to this three participants suggested that they found the programme appropriate and helpful and that they had consistently completed the programme several times a week for a period of many weeks. The differing reactions of individuals to the exercises appeared to relate to several factors. The exercises were completed by those with a high level of motivation to return to a previous state of fitness and by those who had previous direct (for example army training) or indirect (for example a close relative with experience of circuit training) experience of exercise training.\nP311(SM) I found them very good... I first looked at them and thought this looks too easy, but, the situation I was in at the time when I came out of hospital, nothing was easy so they were very good overall. The balance was excellent.\nBetween and within the two study groups, opinions on the value of the intervention received were strongly divided. This illustrates some of the difficulties inherent in trying to develop rehabilitation interventions of this sort. For many participants, initial enthusiasm was occasionally followed by a need for greater flexibility. This was particularly a problem for the 'group' nature of the health profession led programme, which, by definition was designed for a 'typical' recovery trajectory that did not always suit individuals with varying needs that fluctuated over time. Conversely, for those allocated to the self-managed care group, a lack of contact with others in the early stages of their rehabilitation meant those with less personal support may have struggled to motivate themselves.\n[SUBTITLE] Staff and patient perceptions of involvement in a trial [SUBSECTION] The understanding of the rationale for conducting a randomised controlled trial varied between the staff members interviewed. Some perceived a robust evaluation to be the right thing to do in a climate of limited financial resources. Others implied that they were involved not because they wanted to find out if the intervention worked but because they wanted to prove that it worked. This is a subtle but important difference in emphasis which had a number of implications on how staff felt about the trial.\nS4 It [the trial] was the right thing to try to do, yes, I think that's right because clearly it [the intervention] involves expense, and the question is how much value it had.\nS3: we've had to go through that process [the RCT] and I now appreciate that. Before I resented it [the need to test a new intervention].\nSuch a belief in the inherent value of the healthcare professional led programme led these staff to experience disappointment over the outcome of the randomisation process in particular cases. Staff involved in delivering the HPL programme reported finding it very difficult when participants that they perceived would particularly benefit from the intervention were allocated to the self-managed programme.\nS1 there were certain patients that we saw that, you know, you were desperate for them to get the hospital led programme.\nThe impact of this was that some members of staff found both the trial experience and recruiting patients burdensome and attributed problems with delivering the HPL programme to the fact that a trial was being conducted. One member of staff perceived the trial to place a series of hurdles that patients had to overcome (such as consent to trial participation and randomisation) to access the HPL programme. As a result, disappointing trial recruitment meant that fewer patients than anticipated were participating in the HPL at any one time, meaning a small group size changed the very nature of the programme; the staff member suggesting that \"the trial had killed the intervention\" (S2).\nA number of participants also held strong opinions with regards to the two programmes. Several participants expressed their conviction that the HPL programme was superior.\nP315 (HPL): I was on the right arm of the trial. It did things better for me.\nP318 (SM): It was fair as you did, you know, being picked out I suppose it is fair. It's just that I were picked out for the wrong one.\nHowever not all participants felt that the HPL programme was superior. 
One participant was allocated the HPL programme but chose never to attend commenting afterwards that they should have consented to take part on the condition that they were allocated to the SM programme.\nThe comments of both staff and patients highlight that the rigour of trial design can easily cause confusion and or dissatisfaction. That a member of staff commented that 'the trial killed the intervention' is particularly interesting. Their perspective was that since the pilot phase had in their opinion been a success it must have been as a result of the trial that the difficulties arose. However it could be argued that the relatively low recruitment and the level of trial attrition were in fact related primarily to the intervention rather than the trial. More robust piloting of both the intervention and of the recruitment and randomisation process may have identified more accurately where the problems lay.\n[SUBTITLE] The Impact of Organisational Culture [SUBSECTION] It was consistently recognised by staff that due to the nature of the conditions treated on the unit where the research took place, a highly 'technical' culture existed. It was suggested that this narrow focus on the biological needs of patients resulted in an undervaluing of the psychosocial aspects of health.\nS5: \"Some of the medics are focused on getting people in and getting people out, and therefore spending half an hour talking to somebody about psychological issues, um, isn't high on their cards.\"\nS4: \"Well there probably is [a need for emotional rehabilitation] although I don't know that we cater for that\".\nThis, it was argued, had important consequences for how the service was delivered, the implementation of the intervention, and the success of the trial. It was suggested that this technical perspective somehow 'rubbed off' on patients resulting in them experiencing a narrowing of their own concept of health.\nS2: \"And I think they (patients) are probably focusing very much on blood counts and whether they've got GVHD [Graft Versus Host Disease] and if they're doing all right, and actually rehab is like one extra thing to them, it's like, almost like they've closed down the shutters, they can't take on any more, and they're like, no actually I'm fine, I'll be all right, I'll sort it out, and it'll be okay\".\nIt was felt that despite the fact that the HPL programme was developed in response to the lack of emphasis on psychosocial care, the programme and its evaluation were potentially undermined by the reluctance of some staff to prioritise psychosocial care. Since some members of the medical team did not place a high priority on psychosocial support they likewise did not prioritise a trial which was attempting to evaluate a biopsychosocial intervention. This issue was a particular problem since it was felt that patients were more likely to take part in the trial if it was mentioned to them by a member of medical staff. Without the support of the whole medical team there was not the momentum to promote and maintain trial recruitment. Furthermore, for one member of staff, the trial put into sharp focus the dynamics of the professional hierarchy that was embedded in the local organisational culture.\nS3: \"these patients, everything they do, from, even sometimes getting out of bed, they will say well we'll do it if the doctors say we'll do it, because everything's so medically controlled, (...) 
patients do what the doctors say...\"\nS2: \"I've learned a lot, in terms of the sort of power relations and how to get things done or not done. And I would have thought before the trial, I was quite influential, and actually, when it actually comes to it, you realise you're not that influential at all really.\"", "Despite being developed by staff in collaboration with patients and having undergone a pilot phase to test feasibility and acceptability, views of the healthcare professional led rehabilitation programme were mixed. Most of those interviewed who had taken part in the HPL programme were positive about the exercise component of the programme. Many expressed some degree of exhaustion after exercising, but this was temporary and relieved by resting.
However two participants reported finding the exercises too exhausting and one commented:\nP123 (HPL) I personally felt that it had perhaps done me more harm than good. And I perhaps thought it was a bit early because I wasn't coping with it very well.\nIt has been argued that bone marrow transplantation is one of the most stressful treatments in cancer care [10]. It was for this reason that a relaxation component was included as part of the HPL programme. However this was not a need perceived by all participants.\nP117(SM) I tend to just take every day as it comes... I've gone past that stage of worrying about things.\nSome participants clearly felt uncomfortable with the relaxation sessions and one suggested that taking part was like being \"subjected to sort of séances\" (P123HPL). This participant who had not found the relaxation sessions helpful said \"I think a couple of [group members] were nodding off. I thought for god's sake try and keep awake\" (P123) indicating that, for this participant, falling asleep would have been undesirable and unacceptable. However, in contrast another participant commented that \"I love doing that kind of thing, I could kind of fall asleep\" (P326). While the relaxation component was not universally popular some participants indicated that relaxation sessions had been useful and requested assistance with repeating the exercises at home. These were often the participants who, prior to the intervention, had been noticeably anxious or who had found the transplant process to be traumatic.\nIn the interviews participants were given the opportunity to comment on each of the information sessions provided as part of the HPL programme. Participants' reflections on the sessions were generally positive but suggested that information was sometimes repetitive, simply stating common sense, or given too late in the transplant process.\nP315 (HPL) But a lot of it was repetition and there was a bit of déjà vu. I'd heard and seen a lot of it before.\nThe participants felt that recovery after stem cell transplantation was a highly individual process. One example of this was the reaction of different participants to one of the information components which was led by a hospital chaplain which looked at life after transplant. Participants' feelings about the session varied considerably.\nP326 (HPL): the guy who came from the church he was quite interesting, talking about death and how people feel about dying and how people think, that was interesting. I can remember others but I think he was the most interesting one.\nP320 (HPL): We had the vicar and he was very nice, but for me he was a bit intrusive and that was no fault of his that was just you know the way that I felt about it.\nSeveral participants allocated to the self managed programme said that they had never attempted the exercises while others indicated that they gave the exercises a go but quickly lost interest. Reasons for this included a lack of motivation to excercise, or a preference for other activities such as gardening or walking the dog. Those participants who reported not completing the exercises said that they believed that their motivation would have been increased if they had been allocated to the HPL programme. Participants attributed this to two factors: the support provided by a group environment and a feeling of greater confidence exercising where there were professionals available to prevent injury. 
In contrast to this three participants suggested that they found the programme appropriate and helpful and that they had consistently completed the programme several times a week for a period of many weeks. The differing reactions of individuals to the exercises appeared to relate to several factors. The exercises were completed by those with a high level of motivation to return to a previous state of fitness and by those who had previous direct (for example army training) or indirect (for example a close relative with experience of circuit training) experience of exercise training.\nP311(SM) I found them very good... I first looked at them and thought this looks too easy, but, the situation I was in at the time when I came out of hospital, nothing was easy so they were very good overall. The balance was excellent.\nBetween and within the two study groups, opinions on the value of the intervention received were strongly divided. This illustrates some of the difficulties inherent in trying to develop rehabilitation interventions of this sort. For many participants, initial enthusiasm was occasionally followed by a need for greater flexibility. This was particularly a problem for the 'group' nature of the health profession led programme, which, by definition was designed for a 'typical' recovery trajectory that did not always suit individuals with varying needs that fluctuated over time. Conversely, for those allocated to the self-managed care group, a lack of contact with others in the early stages of their rehabilitation meant those with less personal support may have struggled to motivate themselves.", "The understanding of the rationale for conducting a randomised controlled trial varied between the staff members interviewed. Some perceived a robust evaluation to be the right thing to do in a climate of limited financial resources. Others implied that they were involved not because they wanted to find out if the intervention worked but because they wanted to prove that it worked. This is a subtle but important difference in emphasis which had a number of implications on how staff felt about the trial.\nS4 It [the trial] was the right thing to try to do, yes, I think that's right because clearly it [the intervention] involves expense, and the question is how much value it had.\nS3: we've had to go through that process [the RCT] and I now appreciate that. Before I resented it [the need to test a new intervention].\nSuch a belief in the inherent value of the healthcare professional led programme led these staff to experience disappointment over the outcome of the randomisation process in particular cases. Staff involved in delivering the HPL programme reported finding it very difficult when participants that they perceived would particularly benefit from the intervention were allocated to the self-managed programme.\nS1 there were certain patients that we saw that, you know, you were desperate for them to get the hospital led programme.\nThe impact of this was that some members of staff found both the trial experience and recruiting patients burdensome and attributed problems with delivering the HPL programme to the fact that a trial was being conducted. One member of staff perceived the trial to place a series of hurdles that patients had to overcome (such as consent to trial participation and randomisation) to access the HPL programme. 
As a result, disappointing trial recruitment meant that fewer patients than anticipated were participating in the HPL at any one time, meaning a small group size changed the very nature of the programme; the staff member suggesting that \"the trial had killed the intervention\" (S2).\nA number of participants also held strong opinions with regards to the two programmes. Several participants expressed their conviction that the HPL programme was superior.\nP315 (HPL): I was on the right arm of the trial. It did things better for me.\nP318 (SM): It was fair as you did, you know, being picked out I suppose it is fair. It's just that I were picked out for the wrong one.\nHowever not all participants felt that the HPL programme was superior. One participant was allocated the HPL programme but chose never to attend commenting afterwards that they should have consented to take part on the condition that they were allocated to the SM programme.\nThe comments of both staff and patients highlight that the rigour of trial design can easily cause confusion and or dissatisfaction. That a member of staff commented that 'the trial killed the intervention' is particularly interesting. Their perspective was that since the pilot phase had in their opinion been a success it must have been as a result of the trial that the difficulties arose. However it could be argued that the relatively low recruitment and the level of trial attrition were in fact related primarily to the intervention rather than the trial. More robust piloting of both the intervention and of the recruitment and randomisation process may have identified more accurately where the problems lay.", "It was consistently recognised by staff that due to the nature of the conditions treated on the unit where the research took place, a highly 'technical' culture existed. It was suggested that this narrow focus on the biological needs of patients resulted in an undervaluing of the psychosocial aspects of health.\nS5: \"Some of the medics are focused on getting people in and getting people out, and therefore spending half an hour talking to somebody about psychological issues, um, isn't high on their cards.\"\nS4: \"Well there probably is [a need for emotional rehabilitation] although I don't know that we cater for that\".\nThis, it was argued, had important consequences for how the service was delivered, the implementation of the intervention, and the success of the trial. It was suggested that this technical perspective somehow 'rubbed off' on patients resulting in them experiencing a narrowing of their own concept of health.\nS2: \"And I think they (patients) are probably focusing very much on blood counts and whether they've got GVHD [Graft Versus Host Disease] and if they're doing all right, and actually rehab is like one extra thing to them, it's like, almost like they've closed down the shutters, they can't take on any more, and they're like, no actually I'm fine, I'll be all right, I'll sort it out, and it'll be okay\".\nIt was felt that despite the fact that the HPL programme was developed in response to the lack of emphasis on psychosocial care, the programme and its evaluation were potentially undermined by the reluctance of some staff to prioritise psychosocial care. Since some members of the medical team did not place a high priority on psychosocial support they likewise did not prioritise a trial which was attempting to evaluate a biopsychosocial intervention. 
This issue was a particular problem since it was felt that patients were more likely to take part in the trial if it was mentioned to them by a member of medical staff. Without the support of the whole medical team there was not the momentum to promote and maintain trial recruitment. Furthermore, for one member of staff, the trial put into sharp focus the dynamics of the professional hierarchy that was embedded in the local organisational culture.\nS3: \"these patients, everything they do, from, even sometimes getting out of bed, they will say well we'll do it if the doctors say we'll do it, because everything's so medically controlled, (...) patients do what the doctors say...\"\nS2: \"I've learned a lot, in terms of the sort of power relations and how to get things done or not done. And I would have thought before the trial, I was quite influential, and actually, when it actually comes to it, you realise you're not that influential at all really.\"", "This qualitative interview study has provided an insight into both the way the two rehabilitation programmes were experienced and the reality of conducting a randomised controlled trial in a health service setting. Although there are other good examples of this [11,12] in the main, trial reports of the context of the intervention(s) and the evaluation are limited. The artificiality of the experimental process has been highlighted [13].\nThe data in this study highlighted the numerous practical difficulties involved in trying to develop and evaluate a complex intervention and that compromise is often required between the optimum research design and the practicalities of delivering health care in the real world. The patient data in this study illustrated how difficult it is to develop and standardise a complex rehabilitation intervention so that it is acceptable to patients with different needs and preferences. Hawe et al. [14] suggest that too often a complex intervention is reduced to its constituent parts in order for it to fulfil the strict requirements of a randomised controlled trial. In effect this results in a complex intervention being reduced to a series of simple interventions and in doing so fails to acknowledge that a complex intervention has the potential to be more than the sum of its parts [14]. Hawe et al. suggest that inconclusive trials could be avoided if standardisation of the function of an intervention rather than of its form was more widely utilised. They suggest that this would allow for context level adaption and enable tailoring of the intervention to the local environment, which would potentially improve efficacy. In our own study, a trial could have evaluated whether having access to a patient centred rehabilitation service improved patient outcomes as opposed to testing whether a defined set of rehabilitation interventions improved patient outcomes. The service could have included a core set of interventions which were selected and delivered in response to individual patient need.\nThis study found that some participants and staff felt a sense of misgiving over the use of a randomised controlled trial design to evaluate the rehabilitation programmes. Many patients and staff had clear preferences and this meant that the concepts of equipoise and randomisation were contentious. 
The fact that patients and members of the general public find the concept of randomisation and equipoise perplexing is widely acknowledged [15-17] and advice exists [18] which aims to assist in explaining this concept to potential trial participants. Whilst the benefit of the programme was unproven, some staff held strong personal beliefs about its efficacy. It has been suggested that only 25% of medical staff can envisage themselves being in personal equipoise and only 18% thought that their patients could be in this state [19]. This raises a number of practical and ethical issues. Collective equipoise may be the justification for randomisation but staff who hold strong views on the likely superiority of the effectiveness of one treatment will have difficulty with seeing patients randomised, while potential participants with clear preferences are unlikely to agree to randomisation. Although this is a potential problem in all trials, it is often magnified in the evaluation of complex interventions due to the necessary drive and determination of individuals to bring about their development. It is unrealistic to expect staff to take a neutral position in relation to something that they have been instrumental in creating.\nAnalysis of this qualitative data has highlighted that difficulties in conducting the trial stemmed not just from the challenges associated with the complexity of the intervention but also from the complexity of the local organisational culture. The importance of trial context has been acknowledged particularly in relation to complex interventions [3,20]. The impact that context and culture can have on a trial has been highlighted in seminal sociological works by Oakley [21] and Fox [22] though trialists have perhaps failed to embrace this literature in a way that can fundamentally change the approach taken to designing and implementing clinical trials. Each person involved in a trial, whether they are a participant, health professional or researcher, is influenced by their own beliefs, attitudes and experiences which consciously or unconsciously affect the way in which they engage with the research process. This creates cultural expectations, both positive and negative, which affect how other people engage with the trial. This study found that organisational culture and underlying assumptions were rarely acknowledged and that they only become apparent if they were challenged in some way. Furthermore since those who determined the cultural norms and ethos of the research environment were ambivalent about the trial there was insufficient commitment to support the considerable energy, time and resource demands required for the trial to be a success. The disparity in influence that members of different healthcare professions exert has been well documented [23,24] and here, the effects of this were seen beyond healthcare delivery and into how research agendas are set and enacted.\n[SUBTITLE] Strengths and limitations [SUBSECTION] Our sampling of trial participants and staff involved in the delivery of the interventions have allowed both perspectives to be explored. Similarly, although we acknowledge that our sample of staff is relatively small, we have included accounts of both staff directly involved in the trial and those only indirectly involved. 
In addition the challenges faced by both participants and healthcare staff that were identified in these interviews appear to have 'relevance' [25] to trials of complex healthcare interventions beyond this particular trial in this particular setting. A limitation of the study is the lack of a voice for those who declined to take part in the trial itself. For ethical and practical reasons we were not able to approach any individual who had already declined to take part in the trial. Another limitation of this study is that the interviews were conducted by LB who was closely associated with the trial by both patients and staff. This may have constrained interviewees' views on either the intervention(s) or the trial.\n[SUBTITLE] Transferability [SUBSECTION] Many of the issues in this paper relate to a particular intervention, the way it was evaluated and the environment in which this took place. Despite this, many of the issues discussed are pertinent to the evaluation of complex health service interventions more generally. By definition complex interventions are difficult to standardise, blind and regulate. Furthermore they are far more likely to be developed as a result of the determination and dedication of clinical staff, and need to be tested within the competing pressures of the clinical environment.", "Our sampling of trial participants and staff involved in the delivery of the interventions have allowed both perspectives to be explored. Similarly, although we acknowledge that our sample of staff is relatively small, we have included accounts of both staff directly involved in the trial and those only indirectly involved. In addition the challenges faced by both participants and healthcare staff that were identified in these interviews appear to have 'relevance' [25] to trials of complex healthcare interventions beyond this particular trial in this particular setting.
A limitation of the study is the lack of a voice for those who declined to take part in the trial itself. For ethical and practical reasons we were not able to approach any individual who had already declined to take part in the trial. Another limitation of this study is that the interviews were conducted by LB who was closely associated with the trial by both patients and staff. This may have constrained interviewees' views on either the intervention(s) or the trial.", "Many of the issues in this paper relate to a particular intervention, the way it was evaluated and the environment in which this took place. Despite this, many of the issues discussed are pertinent to the evaluation of complex health service interventions more generally. By definition complex interventions are difficult to standardise, blind and regulate. Furthermore they are far more likely to be developed as a result of the determination and dedication of clinical staff, and need to be tested within the competing pressures of the clinical environment.", "A lack of scientific evidence as to the efficacy of an intervention does not preclude staff and patients holding strong views about the benefits of an intervention. The evaluation of complex interventions should, where possible, facilitate not restrict that complexity. Although MRC guidance on evaluating complex interventions [3] provides a useful framework for the successful conduct of trials of this sort, the evolution of interventions in healthcare and their subsequent evaluation does not always proceed in clearly defined stages. Within this more fluid process, two elements appear crucial. First, pilot studies should test the process of recruitment and randomisation, examining the views and experiences of staff and participants on this element of the research process. The reality of randomising a service is often far more difficult for healthcare providers than they anticipate. A more formal pilot study is unlikely to be conducted without funding, but it has the advantage of greater separation between the design and testing of the intervention. Secondly, an understanding of the way a particular service operates is not enough, an insight into the organisational culture is necessary. Findings from this study strongly suggest that within the local environment where the trial is conducted, acquiescence from those in positions of authority is insufficient; commitment to the trial is required.", "The authors declare that they have no competing interests.", "LB, AA and KC conceived and designed the study. LB conducted the research interviews. LB and AA analysed the data. All authors helped to draft the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/11/24/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Phase I randomized dose-ascending placebo-controlled trials of ferroquine--a candidate anti-malarial drug--in adults with asymptomatic Plasmodium falciparum infection.
21362162
The development and spread of drug resistant Plasmodium falciparum strains is a major concern and novel anti-malarial drugs are, therefore, needed. Ferroquine is a ferrocenic derivative of chloroquine with proven anti-malarial activity against chloroquine-resistant and -sensitive P. falciparum laboratory strains.
BACKGROUND
Young adult males aged 18 to 45 years, asymptomatic carriers of P. falciparum, were included in two dose-escalation, double-blind, randomized, placebo-controlled Phase I trials, a single-dose study and a multiple-dose study, aiming to evaluate oral doses of ferroquine from 400 to 1,600 mg.
METHODS
Overall, 54/66 patients (40 and 26 treated in the single and multiple dose studies, respectively) experienced at least one adverse event; 15 of these were under placebo. Adverse events were mainly gastrointestinal symptoms such as abdominal pain (16), diarrhoea (5), nausea (13), and vomiting (9), but also headache (11) and dizziness (5). A few patients had slightly elevated liver parameters (10/66), including two patients under placebo. Moderate changes in QTc and morphological changes in T waves were observed in the course of the study. However, no adverse cardiac effects of clinical relevance were observed.
RESULTS
These phase I trials showed that, clinically, ferroquine was generally well tolerated up to 1,600 mg as a single dose and up to 800 mg as a repeated dose in asymptomatic young males with P. falciparum infection. Further clinical development of ferroquine, either alone or in combination with another anti-malarial, is highly warranted and currently underway.
CONCLUSIONS
[ "Adolescent", "Adult", "Aminoquinolines", "Antimalarials", "Asymptomatic Diseases", "Double-Blind Method", "Ferrous Compounds", "Humans", "Malaria, Falciparum", "Male", "Metallocenes", "Middle Aged", "Placebos", "Plasmodium falciparum", "Treatment Outcome", "Young Adult" ]
3056844
null
null
Methods
[SUBTITLE] Study design, settings and population [SUBSECTION] Both clinical trials were designed as phase I, randomized, double blind, placebo-controlled, sequential dose escalation studies. The first study TDU5967 ("Single Dose Study") was designed as a dose escalation trial of a single oral dose, whereas the second study TDR5969 ("3 Day Repeated Dose Study") aimed to evaluate a three-day treatment course. Clinical work was carried out from April 2005 to June 2006 at the Medical Research Unit of the Albert Schweitzer Hospital in Lambaréné, Gabon, a semi-urban town of about 30,000 inhabitants where malaria transmission is perennial with moderate seasonal variation [9,10]. Male volunteers aged between 18 and 45 years residing in Lambaréné and its surrounding areas were invited to participate. Individuals having provided written informed consent and harbouring asymptomatic P. falciparum infection were enrolled into the study if they fulfilled the following criteria: (1) weight between 50 and 90 kg, (2) BMI of 18 to 28 kg/m2, (3) normal vital signs after 10 minutes resting in supine position, i.e. systolic blood pressure between 95 and 140 mmHg, and diastolic blood pressure between 50 and 90 mmHg, and heart rate between 45 and 90 beats per minute, (4) normal 12-lead automatic ECG, (5) and no fever or history of fever in the last 72 hours. Exclusion criteria were (1) presence of any clinical signs of malaria, (2) previous anti-malarial treatment, (3) significant co-morbidities or other conditions impeding compliance with the study, (4) presence or history of drug allergy, (5) evidence for hepatitis B, C, or HIV infection (HBs antigen, anti-HCV antibodies, anti-HIV1&2 antibodies). The studies were granted ethical clearance by the Ethics Committee of the International Foundation of the Albert Schweitzer Hospital. The study protocols complied with recommendations of the 18th World Health Congress (Helsinki, 1964) and all applicable amendments, also with laws, regulations, and any applicable guideline of Gabon, the country where the studies were conducted in compliance with the International Harmonized Conference on Good Clinical Practices guidelines. Informed consent was obtained prior to the conduct of any study-related procedures. [SUBTITLE] Interventions [SUBSECTION] [SUBTITLE] Randomization and blinding [SUBSECTION] Study treatments were prepared by dose level for each patient according to the study randomization list. The following doses of ferroquine were to be explored, 400 mg, 800 mg, 1,200 mg, 1,400 mg, and 1,600 mg for the single dose study and 400 mg, 600 mg, 800 mg, and 1,000 mg for the three-day repeated dose study. Treatment allocation was performed in sequential groups of eight participants per dose group. Six participants were to receive ferroquine and two participants were to receive placebo. All ferroquine dose strengths and placebo capsules were identically matched. The first patients were assigned to the lowest dose level of the study. The treatment allocation was double-blinded and was done according to the randomization list which was computer-generated. The allocation code was concealed at site and procedures were put in place to have the option for code breaking in case of urgent clinical need. Decision for dose escalation was based on a thorough review of clinical and laboratory data, including ECG and pharmacokinetic data of the preceding dose level.
[SUBTITLE] Study procedures [SUBSECTION] [SUBTITLE] Screening and drug administration [SUBSECTION] Recruitment of volunteers was started with a pre-screening visit to determine the P. falciparum infection status by blood smear, Rapid Test and PCR [11]. If infection was confirmed by at least one out of these three laboratory tests, the volunteers were invited to undergo the screening procedure, including a complete physical examination, haematology and biochemistry analysis, urinalysis, urine drug screen, and viral serology (HIV, HBSAg, HCV). Eligible volunteers were admitted as inpatients to the Medical Research Unit and study drugs were administered orally with noncarbonated water under fasted conditions. Treatment duration was one day in the single dose study and three days in the multiple dose study. Volunteers were discharged from the hospital two days after the last drug intake and were followed up for eight weeks in both studies. [SUBTITLE] Recording and reporting of adverse events and cardiac effects [SUBSECTION] Adverse events were defined as any clinical change from baseline or previous visit spontaneously reported by the patient or observed by the investigator. All adverse events regardless of seriousness or relationship to the investigational product were recorded. Adverse events were coded according to the Medical Dictionary for Regulatory Activities (MedDRA, version 8.1). Due to the very long half-life of the product, around 20 days, the rule applied for the emergence of adverse events was 5 half-lives after administration. Thus, an adverse event was considered as emergent until the end-of-study visit. Abnormalities of vital signs, laboratory parameters or ECG readings were recorded as adverse events only if they were medically relevant by being symptomatic, requiring corrective treatment, leading to discontinuation and/or fulfilling a seriousness criterion. A standard 12-lead ECG was performed hourly up to six hours post-dosing, two-hourly up to 12 hours post-dose, and weekly until the end of follow-up. ECG measures were obtained from measurement of three consecutive QRS complexes and averaged. Heart rates and QTc (Bazett and Fridericia) were derived from the mean values of the measured parameters. In addition, a 24-hr ECG was recorded in the repeated dose study for five days to detect cardiac rhythm abnormalities. [SUBTITLE] Outcomes and statistics [SUBSECTION] Primary outcomes (endpoints) for these randomized clinical trials were defined as the clinical and laboratory adverse event profile and the effect of ferroquine on the cardiovascular system. Sample size for these studies was based upon empirical considerations from clinical phase I studies and no formal sample size calculation was performed. Safety analysis was based on participants who were administered at least one dose of study drug (exposed population) including participants who withdrew consent prematurely. If a participant had received the study drug and prematurely stopped study participation, he was replaced unless follow-up data were available up to day 15.
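For reference, the Bazett and Fridericia heart-rate corrections mentioned above are the standard formulas; they are not spelled out in the text, so the expressions below are supplied only for clarity (QT and RR intervals in seconds):

\[
\mathrm{QTc_{B}} = \frac{QT}{\sqrt{RR}}, \qquad \mathrm{QTc_{F}} = \frac{QT}{RR^{1/3}}
\]

Both corrections rescale the measured QT interval by a power of the RR interval so that values recorded at different heart rates can be compared.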
null
null
null
null
[ "Background", "Preclinical studies", "Previous human experience", "Study design, settings and population", "Interventions", "Randomization and blinding", "Study procedures", "Screening and drug administration", "Recording and reporting of adverse events and cardiac effects", "Outcomes and statistics", "Results", "Study population", "Frequency of adverse events", "Clinical laboratory evaluation", "Electrocardiogram evaluation", "Discussion", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Plasmodium falciparum has become resistant to several commonly used anti-malarial drugs. As a consequence new anti-malarial compounds are urgently needed. Despite a considerable increase in international efforts in anti-malarial drug development, there is at present a shortage of truly novel drugs which do not share the same mechanisms of action and drug resistance with those that are failing today. The entire reliance on artemisinin derivatives in the drug development over the past decade might be deceptive in the light of first reports of artemisinin resistant P. falciparum [1].\nFerroquine (SSR97193) - a ferrocenic derivative of chloroquine and thus a 4-aminoquinoline compound - is a truly novel anti-malarial candidate drug. Ferroquine demonstrated anti-malarial activity in vitro and in vivo [2-5]. In vitro, ferroquine has a potent activity on Plasmodium falciparum laboratory strains, whether they are sensitive or resistant to chloroquine. In vivo, four days of treatment with ferroquine efficiently cured mice infected with either chloroquine-resistant or sensitive strains of Plasmodium vinckei. Ferroquine is a racemic compound with both enantiomers showing an anti-malarial activity similar to that of the parent compound, in vitro and in vivo. Ferroquine is metabolized into one major metabolite (N-monodemethylated), also highly active in vitro. Tests on field isolates confirmed the activity of ferroquine against parasites that are multi-resistant to most marketed anti-malarial agents and did not show any significant cross sensitivity with major anti-malarials currently used [5-8].\n[SUBTITLE] Preclinical studies [SUBSECTION] Ferroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). In vivo, in anaesthetized but not in conscious dogs, ferroquine induced from blood concentrations ≥200 ng/mL major haemodynamic effects. In vivo, in conscious dogs, a ferroquine/artesunate combination (at 30 mg/kg and 100 mg/kg per os, respectively, i.e., ferroquine blood concentrations of 669 ng/mL) was devoid of effects on blood pressure, heart rate, PR interval, and QRS complex duration whereas QTcB and QTcF were slightly increased [artesunate alone had no significant effect on cardiovascular and electrocardiogram parameters]. However, no increases in QT, QTc were noted in the 13-week oral toxicity study in dogs at 0.2/25, 1/25 and 5/25 mg/kg of ferroquine/artesunate combination and at 5 mg/kg of ferroquine alone.\nFerroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. 
In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). In vivo, in anaesthetized but not in conscious dogs, ferroquine induced from blood concentrations ≥200 ng/mL major haemodynamic effects. In vivo, in conscious dogs, a ferroquine/artesunate combination (at 30 mg/kg and 100 mg/kg per os, respectively, i.e., ferroquine blood concentrations of 669 ng/mL) was devoid of effects on blood pressure, heart rate, PR interval, and QRS complex duration whereas QTcB and QTcF were slightly increased [artesunate alone had no significant effect on cardiovascular and electrocardiogram parameters]. However, no increases in QT, QTc were noted in the 13-week oral toxicity study in dogs at 0.2/25, 1/25 and 5/25 mg/kg of ferroquine/artesunate combination and at 5 mg/kg of ferroquine alone.\n[SUBTITLE] Previous human experience [SUBSECTION] First, in human studies of ferroquine were conducted in a double-blind, randomized, placebo controlled, and sequential dose escalation study in 42 healthy male subjects in Germany [unpublished data]. Single oral doses up to 800 mg were clinically well tolerated and no drug-related clinically relevant adverse event was observed. One serious adverse event occurred which was unrelated to the study drug and all other adverse events were mild to moderate and no relevant electrocardiogram abnormalities, such as QT prolongation or repolarisation abnormalities were described.\nThis is a report on two clinical phase I trials, which aimed to assess the clinical, and laboratory safety and tolerability profile of ferroquine. These studies were designed to evaluate ascending oral doses in young male volunteers with asymptomatic P. falciparum infection and to determine the maximum tolerated dose after oral administration of ferroquine.\nFirst, in human studies of ferroquine were conducted in a double-blind, randomized, placebo controlled, and sequential dose escalation study in 42 healthy male subjects in Germany [unpublished data]. Single oral doses up to 800 mg were clinically well tolerated and no drug-related clinically relevant adverse event was observed. One serious adverse event occurred which was unrelated to the study drug and all other adverse events were mild to moderate and no relevant electrocardiogram abnormalities, such as QT prolongation or repolarisation abnormalities were described.\nThis is a report on two clinical phase I trials, which aimed to assess the clinical, and laboratory safety and tolerability profile of ferroquine. These studies were designed to evaluate ascending oral doses in young male volunteers with asymptomatic P. falciparum infection and to determine the maximum tolerated dose after oral administration of ferroquine.", "Ferroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). 
In vivo, in anaesthetized but not in conscious dogs, ferroquine induced from blood concentrations ≥200 ng/mL major haemodynamic effects. In vivo, in conscious dogs, a ferroquine/artesunate combination (at 30 mg/kg and 100 mg/kg per os, respectively, i.e., ferroquine blood concentrations of 669 ng/mL) was devoid of effects on blood pressure, heart rate, PR interval, and QRS complex duration whereas QTcB and QTcF were slightly increased [artesunate alone had no significant effect on cardiovascular and electrocardiogram parameters]. However, no increases in QT, QTc were noted in the 13-week oral toxicity study in dogs at 0.2/25, 1/25 and 5/25 mg/kg of ferroquine/artesunate combination and at 5 mg/kg of ferroquine alone.", "First, in human studies of ferroquine were conducted in a double-blind, randomized, placebo controlled, and sequential dose escalation study in 42 healthy male subjects in Germany [unpublished data]. Single oral doses up to 800 mg were clinically well tolerated and no drug-related clinically relevant adverse event was observed. One serious adverse event occurred which was unrelated to the study drug and all other adverse events were mild to moderate and no relevant electrocardiogram abnormalities, such as QT prolongation or repolarisation abnormalities were described.\nThis is a report on two clinical phase I trials, which aimed to assess the clinical, and laboratory safety and tolerability profile of ferroquine. These studies were designed to evaluate ascending oral doses in young male volunteers with asymptomatic P. falciparum infection and to determine the maximum tolerated dose after oral administration of ferroquine.", "Both clinical trials were designed as phase I, randomized, double blind, placebo-controlled, sequential dose escalation studies. The first study TDU5967 (\"Single Dose Study\") was designed as a dose escalation trial of a single oral dose, whereas the second study TDR5969 (\"3 Day Repeated Dose Study\") aimed to evaluate a three-day treatment course.\nClinical work was carried out from April 2005 to June 2006 at the Medical Research Unit of the Albert Schweitzer Hospital in Lambaréné, Gabon, a semi-urban town of about 30,000 inhabitants where malaria transmission is perennial with moderate seasonal variation [9,10].\nMale volunteers aged between 18 and 45 years residing in Lambaréné and its surrounding areas were invited to participate. Individuals having provided written informed consent and harbouring asymptomatic P. falciparum infection were enrolled into the study if they fulfilled the following criteria: (1) weight between 50 and 90 kg, (2) BMI of 18 to 28 kg/m2, (3) normal vital signs after 10 minutes resting in supine position, i.e. systolic blood pressure between 95 and 140 mmHg, and diastolic blood pressure between 50 and 90 mmHg, and heart rate between 45 and 90 beats per minute, (4) normal 12-lead automatic ECG, (5) and no fever or history of fever in the last 72 hours. Exclusion criteria were (1) presence of any clinical signs of malaria, (2) previous anti-malarial treatment, (3) significant co-morbidities or other conditions impeding compliance with the study, (4) presence or history of drug allergy, (5) evidence for hepatitis B, C, or HIV infection (HBs antigen, anti-HCV antibodies, anti-HIV1&2 antibodies).\nThe studies were granted ethical clearance by the Ethics Committee of the International Foundation of the Albert Schweitzer Hospital. 
The study protocols complied with the recommendations of the 18th World Medical Assembly (Helsinki, 1964) and all applicable amendments, as well as with the laws, regulations and applicable guidelines of Gabon, the country where the studies were conducted, and with the International Conference on Harmonisation (ICH) Good Clinical Practice guidelines. Informed consent was obtained prior to the conduct of any study-related procedures.", "[SUBTITLE] Randomization and blinding [SUBSECTION] Study treatments were prepared by dose level for each patient according to the study randomization list. The following doses of ferroquine were to be explored: 400 mg, 800 mg, 1,200 mg, 1,400 mg, and 1,600 mg for the single dose study and 400 mg, 600 mg, 800 mg, and 1,000 mg for the three-day repeated dose study.\nTreatment allocation was performed in sequential groups of eight participants per dose group. Six participants were to receive ferroquine and two participants were to receive placebo. All ferroquine dose strengths and placebo capsules were identically matched. The first patients were assigned to the lowest dose level of the study.\nThe treatment allocation was double-blinded and was done according to the randomization list which was computer-generated. The allocation code was concealed at site and procedures were put in place to have the option for code breaking in case of urgent clinical need. Decision for dose escalation was based on a thorough review of clinical and laboratory data, including ECG and pharmacokinetic data of the preceding dose level.\nStudy treatments were prepared by dose level for each patient according to the study randomization list. The following doses of ferroquine were to be explored: 400 mg, 800 mg, 1,200 mg, 1,400 mg, and 1,600 mg for the single dose study and 400 mg, 600 mg, 800 mg, and 1,000 mg for the three-day repeated dose study.\nTreatment allocation was performed in sequential groups of eight participants per dose group. Six participants were to receive ferroquine and two participants were to receive placebo. All ferroquine dose strengths and placebo capsules were identically matched. The first patients were assigned to the lowest dose level of the study.\nThe treatment allocation was double-blinded and was done according to the randomization list which was computer-generated. 
The allocation code was concealed at site and procedures were put in place to have the option for code breaking in case of urgent clinical need. Decision for dose escalation was based on a thorough review of clinical and laboratory data, including ECG and pharmacokinetic data of the preceding dose level.", "[SUBTITLE] Screening and drug administration [SUBSECTION] Recruitment of volunteers was started with a pre-screening visit to determine the P. falciparum infection status by blood smear, Rapid Test and PCR [11]. If infection was confirmed by at least one out of these three laboratory tests, the volunteers were invited to undergo screening procedure, including a complete physical examination, haematology and biochemistry analysis, urinalysis, urine drug screen, and viral serology (HIV, HBSAg, HCV). Eligible volunteers were admitted as inpatients to the Medical Research Unit and study drugs were administered orally with noncarbonated water under fasted conditions. Treatment duration was one day in the single dose study and three days in the multiple dose study. Volunteers were discharged from the hospital two days after the last drug intake and were followed-up for eight weeks in both studies.\nRecruitment of volunteers was started with a pre-screening visit to determine the P. falciparum infection status by blood smear, Rapid Test and PCR [11]. If infection was confirmed by at least one out of these three laboratory tests, the volunteers were invited to undergo screening procedure, including a complete physical examination, haematology and biochemistry analysis, urinalysis, urine drug screen, and viral serology (HIV, HBSAg, HCV). Eligible volunteers were admitted as inpatients to the Medical Research Unit and study drugs were administered orally with noncarbonated water under fasted conditions. Treatment duration was one day in the single dose study and three days in the multiple dose study. Volunteers were discharged from the hospital two days after the last drug intake and were followed-up for eight weeks in both studies.\n[SUBTITLE] Recording and reporting of adverse events and cardiac effects [SUBSECTION] Adverse events were defined as any clinical change from baseline or previous visit spontaneously reported by the patient or observed by the investigator. All adverse events regardless of seriousness or relationship to the investigational product were recorded. Adverse events were coded according to Medical Dictionary for regulatory Activities (MedDRA, version 8.1).\nDue to the very long half-life of the product, around 20 days, the rule applied for the emergence of adverse event was 5 half-lives after administration. Thus, an adverse event was considered as emergent until the end-of-study visit. Abnormalities of vital signs, laboratory parameters or ECG readings were recorded as adverse events only if they were medically relevant by being symptomatic, requiring corrective treatment, leading to discontinuation and/or fulfilling a seriousness criterion.\nA standard 12-lead ECG was performed hourly up to six hours post-dosing, two-hourly up to 12 hours post-dose, and weekly until the end of follow-up. ECG measures were obtained from measurement of three consecutive QRS complexes and averaged. Heart rates, QTc (Bazett and Fridericia) were derived from the mean values of the measured parameters. 
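For illustration of the allocation scheme just described (sequential cohorts of eight volunteers randomized 6:2 to ferroquine at the current dose level or matched placebo, from a computer-generated list), a minimal sketch is given below. It is an illustrative reconstruction only; the function and variable names are hypothetical and do not come from the study software.

```python
import random

def allocate_cohort(dose_mg, n_active=6, n_placebo=2, seed=None):
    """Build a blinded allocation list for one dose-escalation cohort.

    Mirrors the scheme described above: eight volunteers per dose level,
    randomized 6:2 to ferroquine (at the current dose) or matched placebo.
    """
    rng = random.Random(seed)
    assignments = [f"ferroquine {dose_mg} mg"] * n_active + ["placebo"] * n_placebo
    rng.shuffle(assignments)
    return assignments

# Illustrative allocation lists for the single dose escalation levels
for dose in (400, 800, 1200, 1400, 1600):
    print(dose, allocate_cohort(dose, seed=dose))
```

In the actual studies the list was prepared in advance and concealed at the site, so that investigators handled only identically matched capsules; the sketch simply makes the 6:2 structure of each cohort explicit.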
In addition, a 24-hr ECG was recorded in the repeated dose study for five days to detect cardiac rhythm abnormalities.\nAdverse events were defined as any clinical change from baseline or previous visit spontaneously reported by the patient or observed by the investigator. All adverse events regardless of seriousness or relationship to the investigational product were recorded. Adverse events were coded according to Medical Dictionary for regulatory Activities (MedDRA, version 8.1).\nDue to the very long half-life of the product, around 20 days, the rule applied for the emergence of adverse event was 5 half-lives after administration. Thus, an adverse event was considered as emergent until the end-of-study visit. Abnormalities of vital signs, laboratory parameters or ECG readings were recorded as adverse events only if they were medically relevant by being symptomatic, requiring corrective treatment, leading to discontinuation and/or fulfilling a seriousness criterion.\nA standard 12-lead ECG was performed hourly up to six hours post-dosing, two-hourly up to 12 hours post-dose, and weekly until the end of follow-up. ECG measures were obtained from measurement of three consecutive QRS complexes and averaged. Heart rates, QTc (Bazett and Fridericia) were derived from the mean values of the measured parameters. In addition, a 24-hr ECG was recorded in the repeated dose study for five days to detect cardiac rhythm abnormalities.", "Recruitment of volunteers was started with a pre-screening visit to determine the P. falciparum infection status by blood smear, Rapid Test and PCR [11]. If infection was confirmed by at least one out of these three laboratory tests, the volunteers were invited to undergo screening procedure, including a complete physical examination, haematology and biochemistry analysis, urinalysis, urine drug screen, and viral serology (HIV, HBSAg, HCV). Eligible volunteers were admitted as inpatients to the Medical Research Unit and study drugs were administered orally with noncarbonated water under fasted conditions. Treatment duration was one day in the single dose study and three days in the multiple dose study. Volunteers were discharged from the hospital two days after the last drug intake and were followed-up for eight weeks in both studies.", "Adverse events were defined as any clinical change from baseline or previous visit spontaneously reported by the patient or observed by the investigator. All adverse events regardless of seriousness or relationship to the investigational product were recorded. Adverse events were coded according to Medical Dictionary for regulatory Activities (MedDRA, version 8.1).\nDue to the very long half-life of the product, around 20 days, the rule applied for the emergence of adverse event was 5 half-lives after administration. Thus, an adverse event was considered as emergent until the end-of-study visit. Abnormalities of vital signs, laboratory parameters or ECG readings were recorded as adverse events only if they were medically relevant by being symptomatic, requiring corrective treatment, leading to discontinuation and/or fulfilling a seriousness criterion.\nA standard 12-lead ECG was performed hourly up to six hours post-dosing, two-hourly up to 12 hours post-dose, and weekly until the end of follow-up. ECG measures were obtained from measurement of three consecutive QRS complexes and averaged. Heart rates, QTc (Bazett and Fridericia) were derived from the mean values of the measured parameters. 
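As a minimal illustration of how the rate-corrected QT values mentioned above are obtained, the standard Bazett and Fridericia corrections are sketched below; the numbers in the usage example are invented for illustration and are not study data.

```python
def rr_interval_seconds(heart_rate_bpm):
    """RR interval in seconds derived from heart rate in beats per minute."""
    return 60.0 / heart_rate_bpm

def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTcB = QT / RR**(1/2), QT in ms, RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTcF = QT / RR**(1/3), QT in ms, RR in seconds."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# Illustrative (invented) reading: QT = 380 ms at 75 bpm, i.e. RR = 0.8 s
rr = rr_interval_seconds(75)
print(round(qtc_bazett(380, rr)))      # ~425 ms
print(round(qtc_fridericia(380, rr)))  # ~409 ms
```

Both corrections normalize the measured QT interval to a heart rate of 60 beats per minute (RR = 1 s), which is what makes absolute thresholds (such as 450 ms) and changes from baseline (delta QTcF) comparable across time points.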
In addition, a 24-hr ECG was recorded in the repeated dose study for five days to detect cardiac rhythm abnormalities.", "Primary outcomes (endpoints) for these randomized clinical trials were defined as the clinical and laboratory adverse event profile and the effect of ferroquine on the cardiovascular system.\nSample size for these studies was based upon empirical considerations from clinical phase I studies and no formal sample size calculation was performed. Safety analysis was based on participants who were administered at least one dose of study drug (exposed population) including participants who withdrew consent prematurely. If a participant had received the study drug and prematurely stopped study participation he was replaced, unless if follow up data were available up to day 15.", "[SUBTITLE] Study population [SUBSECTION] Out of a total of 498 volunteers pre-screened, 158 persons with P. falciparum infections were further screened for eligibility. Eighty-four were excluded because of abnormal chemistry or haematology results, abnormal ECGs, or other health problems. Seventeen volunteers could not be enrolled due to logistical reasons and further eight volunteers were not enrolled because the predefined number of participants for each of the two studies was already reached. Recruitment and participant flow are depicted in Figure 1.\nFlow of participants.\nA total of 66 asymptomatic male volunteers infected with P. falciparum were enrolled, randomized, and treated. In the single dose study, a total of 30 volunteers were exposed to single oral doses of ferroquine over a range from 400 mg to 1,600 mg and 10 received placebo. In the three-day multiple dose study, 19 volunteers were treated for three days with daily doses from 400 mg to 1,000 mg and seven received placebo. Only one participant was administered the 1,000 mg dose. Enrolment was discontinued at 1,000 mg dose because of morphologic changes in T waves and a moderate QT prolongation observed in patients treated with 800 mg and 1,000 mg. Seven participants stopped the study prematurely for reasons unrelated to adverse events, three in the single dose study and four in the three-day repeated dose study (Figure 1). Baseline characteristics of study participants are provided in Table 1 and Table 2.\nDemographic and clinical characteristics of participants at baseline in the single dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\nDemographic and clinical characteristics of participants at baseline in the multiple dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\nOut of a total of 498 volunteers pre-screened, 158 persons with P. falciparum infections were further screened for eligibility. Eighty-four were excluded because of abnormal chemistry or haematology results, abnormal ECGs, or other health problems. Seventeen volunteers could not be enrolled due to logistical reasons and further eight volunteers were not enrolled because the predefined number of participants for each of the two studies was already reached. Recruitment and participant flow are depicted in Figure 1.\nFlow of participants.\nA total of 66 asymptomatic male volunteers infected with P. falciparum were enrolled, randomized, and treated. 
In the single dose study, a total of 30 volunteers were exposed to single oral doses of ferroquine over a range from 400 mg to 1,600 mg and 10 received placebo. In the three-day multiple dose study, 19 volunteers were treated for three days with daily doses from 400 mg to 1,000 mg and seven received placebo. Only one participant was administered the 1,000 mg dose. Enrolment was discontinued at 1,000 mg dose because of morphologic changes in T waves and a moderate QT prolongation observed in patients treated with 800 mg and 1,000 mg. Seven participants stopped the study prematurely for reasons unrelated to adverse events, three in the single dose study and four in the three-day repeated dose study (Figure 1). Baseline characteristics of study participants are provided in Table 1 and Table 2.\nDemographic and clinical characteristics of participants at baseline in the single dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\nDemographic and clinical characteristics of participants at baseline in the multiple dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\n[SUBTITLE] Frequency of adverse events [SUBSECTION] Details on the frequency of adverse events by dose-group are depicted in Table 3 and Table 4. In the single, ascending dose study, most patients in the placebo group (9/10) presented at least one adverse event during the study versus 3/6 patients in the 400 m dose group, 1/6 in the 800 mg dose group, 6/6 in the 1,200 mg and 1,400 mg dose groups, and 4/6 patients in the 1,600 mg dose group.\nFrequency of the most common adverse events during the single dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nFrequency of the most common adverse events during the multiple dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nIn the three-day repeated, ascending dose study, six patients out of seven in the placebo group experienced at least one adverse event during the trial versus all of the subjects taking ferroquine (19/19). The adverse events reported were mainly in the gastrointestinal tract, abdominal pain (16), diarrhoea (5), nausea (13), vomiting (9) and toothache (6), and in the nervous system disorders, headache (11) and dizziness (5), but also those that appeared during the follow-up period, in the musculoskeletal and connective tissue disorders (9), such as arthralgia and back pain, the injury, poisoning and procedural complications (10) (injury, post-traumatic pain, thermal burn).\nOther symptoms such as mild, transient eye disorders (blurred vision, floating objects), mild skin rash (ECGs electrode application sites dermatitis or pruritis), infections and infestations, and fatigue were infrequent and were single occurrences across all dose groups in either study. All adverse events were considered mild to moderate in intensity and were transient. 
There were no deaths, serious adverse events, or adverse events leading to withdrawal reported during the trials.\nBecause of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of adverse events between dose groups at the individual dose levels.\nDetails on the frequency of adverse events by dose-group are depicted in Table 3 and Table 4. In the single, ascending dose study, most patients in the placebo group (9/10) presented at least one adverse event during the study versus 3/6 patients in the 400 m dose group, 1/6 in the 800 mg dose group, 6/6 in the 1,200 mg and 1,400 mg dose groups, and 4/6 patients in the 1,600 mg dose group.\nFrequency of the most common adverse events during the single dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nFrequency of the most common adverse events during the multiple dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nIn the three-day repeated, ascending dose study, six patients out of seven in the placebo group experienced at least one adverse event during the trial versus all of the subjects taking ferroquine (19/19). The adverse events reported were mainly in the gastrointestinal tract, abdominal pain (16), diarrhoea (5), nausea (13), vomiting (9) and toothache (6), and in the nervous system disorders, headache (11) and dizziness (5), but also those that appeared during the follow-up period, in the musculoskeletal and connective tissue disorders (9), such as arthralgia and back pain, the injury, poisoning and procedural complications (10) (injury, post-traumatic pain, thermal burn).\nOther symptoms such as mild, transient eye disorders (blurred vision, floating objects), mild skin rash (ECGs electrode application sites dermatitis or pruritis), infections and infestations, and fatigue were infrequent and were single occurrences across all dose groups in either study. All adverse events were considered mild to moderate in intensity and were transient. There were no deaths, serious adverse events, or adverse events leading to withdrawal reported during the trials.\nBecause of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of adverse events between dose groups at the individual dose levels.\n[SUBTITLE] Clinical laboratory evaluation [SUBSECTION] The clinical laboratory evaluation was performed through clinical chemistry and haematology tests of samples taken at different time points of scheduled visits until the end-of-study visit (60 days). The majority of the clinical chemistry potentially clinically significant abnormalities observed on patients treated with concerned elevations of creatine phosphokinase (≥3 ULN, normal range 25-90 IU). In most cases, creatine phosphokinase was elevated at baseline and was related to intense farming activities prior to inclusion (Table 5 and Table 6).\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the single dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. 
For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the multiple dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nIn the single dose study, five patients out of 40 showed an alanine transaminase increase >1 ULN (defined as 45 IU/ml) at any time point of the study, with a possible association with changes in other liver function parameters. Among them were four patients treated with ferroquine and one patient treated with placebo. These changes were asymptomatic and, therefore, were not reported as adverse events. One patient under placebo experienced an increased level of alanine transaminase, with a maximum value of 59 IU on Day 21. The increase was associated with a parallel and moderate increase of total bilirubin (19 μmol/L, normal range <17 μmol/L) and a moderate increase of aspartate transaminase and gamma glutamyl transferase. In the 800 mg dose group, one patient had an increased level of alanine transaminase with a maximum value of 148 IU on Day 15, with associated increases in total bilirubin (24 μmol/L), aspartate transaminases, alkaline phosphatases, and gamma glutamyl transferase. In the 1,200 mg dose group, an increased level of alanine transaminase with a maximum value of 52 IU was observed in one patient on Day 7, whereas alkaline phosphatase and total bilirubin (5 μmol/L) remained within normal ranges. In one patient treated with 1,400 mg, an increased level of alanine transaminase with a maximum value of 47 IU was observed on Day 15, associated with parallel changes in other liver function parameters with the exception of the bilirubin level (8 μmol/L). Finally, one patient in the 1,600 mg dose group experienced an increase in alanine transaminase values with a maximum value of 48 IU on Day 2, with a parallel increase in total bilirubin (25 μmol/L) on Day 15. The respective parameters of this patient were already slightly elevated at baseline (Day -1).\nIn the three-day repeated dose study, an elevated alanine transaminase (>2 ULN) was observed in one placebo-treated patient and in four patients in the ferroquine groups. One patient in the 400 mg group (normal baseline 0.38 ULN) had an alanine transaminase >10 ULN starting at Day 12 with a maximum magnitude of 16.55 ULN, which was associated with abdominal pain and headache. Over the course of the following week, alkaline phosphatases and bilirubin increased and peaked at about 2 ULN and 1.7 ULN, respectively. Infectious causes were ruled out. However, the patient reported intake of an herbal tea against abdominal pain, which might have contributed to the enzyme abnormalities. Alanine transaminases returned to normal by Day 37 (0.71 ULN). A patient of the 600 mg group, who had a slightly elevated alanine transaminase value at screening (1.22 ULN) but a value within normal ranges at baseline (0.91 ULN), again presented an elevated value on Day 3 (1.71 ULN) and reached the potentially clinically significant abnormality level (2.18 ULN) on Day 5, subsequently followed by a decrease to the normal range by Day 12. In this same 600 mg dose group, another patient with a normal baseline value of 0.42 ULN presented elevated alanine transaminase values on Day 3 that reached the potentially clinically significant abnormality level on Day 23 (2.56 ULN). His values were still elevated at the end-of-study visit (1.78 ULN). Finally, one patient of the 800 mg dose group, with a normal baseline value of 0.38 ULN, had a peak alanine transaminase value of 6.37 ULN and a bilirubin value of 1.48 ULN at Day 12 without clinical symptoms. Alkaline phosphatases peaked at 1.77 ULN on Day 17. Liver parameter values decreased toward the normal range by the end-of-study visit. The most frequently observed haematology potentially clinically significant abnormalities concerned eosinophils and were likely related to concomitant parasitic infections existing at baseline.\nThe clinical laboratory evaluation was performed through clinical chemistry and haematology tests of samples taken at different time points of scheduled visits until the end-of-study visit (60 days). The majority of the clinical chemistry potentially clinically significant abnormalities observed in patients treated with ferroquine concerned elevations of creatine phosphokinase (≥3 ULN, normal range 25-90 IU). In most cases, creatine phosphokinase was elevated at baseline and was related to intense farming activities prior to inclusion (Table 5 and Table 6).\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the single dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the multiple dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nIn the single dose study, five patients out of 40 showed an alanine transaminase increase >1 ULN (defined as 45 IU/ml) at any time point of the study, with a possible association with changes in other liver function parameters. Among them were four patients treated with ferroquine and one patient treated with placebo. These changes were asymptomatic and, therefore, were not reported as adverse events. One patient under placebo experienced an increased level of alanine transaminase, with a maximum value of 59 IU on Day 21. The increase was associated with a parallel and moderate increase of total bilirubin (19 μmol/L, normal range <17 μmol/L) and a moderate increase of aspartate transaminase and gamma glutamyl transferase. In the 800 mg dose group, one patient had an increased level of alanine transaminase with a maximum value of 148 IU on Day 15, with associated increases in total bilirubin (24 μmol/L), aspartate transaminases, alkaline phosphatases, and gamma glutamyl transferase. 
In the 1,200 mg dose-group, at one patient an increased level of alanine transaminase with a maximum value of 52 IU was observed on Day 7, whereas, alkaline phosphatase and total bilirubin (5 μmol/L) remained within normal ranges. At one patient treated with 1,400 mg, an increased level of alanine transaminase with a maximum value of 47 IU was observed on Day 15, associated to parallel changes in other liver function parameters at the exception of the bilirubin level (8 μmol/L). Finally, one patient was one of the 1,600 mg dose-group experienced an increase of alanine transaminase values with a maximum value of 48 IU on Day 2 with parallel increase in total bilirubin (25 μmol/L) on Day 15. The respective parameters of this patient were already slightly elevated at baseline (Day -1).\nIn the three-day repeated dose study, elevated alanine transaminases (>2 ULN) was observed in one placebo-treated patient and in four patients in the ferroquine groups. One patient in the 400 mg group (normal baseline 0.38 ULN) had an alanine transaminase >10 ULN starting at Day 12 with a maximum magnitude of 16.55 ULN, which was associated to abdominal pain and headache. Over the course of the following week, alkaline phosphatases and bilirubin increased and picked at about 2 ULN and 1.7 ULN, respectively. Infectious causes were ruled out. However, the patient reported intake of an herbal tea against abdominal pain, which might have contributed to the enzyme abnormalities. Alanine transaminases returned to normal by Day 37 (0.71 ULN). A patient of the 600 mg group, who had a slightly elevated alanine transaminase value at screening (1.22 ULN), but within normal ranges at baseline (0.91 ULN), presented again an elevated value on Day 3 (1.71 ULN) and reached the potentially clinically significant abnormality level (2.18 ULN) on Day 5, subsequently followed by a decrease to normal rangeby Day 12. In this same 600 mg dose group, another patient with normal baseline value of 0.42 ULN, presented an elevated alanine transaminase values on Day 3 that reached the potentially clinically significant abnormality level on Day 23 (2.56 ULN). His values were still elevated at the end-of-study visit (1.78 ULN). Finally, one patient of the 800 mg dose group, with normal baseline value of 0.38 ULN, had a peak alanine transaminases value of 6.37 ULN and a bilirubin value of 1.48 ULN at Day 12 without clinical symptoms. Alkaline phosphatases peaked at 1.77 ULN on Day 17. Liver parameter values decreased toward the normal range by the end-of-study visit. The most frequently observed haematology, potencially clinically significant abnormalities were related to eosinophils that were likely to be related to concomitant parasitic infections existing at baseline.\n[SUBTITLE] Electrocardiogram evaluation [SUBSECTION] In the single dose study, no QTcF values >450 ms, or delta QTcF >60 ms were observed in any treatment group. One patient in the 1,200 mg dose group experienced a delta QTcF value between 30 and 60 ms, while a moderate change in mean increases from baseline of + 8.2 ms was observed for the 1,600 mg dose level coinciding with the time of maximal plasma concentrations. In the three-day repeated dose study, continuous monitoring did not show any clinically significant arrhythmic episode developing after drug administration in any patient. 
A deviation in the repolarization pattern (inversion of T wave), visible from T5 upwards and lasting for several hours, was observed in four patients of the 800 mg dose group, especially in one patient who showed a clear and long-lasting T inversion each day of investigational product administration. According to previous experiences and the known influence of aminoquinolines on ECGs, these episodes were considered to be related to the investigational product, but remained without any clinical significance. In the 800 mg dose group, there were 12.6 ms, 16.8 ms, and 19.2 ms mean increases in QTcF intervals on Day 3 at 6, 8, and 12 hours, respectively. Additionally, in the 800 mg dose group, there was a 19.6 ± 6.8 ms (mean ± SEM) maximum mean increase in QTcF intervals, compared to maximum mean increases of 9.9 ± 5.5, 5.2 ± 3.6, and 16.5 ± 6.8 ms for the placebo, 400 mg, and 600 mg groups, respectively. There were no delta QTcF intervals >60 ms and no QTcF intervals >450 ms.\nIn the single dose study, no QTcF values >450 ms, or delta QTcF >60 ms were observed in any treatment group. One patient in the 1,200 mg dose group experienced a delta QTcF value between 30 and 60 ms, while a moderate change in mean increases from baseline of + 8.2 ms was observed for the 1,600 mg dose level coinciding with the time of maximal plasma concentrations. In the three-day repeated dose study, continuous monitoring did not show any clinically significant arrhythmic episode developing after drug administration in any patient. A deviation in the repolarization pattern (inversion of T wave), visible from T5 upwards and lasting for several hours, was observed in four patients of the 800 mg dose group, especially in one patient who showed a clear and long-lasting T inversion each day of investigational product administration. According to previous experiences and the known influence of aminoquinolines on ECGs, these episodes were considered to be related to the investigational product, but remained without any clinical significance. In the 800 mg dose group, there were 12.6 ms, 16.8 ms, and 19.2 ms mean increases in QTcF intervals on Day 3 at 6, 8, and 12 hours, respectively. Additionally, in the 800 mg dose group, there was a 19.6 ± 6.8 ms (mean ± SEM) maximum mean increase in QTcF intervals, compared to maximum mean increases of 9.9 ± 5.5, 5.2 ± 3.6, and 16.5 ± 6.8 ms for the placebo, 400 mg, and 600 mg groups, respectively. There were no delta QTcF intervals >60 ms and no QTcF intervals >450 ms.", "Out of a total of 498 volunteers pre-screened, 158 persons with P. falciparum infections were further screened for eligibility. Eighty-four were excluded because of abnormal chemistry or haematology results, abnormal ECGs, or other health problems. Seventeen volunteers could not be enrolled due to logistical reasons and further eight volunteers were not enrolled because the predefined number of participants for each of the two studies was already reached. Recruitment and participant flow are depicted in Figure 1.\nFlow of participants.\nA total of 66 asymptomatic male volunteers infected with P. falciparum were enrolled, randomized, and treated. In the single dose study, a total of 30 volunteers were exposed to single oral doses of ferroquine over a range from 400 mg to 1,600 mg and 10 received placebo. In the three-day multiple dose study, 19 volunteers were treated for three days with daily doses from 400 mg to 1,000 mg and seven received placebo. Only one participant was administered the 1,000 mg dose. 
Enrolment was discontinued at 1,000 mg dose because of morphologic changes in T waves and a moderate QT prolongation observed in patients treated with 800 mg and 1,000 mg. Seven participants stopped the study prematurely for reasons unrelated to adverse events, three in the single dose study and four in the three-day repeated dose study (Figure 1). Baseline characteristics of study participants are provided in Table 1 and Table 2.\nDemographic and clinical characteristics of participants at baseline in the single dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\nDemographic and clinical characteristics of participants at baseline in the multiple dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc (Bazett Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum", "Details on the frequency of adverse events by dose-group are depicted in Table 3 and Table 4. In the single, ascending dose study, most patients in the placebo group (9/10) presented at least one adverse event during the study versus 3/6 patients in the 400 m dose group, 1/6 in the 800 mg dose group, 6/6 in the 1,200 mg and 1,400 mg dose groups, and 4/6 patients in the 1,600 mg dose group.\nFrequency of the most common adverse events during the single dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nFrequency of the most common adverse events during the multiple dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nIn the three-day repeated, ascending dose study, six patients out of seven in the placebo group experienced at least one adverse event during the trial versus all of the subjects taking ferroquine (19/19). The adverse events reported were mainly in the gastrointestinal tract, abdominal pain (16), diarrhoea (5), nausea (13), vomiting (9) and toothache (6), and in the nervous system disorders, headache (11) and dizziness (5), but also those that appeared during the follow-up period, in the musculoskeletal and connective tissue disorders (9), such as arthralgia and back pain, the injury, poisoning and procedural complications (10) (injury, post-traumatic pain, thermal burn).\nOther symptoms such as mild, transient eye disorders (blurred vision, floating objects), mild skin rash (ECGs electrode application sites dermatitis or pruritis), infections and infestations, and fatigue were infrequent and were single occurrences across all dose groups in either study. All adverse events were considered mild to moderate in intensity and were transient. There were no deaths, serious adverse events, or adverse events leading to withdrawal reported during the trials.\nBecause of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of adverse events between dose groups at the individual dose levels.", "The clinical laboratory evaluation was performed through clinical chemistry and haematology tests of samples taken at different time points of scheduled visits until the end-of-study visit (60 days). The majority of the clinical chemistry potentially clinically significant abnormalities observed on patients treated with concerned elevations of creatine phosphokinase (≥3 ULN, normal range 25-90 IU). 
In most cases, creatine phosphokinase was elevated at baseline and was related to intense farming activities prior to inclusion (Table 5 and Table 6).\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the single dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the multiple dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nIn the single dose study, five patients out of 40 showed an alanine transaminase increase >1 ULN (defined as 45 IU/ml) at any time point of the study with a possible association to changes of other liver functions. Among them were four patients treated with ferroquine and one patient treated with placebo. These changes were asymptomatic and, therefore, were not reported as adverse events. One patient under placebo experienced an increased level of alanine transaminase, with a maximum value of 59 IU on Day 21. The increase was associated to a parallel and moderate increase of total bilirubin (19 μmol/L, normal range <17 μmol/L) and a moderate increase of aspartate transaminase and gamma glutamyl transferase. In the 800 mg dose-group, one patient had an increased level of alanine transaminase with a maximum value of 148 IU on Day 15, with associated increases in total bilirubin (24 μmol/L), aspartate transaminases, alkaline phosphatases, and gamma glutamyl transferase. In the 1,200 mg dose-group, at one patient an increased level of alanine transaminase with a maximum value of 52 IU was observed on Day 7, whereas, alkaline phosphatase and total bilirubin (5 μmol/L) remained within normal ranges. At one patient treated with 1,400 mg, an increased level of alanine transaminase with a maximum value of 47 IU was observed on Day 15, associated to parallel changes in other liver function parameters at the exception of the bilirubin level (8 μmol/L). Finally, one patient was one of the 1,600 mg dose-group experienced an increase of alanine transaminase values with a maximum value of 48 IU on Day 2 with parallel increase in total bilirubin (25 μmol/L) on Day 15. The respective parameters of this patient were already slightly elevated at baseline (Day -1).\nIn the three-day repeated dose study, elevated alanine transaminases (>2 ULN) was observed in one placebo-treated patient and in four patients in the ferroquine groups. One patient in the 400 mg group (normal baseline 0.38 ULN) had an alanine transaminase >10 ULN starting at Day 12 with a maximum magnitude of 16.55 ULN, which was associated to abdominal pain and headache. Over the course of the following week, alkaline phosphatases and bilirubin increased and picked at about 2 ULN and 1.7 ULN, respectively. Infectious causes were ruled out. 
However, the patient reported intake of an herbal tea against abdominal pain, which might have contributed to the enzyme abnormalities. Alanine transaminases returned to normal by Day 37 (0.71 ULN). A patient of the 600 mg group, who had a slightly elevated alanine transaminase value at screening (1.22 ULN), but within normal ranges at baseline (0.91 ULN), presented again an elevated value on Day 3 (1.71 ULN) and reached the potentially clinically significant abnormality level (2.18 ULN) on Day 5, subsequently followed by a decrease to normal rangeby Day 12. In this same 600 mg dose group, another patient with normal baseline value of 0.42 ULN, presented an elevated alanine transaminase values on Day 3 that reached the potentially clinically significant abnormality level on Day 23 (2.56 ULN). His values were still elevated at the end-of-study visit (1.78 ULN). Finally, one patient of the 800 mg dose group, with normal baseline value of 0.38 ULN, had a peak alanine transaminases value of 6.37 ULN and a bilirubin value of 1.48 ULN at Day 12 without clinical symptoms. Alkaline phosphatases peaked at 1.77 ULN on Day 17. Liver parameter values decreased toward the normal range by the end-of-study visit. The most frequently observed haematology, potencially clinically significant abnormalities were related to eosinophils that were likely to be related to concomitant parasitic infections existing at baseline.", "In the single dose study, no QTcF values >450 ms, or delta QTcF >60 ms were observed in any treatment group. One patient in the 1,200 mg dose group experienced a delta QTcF value between 30 and 60 ms, while a moderate change in mean increases from baseline of + 8.2 ms was observed for the 1,600 mg dose level coinciding with the time of maximal plasma concentrations. In the three-day repeated dose study, continuous monitoring did not show any clinically significant arrhythmic episode developing after drug administration in any patient. A deviation in the repolarization pattern (inversion of T wave), visible from T5 upwards and lasting for several hours, was observed in four patients of the 800 mg dose group, especially in one patient who showed a clear and long-lasting T inversion each day of investigational product administration. According to previous experiences and the known influence of aminoquinolines on ECGs, these episodes were considered to be related to the investigational product, but remained without any clinical significance. In the 800 mg dose group, there were 12.6 ms, 16.8 ms, and 19.2 ms mean increases in QTcF intervals on Day 3 at 6, 8, and 12 hours, respectively. Additionally, in the 800 mg dose group, there was a 19.6 ± 6.8 ms (mean ± SEM) maximum mean increase in QTcF intervals, compared to maximum mean increases of 9.9 ± 5.5, 5.2 ± 3.6, and 16.5 ± 6.8 ms for the placebo, 400 mg, and 600 mg groups, respectively. There were no delta QTcF intervals >60 ms and no QTcF intervals >450 ms.", "Despite an increase in international investment in anti-malarial drug development in the past decade only a few novel drugs have been developed [12-19]. Ferroquine is one of the few novel anti-malarials entering clinical development. A favourable safety and tolerability profile of ferroquine could be demonstrated when administered as a single dose up to 1,600 mg or as three consecutive daily doses up to 800 mg. No dose limiting clinical adverse event was observed at the investigated dose levels. 
All adverse events were mild to moderate, transient, and no serious drug-related adverse event was observed. These findings are in line with previous data from preclinical studies and the first-in-human study performed in healthy Caucasian subjects [unpublished data].\nGastrointestinal side effects and central nervous system disorders appear to be the most common clinical adverse events following treatment with ferroquine. These symptoms were, however, self-limiting and mild to moderate in intensity. Most importantly, the frequency of these events was similar in patients treated with ferroquine and those receiving placebo, and overall compared well to currently used anti-malarials. However, these side effects are well described for other 4-aminoquinoline anti-malarials and therefore need further investigation in the case of ferroquine [20-26]. Other less common adverse events, such as fatigue, blurred vision and rash, were mild, transient, and were single occurrences across all dose groups.\nOne limitation of the assessment of adverse events in this study may be the potential progression of asymptomatic infection to malaria. However, the inclusion of a control group with placebo treatment helped to minimize this factor in the absence of knowledge on the efficacy of ferroquine in humans. Similarly, the apparent dose-dependent increase in gastrointestinal adverse events may be due either to pharmacodynamic properties of the drug or to the high number of capsules that had to be swallowed (up to 16 capsules per participant). In summary, ferroquine may lead to gastrointestinal side effects comparable to other anti-malarials. However, the number of participants in this study does not permit a final judgement on a potential dose-dependent increase in gastrointestinal toxicity of ferroquine.\nAt eight weeks of follow-up, there was no evidence for cardiac, ocular, hepatic, haematologic, renal, dermatologic, or other end-organ adverse events. Although adverse events involving these and other organs have been reported with aminoquinolines previously [21,23-25], these findings have typically been observed in persons treated for prolonged periods of time (more than 5 years) at doses of 200-400 mg base or higher per day [21,27]. The absence of clinically significant detectable adverse events and the normal laboratory tests in all volunteers at eight weeks of follow-up are consistent with previous reports on the safety of short-term chloroquine treatment [20,21,26].\nAbnormal liver function tests were observed in the course of the study. These observations are consistent with data from toxicology studies on rats and dogs which showed small increases in alanine transaminase and aspartate aminotransferase (unpublished data). These changes were minor and not considered to be of toxicological significance. In our clinical trials the laboratory changes were not clinically significant and most of the abnormal test results normalized during the course of the study without further intervention. Importantly, no linear dose-dependent relationship between the administration of ferroquine and increases in liver parameters was observed in this study.\nBased on previous animal and human studies showing prolongation of the QT interval with chloroquine, special emphasis was placed on the assessment of cardiac effects [28-32]. Ferroquine itself has been shown to have in vitro effects on hERG channels. There was also a trend toward changes in QTc interval parameters in telemetered animals. 
The current findings show T wave morphology changes, most often associated with U waves, on the ECG readings of several patients in the repeated dose study following ferroquine administration. These findings, even though they are not clinically significant, highlight the potential for a cardiac effect of ferroquine in humans.", "In conclusion, ferroquine showed a favourable tolerability and safety profile up to 1,600 mg when administered as a single dose, and appears to be well tolerated up to 800 mg once daily for three days. Although gastrointestinal disorders (including nausea and vomiting) and nervous system disorders (including dizziness and headache) appear to be the prevalent clinical adverse events, the liver is the potential target organ and it should be carefully monitored via liver function tests. Ferroquine has the potential to prolong the QTc interval and affect T wave morphology. Subjects should undergo careful ECG monitoring while taking this investigational product.", "The authors declare no conflict of interest. This study was funded by the Department of Research of Sanofi-Aventis as part of the development programme of ferroquine. Part of this work was presented as a dissertation by GMN for his medical degree thesis at the Faculty of Medicine in Libreville, Gabon; MK, PGK and BL were co-directors of the thesis (Université des Sciences de la Santé (USS), Thesis n°524, 2006).", "GMN carried out the safety study and drafted the manuscript. CS, MDB, MAM, PBM, COS, BL carried out the study. MK, MR, PGK participated in the design of the study and performed the statistical analysis. DTM and BL conceived the study and participated in its design and coordination. All authors read and approved the final manuscript." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Preclinical studies", "Previous human experience", "Methods", "Study design, settings and population", "Interventions", "Randomization and blinding", "Study procedures", "Screening and drug administration", "Recording and reporting of adverse events and cardiac effects", "Outcomes and statistics", "Results", "Study population", "Frequency of adverse events", "Clinical laboratory evaluation", "Electrocardiogram evaluation", "Discussion", "Conclusions", "Competing interests", "Authors' contributions" ]
[ "Plasmodium falciparum has become resistant to several commonly used anti-malarial drugs. As a consequence new anti-malarial compounds are urgently needed. Despite a considerable increase in international efforts in anti-malarial drug development, there is at present a shortage of truly novel drugs which do not share the same mechanisms of action and drug resistance with those that are failing today. The entire reliance on artemisinin derivatives in the drug development over the past decade might be deceptive in the light of first reports of artemisinin resistant P. falciparum [1].\nFerroquine (SSR97193) - a ferrocenic derivative of chloroquine and thus a 4-aminoquinoline compound - is a truly novel anti-malarial candidate drug. Ferroquine demonstrated anti-malarial activity in vitro and in vivo [2-5]. In vitro, ferroquine has a potent activity on Plasmodium falciparum laboratory strains, whether they are sensitive or resistant to chloroquine. In vivo, four days of treatment with ferroquine efficiently cured mice infected with either chloroquine-resistant or sensitive strains of Plasmodium vinckei. Ferroquine is a racemic compound with both enantiomers showing an anti-malarial activity similar to that of the parent compound, in vitro and in vivo. Ferroquine is metabolized into one major metabolite (N-monodemethylated), also highly active in vitro. Tests on field isolates confirmed the activity of ferroquine against parasites that are multi-resistant to most marketed anti-malarial agents and did not show any significant cross sensitivity with major anti-malarials currently used [5-8].\n[SUBTITLE] Preclinical studies [SUBSECTION] Ferroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). In vivo, in anaesthetized but not in conscious dogs, ferroquine induced from blood concentrations ≥200 ng/mL major haemodynamic effects. In vivo, in conscious dogs, a ferroquine/artesunate combination (at 30 mg/kg and 100 mg/kg per os, respectively, i.e., ferroquine blood concentrations of 669 ng/mL) was devoid of effects on blood pressure, heart rate, PR interval, and QRS complex duration whereas QTcB and QTcF were slightly increased [artesunate alone had no significant effect on cardiovascular and electrocardiogram parameters]. However, no increases in QT, QTc were noted in the 13-week oral toxicity study in dogs at 0.2/25, 1/25 and 5/25 mg/kg of ferroquine/artesunate combination and at 5 mg/kg of ferroquine alone.\nFerroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. 
In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). In vivo, in anaesthetized but not in conscious dogs, ferroquine induced from blood concentrations ≥200 ng/mL major haemodynamic effects. In vivo, in conscious dogs, a ferroquine/artesunate combination (at 30 mg/kg and 100 mg/kg per os, respectively, i.e., ferroquine blood concentrations of 669 ng/mL) was devoid of effects on blood pressure, heart rate, PR interval, and QRS complex duration whereas QTcB and QTcF were slightly increased [artesunate alone had no significant effect on cardiovascular and electrocardiogram parameters]. However, no increases in QT, QTc were noted in the 13-week oral toxicity study in dogs at 0.2/25, 1/25 and 5/25 mg/kg of ferroquine/artesunate combination and at 5 mg/kg of ferroquine alone.\n[SUBTITLE] Previous human experience [SUBSECTION] First, in human studies of ferroquine were conducted in a double-blind, randomized, placebo controlled, and sequential dose escalation study in 42 healthy male subjects in Germany [unpublished data]. Single oral doses up to 800 mg were clinically well tolerated and no drug-related clinically relevant adverse event was observed. One serious adverse event occurred which was unrelated to the study drug and all other adverse events were mild to moderate and no relevant electrocardiogram abnormalities, such as QT prolongation or repolarisation abnormalities were described.\nThis is a report on two clinical phase I trials, which aimed to assess the clinical, and laboratory safety and tolerability profile of ferroquine. These studies were designed to evaluate ascending oral doses in young male volunteers with asymptomatic P. falciparum infection and to determine the maximum tolerated dose after oral administration of ferroquine.\nFirst, in human studies of ferroquine were conducted in a double-blind, randomized, placebo controlled, and sequential dose escalation study in 42 healthy male subjects in Germany [unpublished data]. Single oral doses up to 800 mg were clinically well tolerated and no drug-related clinically relevant adverse event was observed. One serious adverse event occurred which was unrelated to the study drug and all other adverse events were mild to moderate and no relevant electrocardiogram abnormalities, such as QT prolongation or repolarisation abnormalities were described.\nThis is a report on two clinical phase I trials, which aimed to assess the clinical, and laboratory safety and tolerability profile of ferroquine. These studies were designed to evaluate ascending oral doses in young male volunteers with asymptomatic P. falciparum infection and to determine the maximum tolerated dose after oral administration of ferroquine.", "Ferroquine was tested in preclinical studies prior to its use in humans, alone or in combination with artesunate, showing it to be devoid of relevant adverse effects on central nervous system, respiratory, renal, and gastrointestinal functions. The only notable effects of ferroquine alone, or in combination with artesunate, were observed on cardiovascular function. In vitro ferroquine, at a high concentration, was shown to affect the action potential pattern as a result of human ether-à-go-go related gene (hERG) channel inhibition (inhibitory concentration decreasing a response by 50% (IC50) = 907 ng/mL). 
"[SUBTITLE] Study design, settings and population [SUBSECTION] Both clinical trials were designed as phase I, randomized, double-blind, placebo-controlled, sequential dose escalation studies. The first study, TDU5967 (\"Single Dose Study\"), was designed as a dose escalation trial of a single oral dose, whereas the second study, TDR5969 (\"3 Day Repeated Dose Study\"), aimed to evaluate a three-day treatment course.\nClinical work was carried out from April 2005 to June 2006 at the Medical Research Unit of the Albert Schweitzer Hospital in Lambaréné, Gabon, a semi-urban town of about 30,000 inhabitants where malaria transmission is perennial with moderate seasonal variation [9,10].\nMale volunteers aged between 18 and 45 years residing in Lambaréné and its surrounding areas were invited to participate. Individuals who had provided written informed consent and harboured asymptomatic P. falciparum infection were enrolled into the study if they fulfilled the following criteria: (1) weight between 50 and 90 kg, (2) BMI of 18 to 28 kg/m2, (3) normal vital signs after 10 minutes of rest in the supine position, i.e. systolic blood pressure between 95 and 140 mmHg, diastolic blood pressure between 50 and 90 mmHg, and heart rate between 45 and 90 beats per minute, (4) normal 12-lead automatic ECG, and (5) no fever or history of fever in the last 72 hours. Exclusion criteria were (1) presence of any clinical signs of malaria, (2) previous anti-malarial treatment, (3) significant co-morbidities or other conditions impeding compliance with the study, (4) presence or history of drug allergy, and (5) evidence of hepatitis B, C, or HIV infection (HBs antigen, anti-HCV antibodies, anti-HIV1&2 antibodies).
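The numeric entry criteria above translate directly into a simple screening check. The following minimal Python sketch encodes the stated ranges (weight 50-90 kg, BMI 18-28 kg/m2, systolic 95-140 mmHg, diastolic 50-90 mmHg, heart rate 45-90 bpm); it is only an illustration of the criteria, not the study's actual screening software, and the function and parameter names are assumptions.

def meets_numeric_entry_criteria(weight_kg, bmi, sbp_mmhg, dbp_mmhg, hr_bpm):
    # Returns True when a volunteer falls inside all of the stated screening ranges.
    return (50 <= weight_kg <= 90 and
            18 <= bmi <= 28 and
            95 <= sbp_mmhg <= 140 and
            50 <= dbp_mmhg <= 90 and
            45 <= hr_bpm <= 90)

# Example: a 70 kg volunteer with BMI 23, blood pressure 120/70 mmHg and heart rate 60 bpm passes.
print(meets_numeric_entry_criteria(70, 23, 120, 70, 60))  # True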
The studies were granted ethical clearance by the Ethics Committee of the International Foundation of the Albert Schweitzer Hospital. The study protocols complied with the recommendations of the 18th World Health Congress (Helsinki, 1964) and all applicable amendments, as well as with the laws, regulations, and applicable guidelines of Gabon, the country where the studies were conducted, and with the International Conference on Harmonisation Good Clinical Practice guidelines. Informed consent was obtained prior to the conduct of any study-related procedures.\n[SUBTITLE] Interventions [SUBSECTION] [SUBTITLE] Randomization and blinding [SUBSECTION] Study treatments were prepared by dose level for each patient according to the study randomization list. The following doses of ferroquine were to be explored: 400 mg, 800 mg, 1,200 mg, 1,400 mg, and 1,600 mg for the single dose study, and 400 mg, 600 mg, 800 mg, and 1,000 mg for the three-day repeated dose study.\nTreatment allocation was performed in sequential groups of eight participants per dose group. Six participants were to receive ferroquine and two participants were to receive placebo. All ferroquine dose strengths and placebo capsules were identically matched. The first participants were assigned to the lowest dose level of the study.\nThe treatment allocation was double-blinded and followed a computer-generated randomization list. The allocation code was concealed at the site, and procedures were put in place to allow code breaking in case of urgent clinical need. The decision for dose escalation was based on a thorough review of clinical and laboratory data, including ECG and pharmacokinetic data, from the preceding dose level.
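The 6:2 active-to-placebo split within each sequential eight-participant dose cohort can be pictured with the short, purely illustrative Python sketch below. The real allocation used a pre-generated, concealed randomization list, so this only mirrors the scheme described above; the function name and the seed argument are assumptions added for reproducibility of the example.

import random

def allocate_cohort(seed=None):
    # One eight-participant cohort at a given dose level: six ferroquine, two placebo, in random order.
    rng = random.Random(seed)
    arms = ["ferroquine"] * 6 + ["placebo"] * 2
    rng.shuffle(arms)
    return arms

# Example: blinded treatment sequence for one cohort.
print(allocate_cohort(seed=1))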
[SUBTITLE] Study procedures [SUBSECTION] [SUBTITLE] Screening and drug administration [SUBSECTION] Recruitment of volunteers started with a pre-screening visit to determine the P. falciparum infection status by blood smear, Rapid Test and PCR [11]. If infection was confirmed by at least one of these three laboratory tests, the volunteers were invited to undergo the screening procedure, including a complete physical examination, haematology and biochemistry analysis, urinalysis, urine drug screen, and viral serology (HIV, HBsAg, HCV). Eligible volunteers were admitted as inpatients to the Medical Research Unit, and study drugs were administered orally with noncarbonated water under fasted conditions. Treatment duration was one day in the single dose study and three days in the multiple dose study. Volunteers were discharged from the hospital two days after the last drug intake and were followed up for eight weeks in both studies.\n[SUBTITLE] Recording and reporting of adverse events and cardiac effects [SUBSECTION] Adverse events were defined as any clinical change from baseline or from the previous visit, spontaneously reported by the patient or observed by the investigator. All adverse events, regardless of seriousness or relationship to the investigational product, were recorded. Adverse events were coded according to the Medical Dictionary for Regulatory Activities (MedDRA, version 8.1).\nBecause of the very long half-life of the product (around 20 days), the rule applied for the emergence of adverse events was five half-lives after administration; thus, an adverse event was considered treatment-emergent until the end-of-study visit. Abnormalities of vital signs, laboratory parameters or ECG readings were recorded as adverse events only if they were medically relevant, i.e. symptomatic, requiring corrective treatment, leading to discontinuation and/or fulfilling a seriousness criterion.\nA standard 12-lead ECG was performed hourly up to six hours post-dosing, two-hourly up to 12 hours post-dose, and weekly until the end of follow-up. ECG measures were obtained by measuring three consecutive QRS complexes and averaging the values. Heart rate and QTc (Bazett and Fridericia) were derived from the mean values of the measured parameters. In addition, a 24-hour ECG was recorded for five days in the repeated dose study to detect cardiac rhythm abnormalities.\n[SUBTITLE] Outcomes and statistics [SUBSECTION] Primary outcomes (endpoints) for these randomized clinical trials were defined as the clinical and laboratory adverse event profile and the effect of ferroquine on the cardiovascular system.\nSample size for these studies was based upon empirical considerations from clinical phase I studies, and no formal sample size calculation was performed. Safety analysis was based on participants who were administered at least one dose of study drug (exposed population), including participants who withdrew consent prematurely. If a participant had received the study drug and prematurely stopped study participation, he was replaced unless follow-up data were available up to Day 15.",
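For reference, the Bazett (QTcB) and Fridericia (QTcF) corrections named above are the standard heart-rate corrections of the measured QT interval. The report does not spell the formulas out, so they are given here in their usual form as an assumption of the conventional definitions (QT in milliseconds, the RR interval in seconds, RR derived from heart rate):

QTc_B = \frac{QT}{\sqrt{RR}}, \qquad QTc_F = \frac{QT}{RR^{1/3}}, \qquad RR = \frac{60}{HR}\ \text{(HR in beats per minute)}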
"[SUBTITLE] Study population [SUBSECTION] Out of a total of 498 volunteers pre-screened, 158 persons with P. falciparum infections were further screened for eligibility. Eighty-four were excluded because of abnormal chemistry or haematology results, abnormal ECGs, or other health problems. Seventeen volunteers could not be enrolled for logistical reasons, and a further eight volunteers were not enrolled because the predefined number of participants for each of the two studies had already been reached. Recruitment and participant flow are depicted in Figure 1.\nFlow of participants.\nA total of 66 asymptomatic male volunteers infected with P. falciparum were enrolled, randomized, and treated.
In the single dose study, a total of 30 volunteers were exposed to single oral doses of ferroquine over a range from 400 mg to 1,600 mg and 10 received placebo. In the three-day multiple dose study, 19 volunteers were treated for three days with daily doses from 400 mg to 1,000 mg and seven received placebo. Only one participant was administered the 1,000 mg dose. Enrolment was discontinued at the 1,000 mg dose because of morphologic changes in T waves and a moderate QT prolongation observed in patients treated with 800 mg and 1,000 mg. Seven participants stopped the study prematurely for reasons unrelated to adverse events, three in the single dose study and four in the three-day repeated dose study (Figure 1). Baseline characteristics of study participants are provided in Table 1 and Table 2.\nDemographic and clinical characteristics of participants at baseline in the single dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc: corrected QT interval (Bazett, Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\nDemographic and clinical characteristics of participants at baseline in the multiple dose study\nNOTE: BMI: Body Mass Index; BSA: Body Surface Area; HR: Heart Rate; QTc: corrected QT interval (Bazett, Fridericia); N: Count of patients; SD: standard deviation; Min: minimum; Max: maximum\n[SUBTITLE] Frequency of adverse events [SUBSECTION] Details on the frequency of adverse events by dose group are depicted in Table 3 and Table 4. In the single, ascending dose study, most patients in the placebo group (9/10) presented at least one adverse event during the study, versus 3/6 patients in the 400 mg dose group, 1/6 in the 800 mg dose group, 6/6 in the 1,200 mg and 1,400 mg dose groups, and 4/6 patients in the 1,600 mg dose group.\nFrequency of the most common adverse events during the single dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment-emergent adverse event\nFrequency of the most common adverse events during the multiple dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment-emergent adverse event\nIn the three-day repeated, ascending dose study, six patients out of seven in the placebo group experienced at least one adverse event during the trial, versus all of the subjects taking ferroquine (19/19). The adverse events reported were mainly gastrointestinal disorders - abdominal pain (16), diarrhoea (5), nausea (13), vomiting (9) and toothache (6) - and nervous system disorders - headache (11) and dizziness (5) - together with events that appeared during the follow-up period in the musculoskeletal and connective tissue disorders (9), such as arthralgia and back pain, and in the injury, poisoning and procedural complications (10) (injury, post-traumatic pain, thermal burn).\nOther symptoms such as mild, transient eye disorders (blurred vision, floating objects), mild skin rash (dermatitis or pruritus at ECG electrode application sites), infections and infestations, and fatigue were infrequent and were single occurrences across all dose groups in either study. All adverse events were considered mild to moderate in intensity and were transient. There were no deaths, serious adverse events, or adverse events leading to withdrawal reported during the trials.\nBecause of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of adverse events between dose groups at the individual dose levels.
[SUBTITLE] Clinical laboratory evaluation [SUBSECTION] The clinical laboratory evaluation was performed through clinical chemistry and haematology tests on samples taken at different time points of the scheduled visits until the end-of-study visit (60 days). The majority of the potentially clinically significant clinical chemistry abnormalities observed in treated patients concerned elevations of creatine phosphokinase (≥3 ULN, normal range 25-90 IU). In most cases, creatine phosphokinase was elevated at baseline and was related to intense farming activities prior to inclusion (Table 5 and Table 6).\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the single dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the multiple dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nIn the single dose study, five patients out of 40 showed an alanine transaminase increase >1 ULN (defined as 45 IU/mL) at some time point of the study, with a possible association with changes in other liver function parameters. Among them were four patients treated with ferroquine and one patient treated with placebo. These changes were asymptomatic and were therefore not reported as adverse events. One patient under placebo experienced an increased level of alanine transaminase, with a maximum value of 59 IU on Day 21. The increase was associated with a parallel and moderate increase of total bilirubin (19 μmol/L, normal range <17 μmol/L) and a moderate increase of aspartate transaminase and gamma glutamyl transferase. In the 800 mg dose group, one patient had an increased level of alanine transaminase with a maximum value of 148 IU on Day 15, with associated increases in total bilirubin (24 μmol/L), aspartate transaminase, alkaline phosphatase, and gamma glutamyl transferase. In the 1,200 mg dose group, one patient had an increased level of alanine transaminase with a maximum value of 52 IU observed on Day 7, whereas alkaline phosphatase and total bilirubin (5 μmol/L) remained within normal ranges. One patient treated with 1,400 mg had an increased level of alanine transaminase with a maximum value of 47 IU on Day 15, associated with parallel changes in other liver function parameters, with the exception of the bilirubin level (8 μmol/L). Finally, one patient in the 1,600 mg dose group experienced an increase in alanine transaminase values with a maximum value of 48 IU on Day 2, with a parallel increase in total bilirubin (25 μmol/L) on Day 15. The respective parameters of this patient were already slightly elevated at baseline (Day -1).\nIn the three-day repeated dose study, elevated alanine transaminases (>2 ULN) were observed in one placebo-treated patient and in four patients in the ferroquine groups. One patient in the 400 mg group (normal baseline, 0.38 ULN) had an alanine transaminase >10 ULN starting at Day 12, with a maximum magnitude of 16.55 ULN, which was associated with abdominal pain and headache. Over the course of the following week, alkaline phosphatase and bilirubin increased and peaked at about 2 ULN and 1.7 ULN, respectively. Infectious causes were ruled out. However, the patient reported intake of an herbal tea against the abdominal pain, which might have contributed to the enzyme abnormalities. Alanine transaminases returned to normal by Day 37 (0.71 ULN). A patient in the 600 mg group, who had a slightly elevated alanine transaminase value at screening (1.22 ULN) but a value within the normal range at baseline (0.91 ULN), again presented an elevated value on Day 3 (1.71 ULN) and reached the potentially clinically significant abnormality level (2.18 ULN) on Day 5, subsequently followed by a decrease to the normal range by Day 12. In this same 600 mg dose group, another patient with a normal baseline value of 0.42 ULN presented elevated alanine transaminase values from Day 3 that reached the potentially clinically significant abnormality level on Day 23 (2.56 ULN). His values were still elevated at the end-of-study visit (1.78 ULN). Finally, one patient in the 800 mg dose group, with a normal baseline value of 0.38 ULN, had a peak alanine transaminase value of 6.37 ULN and a bilirubin value of 1.48 ULN at Day 12 without clinical symptoms. Alkaline phosphatase peaked at 1.77 ULN on Day 17. Liver parameter values decreased toward the normal range by the end-of-study visit. The most frequently observed potentially clinically significant haematology abnormalities concerned eosinophils and were likely related to concomitant parasitic infections existing at baseline.
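The single dose study reports absolute transaminase values in IU, whereas the repeated dose study expresses them as multiples of the upper limit of normality (ULN). With the stated ALT upper limit of 45 IU/mL, the two conventions relate as, for example:

\frac{148\ \text{IU}}{45\ \text{IU (ULN)}} \approx 3.3\ \text{ULN}

so the largest elevation reported in the single dose study (148 IU on Day 15) corresponds to roughly 3.3 times the upper limit of normality.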
[SUBTITLE] Electrocardiogram evaluation [SUBSECTION] In the single dose study, no QTcF values >450 ms and no delta QTcF >60 ms were observed in any treatment group. One patient in the 1,200 mg dose group experienced a delta QTcF value between 30 and 60 ms, while a moderate mean increase from baseline of +8.2 ms was observed for the 1,600 mg dose level, coinciding with the time of maximal plasma concentrations. In the three-day repeated dose study, continuous monitoring did not show any clinically significant arrhythmic episode developing after drug administration in any patient. A deviation in the repolarization pattern (inversion of the T wave), visible from T5 upwards and lasting for several hours, was observed in four patients of the 800 mg dose group, especially in one patient who showed a clear and long-lasting T-wave inversion on each day of investigational product administration. According to previous experience and the known influence of aminoquinolines on ECGs, these episodes were considered to be related to the investigational product but remained without any clinical significance. In the 800 mg dose group, there were 12.6 ms, 16.8 ms, and 19.2 ms mean increases in QTcF intervals on Day 3 at 6, 8, and 12 hours, respectively. Additionally, in the 800 mg dose group, there was a 19.6 ± 6.8 ms (mean ± SEM) maximum mean increase in QTcF intervals, compared with maximum mean increases of 9.9 ± 5.5, 5.2 ± 3.6, and 16.5 ± 6.8 ms for the placebo, 400 mg, and 600 mg groups, respectively. There were no delta QTcF intervals >60 ms and no QTcF intervals >450 ms.",
In the single, ascending dose study, most patients in the placebo group (9/10) presented at least one adverse event during the study versus 3/6 patients in the 400 m dose group, 1/6 in the 800 mg dose group, 6/6 in the 1,200 mg and 1,400 mg dose groups, and 4/6 patients in the 1,600 mg dose group.\nFrequency of the most common adverse events during the single dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nFrequency of the most common adverse events during the multiple dose study\nNOTE: N (%): count of patients (percentage per group); TEAE: treatment emergent adverse event\nIn the three-day repeated, ascending dose study, six patients out of seven in the placebo group experienced at least one adverse event during the trial versus all of the subjects taking ferroquine (19/19). The adverse events reported were mainly in the gastrointestinal tract, abdominal pain (16), diarrhoea (5), nausea (13), vomiting (9) and toothache (6), and in the nervous system disorders, headache (11) and dizziness (5), but also those that appeared during the follow-up period, in the musculoskeletal and connective tissue disorders (9), such as arthralgia and back pain, the injury, poisoning and procedural complications (10) (injury, post-traumatic pain, thermal burn).\nOther symptoms such as mild, transient eye disorders (blurred vision, floating objects), mild skin rash (ECGs electrode application sites dermatitis or pruritis), infections and infestations, and fatigue were infrequent and were single occurrences across all dose groups in either study. All adverse events were considered mild to moderate in intensity and were transient. There were no deaths, serious adverse events, or adverse events leading to withdrawal reported during the trials.\nBecause of the small numbers of volunteers studied at each dose, no significant conclusions can be drawn from comparisons of adverse events between dose groups at the individual dose levels.", "The clinical laboratory evaluation was performed through clinical chemistry and haematology tests of samples taken at different time points of scheduled visits until the end-of-study visit (60 days). The majority of the clinical chemistry potentially clinically significant abnormalities observed on patients treated with concerned elevations of creatine phosphokinase (≥3 ULN, normal range 25-90 IU). In most cases, creatine phosphokinase was elevated at baseline and was related to intense farming activities prior to inclusion (Table 5 and Table 6).\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the single dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nSummary of participants with PCSA (parameters with PCSA definitions) for clinical chemistry and haematology parameters during the multiple dose study\nNOTE: N: count of patients; PCSA: Potentially Clinically Significant Abnormality (version 2.2, December 2004); LLN/ULN: Lower/Upper Limit of Normality. For hemoglobin, baseline values <LLN or >ULN are counted normal; for eosinophils, values <LLN are counted as normal. 
For ALT, AST, ALP, Total Bilirubin, GGT, CPK, baseline values <LLN are counted normal\nIn the single dose study, five patients out of 40 showed an alanine transaminase increase >1 ULN (defined as 45 IU/ml) at any time point of the study with a possible association to changes of other liver functions. Among them were four patients treated with ferroquine and one patient treated with placebo. These changes were asymptomatic and, therefore, were not reported as adverse events. One patient under placebo experienced an increased level of alanine transaminase, with a maximum value of 59 IU on Day 21. The increase was associated to a parallel and moderate increase of total bilirubin (19 μmol/L, normal range <17 μmol/L) and a moderate increase of aspartate transaminase and gamma glutamyl transferase. In the 800 mg dose-group, one patient had an increased level of alanine transaminase with a maximum value of 148 IU on Day 15, with associated increases in total bilirubin (24 μmol/L), aspartate transaminases, alkaline phosphatases, and gamma glutamyl transferase. In the 1,200 mg dose-group, at one patient an increased level of alanine transaminase with a maximum value of 52 IU was observed on Day 7, whereas, alkaline phosphatase and total bilirubin (5 μmol/L) remained within normal ranges. At one patient treated with 1,400 mg, an increased level of alanine transaminase with a maximum value of 47 IU was observed on Day 15, associated to parallel changes in other liver function parameters at the exception of the bilirubin level (8 μmol/L). Finally, one patient was one of the 1,600 mg dose-group experienced an increase of alanine transaminase values with a maximum value of 48 IU on Day 2 with parallel increase in total bilirubin (25 μmol/L) on Day 15. The respective parameters of this patient were already slightly elevated at baseline (Day -1).\nIn the three-day repeated dose study, elevated alanine transaminases (>2 ULN) was observed in one placebo-treated patient and in four patients in the ferroquine groups. One patient in the 400 mg group (normal baseline 0.38 ULN) had an alanine transaminase >10 ULN starting at Day 12 with a maximum magnitude of 16.55 ULN, which was associated to abdominal pain and headache. Over the course of the following week, alkaline phosphatases and bilirubin increased and picked at about 2 ULN and 1.7 ULN, respectively. Infectious causes were ruled out. However, the patient reported intake of an herbal tea against abdominal pain, which might have contributed to the enzyme abnormalities. Alanine transaminases returned to normal by Day 37 (0.71 ULN). A patient of the 600 mg group, who had a slightly elevated alanine transaminase value at screening (1.22 ULN), but within normal ranges at baseline (0.91 ULN), presented again an elevated value on Day 3 (1.71 ULN) and reached the potentially clinically significant abnormality level (2.18 ULN) on Day 5, subsequently followed by a decrease to normal rangeby Day 12. In this same 600 mg dose group, another patient with normal baseline value of 0.42 ULN, presented an elevated alanine transaminase values on Day 3 that reached the potentially clinically significant abnormality level on Day 23 (2.56 ULN). His values were still elevated at the end-of-study visit (1.78 ULN). Finally, one patient of the 800 mg dose group, with normal baseline value of 0.38 ULN, had a peak alanine transaminases value of 6.37 ULN and a bilirubin value of 1.48 ULN at Day 12 without clinical symptoms. Alkaline phosphatases peaked at 1.77 ULN on Day 17. 
Liver parameter values decreased toward the normal range by the end-of-study visit. The most frequently observed haematology, potencially clinically significant abnormalities were related to eosinophils that were likely to be related to concomitant parasitic infections existing at baseline.", "In the single dose study, no QTcF values >450 ms, or delta QTcF >60 ms were observed in any treatment group. One patient in the 1,200 mg dose group experienced a delta QTcF value between 30 and 60 ms, while a moderate change in mean increases from baseline of + 8.2 ms was observed for the 1,600 mg dose level coinciding with the time of maximal plasma concentrations. In the three-day repeated dose study, continuous monitoring did not show any clinically significant arrhythmic episode developing after drug administration in any patient. A deviation in the repolarization pattern (inversion of T wave), visible from T5 upwards and lasting for several hours, was observed in four patients of the 800 mg dose group, especially in one patient who showed a clear and long-lasting T inversion each day of investigational product administration. According to previous experiences and the known influence of aminoquinolines on ECGs, these episodes were considered to be related to the investigational product, but remained without any clinical significance. In the 800 mg dose group, there were 12.6 ms, 16.8 ms, and 19.2 ms mean increases in QTcF intervals on Day 3 at 6, 8, and 12 hours, respectively. Additionally, in the 800 mg dose group, there was a 19.6 ± 6.8 ms (mean ± SEM) maximum mean increase in QTcF intervals, compared to maximum mean increases of 9.9 ± 5.5, 5.2 ± 3.6, and 16.5 ± 6.8 ms for the placebo, 400 mg, and 600 mg groups, respectively. There were no delta QTcF intervals >60 ms and no QTcF intervals >450 ms.", "Despite an increase in international investment in anti-malarial drug development in the past decade only a few novel drugs have been developed [12-19]. Ferroquine is one of the few novel anti-malarials entering clinical development. A favourable safety and tolerability profile of ferroquine could be demonstrated when administered as a single dose up to 1,600 mg or as three consecutive daily doses up to 800 mg. No dose limiting clinical adverse event was observed at the investigated dose levels. All adverse events were mild to moderate, transient, and no serious drug related adverse event was observed. These findings are in line with previous data of preclinical studies and the first in human study performed in healthy Caucasian subjects [unpublished data].\nGastrointestinal side effects and central nervous disorders appear to be the most common clinical adverse events following treatment with ferroquine. These symptoms were however self-limiting and mild to moderate in intensity. Most important the frequency of these events was similar in patients treated with ferroquine and those receiving placebo and overall compared well to currently used anti-malarials. However, these side effects are well described for other 4-aminoquinoline anti-malarials and need, therefore, further investigation in the case of ferroquine [20-26]. Other less common adverse events, such as fatigue, blurred vision and rash were mild, transient, and were single occurrences across all dose groups.\nOne limitation of the assessment of adverse events in this study may be the potential progression of asymptomatic infection to malaria. 
However, the inclusion of a control group with placebo treatment helped to minimize this factor in the absence of knowledge on the efficacy of ferroquine in humans. Similarly, the apparent dose-dependent increase in gastrointestinal adverse events may be due either to pharmacodynamic properties of the drug or to the high number of capsules that had to be swallowed (up to 16 capsules per participant). In summary, ferroquine may lead to gastrointestinal side effects comparable to other anti-malarials. However, the number of participants in this study does not permit a final judgement on a potential dose-dependent increase in the gastrointestinal toxicity of ferroquine.\nAt eight weeks follow-up, there was no evidence for cardiac, ocular, hepatic, haematologic, renal, dermatologic, or other end-organ adverse events. Although adverse events involving these and other organs have been reported with aminoquinolines previously [21,23-25], these findings have typically been observed in persons treated for prolonged periods of time (more than 5 years) at doses of 200-400 mg base or higher per day [21,27]. The absence of clinically significant detectable adverse events and the normal laboratory tests in all volunteers at eight weeks of follow-up are consistent with previous reports on the safety of short-term chloroquine treatment [20,21,26].\nA notable finding was the occurrence of abnormal liver function tests in the course of the study. These observations are consistent with data from toxicology studies on rats and dogs which showed small increases of alanine transaminase and aspartate aminotransferase (unpublished data). These changes were minor and not considered to be of toxicological significance. In our clinical trials the laboratory changes were not clinically significant and most of the abnormal test results normalized during the course of the study without further intervention. Importantly, no linear dose-dependent relationship between the administration of ferroquine and the increase of liver parameters was observed in this study.\nBased on previous animal and human studies showing prolongation of the QT interval for chloroquine, a special emphasis was laid on the assessment of cardiac effects [28-32]. Ferroquine itself has been shown to have in vitro effects on hERG channels. There was also a trend toward changes in QTc interval parameters in telemetered animals. The current findings show several patients in the repeated dose study with T wave morphology changes on their ECG readings following ferroquine administration, most of the time associated with U waves. These findings, even though they are not clinically significant, highlight the potential for a cardiac effect of ferroquine in humans.", "In conclusion, ferroquine showed a favourable tolerability and safety profile up to 1,600 mg when administered as a single dose, and appears to be well tolerated up to 800 mg once daily for three days. Although gastrointestinal disorders (including nausea and vomiting) and nervous system disorders (including dizziness and headache) appear to be the prevalent clinical adverse events, the liver is the potential target organ and it should be carefully monitored via liver function tests. Ferroquine has the potential to prolong the QTc interval and affect T wave morphology. Subjects should undergo careful ECG monitoring while taking this investigational product.", "The authors declare no conflict of interest. 
This study was funded by the Department of Research of Sanofi-Aventis as part of the development programme of ferroquine. Part of this work was presented as a dissertation by GMN for his medical degree thesis at the Faculty of Medicine in Libreville, Gabon; MK, PGK and BL were co-directors of the thesis (Université des Sciences de la Santé (USS), Thesis n°524, 2006).", "GMN carried out the safety study and drafted the manuscript. CS, MDB, MAM, PBM, COS, BL carried out the study. MK, MR, PGK participated in the design of the study and performed the statistical analysis. DTM and BL conceived of the study, and participated in its design and coordination. All authors read and approved the final manuscript." ]
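Editorial reference note (not part of the source record): the QTcF and QTcB values reported throughout the ferroquine record above are the heart-rate-corrected QT intervals named in its baseline-table notes (Fridericia and Bazett). For readers unfamiliar with these corrections, the standard formulas are

\[ \mathrm{QTc_B} = \frac{QT}{\sqrt{RR}}, \qquad \mathrm{QTc_F} = \frac{QT}{\sqrt[3]{RR}} \]

with QT and the RR interval expressed in seconds. As a worked example, a measured QT of 400 ms at 75 bpm (RR = 0.8 s) gives QTcF ≈ 400 / 0.8^(1/3) ≈ 431 ms and QTcB ≈ 400 / √0.8 ≈ 447 ms, while at 60 bpm (RR = 1 s) both corrections leave the measured QT unchanged.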
[ null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Maternal care practices among the ultra poor households in rural Bangladesh: a qualitative exploratory study.
21362164
Although many studies have been carried out to learn about maternal care practices in rural areas and urban slums of Bangladesh, none have focused on ultra poor women. Understanding the context in which women would be willing to accept new practices is essential for developing realistic and relevant behaviour change messages. This study sought to fill in this knowledge gap by exploring maternal care practices among women who participated in a grant-based livelihood programme for the ultra poor. This is expected to assist in designing the programme's health education messages in an effort to reduce maternal morbidity, improve survival, and move towards achieving the UN Millennium Development Goal 5.
BACKGROUND
A qualitative method was used to collect data on maternal care practices during pregnancy, delivery, and the post-partum period from women in ultra poor households. The sample included both currently pregnant women who had had a previous childbirth and lactating women participating in a grant-based livelihood development programme. Rangpur and Kurigram districts in northern Bangladesh were selected for data collection.
METHODS
Women usually considered pregnancy a normal event unless complications arose; most of them refrained from seeking antenatal care (ANC) except for confirmation of pregnancy, and no prior preparation for childbirth was made. Financial constraints, coupled with traditional beliefs and rituals, delayed care-seeking in cases where complications arose. Delivery usually took place on the floor in the squatting posture, and the attendants did not always follow antiseptic measures such as washing hands before conducting the delivery. Following the birth of the baby, attention was mainly focused on the expulsion of the placenta, and various manoeuvres, sometimes harmful, were adopted to hasten the process. There were multiple food-related taboos and restrictions, which decreased the consumption of protein during pregnancy and the post-partum period. Women usually did not go to healthcare providers for illnesses in the post-partum period.
RESULTS
This study shows that cultural beliefs and norms have a strong influence on maternal care practices among the ultra poor households and can override the beneficial economic effects of the livelihood support intervention. Some of these practices, often shaped by various taboos and beliefs, may become harmful at times. Health behaviour education in this livelihood support programme can be carefully tailored to local cultural beliefs to achieve better maternal outcomes.
CONCLUSION
[ "Adult", "Attitude of Health Personnel", "Bangladesh", "Diet", "Dietary Supplements", "Female", "Health Knowledge, Attitudes, Practice", "Humans", "Iron", "Labor, Obstetric", "Maternal Behavior", "Patient Acceptance of Health Care", "Postnatal Care", "Poverty", "Pregnancy", "Prenatal Care", "Qualitative Research", "Rural Population", "Young Adult" ]
3056829
null
null
Methods
A qualitative exploratory framework was used to collect data on maternal care practices from women in ultra poor households. The sample included both currently pregnant women who had had at least one previous childbirth and lactating mothers living in the CFPR-II programme areas [7]. Rangpur and Kurigram districts in the northern part of Bangladesh were selected for data collection. Later, eight administrative offices of the CFPR-II programme, where the CFPR-II baseline survey had been conducted, were selected from the above two districts [7]. To identify the respondents, two anthropologists, with the help of the researchers, visited each office and listed the names of the CFPR-beneficiary households, the address, and the availability of eligible respondents using a household listing form. From each of these lists, 2-3 households were randomly selected for conducting the interviews. Thus, 20 women (12 lactating mothers and 8 currently pregnant women) were selected for the study. The study passed through the institutional review process at BRAC Research and Evaluation Division (by the Internal Review and Publication Committee) for ethical approval. Ethical clearance was also obtained from the Bangladesh Medical Research Council (BMRC) for the entire five-year research project to evaluate the CFPR-II programme. Data were collected after obtaining informed verbal consent from the respondents, who were hesitant about signing any document. The written consent form was read and explained to the respondent. Only when the investigator was satisfied that the respondent understood it, including its implications, and agreed to participate was she selected for the in-depth interview. Anonymity of the respondents was maintained at all stages of data analysis. Data were collected through in-depth interviews in Bangla following a flexible interview guide during October-December 2007. The interviews were conducted by two experienced anthropologists under the supervision of the researchers. The in-depth interview guide comprised three sections: the first section included socioeconomic information and the cultural background of the extreme poor households; the second section included maternal care practices and rituals, covering pregnancy care, delivery care, and postpartum care; and the last part included their beliefs regarding the care practices and recommendations. The interviews lasted between 60 and 90 minutes. Throughout the process, many loosely structured informal discussions took place, because, in a communal setting, it was difficult to conduct one-to-one interviews and discuss their individual perceptions of practices. All the interviews were recorded and transcribed on the same day. The transcripts were analyzed thematically. Transcripts were reviewed to develop a code list for the topics related to the research questions. Codes were applied manually by the interviewers. Text pertaining to the codes was organized in a matrix and translated into English. No software was used for the analysis.
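Purely as an illustration of the two-stage selection described above (listing eligible CFPR-beneficiary households per branch office, then randomly drawing 2-3 households from each list), a minimal sketch follows. It is not the study's own procedure or tooling, since selection and analysis were done manually without software, and the function and identifier names are hypothetical:

```python
import random

def select_respondents(branch_listings, per_branch=(2, 3), seed=None):
    """From each branch office's listing of eligible households, randomly
    draw between per_branch[0] and per_branch[1] households for interview."""
    rng = random.Random(seed)
    selected = []
    for office, households in branch_listings.items():
        k = min(rng.randint(*per_branch), len(households))
        selected.extend((office, hh) for hh in rng.sample(households, k))
    return selected

# Hypothetical example: 8 branch offices, each listing 10 eligible women.
listings = {f"office_{i}": [f"household_{i}_{j}" for j in range(10)] for i in range(1, 9)}
print(len(select_respondents(listings, seed=1)))  # roughly 16-24 selected households
```

With eight offices and 2-3 draws per list, this yields on the order of the 20 respondents reported in the study.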
null
null
null
null
[ "Background", "Results", "Respondent's profile", "Pregnancy care", "Antenatal care", "Social exclusion", "Nutrition during pregnancy", "Restrictions and mobility during pregnancy", "Support from husband", "Birth preparedness", "Delivery care", "Place of birth and attendance at delivery", "Practices to speed up the delivery of placenta", "Post-partum care", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Nearly one quarter to one third of Bangladesh's population lives under extreme poverty [1]. To improve the maternal health indicators it is necessary to develop interventions that meet the health needs of extremely poor women. These extremely poor households, henceforth ultra poor households, have few or no asset base, are highly vulnerable to any shock (e.g., natural disaster, disability of an income-earner or illnesses involving costly care), and mainly depend on seasonal wage-labour for survival. They are characterized by their inability to participate fully in social and economic activities which make them vulnerable to differential treatment of the society [2]. As a result, they are often excluded from government and even from the microcredit-based non-governmental poverty reduction programmes [3].\nTo reach the ultra poor in rural Bangladesh who could not benefit from conventional microcredit programmes, BRAC (an indigenous non-government development organization) implemented an integrated, grant-based programme (comprising asset transfer, skills training, enterprise support, weekly stipend and healthcare support to mitigate income-erosion effect of illnesses) during 2002-2006 called Challenging the Frontiers of Poverty Reduction Phase I (CFPR-I) [3]. Five inclusion criteria (dependent upon female domestic work or begging as income source; possessing ≤10 decimals of land; no male adult active member in the household; children of school going age engaged in paid work; and absence of productive assets in the household) and three exclusion criteria (no adult woman in the household who is able to work; participating in microfinance; and beneficiary of government/NGO development project) were used to identify the ultra poor households for the BRAC CFPR programme. Only those surveyed households who met at least three of the five inclusion criteria and none of the exclusion criteria, were finally selected as the beneficiaries of the CFPR programme. Thus the study population refers to the lowest wealth quintile in a community. A series of evaluations also showed positive impact from the programme including improved income, food security and nutritional status, and health knowledge and behaviour [4-6]. Drawing on the learning and experiences from the first phase (CFPR-I), the five-year second phase of this programme (CFPR-II) started in 2007 with an even greater scale and scope [7].\nBangladesh is one of the low- income countries where poorer groups use less health care and there exists a poor-rich inequality in maternity care and maternal mortality [2,8]. Apart from this poor-rich inequality, social and cultural beliefs and practices regarding motherhood and childrearing also have significant influence on maternal health [9-13]. As a result, these poorest-of-the-poor women suffer a greater burden of ill health including maternal health [3]. They are marginalized compared to national rural average in terms of accessing antenatal care (ANC) (37% for ultra poor women and 60% national rural women), institutional delivery (5% from lowest wealth quintile and 15% national rural average), receiving iron- supplementation during last pregnancy (53% for ultra poor and 55% national rural women), and use of contraceptive prevalence (63% for ultra poor and 80% ever-married women in Bangladesh) [7,14]. These maternal health and health care indicators are also important determinants of neonatal survival [15], which is an important contributing factor for infant survival. 
Different studies have been carried out to learn about maternal care practices in the rural areas [9-11,13,16,17] or urban-slums of Bangladesh [12,18], but none of them specifically focused on the marginal population such as the ultra poor households. Understanding the context in which women and their families would be willing to accept new practices, i.e., knowing what changes they would make under what conditions is essential for crafting realistic and relevant behaviour change messages. This study sought to fill in this knowledge gap by qualitative exploration of the existing maternal care practices during pregnancy, delivery and post-partum period among women of the ultra poor households who were included in the CFPR II programme (2007-2012). It also attempted to highlight the underlying reasons for doing such maternal care practices. This, in turn, is expected to inform the programme interventions which would hopefully improve maternal morbidity and survival, and achieve the UN Millennium Development Goal 5 (MDG 5).", "[SUBTITLE] Respondent's profile [SUBSECTION] The majority (15 out of 20 respondents) of the women were in their twenties, two were below 20 years of age while three were in their thirties. Most of the interviewees were Muslim. Non-farming labour, like rickshaw pulling, was one of the most common occupations of the household heads. Most of them lived in poor hygienic condition as revealed by the use of proper sanitation (with or without water seal latrine or pit latrine), water access, and overall household condition. At the time of data collection, all the respondents were in their initial phase of the CFPR-II programme.\nThe majority (15 out of 20 respondents) of the women were in their twenties, two were below 20 years of age while three were in their thirties. Most of the interviewees were Muslim. Non-farming labour, like rickshaw pulling, was one of the most common occupations of the household heads. Most of them lived in poor hygienic condition as revealed by the use of proper sanitation (with or without water seal latrine or pit latrine), water access, and overall household condition. At the time of data collection, all the respondents were in their initial phase of the CFPR-II programme.\n[SUBTITLE] Pregnancy care [SUBSECTION] Almost all the women reported about becoming aware of their pregnancy when they experienced amenorrhoea, nausea and vomiting, loss of appetite and weakness. Most of them could identify their pregnancy within the first 2-3 months. According to them, pregnancy identification and its subsequent care was seen as a normal event which did not require any additional medical intervention unless significant complications arose during this period. Two women were found to have had no menstruation history after delivery of the last baby till conception of the next one.\n[SUBTITLE] Antenatal care [SUBSECTION] Confirmation of pregnancy was considered by the women as the most important part of antenatal care and eight out of 20 women went to the nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in ante-natal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or have bad taste) and made the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. 
Monetary constraints, absence of knowledge about the need of services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.\nConfirmation of pregnancy was considered by the women as the most important part of antenatal care and eight out of 20 women went to the nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in ante-natal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or have bad taste) and made the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. Monetary constraints, absence of knowledge about the need of services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.\n[SUBTITLE] Social exclusion [SUBSECTION] The following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:\n\"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her.\" (A lactating mother of 27 years old)\nIn two cases, the women reported that the respective health worker came to their house to give birth control pills. However, neither of them came out of their respective houses. The use of harsh words and low tolerance level of the health workers discouraged the women to use the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.\n\"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 20.Which services are then considered to be free of charge?\" (A pregnant woman of 21 years old)\nThe following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:\n\"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her.\" (A lactating mother of 27 years old)\nIn two cases, the women reported that the respective health worker came to their house to give birth control pills. However, neither of them came out of their respective houses. The use of harsh words and low tolerance level of the health workers discouraged the women to use the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.\n\"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 20.Which services are then considered to be free of charge?\" (A pregnant woman of 21 years old)\n[SUBTITLE] Nutrition during pregnancy [SUBSECTION] Women reported that the reasons for decreasing consumption of food during pregnancy were mostly related to aversion to specific food, followed by lack of monetary power to purchase specific food that extremely poor households usually consumed (like rice, potato, and small fish). 
However, few women also reported increasing consumption of food during pregnancy. The reasons given for this varied from those given for decreasing consumption. The most frequently cited reason was 'feel like eating more.' 'Craving for a specific food 'was also cited as a reason for increased consumption of some food such as molasses-made drink, rice with green chilies, and milk. Two of the women mentioned that the increased food intake was directly related to improved health of the mother or the baby. This tended to be where husbands and other family members were helpful and better informed.\n\"I know that eating more food is necessary when there is a baby in womb. But I am poor, how can I afford it?\" (A pregnant woman of 21 years old)\nSome food, like ducks, pigeons, beef and Hilsha fish were considered as 'hot' and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their affordability, were also restricted during pregnancy. There were no restrictions reported in consuming fruits among the extreme poor households.\nWomen reported that the reasons for decreasing consumption of food during pregnancy were mostly related to aversion to specific food, followed by lack of monetary power to purchase specific food that extremely poor households usually consumed (like rice, potato, and small fish). However, few women also reported increasing consumption of food during pregnancy. The reasons given for this varied from those given for decreasing consumption. The most frequently cited reason was 'feel like eating more.' 'Craving for a specific food 'was also cited as a reason for increased consumption of some food such as molasses-made drink, rice with green chilies, and milk. Two of the women mentioned that the increased food intake was directly related to improved health of the mother or the baby. This tended to be where husbands and other family members were helpful and better informed.\n\"I know that eating more food is necessary when there is a baby in womb. But I am poor, how can I afford it?\" (A pregnant woman of 21 years old)\nSome food, like ducks, pigeons, beef and Hilsha fish were considered as 'hot' and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their affordability, were also restricted during pregnancy. There were no restrictions reported in consuming fruits among the extreme poor households.\n[SUBTITLE] Restrictions and mobility during pregnancy [SUBSECTION] It is generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women. If they did so, they tied up their hair and covered their heads with veils.\n\"Evil spirits could cause miscarriage of the fetus,that is why I did not go out in prayer time\" (A lactating mother of 31 years old)\nA few women reported their beliefs about carrying a piece of iron which would ensure protection. Matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. They reported (those who got eclipse during last pregnancy) that they had stayed inside the household, walked near the home or inside the home, but had never laid down on the bed during eclipses. 
They also reported certain restrictions during this period - like they did not eat or cook, cut, and twist anything, as they perceived that the child would be born with a cleft palate or with deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse. In any case, restrictions in movement had never been imposed by any health providers but rather from the elderly women of the family.\nIt is generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women. If they did so, they tied up their hair and covered their heads with veils.\n\"Evil spirits could cause miscarriage of the fetus,that is why I did not go out in prayer time\" (A lactating mother of 31 years old)\nA few women reported their beliefs about carrying a piece of iron which would ensure protection. Matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. They reported (those who got eclipse during last pregnancy) that they had stayed inside the household, walked near the home or inside the home, but had never laid down on the bed during eclipses. They also reported certain restrictions during this period - like they did not eat or cook, cut, and twist anything, as they perceived that the child would be born with a cleft palate or with deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse. In any case, restrictions in movement had never been imposed by any health providers but rather from the elderly women of the family.\n[SUBTITLE] Support from husband [SUBSECTION] Present study found that in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:\nAll that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father on which he has built a house to live in. He is physically disabled. He does not have any source of income other than the 2/3 kg of rice that he gets from begging door to door. From that amount, Mamtaz keeps whatever is required for the household and sells the remaining to buy other necessities such as salt, vegetables to run her family. Sometimes when her husband is unable to go for begging, Mamtaz would go to other people's houses for work during pregnancy. Her husband did not wish for her to work at other people's houses during her pregnancy and expressed that whatever he earned through begging was enough for them to sustain. But still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or for maintenance of floors in exchange of one or half a kg of rice. (A pregnant woman of 23 years old)\nWomen considered this attitude of their husbands as a positive attitude if the women were too weak to work or continue the usual household work. In three cases, women consulted with their husbands and jointly decided to stop their other child's schooling so that the child would rear the animals and the women could rest during their pregnancy. 
Overall, during pregnancy, women reported that husbands and other family members helped them in doing heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.\nPresent study found that in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:\nAll that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father on which he has built a house to live in. He is physically disabled. He does not have any source of income other than the 2/3 kg of rice that he gets from begging door to door. From that amount, Mamtaz keeps whatever is required for the household and sells the remaining to buy other necessities such as salt, vegetables to run her family. Sometimes when her husband is unable to go for begging, Mamtaz would go to other people's houses for work during pregnancy. Her husband did not wish for her to work at other people's houses during her pregnancy and expressed that whatever he earned through begging was enough for them to sustain. But still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or for maintenance of floors in exchange of one or half a kg of rice. (A pregnant woman of 23 years old)\nWomen considered this attitude of their husbands as a positive attitude if the women were too weak to work or continue the usual household work. In three cases, women consulted with their husbands and jointly decided to stop their other child's schooling so that the child would rear the animals and the women could rest during their pregnancy. Overall, during pregnancy, women reported that husbands and other family members helped them in doing heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.\n[SUBTITLE] Birth preparedness [SUBSECTION] Birth preparedness for this present study included selecting a skilled birth attendant, arranging delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging necessary money and transport for delivery. We found only one woman who had a birth plan (took into consideration all the outlined stages of the preparation) before delivery. Most women did not contact the birth attendant who is locally known as traditional birth attendants (TBAs or dais) in advance because they thought TBAs could make some jadutona (Black magic) in advance during pregnancy so that without their presence delivery would not occur and that there were greater chances to pay more for delivery.\n\"After I started having pain on the morning of the 3rd day, my husband went out to call the TBAs. TBAs was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her (A lactating mother of 19 years old)\nFew families, especially the husbands, could put aside some money for delivery purpose. One husband had financial constraints, so he saved 40 kg of rice. Twelve women said that they only collected some old clothes, which they had kept separately, but they had not stitched any new dresses or Kathas (local quilt covering) for the arrival of the new baby. 
The women believed that it was bad to buy new clothes or make too many plans in advance for the new arrival as it could bring bad luck. Moreover, they were not sure whether the coming child would survive or not. Money spent on her/him was considered to be unnecessary. Women assumed that transportation would be available either from a family member or from a neighbour when needed and, as such, did not plan for the transportation in advance.\nBirth preparedness for this present study included selecting a skilled birth attendant, arranging delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging necessary money and transport for delivery. We found only one woman who had a birth plan (took into consideration all the outlined stages of the preparation) before delivery. Most women did not contact the birth attendant who is locally known as traditional birth attendants (TBAs or dais) in advance because they thought TBAs could make some jadutona (Black magic) in advance during pregnancy so that without their presence delivery would not occur and that there were greater chances to pay more for delivery.\n\"After I started having pain on the morning of the 3rd day, my husband went out to call the TBAs. TBAs was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her (A lactating mother of 19 years old)\nFew families, especially the husbands, could put aside some money for delivery purpose. One husband had financial constraints, so he saved 40 kg of rice. Twelve women said that they only collected some old clothes, which they had kept separately, but they had not stitched any new dresses or Kathas (local quilt covering) for the arrival of the new baby. The women believed that it was bad to buy new clothes or make too many plans in advance for the new arrival as it could bring bad luck. Moreover, they were not sure whether the coming child would survive or not. Money spent on her/him was considered to be unnecessary. Women assumed that transportation would be available either from a family member or from a neighbour when needed and, as such, did not plan for the transportation in advance.\nAlmost all the women reported about becoming aware of their pregnancy when they experienced amenorrhoea, nausea and vomiting, loss of appetite and weakness. Most of them could identify their pregnancy within the first 2-3 months. According to them, pregnancy identification and its subsequent care was seen as a normal event which did not require any additional medical intervention unless significant complications arose during this period. Two women were found to have had no menstruation history after delivery of the last baby till conception of the next one.\n[SUBTITLE] Antenatal care [SUBSECTION] Confirmation of pregnancy was considered by the women as the most important part of antenatal care and eight out of 20 women went to the nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in ante-natal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or have bad taste) and made the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. 
Monetary constraints, absence of knowledge about the need of services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.\nConfirmation of pregnancy was considered by the women as the most important part of antenatal care and eight out of 20 women went to the nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in ante-natal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or have bad taste) and made the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. Monetary constraints, absence of knowledge about the need of services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.\n[SUBTITLE] Social exclusion [SUBSECTION] The following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:\n\"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her.\" (A lactating mother of 27 years old)\nIn two cases, the women reported that the respective health worker came to their house to give birth control pills. However, neither of them came out of their respective houses. The use of harsh words and low tolerance level of the health workers discouraged the women to use the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.\n\"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 20.Which services are then considered to be free of charge?\" (A pregnant woman of 21 years old)\nThe following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:\n\"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her.\" (A lactating mother of 27 years old)\nIn two cases, the women reported that the respective health worker came to their house to give birth control pills. However, neither of them came out of their respective houses. The use of harsh words and low tolerance level of the health workers discouraged the women to use the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.\n\"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 20.Which services are then considered to be free of charge?\" (A pregnant woman of 21 years old)\n[SUBTITLE] Nutrition during pregnancy [SUBSECTION] Women reported that the reasons for decreasing consumption of food during pregnancy were mostly related to aversion to specific food, followed by lack of monetary power to purchase specific food that extremely poor households usually consumed (like rice, potato, and small fish). 
However, a few women also reported increasing their food consumption during pregnancy, and the reasons given for this differed from those given for decreasing consumption. The most frequently cited reason was "feeling like eating more". Craving for a specific food was also cited as a reason for increased consumption of certain items such as a molasses-made drink, rice with green chillies, and milk. Two of the women mentioned that the increased food intake was directly related to improving the health of the mother or the baby; this tended to be the case where husbands and other family members were helpful and better informed.

"I know that eating more food is necessary when there is a baby in the womb. But I am poor, how can I afford it?" (A pregnant woman of 21 years old)

Some foods, like duck, pigeon, beef and Hilsha fish, were considered "hot" and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their means, were also restricted during pregnancy. No restrictions were reported on consuming fruits among the extreme poor households.

Restrictions and mobility during pregnancy

It was generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses at those times. Walking through graveyards was also thought to be harmful for pregnant women; if they did so, they tied up their hair and covered their heads with veils.

"Evil spirits could cause miscarriage of the fetus; that is why I did not go out at prayer time" (A lactating mother of 31 years old)

A few women reported a belief that carrying a piece of iron would ensure protection, and matches were also said to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. Those who experienced an eclipse during their last pregnancy reported that they had stayed inside the household or walked near or inside the home, but had never lain down on the bed during the eclipse. They also observed certain restrictions during this period: they did not eat, cook, cut or twist anything, as they perceived that otherwise the child would be born with a cleft palate or deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when a lunar or solar eclipse would occur. In any case, restrictions on movement were never imposed by health providers but rather by the elderly women of the family.

Support from husband

The present study found that, even in the midst of poverty, the husband could play a positive role in taking care of his wife during pregnancy. An illustrative case:

All that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father, on which he has built a house to live in. He is physically disabled and has no source of income other than the 2/3 kg of rice that he gets from begging door to door. From that amount, Mamtaz keeps whatever is required for the household and sells the rest to buy other necessities, such as salt and vegetables, to run her family. Sometimes, when her husband is unable to go begging, Mamtaz goes to other people's houses to work during her pregnancy. Her husband did not wish her to work at other people's houses while pregnant and said that whatever he earned through begging was enough for them to live on. Still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or maintain floors in exchange for one or half a kilogram of rice. (A pregnant woman of 23 years old)

Women considered this attitude of their husbands positive, particularly when they were too weak to work or to continue the usual household chores. In three cases, women consulted their husbands and jointly decided to stop another child's schooling so that the child could rear the animals and the woman could rest during her pregnancy. Overall, women reported that husbands and other family members helped them with heavy work during pregnancy; activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.
Birth preparedness

Birth preparedness in the present study included selecting a skilled birth attendant, arranging the delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging the necessary money and transport for the delivery. We found only one woman who had a birth plan covering all of these elements before delivery. Most women did not contact the birth attendant, locally known as a traditional birth attendant (TBA or dai), in advance because they thought the TBA could perform jadutona (black magic) during the pregnancy so that the delivery could not take place without her presence, and because they expected to pay more if she was engaged early.

"After I started having pain on the morning of the 3rd day, my husband went out to call the TBA. The TBA was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her." (A lactating mother of 19 years old)

Few families, especially the husbands, could put aside money for the delivery; one husband, facing financial constraints, saved 40 kg of rice instead. Twelve women said that they had only collected some old clothes, which they kept separately, but had not stitched any new dresses or kathas (local quilt coverings) for the arrival of the new baby. The women believed it was bad to buy new clothes or make too many plans in advance for the new arrival, as this could bring bad luck. Moreover, they were not sure whether the coming child would survive, so money spent on her or him was considered unnecessary. Women assumed that transport would be available from a family member or a neighbour when needed and, as such, did not plan for it in advance.

Delivery care

We found that after the onset of labour pain, elderly women usually took matters into their own hands, taking various steps and observing certain rituals to facilitate early delivery of the baby. Women reported that relatives sometimes fed them pora pani (enchanted water) to boost their mental strength, that is, to make the expectant mother psychologically strong enough to face the labour pain. To give energy to the delivering mother and to intensify the labour pain, five mothers said that they had received saline with an injection (oxytocin) from a neighbouring pallichikitshok (village doctor).

Place of birth and attendance at delivery

Most of the deliveries took place at home (19 out of 20). The TBA was usually called only after the onset of strong labour pain, and a TBA or friends and relatives were the persons most commonly present as birth attendants. Almost all the deliveries took place on the floor: four women gave birth on the bare floor, while the rest delivered on spread-out cloths, jute sacks or straw. Delivery was not carried out on beds in order to avoid spoiling them, and according to the women, materials such as straw or polythene placed on the floor made cleaning and disposing of the impure blood and placenta much easier. Ten of the 20 women reported that their preferred position for giving birth was squatting, although this often changed as labour progressed; for three of the women, squatting was found to be more painful than lying. The position to be adopted was usually decided in discussion with the TBA and other female relatives. No cases were found in which any male member of the household was involved during the delivery.

Health system factors, such as the attitudes of healthcare staff in both public and private facilities, also had an impact on maternal care practices. Poor staff attitudes were perceived to exist in most health facilities, including the use of abusive language, denial of services, a general lack of compassion and refusal to assist properly.

"I went to a government facility for antenatal care. The concerned person told me I might need a caesarean, and I would die if I did not go to the hospital. Are these words good to tell someone who is pregnant?" (A pregnant woman of 17 years old)

The mothers were asked whether the birth attendant had washed her hands with soap before the delivery, and whether the instrument used for cutting the umbilical cord and the thread used for tying it were boiled before use. Hand washing before conducting the delivery was very uncommon. Boiling the blade was common, but the thread was reportedly not boiled; women said that at times the TBAs only passed the thread over a flame, and did not boil it if it was new.

Practices to speed up the delivery of the placenta

After the birth of the baby, the focal point of attention was mainly the removal or expulsion of the placenta, which was believed to have spiritual value. It was believed that the mother would be in danger if the placenta was not delivered quickly after the birth; nine women believed that the placenta could move up to the throat and choke the mother to death if not removed promptly. To release the placenta, or in cases where its delivery was delayed, TBAs or relatives would massage the woman's abdomen, gag her with her own hair, or give her kerosene oil or onion juice to induce vomiting, which was believed to help expel the placenta through abdominal contractions. One woman reported that the TBA wiped her chest with a dirty cloth (one used for mud cleaning), after which the placenta was expelled. Management of the placenta was sometimes given higher priority than care of the newborn immediately after birth. It was believed that the placenta should be buried in dry soil so that the child would not suffer from colds or coughs at a later stage. To save money, some women preferred to cut the umbilical cord themselves; eight of the 20 women did so at their last delivery. If another woman cut the umbilical cord for them, they had to pay a minimum of Tk. 20 (US$ 0.30).
Post-partum care

During the post-partum period, especially during the first 5-9 days of isolation, the mothers reported various dietary restrictions imposed on them which deprived them of proper nutrition. Most of the food available in these extremely poor households was thought to be inappropriate during lactation. In the case of four women, no food at all was allowed during the first few days after delivery, and it was common for no food to be given during the first day after delivery, to allow healing of the birth passage. Moreover, women were considered impure during this time and were not allowed to touch any food when meals were being prepared for other family members. The in-depth interviews revealed that the mothers-in-law and the elders played a dominant role in deciding what the mother could eat. Shujata, whose child was not even a month old, lived with her husband, mother and an elder sister. She said:

"We all live together, use the same kitchen but have separate rooms. Since the child's delivery, my mother and sister prepare the food, as newly delivered women (poaati ma) are not supposed to cook until 40 days after the delivery because their body is considered impure. People wouldn't like it if I cooked, and it's not even good for me. My mother brings me food in my room and gives me less than my usual intake so that I don't fall ill. A poaati ma (lactating mother) should eat as little as possible until her umbilical cord dries up. It does not matter if I'm still hungry and feel weak, as long as I don't have to spend money on a doctor's visit. It does no harm to follow the elder women. I have my whole life to eat more, so it's fine if I eat a little less for the first 1-2 months. I prefer weakness to illness." (A lactating mother of 23 years old)

These women did not seem to have any problem with the imposed dietary restrictions, because their economic condition did not allow them to buy animal-source foods such as beef or chicken anyway.

The most common items eaten during the post-partum period included rice, mashed potato with spices, raw tea, green banana, black cumin, poppy seed and fenugreek leaves, which are believed to keep the stomach cool and initiate the production of breast milk. Whatever special food was consumed during the post-partum period was reported to be taken only for a few days, whereas the imposed food restrictions continued for much longer, that is, 21-40 days. Opinions on the intake of spicy food were mixed: it was given for the first few days to heal the birth passage, but was later restricted to avoid heartburn.

Women reported feeling weak, with severe body aches, after their last delivery (11 out of 20 interviews); this lasted from one to three weeks. None of them had gone to any health provider to seek care for this weakness and body ache, reasoning that they were not even aware that a post-partum check-up was available. Four women reported that their husbands or mothers had gone to pharmacies to fetch vitamin tablets or saline when they complained of weakness. This weakness was, however, considered a normal part of post-partum life. To quote one woman:

"It is normal to have some body ache and fever after delivery; these would be cured automatically" (A lactating mother of 24 years old)
Characteristics of the respondents

The majority of the women (15 of the 20 respondents) were in their twenties; two were below 20 years of age, while three were in their thirties. Most of the interviewees were Muslim. Non-farm labour, such as rickshaw pulling, was one of the most common occupations of the household heads. Most lived in poor hygienic conditions, as reflected by their sanitation facilities (with or without water-seal or pit latrines), water access and overall household condition. At the time of data collection, all the respondents were in the initial phase of the CFPR-II programme.

Pregnancy identification

Almost all the women reported becoming aware of their pregnancy when they experienced amenorrhoea, nausea and vomiting, loss of appetite and weakness, and most could identify their pregnancy within the first 2-3 months. According to them, pregnancy identification and the subsequent care were seen as normal events that did not require any additional medical intervention unless significant complications arose. Two women were found to have had no menstruation between the delivery of their last baby and conception of the next one.

Antenatal care

Confirmation of pregnancy was considered by the women to be the most important part of antenatal care, and eight of the 20 women went to nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received at antenatal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or to have a bad taste) and to make the stool black; none of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. Monetary constraints, lack of knowledge about the need for services, and restrictions on women's movement were also cited as reasons for not accessing antenatal care.
Social exclusion

The following finding is not unique, but is typical of the broad experience of ultra-poor women living in rural areas of Bangladesh:

"I was avoiding the health worker because she would scold me if she heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her." (A lactating mother of 27 years old)

In two cases, the women reported that the health worker had come to their house to give them birth control pills, but neither of them came out to meet her. The harsh words and low tolerance of the health workers discouraged the women from using the antenatal care services provided by these facilities. The women perceived that they were treated in this manner by the health workers because of their low socioeconomic status.

"I heard that health services from government facilities are free of charge, but when I went to the health facility I was asked to make a card for Tk. 20. Which services are then considered to be free of charge?" (A pregnant woman of 21 years old)
But I am poor, how can I afford it?\" (A pregnant woman of 21 years old)\nSome food, like ducks, pigeons, beef and Hilsha fish were considered as 'hot' and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their affordability, were also restricted during pregnancy. There were no restrictions reported in consuming fruits among the extreme poor households.\nWomen reported that the reasons for decreasing consumption of food during pregnancy were mostly related to aversion to specific food, followed by lack of monetary power to purchase specific food that extremely poor households usually consumed (like rice, potato, and small fish). However, few women also reported increasing consumption of food during pregnancy. The reasons given for this varied from those given for decreasing consumption. The most frequently cited reason was 'feel like eating more.' 'Craving for a specific food 'was also cited as a reason for increased consumption of some food such as molasses-made drink, rice with green chilies, and milk. Two of the women mentioned that the increased food intake was directly related to improved health of the mother or the baby. This tended to be where husbands and other family members were helpful and better informed.\n\"I know that eating more food is necessary when there is a baby in womb. But I am poor, how can I afford it?\" (A pregnant woman of 21 years old)\nSome food, like ducks, pigeons, beef and Hilsha fish were considered as 'hot' and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their affordability, were also restricted during pregnancy. There were no restrictions reported in consuming fruits among the extreme poor households.\n[SUBTITLE] Restrictions and mobility during pregnancy [SUBSECTION] It is generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women. If they did so, they tied up their hair and covered their heads with veils.\n\"Evil spirits could cause miscarriage of the fetus,that is why I did not go out in prayer time\" (A lactating mother of 31 years old)\nA few women reported their beliefs about carrying a piece of iron which would ensure protection. Matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. They reported (those who got eclipse during last pregnancy) that they had stayed inside the household, walked near the home or inside the home, but had never laid down on the bed during eclipses. They also reported certain restrictions during this period - like they did not eat or cook, cut, and twist anything, as they perceived that the child would be born with a cleft palate or with deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse. In any case, restrictions in movement had never been imposed by any health providers but rather from the elderly women of the family.\nIt is generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women. 
If they did so, they tied up their hair and covered their heads with veils.\n\"Evil spirits could cause miscarriage of the fetus,that is why I did not go out in prayer time\" (A lactating mother of 31 years old)\nA few women reported their beliefs about carrying a piece of iron which would ensure protection. Matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. They reported (those who got eclipse during last pregnancy) that they had stayed inside the household, walked near the home or inside the home, but had never laid down on the bed during eclipses. They also reported certain restrictions during this period - like they did not eat or cook, cut, and twist anything, as they perceived that the child would be born with a cleft palate or with deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse. In any case, restrictions in movement had never been imposed by any health providers but rather from the elderly women of the family.\n[SUBTITLE] Support from husband [SUBSECTION] Present study found that in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:\nAll that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father on which he has built a house to live in. He is physically disabled. He does not have any source of income other than the 2/3 kg of rice that he gets from begging door to door. From that amount, Mamtaz keeps whatever is required for the household and sells the remaining to buy other necessities such as salt, vegetables to run her family. Sometimes when her husband is unable to go for begging, Mamtaz would go to other people's houses for work during pregnancy. Her husband did not wish for her to work at other people's houses during her pregnancy and expressed that whatever he earned through begging was enough for them to sustain. But still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or for maintenance of floors in exchange of one or half a kg of rice. (A pregnant woman of 23 years old)\nWomen considered this attitude of their husbands as a positive attitude if the women were too weak to work or continue the usual household work. In three cases, women consulted with their husbands and jointly decided to stop their other child's schooling so that the child would rear the animals and the women could rest during their pregnancy. Overall, during pregnancy, women reported that husbands and other family members helped them in doing heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.\nPresent study found that in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:\nAll that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father on which he has built a house to live in. He is physically disabled. He does not have any source of income other than the 2/3 kg of rice that he gets from begging door to door. 
From that amount, Mamtaz keeps whatever is required for the household and sells the remaining to buy other necessities such as salt, vegetables to run her family. Sometimes when her husband is unable to go for begging, Mamtaz would go to other people's houses for work during pregnancy. Her husband did not wish for her to work at other people's houses during her pregnancy and expressed that whatever he earned through begging was enough for them to sustain. But still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or for maintenance of floors in exchange of one or half a kg of rice. (A pregnant woman of 23 years old)\nWomen considered this attitude of their husbands as a positive attitude if the women were too weak to work or continue the usual household work. In three cases, women consulted with their husbands and jointly decided to stop their other child's schooling so that the child would rear the animals and the women could rest during their pregnancy. Overall, during pregnancy, women reported that husbands and other family members helped them in doing heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.\n[SUBTITLE] Birth preparedness [SUBSECTION] Birth preparedness for this present study included selecting a skilled birth attendant, arranging delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging necessary money and transport for delivery. We found only one woman who had a birth plan (took into consideration all the outlined stages of the preparation) before delivery. Most women did not contact the birth attendant who is locally known as traditional birth attendants (TBAs or dais) in advance because they thought TBAs could make some jadutona (Black magic) in advance during pregnancy so that without their presence delivery would not occur and that there were greater chances to pay more for delivery.\n\"After I started having pain on the morning of the 3rd day, my husband went out to call the TBAs. TBAs was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her (A lactating mother of 19 years old)\nFew families, especially the husbands, could put aside some money for delivery purpose. One husband had financial constraints, so he saved 40 kg of rice. Twelve women said that they only collected some old clothes, which they had kept separately, but they had not stitched any new dresses or Kathas (local quilt covering) for the arrival of the new baby. The women believed that it was bad to buy new clothes or make too many plans in advance for the new arrival as it could bring bad luck. Moreover, they were not sure whether the coming child would survive or not. Money spent on her/him was considered to be unnecessary. Women assumed that transportation would be available either from a family member or from a neighbour when needed and, as such, did not plan for the transportation in advance.\nBirth preparedness for this present study included selecting a skilled birth attendant, arranging delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging necessary money and transport for delivery. 
We found only one woman who had a birth plan (took into consideration all the outlined stages of the preparation) before delivery. Most women did not contact the birth attendant who is locally known as traditional birth attendants (TBAs or dais) in advance because they thought TBAs could make some jadutona (Black magic) in advance during pregnancy so that without their presence delivery would not occur and that there were greater chances to pay more for delivery.\n\"After I started having pain on the morning of the 3rd day, my husband went out to call the TBAs. TBAs was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her (A lactating mother of 19 years old)\nFew families, especially the husbands, could put aside some money for delivery purpose. One husband had financial constraints, so he saved 40 kg of rice. Twelve women said that they only collected some old clothes, which they had kept separately, but they had not stitched any new dresses or Kathas (local quilt covering) for the arrival of the new baby. The women believed that it was bad to buy new clothes or make too many plans in advance for the new arrival as it could bring bad luck. Moreover, they were not sure whether the coming child would survive or not. Money spent on her/him was considered to be unnecessary. Women assumed that transportation would be available either from a family member or from a neighbour when needed and, as such, did not plan for the transportation in advance.", "Confirmation of pregnancy was considered by the women as the most important part of antenatal care and eight out of 20 women went to the nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in ante-natal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or have bad taste) and made the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. Monetary constraints, absence of knowledge about the need of services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.", "The following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:\n\"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her.\" (A lactating mother of 27 years old)\nIn two cases, the women reported that the respective health worker came to their house to give birth control pills. However, neither of them came out of their respective houses. The use of harsh words and low tolerance level of the health workers discouraged the women to use the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.\n\"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 
20.Which services are then considered to be free of charge?\" (A pregnant woman of 21 years old)", "Women reported that the reasons for decreasing consumption of food during pregnancy were mostly related to aversion to specific food, followed by lack of monetary power to purchase specific food that extremely poor households usually consumed (like rice, potato, and small fish). However, few women also reported increasing consumption of food during pregnancy. The reasons given for this varied from those given for decreasing consumption. The most frequently cited reason was 'feel like eating more.' 'Craving for a specific food 'was also cited as a reason for increased consumption of some food such as molasses-made drink, rice with green chilies, and milk. Two of the women mentioned that the increased food intake was directly related to improved health of the mother or the baby. This tended to be where husbands and other family members were helpful and better informed.\n\"I know that eating more food is necessary when there is a baby in womb. But I am poor, how can I afford it?\" (A pregnant woman of 21 years old)\nSome food, like ducks, pigeons, beef and Hilsha fish were considered as 'hot' and were restricted during pregnancy. Some fish like Taki, Chanda and Puti, which were within their affordability, were also restricted during pregnancy. There were no restrictions reported in consuming fruits among the extreme poor households.", "It is generally believed among the extreme poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women. If they did so, they tied up their hair and covered their heads with veils.\n\"Evil spirits could cause miscarriage of the fetus,that is why I did not go out in prayer time\" (A lactating mother of 31 years old)\nA few women reported their beliefs about carrying a piece of iron which would ensure protection. Matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. They reported (those who got eclipse during last pregnancy) that they had stayed inside the household, walked near the home or inside the home, but had never laid down on the bed during eclipses. They also reported certain restrictions during this period - like they did not eat or cook, cut, and twist anything, as they perceived that the child would be born with a cleft palate or with deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse. In any case, restrictions in movement had never been imposed by any health providers but rather from the elderly women of the family.", "Present study found that in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:\nAll that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father on which he has built a house to live in. He is physically disabled. He does not have any source of income other than the 2/3 kg of rice that he gets from begging door to door. 
From that amount, Mamtaz keeps whatever is required for the household and sells the remaining to buy other necessities such as salt, vegetables to run her family. Sometimes when her husband is unable to go for begging, Mamtaz would go to other people's houses for work during pregnancy. Her husband did not wish for her to work at other people's houses during her pregnancy and expressed that whatever he earned through begging was enough for them to sustain. But still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or for maintenance of floors in exchange of one or half a kg of rice. (A pregnant woman of 23 years old)\nWomen considered this attitude of their husbands as a positive attitude if the women were too weak to work or continue the usual household work. In three cases, women consulted with their husbands and jointly decided to stop their other child's schooling so that the child would rear the animals and the women could rest during their pregnancy. Overall, during pregnancy, women reported that husbands and other family members helped them in doing heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.", "Birth preparedness for this present study included selecting a skilled birth attendant, arranging delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging necessary money and transport for delivery. We found only one woman who had a birth plan (took into consideration all the outlined stages of the preparation) before delivery. Most women did not contact the birth attendant who is locally known as traditional birth attendants (TBAs or dais) in advance because they thought TBAs could make some jadutona (Black magic) in advance during pregnancy so that without their presence delivery would not occur and that there were greater chances to pay more for delivery.\n\"After I started having pain on the morning of the 3rd day, my husband went out to call the TBAs. TBAs was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her (A lactating mother of 19 years old)\nFew families, especially the husbands, could put aside some money for delivery purpose. One husband had financial constraints, so he saved 40 kg of rice. Twelve women said that they only collected some old clothes, which they had kept separately, but they had not stitched any new dresses or Kathas (local quilt covering) for the arrival of the new baby. The women believed that it was bad to buy new clothes or make too many plans in advance for the new arrival as it could bring bad luck. Moreover, they were not sure whether the coming child would survive or not. Money spent on her/him was considered to be unnecessary. Women assumed that transportation would be available either from a family member or from a neighbour when needed and, as such, did not plan for the transportation in advance.", "We found that after the initiation of labour pain, elderly women usually took matters into their own hands by taking various steps and observing certain rituals to facilitate early delivery of the baby. 
Women reported that sometimes relatives fed them pora pani (enchanted water) to boost up their mental strength, which referred to the psychological aspect of the expectant mothers as being strong enough to face the labour pain. For providing energy to the delivering mother as well as to intensify the labour pain, five mothers said that they had received saline with an injection (oxytocin) from a neighbouring Pallichikitshok (Village doctor).\n[SUBTITLE] Place of birth and attendance at delivery [SUBSECTION] Most of the deliveries took place at home (19 out of 20). The TBA was usually called after the onset of strong labour pain. TBA or friends/relatives were the most common persons to be present as the birth attendant during delivery. Almost all the deliveries took place on the floor. Four women even gave births on just the bare floor, but for the rest, the deliveries took place on spread out cloths or jute sacks or straw. Delivery was not carried out on beds in order to avoid spoiling it. According to the women, few materials like straw, polythene, etc. if placed on the floor, made cleaning and disposing of impure blood and placenta much easier. Ten out of the 20 women reported that their preferred position was squatting while giving birth. However, this may have, and often did change, as labour progressed. For three of the women, squatting position was found to be more painful than lying. Usually, the position to be adopted was often decided upon discussion with the TBA and other female relatives. No cases were found where any male members of the household were involved during delivery.\nHealth system factors such as staff attitudes from healthcare, either public or private, also had an impact on maternal care practice. Poorly attitude of the healthcare staffs were perceived to exist in most health facilities; these included usage of abusive language, denial of providing services, lacking compassion in general and refusal to assist properly.\n\"I went to government facility for antenatal care. The concerned person told me I might need cesarean, and I would die if I did not go to the hospital. Are these words good to tell someone who is pregnant?\" (A pregnant woman of 17 years old)\nThe mothers were asked whether or not the birth attendant had washed their hands with soap before delivery, and whether or not the instrument used for cutting the umbilical cord and the thread used for tying the cord were boiled before use. Washing hands before conducting the delivery was very low. Boiling the blade was high, but it was reported that they did not boil the thread. Women reported that at times, the TBAs kept the thread aflame only and did not boil it if the thread was new.\nMost of the deliveries took place at home (19 out of 20). The TBA was usually called after the onset of strong labour pain. TBA or friends/relatives were the most common persons to be present as the birth attendant during delivery. Almost all the deliveries took place on the floor. Four women even gave births on just the bare floor, but for the rest, the deliveries took place on spread out cloths or jute sacks or straw. Delivery was not carried out on beds in order to avoid spoiling it. According to the women, few materials like straw, polythene, etc. if placed on the floor, made cleaning and disposing of impure blood and placenta much easier. Ten out of the 20 women reported that their preferred position was squatting while giving birth. However, this may have, and often did change, as labour progressed. 
Practices to speed up the delivery of placenta

After the birth of the baby, the focal point of attention was mainly the removal or expulsion of the placenta, as the placenta was believed to have spiritual value. It was believed that if the placenta was not delivered quickly after the baby's birth, the mother would be in danger; nine women believed that the placenta could move up to the throat and choke them to death if not removed promptly. To release the placenta after delivery, or where there was a delay in the process, TBAs or relatives would massage the woman's abdomen, gag her with her own hair, or give kerosene oil or onion juice to induce vomiting, which was believed to help expel the placenta through abdominal contractions. One woman reported that the TBA wiped her chest with a dirty cloth (one used for mud cleaning) and that this was followed by expulsion of the placenta. Treatment of the placenta was sometimes considered a higher priority than treatment of the newborn immediately after birth. It was believed that the placenta should be buried in dry soil so that the child would not suffer from colds or coughs at a later stage. To save money, some women preferred to cut the umbilical cord themselves; eight of the 20 women had cut the cord themselves at their last delivery. If another woman cut the umbilical cord for them, they had to pay a minimum of Tk. 20 (US$ 0.30).
Post-partum care

During the post-partum period, especially during the first 5-9 days of isolation, the mothers reported various dietary restrictions imposed on them which deprived them of proper nutrition. Most food available in these extremely poor households was thought to be inappropriate during lactation. Four women were allowed no food at all during the first few days after delivery, and commonly no food was given at all during the first day after delivery to allow healing of the birth passage. Moreover, women were considered impure during this time and were not allowed to touch any food when meals were prepared for other family members. The in-depth interviews revealed that mothers-in-law and the elders played a dominant role in deciding what foodstuffs the mother could eat. Shujata, whose child was not even a month old, lived with her husband, mother and an elder sister. She said:

"We all live together, use the same kitchen but have separate rooms. Since the child's delivery, my mother and sister prepare the food, as newly delivered women (poaati ma) are not supposed to cook till 40 days after the delivery because their body is considered to be impure. People wouldn't like it if I cook, and it's not good for me even. My mother brings me food in my room and gives me less than my usual intake of food so that I don't fall ill. A poaati ma (lactating mother) should eat as little as possible till her umbilical cord dries up. It does not matter if I'm still hungry and feel weak, as long as I don't have to spend money for a doctor's visit. It does not harm if you follow the elder women. I have the whole life to myself to eat more. So it's fine if I eat a little less the first 1-2 months. I prefer weakness to illness." (A lactating mother of 23 years old)

These women did not seem to have any problem with this imposed dietary restriction because their economic condition did not allow them to buy animal-source food like beef or chicken anyway.

The most common items eaten during the post-partum period included rice, mashed potato with spices, raw tea, green banana, black cumin, poppy seed and fenugreek leaves, which are believed to keep the stomach cool and initiate the production of breast milk.
Whatever special food was consumed during the post-partum period was reportedly taken only for a few days, while the imposed food restrictions continued for a longer time, i.e. 21-40 days. Opinions on the intake of spicy food were mixed: it was given for the first few days to promote healing of the birth passage, but was later restricted to avoid heartburn.

Eleven of the 20 women reported that after their last delivery they felt weak and had severe body aches, lasting for one to three weeks. None of them had gone to any health provider to seek care for this weakness and body ache; they reasoned that they had not gone for a check-up because they were not even aware that post-partum check-ups were available. Four women reported that their husbands or mothers had gone to pharmacies to fetch vitamin tablets or saline when they mentioned their weakness. This weakness was, however, considered a normal part of post-partum life. To quote one woman:

"It is normal to have some body ache and fever after delivery; these would be cured automatically." (A lactating mother of 24 years old)

Discussion

There is a dearth of information on the maternal care practices of marginalized women from ultra poor households in rural Bangladesh. This study attempted to fill some of this knowledge gap through qualitative in-depth interviews with a group of women participating in a grant-based livelihood development programme for the ultra poor. The findings regarding practices were very similar to what is known for poor rural women [9-11,13,16,17] or women from urban slums [12,18], which clearly indicates the powerful influence of culture and tradition in this area.

It is well documented that demographic characteristics, affluence and socio-cultural factors play a major role in maternal care practices [2]. The present study has also emphasized and reconfirmed that dire poverty and social exclusion create an environment which pushes mothers away from antenatal or post-natal care services. Antenatal care is one of the "four pillars" of safe motherhood, as formulated by the Maternal Health and Safe Motherhood Programme, Division of Family Health of the World Health Organization (WHO) [19]. The literature suggests that home visits by community-based health workers can help to reduce neonatal mortality by ensuring identification of pregnant women and by promoting optimal maternal health through antenatal and postnatal care visits to their homes [13,15,20,21]. When multiple health services are provided by a single person in a community, problems can arise. For example, the present study found that because the same individual was responsible for providing contraceptive pills as well as antenatal care, women felt shy, and sometimes scared, to share news of their pregnancy with the healthcare provider for fear of being scolded for discontinuing contraception. This tendency of healthcare providers, as reported elsewhere [2], to reprimand mothers in some situations may discourage ultra poor women from using healthcare services, and may result in fewer pregnant women being identified overall. The situation will only worsen if a woman's seeking of care for herself is further constrained by the involvement of other family members in the decision-making process.
This involvement of other family members is very common in Bangladesh, as found in other studies [2,12,13,16]. There is a need to sensitize health workers to the needs of pregnant mothers so that beneficiaries are not scared of them, and women should be able to share their problems easily with healthcare providers. However, it is encouraging to note that no traditional beliefs were reinforced by the healthcare providers in the community.

In South Asia, and even in some African countries, only a small proportion of women deliver in healthcare facilities [14-16,21,22], and birth preparedness in this region is also low [12,20]. Financial constraints, coupled with traditional beliefs and rituals, were seen to delay and sometimes altogether prevent any prior preparation for childbirth. This indicates a dire need to raise awareness among pregnant women of the importance of a planned delivery. Beliefs and rituals also affect maternal nutrition. The per capita mean food consumption of ultra poor households is low compared with the national average intake [3,4,14], and half of ultra poor women suffer from malnutrition [4], which often becomes worse because of food taboos during pregnancy and the post-partum period. However, commonly taken drinks and foods such as raw tea, black cumin, poppy seed, fenugreek leaves and neem leaves have health benefits beyond basic nutrition [23]. As the BRAC CFPR II programme is reaching households to deliver messages on health issues, other family members or neighbours can be advised not to impose any dietary restrictions during pregnancy and, most importantly, in the post-partum period.

Deliveries were reportedly conducted mostly with the mother in a squatting position, which is consistent with the findings of other studies in Bangladesh [10,20]. The present study also reveals various beliefs and practices regarding the expulsion of the placenta, which were often risky, as found in other studies [11,12,16]. These harmful practices, along with the post-partum confinement of mothers in ultra poor households, could be major factors in delaying or preventing care-seeking outside the home. According to the respondents in this study, caregiving is considered a female role within the household; men become involved in maternal care provision only where there is no able-bodied female to take over the role. A husband's involvement in particular situations, such as not allowing any heavy work during pregnancy or going to the pharmacy to fetch vitamins and supplements, could shift some of the care burden and distribute it more equally within the family. In both such situations a positive outcome was found: the pregnant woman was able to rest during pregnancy and could take vitamin supplementation in the post-partum period. Attention could therefore be given to increasing the role of men and boys in care provision beyond their traditionally defined role.

These findings are based on self-reported maternal care practices and may therefore differ from actual practices. Although we intended to use participant observation to obtain and validate data, this was not possible due to resource constraints.
However, given the consistency of the findings in the qualitative interviews and other quantitative studies [16,17,24], we are confident that the findings represent actual practices.

Conclusions

This paper attempts to outline potential areas for programme interventions to improve maternal morbidity and mortality in rural areas of Bangladesh towards achieving Millennium Development Goal 5. The study shows that cultural beliefs and norms have a strong influence on maternal care practices among ultra poor households and can override the beneficial economic effects of a livelihood support intervention. Some of these practices, often shaped by various taboos and beliefs, may at times be harmful. Health behaviour education in this livelihood support programme can be carefully tailored to local cultural beliefs to achieve better maternal outcomes. Furthermore, quantitative research could be carried out to determine the level and extent of the disadvantages suffered by ultra poor women compared with other groups.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

NC and SMA conceived this qualitative study and participated in its design. NC was involved in coordination, data collection and data analysis. Both authors interpreted the data. NC drafted the manuscript, and SMA provided critical input to improve the draft and was also involved in editing the manuscript. Both authors read the final draft and approved it for submission.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2393/11/15/prepub
[ "Background", "Methods", "Results", "Respondent's profile", "Pregnancy care", "Antenatal care", "Social exclusion", "Nutrition during pregnancy", "Restrictions and mobility during pregnancy", "Support from husband", "Birth preparedness", "Delivery care", "Place of birth and attendance at delivery", "Practices to speed up the delivery of placenta", "Post-partum care", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Nearly one quarter to one third of Bangladesh's population lives under extreme poverty [1]. To improve the maternal health indicators it is necessary to develop interventions that meet the health needs of extremely poor women. These extremely poor households, henceforth ultra poor households, have few or no asset base, are highly vulnerable to any shock (e.g., natural disaster, disability of an income-earner or illnesses involving costly care), and mainly depend on seasonal wage-labour for survival. They are characterized by their inability to participate fully in social and economic activities which make them vulnerable to differential treatment of the society [2]. As a result, they are often excluded from government and even from the microcredit-based non-governmental poverty reduction programmes [3].\nTo reach the ultra poor in rural Bangladesh who could not benefit from conventional microcredit programmes, BRAC (an indigenous non-government development organization) implemented an integrated, grant-based programme (comprising asset transfer, skills training, enterprise support, weekly stipend and healthcare support to mitigate income-erosion effect of illnesses) during 2002-2006 called Challenging the Frontiers of Poverty Reduction Phase I (CFPR-I) [3]. Five inclusion criteria (dependent upon female domestic work or begging as income source; possessing ≤10 decimals of land; no male adult active member in the household; children of school going age engaged in paid work; and absence of productive assets in the household) and three exclusion criteria (no adult woman in the household who is able to work; participating in microfinance; and beneficiary of government/NGO development project) were used to identify the ultra poor households for the BRAC CFPR programme. Only those surveyed households who met at least three of the five inclusion criteria and none of the exclusion criteria, were finally selected as the beneficiaries of the CFPR programme. Thus the study population refers to the lowest wealth quintile in a community. A series of evaluations also showed positive impact from the programme including improved income, food security and nutritional status, and health knowledge and behaviour [4-6]. Drawing on the learning and experiences from the first phase (CFPR-I), the five-year second phase of this programme (CFPR-II) started in 2007 with an even greater scale and scope [7].\nBangladesh is one of the low- income countries where poorer groups use less health care and there exists a poor-rich inequality in maternity care and maternal mortality [2,8]. Apart from this poor-rich inequality, social and cultural beliefs and practices regarding motherhood and childrearing also have significant influence on maternal health [9-13]. As a result, these poorest-of-the-poor women suffer a greater burden of ill health including maternal health [3]. They are marginalized compared to national rural average in terms of accessing antenatal care (ANC) (37% for ultra poor women and 60% national rural women), institutional delivery (5% from lowest wealth quintile and 15% national rural average), receiving iron- supplementation during last pregnancy (53% for ultra poor and 55% national rural women), and use of contraceptive prevalence (63% for ultra poor and 80% ever-married women in Bangladesh) [7,14]. These maternal health and health care indicators are also important determinants of neonatal survival [15], which is an important contributing factor for infant survival. 
Different studies have been carried out to learn about maternal care practices in the rural areas [9-11,13,16,17] or urban slums of Bangladesh [12,18], but none of them specifically focused on a marginal population such as the ultra poor households. Understanding the context in which women and their families would be willing to accept new practices, i.e. knowing what changes they would make under what conditions, is essential for crafting realistic and relevant behaviour change messages. This study sought to fill this knowledge gap by qualitative exploration of the existing maternal care practices during pregnancy, delivery and the post-partum period among women of the ultra poor households included in the CFPR-II programme (2007-2012). It also attempted to highlight the underlying reasons for these maternal care practices. This, in turn, is expected to inform programme interventions which would hopefully improve maternal morbidity and survival and contribute to achieving the UN Millennium Development Goal 5 (MDG 5).

Methods

A qualitative exploratory framework was used to collect information on maternal care practices from women in ultra poor households. The sample included both currently pregnant women who had had at least one previous childbirth, and lactating mothers, living in the CFPR-II programme areas [7]. Rangpur and Kurigram districts in the northern part of Bangladesh were selected for data collection. Eight administrative offices of the CFPR-II programme in these two districts, where the CFPR-II baseline survey had been conducted, were then selected [7]. To identify the respondents, two anthropologists, with the help of the researchers, visited each office and listed the names of the CFPR-beneficiary households, their addresses, and the availability of eligible respondents using a household listing form. From each of these lists, 2-3 households were randomly selected for interview. Thus, 20 women (12 lactating mothers and 8 currently pregnant women) were selected for the study.

The study passed through the institutional review process at the BRAC Research and Evaluation Division (Internal Review and Publication Committee) for ethical approval. Ethical clearance was also obtained from the Bangladesh Medical Research Council (BMRC) for the entire five-year research project to evaluate the CFPR-II programme. Data were collected after obtaining informed verbal consent from the respondents, who were hesitant about signing any document. The written consent form was read and explained to the respondent; only when the investigator was satisfied that the respondent understood it, including its implications, and agreed to participate was she selected for the in-depth interview. Anonymity of the respondents was maintained at all stages of data analysis.

Data were collected through in-depth interviews in Bangla, following a flexible interview guide, during October-December 2007. The interviews were conducted by two experienced anthropologists under the supervision of the researchers. The in-depth interview guide comprised three sections: the first covered socioeconomic information and the cultural background of the extremely poor households; the second covered maternal care practices and rituals, including pregnancy care, delivery care and postpartum care; and the last covered beliefs regarding the care practices and recommendations. The interviews lasted between 60 and 90 minutes.
Throughout the process, many loosely structured informal discussions took place because, in a communal setting, it was difficult to conduct one-to-one interviews and discuss individual perceptions of practices.

All the interviews were recorded and transcribed on the same day. The transcripts were analysed thematically: they were reviewed to develop a code list for the topics related to the research questions, codes were applied manually by the interviewers, and text pertaining to the codes was organized in a matrix and translated into English. No software was used for the analysis.

Results

Respondent's profile

The majority of the women (15 out of 20 respondents) were in their twenties; two were below 20 years of age, while three were in their thirties. Most of the interviewees were Muslim. Non-farming labour, like rickshaw pulling, was one of the most common occupations of the household heads. Most of them lived in poor hygienic conditions, as revealed by their sanitation facilities (with or without a water-seal or pit latrine), water access, and overall household condition. At the time of data collection, all the respondents were in the initial phase of the CFPR-II programme.

Pregnancy care

Almost all the women reported becoming aware of their pregnancy when they experienced amenorrhoea, nausea and vomiting, loss of appetite and weakness. Most of them could identify their pregnancy within the first 2-3 months. According to them, pregnancy identification and its subsequent care were seen as normal events which did not require any additional medical intervention unless significant complications arose during this period. Two women were found to have had no menstruation after delivery of the last baby until conception of the next one.

Antenatal care

Confirmation of pregnancy was considered by the women as the most important part of antenatal care, and eight out of 20 women went to nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received in antenatal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or to have a bad taste) and to make the stool black. None of them took iron tablets for more than two months during their last pregnancy. Most of these women opined that antenatal care provided no benefit to them or their child. Monetary constraints, lack of knowledge about the need for services, and restrictions on the movement of women were also cited as reasons for not accessing antenatal care.
Social exclusion

The following finding is not unique but typical of the broad experience of ultra poor women living in the rural areas of Bangladesh:

"I was avoiding the health worker because she would scold me if she heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her." (A lactating mother of 27 years old)

In two cases, the women reported that the respective health worker came to their house to give birth control pills, but neither of them came out of her house. The use of harsh words and the low tolerance level of the health workers discouraged the women from using the services provided by these health facilities for antenatal care. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.

"I heard that health service from a government facility is free of charge, but when I went to the health facility I was asked to make a card for Tk. 20. Which services are then considered to be free of charge?" (A pregnant woman of 21 years old)

Nutrition during pregnancy

Women reported that the reasons for decreased food consumption during pregnancy were mostly related to aversion to specific foods, followed by lack of money to purchase the specific foods that extremely poor households usually consume (like rice, potato and small fish). However, a few women also reported increased food consumption during pregnancy, for reasons that differed from those given for decreased consumption. The most frequently cited reason was 'feeling like eating more'; 'craving for a specific food' was also cited as a reason for increased consumption of some foods, such as a molasses-based drink, rice with green chillies, and milk.
Two of the women mentioned that the increased food intake was directly related to improved health of the mother or the baby; this tended to be the case where husbands and other family members were helpful and better informed.

"I know that eating more food is necessary when there is a baby in the womb. But I am poor, how can I afford it?" (A pregnant woman of 21 years old)

Some foods, like duck, pigeon, beef and Hilsha fish, were considered 'hot' and were restricted during pregnancy. Some fish, like Taki, Chanda and Puti, which were within their means, were also restricted during pregnancy. No restrictions on consuming fruits were reported among the extremely poor households.

Restrictions and mobility during pregnancy

It is generally believed among the extremely poor households that evil spirits are more active in the evening, at noon and at night, so pregnant women avoided leaving their houses during those times. Walking through graveyards was also thought to be harmful for pregnant women; if they did so, they tied up their hair and covered their heads with veils.

"Evil spirits could cause miscarriage of the fetus, that is why I did not go out in prayer time." (A lactating mother of 31 years old)

A few women reported beliefs about carrying a piece of iron, which would ensure protection; matches were also reported to be effective in keeping away the evil gaze of the spirits. Most of the respondents mentioned that lunar and solar eclipses could affect pregnant women. Those who experienced an eclipse during their last pregnancy reported that they had stayed inside the household and walked near or inside the home, but had never lain down on the bed during the eclipse. They also reported certain restrictions during this period, such as not eating, cooking, cutting or twisting anything, as they perceived that the child would otherwise be born with a cleft palate or deformed features. Many of the women reported that elderly family members and husbands were the main informants as to when there would be a lunar or solar eclipse.
In any case, restrictions on movement were never imposed by any health provider but rather by the elderly women of the family.

Support from husband

The present study found that, in the midst of poverty, the husband could play a positive role in taking care of his wife during the pregnancy period. An illustrative case:

All that Mamtaz's husband can claim as his own are the homestead land and the house. He inherited three decimals of land from his father, on which he has built a house to live in. He is physically disabled and has no source of income other than the 2/3 kg of rice that he gets from begging door to door. From that amount, Mamtaz keeps whatever is required for the household and sells the rest to buy other necessities, such as salt and vegetables, to run her family. Sometimes, when her husband is unable to go begging, Mamtaz goes to other people's houses to work during her pregnancy. Her husband did not wish her to work at other people's houses while pregnant and said that whatever he earned through begging was enough for them to live on. Still, when he was not at home and someone called her for assistance, she would go to their houses to boil paddy or maintain floors in exchange for half a kilogram or a kilogram of rice. (A pregnant woman of 23 years old)

Women regarded this attitude of their husbands as positive when they were too weak to work or to continue their usual household chores. In three cases, women consulted their husbands and jointly decided to stop another child's schooling so that the child could rear the animals and the woman could rest during her pregnancy.
Overall, women reported that husbands and other family members helped them with heavy work during pregnancy. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.

Birth preparedness

Birth preparedness in the present study included selecting a skilled birth attendant, arranging the delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging the necessary money and transport for delivery. We found only one woman who had a birth plan (taking into consideration all the outlined stages of preparation) before delivery. Most women did not contact a birth attendant, locally known as a traditional birth attendant (TBA or dai), in advance, because they thought TBAs could perform jadutona (black magic) during pregnancy so that delivery would not occur without their presence, and because there was a greater chance of having to pay more for the delivery.

"After I started having pain on the morning of the 3rd day, my husband went out to call the TBA. The TBA was not informed beforehand as she takes more money if asked to stay for a longer period of time. This TBA is perhaps trained. Most of the children in this locality were delivered by her." (A lactating mother of 19 years old)

Few families, especially the husbands, could put aside some money for the delivery; one husband, facing financial constraints, instead saved 40 kg of rice. Twelve women said that they had only collected some old clothes, which they kept separately, but had not stitched any new dresses or kathas (local quilt coverings) for the arrival of the new baby. The women believed it was bad to buy new clothes or make too many plans in advance for the new arrival, as this could bring bad luck.
Moreover, they were not sure whether the coming child would survive, so money spent on her or him was considered unnecessary. Women assumed that transportation would be available from a family member or a neighbour when needed and, as such, did not plan for transportation in advance.
Overall, women reported that during pregnancy husbands and other family members helped them with heavy work. Activities such as fetching water, boiling and husking rice, lifting heavy cooking utensils and preparing food for animals were generally regarded as heavy work.
Birth preparedness
Birth preparedness in the present study included selecting a skilled birth attendant, arranging the delivery kit needed for a safe birth, identifying where to go in case of emergency, and arranging the necessary money and transport for delivery. We found only one woman who had a birth plan (one that took into consideration all the outlined stages of preparation) before delivery. Most women did not contact the birth attendant, locally known as a traditional birth attendant (TBA or dai), in advance, because they thought TBAs could perform jadutona (black magic) during pregnancy so that delivery could not occur without their presence, and because there was a greater chance of having to pay more for the delivery.
"After I started having pain on the morning of the 3rd day, my husband went out to call the TBA. The TBA was not informed beforehand as she took more money if asked to stay for a longer period of time. This TBA perhaps is trained. Most of the children in this locality were delivered by her" (A lactating mother of 19 years old)
A few families, especially the husbands, could put aside some money for the delivery; one husband, facing financial constraints, saved 40 kg of rice instead. Twelve women said that they had only collected some old clothes, which they kept separately, but had not stitched any new dresses or Kathas (local quilt coverings) for the arrival of the new baby.
The women believed that buying new clothes or making too many plans in advance for the new arrival could bring bad luck. Moreover, they were not sure whether the coming child would survive, and money spent on her or him was considered unnecessary. Women assumed that transportation would be available from a family member or a neighbour when needed and, as such, did not plan for it in advance.
Delivery care
We found that after the initiation of labour pain, elderly women usually took matters into their own hands, taking various steps and observing certain rituals to facilitate early delivery of the baby. Women reported that relatives sometimes fed them pora pani (enchanted water) to boost their mental strength, that is, to make the expectant mother feel psychologically strong enough to face the labour pain. To give the delivering mother energy and to intensify the labour pain, five mothers said that they had received saline with an injection (oxytocin) from a neighbouring Pallichikitshok (village doctor).
Place of birth and attendance at delivery
Most of the deliveries took place at home (19 out of 20). The TBA was usually called after the onset of strong labour pain, and the TBA or friends and relatives were the persons most commonly present as birth attendants during delivery. Almost all the deliveries took place on the floor. Four women gave birth on the bare floor; for the rest, the deliveries took place on spread-out cloths, jute sacks or straw. Delivery was not carried out on beds in order to avoid spoiling them.
According to the women, materials such as straw or polythene placed on the floor made cleaning up and disposing of the impure blood and placenta much easier. Ten of the 20 women reported that their preferred position while giving birth was squatting, although this often changed as labour progressed. For three of the women, the squatting position was more painful than lying down. The position to be adopted was usually decided in discussion with the TBA and other female relatives. No cases were found where any male members of the household were involved during delivery.
Health system factors, such as the attitudes of healthcare staff in both public and private facilities, also had an impact on maternal care practices. Poor staff attitudes were perceived to exist in most health facilities; these included the use of abusive language, denial of services, a general lack of compassion and refusal to assist properly.
"I went to government facility for antenatal care. The concerned person told me I might need cesarean, and I would die if I did not go to the hospital. Are these words good to tell someone who is pregnant?" (A pregnant woman of 17 years old)
The mothers were asked whether the birth attendant had washed her hands with soap before the delivery, and whether the instrument used for cutting the umbilical cord and the thread used for tying it had been boiled before use. Hand washing before conducting the delivery was rare. Boiling the blade was common, but the thread was reportedly not boiled; women said that at times the TBAs only held the thread over a flame, and did not boil it if the thread was new.
Practices to speed up the delivery of the placenta
The focal point of attention after the birth of the baby was mainly the removal/expulsion of the placenta, which was believed to have spiritual value. It was believed that if the placenta was not delivered quickly after the baby's birth, the mother would be in danger; nine women believed that the placenta could move up to the throat and choke them to death if not removed promptly. To release the placenta after delivery, or where there was a delay, TBAs or relatives would massage the woman's abdomen, gag her with her own hair, or give her kerosene oil or onion juice to induce vomiting, which was believed to help expel the placenta through abdominal contractions. One woman reported that the TBA wiped her chest with a dirty cloth (one used for cleaning mud), and this was followed by expulsion of the placenta. Dealing with the placenta was sometimes considered a higher priority than care of the newborn immediately after birth. It was believed that the placenta should be buried in dry soil so that the child would not suffer from cold or cough at a later stage. To save money, some women preferred to cut the umbilical cord themselves; eight out of 20 women did so at their last delivery. If another woman cut the umbilical cord for them, they had to pay a minimum of Tk. 20 (US$ 0.30).
Post-partum care
During the post-partum period, especially during the first 5-9 days of isolation, the mothers reported various dietary restrictions imposed on them which deprived them of proper nutrition. Most food available in these extreme poor households was thought to be inappropriate during lactation. Four women were allowed no food at all during the first few days after delivery, and commonly no food was given during the first day after delivery to allow healing of the birth passage. Moreover, women were considered impure during this time and were not allowed to touch any food used to prepare meals for other family members. The in-depth interviews revealed that mothers-in-law and elders played a dominant role in deciding what foodstuffs the mother could eat. Shujata, whose child was not even a month old, lived with her husband, her mother and an elder sister. She said:
"We all live together, use the same kitchen but have separate rooms. Since the child's delivery, my mother and sister prepare the food as the newly delivered women (Poaati ma) are not supposed to cook till 40 days after the delivery because their body is considered to be impure. People wouldn't like it if I cook and it's not good for me even. My mother brings me food in my room and gives me lesser than my usual intake of food so that I don't fall ill. Poaati ma (lactating mother) should eat as less as possible till her umbilical cord dries up. It does not matter if I'm still hungry and feel weak as long as I don't have to spend money for doctor's visit. It does not harm if you follow the elder women. I have the whole life to myself to eat more. So it's fine if I eat a little less the first 1-2 months. I prefer weakness to illness." (A lactating mother of 23 years old)
These women did not seem to have any problem with this imposed dietary restriction because their economic condition did not allow them to buy animal-source food like beef or chicken anyway.
The most common items eaten during the post-partum period included rice, mashed potato with spices, raw tea, green banana, black cumin, poppy seed and fenugreek leaves, which are believed to keep the stomach cool and initiate the production of breast milk. Whatever special food was consumed during the post-partum period was reportedly taken only for a few days, while the imposed food restrictions continued for a longer time, i.e. 21-40 days. Opinions on the intake of spicy food were mixed.
Spicy food was given for the first few days to aid healing of the birth passage, but later it was restricted to avoid heartburn.
Women reported that after their last delivery they felt weak, with severe body aches (11 out of 20 interviews), lasting from one to three weeks. None of them had gone to any health provider for this weakness and body ache; they reasoned that they did not go for a check-up because they were not even aware that post-partum check-ups were available. Four women reported that their husbands or mothers had gone to pharmacies to fetch vitamin tablets or saline when they had mentioned their weakness. This weakness was, however, considered a normal part of post-partum life. To quote one woman:
"It is normal to have some bodyache and fever after delivery, these would be cured automatically" (A lactating mother of 24 years old)
The majority of the women (15 out of 20 respondents) were in their twenties; two were below 20 years of age, while three were in their thirties. Most of the interviewees were Muslim. Non-farming labour, such as rickshaw pulling, was one of the most common occupations of the household heads. Most lived in poor hygienic conditions, as reflected in their sanitation facilities (with or without water-seal or pit latrines), water access and overall household condition. At the time of data collection, all the respondents were in the initial phase of the CFPR-II programme.
Almost all the women reported becoming aware of their pregnancy when they experienced amenorrhoea, nausea and vomiting, loss of appetite and weakness. Most of them could identify their pregnancy within the first 2-3 months. According to them, pregnancy and its subsequent care were seen as normal events that did not require any additional medical intervention unless significant complications arose. Two women were found to have had no menstruation after the delivery of their last baby until conception of the next one.
Antenatal care
Confirmation of pregnancy was considered by the women to be the most important part of antenatal care, and eight out of 20 women went to nearby health facilities for pregnancy tests. Reportedly, the most common service that a pregnant woman received at antenatal clinics was iron supplementation. However, most of them did not take all the tablets dispensed because they perceived the tablets to be tasteless (or to taste bad) and to make the stool black. None of them took iron tablets for more than two months during their last pregnancy.
Most of these women felt that antenatal care provided no benefit to them or their child. Monetary constraints, lack of knowledge about the need for these services, and restrictions on women's movement were also cited as reasons for not accessing antenatal care.
Social exclusion
The following finding is not unique but typical of the broad experience of ultra-poor women living in rural areas of Bangladesh:
"I was avoiding health worker because she would scold me if she would have heard about my 4th pregnancy. She used to give me birth control pills. So, I did not meet her and inform her." (A lactating mother of 27 years old)
In two cases, the women reported that the health worker came to their house to give birth control pills, but neither of them came out of the house. The use of harsh words and the low tolerance of the health workers discouraged the women from using the antenatal care services provided by these facilities. These women perceived that they were treated in such a manner by the health worker because of their low socioeconomic status.
"I heard that health service from government facility is free of charge but when I went to the health facility I was asked to make a card for Tk. 20. Which services are then considered to be free of charge?" (A pregnant woman of 21 years old)
To release the placenta after delivery or in cases where there was a delay in the process, TBAs or relatives were found to massage the abdomen of the women, gag her with her hair or give kerosene oil or onion juice to induce vomiting which was believed to help expel the placenta through abdominal contractions. A woman reported that the TBA wiped her chest with a dirty cloth (which was used in mud cleaning) and this was followed by expulsion of the placenta. Treatment of placenta was sometimes considered a priority than treatment of the newborn immediately after birth. It was believed that placenta should be buried in the dry soil so that the child would not suffer from any cold or cough at a later stage. To save some money, some women preferred to cut their umbilical cord themselves. Eight out of 20 women cut their umbilical cord in their last delivery. If any other woman had cut the umbilical cord for them, then they had to pay a minimum of Tk. 20 (US$ 0.30).", "Most of the deliveries took place at home (19 out of 20). The TBA was usually called after the onset of strong labour pain. TBA or friends/relatives were the most common persons to be present as the birth attendant during delivery. Almost all the deliveries took place on the floor. Four women even gave births on just the bare floor, but for the rest, the deliveries took place on spread out cloths or jute sacks or straw. Delivery was not carried out on beds in order to avoid spoiling it. According to the women, few materials like straw, polythene, etc. if placed on the floor, made cleaning and disposing of impure blood and placenta much easier. Ten out of the 20 women reported that their preferred position was squatting while giving birth. However, this may have, and often did change, as labour progressed. For three of the women, squatting position was found to be more painful than lying. Usually, the position to be adopted was often decided upon discussion with the TBA and other female relatives. No cases were found where any male members of the household were involved during delivery.\nHealth system factors such as staff attitudes from healthcare, either public or private, also had an impact on maternal care practice. Poorly attitude of the healthcare staffs were perceived to exist in most health facilities; these included usage of abusive language, denial of providing services, lacking compassion in general and refusal to assist properly.\n\"I went to government facility for antenatal care. The concerned person told me I might need cesarean, and I would die if I did not go to the hospital. Are these words good to tell someone who is pregnant?\" (A pregnant woman of 17 years old)\nThe mothers were asked whether or not the birth attendant had washed their hands with soap before delivery, and whether or not the instrument used for cutting the umbilical cord and the thread used for tying the cord were boiled before use. Washing hands before conducting the delivery was very low. Boiling the blade was high, but it was reported that they did not boil the thread. Women reported that at times, the TBAs kept the thread aflame only and did not boil it if the thread was new.", "It was found that the focal point of attention after birth of the baby was mainly on the removal/expulsion of the placenta, as the placenta was believed to have spiritual value. It was believed that after the baby's birth, if the placenta was not delivered quickly, the mother would be in danger. 
Nine women believed that the placenta could move up to the throat and choke them to death if not removed promptly. To release the placenta after delivery or in cases where there was a delay in the process, TBAs or relatives were found to massage the abdomen of the women, gag her with her hair or give kerosene oil or onion juice to induce vomiting which was believed to help expel the placenta through abdominal contractions. A woman reported that the TBA wiped her chest with a dirty cloth (which was used in mud cleaning) and this was followed by expulsion of the placenta. Treatment of placenta was sometimes considered a priority than treatment of the newborn immediately after birth. It was believed that placenta should be buried in the dry soil so that the child would not suffer from any cold or cough at a later stage. To save some money, some women preferred to cut their umbilical cord themselves. Eight out of 20 women cut their umbilical cord in their last delivery. If any other woman had cut the umbilical cord for them, then they had to pay a minimum of Tk. 20 (US$ 0.30).", "During the post-partum period, especially during the first 5-9 days of isolation, the mothers reported various dietary restrictions imposed on them which deprived them of proper nutrition intake. Most food available in these extreme poor households was thought to be inappropriate during lactation. In the case of four women, no food at all was allowed for consumption during the first few days after delivery, and commonly no food was given at all during the first day after delivery to allow healing of the birth passage. Moreover, women were considered to be impure during this time. They were not allowed to touch any food for preparing meals for other family members. In-depth interviews revealed that the mothers-in-law and the elders played a dominant role in deciding what foodstuffs the mother could eat. Shujata's child, who was not even a month old, lived with her family consisting of her husband, mother and an elder sister. She said:\n\"We all live together, use the same kitchen but have separate rooms. Since the child's delivery, my mother and sister prepare the food as the newly delivered women (Poaati ma) are not supposed to cook till 40 days after the delivery because their body is considered to be impure. People wouldn't like it if I cook and it's not good for me even. My mother brings me food in my room and gives me lesser than my usual intake of food so that I don't fall ill. Poaati ma (lactating mother) should eat as less as possible till her umbilical cord dries up. It does not matter if I'm still hungry and feel weak and as long as I don't have to spend money for doctor's visit. It does not harm if you follow the elder women. I have the whole life to myself to eat more. So it's fine if I eat a little less the 1st 1-2 months. I prefer weakness to illness. (A lactating mother of 23 years old)\nThese women did not seem to have any problem with this imposed dietary restriction because their economic condition did not allow them to buy animal source food like beef, chicken anyway.\nThe most common items eaten during the post-partum period included rice, smashed potato with spices, raw tea, green banana, black cumin, poppy seed, fenugreek leaves etc. These are believed to keep the stomach cool and initiate the production of breast milk. 
Whatever special food was consumed during post-partum period, it was reported to be only for a few days while the imposed food restrictions continued for a longer time i.e., 21-40 days. Opinions given on the intake of spicy food were mixed. It was given for the first few days for healing of the birth passage but later on, the same was restricted to avoid heart-burning.\nWomen reported that during their last pregnancy they felt weak with severe body-aches (11 out of 20 interviews) after delivery. It lasted for one to three weeks. None of them had gone to any health providers for seeking any service for this weakness and body-ache. These women reasoned that they did not go for the check-up because they were not even aware about the availability of the post-partum check-up. Four women reported that their husbands or mothers had gone to pharmacies to fetch vitamin tablets or saline when they had expressed about their weaknesses. This weakness was however considered to be a common part of their post-partum life. To quote one woman:\n\"It is normal to have some bodyache and fever after delivery, these would be cured automatically\" (A lactating mother of 24 years old)", "There is a dearth of information on maternal care practices of the marginalized women from ultra poor households in rural Bangladesh. This study attempted to fill in some of this knowledge gap through qualitative in-depth interviews of a group of women participating in a grant-based livelihood development programme for the ultra poor. Findings regarding practices were very similar to what is known for the poor rural women [9-11,13,16,17] or women from the urban slums [12,18] which clearly indicate the dominance of the powerful influence of culture and tradition in this area.\nIt is well documented that demographic characteristics, affluence and socio-cultural factors play a major role in maternal care practices [2]. The present study has also emphasized and reconfirmed the fact that dire poverty and social exclusion create an environment which pushes mothers away from antenatal or post-natal care services. Antenatal care is one of the \"four pillars\" of safe motherhood, as formulated by the Maternal Health and Safe Motherhood Programme, Division of Family Health of the World Health Organization (WHO) [19]. Literature suggests that home visits by community-based health workers can help to reduce neonatal mortality by ensuring identification of pregnant women, and by ensuring optimal maternal health through both antenatal and postnatal care visits to their homes [13,15,20,21]. When multiple health services are provided by one single person in a community, problems can arise. For example, it was found from the present study that since the same individual was responsible for providing contraceptive pills as well as for antenatal care, the women felt shy and sometimes scared to share their pregnancy news with the health care provider in fear of being scolded for discontinuation of the contraception. This tendency of healthcare providers, as reported elsewhere, [2] to reprimand mothers in some situations may act as a discouraging factor for the ultra poor women for using the healthcare services, and this may cause less number of pregnant women to be identified on the whole. This situation will only worsen if a woman experiences constraints on seeking care for herself by the involvement of other family members in the decision-making process. 
This involvement of other family members is very common in Bangladesh as found in other studies [2,12,13,16]. There is a need to sensitize health workers to the needs of the pregnant mothers so that the beneficiaries are not scared of them. Women should be able to share their problems easily with the healthcare providers. However, it is encouraging to note that no traditional beliefs were reinforced by the healthcare providers in the community.\nIn South Asia and even in some of the African countries, only a small proportion of women perform deliveries in healthcare facilities [14-16,21,22]. Birth preparedness in this region is also low [12,20]. Financial constraints coupled with traditional beliefs and rituals have been seen to delay and sometimes stop them altogether from taking any prior preparation for childbirth. This clearly means that there is a dire need to ensure a high level of awareness among pregnant women to address the importance of planned delivery. Beliefs and rituals also have an effect on maternal nutrition. The ultra poor households' per capita mean food consumption is low compared to national average intake [3,4,14]. Half of the ultra poor women are suffering from malnutrition [4], which often becomes worse because of food taboos during pregnancy and post-partum period. However, commonly taken drinks and food such as raw tea, black cumin, poppy seed, fenugreek leaves, neem leaves have health benefits beyond basic nutrition [23]. As the BRAC CFPR II programme is reaching households to deliver messages on health issues, other family members or neighbours can always be advised not to impose any dietary restrictions during pregnancy and most importantly in the post-partum period.\nIt was reported that the deliveries were conducted mostly by keeping the mothers in squatting position which is similar with the findings of other studies in Bangladesh [10,20]. The present study reveals the existence of various beliefs and practices regarding the expulsion of placenta which was often risky as found in other studies [11,12,16]. Adopting these malpractices along with the post-partum confinement of mothers in ultra poor households as found in this culture, could be major factors responsible for the delay or prevention of care-seeking outside the home. According to the respondents in this study, care giving is considered to be offered by a female within the households. Men become involved in maternal care provision in those situations where there is no able-bodied female to take over the role. A husband's involvement in particular situations like 'not allowing any heavy work during pregnancy' and 'going to the pharmacy to bring vitamin and supplementation', could shift some of the care burden and distribute it more equally within the family. In both the cases, a positive outcome was found. For instance, the pregnant woman was able to take rest during pregnancy and could take vitamin supplementation in post-partum period. Focus can be imposed on increasing the role of men/boys in care provision beyond the traditionally-defined role.\nThese findings are based on self-reported maternal care practices, and may therefore differ from actual practices. Although we intended to use the participants' observation to obtain and validate data, this was not possible due to resource constraints. 
However, given the consistency of findings in qualitative interviews and other quantitative studies [16,17,24], we are confident that the findings represent actual practices.", "This paper attempts to outline the potential areas for programme interventions to improve maternal morbidity and mortality in rural areas of Bangladesh towards achieving the Millennium Development Goal 5. This study shows that cultural beliefs and norms have a strong influence on maternal care practices among the ultra poor households, and override the beneficial economic effects from livelihood support intervention. Some of these practices, often compromised by various taboos and beliefs, may become harmful at times. Health behavior education in this livelihood support program can be carefully tailored to local cultural beliefs to achieve better maternal outcomes. Furthermore, a quantitative research could be carried out to know the accurate level and extent of disadvantages suffered by the ultra poor women compared to other groups.", "The authors declare that they have no competing interests.", "NC and SMA conceived this qualitative study and participated in its design. NC was involved in coordination, data collection, and data analysis. Both authors interpreted the data. NC drafted the manuscript and SMA put critical inputs in improving the draft. SMA was also involved in editing of the manuscript. Both authors read the final draft and approved it for submission.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2393/11/15/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
The development of the Quality Indicator for Rehabilitative Care (QuIRC): a measure of best practice for facilities for people with longer term mental health problems.
21362167
Despite the progress over recent decades in developing community mental health services internationally, many people still receive treatment and care in institutional settings. Those most likely to reside longest in these facilities have the most complex mental health problems and are at most risk of potential abuses of care and exploitation. This study aimed to develop an international, standardised toolkit to assess the quality of care in longer term hospital and community based mental health units, including the degree to which human rights, social inclusion and autonomy are promoted.
BACKGROUND
The domains of care included in the toolkit were identified from a systematic literature review, international expert Delphi exercise, and review of care standards in ten European countries. The draft toolkit comprised 154 questions for unit managers. Inter-rater reliability was tested in 202 units across ten countries at different stages of deinstitutionalisation and development of community mental health services. Exploratory factor analysis was used to corroborate the allocation of items to domains. Feedback from those using the toolkit was collected about its usefulness and ease of completion.
METHOD
The toolkit had excellent inter-rater reliability and few items with narrow spread of response. Unit managers found the content highly relevant and were able to complete it in around 90 minutes. Minimal refinement was required and the final version comprised 145 questions assessing seven domains of care.
RESULTS
Triangulation of qualitative and quantitative evidence directed the development of a robust and comprehensive international quality assessment toolkit for units in highly variable socioeconomic and political contexts.
CONCLUSIONS
[ "Benchmarking", "Humans", "Mental Disorders", "Mental Health", "Mental Health Services", "Standard of Care" ]
3056767
null
null
Methods
The Development of a European Measure of Best Practice for people with longer term mental health problems in institutional care (DEMoBinc) was a three year project funded by the European Commission from March 2007. It involved eleven centres across ten countries at different stages of deinstitutionalisation (Bulgaria, Czech Republic, Germany, Greece, Italy, Netherlands, Poland, Portugal, Spain, UK). Full details of the study protocol are published elsewhere [9]. In summary, the project comprised six phases: 1) identification of the domains of care for inclusion in the toolkit through triangulation of the results of i) a review of care standards in each country, ii) a systematic literature review of the components of care (and their effectiveness) in mental health institutions, and iii) a Delphi exercise with four stakeholder groups in each country (service users, carers, professionals, advocates) on the aspects of care that promote recovery for people with mental health problems living in institutions; 2) piloting and testing the inter-rater reliability of the toolkit; 3) refining the toolkit; 4) testing the association between toolkit ratings (gathered from the facility's manager) with service users' experiences of care, quality of life, autonomy and markers of recovery; 5) assessing the toolkit's ability to report on a facility's "value for money" through a health economic analysis; 6) dissemination of results. This paper reports on the first three phases. [SUBTITLE] Phase 1 [SUBSECTION] The results of the systematic review of the literature on components of institutional care have been published elsewhere [10]. Eight domains of care were identified: living conditions; interventions for schizophrenia; physical health; restraint and seclusion; staff training and support; therapeutic relationship; autonomy and service user involvement; and clinical governance. The results of the Delphi exercise have also been previously reported [11] and eleven domains of care were identified: social policy and human rights; social inclusion; self management and autonomy; therapeutic interventions; governance; staffing; staff attitudes; therapeutic environment; post-discharge care; carers; physical health care [11]. Collation of each country's care standards by HK and TT identified seven domains: living environment; mental and physical health; therapeutic relationship; service users' rights and autonomy; service user involvement; staff training and support; clinical governance. The project steering committee (PSC) reviewed these findings and agreed on nine domains for inclusion in the toolkit (Living Environment; Treatments and Interventions including restraint and seclusion; Therapeutic Environment; Self-management and Autonomy; Social Policy, Citizenship and Advocacy; Clinical Governance; Social Interface; Human Rights; and Recovery Based Practice). These were further reviewed and agreed by an international panel of experts in social care, mental health rehabilitation, recovery based practice, service user experience, disability rights, international mental health law, international mental health policy and care standard setting. Toolkit items for assessment of these domains were generated by the UK centres. The toolkit was designed to be completed by the manager of the facility since we were aware, due to the complexity of their mental health problems, that only some service users would have the capacity to complete such a measure. 
However, service users' experiences of care were assessed in a later Phase of the project to investigate the association between unit manager toolkit ratings and service user reports. Where possible, toolkit items were worded to avoid revealing which answer would lead to a higher quality rating. A mix of question formats was used (Likert scales, ordered categories, quantitative responses, binary responses, lists of yes/no's summed to create quantitative responses, and vignettes that asked the respondent to generate answers which were "checklisted" by the researcher and summed to give a quantitative response). The varied format of questions aimed to increase the accuracy of responses by avoiding a response set and make the toolkit more interesting to complete. The draft toolkit was reviewed by the PSC and the international expert panel and further questions were added if there was evidence for their inclusion from Phase 1 or if they appeared highly relevant across countries. The toolkit was translated in each country and back translated by someone independent of the project. Back translations were reviewed at the lead centre in the UK and amendments agreed with each country. The toolkit was piloted in each country in one or two facilities. A training session was attended by all researchers involved in data collection to ensure clarity of understanding of all items and their scoring. [SUBTITLE] Phase 2 [SUBSECTION] The draft toolkit comprised 154 questions (consisting of 280 items) of which 29 were descriptive and did not contribute to scoring. The remaining questions were allocated to one or more of the nine domains by the UK research teams. Since some questions were combined for the purposes of scoring, a total of 96 question scores contributed to the rating of domains. Of these, 27 assessed only one domain, 32 assessed two domains, 18 assessed three, 17 assessed four and two assessed five. Since the toolkit had a variety of response structures, questions were scored within a similar range to ensure similar weighting of items within each domain. For example, Likert scale responses were transformed from a scale of 1 to 5 to -2 to +2. Each country identified 20 facilities (units) in which to carry out inter-rater reliability testing of the draft toolkit that: provided for adults with longer term mental health problems (length of stay at least six months); had at least six patients/residents; had communal facilities; had staff on site, ideally 24 hours per day. Units that only provided for specialist groups (e.g. learning disability or dementia) were excluded. Hospital and community based units were recruited to give a range in size and geographical spread within countries. Sampling was not random; units were identified from registration lists in each country and/or were known to the lead investigator in each country. Face to face interviews to complete the draft toolkit were carried out by the researchers with the manager of each unit. Inter-rater reliability was tested in one of three ways: a second researcher was also present at the interview and completed ratings simultaneously, or they repeated the interview with the manager within two weeks, or they rated the toolkit from a tape recording of the first interview. Researchers were not allowed to confer on ratings of the same unit. Feedback from interviewees and researchers was collected on the relevance and usefulness of the toolkit questions, the ease of completion and the time taken to complete.
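The scoring scheme described in Phase 2 above can be illustrated with a short sketch. This is not the project's own scoring code (the original analysis was run in SPSS); the item names and domain map below are hypothetical, and only the rescaling of Likert responses from 1-5 to -2..+2 and the summing of item scores into domain scores are taken from the text.

```python
# Minimal sketch of the scoring approach described above: Likert responses
# (1-5) are recentred to -2..+2 so that differently formatted questions
# contribute on a comparable scale, then item scores are summed per domain.
# Item names and the domain map are hypothetical illustrations.

def rescale_likert(response: int) -> int:
    """Map a 1-5 Likert response onto a -2..+2 scale."""
    return response - 3

# Hypothetical allocation of question scores to domains (a question may
# contribute to more than one domain, as in the draft toolkit).
domain_map = {
    "Living Environment": ["q1", "q2"],
    "Therapeutic Environment": ["q2", "q3"],
}

raw_responses = {"q1": 4, "q2": 2, "q3": 5}          # Likert 1-5
scored = {item: rescale_likert(v) for item, v in raw_responses.items()}

domain_scores = {
    domain: sum(scored[item] for item in items)
    for domain, items in domain_map.items()
}
print(domain_scores)   # {'Living Environment': 0, 'Therapeutic Environment': 1}
```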
[SUBTITLE] Data management and analysis [SUBSECTION] A common SPSS database was developed in the lead centre and distributed to all centres. A test entry of pilot data in each centre clarified any coding queries. Double data entry was completed for 10% of the toolkit data using a separate database and the study statistician carried out data validation on the two databases for each centre. The maximum error rate was set at 5%. Any centre that had an error rate above this was required to complete double data entry for all their data. Inter-rater reliability of toolkit items was assessed using the Kappa coefficient for categorical data (weighted Kappa where there were more than two categories) and the intraclass correlation coefficient (ICC) for normally distributed, continuous data. Paired ratings for 20 institutions in 10 countries (200 institutions in all) enabled a 95% confidence interval for the estimate of ICC of ± 0.15 [12]. Items whose Kappa was below 0.4 or ICC/weighted Kappa was below 0.7 were dropped. Items that had a narrow spread (categorical items with more than 90% of the response or Likert scale items where >80% of responses fell to either side of neutral) were also dropped due to their inability to discern differences in quality between units.
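A minimal sketch of this item-level reliability screen is given below, assuming paired ratings are available as simple arrays. Cohen's kappa is taken from scikit-learn; the ICC is computed here as a two-way random-effects, single-rater, absolute-agreement coefficient (one of several ICC variants, chosen only for illustration since the text does not specify which form was used). The example ratings are invented; the 0.4 and 0.7 cut-offs are those stated above, and the original analysis was carried out in SPSS.

```python
# Sketch of the item-level reliability screen: Cohen's kappa for a
# categorical item and ICC(2,1) for a continuous item, with items flagged
# for removal when kappa < 0.4 or ICC < 0.7. All ratings are invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_2_1(x: np.ndarray) -> float:
    """Two-way random-effects ICC(2,1) for an (n subjects x k raters) matrix."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects mean square
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters mean square
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))                              # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Paired ratings of one binary item and one continuous item across units.
rater1_cat = [1, 0, 1, 1, 0, 1, 0, 0]
rater2_cat = [1, 0, 1, 0, 0, 1, 0, 1]
kappa = cohen_kappa_score(rater1_cat, rater2_cat)

cont = np.array([[12, 11], [8, 9], [15, 15], [6, 7], [10, 10], [13, 12]], dtype=float)
icc = icc_2_1(cont)

for name, value, cutoff in [("binary item", kappa, 0.4), ("continuous item", icc, 0.7)]:
    status = "keep" if value >= cutoff else "drop"
    print(f"{name}: {value:.2f} -> {status}")
```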
The fact that many questions contributed to the rating of more than one domain meant domains were likely to be highly correlated with each other rather than assessing discrete aspects of care. An exploratory factor analysis (EFA) was therefore indicated to explore the latent factor structure of the 96 scored questions, reduce the overlap between domain content and ensure common variation of items within a domain. However, using the five subjects per item rule of thumb for EFA, a sample size of at least 500 units would have been required. An iterative EFA was therefore carried out which could take account of the available sample size. The first iteration of the EFA used a Principal Components Analysis of each domain, extracting factors indicated by Velicer's MAP [13]. No rotation was necessary as there was no intention to interpret the factors extracted. Having completed this for each domain, the unrotated factor loadings were examined. A factor loading greater than 0.3 was taken to indicate that the item was correlated with other items in the domain. Since many items were initially allocated to more than one domain, our first approach to reducing the overlap between domains was to identify items which did not load onto their allocated domain. Such items were removed from that domain as long as they loaded onto another domain. Items which did not load onto any domain in the first iteration could potentially load onto their allocated domains once other items had been removed. The procedure was therefore repeated and an assessment of factor loadings from this second iteration was conducted as before and items that did not load were removed. The third and final iteration was carried out as before but this time all items with a factor loading less than 0.3 were removed even if this meant that they were not retained in any domain. Based on this third iteration a final allocation of items to domains was produced. The reliability of these domains was assessed using two measures: 1) the KMO measure of sampling adequacy and 2) Cronbach's Alpha, a measure of internal consistency. A value of greater than 0.7 is desirable for both.
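The iterative pruning logic can be sketched as follows. This is a deliberate simplification, not the published analysis: a single unrotated principal component is extracted per domain rather than the number of components indicated by Velicer's MAP, the KMO statistic is omitted, and the units-by-items data are simulated, so the sketch only illustrates the 0.3 loading threshold and the Cronbach's alpha calculation.

```python
# Sketch of iterative loading-based item pruning within one domain,
# followed by Cronbach's alpha on the surviving items. Data are simulated.
import numpy as np

def first_component_loadings(data: np.ndarray) -> np.ndarray:
    """Loadings of items on the first unrotated principal component."""
    corr = np.corrcoef(data, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues ascending
    return eigvecs[:, -1] * np.sqrt(eigvals[-1])     # largest component

def cronbach_alpha(data: np.ndarray) -> float:
    """Cronbach's alpha for a units x items matrix."""
    k = data.shape[1]
    item_vars = data.var(axis=0, ddof=1)
    total_var = data.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # 200 units
domain = latent + 0.5 * rng.normal(size=(200, 5))        # five correlated items
domain = np.hstack([domain, rng.normal(size=(200, 1))])  # one unrelated item

items = np.arange(domain.shape[1])
for _ in range(3):                                       # three iterations, as above
    loadings = first_component_loadings(domain[:, items])
    keep = np.abs(loadings) >= 0.3
    if keep.all():
        break
    items = items[keep]                                  # drop items that do not load

print("retained items:", items)
print("Cronbach's alpha:", round(cronbach_alpha(domain[:, items]), 2))
```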
[SUBTITLE] Phase 3 [SUBSECTION] The toolkit was refined in light of a) the feedback from interviewers and unit managers, b) the results of the inter-rater reliability testing and c) the results of the EFA. Amendments were discussed and agreed by the PSC and international expert panel.
null
null
null
null
[ "Background", "Phase 1", "Phase 2", "Data management and analysis", "Phase 3", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Worldwide, countries are at different stages of deinstitutionalisation [1] and in Europe, despite the investment in community services, many individuals with mental health problems still live in asylums or other types of institutions [2]. The majority have longer term conditions [3] with complications such as treatment resistance [4], cognitive impairment and pervasive negative symptoms [5], poor function [6], substance misuse and challenging behaviours [7]. They are at risk of abuse of their human rights since their capacity to make informed choices about their care may be impaired. The European Commission's Green Paper [8] on improving the mental health of the population highlighted the importance of promotion of social inclusion of the mentally unwell and protection of their rights and dignity. This paper reports on the development of an international toolkit to assess the quality of care delivered in hospital and community based mental health units.", "The results of the systematic review of the literature on components of institutional care have been published elsewhere [10]. Eight domains of care were identified: living conditions; interventions for schizophrenia; physical health; restraint and seclusion; staff training and support; therapeutic relationship; autonomy and service user involvement; and clinical governance. The results of the Delphi exercise have also been previously reported [11] and eleven domains of care were identified: social policy and human rights; social inclusion; self management and autonomy; therapeutic interventions; governance; staffing; staff attitudes; therapeutic environment; post-discharge care; carers; physical health care [11]. Collation of each country's care standards by HK and TT identified seven domains: living environment; mental and physical health; therapeutic relationship; service users' rights and autonomy; service user involvement; staff training and support; clinical governance. The project steering committee (PSC) reviewed these findings and agreed on nine domains for inclusion in the toolkit (Living Environment; Treatments and Interventions including restraint and seclusion; Therapeutic Environment; Self-management and Autonomy; Social Policy, Citizenship and Advocacy; Clinical Governance; Social Interface; Human Rights; and Recovery Based Practice). These were further reviewed and agreed by an international panel of experts in social care, mental health rehabilitation, recovery based practice, service user experience, disability rights, international mental health law, international mental health policy and care standard setting.\nToolkit items for assessment of these domains were generated by the UK centres. The toolkit was designed to be completed by the manager of the facility since we were aware, due to the complexity of their mental health problems, that only some service users would have the capacity to complete such a measure. However, service users' experiences of care were assessed in a later Phase of the project to investigate the association between unit manager toolkit ratings and service user reports. Where possible, toolkit items were worded to avoid revealing which answer would lead to a higher quality rating. 
A mix of question formats was used (Likert scales, ordered categories, quantitative responses, binary responses, lists of yes/no's summed to create quantitative responses, and vignettes that asked the respondent to generate answers which were \"checklisted\" by the researcher and summed to give a quantitative response). The varied format of questions aimed to increase the accuracy of responses by avoiding a response set and make the toolkit more interesting to complete. The draft toolkit was reviewed by the PSC and the international expert panel and further questions were added if there was evidence for their inclusion from Phase 1 or if they appeared highly relevant across countries.\nThe toolkit was translated in each country and back translated by someone independent of the project. Back translations were reviewed at the lead centre in the UK and amendments agreed with each country. The toolkit was piloted in each country in one or two facilities. A training session was attended by all researchers involved in data collection to ensure clarity of understanding of all items and their scoring.", "The draft toolkit comprised 154 questions (consisting of 280 items) of which 29 were descriptive and did not contribute to scoring. The remaining questions were allocated to one or more of the nine domains by the UK research teams. Since some questions were combined for the purposes of scoring, a total of 96 question scores contributed to the rating of domains. Of these, 27 assessed only one domain, 32 assessed two domains, 18 assessed three, 17 assessed four and two assessed five. Since the toolkit had a variety of response structures, questions were scored within a similar range to ensure similar weighting of items within each domain. For example, Likert scale responses were transformed from a scale of 1 to 5 to -2 to +2.\nEach country identified 20 facilities (units) in which to carry out inter-rater reliability testing of the draft toolkit that: provided for adults with longer term mental health problems (length of stay at least six months); had at least six patients/residents; had communal facilities; had staff on site, ideally 24 hours per day. Units that only provided for specialist groups (e.g. learning disability or dementia) were excluded. Hospital and community based units were recruited to give a range in size and geographical spread within countries. Sampling was not random; units were identified from registration lists in each country and/or were known to the lead investigator in each country.\nFace to face interviews to complete the draft toolkit were carried out by the researchers with the manager of each unit. Inter-rater reliability was tested in one of three ways; a second researcher was also present at the interview and completed ratings simultaneously, or they repeated the interview with the manager within two weeks, or they rated the toolkit from a tape recording of the first interview. Researchers were not allowed to confer on ratings of the same unit. Feedback from interviewees and researchers was collected on the relevance and usefulness of the toolkit questions, the ease of completion and the time taken to complete.\n[SUBTITLE] Data management and analysis [SUBSECTION] A common SPSS database was developed in the lead centre and distributed to all centres. A test entry of pilot data in each centre clarified any coding queries. 
Double data entry was completed for 10% of the toolkit data using a separate database and the study statistician carried out data validation on the two databases for each centre. The maximum error rate was set at 5%. Any centre that had an error rate above this was required to complete double data entry for all their data.\nInter-rater reliability of toolkit items was assessed using the Kappa coefficient for categorical data (weighted Kappa where there were more than two categories) and the intraclass correlation coefficient (ICC) for normally distributed, continuous data. Paired ratings for 20 institutions in 10 countries (200 institutions in all) enabled a 95% confidence interval for the estimate of ICC of ± 0.15 [12]. Items whose Kappa was below 0.4 or ICC/weighted Kappa was below 0.7 were dropped. Items that had a narrow spread (categorical items with more than 90% of the response or Likert scale items where >80% of responses fell to either side of neutral) were also dropped due to their inability to discern differences in quality between units.\nThe fact that many questions contributed to the rating of more than one domain meant domains were likely to be highly correlated with each other rather than assessing discrete aspects of care. An exploratory factor analysis (EFA) was therefore indicated to explore the latent factor structure of the 96 scored questions, reduce the overlap between domain content and ensure common variation of items within a domain. However, using the five subjects per item rule of thumb for EFA, a sample size of at least 500 units would have been required. An iterative EFA was therefore carried out which could take account of the available sample size.\nThe first iteration of the EFA used a Principal Components Analysis of each domain, extracting factors indicated by Velicers MAP [13]. No rotation was necessary as there was no intention to interpret the factors extracted. Having completed this for each domain, the unrotated factor loadings were examined. A factor loading greater than 0.3 was taken to indicate that the item was correlated with other items in the domain. Since many items were initially allocated to more than one domain, our first approach to reducing the overlap between domains was to identify items which did not load onto their allocated domain. Such items were removed from that domain as long as they loaded onto another domain. Items which did not load onto any domain in the first iteration could potentially load onto their allocated domains once other items had been removed. The procedure was therefore repeated and an assessment of factor loadings from this second iteration was conducted as before and items that did not load were removed. The third and final iteration was carried out as before but this time all items with a factor loading less than 0.3 were removed even if this meant that they were not retained in any domain. Based on this third iteration a final allocation of items to domains was produced. The reliability of these domains was assessed using two measures: 1) the KMO measure of sampling adequacy and 2) Cronbach's Alpha, a measure of internal consistency. A value of greater than 0.7 is desirable for both.\nA common SPSS database was developed in the lead centre and distributed to all centres. A test entry of pilot data in each centre clarified any coding queries. 
Inter-rater reliability of toolkit items was assessed using the Kappa coefficient for categorical data (weighted Kappa where there were more than two categories) and the intraclass correlation coefficient (ICC) for normally distributed, continuous data. Paired ratings for 20 institutions in each of 10 countries (200 institutions in all) gave a 95% confidence interval of ± 0.15 for the estimate of the ICC [12]. Items whose Kappa was below 0.4, or whose ICC or weighted Kappa was below 0.7, were dropped. Items with a narrow spread of responses (categorical items where one response accounted for more than 90% of units, or Likert scale items where more than 80% of responses fell on one side of neutral) were also dropped because of their inability to discern differences in quality between units.
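As an illustration of these retention rules, the sketch below computes an unweighted Cohen's kappa for one categorical item from paired ratings and applies the 0.4 threshold, together with the narrow-spread check for categorical items. The paired ratings are invented; in the project itself, weighted kappa and the ICC were obtained from standard statistical routines rather than coded by hand, and the ICC calculation is not reproduced here.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        # Unweighted Cohen's kappa for two equal-length lists of categorical ratings.
        n = len(rater_a)
        observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
        return (observed - expected) / (1 - expected)

    # Hypothetical paired ratings of one yes/no item across 20 units.
    rater_1 = ["yes"] * 12 + ["no"] * 8
    rater_2 = ["yes"] * 11 + ["no"] * 8 + ["yes"]
    kappa = cohens_kappa(rater_1, rater_2)
    print(f"kappa = {kappa:.2f}; item {'retained' if kappa >= 0.4 else 'dropped'}")

    # Narrow-spread rule for a categorical item: drop if one response covers more than 90% of units.
    responses = ["yes"] * 19 + ["no"]
    if max(Counter(responses).values()) / len(responses) > 0.9:
        print("Item dropped: more than 90% of units gave the same response.")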
Because many questions contributed to the rating of more than one domain, the domains were likely to be highly correlated with each other rather than assessing discrete aspects of care. An exploratory factor analysis (EFA) was therefore indicated to explore the latent factor structure of the 96 scored questions, reduce the overlap between domain content and ensure common variation of items within a domain. However, using the rule of thumb of five subjects per item for EFA, a sample size of at least 500 units would have been required. An iterative EFA that could take account of the available sample size was therefore carried out.

The first iteration of the EFA used a Principal Components Analysis of each domain, extracting the number of factors indicated by Velicer's MAP test [13]. No rotation was applied as there was no intention to interpret the factors extracted. Having completed this for each domain, the unrotated factor loadings were examined; a factor loading greater than 0.3 was taken to indicate that the item was correlated with the other items in the domain. Since many items were initially allocated to more than one domain, the first approach to reducing the overlap between domains was to identify items that did not load onto their allocated domain. Such items were removed from that domain as long as they loaded onto another domain. Items that did not load onto any domain in the first iteration could potentially load onto their allocated domains once other items had been removed, so the procedure was repeated: factor loadings from this second iteration were assessed as before and items that did not load were removed. In the third and final iteration, all items with a factor loading of less than 0.3 were removed, even if this meant that they were not retained in any domain. Based on this third iteration, a final allocation of items to domains was produced. The reliability of these domains was assessed using two measures: 1) the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and 2) Cronbach's Alpha, a measure of internal consistency. A value greater than 0.7 is desirable for both.
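The pruning loop below is a simplified sketch of this iterative procedure on simulated data: it extracts a single principal component per domain (the project used Velicer's MAP test to decide how many factors to extract, which is not reproduced here), drops items whose absolute loading falls below 0.3, and reports Cronbach's alpha for the surviving items; the KMO statistic, which comes from dedicated factor-analysis routines, is also omitted. Variable names and the simulated responses are illustrative only.

    import numpy as np

    def cronbach_alpha(items):
        # Cronbach's alpha for an (n_units x n_items) array of item scores.
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))

    def first_component_loadings(items):
        # Loadings of each item on the first principal component of the item correlation matrix.
        eigvals, eigvecs = np.linalg.eigh(np.corrcoef(items, rowvar=False))
        return eigvecs[:, -1] * np.sqrt(eigvals[-1])  # eigh returns eigenvalues in ascending order

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))                        # 200 units, one latent "quality" factor
    domain = latent @ rng.uniform(0.5, 1.0, size=(1, 6)) + rng.normal(scale=0.7, size=(200, 6))
    domain = np.hstack([domain, rng.normal(size=(200, 1))])   # one unrelated item that should be pruned

    items = list(range(domain.shape[1]))
    for _ in range(3):  # mirrors the three iterations described above
        loadings = np.abs(first_component_loadings(domain[:, items]))
        items = [i for i, loading in zip(items, loadings) if loading >= 0.3]

    print("retained items:", items)
    print("Cronbach's alpha for retained items:", round(cronbach_alpha(domain[:, items]), 2))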
Phase 3

The toolkit was refined in light of a) the feedback from interviewers and unit managers, b) the results of the inter-rater reliability testing and c) the results of the EFA. Amendments were discussed and agreed by the PSC and the international expert panel.

Results

In total, 202 units were recruited across the ten countries. No centre had a data entry error rate over 5%, so no complete double data entry was required. Of the 202 units, 93 (46%) were in the inner city, 73 (36%) in the suburbs and 37 (18%) in rural areas. The majority (120, 59%) were community based, 47 (23%) were hospital wards and 35 (17%) were units within the hospital grounds. Their size ranged from five to 320 beds (mean 30, median 19); 162 (80%) had no maximum length of stay and, among those that did, the mean was 1.8 years (range 0.5 to 5, median 2). Thirty-three (16%) units were for men only and 18 (9%) for women only. Table 1 shows the characteristics of units recruited in each country. Independent data collection for inter-rater reliability testing of the toolkit was carried out in only one case by a second rater repeating the interview.

Table 1. Characteristics of included units and inter-rater reliability testing method. * In only one unit (in Bulgaria) was toolkit inter-rater reliability assessed by two researchers interviewing the unit manager separately.

Sixteen items had a narrow range of response (Figure 1).

Figure 1. Reasons for dropping toolkit items.

The results of the inter-rater reliability testing are shown in Additional file 1. Only one item had poor inter-rater reliability (How many CBT appointments are usually offered?) but it was retained with an amended response structure.

Of the 202 managers interviewed, 189 (94%) thought the toolkit questions were relevant or very relevant to their unit and 178 (88%) thought the results would be useful or very useful in auditing the quality of their unit. Of the 202 interviews carried out, the researchers reported that 143 (71%) took between one and two hours, 43 (21%) took less than an hour and 15 (7%) took over two hours. There were problems in accessing information in 37 (18%) interviews.

The toolkit was refined through discussion with the PSC and international expert panel in light of these results. The 16 items with a narrow range of response were dropped and nine others were dropped for the reasons shown in Figure 1. Eight items were merged with another item, three items were amended from single-answer to categorical response options and one item was added (total number of staff employed by or visiting the unit). The final toolkit comprised 145 questions.

In the initial allocation of scored items to domains, 25 were allocated to Living Environment, 42 to Therapeutic Environment, 34 to Treatments and Interventions, 32 to Self-management and Autonomy, eight to Social Policy, Citizenship and Advocacy, eight to Clinical Governance, 19 to Social Interface, 30 to Human Rights and 25 to Recovery Based Practice. The following pairs of domains shared more than 50% of their items: all Social Policy, Citizenship and Advocacy questions were also in Human Rights; 72% of Recovery Based Practice questions were in Therapeutic Environment; 64% of Recovery Based Practice questions were in Self-management and Autonomy; 60% of Human Rights questions were in Self-management and Autonomy; 53% of Social Interface questions were in Treatments and Interventions; and 50% of Clinical Governance questions were in Human Rights and 50% in Therapeutic Environment.
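The shared-item percentages above follow directly from the item-to-domain allocation. The sketch below shows the calculation on a toy allocation; the item identifiers and domain memberships are invented and do not correspond to real QuIRC items.

    from itertools import permutations

    # Hypothetical allocation of scored items to domains (an item may sit in several domains).
    domains = {
        "Human Rights":                 {"q1", "q2", "q3", "q4"},
        "Self-management and Autonomy": {"q2", "q3", "q5", "q6", "q7"},
        "Recovery Based Practice":      {"q3", "q6", "q8"},
    }

    for a, b in permutations(domains, 2):
        shared = len(domains[a] & domains[b]) / len(domains[a])
        print(f"{shared:.0%} of {a} items are also in {b}")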
After the first iteration of the EFA, 16 items were removed from domains they did not load onto, provided they loaded onto another domain. After the second iteration, one item (is there a private room for patients/residents to meet with their visitors?) which had not loaded onto any domain in the first iteration now loaded onto Living Environment and was retained. One question (unit has a policy for dealing with a report from a patient/resident of abuse, aggression or bullying from a member of staff?) which had loaded onto Clinical Governance and Human Rights after the first iteration no longer loaded onto Clinical Governance and was retained only in Human Rights. One item (unit provides the same activities for all residents?) which had loaded onto Therapeutic Environment after the first iteration no longer loaded after the second iteration. Eight items which did not load onto any domain after the first and second iterations were dropped (Figure 2) and the third iteration of the EFA was run. This indicated that all remaining items loaded onto at least one domain with a factor loading greater than 0.3.

Figure 2. Items dropped after Exploratory Factor Analysis.

The KMO measures of sampling adequacy of the nine domains were low for Clinical Governance and for Social Policy, Citizenship and Advocacy (0.52 and 0.61 respectively). Clinical Governance comprised only three items and Social Policy, Citizenship and Advocacy comprised six, and all of these items also contributed to other domains. The PSC therefore agreed that these two domains could be dropped without the loss of any toolkit content. The KMO statistics for the remaining seven domains ranged from 0.67 to 0.80, with only one (Social Interface) falling just below 0.7. The number of items per domain and the KMO and Cronbach's Alpha statistics are shown in Table 2; these demonstrate that all seven domains had good internal consistency (again, only Social Interface fell just below the 0.7 threshold). The final allocation comprised 88 questions assigned to one or more of the seven domains (38 were allocated to one domain, 24 to two, 20 to three, five to four and one to five). The EFA process reduced the overlap of items between domains (57% of Recovery Based Practice items in Self-management and Autonomy compared with 64% originally; 52% of Human Rights items in Self-management and Autonomy compared with 60% originally; 71% of Recovery Based Practice items in Therapeutic Environment compared with 72% originally; 60% of Social Interface items in Treatments and Interventions compared with 53% originally).

Table 2. Sampling adequacy and internal consistency of domains after the third iteration of the exploratory factor analysis.

Discussion

The project facilitated the development of the first international quality assessment toolkit for longer term hospital and community based mental health facilities, the Quality Indicator for Rehabilitative Care (QuIRC). The toolkit has excellent inter-rater reliability and, since its items were derived from the results of a systematic literature review, Delphi exercises with stakeholder groups in a diverse range of countries, and a review of care standards in each country, it is able to deliver a comprehensive assessment of units in countries at different stages of deinstitutionalisation.

The exploratory factor analysis provided a data-driven corroboration and refinement of the original allocation of items to domains and reduced the overlap of content between domains. Although overlap of items across the sub-scores of an assessment tool is unusual, we feel it is acceptable for specific aspects of care to contribute to the quality rating of more than one domain, since this reflects the multiple effects of the complex interventions delivered in facilities for people with more complex mental health problems.
Three domains shared the greatest content with other domains (Social Interface, Human Rights and Recovery Based Practice), which highlights their "cross-cutting" nature.

The total QuIRC score provides a measure of overall quality of care, and the domain scores indicate where specific improvements may be required. A web based version of the QuIRC is available in ten languages that compares a unit's domain scores with those of similar units in the same country (http://www.quirc.eu). This allows its use as a local, regional and national quality assessment tool, and it has been incorporated into the UK's peer accreditation process for inpatient mental health rehabilitation units. It is also being used in a national programme of research of these units in England.

Conclusions

Triangulation of qualitative and quantitative evidence directed the development of a robust and comprehensive international quality assessment toolkit for facilities providing care for people with longer term mental health problems in highly variable socioeconomic and political contexts. The QuIRC represents the first measure of this type and has potential for use as a research tool and as an international quality benchmark.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

HK, MK, CW and SW conceived and designed the study. SW carried out the data analysis. HK drafted the article, which was reviewed and revised by all authors. All authors agreed the final version for publication.

Pre-publication history

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-244X/11/35/prepub
It is also being used in a national programme of research of these units in England.", "Triangulation of qualitative and quantitative evidence directed the development of a robust and comprehensive international quality assessment toolkit for facilities providing care for people with longer term mental health problems in highly variable socioeconomic and political contexts. The QuIRC represents the first measure of this type and has potential for use as a research tool and as an international quality benchmark.", "The authors declare that they have no competing interests.", "HK, MK, CW and SW conceived and designed the study. SW carried out the data analysis. HK drafted the article which was reviewed and revised by all authors. All authors agreed the final version for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-244X/11/35/prepub\n", "Results of inter-rater reliability testing.\nClick here for file" ]
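The reliability analysis described above combines agreement statistics for the paired ratings (Cohen's kappa, weighted kappa for items with more than two ordered categories, the intraclass correlation for continuous items) with per-domain internal-consistency checks (Cronbach's alpha, KMO). A minimal Python sketch of the kappa and alpha parts is given below; the arrays and item counts are toy values and only the 0.4 (kappa) and 0.7 (alpha) thresholds come from the study itself.

import numpy as np
from sklearn.metrics import cohen_kappa_score

def cronbach_alpha(items):
    # `items`: (n_units, n_items) array of scored responses for one domain.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars.sum() / total_var)

# Paired ratings of one ordinal toolkit item by the two interviewers (toy values).
rater1 = [2, 3, 1, 2, 4, 0, 3, 2]
rater2 = [2, 3, 2, 2, 4, 0, 3, 1]
plain_kappa = cohen_kappa_score(rater1, rater2)
weighted_kappa = cohen_kappa_score(rater1, rater2, weights="linear")
keep_item = weighted_kappa >= 0.4   # study rule: drop items with kappa below 0.4

# Internal consistency of a domain of four scored items rated for five units (toy values).
domain_items = np.array([[3, 4, 3, 4],
                         [2, 2, 3, 2],
                         [5, 4, 4, 5],
                         [1, 2, 1, 1],
                         [4, 4, 5, 4]])
alpha = cronbach_alpha(domain_items)  # values above 0.7 were considered acceptable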
[ null, "methods", null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Changes in occupational class differences in leisure-time physical activity: a follow-up study.
21362168
Physical activity is known to have health benefits across population groups. However, less is known about changes over time in socioeconomic differences in leisure-time physical activity and the reasons for the changes. We hypothesised that class differences in leisure-time physical activity would widen over time due to declining physical activity among the lower occupational classes. We examined whether occupational class differences in leisure-time physical activity change over time in a cohort of Finnish middle-aged women and men. We also examined whether a set of selected covariates could account for the observed changes.
BACKGROUND
The data were derived from the Helsinki Health Study cohort mail surveys; the respondents were 40-60-year-old employees of the City of Helsinki at baseline in 2000-2002 (n = 8960, response rate 67%). Follow-up questionnaires were sent to the baseline respondents in 2007 (n = 7332, response rate 83%). The outcome measure was leisure-time physical activity, including commuting, converted to metabolic equivalent tasks (MET). Socioeconomic position was measured by occupational class (professionals, semi-professionals, routine non-manual employees and manual workers). The covariates included baseline age, marital status, limiting long-lasting illness, common mental disorders, job strain, physical and mental health functioning, smoking, body mass index, and employment status at follow-up. Firstly, the analyses focused on changes over time in the age-adjusted prevalence of leisure-time physical activity. Secondly, logistic regression analysis was used to adjust the changes in occupational class differences in leisure-time physical activity for the covariates.
METHODS
At baseline there were no occupational class differences in leisure-time physical activity. Over the follow-up leisure-time physical activity increased among those in the higher classes and decreased among manual workers, suggesting the emergence of occupational class differences at follow-up. Women in routine non-manual and manual classes and men in the manual class tended to be more often physically inactive in their leisure-time (<14 MET hours/week) and to be less often active (>30 MET hours/week) than those in the top two classes. Adjustment for the covariates did not substantially affect the observed occupational class differences in leisure-time physical activity at follow-up.
RESULTS
Occupational class differences in leisure-time physical activity emerged over the follow-up period among both women and men. Leisure-time physical activity needs to be promoted among ageing employees, especially among manual workers.
CONCLUSIONS
[ "Adult", "Cohort Studies", "Data Collection", "Energy Metabolism", "Exercise", "Female", "Finland", "Follow-Up Studies", "Health Behavior", "Humans", "Leisure Activities", "Male", "Middle Aged", "Motor Activity", "Occupations", "Social Class", "Surveys and Questionnaires", "Transportation" ]
3058076
null
null
Methods
[SUBTITLE] Data [SUBSECTION] The data were derived from the Helsinki Health Study cohort mail questionnaire surveys administered to employees of the City of Helsinki, Finland. At baseline in 2000-2002, the data covered 8 960 employees aged 40-60-years (response rate of 67%, 80% of the respondents were women) [21]. The follow-up survey was conducted in 2007 among the respondents to the baseline survey. At follow-up, there were 7 332 respondents, which gives a response rate of 83%. The baseline data is broadly representative of the target population and the non-response is unlikely to bias the relationships between socioeconomic position and the health-related variables [22]. Our control analyses showed that non-respondents to the follow-up were only slightly more physically inactive (<14 MET hours/week) during leisure-time at baseline (among women 24% 95% CI 22.9-25.1 vs. 29% 95% CI 26.1-31.3). However, occupational class differences in leisure-time physical activity among the respondents and the non-respondents, were broadly similar. The analyses for this study were carried out among those with data on occupational class (missing data for 118), leisure-time physical activity (missing data for 135) and all the covariates (missing data for 123). After exclusions for missing data, the final data consisted of 5 652 women and 1 279 men. The Helsinki Health Study protocol has been approved by ethics committees of the Department of Public Health, University of Helsinki, and the City of Helsinki health authorities, Finland. The data were derived from the Helsinki Health Study cohort mail questionnaire surveys administered to employees of the City of Helsinki, Finland. At baseline in 2000-2002, the data covered 8 960 employees aged 40-60-years (response rate of 67%, 80% of the respondents were women) [21]. The follow-up survey was conducted in 2007 among the respondents to the baseline survey. At follow-up, there were 7 332 respondents, which gives a response rate of 83%. The baseline data is broadly representative of the target population and the non-response is unlikely to bias the relationships between socioeconomic position and the health-related variables [22]. Our control analyses showed that non-respondents to the follow-up were only slightly more physically inactive (<14 MET hours/week) during leisure-time at baseline (among women 24% 95% CI 22.9-25.1 vs. 29% 95% CI 26.1-31.3). However, occupational class differences in leisure-time physical activity among the respondents and the non-respondents, were broadly similar. The analyses for this study were carried out among those with data on occupational class (missing data for 118), leisure-time physical activity (missing data for 135) and all the covariates (missing data for 123). After exclusions for missing data, the final data consisted of 5 652 women and 1 279 men. The Helsinki Health Study protocol has been approved by ethics committees of the Department of Public Health, University of Helsinki, and the City of Helsinki health authorities, Finland. [SUBTITLE] Leisure-time physical activity [SUBSECTION] The respondents were asked to estimate the average weekly hours of physical activity during their leisure time (including commuting) within the previous year [14]. There were four levels of intensity: walking, brisk walking, jogging and running, or their equivalent activities. 
The response alternatives for each level were: not during the past twelve months, in total under half an hour per week, between half and one hour per week, between two and three hours per week, and four hours or more per week. The amount of leisure-time physical activity was assessed by approximate metabolic equivalent tasks (MET) taking into account the intensity, duration and frequency [23] and calculated by multiplying the weekly time used by the estimated MET value of each activity level [14,24]. Moderate-intensity leisure-time physical activity for about 30 minutes at least five times a week is recommended by the national guidelines in Finland as well as other countries [25-27]. Approximately 14 MET hours per week correspond to the energy expenditure (1000 kcal, e.g. brisk walking for 2.5 hours/week equals 15 MET hours) needed for reducing health risks associated with physical inactivity [13,28-30]. An optimal 30 MET hours fulfill the recommendations [27,31] and requirements for healthy weight maintenance [32]. We use the term physically inactive for under 14 MET hours per week, and physically active for over 30 MET hours. The respondents were asked to estimate the average weekly hours of physical activity during their leisure time (including commuting) within the previous year [14]. There were four levels of intensity: walking, brisk walking, jogging and running, or their equivalent activities. The response alternatives for each level were: not during the past twelve months, in total under half an hour per week, between half and one hour per week, between two and three hours per week, and four hours or more per week. The amount of leisure-time physical activity was assessed by approximate metabolic equivalent tasks (MET) taking into account the intensity, duration and frequency [23] and calculated by multiplying the weekly time used by the estimated MET value of each activity level [14,24]. Moderate-intensity leisure-time physical activity for about 30 minutes at least five times a week is recommended by the national guidelines in Finland as well as other countries [25-27]. Approximately 14 MET hours per week correspond to the energy expenditure (1000 kcal, e.g. brisk walking for 2.5 hours/week equals 15 MET hours) needed for reducing health risks associated with physical inactivity [13,28-30]. An optimal 30 MET hours fulfill the recommendations [27,31] and requirements for healthy weight maintenance [32]. We use the term physically inactive for under 14 MET hours per week, and physically active for over 30 MET hours. [SUBTITLE] Socioeconomic position [SUBSECTION] Occupational class was used as an indicator of socioeconomic position. Information on occupational class was derived from the City of Helsinki personnel registers for those who gave written permission for data linkage (77%) [21]. For the rest, occupational class was obtained from the questionnaires. The respondents were classified into four hierarchical occupational classes: professionals including managers, semi-professionals, routine non-manual employees and manual workers. Among women, 27% were professionals, 20% semi-professionals, 39% routine non-manual employees and 14% manual workers. The corresponding figures for men were 46%, 20%, 10% and 24%. Occupational class was used as an indicator of socioeconomic position. Information on occupational class was derived from the City of Helsinki personnel registers for those who gave written permission for data linkage (77%) [21]. 
For the rest, occupational class was obtained from the questionnaires. The respondents were classified into four hierarchical occupational classes: professionals including managers, semi-professionals, routine non-manual employees and manual workers. Among women, 27% were professionals, 20% semi-professionals, 39% routine non-manual employees and 14% manual workers. The corresponding figures for men were 46%, 20%, 10% and 24%. [SUBTITLE] Covariates [SUBSECTION] All the covariates were self-reported and taken from the baseline survey, except employment status which was taken from the follow-up survey. Thirteen per cent of women were single, 10% cohabiting, 58% married, 16% divorced and 3% were widowed. The corresponding figures for men were 11%, 15%, 64%, 10% and 1%. Twenty-two per cent of women and 25 per cent of men were smokers at baseline. Body mass index (BMI) was calculated as the weight in kilograms divided by the height in metres squared (kg/m2) (the means were 25.3 and 26.3 kg/m2 for women and men, respectively). Physical and mental health functioning were measured on the physical component summary (PCS) and the mental component summary (MCS) of the Short Form 36 (SF-36) questionnaire [33]. The average PCS score was 48.9 among women and 50.8 among men, and the respective MCS scores were 51.7 and 51.7. Lower scores indicate poorer health functioning. Both were dichotomised by the lowest quartile. At follow-up most of the respondents were still in employment (75% of women and 71% of men), but some had retired due to disability (4%) or old age (18% and 23%, respectively). All the covariates were self-reported and taken from the baseline survey, except employment status which was taken from the follow-up survey. Thirteen per cent of women were single, 10% cohabiting, 58% married, 16% divorced and 3% were widowed. The corresponding figures for men were 11%, 15%, 64%, 10% and 1%. Twenty-two per cent of women and 25 per cent of men were smokers at baseline. Body mass index (BMI) was calculated as the weight in kilograms divided by the height in metres squared (kg/m2) (the means were 25.3 and 26.3 kg/m2 for women and men, respectively). Physical and mental health functioning were measured on the physical component summary (PCS) and the mental component summary (MCS) of the Short Form 36 (SF-36) questionnaire [33]. The average PCS score was 48.9 among women and 50.8 among men, and the respective MCS scores were 51.7 and 51.7. Lower scores indicate poorer health functioning. Both were dichotomised by the lowest quartile. At follow-up most of the respondents were still in employment (75% of women and 71% of men), but some had retired due to disability (4%) or old age (18% and 23%, respectively). [SUBTITLE] Statistical analyses [SUBSECTION] All the analyses were carried out separately for women and men. First, the age-adjusted prevalence and 95% confidence intervals of the physically inactive and active participants at baseline and follow-up were calculated by occupational class. Logistic regression analysis was then used to examine the emerging occupational differences in leisure-time physical activity adjusting for covariates. Model 1 is adjusted for age, and model 2 for age and baseline leisure-time physical activity. Further models additionally adjusted for baseline marital status, smoking, BMI, mental and physical health functioning, and employment status one covariate at a time. The SPSS (version 15.0) was used for the analyses. 
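As a concrete illustration of the exposure measure described in the Methods above, the sketch below converts self-reported weekly hours at the four intensity levels into MET hours and applies the <14 and >30 MET-hour cut-offs. The MET multipliers assigned to each intensity are assumptions for illustration only; the study used the values given in its cited references.

# MET multipliers per intensity level are assumed here for illustration only.
MET_PER_HOUR = {"walking": 3.5, "brisk_walking": 5.0, "jogging": 8.0, "running": 10.0}

def weekly_met_hours(hours_by_intensity):
    # hours_by_intensity: dict of self-reported weekly hours per intensity level.
    return sum(MET_PER_HOUR[level] * hours for level, hours in hours_by_intensity.items())

def activity_class(met_hours):
    if met_hours < 14:
        return "inactive"       # below the ~1000 kcal/week energy-expenditure threshold
    if met_hours > 30:
        return "active"         # meets the recommendation / weight-maintenance level
    return "intermediate"

example = {"walking": 2.0, "brisk_walking": 1.0, "jogging": 0.5, "running": 0.0}
met = weekly_met_hours(example)   # 2*3.5 + 1*5.0 + 0.5*8.0 = 16.0 MET hours
label = activity_class(met)       # -> "intermediate"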
null
null
null
null
[ "Background", "Data", "Leisure-time physical activity", "Socioeconomic position", "Covariates", "Statistical analyses", "Results", "Discussion", "Main findings", "Interpretation", "Methodological considerations", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "Health behaviours, such as leisure-time physical activity tend to be socioeconomically patterned. Such patterning is complex as socioeconomic position covers a range of social, economic and material circumstances from childhood to adulthood [1]. The main subdomains of socioeconomic position include education, occupational class and income [2]. While the subdomains are correlated with each other, they nevertheless are not interchangeable. Occupational class is a key subdomain of socioeconomic position and particularly suitable when an occupational cohorts are studied.\nCross-sectional studies suggest that people in higher education [3,4] and occupational class [5] are more often physically active in their leisure time than counterparts in lower positions. There has been a tendency in Finland over the last few decades for those on lower income levels to be less physically active in their leisure time than those with higher incomes [6]. However, female manual workers were found to engage in higher levels of active commuting than those in higher occupational classes [6].\nIn the last few decades, increases in leisure-time physical activity have been reported in Finland [6], Canada [7] and the USA [8]. The prevalence of adult Finns engaging in leisure-time physical activity at least twice a week increased from 37% to 55% among women and from 38% to 62% among men between 1978-2002 [6]. In Canada, the prevalence of adults who were physically active in leisure-time increased from 20% to 40% between 1981-2000 [7]. In the USA, the prevalence of those who were physically inactive in their leisure-time declined during 1988-2008 from 31% to 25% [8]. In Australia, the prevalence of those who were physically inactive in leisure-time has been remained almost stable [9].\nAccording to a study on trends in Finland, education, occupational class and household income differences in leisure-time and commuting physical activity generally remained relatively small between 1978-2002 [6]. Follow-up studies examining changes over time in physical activity by socioeconomic position have been conducted in the Netherlands [10,11] and Denmark [12]. Those with lower education were more likely to reduce their leisure-time physical activity during follow-up [10] whereas those with higher education were more likely to remain active than their lower education counterparts [11,12].\nIt is known that low leisure-time physical activity in general is associated with higher weight [3,11,13], smoking [3,11], poorer physical health functioning [14], mental health [15], being married [11] and being unmarried [16]. Retirement has been associated with increasing physical activity [17]. It has been suggested that those in lower occupational classes or those with lower education have more often poor health [18] and therefore are less likely to be physically active than those in higher classes [19]. Only few previous studies have considered covariates for socioeconomic differences in physical activity or their changes over time. In a previous study occupational class differences in leisure-time physical inactivity were attenuated after taking job strain, physical workload, body mass index (BMI) and smoking into account [20].\nWe hypothesised that class differences in leisure-time physical activity would widen over time due to declining physical activity among the lower occupational classes. 
Our first aim was to examine occupational class differences in leisure-time physical activity, and subsequent changes, over a follow-up of 5 to 7 years. Our second aim was to examine the effect of covariates on changes in the occupational class differences in leisure-time physical activity.", "The data were derived from the Helsinki Health Study cohort mail questionnaire surveys administered to employees of the City of Helsinki, Finland. At baseline in 2000-2002, the data covered 8 960 employees aged 40-60-years (response rate of 67%, 80% of the respondents were women) [21]. The follow-up survey was conducted in 2007 among the respondents to the baseline survey. At follow-up, there were 7 332 respondents, which gives a response rate of 83%. The baseline data is broadly representative of the target population and the non-response is unlikely to bias the relationships between socioeconomic position and the health-related variables [22]. Our control analyses showed that non-respondents to the follow-up were only slightly more physically inactive (<14 MET hours/week) during leisure-time at baseline (among women 24% 95% CI 22.9-25.1 vs. 29% 95% CI 26.1-31.3). However, occupational class differences in leisure-time physical activity among the respondents and the non-respondents, were broadly similar.\nThe analyses for this study were carried out among those with data on occupational class (missing data for 118), leisure-time physical activity (missing data for 135) and all the covariates (missing data for 123). After exclusions for missing data, the final data consisted of 5 652 women and 1 279 men.\nThe Helsinki Health Study protocol has been approved by ethics committees of the Department of Public Health, University of Helsinki, and the City of Helsinki health authorities, Finland.", "The respondents were asked to estimate the average weekly hours of physical activity during their leisure time (including commuting) within the previous year [14]. There were four levels of intensity: walking, brisk walking, jogging and running, or their equivalent activities. The response alternatives for each level were: not during the past twelve months, in total under half an hour per week, between half and one hour per week, between two and three hours per week, and four hours or more per week.\nThe amount of leisure-time physical activity was assessed by approximate metabolic equivalent tasks (MET) taking into account the intensity, duration and frequency [23] and calculated by multiplying the weekly time used by the estimated MET value of each activity level [14,24].\nModerate-intensity leisure-time physical activity for about 30 minutes at least five times a week is recommended by the national guidelines in Finland as well as other countries [25-27]. Approximately 14 MET hours per week correspond to the energy expenditure (1000 kcal, e.g. brisk walking for 2.5 hours/week equals 15 MET hours) needed for reducing health risks associated with physical inactivity [13,28-30]. An optimal 30 MET hours fulfill the recommendations [27,31] and requirements for healthy weight maintenance [32]. We use the term physically inactive for under 14 MET hours per week, and physically active for over 30 MET hours.", "Occupational class was used as an indicator of socioeconomic position. Information on occupational class was derived from the City of Helsinki personnel registers for those who gave written permission for data linkage (77%) [21]. For the rest, occupational class was obtained from the questionnaires. 
The respondents were classified into four hierarchical occupational classes: professionals including managers, semi-professionals, routine non-manual employees and manual workers. Among women, 27% were professionals, 20% semi-professionals, 39% routine non-manual employees and 14% manual workers. The corresponding figures for men were 46%, 20%, 10% and 24%.", "All the covariates were self-reported and taken from the baseline survey, except employment status which was taken from the follow-up survey. Thirteen per cent of women were single, 10% cohabiting, 58% married, 16% divorced and 3% were widowed. The corresponding figures for men were 11%, 15%, 64%, 10% and 1%. Twenty-two per cent of women and 25 per cent of men were smokers at baseline. Body mass index (BMI) was calculated as the weight in kilograms divided by the height in metres squared (kg/m2) (the means were 25.3 and 26.3 kg/m2 for women and men, respectively). Physical and mental health functioning were measured on the physical component summary (PCS) and the mental component summary (MCS) of the Short Form 36 (SF-36) questionnaire [33]. The average PCS score was 48.9 among women and 50.8 among men, and the respective MCS scores were 51.7 and 51.7. Lower scores indicate poorer health functioning. Both were dichotomised by the lowest quartile. At follow-up most of the respondents were still in employment (75% of women and 71% of men), but some had retired due to disability (4%) or old age (18% and 23%, respectively).", "All the analyses were carried out separately for women and men. First, the age-adjusted prevalence and 95% confidence intervals of the physically inactive and active participants at baseline and follow-up were calculated by occupational class. Logistic regression analysis was then used to examine the emerging occupational differences in leisure-time physical activity adjusting for covariates. Model 1 is adjusted for age, and model 2 for age and baseline leisure-time physical activity. Further models additionally adjusted for baseline marital status, smoking, BMI, mental and physical health functioning, and employment status one covariate at a time. The SPSS (version 15.0) was used for the analyses.", "At baseline 24% of women were physically inactive and there were no differences between the occupational classes (Table 1). The prevalence of physical inactivity at follow-up was 22%, but the higher classes were less likely to be physically inactive than their lower class counterparts. At follow-up, fewer professionals (19%) and semi-professionals (18%) than routine non-manual employees (24%) and manual workers (27%) were physically inactive. Professionals increased their leisure-time physical activity to the recommended level, whereas the level of inactivity in the other classes remained almost stable. Thus there were occupational class differences at follow-up suggesting a gradient.\nAge-adjusted prevalence (%) and 95% confidence intervals (95% CI) for physically inactive and active women and men at baseline and at follow-up.\n1MET = an activity metabolic equivalent task\nThirty-seven percent of women were physically active in their leisure-time at baseline and there were no differences between the occupational classes (Table 1). At follow-up 39% were physically active. However, more among the professionals (42%) and semi-professionals (44%) than among routine non-manual employees (36%) and manual workers (33%) were physically active. 
The prevalence of physically active increased among the professionals and semi-professionals and remained stable or decreased in the lower classes, thus leading to socioeconomic differences over the follow-up.\nTwenty-five per cent of men were physically inactive at baseline and as with women there were no differences between the occupational classes (Table 1). At follow-up the prevalence of inactivity was 24%, but the higher classes tended to be less often physically inactive than their manual worker counterparts in leisure-time. The prevalence of physical inactivity among male manual workers increased by eight percentage points (from 28% to 36%) leading to higher prevalence than in the other occupational classes. Among the semi-professionals (from 29% to 22%) and routine non-manual employees (from 27% to 21%) it decreased over the follow-up. Thus there were occupational class differences in leisure-time physical activity at the follow-up.\nAmong men 43% were physically active during their leisure-time and there were no differences between the occupational classes at baseline (Table 1). Among male manual workers the prevalence of physically active decreased during the follow-up by 9 percentage units (from 40% to 31%) whereas the higher classes somewhat increased their leisure-time physical activity leading to occupational class differences at follow-up.\nNext logistic regression analysis was used to examine the effects of the covariates on the occupational class differences in leisure-time physical activity at follow-up. Firstly, we examined physical inactivity. Among women, after adjusting for age, manual workers (OR 1.60 95% CI 1.31-1.97) and routine non-manual employees (OR 1.34 95% CI 1.14-1.57) were more likely to be physically inactive at follow-up than professionals (Table 2). We further examined the emergence of occupational class differences in leisure-time physical activity by adjusting for baseline physical activity and it did not change the associations. Of the baseline covariates only BMI (OR for manual workers 1.47 95% CI 1.18-1.83 and for routine non-manual employees 1.27 95% CI 1.07-1.50) slightly attenuated the association.\nOdds ratios (OR) and their 95% confidence intervals (95% CI) among women (n = 5652) and men (n = 1279). Physically inactive (<14 MET1 hours/week) at follow-up.\n1MET = an activity metabolic equivalent task\n2Model 1 = adjusted for age\n3Model 2 = adjusted for age and baseline physical activity\n4BMI = body mass index\n5PCS = SF-36 physical component summary\n6MCS = SF-36 mental component summary\nSecondly, we examined the physically active. Among women, after adjusting for age, manual workers (OR 0.69 95% CI 0.57-0.83) and routine non-manual employees (OR 0.79 95% CI 0.69-0.91) were less likely to be physically active at follow-up than professionals (Table 3). Adjusting for baseline physical activity did not change the associations, whereas BMI had a slight attenuating effect on the studied association.\nOdds ratios (OR) and their 95% confidence intervals (95% CI) among women (n = 5652) and men (n = 1279). 
Physically active (>30 MET1 hours/week) at follow-up.\n1MET = an activity metabolic equivalent task\n2Model 1 = adjusted for age\n3Model 2 = adjusted for age and baseline physical activity\n4BMI = body mass index\n5PCS = SF-36 physical component summary\n6MCS = SF-36 mental component summary\nAmong men, after adjusting for age, manual workers (OR 2.21 95% CI 1.62-3.02) were more likely to be physically inactive at follow-up than those in the upper classes (Table 2). Adjusting for baseline physical activity did not affect the association. Of the covariates only marital status (OR 2.04 95% CI 1.47-2.85) and BMI (OR 2.04 95% CI 1.47-2.84) slightly attenuated the differences between the manual workers and the upper classes.\nAmong men, after adjusting for age manual workers (OR 0.47 95% CI 0.35-0.63) were less likely to be physically active at follow-up than those in the upper classes (Table 3). Adjusting for baseline physical activity (OR 0.42 95% CI 0.30-0.59) had virtually no effect on the associations.", "[SUBTITLE] Main findings [SUBSECTION] The focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.\nThe focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.\n[SUBTITLE] Interpretation [SUBSECTION] In the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. 
Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. 
In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.\nIn the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. 
Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. 
The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.\n[SUBTITLE] Methodological considerations [SUBSECTION] Given that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. 
A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.\nGiven that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.", "The focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. 
In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.", "In the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). 
However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.", "Given that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. 
We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.", "Occupational class differences in leisure-time physical activity emerged during the follow-up of 5 to 7 years: there was an increase in activity among the upper classes and a decrease among the lower classes. In the interests of health promotion and disease prevention it is important for ageing people of all occupational classes, but especially for manual workers to maintain and increase physical activity. Efforts should also be made to reduce the socioeconomic differences. In the future mechanisms behind the socioeconomic differences in leisure-time physical activity should be further examined.", "BMI: body mass index; MCS: the mental component summary; MET: an activity metabolic equivalent task; PCS: the physical component summary; SF-36: the Short Form 36;", "The authors declare that they have no competing interests.", "TS carried out the statistical analysis, interpreted the results and drafted the manuscript. TS, JL, TL, OR and EL contributed to designing the study, interpreting the results and drafting the manuscript. All the authors critically reviewed the manuscript and approved the final version." ]
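The covariate-adjustment checks described in the discussion texts above (age-adjusted occupational-class odds ratios that are then re-estimated after adding baseline activity or one further covariate at a time, to see whether the class differences attenuate) can be sketched in a few lines. The sketch below is not the authors' SPSS analysis; it is a minimal Python/statsmodels illustration in which the data frame and every column name (inactive_fu, occ_class, age, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' SPSS code): age-adjusted occupational-class
# odds ratios for physical inactivity at follow-up, re-estimated after adding
# one further covariate at a time. All column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def class_odds_ratios(df, extra=None):
    """ORs and 95% CIs for occupational class, professionals as the reference group."""
    formula = "inactive_fu ~ C(occ_class, Treatment('professional')) + age"
    if extra:
        formula += " + " + extra
    fit = smf.logit(formula, data=df).fit(disp=False)
    params = fit.params.filter(like="occ_class")            # keep only the class terms
    ci = np.exp(fit.conf_int().loc[params.index])            # exponentiate CI bounds
    return pd.DataFrame({"OR": np.exp(params), "CI_low": ci[0], "CI_high": ci[1]})

# Model 1: age only; Model 2: + baseline activity; further models: one covariate at a time.
# class_odds_ratios(df)
# class_odds_ratios(df, "baseline_inactive")
# for cov in ["C(marital_status)", "smoker", "bmi", "pcs_low", "mcs_low", "C(employment_status)"]:
#     print(cov, class_odds_ratios(df, cov), sep="\n")
```

Comparing the class odds ratios across such fits mirrors the attenuation checks reported above, for example the slight attenuation seen after adjusting for BMI.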
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Data", "Leisure-time physical activity", "Socioeconomic position", "Covariates", "Statistical analyses", "Results", "Discussion", "Main findings", "Interpretation", "Methodological considerations", "Conclusions", "List of abbreviations", "Competing interests", "Authors' contributions" ]
[ "Health behaviours, such as leisure-time physical activity tend to be socioeconomically patterned. Such patterning is complex as socioeconomic position covers a range of social, economic and material circumstances from childhood to adulthood [1]. The main subdomains of socioeconomic position include education, occupational class and income [2]. While the subdomains are correlated with each other, they nevertheless are not interchangeable. Occupational class is a key subdomain of socioeconomic position and particularly suitable when an occupational cohorts are studied.\nCross-sectional studies suggest that people in higher education [3,4] and occupational class [5] are more often physically active in their leisure time than counterparts in lower positions. There has been a tendency in Finland over the last few decades for those on lower income levels to be less physically active in their leisure time than those with higher incomes [6]. However, female manual workers were found to engage in higher levels of active commuting than those in higher occupational classes [6].\nIn the last few decades, increases in leisure-time physical activity have been reported in Finland [6], Canada [7] and the USA [8]. The prevalence of adult Finns engaging in leisure-time physical activity at least twice a week increased from 37% to 55% among women and from 38% to 62% among men between 1978-2002 [6]. In Canada, the prevalence of adults who were physically active in leisure-time increased from 20% to 40% between 1981-2000 [7]. In the USA, the prevalence of those who were physically inactive in their leisure-time declined during 1988-2008 from 31% to 25% [8]. In Australia, the prevalence of those who were physically inactive in leisure-time has been remained almost stable [9].\nAccording to a study on trends in Finland, education, occupational class and household income differences in leisure-time and commuting physical activity generally remained relatively small between 1978-2002 [6]. Follow-up studies examining changes over time in physical activity by socioeconomic position have been conducted in the Netherlands [10,11] and Denmark [12]. Those with lower education were more likely to reduce their leisure-time physical activity during follow-up [10] whereas those with higher education were more likely to remain active than their lower education counterparts [11,12].\nIt is known that low leisure-time physical activity in general is associated with higher weight [3,11,13], smoking [3,11], poorer physical health functioning [14], mental health [15], being married [11] and being unmarried [16]. Retirement has been associated with increasing physical activity [17]. It has been suggested that those in lower occupational classes or those with lower education have more often poor health [18] and therefore are less likely to be physically active than those in higher classes [19]. Only few previous studies have considered covariates for socioeconomic differences in physical activity or their changes over time. In a previous study occupational class differences in leisure-time physical inactivity were attenuated after taking job strain, physical workload, body mass index (BMI) and smoking into account [20].\nWe hypothesised that class differences in leisure-time physical activity would widen over time due to declining physical activity among the lower occupational classes. 
Our first aim was to examine occupational class differences in leisure-time physical activity, and subsequent changes, over a follow-up of 5 to 7 years. Our second aim was to examine the effect of covariates on changes in the occupational class differences in leisure-time physical activity.", "[SUBTITLE] Data [SUBSECTION] The data were derived from the Helsinki Health Study cohort mail questionnaire surveys administered to employees of the City of Helsinki, Finland. At baseline in 2000-2002, the data covered 8 960 employees aged 40-60 years (response rate of 67%, 80% of the respondents were women) [21]. The follow-up survey was conducted in 2007 among the respondents to the baseline survey. At follow-up, there were 7 332 respondents, which gives a response rate of 83%. The baseline data is broadly representative of the target population and the non-response is unlikely to bias the relationships between socioeconomic position and the health-related variables [22]. Our control analyses showed that non-respondents to the follow-up were only slightly more physically inactive (<14 MET hours/week) during leisure-time at baseline (among women 24% 95% CI 22.9-25.1 vs. 29% 95% CI 26.1-31.3). However, occupational class differences in leisure-time physical activity among the respondents and the non-respondents were broadly similar.\nThe analyses for this study were carried out among those with data on occupational class (missing data for 118), leisure-time physical activity (missing data for 135) and all the covariates (missing data for 123). After exclusions for missing data, the final data consisted of 5 652 women and 1 279 men.\nThe Helsinki Health Study protocol has been approved by ethics committees of the Department of Public Health, University of Helsinki, and the City of Helsinki health authorities, Finland.\n[SUBTITLE] Leisure-time physical activity [SUBSECTION] The respondents were asked to estimate the average weekly hours of physical activity during their leisure time (including commuting) within the previous year [14]. There were four levels of intensity: walking, brisk walking, jogging and running, or their equivalent activities. The response alternatives for each level were: not during the past twelve months, in total under half an hour per week, between half and one hour per week, between two and three hours per week, and four hours or more per week.\nThe amount of leisure-time physical activity was assessed by approximate metabolic equivalent tasks (MET) taking into account the intensity, duration and frequency [23] and calculated by multiplying the weekly time used by the estimated MET value of each activity level [14,24].\nModerate-intensity leisure-time physical activity for about 30 minutes at least five times a week is recommended by the national guidelines in Finland as well as other countries [25-27]. Approximately 14 MET hours per week correspond to the energy expenditure (1000 kcal, e.g. brisk walking for 2.5 hours/week equals 15 MET hours) needed for reducing health risks associated with physical inactivity [13,28-30]. An optimal 30 MET hours fulfill the recommendations [27,31] and requirements for healthy weight maintenance [32]. We use the term physically inactive for under 14 MET hours per week, and physically active for over 30 MET hours.\n[SUBTITLE] Socioeconomic position [SUBSECTION] Occupational class was used as an indicator of socioeconomic position. Information on occupational class was derived from the City of Helsinki personnel registers for those who gave written permission for data linkage (77%) [21]. For the rest, occupational class was obtained from the questionnaires.
The respondents were classified into four hierarchical occupational classes: professionals including managers, semi-professionals, routine non-manual employees and manual workers. Among women, 27% were professionals, 20% semi-professionals, 39% routine non-manual employees and 14% manual workers. The corresponding figures for men were 46%, 20%, 10% and 24%.\n[SUBTITLE] Covariates [SUBSECTION] All the covariates were self-reported and taken from the baseline survey, except employment status which was taken from the follow-up survey. Thirteen per cent of women were single, 10% cohabiting, 58% married, 16% divorced and 3% were widowed. The corresponding figures for men were 11%, 15%, 64%, 10% and 1%. Twenty-two per cent of women and 25 per cent of men were smokers at baseline. Body mass index (BMI) was calculated as the weight in kilograms divided by the height in metres squared (kg/m2) (the means were 25.3 and 26.3 kg/m2 for women and men, respectively). Physical and mental health functioning were measured on the physical component summary (PCS) and the mental component summary (MCS) of the Short Form 36 (SF-36) questionnaire [33]. The average PCS score was 48.9 among women and 50.8 among men, and the respective MCS scores were 51.7 and 51.7. Lower scores indicate poorer health functioning. Both were dichotomised by the lowest quartile. At follow-up most of the respondents were still in employment (75% of women and 71% of men), but some had retired due to disability (4%) or old age (18% and 23%, respectively).\n[SUBTITLE] Statistical analyses [SUBSECTION] All the analyses were carried out separately for women and men.
First, the age-adjusted prevalence and 95% confidence intervals of the physically inactive and active participants at baseline and follow-up were calculated by occupational class. Logistic regression analysis was then used to examine the emerging occupational differences in leisure-time physical activity adjusting for covariates. Model 1 is adjusted for age, and model 2 for age and baseline leisure-time physical activity. Further models additionally adjusted for baseline marital status, smoking, BMI, mental and physical health functioning, and employment status one covariate at a time. The SPSS (version 15.0) was used for the analyses.", "The data were derived from the Helsinki Health Study cohort mail questionnaire surveys administered to employees of the City of Helsinki, Finland. At baseline in 2000-2002, the data covered 8 960 employees aged 40-60 years (response rate of 67%, 80% of the respondents were women) [21]. The follow-up survey was conducted in 2007 among the respondents to the baseline survey. At follow-up, there were 7 332 respondents, which gives a response rate of 83%. The baseline data is broadly representative of the target population and the non-response is unlikely to bias the relationships between socioeconomic position and the health-related variables [22]. Our control analyses showed that non-respondents to the follow-up were only slightly more physically inactive (<14 MET hours/week) during leisure-time at baseline (among women 24% 95% CI 22.9-25.1 vs. 29% 95% CI 26.1-31.3). However, occupational class differences in leisure-time physical activity among the respondents and the non-respondents were broadly similar.\nThe analyses for this study were carried out among those with data on occupational class (missing data for 118), leisure-time physical activity (missing data for 135) and all the covariates (missing data for 123). After exclusions for missing data, the final data consisted of 5 652 women and 1 279 men.\nThe Helsinki Health Study protocol has been approved by ethics committees of the Department of Public Health, University of Helsinki, and the City of Helsinki health authorities, Finland.", "The respondents were asked to estimate the average weekly hours of physical activity during their leisure time (including commuting) within the previous year [14]. There were four levels of intensity: walking, brisk walking, jogging and running, or their equivalent activities.
The response alternatives for each level were: not during the past twelve months, in total under half an hour per week, between half and one hour per week, between two and three hours per week, and four hours or more per week.\nThe amount of leisure-time physical activity was assessed by approximate metabolic equivalent tasks (MET) taking into account the intensity, duration and frequency [23] and calculated by multiplying the weekly time used by the estimated MET value of each activity level [14,24].\nModerate-intensity leisure-time physical activity for about 30 minutes at least five times a week is recommended by the national guidelines in Finland as well as other countries [25-27]. Approximately 14 MET hours per week correspond to the energy expenditure (1000 kcal, e.g. brisk walking for 2.5 hours/week equals 15 MET hours) needed for reducing health risks associated with physical inactivity [13,28-30]. An optimal 30 MET hours fulfill the recommendations [27,31] and requirements for healthy weight maintenance [32]. We use the term physically inactive for under 14 MET hours per week, and physically active for over 30 MET hours.", "Occupational class was used as an indicator of socioeconomic position. Information on occupational class was derived from the City of Helsinki personnel registers for those who gave written permission for data linkage (77%) [21]. For the rest, occupational class was obtained from the questionnaires. The respondents were classified into four hierarchical occupational classes: professionals including managers, semi-professionals, routine non-manual employees and manual workers. Among women, 27% were professionals, 20% semi-professionals, 39% routine non-manual employees and 14% manual workers. The corresponding figures for men were 46%, 20%, 10% and 24%.", "All the covariates were self-reported and taken from the baseline survey, except employment status which was taken from the follow-up survey. Thirteen per cent of women were single, 10% cohabiting, 58% married, 16% divorced and 3% were widowed. The corresponding figures for men were 11%, 15%, 64%, 10% and 1%. Twenty-two per cent of women and 25 per cent of men were smokers at baseline. Body mass index (BMI) was calculated as the weight in kilograms divided by the height in metres squared (kg/m2) (the means were 25.3 and 26.3 kg/m2 for women and men, respectively). Physical and mental health functioning were measured on the physical component summary (PCS) and the mental component summary (MCS) of the Short Form 36 (SF-36) questionnaire [33]. The average PCS score was 48.9 among women and 50.8 among men, and the respective MCS scores were 51.7 and 51.7. Lower scores indicate poorer health functioning. Both were dichotomised by the lowest quartile. At follow-up most of the respondents were still in employment (75% of women and 71% of men), but some had retired due to disability (4%) or old age (18% and 23%, respectively).", "All the analyses were carried out separately for women and men. First, the age-adjusted prevalence and 95% confidence intervals of the physically inactive and active participants at baseline and follow-up were calculated by occupational class. Logistic regression analysis was then used to examine the emerging occupational differences in leisure-time physical activity adjusting for covariates. Model 1 is adjusted for age, and model 2 for age and baseline leisure-time physical activity. 
Further models additionally adjusted for baseline marital status, smoking, BMI, mental and physical health functioning, and employment status one covariate at a time. The SPSS (version 15.0) was used for the analyses.", "At baseline, 24% of women were physically inactive and there were no differences between the occupational classes (Table 1). The prevalence of physical inactivity at follow-up was 22%, but the higher classes were less likely to be physically inactive than their lower class counterparts. At follow-up, fewer professionals (19%) and semi-professionals (18%) than routine non-manual employees (24%) and manual workers (27%) were physically inactive. Professionals increased their leisure-time physical activity to the recommended level, whereas the level of inactivity in the other classes remained almost stable. Thus there were occupational class differences at follow-up, suggesting a gradient.\nAge-adjusted prevalence (%) and 95% confidence intervals (95% CI) for physically inactive and active women and men at baseline and at follow-up.\n1MET = an activity metabolic equivalent task\nThirty-seven percent of women were physically active in their leisure-time at baseline and there were no differences between the occupational classes (Table 1). At follow-up, 39% were physically active. However, more of the professionals (42%) and semi-professionals (44%) than of the routine non-manual employees (36%) and manual workers (33%) were physically active. The prevalence of being physically active increased among the professionals and semi-professionals and remained stable or decreased in the lower classes, thus leading to socioeconomic differences over the follow-up.\nTwenty-five per cent of men were physically inactive at baseline and, as with women, there were no differences between the occupational classes (Table 1). At follow-up, the prevalence of inactivity was 24%, but the higher classes tended to be less often physically inactive than their manual worker counterparts in leisure-time. The prevalence of physical inactivity among male manual workers increased by eight percentage points (from 28% to 36%), leading to a higher prevalence than in the other occupational classes. Among the semi-professionals (from 29% to 22%) and routine non-manual employees (from 27% to 21%) it decreased over the follow-up. Thus there were occupational class differences in leisure-time physical activity at follow-up.\nAmong men, 43% were physically active during their leisure-time and there were no differences between the occupational classes at baseline (Table 1). Among male manual workers the prevalence of being physically active decreased during the follow-up by nine percentage points (from 40% to 31%), whereas the higher classes somewhat increased their leisure-time physical activity, leading to occupational class differences at follow-up.\nNext, logistic regression analysis was used to examine the effects of the covariates on the occupational class differences in leisure-time physical activity at follow-up. Firstly, we examined physical inactivity. Among women, after adjusting for age, manual workers (OR 1.60 95% CI 1.31-1.97) and routine non-manual employees (OR 1.34 95% CI 1.14-1.57) were more likely to be physically inactive at follow-up than professionals (Table 2). We further examined the emergence of occupational class differences in leisure-time physical activity by adjusting for baseline physical activity, and it did not change the associations.
Of the baseline covariates only BMI (OR for manual workers 1.47 95% CI 1.18-1.83 and for routine non-manual employees 1.27 95% CI 1.07-1.50) slightly attenuated the association.\nOdds ratios (OR) and their 95% confidence intervals (95% CI) among women (n = 5652) and men (n = 1279). Physically inactive (<14 MET1 hours/week) at follow-up.\n1MET = an activity metabolic equivalent task\n2Model 1 = adjusted for age\n3Model 2 = adjusted for age and baseline physical activity\n4BMI = body mass index\n5PCS = SF-36 physical component summary\n6MCS = SF-36 mental component summary\nSecondly, we examined the physically active. Among women, after adjusting for age, manual workers (OR 0.69 95% CI 0.57-0.83) and routine non-manual employees (OR 0.79 95% CI 0.69-0.91) were less likely to be physically active at follow-up than professionals (Table 3). Adjusting for baseline physical activity did not change the associations, whereas BMI had a slight attenuating effect on the studied association.\nOdds ratios (OR) and their 95% confidence intervals (95% CI) among women (n = 5652) and men (n = 1279). Physically active (>30 MET1 hours/week) at follow-up.\n1MET = an activity metabolic equivalent task\n2Model 1 = adjusted for age\n3Model 2 = adjusted for age and baseline physical activity\n4BMI = body mass index\n5PCS = SF-36 physical component summary\n6MCS = SF-36 mental component summary\nAmong men, after adjusting for age, manual workers (OR 2.21 95% CI 1.62-3.02) were more likely to be physically inactive at follow-up than those in the upper classes (Table 2). Adjusting for baseline physical activity did not affect the association. Of the covariates only marital status (OR 2.04 95% CI 1.47-2.85) and BMI (OR 2.04 95% CI 1.47-2.84) slightly attenuated the differences between the manual workers and the upper classes.\nAmong men, after adjusting for age manual workers (OR 0.47 95% CI 0.35-0.63) were less likely to be physically active at follow-up than those in the upper classes (Table 3). Adjusting for baseline physical activity (OR 0.42 95% CI 0.30-0.59) had virtually no effect on the associations.", "[SUBTITLE] Main findings [SUBSECTION] The focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.\nThe focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. 
The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.\n[SUBTITLE] Interpretation [SUBSECTION] In the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. 
We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.\nIn the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. 
Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. 
The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.\n[SUBTITLE] Methodological considerations [SUBSECTION] Given that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. 
According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.\nGiven that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. 
A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.", "The focus in this study was on occupational class differences in leisure-time physical activity among middle-aged women and men over a follow-up of 5 to 7 years. A further aim was to find out which covariates would affect such differences. We hypothesised that class differences in leisure-time physical activity would widen over time. The results showed that there were no considerable class differences in leisure time physical activity at baseline, but hierarchical occupational class differences in leisure-time physical activity emerged over the follow-up. In women the levels of leisure-time physical activity increased among the professionals and semi-professionals, and decreased among the manual workers. Among male manual workers there was a substantial decrease in leisure-time physical activity. None of the examined social or health-related baseline covariates or employment status at follow-up affected substantially the occupational class differences in leisure-time physical activity at follow-up.", "In the whole sample, level of leisure-time physical activity remained stable over the 5 to 7 years follow-up. Some previous trend studies have reported an increase in leisure-time physical activity [6,7,34]. As found in many other studies [7,35] here, too, men were more physically active than women. Occupational differences in leisure-time physical activity emerged at follow-up, with those in the upper classes being more physically active than their lower class counterparts. The decrease in leisure-time physical activity among manual workers we observed is consistent with the findings of Danish [11] and a Dutch study [10]. Another Dutch study [12] found that those in higher positions remained physically active over time.\nOur findings revealed that occupational class differences in leisure-time physical activity, which did not exist at baseline, emerged during the follow-up. The data do not show when such differences developed. In principle they might have existed even before the baseline, disappeared and then appeared again at follow-up. However this is unlikely, particularly given the results of a Finnish trend study showing that socioeconomic differences in physical activity have largely remained relatively small in recent decades [6]. Most of our participants work in permanent jobs for the same employer and share many similar circumstances such as occupational health care. However, over the follow-up almost a third exit workforce e.g. due to retirement. Among the ageing employees occupational class differences in health widen [36], potentially contributing to the emergence of class differences in physical activity as well.\nIn our ageing cohort a fifth retired during the follow-up. Retirement is a major life transition that may affect health behaviours including leisure-time physical activity [17]. Also other life events could contribute to physical activity and the emergence of socioeconomic differences. Adjusting for employment status, however, did not affect the observed differences.\nThose in the higher occupational classes and with higher education have better knowledge and they are more ready to adopt healthier behaviours and reduce risk behaviours than those in lower classes [19]. This might explain why the higher classes increased their leisure-time physical activity. 
It is an unfortunate development that manual workers engage less in leisure-time physical activity as they age. People who are less physically active benefit most from following the recommended level of physical activity [28].\nNone of the baseline covariates we examined had a substantial effect on the occupational class differences found at follow-up. We also took into consideration health behaviours, sociodemographic factors as well as physical and mental health. We controlled not only for the factors included in the reported analyses but also for the effects of limiting long-lasting illness, common mental disorders and job strain on the identified socioeconomic differences in leisure-time physical activity (data not shown). However none of these additional covariates had a substantial effect on the emergent occupational class differences. We also checked the effects of covariates measured at follow-up but the results were similar to those using covariates measured at baseline.\nOccupational class differences in health and functioning tend to widen towards late middle-age, with manual workers' health being the worst [36]. Health problems that also restrict physical activity during leisure-time may occur even more among the lower occupational classes. In this study the association of occupational class and leisure-time physical activity, however, did not attenuate after adjusting for the physical component summary (PCS) of the SF-36 health inventory.\nTherefore we conducted control analyses and adjusted for the physical functioning subscale of the SF-36 which more precisely measures health problems that restrict physical activities such as running and brisk walking. The physical functioning subscale is by definition a purer measure of physical functioning than the physical component summary which is a composite measure [33]. The differences in physical functioning may be partly masked when using the physical component summary.\nFurther control analyses showed that the follow-up physical functioning score attenuated the association more than the other examined covariates (data not shown), however, the baseline adjustment had no effects. This suggests that there were differences in physical functioning between the occupational classes at follow-up that affect leisure-time physical activity. In other words leisure-time physical activity decreased among the lower occupational classes partly due to poorer health as suggested by previous research [18]. Our single follow-up design and use of logistic regression analysis, however, do not allow causal interpretations or pathways between the exposure, outcome and control variables. The question why occupational class differences in leisure-time physical activity emerged remains open to further scrutiny.\nOne also might ask whether similar developments could be observed in other health behaviours. With regard to the consumption of healthy food, for example, the socioeconomic gap has remained stable, and food habits increasingly tend to follow national guidelines [37]. Further studies are needed to examine the importance of these and other factors for the socioeconomic differences in leisure-time physical activity and the underlying mechanisms.", "Given that our study was based on an occupational cohort we used occupational class for the analyses, which reflects material resources and working conditions, in particular. We used education as a parallel socioeconomic indicator in our control analyses. 
Education is an indicator reflecting knowledge and affecting unhealthy behaviours. However, the results of the control analyses were similar to those reported above (data not shown).\nThe baseline survey was conducted in spring and the follow-up in autumn, but because physical activity was measured over the previous year, there should be no seasonal effects on the results. In this study leisure-time physical activity was self-reported, which is common practice in epidemiological studies [10,34,35]. Physical activity may be overestimated in surveys, but among adults the reporting has not been biased by education or gender [38]. We acknowledge that the measurement of leisure-time physical activity is a complicated task, and a recent review concluded that there is no golden standard for measuring physical activity in questionnaires and no single measure has proven superior [39].\nWe examined leisure-time physical activity while some other studies have also examined occupational physical activity. Working life has changed over the decades, and is generally less physically demanding [40]. This is the case in Finland, but there has nevertheless been an increase in physical activity during leisure-time overall [6]. Manual workers do more physically strenuous work than other classes and this may lead to less leisure-time physical activity, especially if problems of health and functioning have emerged along ageing. Regarding male occupations approximately half of male manual workers are public transport drivers implying sedentary work.\nThe response rate was acceptable both at baseline (67%) and follow-up (83%), but non-participation was still a concern and may have biased the findings. Our analysis of the baseline non-response indicated that the associations in question were unlikely to be severely affected [22]. According to our further control analyses bias due to attrition is unlikely to substantially affect the studied associations.\nAll the respondents were from the Helsinki metropolitan area and employed at baseline by the City of Helsinki. The results therefore cannot be generalised to the whole population of Finland, and not even to the employed population at large. Nevertheless, the cohort is large and diverse, and the study was planned to enable studies of the socioeconomic differences in lifestyles and health. A further strength was the use of identical questions at baseline and follow-up in assessing occupational class differences in leisure-time physical activity.", "Occupational class differences in leisure-time physical activity emerged during the follow-up of 5 to 7 years: there was an increase in activity among the upper classes and a decrease among the lower classes. In the interests of health promotion and disease prevention it is important for ageing people of all occupational classes, but especially for manual workers to maintain and increase physical activity. Efforts should also be made to reduce the socioeconomic differences. In the future mechanisms behind the socioeconomic differences in leisure-time physical activity should be further examined.", "BMI: body mass index; MCS: the mental component summary; MET: an activity metabolic equivalent task; PCS: the physical component summary; SF-36: the Short Form 36;", "The authors declare that they have no competing interests.", "TS carried out the statistical analysis, interpreted the results and drafted the manuscript. 
TS, JL, TL, OR and EL contributed to designing the study, interpreting the results and drafting the manuscript. All the authors critically reviewed the manuscript and approved the final version." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Effectiveness analysis of resistance and tolerance to infection.
21362170
Tolerance and resistance provide animals with two distinct strategies to fight infectious pathogens and may exhibit different evolutionary dynamics. However, few studies have investigated these mechanisms in the case of animal diseases under commercial constraints.
BACKGROUND
The paper proposes a method to simultaneously describe (1) the dynamics of transmission of a contagious pathogen between animals, (2) the growth and death of the pathogen within infected hosts and (3) the effects on their performances. The effectiveness of increasing individual levels of tolerance and resistance is evaluated by the number of infected animals and the performance at the population level.
METHODS
The model is applied to a particular set of parameters and different combinations of values. Given these imputed values, it is shown that higher levels of individual tolerance should be more effective than increased levels of resistance in commercial populations. As a practical example, a method is proposed to measure levels of animal tolerance to bovine mastitis.
RESULTS
The model provides a general framework and some tools to maximize health and performances of a population under infection. Limits and assumptions of the model are clearly identified so it can be improved for different epidemiological settings.
CONCLUSIONS
[ "Animals", "Animals, Domestic", "Breeding", "Cattle", "Disease Susceptibility", "Female", "Genetic Predisposition to Disease", "Host-Pathogen Interactions", "Infections", "Mastitis, Bovine", "Models, Theoretical" ]
3066122
null
null
Methods
[SUBTITLE] Pathogen dynamics [SUBSECTION] The model chosen here to depict the dynamics of transmission of the infection in a herd is a stochastic version of the SIS (S for susceptible, I for infected) model for the spread of a disease in a closed population of N individuals [11]. This model is appropriate for infections with no permanent immunity after recovery, i.e. individuals are susceptible to the infection, potentially get infected, may recover and become susceptible again. The time-scale of the disease process is assumed to be short compared to the life length of the host and no demographic turnover (natural birth or death) is considered. The area occupied by parasites and hosts is constant, so that numbers and densities coincide. There is only a single non-evolving pathogen species within infected hosts. Once infected, hosts are immediately able to infect other individuals (no latent period). Within the host, the number of pathogens increases following a sigmoidal growth curve and is directly related to the number of immune constituents of the host response to the pathogen, with no distinction between innate and specific immunity. Recovered hosts are as susceptible to infection as naïve hosts and re-exposure does not accelerate development of the disease. In mathematical terms, the process is described by a continuous time Markov chain, {Cit; t = 0 to T, i = 1 to N}, where Cit denotes the number of pathogens in the ith host at time t. Units of time are chosen arbitrarily. The chain has three transition probabilities (over a small time interval Δt) reflecting the three events, i.e. invasion of a new host by the pathogen, its multiplication and its killing by the immune response of the host. The first transition probability is the probability the ith susceptible host is infected by Cmin pathogens:(1) where o(Δt) tends to 0 when Δt is small, Cmin is the minimum number of pathogens necessary to have infection, β is the per-capita rate of successful transmission of Cmin pathogens from an infectious host to a susceptible host upon contact with an infectious individual and during Δt, and It is the number of infectious hosts with which the ith susceptible has contact. The second transition probability is the probability that a pathogen in an infected host gives birth to Cmin new offspring, such that this host becomes infectious:(2) where γ is the pathogen growth rate. Right after becoming infected, pathogen growth in a host is approximately exponential but it slows down as it reaches a maximum (CMax), at which it stops. The last transition probability is the probability that Cmin pathogens are killed within the host:(3) This equation follows from the dynamics between pathogens and immune factors, as observed in experimental studies [12,13]. Parameter μ represents the maximum number of pathogens killed for each unit of Rit, with Rit being a generic index related to the number, at time t, of the different types of immune factors specific for that pathogen. Because the main interest is on the number of pathogens, the complexity of the immune response is greatly simplified when Rit increases at a rate (hi) that is constant across time. The scaling parameter ρi varies from 0 to 1 and represents the extra investment in resistance of the ith host with respect to μ. When Cit drops below Cmin, the infection is assumed to be cleared. The Markov chain was simulated using the Gillespie algorithm [14], which essentially uses exponential waiting times between events. For all simulations, it was assumed that two individuals in the population were initially infected. Simulation steps were executed until t reaches T units of time (= one replicate) and repeated over 50 replicates. Each cycle took around 4 hours to complete, so the population size was limited at 30 individuals, which is the average size of most dairy herds in the Walloon region of Belgium.
[SUBTITLE] Individual performance [SUBSECTION] The performance of an infected host decreases proportionally to the number of pathogens (Cit) and to investments of the host in tolerance [15]:(4) where Pit is the performance of the ith host at time t, when it is infected with Cit pathogens, ω is the maximum amount of performance lost per pathogen (virulence). The parameter λi is a scaling parameter representing the extra investment in tolerance. If λi = 1, the host is completely tolerant and produces at a level identical to the one without infection. If λi = 0, the host is not tolerant to the deleterious effects of the pathogen. Hosts invest part of their constitutive resources to resist or tolerate the pathogens and costs are assumed proportional to the investments in both types of defense. They are combined in an additive way:(5) where PMax is the totality of the resources available to the host to insure performance (e.g., production, reproduction, work) and to cope with an infection (resistance and tolerance). If no extra-investments are put in resistance and tolerance, all resources are allocated to insure the highest achievable level of performance in the absence of infection. Parameter ciρ is the marginal cost of resistance and ciλ is the marginal cost of tolerance (in units of performance). Values for both costs are constrained such that the factor within brackets remains positive (ρi ciρ + λi ciλ ≤ 1). A constraint was also set to insure Pit (equation 4) remains positive or null in totally non-tolerant individuals infected with K pathogens: ω ≤ PMax(1 - ρi ciρ)/K. Typical patterns in performance as a function of number of pathogens are shown schematically in Figure 1 to illustrate the different ways resources can be allocated between resistance, tolerance and performance (costs are assumed equal for resistance and tolerance). Performances of hosts allocating none of the available resources to resistance and tolerance are the highest at the start of infection (Pt = 0 = PMax) and decrease as Ct increases. Numbers of pathogens remain below 20 among resistant hosts, and performances of tolerant hosts do not decline with increasing parasite burden. Schematic representation of the impact of resource allocation on performance (P) and number of pathogens (C).
[SUBTITLE] Effectiveness analysis [SUBSECTION] The most profitable strategy, i.e. the one that will insure the lowest number of infected animals or the highest performance of the population, or both, was identified by weighing the allowed extra investments in resistance, tolerance, or both, against the effectiveness of each of these alternatives. Effectiveness was computed by comparing populations under the same infection process but in which animals invest ('yes' population) or not ('no' populations) in resistance, tolerance, or both. To do so, the number of infected hosts (It) and the overall performance (Pt = Σi = 1,N Pit) were followed across time, and the area under the curves of Pt (AUCP) and It (AUCI) were obtained for t = 0 to T with the spline method of the procedure Expand of SAS® [16]. Subsequently, the incremental effects (ΔEI and ΔEP) were computed as the difference between corresponding 'yes' and 'no' populations: ΔEI = AUCIno - AUCIyes, and ΔEP = AUCPyes - AUCPno. Then, the most effective alternative was identified as the one with the highest values for ΔEI and ΔEP. Incremental effects were calculated for different sets of parameters (Table 1). Two transmission rates were considered, with β = 0.1 and β = 0.5, which correspond to a new infection per 10 and 2 effective contacts, respectively. The minimum number of pathogens was set to Cmin = 10 and the maximum to CMax = 500. The growth rate (γ) was set at 0.5 new pathogens for each existing one and the value for μ was set to 0.25 or 1.0 to obtain killing rates equal to half or twice the pathogen growth rate. A convenient value of 100 was given to PMax, while virulence (ω) was set at 0.1 or 0.2 units of performance lost per pathogen present. Individual extra investments in resistance and tolerance were drawn from uniform distributions with different extreme values to have low (U[0, 0.5]), average (U[0, 1]), or high (U[0.9, 1]) levels of investments. Associated costs were drawn from uniform distributions within the allowable limits imposed by equations (4) and (5): U [0, 0.1], U [0.1, 0.2], and U [0.2, 0.5]. Model parameters and their values Finally, effects of low, average and high levels of extra-investments in resistance and tolerance on ΔEI and ΔEP were quantified using fixed linear models (proc GLM on SAS® [16]) that also contained the effects of β, μ, and ω for the characteristics of the pathogen, the averages at the population level of hi, ciρ and ciλ for the characteristics of the hosts, and all first-order interactions. The resulting least-squares estimates were used to identify epidemiological situations for which investments in tolerance, resistance or both were effective.
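To make the simulation procedure above concrete, a minimal Python sketch is given below. It is illustrative only and not the authors' code: the text cites equations (1) to (3) and (4) to (5) without reproducing them, so the functional forms used here (an infection rate of β·It per susceptible host, a logistic-type slow-down of within-host growth near CMax, a killing rate of μ(1 + ρi)Rit, and a performance of PMax(1 − ρi·ciρ − λi·ciλ) − ω(1 − λi)·Cit) are assumptions chosen to match the verbal description. Parameter values are those stated in the text (Table 1); all function and variable names are hypothetical.

import random

# Parameter values taken from the text (Table 1); everything else is an assumption for illustration.
N, T = 30, 300            # herd size, length of one replicate (time units)
BETA, GAMMA = 0.5, 0.5    # transmission rate, pathogen growth rate
MU, OMEGA = 0.25, 0.1     # maximum killing per unit of R_it, virulence
C_MIN, C_MAX, P_MAX = 10, 500, 100.0

def performance(C_i, rho_i, lam_i, c_rho_i, c_lam_i):
    """Assumed form of equations (4)-(5): baseline reduced by the costs of the two defences,
    losses proportional to pathogen load and damped by tolerance."""
    baseline = P_MAX * (1.0 - rho_i * c_rho_i - lam_i * c_lam_i)
    return baseline - OMEGA * (1.0 - lam_i) * C_i

def simulate_replicate(rho, lam, c_rho, c_lam, h, rng):
    """One replicate of the within/between-host process for N hosts, Gillespie-style.
    rho, lam, c_rho, c_lam, h are per-host lists: investments, costs and immune growth rates."""
    C = [0] * N                       # pathogen load per host
    R = [0.0] * N                     # generic immune index per host
    for i in rng.sample(range(N), 2): # two hosts initially infected
        C[i] = C_MIN
    t = 0.0
    P0 = sum(performance(C[i], rho[i], lam[i], c_rho[i], c_lam[i]) for i in range(N))
    history = [(t, 2, P0)]            # (time, infected hosts I_t, overall performance P_t)
    while t < T:
        infectious = sum(1 for c in C if c >= C_MIN)
        events = []                   # (kind, host, rate)
        for i in range(N):
            if C[i] < C_MIN:
                events.append(("infect", i, BETA * infectious))                 # eq. (1), assumed form
            else:
                events.append(("grow", i, GAMMA * C[i] * (1 - C[i] / C_MAX)))   # eq. (2), assumed logistic slow-down
                events.append(("kill", i, MU * (1 + rho[i]) * R[i]))            # eq. (3), assumed form
        total = sum(rate for _, _, rate in events)
        if total <= 0:
            break
        dt = rng.expovariate(total)   # exponential waiting time between events
        t += dt
        for i in range(N):            # R_it grows at a constant rate h_i while the host is infected
            if C[i] >= C_MIN:
                R[i] += h[i] * dt
        pick = rng.uniform(0.0, total)
        for kind, i, rate in events:  # choose one event with probability proportional to its rate
            pick -= rate
            if pick <= 0.0:
                break
        if kind == "infect":
            C[i] = C_MIN
        elif kind == "grow":
            C[i] = min(C[i] + C_MIN, C_MAX)
        else:                         # "kill": Cmin pathogens removed; clearance once load falls below Cmin
            C[i] -= C_MIN
            if C[i] < C_MIN:
                C[i], R[i] = 0, 0.0
        I_t = sum(1 for c in C if c >= C_MIN)
        P_t = sum(performance(C[i], rho[i], lam[i], c_rho[i], c_lam[i]) for i in range(N))
        history.append((t, I_t, P_t))
    return history                    # list of (time, I_t, P_t)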
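Building on the sketch above (it reuses simulate_replicate and N), the following snippet illustrates how AUCI, AUCP and the incremental effects ΔEI and ΔEP could be obtained for matched 'yes' and 'no' populations. A plain trapezoidal rule stands in for the spline interpolation of SAS PROC EXPAND used in the paper, a single replicate stands in for the 50 replicates per scenario, and the final GLM step on the ΔE values is not reproduced.

import random

def auc(history, col):
    """Trapezoidal area under column col (1 = I_t, 2 = P_t) of a replicate's history."""
    return sum(0.5 * (a[col] + b[col]) * (b[0] - a[0]) for a, b in zip(history, history[1:]))

rng = random.Random(1)
h     = [rng.uniform(0.0, 0.1) for _ in range(N)]   # immune growth rates h_i
c_rho = [rng.uniform(0.0, 0.1) for _ in range(N)]   # costs of resistance
c_lam = [rng.uniform(0.0, 0.1) for _ in range(N)]   # costs of tolerance
zeros = [0.0] * N
inv   = [rng.uniform(0.0, 1.0) for _ in range(N)]   # 'average' level of extra investment, U[0, 1]

no_pop  = simulate_replicate(zeros, zeros, c_rho, c_lam, h, rng)   # 'no' population
yes_pop = simulate_replicate(inv,   inv,   c_rho, c_lam, h, rng)   # 'yes' population (resistance and tolerance)

delta_EI = auc(no_pop, 1) - auc(yes_pop, 1)   # fewer infected host-time units is better
delta_EP = auc(yes_pop, 2) - auc(no_pop, 2)   # more cumulative performance is better
print(f"dEI = {delta_EI:.0f}, dEP = {delta_EP:.0f}")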
null
null
null
null
[ "Background", "Pathogen dynamics", "Individual performance", "Effectiveness analysis", "Results", "Within-host pathogen dynamics", "Between-host pathogen dynamics", "Effectiveness analyses", "Discussion", "Validation of the model", "Limits and assumptions of the model", "Effectiveness analyses", "Conclusions", "Competing interests" ]
[ "The breeding objective in most livestock species is to increase profit by improving performance efficiency. One way to reach this objective is to improve the animals' health, for example, through the implementation of appropriate management methods (e.g. chemotherapy, vaccination, and control of disease vectors). A more sustainable method consists in taking advantage, by selective breeding, of the within-breed variation that exists in the mechanisms of defenses against infectious pathogens [1]. Indeed, hosts have evolved resistance and tolerance defenses [2], thus breeders may choose, as progenitors, animals with the highest levels of resistance, tolerance, or both. One the one hand, resistance is the ability of the host to reduce the success of infection or to increase the rate of clearance of the pathogens. On the other hand, tolerance is the ability to reduce the detrimental effects of the pathogens on the performances of the hosts, either directly or by limiting immunopathological mechanisms [3]. The rate of transmission diminishes naturally among resistant hosts but not necessarily among tolerant ones, as these harbor the pathogen with no or moderate loss in performance [4].\nResistance and tolerance are associated with fitness costs, which arise from the diversion of limiting resources away from biological processes related to performance [5]. If these costs are too high, they may outweigh the effectiveness of the chosen strategy. Direct evidence of such costs can be found in experiments in insects [6], rainbow trout [7], crustaceans [8], wild birds [9] and mice [10].\nTo decide whether improving resistance, tolerance, neither, or both is the most effective strategy, it is proposed to (1) characterize the dynamics of the pathogens within and between hosts in the population under study, (2) evaluate the impact of the infection on the performances of the population, and (3) choose the most effective strategy. The goal of this study is to illustrate the methodology with a non-lethal micro-parasitic disease in a population where hosts have different levels of resistance to multiplication of the pathogen and different levels of tolerance to damages induced directly by the pathogens.", "The model chosen here to depict the dynamics of transmission of the infection in a herd is a stochastic version of the SIS (S for susceptible, I for infected) model for the spread of a disease in a closed population of N individuals [11]. This model is appropriate for infections with no permanent immunity after recovery, i.e. individuals are susceptible to the infection, potentially get infected, may recover and become susceptible again. The time-scale of the disease process is assumed to be short compared to the life length of the host and no demographic turnover (natural birth or death) is considered. The area occupied by parasites and hosts is constant, so that numbers and densities coincide. There is only a single non-evolving pathogen species within infected hosts. Once infected, hosts are immediately able to infect other individuals (no latent period). Within the host, the number of pathogens increases following a sigmoidal growth curve and is directly related to the number of immune constituents of the host response to the pathogen, with no distinction between innate and specific immunity. 
Recovered hosts are as susceptible to infection as naïve hosts and re-exposure does not accelerate development of the disease.\nIn mathematical terms, the process is described by a continuous time Markov chain, {Cit; t = 0 to T, i = 1 to N}, where Cit denotes the number of pathogens in the ith host at time t. Units of time are chosen arbitrarily. The chain has three transition probabilities (over a small time interval Δt) reflecting the three events, i.e. invasion of a new host by the pathogen, its multiplication and its killing by the immune response of the host.\nThe first transition probability is the probability the ith susceptible host is infected by Cmin pathogens:(1)\nwhere o(Δt) tends to 0 when Δt is small, Cmin is the minimum number of pathogens necessary to have infection, β is the per-capita rate of successful transmission of Cmin pathogens from an infectious host to a susceptible host upon contact with an infectious individual and during Δt, and It is the number of infectious hosts with which the ith susceptible has contact.\nThe second transition probability is the probability that a pathogen in an infected host gives birth to Cmin new offspring, such that this host becomes infectious:(2)\nwhere γ is the pathogen growth rate. Right after becoming infected, pathogen growth in a host is approximately exponential but it slows down as it reaches a maximum (CMax), at which it stops.\nThe last transition probability is the probability that Cmin pathogens are killed within the host:(3)\nThis equation follows from the dynamics between pathogens and immune factors, as observed in experimental studies [12,13]. Parameter μ represents the maximum number of pathogens killed for each unit of Rit, with Rit being a generic index related to the number, at time t, of the different types of immune factors specific for that pathogen. Because the main interest is on the number of pathogens, the complexity of the immune response is greatly simplified when Rit increases at a rate (hi) that is constant across time. The scaling parameter ρi varies from 0 to 1 and represents the extra investment in resistance of the ith host with respect to μ. When Cit drops below Cmin, the infection is assumed to be cleared.\nThe Markov chain was simulated using the Gillespie algorithm [14], which essentially uses exponential waiting times between events. For all simulations, it was assumed that two individuals in the population were initially infected. Simulation steps were executed until t reaches T units of time (= one replicate) and repeated over 50 replicates. Each cycle took around 4 hours to complete, so the population size was limited at 30 individuals, which is the average size of most dairy herds in the Walloon region of Belgium.", "The performance of an infected host decreases proportionally to the number of pathogens (Cit) and to investments of the host in tolerance [15]:(4)\nwhere Pit is the performance of the ith host at time t, when it is infected with Cit pathogens, ω is the maximum amount of performance lost per pathogen (virulence). The parameter λi is a scaling parameter representing the extra investment in tolerance. If λi = 1, the host is completely tolerant and produces at a level identical to the one without infection. If λi = 0, the host is not tolerant to the deleterious effects of the pathogen.\nHosts invest part of their constitutive resources to resist or tolerate the pathogens and costs are assumed proportional to the investments in both types of defense. 
They are combined in an additive way:(5)\nwhere PMax is the totality of the resources available to the host to insure performance (e.g., production, reproduction, work) and to cope with an infection (resistance and tolerance). If no extra-investments are put in resistance and tolerance, all resources are allocated to insure the highest achievable level of performance in the absence of infection. Parameter ciρ is the marginal cost of resistance and ciλ is the marginal cost of tolerance (in units of performance). Values for both costs are constrained such that the factor within brackets remains positive (ρi ciρ + λi ciλ ≤ 1). A constraint was also set to insure Pit (equation 4) remains positive or null in totally non-tolerant individuals infected with K pathogens: ω ≤ PMax(1 - ρi ciρ)/K.\nTypical patterns in performance as a function of number of pathogens are shown schematically in Figure 1 to illustrate the different ways resources can be allocated between resistance, tolerance and performance (costs are assumed equal for resistance and tolerance). Performances of hosts allocating none of the available resources to resistance and tolerance are the highest at the start of infection (Pt = 0 = PMax) and decrease as Ct increases. Numbers of pathogens remain below 20 among resistant hosts, and performances of tolerant hosts do not decline with increasing parasite burden.\nSchematic representation of the impact of resource allocation on performance (P) and number of pathogens (C).", "The most profitable strategy, i.e. the one that will insure the lowest number of infected animals or the highest performance of the population, or both, was identified by weighing the allowed extra investments in resistance, tolerance, or both, against the effectiveness of each of these alternatives.\nEffectiveness was computed by comparing populations under the same infection process but in which animals invest ('yes' population) or not ('no' populations) in resistance, tolerance, or both. To do so, the number of infected hosts (It) and the overall performance (Pt = Σi = 1,N Pit) were followed across time, and the area under the curves of Pt (AUCP) and It (AUCI) were obtained for t = 0 to T with the spline method of the procedure Expand of SAS® [16]. Subsequently, the incremental effects (ΔEI and ΔEP) were computed as the difference between corresponding 'yes' and 'no' populations: ΔEI = AUCIno - AUCIyes, and ΔEP = AUCPyes - AUCPno. Then, the most effective alternative was identified as the one with the highest values for ΔEI and ΔEP.\nIncremental effects were calculated for different sets of parameters (Table 1). Two transmission rates were considered, with β = 0.1 and β = 0.5, which correspond to a new infection per 10 and 2 effective contacts, respectively. The minimum number of pathogens was set to Cmin = 10 and the maximum to CMax = 500. The growth rate (γ) was set at 0.5 new pathogens for each existing one and the value for μ was set to 0.25 or 1.0 to obtain killing rates equal to half or twice the pathogen growth rate. A convenient value of 100 was given to PMax, while virulence (ω) was set at 0.1 or 0.2 units of performance lost per pathogen present. Individual extra investments in resistance and tolerance were drawn from uniform distributions with different extreme values to have low (U[0, 0.5]), average (U[0, 1]), or high (U[0.9, 1]) levels of investments. 
Associated costs were drawn from uniform distributions within the allowable limits imposed by equations (4) and (5): U [0, 0.1], U [0.1, 0.2], and U [0.2, 0.5].\nModel parameters and their values\nFinally, effects of low, average and high levels of extra-investments in resistance and tolerance on ΔEI and ΔEP were quantified using fixed linear models (proc GLM on SAS® [16]) that also contained the effects of β, μ, and ω for the characteristics of the pathogen, the averages at the population level of hi, ciρ and ciλ for the characteristics of the hosts, and all first-order interactions. The resulting least-squares estimates were used to identify epidemiological situations for which investments in tolerance, resistance or both were effective.", "[SUBTITLE] Within-host pathogen dynamics [SUBSECTION] The number of pathogens within a host is shown in Figure 2 for 10 animals in a 'no' population with the following characteristics: β = 0.5; μ = 0.1; ω = 0.1; hi ~ U [0, 0.001], and ρi = λi = 0 for i = 1 to N. The duration and the number of pathogens generated were approximately the same for all animals because they depended on γ = 0.5 (equation 2). However, the stochastic nature of the simulation resulted in a cloud of points for each Cit. On average, CMax was reached after 300 time units, so it was used as the upper limit for T because the Gillespie algorithm was slow to converge and because T = 300 insured the steady value CMax was reached among completely non-resistant hosts.\nNumber of pathogens across time for 10 completely susceptible hosts.\nIn Figure 3, the dynamics in Cit and Pit are shown for four individuals with different investments and costs of resistance and tolerance, and for an infection with β = 0.5, γ = 0.5, ω = 0.2, hi ~ U [0, 0.1], and μ = 0.25. When both ρi and λi were high, Cit remained low and Pit did not change much across time (individuals □ or +). Conversely, when ρi was low, Cit increased up to its maximum and the associated individual performance decreased (individual ○ in Figure 3). Between these extremes, a wide range of different situations occurred. Initial performance (P0) varied according to the costs and extra investments in tolerance and resistance (equation 3).\nWithin-host dynamics for the number of pathogens and performances of four individuals with different levels of resistance (ρ) and tolerance (λ) Associated costs are in parentheses.\nThe number of pathogens within a host is shown in Figure 2 for 10 animals in a 'no' population with the following characteristics: β = 0.5; μ = 0.1; ω = 0.1; hi ~ U [0, 0.001], and ρi = λi = 0 for i = 1 to N. The duration and the number of pathogens generated were approximately the same for all animals because they depended on γ = 0.5 (equation 2). However, the stochastic nature of the simulation resulted in a cloud of points for each Cit. On average, CMax was reached after 300 time units, so it was used as the upper limit for T because the Gillespie algorithm was slow to converge and because T = 300 insured the steady value CMax was reached among completely non-resistant hosts.\nNumber of pathogens across time for 10 completely susceptible hosts.\nIn Figure 3, the dynamics in Cit and Pit are shown for four individuals with different investments and costs of resistance and tolerance, and for an infection with β = 0.5, γ = 0.5, ω = 0.2, hi ~ U [0, 0.1], and μ = 0.25. When both ρi and λi were high, Cit remained low and Pit did not change much across time (individuals □ or +). 
Conversely, when ρi was low, Cit increased up to its maximum and the associated individual performance decreased (individual ○ in Figure 3). Between these extremes, a wide range of different situations occurred. Initial performance (P0) varied according to the costs and extra investments in tolerance and resistance (equation 3).\nWithin-host dynamics for the number of pathogens and performances of four individuals with different levels of resistance (ρ) and tolerance (λ) Associated costs are in parentheses.\n[SUBTITLE] Between-host pathogen dynamics [SUBSECTION] The number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).\nThe number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). 
This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).\n[SUBTITLE] Effectiveness analyses [SUBSECTION] Values of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1\nValues of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). 
Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1", "The number of pathogens within a host is shown in Figure 2 for 10 animals in a 'no' population with the following characteristics: β = 0.5; μ = 0.1; ω = 0.1; hi ~ U [0, 0.001], and ρi = λi = 0 for i = 1 to N. The duration and the number of pathogens generated were approximately the same for all animals because they depended on γ = 0.5 (equation 2). However, the stochastic nature of the simulation resulted in a cloud of points for each Cit. On average, CMax was reached after 300 time units, so it was used as the upper limit for T because the Gillespie algorithm was slow to converge and because T = 300 insured the steady value CMax was reached among completely non-resistant hosts.\nNumber of pathogens across time for 10 completely susceptible hosts.\nIn Figure 3, the dynamics in Cit and Pit are shown for four individuals with different investments and costs of resistance and tolerance, and for an infection with β = 0.5, γ = 0.5, ω = 0.2, hi ~ U [0, 0.1], and μ = 0.25. When both ρi and λi were high, Cit remained low and Pit did not change much across time (individuals □ or +). Conversely, when ρi was low, Cit increased up to its maximum and the associated individual performance decreased (individual ○ in Figure 3). Between these extremes, a wide range of different situations occurred. Initial performance (P0) varied according to the costs and extra investments in tolerance and resistance (equation 3).\nWithin-host dynamics for the number of pathogens and performances of four individuals with different levels of resistance (ρ) and tolerance (λ) Associated costs are in parentheses.", "The number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). 
This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).", "Values of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1", "A general framework is proposed to provide insights into the effects of improved resistance and tolerance on the performance and size of an infected population. A clear distinction is made between effects of resistance on multiplication of the pathogen and effects of tolerance on damages induced by the pathogens. Hosts differ in the costs they incur to insure their particular levels of resistance and tolerance, and in the intensity of the response they mount against pathogens. Pathogens differ in their speed of spread between hosts, in virulence, and in the intensity of the response they elicit in the hosts. However, to be useful, the model must be validated and its limits and assumptions must be clarified, as will be discussed in the following, with examples mostly related to bovine mastitis.\n[SUBTITLE] Validation of the model [SUBSECTION] Model validation usually takes the form of a comparison between model outputs and real data but this was not possible here because reliable field data are scarce, difficult to measure or imprecisely defined [17,18]. For example, estimates of costs associated with resistance and tolerance are limited in animals, in contrast to plants (see review by [19]). Tolerance has often been measured imprecisely as the overall ability to maintain fitness in the face of infection, irrespective of parasite burden. For example, cows infected with E. 
coli have been classified as moderate and severe responders according to milk production loss in the non-challenged quarters [20]. In this case, it is in reality a measure of the combined effects of resistance and tolerance [4]. It was also a deliberate choice to present a generic model because parameters values are different among disease and host populations, so model outputs for one specific disease may not apply to another disease. For example, transmission rates have been estimated at 0.20 to 1.50 per 1000 quarter-days at risk for S. uberis mastitis [21] but at 7 to 50 for S. aureus mastitis [22]. Similarly, killing rates have been estimated at 0.67 to 1.33 × 10-8 mL/cell per min in milk of cows [23,24] and at 1.64 to 1.76 × 10-8 mL/neutrophil per min in dermis of rats inoculated with E. coli [13]. Model outputs will also depend on the virulence of the invading pathogens (ω), as exemplified by the different amount of milk loss at the first occurrence of clinical mastitis depending on bacteria species [25], and on the type of performance (e.g., yield, quality of products, or capacity for work) considered.\nAs an alternative form of validation, the dynamics of Cit and Pit at the individual, and of It and Pt at the herd levels were evaluated. For instance, as expected, Cit was lowest in resistant and Pit was highest in tolerant hosts (Figure 3), Pt remained stable across time when tolerance of the hosts was at its highest level, and It decreased faster when resistance of hosts was at its highest level (Figure 4). Results from the analysis of variance also validated the model. The null value of ΔEI for hi~U [0, 0.001] was sensible because, at this low rate, pathogens cannot be killed, regardless of how much was invested in resistance. The fact that β did not affect ΔEI may also be explained by the same transmission in 'yes' and 'no' populations, so AUCIyes was close to AUCIno for any value of β. As a final example, ΔEP was higher for β = 0.5 than for β = 0.1 because only few animals got infected with β = 0.1, so improving tolerance of these few hosts was not beneficial at the population level.\nModel validation usually takes the form of a comparison between model outputs and real data but this was not possible here because reliable field data are scarce, difficult to measure or imprecisely defined [17,18]. For example, estimates of costs associated with resistance and tolerance are limited in animals, in contrast to plants (see review by [19]). Tolerance has often been measured imprecisely as the overall ability to maintain fitness in the face of infection, irrespective of parasite burden. For example, cows infected with E. coli have been classified as moderate and severe responders according to milk production loss in the non-challenged quarters [20]. In this case, it is in reality a measure of the combined effects of resistance and tolerance [4]. It was also a deliberate choice to present a generic model because parameters values are different among disease and host populations, so model outputs for one specific disease may not apply to another disease. For example, transmission rates have been estimated at 0.20 to 1.50 per 1000 quarter-days at risk for S. uberis mastitis [21] but at 7 to 50 for S. aureus mastitis [22]. Similarly, killing rates have been estimated at 0.67 to 1.33 × 10-8 mL/cell per min in milk of cows [23,24] and at 1.64 to 1.76 × 10-8 mL/neutrophil per min in dermis of rats inoculated with E. coli [13]. 
Model outputs will also depend on the virulence of the invading pathogens (ω), as exemplified by the different amount of milk loss at the first occurrence of clinical mastitis depending on bacteria species [25], and on the type of performance (e.g., yield, quality of products, or capacity for work) considered.\nAs an alternative form of validation, the dynamics of Cit and Pit at the individual, and of It and Pt at the herd levels were evaluated. For instance, as expected, Cit was lowest in resistant and Pit was highest in tolerant hosts (Figure 3), Pt remained stable across time when tolerance of the hosts was at its highest level, and It decreased faster when resistance of hosts was at its highest level (Figure 4). Results from the analysis of variance also validated the model. The null value of ΔEI for hi~U [0, 0.001] was sensible because, at this low rate, pathogens cannot be killed, regardless of how much was invested in resistance. The fact that β did not affect ΔEI may also be explained by the same transmission in 'yes' and 'no' populations, so AUCIyes was close to AUCIno for any value of β. As a final example, ΔEP was higher for β = 0.5 than for β = 0.1 because only few animals got infected with β = 0.1, so improving tolerance of these few hosts was not beneficial at the population level.\n[SUBTITLE] Limits and assumptions of the model [SUBSECTION] The strategy to build this model followed the current trend in epidemiology to begin with simple models and to add complexity only if the model fails to reproduce plausible epidemiological behaviors [26]. Several assumptions were made, some of which have been confirmed previously. One assumption was that available resources are partitioned between performance, resistance and tolerance. Indeed, experiences in poultry [27] and other species [28] have shown that individuals differ in their ability to allocate resources to their needs. This is also one of the factors evoked to explain the increased susceptibility of high yielding dairy cows to mastitis [29]. Lack of resources may lead to vicious cycles because hosts in poor condition are more susceptible to higher pathogen occurrence and infection intensity, which further weaken the condition of the host [30]. Another assumption is that investments in resistance and tolerance are linked through the constraint in equation 4 and this has been confirmed by [2], where a negative relationship was found between resistance and tolerance in rodent malaria.\nSome assumptions of the model could also be relaxed with more complex equations that have been used in models examining the effects of mixed infection [21], infectious dose [31] and vaccination/treatment [32] on transmission dynamics. Resistance could vary as a function of exposure to disease [33]. Availability of external resource can vary across time, as in Doesch-Wilson et al. [34]. In the model used here, individual infectious contacts were assumed independent and at random but models with heterogeneous mixing [35] and that consider genetic susceptibility among relatives [36,37] may be more appropriate. The course of infection within hosts can also be modelled more accurately, in line with the characteristics of the disease under study. For example, models with increasing complexity have been proposed to describe the fate of mastitis-causing E. coli in infected cows [23,24]. 
Models for co-evolutionary mechanisms between host and pathogens should be considered [38] if the time scale is longer than the one used in this study.

Other assumptions may be difficult to verify. For such assumptions, a set of arbitrary standard values for the parameters and different forms for the equations should be tested in so-called sensitivity analyses. For example, the amount of loss in performance was assumed directly associated with pathogen load, although the most dramatic changes may occur at low or subclinical levels of disease, with diminishing effects of each additional parasite [39].

Effectiveness analyses

Two results from the effectiveness analyses are noteworthy, although they must be further evaluated in empirical studies. One is that the range of possible values of ΔEP and ΔEI for the different input parameters (Figure 5) is wide.
This emphasizes the need to accurately model the infection process and its impact on the population before deciding on the most effective strategy. For example, increasing host tolerance is theoretically less effective for improving the performance of populations infected with pathogens that cause minor rather than major mastitis. Indeed, pathogens causing minor mastitis are less virulent (ω) and less transmissible (β) than those causing major mastitis [40], so the modest advantages of high tolerance would be offset by the associated costs. Likewise, selecting for better resistance to mastitis would be effective in restricting the size of an epidemic in the population if animals are infected with bacterial strains that are likely to be killed by neutrophils [41], i.e. μ > 0 in equation 3.

Another noteworthy observation is that least-squares means for ΔEP were highest in highly tolerant populations, while ΔEI did not change between tolerance levels. This suggests that selection for increased tolerance would be effective under commercial constraints. This is different from models applied to natural populations, which predict an increase in the overall incidence of infection as the frequency of tolerant hosts increases [38]. In natural populations, tolerant hosts survive longer than non-tolerant ones, thus keeping the disease longer in the population and increasing the risk of exposure to disease. Here, the model is for an endemic disease in a population under commercial constraints, in which non-tolerant animals are kept even if they are sick (no natural death, no culling). Consequently, the risk of exposure to disease does not change, even if the pathogen population size (C) increases.

In general, little is known about tolerance mechanisms in animals, but their study should provide a good foundation for ensuring health over the long term. Indeed, in the long term, the advantages of being tolerant should be greater than those associated with resistance. For example, in non-evolving pathogen populations, the advantages of being resistant decrease in parallel with the decline in disease frequency, while the advantages of being tolerant are maintained, or even increase if disease frequency rises [42]. In evolving pathogen populations, improved host resistance will pressure pathogens to evolve better mechanisms to evade host defense processes, potentially resulting in cyclical co-evolutionary dynamics. In contrast, tolerance does not interact directly with the pathogen and should not induce selection for counter-adaptations, although elevated levels of tolerance may allow pathogens to be more virulent [43].

Practically, in bovine mastitis, the degree of tolerance of an animal can be estimated from the amount of milk lost per bacterium present in the quarter (CFU), using a model adapted from the one proposed by [2] for inbred strains of laboratory mice:

yijt = (bj + Bij) Iijt + eijt,

where yijt is the milk yield at time t of the ith cow (yield corrected for fixed and non-genetic random effects estimated from the genetic evaluation model) infected with Iijt, i.e. the bacterial load for bacterial species j; bj is the average tolerance against bacteria of strain j; Bij describes individual random deviations from the average tolerance, with Bij ~ IID N(0, σ²b); and eijt are residuals with eijt ~ N(0, Ve), where Ve accounts for the non-independence between repeated eijt.
Such information could be collected from quarters of experimentally infected cows, as was done in the study of [44].
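To make the estimation step concrete, the sketch below (Python) fits the regression given above by ordinary least squares, cow by cow: the slope of corrected milk yield on bacterial load is the estimated tolerance of that cow, and its deviation from the average slope plays the role of Bij. The toy records, the column layout and the two-stage shortcut (per-cow slopes, then centring on their mean) are illustrative assumptions; the proposal above calls for a mixed model with random slopes and correlated residuals, which this sketch does not implement.

```python
import numpy as np

# Toy records: (cow id, bacterial load Iijt, corrected milk yield yijt in kg); values are invented.
records = [
    ("cow1", 0.0, 30.1), ("cow1", 2e4, 29.0), ("cow1", 8e4, 25.8),
    ("cow2", 0.0, 31.0), ("cow2", 3e4, 30.6), ("cow2", 9e4, 29.9),
    ("cow3", 0.0, 28.4), ("cow3", 1e4, 26.9), ("cow3", 7e4, 21.5),
]

def tolerance_slope(loads, yields):
    """Least-squares slope of corrected yield on bacterial load (milk change per unit of load).
    A slope close to zero indicates a tolerant cow; a strongly negative slope, a non-tolerant one."""
    loads = np.asarray(loads, dtype=float)
    yields = np.asarray(yields, dtype=float)
    design = np.column_stack([np.ones_like(loads), loads])   # intercept + load
    coef, *_ = np.linalg.lstsq(design, yields, rcond=None)
    return coef[1]

cows = sorted({cow for cow, _, _ in records})
slopes = {cow: tolerance_slope([l for c, l, _ in records if c == cow],
                               [y for c, _, y in records if c == cow]) for cow in cows}

b_avg = float(np.mean(list(slopes.values())))                # analogue of the average tolerance bj
for cow, slope in slopes.items():
    print(f"{cow}: slope = {slope:.2e} kg per unit load, deviation from mean (Bij analogue) = {slope - b_avg:+.2e}")
```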
Conclusions

In summary, this paper presents a novel epidemic model to explore the effects of tolerance and resistance on performance and disease spread in a population. Although more research is necessary to validate the model and more empirical studies are needed to obtain values for the input parameters, the analytic approach can be used to find optimal strategies of disease control in commercial populations.

Competing interests

The authors declare that they have no competing interests.
Background

The breeding objective in most livestock species is to increase profit by improving performance efficiency. One way to reach this objective is to improve the animals' health, for example, through the implementation of appropriate management methods (e.g. chemotherapy, vaccination, and control of disease vectors). A more sustainable method consists in taking advantage, by selective breeding, of the within-breed variation that exists in the mechanisms of defense against infectious pathogens [1]. Indeed, hosts have evolved resistance and tolerance defenses [2], thus breeders may choose, as progenitors, animals with the highest levels of resistance, tolerance, or both. On the one hand, resistance is the ability of the host to reduce the success of infection or to increase the rate of clearance of the pathogens. On the other hand, tolerance is the ability to reduce the detrimental effects of the pathogens on the performance of the host, either directly or by limiting immunopathological mechanisms [3]. The rate of transmission diminishes naturally among resistant hosts but not necessarily among tolerant ones, as these harbor the pathogen with no or moderate loss in performance [4].

Resistance and tolerance are associated with fitness costs, which arise from the diversion of limiting resources away from biological processes related to performance [5]. If these costs are too high, they may outweigh the effectiveness of the chosen strategy. Direct evidence of such costs can be found in experiments in insects [6], rainbow trout [7], crustaceans [8], wild birds [9] and mice [10].

To decide whether improving resistance, tolerance, neither, or both is the most effective strategy, it is proposed to (1) characterize the dynamics of the pathogens within and between hosts in the population under study, (2) evaluate the impact of the infection on the performance of the population, and (3) choose the most effective strategy. The goal of this study is to illustrate the methodology with a non-lethal micro-parasitic disease in a population where hosts have different levels of resistance to multiplication of the pathogen and different levels of tolerance to the damage induced directly by the pathogens.

Methods

Pathogen dynamics

The model chosen here to depict the dynamics of transmission of the infection in a herd is a stochastic version of the SIS (S for susceptible, I for infected) model for the spread of a disease in a closed population of N individuals [11]. This model is appropriate for infections with no permanent immunity after recovery, i.e. individuals are susceptible to the infection, potentially get infected, may recover and become susceptible again. The time-scale of the disease process is assumed to be short compared to the life length of the host, and no demographic turnover (natural birth or death) is considered. The area occupied by parasites and hosts is constant, so that numbers and densities coincide. There is only a single non-evolving pathogen species within infected hosts. Once infected, hosts are immediately able to infect other individuals (no latent period). Within the host, the number of pathogens increases following a sigmoidal growth curve and is directly related to the number of immune constituents of the host response to the pathogen, with no distinction between innate and specific immunity.
Recovered hosts are as susceptible to infection as naïve hosts, and re-exposure does not accelerate development of the disease.

In mathematical terms, the process is described by a continuous-time Markov chain, {Cit; t = 0 to T, i = 1 to N}, where Cit denotes the number of pathogens in the ith host at time t. Units of time are chosen arbitrarily. The chain has three transition probabilities (over a small time interval Δt) reflecting the three events, i.e. invasion of a new host by the pathogen, its multiplication, and its killing by the immune response of the host.

The first transition probability is the probability that the ith susceptible host is infected by Cmin pathogens (equation 1), where o(Δt) tends to 0 when Δt is small, Cmin is the minimum number of pathogens necessary for infection, β is the per-capita rate of successful transmission of Cmin pathogens from an infectious host to a susceptible host upon contact with an infectious individual during Δt, and It is the number of infectious hosts with which the ith susceptible host has contact.

The second transition probability is the probability that a pathogen in an infected host gives birth to Cmin new offspring, such that this host becomes infectious (equation 2), where γ is the pathogen growth rate. Right after a host becomes infected, pathogen growth is approximately exponential, but it slows down as it approaches a maximum (CMax), at which it stops.

The last transition probability is the probability that Cmin pathogens are killed within the host (equation 3). This equation follows from the dynamics between pathogens and immune factors, as observed in experimental studies [12,13]. Parameter μ represents the maximum number of pathogens killed for each unit of Rit, with Rit being a generic index related to the number, at time t, of the different types of immune factors specific for that pathogen. Because the main interest is in the number of pathogens, the complexity of the immune response is greatly simplified: Rit increases at a rate (hi) that is constant across time. The scaling parameter ρi varies from 0 to 1 and represents the extra investment in resistance of the ith host with respect to μ. When Cit drops below Cmin, the infection is assumed to be cleared.

The Markov chain was simulated using the Gillespie algorithm [14], which essentially uses exponential waiting times between events. For all simulations, it was assumed that two individuals in the population were initially infected. Simulation steps were executed until t reached T units of time (= one replicate) and repeated over 50 replicates. Each cycle took around 4 hours to complete, so the population size was limited to 30 individuals, which is the average size of most dairy herds in the Walloon region of Belgium.
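The three transition rates are sufficient to drive a simulation. The sketch below (Python) is a minimal illustration of such a Gillespie-type event loop and is not the code used in the study: since equations 1-3 are not reproduced in the text, the logistic slow-down of pathogen growth towards CMax, the killing rate written as μ(1 + ρi)Rit, and the growth of Rit only while a host is infected are assumptions; parameter values follow Table 1, and the demonstration run uses a short time horizon rather than T = 300.

```python
import random

# Parameter values follow Table 1; the functional forms used for growth and killing are assumptions.
N, T = 30, 300
BETA, GAMMA, MU = 0.5, 0.5, 0.25
C_MIN, C_MAX = 10, 500

def simulate_sis(rho, h, t_end=T, seed=1):
    """One replicate of the stochastic SIS herd model; returns (time, number of infected hosts) samples."""
    rng = random.Random(seed)
    C = [0] * N                      # pathogen count per host (Cit)
    R = [0.0] * N                    # generic immune index per host (Rit)
    C[0] = C[1] = C_MIN              # two hosts initially infected
    t, trajectory = 0.0, [(0.0, 2)]
    while t < t_end:
        infected = sum(c >= C_MIN for c in C)
        events = []                  # (rate, host index, change in pathogen count)
        for i in range(N):
            if C[i] < C_MIN and infected > 0:
                events.append((BETA * infected, i, +C_MIN))            # invasion of a susceptible host
            elif C[i] >= C_MIN:
                growth = GAMMA * C[i] * max(0.0, 1.0 - C[i] / C_MAX)   # assumed logistic slow-down towards CMax
                if growth > 0.0:
                    events.append((growth, i, +C_MIN))
                kill = MU * (1.0 + rho[i]) * R[i]                      # assumed scaling of killing by extra resistance
                if kill > 0.0:
                    events.append((kill, i, -C_MIN))
        total_rate = sum(rate for rate, _, _ in events)
        if total_rate == 0.0:
            break
        dt = rng.expovariate(total_rate)                               # Gillespie waiting time
        t += dt
        for i in range(N):                                             # Rit grows at a constant rate hi while infected
            if C[i] >= C_MIN:
                R[i] += h[i] * dt
        pick = rng.uniform(0.0, total_rate)
        for rate, i, dc in events:
            pick -= rate
            if pick <= 0.0:
                C[i] = min(C_MAX, max(0, C[i] + dc))
                if C[i] < C_MIN:
                    C[i], R[i] = 0, 0.0                                # infection cleared
                break
        trajectory.append((t, sum(c >= C_MIN for c in C)))
    return trajectory

rho = [random.random() for _ in range(N)]           # extra investment in resistance per host
h = [random.uniform(0.0, 0.1) for _ in range(N)]    # growth rate of the immune index per host
run = simulate_sis(rho, h, t_end=30)                # short horizon for a quick demonstration
print(f"events simulated: {len(run) - 1}, infected hosts at the end: {run[-1][1]}")
```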
Individual performance

The performance of an infected host decreases proportionally to the number of pathogens (Cit) and to the investment of the host in tolerance [15] (equation 4), where Pit is the performance of the ith host at time t when it is infected with Cit pathogens, and ω is the maximum amount of performance lost per pathogen (virulence). The parameter λi is a scaling parameter representing the extra investment in tolerance.
If λi = 1, the host is completely tolerant and produces at a level identical to the one without infection. If λi = 0, the host is not tolerant to the deleterious effects of the pathogen.

Hosts invest part of their constitutive resources to resist or tolerate the pathogens, and costs are assumed proportional to the investments in both types of defense. They are combined in an additive way (equation 5), where PMax is the totality of the resources available to the host to ensure performance (e.g., production, reproduction, work) and to cope with an infection (resistance and tolerance). If no extra investments are put into resistance and tolerance, all resources are allocated to ensure the highest achievable level of performance in the absence of infection. Parameter ciρ is the marginal cost of resistance and ciλ is the marginal cost of tolerance (in units of performance). Values for both costs are constrained such that the factor within brackets remains positive (ρi ciρ + λi ciλ ≤ 1). A constraint was also set to ensure that Pit (equation 4) remains positive or null in totally non-tolerant individuals infected with K pathogens: ω ≤ PMax(1 - ρi ciρ)/K.

Typical patterns of performance as a function of the number of pathogens are shown schematically in Figure 1 to illustrate the different ways resources can be allocated between resistance, tolerance and performance (costs are assumed equal for resistance and tolerance). Performance of hosts allocating none of the available resources to resistance and tolerance is the highest at the start of infection (P0 = PMax) and decreases as Ct increases. The number of pathogens remains below 20 among resistant hosts, and the performance of tolerant hosts does not decline with increasing parasite burden.

Figure 1. Schematic representation of the impact of resource allocation on performance (P) and number of pathogens (C).
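Combining the two equations, realized performance can be written as Pit = PMax(1 − ρi ciρ − λi ciλ) − (1 − λi) ω Cit. This reconstruction is an assumption: it is chosen because it reproduces the stated behaviour at λi = 0 and λi = 1 and the constraint ω ≤ PMax(1 − ρi ciρ)/K, but it is not a quotation of equations 4 and 5. A small sketch (Python) with illustrative values:

```python
def performance(c_pathogens, rho, lam, cost_rho, cost_lam, p_max=100.0, omega=0.1):
    """Reconstructed performance of one host (assumed combination of equations 4 and 5).

    p0 is the performance left after paying the resistance and tolerance costs;
    the loss term shrinks with tolerance lam and vanishes for a completely tolerant host."""
    if rho * cost_rho + lam * cost_lam > 1.0:
        raise ValueError("costs exceed available resources: rho*c_rho + lam*c_lam must be <= 1")
    p0 = p_max * (1.0 - rho * cost_rho - lam * cost_lam)
    return max(0.0, p0 - (1.0 - lam) * omega * c_pathogens)

# A fully tolerant host keeps its cost-reduced performance even at the maximum load of 500 pathogens,
# whereas a non-tolerant host with the same resistance and costs loses omega units per pathogen.
print(performance(500, rho=0.5, lam=1.0, cost_rho=0.1, cost_lam=0.1))   # 85.0
print(performance(500, rho=0.5, lam=0.0, cost_rho=0.1, cost_lam=0.1))   # 45.0
```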
Effectiveness analysis

The most profitable strategy, i.e. the one that ensures the lowest number of infected animals or the highest performance of the population, or both, was identified by weighing the allowed extra investments in resistance, tolerance, or both, against the effectiveness of each of these alternatives.

Effectiveness was computed by comparing populations under the same infection process but in which animals either invest ('yes' population) or do not invest ('no' population) in resistance, tolerance, or both. To do so, the number of infected hosts (It) and the overall performance (Pt = Σi = 1,N Pit) were followed across time, and the areas under the curves of Pt (AUCP) and It (AUCI) were obtained for t = 0 to T with the spline method of the procedure Expand of SAS® [16]. Subsequently, the incremental effects (ΔEI and ΔEP) were computed as the difference between corresponding 'yes' and 'no' populations: ΔEI = AUCIno - AUCIyes, and ΔEP = AUCPyes - AUCPno. Then, the most effective alternative was identified as the one with the highest values for ΔEI and ΔEP.

Incremental effects were calculated for different sets of parameters (Table 1). Two transmission rates were considered, β = 0.1 and β = 0.5, which correspond to one new infection per 10 and per 2 effective contacts, respectively. The minimum number of pathogens was set to Cmin = 10 and the maximum to CMax = 500. The growth rate (γ) was set at 0.5 new pathogens for each existing one, and the value of μ was set to 0.25 or 1.0 to obtain killing rates equal to half or twice the pathogen growth rate. A convenient value of 100 was given to PMax, while virulence (ω) was set at 0.1 or 0.2 units of performance lost per pathogen present. Individual extra investments in resistance and tolerance were drawn from uniform distributions with different extreme values to obtain low (U[0, 0.5]), average (U[0, 1]), or high (U[0.9, 1]) levels of investment. Associated costs were drawn from uniform distributions within the allowable limits imposed by equations (4) and (5): U[0, 0.1], U[0.1, 0.2], and U[0.2, 0.5].

Table 1. Model parameters and their values.

Finally, the effects of low, average and high levels of extra investment in resistance and tolerance on ΔEI and ΔEP were quantified using fixed linear models (proc GLM in SAS® [16]) that also contained the effects of β, μ, and ω for the characteristics of the pathogen, the population-level averages of hi, ciρ and ciλ for the characteristics of the hosts, and all first-order interactions.
The resulting least-squares estimates were used to identify epidemiological situations for which investments in tolerance, resistance or both were effective.
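To illustrate how the incremental effects are formed from simulated trajectories, the sketch below (Python) integrates It and Pt over time and takes the 'no' minus 'yes' (for infection) and 'yes' minus 'no' (for performance) differences. The trapezoidal rule is used here as a simple stand-in for the spline method of the SAS procedure Expand, and the two trajectories are invented for the example.

```python
import numpy as np

def auc(times, values):
    """Area under a trajectory sampled at (possibly irregular) time points, by the trapezoidal rule."""
    t = np.asarray(times, dtype=float)
    v = np.asarray(values, dtype=float)
    return float(np.sum(0.5 * (v[1:] + v[:-1]) * np.diff(t)))

def incremental_effects(t_yes, i_yes, p_yes, t_no, i_no, p_no):
    """Incremental effectiveness of investing in defenses:
    dE_I = AUCI(no) - AUCI(yes)   (fewer infected host-time units is better)
    dE_P = AUCP(yes) - AUCP(no)   (more cumulative performance is better)"""
    d_e_i = auc(t_no, i_no) - auc(t_yes, i_yes)
    d_e_p = auc(t_yes, p_yes) - auc(t_no, p_no)
    return d_e_i, d_e_p

# Invented toy trajectories over T = 300 time units for a herd of 30 hosts.
t = np.linspace(0.0, 300.0, 61)
i_no = np.minimum(30.0, 2.0 + 0.4 * t)        # infection spreads to the whole 'no' herd
p_no = 3000.0 - 45.0 * i_no                    # performance falls as infection spreads
i_yes = np.minimum(12.0, 2.0 + 0.1 * t)        # spread is contained in the 'yes' herd
p_yes = 2900.0 - 10.0 * i_yes                  # lower start (defense costs) but smaller losses

d_e_i, d_e_p = incremental_effects(t, i_yes, p_yes, t, i_no, p_no)
print(f"dE_I = {d_e_i:.0f} infected host-time units avoided, dE_P = {d_e_p:.0f} performance units gained")
```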
Results

Within-host pathogen dynamics

The number of pathogens within a host is shown in Figure 2 for 10 animals in a 'no' population with the following characteristics: β = 0.5; μ = 0.1; ω = 0.1; hi ~ U[0, 0.001], and ρi = λi = 0 for i = 1 to N. The duration and the number of pathogens generated were approximately the same for all animals because they depended on γ = 0.5 (equation 2). However, the stochastic nature of the simulation resulted in a cloud of points for each Cit.
On average, CMax was reached after 300 time units, so it was used as the upper limit for T because the Gillespie algorithm was slow to converge and because T = 300 insured the steady value CMax was reached among completely non-resistant hosts.\nNumber of pathogens across time for 10 completely susceptible hosts.\nIn Figure 3, the dynamics in Cit and Pit are shown for four individuals with different investments and costs of resistance and tolerance, and for an infection with β = 0.5, γ = 0.5, ω = 0.2, hi ~ U [0, 0.1], and μ = 0.25. When both ρi and λi were high, Cit remained low and Pit did not change much across time (individuals □ or +). Conversely, when ρi was low, Cit increased up to its maximum and the associated individual performance decreased (individual ○ in Figure 3). Between these extremes, a wide range of different situations occurred. Initial performance (P0) varied according to the costs and extra investments in tolerance and resistance (equation 3).\nWithin-host dynamics for the number of pathogens and performances of four individuals with different levels of resistance (ρ) and tolerance (λ) Associated costs are in parentheses.\n[SUBTITLE] Between-host pathogen dynamics [SUBSECTION] The number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).\nThe number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. 
At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).\n[SUBTITLE] Effectiveness analyses [SUBSECTION] Values of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1\nValues of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. 
Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1", "The number of pathogens within a host is shown in Figure 2 for 10 animals in a 'no' population with the following characteristics: β = 0.5; μ = 0.1; ω = 0.1; hi ~ U [0, 0.001], and ρi = λi = 0 for i = 1 to N. The duration and the number of pathogens generated were approximately the same for all animals because they depended on γ = 0.5 (equation 2). However, the stochastic nature of the simulation resulted in a cloud of points for each Cit. On average, CMax was reached after 300 time units, so it was used as the upper limit for T because the Gillespie algorithm was slow to converge and because T = 300 insured the steady value CMax was reached among completely non-resistant hosts.\nNumber of pathogens across time for 10 completely susceptible hosts.\nIn Figure 3, the dynamics in Cit and Pit are shown for four individuals with different investments and costs of resistance and tolerance, and for an infection with β = 0.5, γ = 0.5, ω = 0.2, hi ~ U [0, 0.1], and μ = 0.25. When both ρi and λi were high, Cit remained low and Pit did not change much across time (individuals □ or +). Conversely, when ρi was low, Cit increased up to its maximum and the associated individual performance decreased (individual ○ in Figure 3). Between these extremes, a wide range of different situations occurred. Initial performance (P0) varied according to the costs and extra investments in tolerance and resistance (equation 3).\nWithin-host dynamics for the number of pathogens and performances of four individuals with different levels of resistance (ρ) and tolerance (λ) Associated costs are in parentheses.", "The number of infected hosts (It) and the overall performance (Pt) are given in Figure 4 as percentages of their maximum values (N and PMax, respectively) and for an infection with β = 0.5, γ = 0.5, ω = 0.1, and μ = 0.25. 
At T = 300, all individuals in the 'no' population (Figure 4a) were infected (with the exception of one) and the overall performance was close to 50%, which is the minimum expected from equation 5 when all animals have zero tolerance and are infected with CMax pathogens.\nNumber of infected individuals (solid line) and overall performance (broken line) in populations with different average values for levels of resistance (ρ) and tolerance (λ), and for their associated costs (cρ and cλ in parentheses) The values are expressed as percentages of their maxima.\nWhen individuals invested more in resistance, only a fraction of the population got infected and AUCI was low. For example, AUCI decreased from 22,720 to 19,379 and 13,851 infected hosts in Figures 4b (ρ = 0.22), 4e (ρ = 0.46), and 4h (ρ = 0.94), respectively. When the average level of extra investments in tolerance was high (around 0.95), the impact of It on Pt was almost zero (Figures 4d,g and 4j). Otherwise, Pt decreased as It increased, especially for low levels of tolerance (Figures 4b,e and 4h). This should have translated in an increase in AUCP but, in this particular population, costs associated with tolerance were high (around 0.15) and initial performance was low. For example, P0 averaged 79.8 in Figure 4g (λ = 0.95; AUCP = 23,509) and 90.4 in Figure 4e (λ = 0.25; AUCP = 24,779).", "Values of ΔEP and ΔEI obtained for each combination of the parameters of Table 1 are shown in relation to ρ and λ in Figure 5. Each dot corresponds to one specific combination of the parameter values. Effective combinations, those associated with both ΔEP>0 and ΔEI>0, represented 75.7% of all combinations. There was a tendency for ΔEI and ΔEP to increase with increasing values for ρ and λ, respectively. However, there were also combinations of parameters for which high values for ρ or λ were not effective, as revealed by the analysis of variance.\nIncremental effectiveness for performance (ΔEP) and number of infected individuals (ΔEI) for different investments in resistance (ρ) and tolerance (λ) and for various characteristics of the infection (Table 1) .\nResults from the analysis of variance identified significant (p < 0.01) effects of ρi, cρ, hi, and μ on ΔEI, and of λ, cλ, β, and ω on ΔEP. All first-order interactions were non-significant (p > 0.10). Incremental effects are given in Tables 2 and 3 for selected combinations. Overall, ΔEP was greater for higher values of λ but, for moderately virulent (ω = 0.1) and slow spreading (β = 0.1) diseases, investments in tolerance were low or ineffective unless they incurred at low costs (Table 2). Investing in resistance (Table 3) was effective for infections that elicited moderate to high but not low (hi ~ U[0, 0.001]) immune responses in the hosts (unless levels of resistance were high).\nIncremental effectiveness of the performance of the population (ΔEP) associated to different investments in individual tolerance (λi) and for selected values of ciλ, β and ω, as defined in Table 1\nIncremental effectiveness of the number of infected (ΔEI) associated to different investments in individual resistance (ρi) and for selected values of ciρ, μ and hi, as defined in Table 1", "A general framework is proposed to provide insights into the effects of improved resistance and tolerance on the performance and size of an infected population. A clear distinction is made between effects of resistance on multiplication of the pathogen and effects of tolerance on damages induced by the pathogens. 
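To make that distinction concrete, the toy functions below treat resistance as something that lowers the pathogen load a host ends up carrying, and tolerance as something that lowers the performance lost per pathogen carried. The functional forms are not the paper's equations; the 50% ceiling on proportional loss is chosen only to echo the floor reported above for fully susceptible, zero-tolerance hosts.

```python
def pathogen_load(rho, c_max=1000):
    """Resistance acts on pathogen multiplication: higher rho, lower final load.
    Illustrative form only, not the paper's within-host model."""
    return c_max * (1.0 - rho)

def performance(load, lam, p0=100.0, max_loss=0.5, c_max=1000):
    """Tolerance acts on the damage per pathogen: the same load costs a
    tolerant host (lam near 1) less performance. Illustrative form only."""
    return p0 * (1.0 - (1.0 - lam) * max_loss * load / c_max)

for rho, lam in [(0.0, 0.0), (0.9, 0.0), (0.0, 0.9), (0.9, 0.9)]:
    load = pathogen_load(rho)
    print(f"rho={rho:.1f}  lam={lam:.1f}  load={load:6.0f}  performance={performance(load, lam):5.1f}")
```

The four printed cases show the complementarity: a purely resistant host keeps its load low, a purely tolerant host keeps its performance high despite a high load, and a host with both does best.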
Hosts differ in the costs they incur to insure their particular levels of resistance and tolerance, and in the intensity of the response they mount against pathogens. Pathogens differ in their speed of spread between hosts, in virulence, and in the intensity of the response they elicit in the hosts. However, to be useful, the model must be validated and its limits and assumptions must be clarified, as will be discussed in the following, with examples mostly related to bovine mastitis.\n[SUBTITLE] Validation of the model [SUBSECTION] Model validation usually takes the form of a comparison between model outputs and real data but this was not possible here because reliable field data are scarce, difficult to measure or imprecisely defined [17,18]. For example, estimates of costs associated with resistance and tolerance are limited in animals, in contrast to plants (see review by [19]). Tolerance has often been measured imprecisely as the overall ability to maintain fitness in the face of infection, irrespective of parasite burden. For example, cows infected with E. coli have been classified as moderate and severe responders according to milk production loss in the non-challenged quarters [20]. In this case, it is in reality a measure of the combined effects of resistance and tolerance [4]. It was also a deliberate choice to present a generic model because parameters values are different among disease and host populations, so model outputs for one specific disease may not apply to another disease. For example, transmission rates have been estimated at 0.20 to 1.50 per 1000 quarter-days at risk for S. uberis mastitis [21] but at 7 to 50 for S. aureus mastitis [22]. Similarly, killing rates have been estimated at 0.67 to 1.33 × 10-8 mL/cell per min in milk of cows [23,24] and at 1.64 to 1.76 × 10-8 mL/neutrophil per min in dermis of rats inoculated with E. coli [13]. Model outputs will also depend on the virulence of the invading pathogens (ω), as exemplified by the different amount of milk loss at the first occurrence of clinical mastitis depending on bacteria species [25], and on the type of performance (e.g., yield, quality of products, or capacity for work) considered.\nAs an alternative form of validation, the dynamics of Cit and Pit at the individual, and of It and Pt at the herd levels were evaluated. For instance, as expected, Cit was lowest in resistant and Pit was highest in tolerant hosts (Figure 3), Pt remained stable across time when tolerance of the hosts was at its highest level, and It decreased faster when resistance of hosts was at its highest level (Figure 4). Results from the analysis of variance also validated the model. The null value of ΔEI for hi~U [0, 0.001] was sensible because, at this low rate, pathogens cannot be killed, regardless of how much was invested in resistance. The fact that β did not affect ΔEI may also be explained by the same transmission in 'yes' and 'no' populations, so AUCIyes was close to AUCIno for any value of β. As a final example, ΔEP was higher for β = 0.5 than for β = 0.1 because only few animals got infected with β = 0.1, so improving tolerance of these few hosts was not beneficial at the population level.\nModel validation usually takes the form of a comparison between model outputs and real data but this was not possible here because reliable field data are scarce, difficult to measure or imprecisely defined [17,18]. 
For example, estimates of costs associated with resistance and tolerance are limited in animals, in contrast to plants (see review by [19]). Tolerance has often been measured imprecisely as the overall ability to maintain fitness in the face of infection, irrespective of parasite burden. For example, cows infected with E. coli have been classified as moderate and severe responders according to milk production loss in the non-challenged quarters [20]. In this case, it is in reality a measure of the combined effects of resistance and tolerance [4]. It was also a deliberate choice to present a generic model because parameters values are different among disease and host populations, so model outputs for one specific disease may not apply to another disease. For example, transmission rates have been estimated at 0.20 to 1.50 per 1000 quarter-days at risk for S. uberis mastitis [21] but at 7 to 50 for S. aureus mastitis [22]. Similarly, killing rates have been estimated at 0.67 to 1.33 × 10-8 mL/cell per min in milk of cows [23,24] and at 1.64 to 1.76 × 10-8 mL/neutrophil per min in dermis of rats inoculated with E. coli [13]. Model outputs will also depend on the virulence of the invading pathogens (ω), as exemplified by the different amount of milk loss at the first occurrence of clinical mastitis depending on bacteria species [25], and on the type of performance (e.g., yield, quality of products, or capacity for work) considered.\nAs an alternative form of validation, the dynamics of Cit and Pit at the individual, and of It and Pt at the herd levels were evaluated. For instance, as expected, Cit was lowest in resistant and Pit was highest in tolerant hosts (Figure 3), Pt remained stable across time when tolerance of the hosts was at its highest level, and It decreased faster when resistance of hosts was at its highest level (Figure 4). Results from the analysis of variance also validated the model. The null value of ΔEI for hi~U [0, 0.001] was sensible because, at this low rate, pathogens cannot be killed, regardless of how much was invested in resistance. The fact that β did not affect ΔEI may also be explained by the same transmission in 'yes' and 'no' populations, so AUCIyes was close to AUCIno for any value of β. As a final example, ΔEP was higher for β = 0.5 than for β = 0.1 because only few animals got infected with β = 0.1, so improving tolerance of these few hosts was not beneficial at the population level.\n[SUBTITLE] Limits and assumptions of the model [SUBSECTION] The strategy to build this model followed the current trend in epidemiology to begin with simple models and to add complexity only if the model fails to reproduce plausible epidemiological behaviors [26]. Several assumptions were made, some of which have been confirmed previously. One assumption was that available resources are partitioned between performance, resistance and tolerance. Indeed, experiences in poultry [27] and other species [28] have shown that individuals differ in their ability to allocate resources to their needs. This is also one of the factors evoked to explain the increased susceptibility of high yielding dairy cows to mastitis [29]. Lack of resources may lead to vicious cycles because hosts in poor condition are more susceptible to higher pathogen occurrence and infection intensity, which further weaken the condition of the host [30]. 
Another assumption is that investments in resistance and tolerance are linked through the constraint in equation 4 and this has been confirmed by [2], where a negative relationship was found between resistance and tolerance in rodent malaria.\nSome assumptions of the model could also be relaxed with more complex equations that have been used in models examining the effects of mixed infection [21], infectious dose [31] and vaccination/treatment [32] on transmission dynamics. Resistance could vary as a function of exposure to disease [33]. Availability of external resource can vary across time, as in Doesch-Wilson et al. [34]. In the model used here, individual infectious contacts were assumed independent and at random but models with heterogeneous mixing [35] and that consider genetic susceptibility among relatives [36,37] may be more appropriate. The course of infection within hosts can also be modelled more accurately, in line with the characteristics of the disease under study. For example, models with increasing complexity have been proposed to describe the fate of mastitis-causing E. coli in infected cows [23,24]. Models for co-evolutionary mechanisms between host and pathogens should be considered [38] if the time scale is longer than the one used in this study.\nOther assumptions may be difficult to verify. For such assumptions, a set of arbitrary standard values for the parameters and different forms for equations should be tested in so-called sensitivity analyses. For example, the amount of loss in performance was assumed directly associated with pathogen load, although the most dramatic changes may occur at low or subclinical levels of disease, with diminishing effects of each additional parasite [39].\nThe strategy to build this model followed the current trend in epidemiology to begin with simple models and to add complexity only if the model fails to reproduce plausible epidemiological behaviors [26]. Several assumptions were made, some of which have been confirmed previously. One assumption was that available resources are partitioned between performance, resistance and tolerance. Indeed, experiences in poultry [27] and other species [28] have shown that individuals differ in their ability to allocate resources to their needs. This is also one of the factors evoked to explain the increased susceptibility of high yielding dairy cows to mastitis [29]. Lack of resources may lead to vicious cycles because hosts in poor condition are more susceptible to higher pathogen occurrence and infection intensity, which further weaken the condition of the host [30]. Another assumption is that investments in resistance and tolerance are linked through the constraint in equation 4 and this has been confirmed by [2], where a negative relationship was found between resistance and tolerance in rodent malaria.\nSome assumptions of the model could also be relaxed with more complex equations that have been used in models examining the effects of mixed infection [21], infectious dose [31] and vaccination/treatment [32] on transmission dynamics. Resistance could vary as a function of exposure to disease [33]. Availability of external resource can vary across time, as in Doesch-Wilson et al. [34]. In the model used here, individual infectious contacts were assumed independent and at random but models with heterogeneous mixing [35] and that consider genetic susceptibility among relatives [36,37] may be more appropriate. 
The course of infection within hosts can also be modelled more accurately, in line with the characteristics of the disease under study. For example, models with increasing complexity have been proposed to describe the fate of mastitis-causing E. coli in infected cows [23,24]. Models for co-evolutionary mechanisms between host and pathogens should be considered [38] if the time scale is longer than the one used in this study.\nOther assumptions may be difficult to verify. For such assumptions, a set of arbitrary standard values for the parameters and different forms for equations should be tested in so-called sensitivity analyses. For example, the amount of loss in performance was assumed directly associated with pathogen load, although the most dramatic changes may occur at low or subclinical levels of disease, with diminishing effects of each additional parasite [39].\n[SUBTITLE] Effectiveness analyses [SUBSECTION] Two results from the effectiveness analyses are noteworthy, although they must be further evaluated in empirical studies. One is that the range of possible values of ΔEP and ΔEI for the different input parameters (Figure 5) is wide. This emphasizes the need to accurately model the infection process and its impact on the population before deciding on the most effective strategy. For example, increasing host tolerance is theoretically less effective for improving performance of populations infected with pathogens that cause minor rather than major mastitis. Indeed, pathogens causing minor mastitis are less virulent (ω) and less transmissible (β) than those causing major mastitis [40], so modest advantages of high tolerance would be offset by the associated costs. Likewise, selecting for better resistance to mastitis would be effective to restrict the size of a population epidemic if animals are infected with bacterial strains that are likely to be killed by neutrophils [41], i.e. μ>0 in equation 3.\nAnother noteworthy observation is that least-squares means for ΔEP were highest in highly tolerant populations, while ΔEI did not change between different tolerance levels. This suggests that selection for increased tolerance would be effective under commercial constraints. This is different from models applied to natural populations that predict an increase in the overall incidence of infection as the frequency of tolerant hosts increases [38]. In natural populations, tolerant hosts survive longer than non-tolerant ones, thus keeping the disease longer in the population and increasing the risk of exposure to disease. Here, the model is for an endemic disease in a population under commercial contraints, in which non-tolerant animals are kept even if they are sick (no natural death, no culling). Consequently, the risk of exposure to disease does not change, even if the pathogen population size (C) increases.\nIn general, little is known about tolerance mechanisms in animals but their study should provide a good foundation for insuring health over the long term. Indeed, in the long term, advantages of being tolerant should be greater than those associated with resistance. For example, in non-evolving pathogen populations, advantages of being resistant decrease in parallel with the decline in disease frequency, while the advantages of being tolerant are maintained, or even increase if disease frequency rises [42]. 
In evolving pathogen populations, improved host resistance will pressure pathogens to evolve better mechanisms to evade host defense processes, potentially resulting in cyclical co-evolutionary dynamics. In contrast, tolerance does not interact directly with the pathogen and should not induce selection for counter-adaptations, although elevated levels of tolerance may allow pathogens to be more virulent [43].\nPractically, in bovine mastitis, it the degree of tolerance of an animal can be estimated by the amount of milk loss per bacteria present in the quarter (CFU) using a model adapted from that proposed by [2] for inbred strains of laboratory mice:\nwhere yijt is the milk yield at time t of the ith cow (yield corrected for fixed and non-genetic random effects estimated from the genetic evaluation model) infected with Iijt, i.e. the bacterial load for bacterial species j; bj is the average tolerance against bacteria of strain j; Bij describes individual random deviations from the average tolerance with Bij ~ IID N(0, σ²b); and eijt are residuals with eijt ~ N(0, Ve), where Ve accounts for the non-independence between repeated eijt. Such information could be collected from quarters of experimentally infected cows, as was done in the study of [44].\nTwo results from the effectiveness analyses are noteworthy, although they must be further evaluated in empirical studies. One is that the range of possible values of ΔEP and ΔEI for the different input parameters (Figure 5) is wide. This emphasizes the need to accurately model the infection process and its impact on the population before deciding on the most effective strategy. For example, increasing host tolerance is theoretically less effective for improving performance of populations infected with pathogens that cause minor rather than major mastitis. Indeed, pathogens causing minor mastitis are less virulent (ω) and less transmissible (β) than those causing major mastitis [40], so modest advantages of high tolerance would be offset by the associated costs. Likewise, selecting for better resistance to mastitis would be effective to restrict the size of a population epidemic if animals are infected with bacterial strains that are likely to be killed by neutrophils [41], i.e. μ>0 in equation 3.\nAnother noteworthy observation is that least-squares means for ΔEP were highest in highly tolerant populations, while ΔEI did not change between different tolerance levels. This suggests that selection for increased tolerance would be effective under commercial constraints. This is different from models applied to natural populations that predict an increase in the overall incidence of infection as the frequency of tolerant hosts increases [38]. In natural populations, tolerant hosts survive longer than non-tolerant ones, thus keeping the disease longer in the population and increasing the risk of exposure to disease. Here, the model is for an endemic disease in a population under commercial contraints, in which non-tolerant animals are kept even if they are sick (no natural death, no culling). Consequently, the risk of exposure to disease does not change, even if the pathogen population size (C) increases.\nIn general, little is known about tolerance mechanisms in animals but their study should provide a good foundation for insuring health over the long term. Indeed, in the long term, advantages of being tolerant should be greater than those associated with resistance. 
For example, in non-evolving pathogen populations, advantages of being resistant decrease in parallel with the decline in disease frequency, while the advantages of being tolerant are maintained, or even increase if disease frequency rises [42]. In evolving pathogen populations, improved host resistance will pressure pathogens to evolve better mechanisms to evade host defense processes, potentially resulting in cyclical co-evolutionary dynamics. In contrast, tolerance does not interact directly with the pathogen and should not induce selection for counter-adaptations, although elevated levels of tolerance may allow pathogens to be more virulent [43].\nPractically, in bovine mastitis, it the degree of tolerance of an animal can be estimated by the amount of milk loss per bacteria present in the quarter (CFU) using a model adapted from that proposed by [2] for inbred strains of laboratory mice:\nwhere yijt is the milk yield at time t of the ith cow (yield corrected for fixed and non-genetic random effects estimated from the genetic evaluation model) infected with Iijt, i.e. the bacterial load for bacterial species j; bj is the average tolerance against bacteria of strain j; Bij describes individual random deviations from the average tolerance with Bij ~ IID N(0, σ²b); and eijt are residuals with eijt ~ N(0, Ve), where Ve accounts for the non-independence between repeated eijt. Such information could be collected from quarters of experimentally infected cows, as was done in the study of [44].", "Model validation usually takes the form of a comparison between model outputs and real data but this was not possible here because reliable field data are scarce, difficult to measure or imprecisely defined [17,18]. For example, estimates of costs associated with resistance and tolerance are limited in animals, in contrast to plants (see review by [19]). Tolerance has often been measured imprecisely as the overall ability to maintain fitness in the face of infection, irrespective of parasite burden. For example, cows infected with E. coli have been classified as moderate and severe responders according to milk production loss in the non-challenged quarters [20]. In this case, it is in reality a measure of the combined effects of resistance and tolerance [4]. It was also a deliberate choice to present a generic model because parameters values are different among disease and host populations, so model outputs for one specific disease may not apply to another disease. For example, transmission rates have been estimated at 0.20 to 1.50 per 1000 quarter-days at risk for S. uberis mastitis [21] but at 7 to 50 for S. aureus mastitis [22]. Similarly, killing rates have been estimated at 0.67 to 1.33 × 10-8 mL/cell per min in milk of cows [23,24] and at 1.64 to 1.76 × 10-8 mL/neutrophil per min in dermis of rats inoculated with E. coli [13]. Model outputs will also depend on the virulence of the invading pathogens (ω), as exemplified by the different amount of milk loss at the first occurrence of clinical mastitis depending on bacteria species [25], and on the type of performance (e.g., yield, quality of products, or capacity for work) considered.\nAs an alternative form of validation, the dynamics of Cit and Pit at the individual, and of It and Pt at the herd levels were evaluated. 
For instance, as expected, Cit was lowest in resistant hosts and Pit was highest in tolerant hosts (Figure 3), Pt remained stable across time when tolerance of the hosts was at its highest level, and It decreased faster when resistance of hosts was at its highest level (Figure 4). Results from the analysis of variance also validated the model. The null value of ΔEI for hi ~ U [0, 0.001] was sensible because, at this low rate, pathogens cannot be killed, regardless of how much was invested in resistance. The fact that β did not affect ΔEI may be explained by transmission being the same in the 'yes' and 'no' populations, so AUCIyes was close to AUCIno for any value of β. As a final example, ΔEP was higher for β = 0.5 than for β = 0.1 because only a few animals became infected with β = 0.1, so improving tolerance of these few hosts was not beneficial at the population level.
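The analysis of variance referred to above can be reproduced in outline: each row of a results table holds one parameter combination from Table 1 together with its simulated ΔEI and ΔEP, and a main-effects ANOVA tests which inputs matter. A sketch with statsmodels follows; the factor levels, column names and response values are placeholders, not the study's actual design or output.

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# One row per parameter combination of the factorial simulation experiment.
# The responses below are random placeholders, not real simulation output.
rng = np.random.default_rng(0)
grid = pd.DataFrame(
    [(r, cr, l, cl, b, w, h, m)
     for r in (0.25, 0.50, 0.95) for cr in (0.01, 0.15)
     for l in (0.25, 0.50, 0.95) for cl in (0.01, 0.15)
     for b in (0.1, 0.5) for w in (0.1, 0.2)
     for h in ("low", "mid", "high") for m in (0.1, 0.25)],
    columns=["rho", "c_rho", "lam", "c_lam", "beta", "omega", "h", "mu"],
)
grid["dEI"] = rng.normal(size=len(grid))   # placeholder incremental effectiveness
grid["dEP"] = rng.normal(size=len(grid))

# Main-effects ANOVA for dEI; the same call with dEP tests the other response
fit = ols("dEI ~ C(rho) + C(c_rho) + C(lam) + C(c_lam) + C(beta) + C(omega) + C(h) + C(mu)",
          data=grid).fit()
print(anova_lm(fit, typ=2))
```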
For example, the amount of loss in performance was assumed directly associated with pathogen load, although the most dramatic changes may occur at low or subclinical levels of disease, with diminishing effects of each additional parasite [39].", "Two results from the effectiveness analyses are noteworthy, although they must be further evaluated in empirical studies. One is that the range of possible values of ΔEP and ΔEI for the different input parameters (Figure 5) is wide. This emphasizes the need to accurately model the infection process and its impact on the population before deciding on the most effective strategy. For example, increasing host tolerance is theoretically less effective for improving performance of populations infected with pathogens that cause minor rather than major mastitis. Indeed, pathogens causing minor mastitis are less virulent (ω) and less transmissible (β) than those causing major mastitis [40], so modest advantages of high tolerance would be offset by the associated costs. Likewise, selecting for better resistance to mastitis would be effective to restrict the size of a population epidemic if animals are infected with bacterial strains that are likely to be killed by neutrophils [41], i.e. μ>0 in equation 3.\nAnother noteworthy observation is that least-squares means for ΔEP were highest in highly tolerant populations, while ΔEI did not change between different tolerance levels. This suggests that selection for increased tolerance would be effective under commercial constraints. This is different from models applied to natural populations that predict an increase in the overall incidence of infection as the frequency of tolerant hosts increases [38]. In natural populations, tolerant hosts survive longer than non-tolerant ones, thus keeping the disease longer in the population and increasing the risk of exposure to disease. Here, the model is for an endemic disease in a population under commercial contraints, in which non-tolerant animals are kept even if they are sick (no natural death, no culling). Consequently, the risk of exposure to disease does not change, even if the pathogen population size (C) increases.\nIn general, little is known about tolerance mechanisms in animals but their study should provide a good foundation for insuring health over the long term. Indeed, in the long term, advantages of being tolerant should be greater than those associated with resistance. For example, in non-evolving pathogen populations, advantages of being resistant decrease in parallel with the decline in disease frequency, while the advantages of being tolerant are maintained, or even increase if disease frequency rises [42]. In evolving pathogen populations, improved host resistance will pressure pathogens to evolve better mechanisms to evade host defense processes, potentially resulting in cyclical co-evolutionary dynamics. In contrast, tolerance does not interact directly with the pathogen and should not induce selection for counter-adaptations, although elevated levels of tolerance may allow pathogens to be more virulent [43].\nPractically, in bovine mastitis, it the degree of tolerance of an animal can be estimated by the amount of milk loss per bacteria present in the quarter (CFU) using a model adapted from that proposed by [2] for inbred strains of laboratory mice:\nwhere yijt is the milk yield at time t of the ith cow (yield corrected for fixed and non-genetic random effects estimated from the genetic evaluation model) infected with Iijt, i.e. 
the bacterial load for bacterial species j; bj is the average tolerance against bacteria of strain j; Bij describes individual random deviations from the average tolerance with Bij ~ IID N(0, σ²b); and eijt are residuals with eijt ~ N(0, Ve), where Ve accounts for the non-independence between repeated eijt. Such information could be collected from quarters of experimentally infected cows, as was done in the study of [44].", "In summary, this paper presents a novel epidemic model to explore the effects of tolerance and resistance on performance and disease spread in a population. Although more research is necessary to validate the model and more empirical studies are needed to obtain values for the input parameters, the analytic approach can be used to find optimal strategies of disease control in commercial populations.", "The authors declare that they have no competing interests." ]
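In practice, the random-regression model sketched above — which, from the variable definitions (the equation itself was lost in extraction), appears to take a form like y_ijt = (b_j + B_ij)·I_ijt + e_ijt — could be fitted with standard mixed-model software. The sketch below uses statsmodels' MixedLM with a random slope of corrected yield on bacterial load for each cow, fitted within one bacterial species j; the file and column names are hypothetical, and the residual covariance Ve is simplified to independent errors.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical records: one row per cow x time point within one bacterial species,
# with yield already corrected for fixed and non-genetic random effects
df = pd.read_csv("challenge_trial.csv")   # columns assumed: cow, corrected_yield, log_cfu

# Random slope of corrected yield on bacterial load for each cow:
# the fixed slope plays the role of the strain-average tolerance b_j,
# the random slopes recover the cow-specific deviations B_ij
model = smf.mixedlm("corrected_yield ~ log_cfu",
                    data=df,
                    groups=df["cow"],
                    re_formula="~0 + log_cfu")   # random slope, no random intercept
fit = model.fit()
print(fit.summary())

# Per-cow tolerance deviations: a more negative slope means more milk lost
# per unit of bacterial load, i.e. a less tolerant cow
tolerance_deviations = {cow: re_["log_cfu"] for cow, re_ in fit.random_effects.items()}
```

Fitting one such model per bacterial species, or adding strain as a fixed interaction with load, would mirror the strain-specific structure (b_j) described in the text.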
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Multilevel latent class casemix modelling: a novel approach to accommodate patient casemix.
21362172
Using routinely collected patient data we explore the utility of multilevel latent class (MLLC) models to adjust for patient casemix and rank Trust performance. We contrast this with ranks derived from Trust standardised mortality ratios (SMRs).
BACKGROUND
Patients with colorectal cancer diagnosed between 1998 and 2004 and resident in Northern and Yorkshire regions were identified from the cancer registry database (n = 24,640). Patient age, sex, stage-at-diagnosis (Dukes), and Trust of diagnosis/treatment were extracted. Socioeconomic background was derived using the Townsend Index. Outcome was survival at 3 years after diagnosis. MLLC-modelled and SMR-generated Trust ranks were compared.
METHODS
Patients were assigned to two classes of similar size: one with reasonable prognosis (63.0% died within 3 years), and one with better prognosis (39.3% died within 3 years). In patient class one, all patients diagnosed at stage B or C died within 3 years; in patient class two, all patients diagnosed at stage A, B or C survived. Trusts were assigned two classes with 51.3% and 53.2% of patients respectively dying within 3 years. Differences in the ranked Trust performance between the MLLC model and SMRs were all within estimated 95% CIs.
RESULTS
A novel approach to casemix adjustment is illustrated, ranking Trust performance whilst facilitating the evaluation of factors associated with the patient journey (e.g. treatments) and factors associated with the processes of healthcare delivery (e.g. delays). Further research can demonstrate the value of modelling patient pathways and evaluating healthcare processes across provider institutions.
CONCLUSIONS
[ "Aged", "Diagnosis-Related Groups", "England", "Female", "Hospitals, Public", "Humans", "Male", "Models, Statistical", "Models, Theoretical", "Neoplasms", "Quality Assurance, Health Care", "Registries", "Reproducibility of Results", "Risk Factors", "Survival Analysis" ]
3062580
null
null
Methods
[SUBTITLE] The illustrative colorectal cancer dataset [SUBSECTION] Patients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons. An area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. Following exclusions, 24,640 (93%) of the identified patients remained for analysis. Patients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons. An area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. 
Following exclusions, 24,640 (93%) of the identified patients remained for analysis. [SUBTITLE] Statistical methods [SUBSECTION] Latent class analysis (LCA) is well established within single-level regression analysis. Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes. The proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. 
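The covariate handling just described — centring age on the study mean of 71.5 years, adding a quadratic age term, trimming the Townsend score at ±5, and coding missing stage as its own category — is straightforward to reproduce. A small pandas sketch follows; the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("colorectal_registry.csv")   # hypothetical registry extract

df["age_c"] = df["age_at_diagnosis"] - 71.5            # centre age on the study mean
df["age_c_sq"] = df["age_c"] ** 2                       # quadratic term for non-linearity
df["townsend_trim"] = df["townsend"].clip(-5.0, 5.0)    # trim rare extreme deprivation scores
df["stage"] = df["dukes_stage"].fillna("X")             # missing stage coded as its own category
```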
The model is described in the Appendix. SMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used. Latent class analysis (LCA) is well established within single-level regression analysis. Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes. The proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. 
Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. The model is described in the Appendix. SMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used.
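For the SMR arm of the comparison, the steps described above — indirect standardisation by age, sex, deprivation and stage; a scaled difference from SMR = 1 obtained by dividing by the square root of the Trust size; ranking; and 95% CIs from 200 bootstrap resamples — can be sketched as follows. The column names, the exact standardisation strata and the resampling scheme are assumptions (the sketch resamples patients with replacement across the whole study), so this is an outline of the approach rather than the authors' code.

```python
import numpy as np
import pandas as pd

def trust_smr_ranks(df, strata=("age_band", "sex", "deprivation_quintile", "stage")):
    """Rank Trusts by an indirectly standardised mortality ratio.
    Expected deaths come from pooled stratum-specific death rates applied to
    each Trust's own casemix; a low rank means better survival than expected."""
    rates = df.groupby(list(strata))["died_3yr"].mean().rename("rate")
    d = df.join(rates, on=list(strata))
    g = d.groupby("trust")
    smr = g["died_3yr"].sum() / g["rate"].sum()          # observed / expected deaths
    scaled = (smr - 1.0) / np.sqrt(g.size())             # scaled difference from SMR = 1
    return scaled.rank()

def bootstrap_rank_cis(df, n_boot=200, seed=42):
    """Re-rank the Trusts in each bootstrap resample to get 95% CIs on the ranks."""
    rng = np.random.default_rng(seed)
    ranks = []
    for _ in range(n_boot):
        sample = df.sample(len(df), replace=True, random_state=int(rng.integers(1 << 31)))
        ranks.append(trust_smr_ranks(sample))
    ranks = pd.concat(ranks, axis=1)
    return ranks.quantile([0.025, 0.5, 0.975], axis=1).T   # per-Trust median rank and 95% CI
```

The same bootstrap loop, applied to the MLLC-derived Trust effects instead of the SMRs, yields the paired sets of median ranks and intervals that Table 3 and Figure 1 compare.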
null
null
null
null
[ "Background", "The illustrative colorectal cancer dataset", "Statistical methods", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Appendix", "Pre-publication history" ]
[ "Survival from cancer varies according to many factors including place of diagnosis and treatment centre (Trust), [1,2] stage at diagnosis, [3,4] and associated risk factors such as age at diagnosis, sex, and socioeconomic background (SEB) [5-9]. Some Trusts perform better or worse than others in terms of average survival rates perhaps due to patient casemix at the time of entry into the healthcare system, though patient outcome differences will reflect underlying differences in the effectiveness of healthcare organisations. Much interest lies in identifying good and poor performing healthcare providers, to identify best practice and advocate changes in under-performing institutions. It is important to account for patient casemix when evaluating institutional performance and there are currently several strategies.\nRegression (linear or logistic) is a traditional and well-documented approach, [10] where variables relating to patient characteristics are modelled, effectively to adjust the outcome in relation to the likely influences of these factors. Methods such as matching, stratification, [10] or propensity score analysis, [11,12] may also be used, though these techniques make potentially untestable assumptions and never account for the impact of unmeasured variables or accommodate Trust-level variation. Although multilevel modelling accounts for patients nested within Trusts, and provides improved estimates compared with logistic regression, [13,14] parametric assumptions are made that may not be tenable. Other methods, such as boosted decision trees, [15] have occasionally been used, though these can be difficult to interpret.\nNo casemix-adjustment strategy will eliminate all bias due to unmeasured differences amongst patients; [16] some procedures increase bias [17]. Accommodating patient variation through measured variables only is crude: models ought to reflect the uncertainty associated with patient casemix characteristics. Furthermore, casemix adjustment does not account for differences in patient treatments. Failure to capture variation in patient pathways and their consequences may result in over-simplistic interpretation of healthcare processes and consequent outcomes. Models need to accommodate patient casemix, the patient experience, and uncertainty in both.\nMultilevel latent class (MLLC) modelling is proposed to: (i) adjust for patient casemix whilst accommodating uncertainty surrounding unrecorded patient characteristics; (ii) adjust for patient pathways in terms of the delivery of appropriate healthcare (e.g. treatments); and (iii) differentiate patient outcomes in relation to institutional process characteristics (e.g. delays to treatment). To demonstrate and validate all three steps simultaneously is challenging. The first of these is explored here. We contrast the MLLC model ranking of Trust performance with that of ranks derived from calculating Trust standardised mortality ratios (SMRs). To illustrate our methodology, we study routine data on colorectal cancer patients from a large UK health region.", "Patients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. 
Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons.\nAn area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. Following exclusions, 24,640 (93%) of the identified patients remained for analysis.", "Latent class analysis (LCA) is well established within single-level regression analysis. Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes.\nThe proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. 
We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. The model is described in the Appendix.\nSMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used.", "Table 1 summarises the 'ideal' MLLC model determined by the procedures described. Patients were assigned to two latent classes of similar size, one with reasonable prognosis (PC1: 54.3% of cases, of which 63.0% died within three years), and one with better prognosis (PC2: 45.7% of cases, of which 39.3% died within three years). Trusts were similarly assigned to two latent classes. The largest Trust class, with 53.1% of patients, had better prognosis (TC1: 51.3% of patients died within three years; TC2: 53.2% of patients died within three years). Table 2 summarises the number of deaths within each patient class by stage. Allocating patients to classes according to their largest class probability (modal assignment), all patients in PC1 diagnosed either at stage B or C died within three years; in PC2, all patients diagnosed at stage A, B or C survived. This difference is anticipated, as stage at diagnosis is an important predictor of survival. Most of the early- or mid-stage patients died within three years in PC1 compared to PC2, and there was a clear graduation in survival with increasing stage at diagnosis from early- to late-stage within both classes. The predictor age differed substantially across classes. 
In contrast, the predictors deprivation and sex differed only marginally between patient classes.\nResults for the subject classes in the 2-patient, 2-Trust-class multilevel latent class regression model\nOdds ratio of death within 3 years. CI: Confidence Interval. There were 12,856 (52.2%) deaths in the study population. The reference group comprised males, aged 71.5 years, classified as Stage A at diagnosis, and attributed a Townsend score of zero. †The odds ratio could not be estimated as there were zero patients who survived 3 years in this subcategory.\nDeaths by stage, and patient class, for the 2-patient, 2-Trust multilevel latent class regression model\nTrust ranks and their bootstrapped 95% CIs are summarised in Table 3; a low ranking value indicates a better survival rate than expected. Differences in the median rank of Trust performance between the MLLC model approach and the Trust SMRs are within their estimated 95% CIs. Figure 1 provides a graphical representation of these results, in order of increasing median probability of belonging to the best survival Trust class by the MLLC methodology.\nTrust ranks from the MMLC model and the calculation of Trust SMRs\nTrust Median Ranks and 95% Confidence Intervals, ordered by the MMLC analysis.", "In a standard multilevel setting, where a continuous latent variable is adopted at the Trust level, the implicit assumption is that Trust-level outcomes have an underlying normal distribution (conditional on Trust-level covariates): Trusts are effectively treated as a random sample of a larger (infinite) population of Trusts. Trusts are not, however, randomly placed geographically and nor are patients randomly assigned to Trusts. Parametric assumptions were therefore replaced by other assumptions which are less restrictive by adopting discrete latent variables, although there remains a degree of geographical dependency that is not accounted for. This remains a limitation. The simplest MLLC model adopted was therefore where the continuous latent variable at the upper level is replaced by a categorical latent variable. The model estimates the mean outcome for each Trust class and the size of each Trust class (summation of Trust probabilities for each Trust class) and no assumptions were made regarding the underlying distribution or class sizes. More complex models can extend this approach to accommodate the spatial dependencies, though this will be part of future developments.\nAn upper-level discrete latent variable allows for individual Trusts to be assigned probabilistically across the discrete latent classes, providing less restricted weighting of Trust relative performance. This may improve the accuracy of the estimated patient outcome differences across Trust classes, which improves the estimated patient casemix adjustment for individual Trusts. The MLLC model is more likely to capture contextual effects due to the inherent data hierarchy than either a standard multilevel approach or by merely estimating Trust ranks according to their SMRs. Continuous and discrete latent variables, if combined, may prove more parsimonious, with variation within each Trust class captured by the continuous latent variable, potentially leading to fewer Trust classes needed to describe overall Trust-level variation. Where determination of Trust ranks is important, the estimation of Trust outcomes is simpler if the categorical latent variable only is adopted at the Trust level, avoiding derivation of the normally distributed effects within each Trust class. 
Addressing spatial dependencies amongst the Trusts may nevertheless warrant incorporating upper-level effects.\nIn fixing patient-level latent class composition and accommodating patient casemix differences, the residual Trust-class differences in outcome reflect variations in Trust performance that depend upon Trust characteristics (differences in the treatments given and healthcare delivery processes). Model improvement might be feasible with more patient-level variables, but this would incorporate incomplete data, which can cause bias. Within a latent class framework the uncertainty surrounding unrecorded or unused patient characteristics is modelled explicitly: 'fuzzy' matching. Trust-level covariates might explain some of the Trust-class outcome differences if included. The optimum number and composition of Trust (and possibly patient) classes may change with the inclusion/exclusion of different covariates.\nThe probabilities of Trust class membership in Table 3 were marked, with most Trusts belonging entirely or predominantly to one Trust class. This is unsurprising, as there is only a modest difference between the two classes in median survival, and probabilistic assignment differentiates between the two, providing a class weighted combined survival rate. It is not feasible, however, for a Trust to be assigned a class weighted survival rate below that of the poorer survival class, or above that of the better survival class. This is an implicit constraint on the estimated weighted survival for Trusts allocated entirely to one of the two classes (e.g. Trust 1). To alleviate this, more Trust-level classes could be sought, increasing the number until no Trust had a probabilistic assignment of exactly one for classes at the extremities of the range of Trust outcome means. More research is needed, but as applied here, the estimated ranks are robust.\nAlthough the analyses undertaken were primarily for illustration of the proposed methodology, the results are to be taken seriously. Bias may have occurred, however, due to patients with more than one Trust visit having been assigned the most recent Trust visited as the treatment centre. If diagnosis was made at a separate Trust to that which subsequently provided treatment, it would be the latter that was important when modelling healthcare delivery and process variables. In our dataset, 75% of patients visited only one Trust. Nevertheless, some inaccuracies may remain, which could be addressed by screening each patient journey to determine where the majority of interventions take place, or by using multilevel multiple membership models for multiple treatment centres. Furthermore, technically, we have cross-classified data, with patients nested in both area of residence (which yields the patient SEB) and diagnostic centre (Trust); the area level is thus crossed with the Trust level. The number of patients in each area, however, is small and for simplicity of illustration we discarded this level in our model. The methodological principles of MLLC modelling extend theoretically to a cross-classified context, but software does not yet facilitate this.\nWe have satisfactorily demonstrated the principles of step (i) outlined previously, but there is more research to be undertaken to determine the processes for steps (ii) and (iii), which embark upon modelling patient pathways and the evaluation of process differences that vary across healthcare provider institutions. Distinction could then be made between the delivery of care (e.g. 
treatments) and health service process characteristics (e.g. delays to treatment) that make up the total patient experience. The proposed methodology paves the way for a more advanced modelling approach to the analysis of treatment centre characteristics (in addition to patient casemix characteristics), where differences in the patient pathway of care are modelled to evaluate organisational features in relation to patient outcomes. Such strategies permit hypothesis generation around which healthcare delivery and organisational features warrant intervention, informing prospective cluster-randomised trials targeted at improving service organisation and delivery. This feeds into existing approaches for quality improvement research, consistent with the principles of the MRC framework for the development and evaluation of complex interventions [28].", "The main advantages of the MLLC approach are that it provides accurately derived estimates of the outcome differences across Trust classes, hence improved 'casemix adjustment' for individual Trusts. Trust level covariates may be included, capturing additional casemix complexity. Although deliberately simplified, our illustration demonstrates a principle that could readily extend to a number of more sophisticated scenarios (e.g. time-to-event analysis, multiple treatment centres, cross-classified structures). The MLLC model paves the way to adjust for variations in the patient pathway (especially delivery of appropriate healthcare), permitting the evaluation of institutional processes, which should provide a more robust approach to evaluating institutional performance than is current practice.", "The authors declare that they have no competing interests.", "MSG conceived the idea and planned the study, he drafted the manuscript, and coordinated input from all coauthors; WJH did the analysis, addressing the statistical problems throughout the course of analytical developments, she produced the results (tables and chart) and drafted the manuscript; AD contributed her expertise from previous work that led on to this study and commented on the manuscript; DF provided cancer epidemiology expert advice and commented on the manuscript; RMW contributed to initial discussions surrounding concept and study design, helped steer the analyses, and contributed to the interpretation of the results and the writing of the manuscript. All authors read and approved the final manuscript.", "The multilevel latent class model used in this study takes the form:\nwhere yij is the outcome (death = 1, alive = 0) for patient i within Trust j; is the vector of patient-level covariates; t are the Trust classes (1...T); and c are the patient classes (1...C); p(c|t) is the probability of being in patient class c conditional on being in Trust class t, and in this study C is taken as the same for each Trust. The patient class model, P(c), expands to:\nwhere β0(c) to β5(c) are the patient-class specific coefficients for the patient-level covariates.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/53/prepub\n" ]
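The display equations referred to by "takes the form:" and "expands to:" in the Appendix did not survive extraction. Based on the surrounding definitions (Trust classes t = 1...T with weights p(t), patient classes c = 1...C with conditional weights p(c|t), and patient-class-specific coefficients β0(c) to β5(c) for the intercept, age, age squared, sex, Townsend score and stage), a plausible reconstruction is:

P(y_{ij} \mid \mathbf{x}_{ij}) \;=\; \sum_{t=1}^{T} p(t) \sum_{c=1}^{C} p(c \mid t)\, P(y_{ij} \mid c, \mathbf{x}_{ij})

\operatorname{logit}\!\left[ P(y_{ij}=1 \mid c, \mathbf{x}_{ij}) \right] \;=\; \beta_{0(c)} + \beta_{1(c)}\,\mathrm{age}_{ij} + \beta_{2(c)}\,\mathrm{age}_{ij}^{2} + \beta_{3(c)}\,\mathrm{sex}_{ij} + \beta_{4(c)}\,\mathrm{SEB}_{ij} + \beta_{5(c)}\,\mathrm{stage}_{ij}

The coding of the categorical stage term (dummy contrasts for B, C, D and X against A) cannot be recovered from the text, so it is written schematically as a single term here.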
[ null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "The illustrative colorectal cancer dataset", "Statistical methods", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Appendix", "Pre-publication history" ]
[ "Survival from cancer varies according to many factors including place of diagnosis and treatment centre (Trust), [1,2] stage at diagnosis, [3,4] and associated risk factors such as age at diagnosis, sex, and socioeconomic background (SEB) [5-9]. Some Trusts perform better or worse than others in terms of average survival rates perhaps due to patient casemix at the time of entry into the healthcare system, though patient outcome differences will reflect underlying differences in the effectiveness of healthcare organisations. Much interest lies in identifying good and poor performing healthcare providers, to identify best practice and advocate changes in under-performing institutions. It is important to account for patient casemix when evaluating institutional performance and there are currently several strategies.\nRegression (linear or logistic) is a traditional and well-documented approach, [10] where variables relating to patient characteristics are modelled, effectively to adjust the outcome in relation to the likely influences of these factors. Methods such as matching, stratification, [10] or propensity score analysis, [11,12] may also be used, though these techniques make potentially untestable assumptions and never account for the impact of unmeasured variables or accommodate Trust-level variation. Although multilevel modelling accounts for patients nested within Trusts, and provides improved estimates compared with logistic regression, [13,14] parametric assumptions are made that may not be tenable. Other methods, such as boosted decision trees, [15] have occasionally been used, though these can be difficult to interpret.\nNo casemix-adjustment strategy will eliminate all bias due to unmeasured differences amongst patients; [16] some procedures increase bias [17]. Accommodating patient variation through measured variables only is crude: models ought to reflect the uncertainty associated with patient casemix characteristics. Furthermore, casemix adjustment does not account for differences in patient treatments. Failure to capture variation in patient pathways and their consequences may result in over-simplistic interpretation of healthcare processes and consequent outcomes. Models need to accommodate patient casemix, the patient experience, and uncertainty in both.\nMultilevel latent class (MLLC) modelling is proposed to: (i) adjust for patient casemix whilst accommodating uncertainty surrounding unrecorded patient characteristics; (ii) adjust for patient pathways in terms of the delivery of appropriate healthcare (e.g. treatments); and (iii) differentiate patient outcomes in relation to institutional process characteristics (e.g. delays to treatment). To demonstrate and validate all three steps simultaneously is challenging. The first of these is explored here. We contrast the MLLC model ranking of Trust performance with that of ranks derived from calculating Trust standardised mortality ratios (SMRs). To illustrate our methodology, we study routine data on colorectal cancer patients from a large UK health region.", "[SUBTITLE] The illustrative colorectal cancer dataset [SUBSECTION] Patients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. 
Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons.\nAn area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. Following exclusions, 24,640 (93%) of the identified patients remained for analysis.\nPatients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons.\nAn area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. Following exclusions, 24,640 (93%) of the identified patients remained for analysis.\n[SUBTITLE] Statistical methods [SUBSECTION] Latent class analysis (LCA) is well established within single-level regression analysis. 
Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes.\nThe proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. 
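A minimal pandas sketch of the covariate handling just described: centring age on 71.5 years, adding a quadratic age term, trimming Townsend scores to ±5.0, and keeping missing stage as its own category 'X'. The column names are assumptions and this is not the authors' Stata code.

import pandas as pd

def prepare_covariates(df):
    # Assumed columns: 'age' (years), 'townsend' (population-centred score), 'stage' (Dukes A-D or missing)
    out = df.copy()
    out["age_c"] = out["age"] - 71.5                 # centre age on the study mean
    out["age_c2"] = out["age_c"] ** 2                # quadratic term for the non-linear age effect
    out["seb"] = out["townsend"].clip(-5.0, 5.0)     # assign rare values beyond +/-5.0 to +/-5.0
    out["stage"] = out["stage"].fillna("X")          # missing stage kept as its own category
    out["stage"] = pd.Categorical(out["stage"], categories=list("ABCDX"))
    return out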
The model is described in the Appendix.\nSMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used.\nLatent class analysis (LCA) is well established within single-level regression analysis. Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes.\nThe proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. 
Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. The model is described in the Appendix.\nSMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used.", "Patients with colorectal cancer (ICD10 [18] codes C18, C19 and C20) diagnosed between 1998 and 2004 and resident in the Northern and Yorkshire regions were identified from the Northern and Yorkshire Cancer Registry and Information Service (NYCRIS) database. Patient age, sex, tumour stage at diagnosis (using the Dukes classification [19]), Trust of diagnosis/treatment, and whether or not the patient received treatment were extracted. Initial data extraction yielded 26,455 unique patient records. Socioeconomic background (SEB) was defined at the 2001 enumeration district level of residence (super output area) using the Townsend Index [20] and matched to patients using postcode. The primary outcome was dead or alive three years following diagnosis, which is clinically meaningful since colorectal cancer has a median survival of approximately three years and survival to three years is often considered for policy reasons.\nAn area deprivation score could not be obtained for one case. Patients with age at diagnosis greater than 100 years (7 patients) and patients identified by death certificate only (364; 1.4%) were excluded. Some patients had multiple diagnosis codes and for patients attending more than one hospital (16,549; 63%), the location of the most recent Trust with a relevant diagnosis code was recorded as the diagnostic/treatment centre, as this provided the latest staging information. For patients who did not have a relevant diagnosis code for any Trust visits (220; 0.83%), the location of their first Trust visit was taken as the diagnostic/treatment centre. Some 1,239 (4.7%) patients were excluded as their diagnostic centres were outside the NYCRIS region. Following exclusions, 24,640 (93%) of the identified patients remained for analysis.", "Latent class analysis (LCA) is well established within single-level regression analysis. 
Also known as discrete latent variable modelling, or mixture modelling, one determines a number of latent classes, or subgroups, the optimum choice of which is typically informed by log-likelihood statistics. The Bayesian Information Criterion (BIC), [21] the Akaike Information Criterion (AIC), [22] and changes in log-likelihood (LL) are used as model-fit indicators, though models might also be selected on the basis of interpretation [23]. Model parameters of each latent class are determined empirically, along with their contribution to the outcome distribution. LCA models are useful where subtypes are sought and one wishes to model uncertainty surrounding class membership, since observations may belong to all classes, with probabilities determined empirically. LCA thus reflects the uncertainty associated with a limited number of predictors when determining subtypes of outcomes. The proposed LCA models are multilevel because patients are nested within diagnostic/treatment centres (Trusts). LCA extends to a multilevel setting by incorporating discrete latent variables at all levels of the hierarchy. For the colorectal cancer data, latent classes at the patient level model uncertainty surrounding affiliation to patient subgroups and latent classes at the Trust level model Trust variation. The modelling strategy was to determine patient-level latent classes (having included patient-level covariates) with Trust-level variation accommodated initially by a continuous latent variable. With patient-level subtype structure fixed, Trust classes were then sought by switching the Trust-level latent variable from continuous to categorical. A minimum of two Trust classes was required to exhibit discretised Trust class differences in patient outcomes.\nThe proposed modelling strategy builds upon work originated by Downing et al., [24] where multilevel LCA circumnavigated potential bias due to the 'reversal paradox' when adjusting for confounders on the causal path between exposure and outcome [25]. We have no such concerns here, since we are not seeking inference of any exposure nor confounder adjustment: rather, we seek to optimise outcome prediction by modelling patient characteristics to accommodate casemix differences. Consequently, all available covariates for which there was complete data (age, sex, and SEB) were considered by the modelling process, along with stage at diagnosis (coded A to D for increasing severity and missing coded X). Stage was included despite a degree of missing data (13.1%), because it is known to influence survival, [3,4] and a missing category was conveniently added. Although additional patient variables were available, such as time-to-first-treatment and treatment-received, these had substantial incomplete data that would question their utility and were therefore not used. Patient age at diagnosis and Townsend score (SEB) were continuous measures; age was centred on the study mean (71.5 years) and SEB was centred on the population mean of zero (study mean was -0.040). Both covariates exhibited a non-linear relationship with 3-year survival, so a quadratic term for age was included in the model; and by 'trimming' the tails of SEB (assigning rare values > ± 5.0 as ± 5.0), it was possible to avoid higher order terms for Townsend score. 
The model is described in the Appendix.\nSMRs were calculated for each Trust (standardised by age, sex, deprivation and stage) and a scaled difference from 'SMR = 1' was determined for each Trust by dividing by the square root of the Trust size. For both the SMRs and the MLLC models, 200 bootstrapped datasets were generated and each was analysed in the same manner to determine 95% confidence intervals (CIs). We used MLLC to calculate absolute differences in Trust effects on the log odds scale (with patient-level values aggregated to the Trust level) before ranking in order of 'best' to 'worst' survival, to compare with the ranks generated from the Trust SMRs. For data manipulation, summary statistics, tabulation, and charts, Stata was used; [26] for latent variable models, LatentGold [27] was used.", "Table 1 summarises the 'ideal' MLLC model determined by the procedures described. Patients were assigned to two latent classes of similar size, one with reasonable prognosis (PC1: 54.3% of cases, of which 63.0% died within three years), and one with better prognosis (PC2: 45.7% of cases, of which 39.3% died within three years). Trusts were similarly assigned to two latent classes. The largest Trust class, with 53.1% of patients, had better prognosis (TC1: 51.3% of patients died within three years; TC2: 53.2% of patients died within three years). Table 2 summarises the number of deaths within each patient class by stage. Allocating patients to classes according to their largest class probability (modal assignment), all patients in PC1 diagnosed either at stage B or C died within three years; in PC2, all patients diagnosed at stage A, B or C survived. This difference is anticipated, as stage at diagnosis is an important predictor of survival. Most of the early- or mid-stage patients died within three years in PC1 compared to PC2, and there was a clear graduation in survival with increasing stage at diagnosis from early- to late-stage within both classes. The predictor age differed substantially across classes. In contrast, the predictors deprivation and sex differed only marginally between patient classes.\nResults for the subject classes in the 2-patient, 2-Trust-class multilevel latent class regression model\nOdds ratio of death within 3 years. CI: Confidence Interval. There were 12,856 (52.2%) deaths in the study population. The reference group comprised males, aged 71.5 years, classified as Stage A at diagnosis, and attributed a Townsend score of zero. †The odds ratio could not be estimated as there were zero patients who survived 3 years in this subcategory.\nDeaths by stage, and patient class, for the 2-patient, 2-Trust multilevel latent class regression model\nTrust ranks and their bootstrapped 95% CIs are summarised in Table 3; a low ranking value indicates a better survival rate than expected. Differences in the median rank of Trust performance between the MLLC model approach and the Trust SMRs are within their estimated 95% CIs. 
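The SMR comparator and its bootstrapped rank intervals could be sketched along the following lines in Python. The column names, the age/SEB banding and the whole-cohort resampling scheme are assumptions (the text does not specify them), and this is not the authors' Stata code.

import numpy as np
import pandas as pd

def smr_rank(df):
    # Indirect standardisation: expected deaths from region-wide rates within
    # age/sex/deprivation/stage strata, then observed/expected deaths per Trust.
    strata = ["age_band", "sex", "seb_band", "stage"]
    expected = df.groupby(strata)["died"].transform("mean")
    g = df.assign(expected=expected).groupby("trust")
    smr = g["died"].sum() / g["expected"].sum()
    scaled = (smr - 1.0) / np.sqrt(g.size())         # scaled difference from SMR = 1
    return scaled.rank()                             # low rank = better survival than expected

def bootstrap_ranks(df, n_boot=200, seed=1):
    rng = np.random.default_rng(seed)
    reps = []
    for _ in range(n_boot):
        sample = df.sample(frac=1.0, replace=True, random_state=int(rng.integers(1 << 31)))
        reps.append(smr_rank(sample))
    reps = pd.concat(reps, axis=1)
    return reps.median(axis=1), reps.quantile(0.025, axis=1), reps.quantile(0.975, axis=1)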
Figure 1 provides a graphical representation of these results, in order of increasing median probability of belonging to the best survival Trust class by the MLLC methodology.\nTrust ranks from the MMLC model and the calculation of Trust SMRs\nTrust Median Ranks and 95% Confidence Intervals, ordered by the MMLC analysis.", "In a standard multilevel setting, where a continuous latent variable is adopted at the Trust level, the implicit assumption is that Trust-level outcomes have an underlying normal distribution (conditional on Trust-level covariates): Trusts are effectively treated as a random sample of a larger (infinite) population of Trusts. Trusts are not, however, randomly placed geographically and nor are patients randomly assigned to Trusts. Parametric assumptions were therefore replaced by other assumptions which are less restrictive by adopting discrete latent variables, although there remains a degree of geographical dependency that is not accounted for. This remains a limitation. The simplest MLLC model adopted was therefore where the continuous latent variable at the upper level is replaced by a categorical latent variable. The model estimates the mean outcome for each Trust class and the size of each Trust class (summation of Trust probabilities for each Trust class) and no assumptions were made regarding the underlying distribution or class sizes. More complex models can extend this approach to accommodate the spatial dependencies, though this will be part of future developments.\nAn upper-level discrete latent variable allows for individual Trusts to be assigned probabilistically across the discrete latent classes, providing less restricted weighting of Trust relative performance. This may improve the accuracy of the estimated patient outcome differences across Trust classes, which improves the estimated patient casemix adjustment for individual Trusts. The MLLC model is more likely to capture contextual effects due to the inherent data hierarchy than either a standard multilevel approach or by merely estimating Trust ranks according to their SMRs. Continuous and discrete latent variables, if combined, may prove more parsimonious, with variation within each Trust class captured by the continuous latent variable, potentially leading to fewer Trust classes needed to describe overall Trust-level variation. Where determination of Trust ranks is important, the estimation of Trust outcomes is simpler if the categorical latent variable only is adopted at the Trust level, avoiding derivation of the normally distributed effects within each Trust class. Addressing spatial dependencies amongst the Trusts may nevertheless warrant incorporating upper-level effects.\nIn fixing patient-level latent class composition and accommodating patient casemix differences, the residual Trust-class differences in outcome reflect variations in Trust performance that depend upon Trust characteristics (differences in the treatments given and healthcare delivery processes). Model improvement might be feasible with more patient-level variables, but this would incorporate incomplete data, which can cause bias. Within a latent class framework the uncertainty surrounding unrecorded or unused patient characteristics is modelled explicitly: 'fuzzy' matching. Trust-level covariates might explain some of the Trust-class outcome differences if included. 
The optimum number and composition of Trust (and possibly patient) classes may change with the inclusion/exclusion of different covariates.\nThe probabilities of Trust class membership in Table 3 were marked, with most Trusts belonging entirely or predominantly to one Trust class. This is unsurprising, as there is only a modest difference between the two classes in median survival, and probabilistic assignment differentiates between the two, providing a class weighted combined survival rate. It is not feasible, however, for a Trust to be assigned a class weighted survival rate below that of the poorer survival class, or above that of the better survival class. This is an implicit constraint on the estimated weighted survival for Trusts allocated entirely to one of the two classes (e.g. Trust 1). To alleviate this, more Trust-level classes could be sought, increasing the number until no Trust had a probabilistic assignment of exactly one for classes at the extremities of the range of Trust outcome means. More research is needed, but as applied here, the estimated ranks are robust.\nAlthough the analyses undertaken were primarily for illustration of the proposed methodology, the results are to be taken seriously. Bias may have occurred, however, due to patients with more than one Trust visit having been assigned the most recent Trust visited as the treatment centre. If diagnosis was made at a separate Trust to that which subsequently provided treatment, it would be the latter that was important when modelling healthcare delivery and process variables. In our dataset, 75% of patients visited only one Trust. Nevertheless, some inaccuracies may remain, which could be addressed by screening each patient journey to determine where the majority of interventions take place, or by using multilevel multiple membership models for multiple treatment centres. Furthermore, technically, we have cross-classified data, with patients nested in both area of residence (which yields the patient SEB) and diagnostic centre (Trust); the area level is thus crossed with the Trust level. The number of patients in each area, however, is small and for simplicity of illustration we discarded this level in our model. The methodological principles of MLLC modelling extend theoretically to a cross-classified context, but software does not yet facilitate this.\nWe have satisfactorily demonstrated the principles of step (i) outlined previously, but there is more research to be undertaken to determine the processes for steps (ii) and (iii), which embark upon modelling patient pathways and the evaluation of process differences that vary across healthcare provider institutions. Distinction could then be made between the delivery of care (e.g. treatments) and health service process characteristics (e.g. delays to treatment) that make up the total patient experience. The proposed methodology paves the way for a more advanced modelling approach to the analysis of treatment centre characteristics (in addition to patient casemix characteristics), where differences in the patient pathway of care are modelled to evaluate organisational features in relation to patient outcomes. Such strategies permit hypothesis generation around which healthcare delivery and organisational features warrant intervention, informing prospective cluster-randomised trials targeted at improving service organisation and delivery. 
This feeds into existing approaches for quality improvement research, consistent with the principles of the MRC framework for the development and evaluation of complex interventions [28].", "The main advantages of the MLLC approach are that it provides accurately derived estimates of the outcome differences across Trust classes, hence improved 'casemix adjustment' for individual Trusts. Trust level covariates may be included, capturing additional casemix complexity. Although deliberately simplified, our illustration demonstrates a principle that could readily extend to a number of more sophisticated scenarios (e.g. time-to-event analysis, multiple treatment centres, cross-classified structures). The MLLC model paves the way to adjust for variations in the patient pathway (especially delivery of appropriate healthcare), permitting the evaluation of institutional processes, which should provide a more robust approach to evaluating institutional performance than is current practice.", "The authors declare that they have no competing interests.", "MSG conceived the idea and planned the study, he drafted the manuscript, and coordinated input from all coauthors; WJH did the analysis, addressing the statistical problems throughout the course of analytical developments, she produced the results (tables and chart) and drafted the manuscript; AD contributed her expertise from previous work that led on to this study and commented on the manuscript; DF provided cancer epidemiology expert advice and commented on the manuscript; RMW contributed to initial discussions surrounding concept and study design, helped steer the analyses, and contributed to the interpretation of the results and the writing of the manuscript. All authors read and approved the final manuscript.", "The multilevel latent class model used in this study takes the form:\nwhere yij is the outcome (death = 1, alive = 0) for patient i within Trust j; is the vector of patient-level covariates; t are the Trust classes (1...T); and c are the patient classes (1...C); p(c|t) is the probability of being in patient class c conditional on being in Trust class t, and in this study C is taken as the same for each Trust. The patient class model, P(c), expands to:\nwhere β0(c) to β5(c) are the patient-class specific coefficients for the patient-level covariates.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6963/11/53/prepub\n" ]
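As a small numerical illustration of the class-weighted combined survival rate discussed above (and of the constraint that it must lie between the two Trust-class rates), the sketch below assumes the reported class survival fractions of 48.7% (TC1) and 46.8% (TC2) and a hypothetical Trust with membership probabilities 0.8 and 0.2.

def class_weighted_survival(class_probs, class_survival):
    # class_probs: P(Trust class t | Trust); class_survival: 3-year survival within class t
    return sum(p * class_survival[t] for t, p in class_probs.items())

# Hypothetical Trust split 0.8 / 0.2 across the two Trust classes:
weighted = class_weighted_survival({1: 0.8, 2: 0.2}, {1: 0.487, 2: 0.468})  # = 0.4832

A Trust assigned entirely to one class would receive that class's rate exactly, which is the implicit constraint noted above.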
[ null, "methods", null, null, null, null, null, null, null, null, null ]
[]
A comparative study of four intensive care outcome prediction models in cardiac surgery patients.
21362175
Outcome prediction scoring systems are increasingly used in intensive care medicine, but most were not developed for use in cardiac surgery patients. We compared the performance of four intensive care outcome prediction scoring systems (Acute Physiology and Chronic Health Evaluation II [APACHE II], Simplified Acute Physiology Score II [SAPS II], Sequential Organ Failure Assessment [SOFA], and Cardiac Surgery Score [CASUS]) in patients after open heart surgery.
BACKGROUND
We prospectively included all consecutive adult patients who underwent open heart surgery and were admitted to the intensive care unit (ICU) between January 1st 2007 and December 31st 2008. Scores were calculated daily from ICU admission until discharge. The outcome measure was ICU mortality. The performance of the four scores was assessed by calibration and discrimination statistics. Derived variables (Mean- and Max-scores) were also evaluated.
METHODS
During the study period, 2801 patients (29.6% female) were included. Mean age was 66.9 ± 10.7 years and the ICU mortality rate was 5.2%. Calibration was reliable for SOFA and CASUS throughout (P ≥ 0.05 on all days), but there were significant differences between predicted and observed outcome for SAPS II (days 1, 2, 3 and 5) and APACHE II (days 2 and 3). CASUS and its mean and maximum derivatives discriminated better between survivors and non-survivors than the other scores throughout the study (area under the curve ≥ 0.90). In order of best discrimination, CASUS was followed by SOFA, then SAPS II and finally APACHE II. The SAPS II and APACHE II derivatives discriminated better than the SOFA derivatives.
RESULTS
CASUS and SOFA are reliable ICU mortality risk stratification models for cardiac surgery patients. SAPS II and APACHE II did not perform well in terms of calibration and discrimination statistics.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Cardiac Surgical Procedures", "Female", "Follow-Up Studies", "Germany", "Heart Diseases", "Hospital Mortality", "Humans", "Intensive Care Units", "Male", "Middle Aged", "Outcome Assessment, Health Care", "Prospective Studies", "Severity of Illness Index", "Young Adult" ]
3058022
null
null
Methods
This study involved an evaluation of prospectively collected data from all consecutive adult patients admitted to our ICU after cardiac surgery. Patients admitted between January 1st 2007 and December 31st 2008 were included and the study was approved by the Institutional Review Board of Friedrich Schiller University Hospital (approval no.: 2809-05/10). Only the first admission was considered for patients who were readmitted to the ICU during the study period. Data were collected from the quality control system QUIMS 2.0b (University Hospital of Muenster, Germany) and from the intensive care information system COPRA 5.2 (COPRASYSTEM GmbH, Sasbachwalden, Germany), which is interfaced with patient monitors (Philips IntelliVue MP70, Amsterdam, Netherlands), ventilators (Draeger Evita IV, Luebeck, Germany and Hamilton Galileo, Bonaduz, Swizerland), blood gas analyzing devices (ABL 800Flex Radiometer, Copenhagen, Denmark) and the central laboratories. The attending physician collected the study data of all scores for the first postoperative week. Two assigned medical clerks validated the data collection daily. A senior consultant performed a second periodical validation. Inconsistency between the raters was resolved by consensus. There were no missing data. Outcome was defined as ICU mortality. The scores were calculated using the most abnormal value for each variable per day. The maximum derivative of any scoring system (Max-score) was defined as the worst daily score throughout the whole ICU stay. Mean-score was calculated by dividing the sum of all daily values during the ICU stay by the ICU length of stay (ICULOS) in days. [SUBTITLE] Statistical analyses [SUBSECTION] Statistical analyses were performed with SPSS software version 18 (SPSS Inc, Chicago, IL). Graphics were drawn using Microsoft Excel software. Continuous scale data are presented as mean ± standard deviation (SD) and were analyzed using the two-tailed Student's t-test for independent samples. The Kolmogorov-Smirnov test showed a normal distribution of the continuous data. A p value of < 0.05 was considered as significant. Calibration was performed using the Hosmer-Lemeshow (HL) test (goodness-of-fit-test) to insure the absence of a significant discrepancy between predicted and observed mortality. Calibration was considered good when there was a low χ2 value and a high p value (>0.05). Discrimination (ability of a scoring model to differentiate between survival and death) was evaluated with receiver-operating-characteristic (ROC) curves; the area under the curve (AUC) indicates the discriminative ability of the scores, i.e., the ability to discriminate survivors from non-survivors. AUCs enable direct comparison of different scoring systems: An AUC of 0.5 (a diagonal line) is equivalent to random chance, AUC >0.7 indicates a moderate prognostic model, and AUC >0.8 (a bulbous curve) indicates a good prognostic model. The overall correct classification (OCC) (the ratio of number of correctly predicted survivors and non-survivors to the total number of patients) values of the scores were calculated. The risk of mortality is given as odds ratios for all scores with 95%-confidence intervals. All statistical analyses were performed from ICU day 1 (n = 2801) (operative day) to day 6 (n = 431 patients) only, in order to obtain accurate statistical results and to avoid a small number of patients. The preoperative logistic and additive EuroSCORE were also statistically tested. 
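A small Python sketch of the two derived variables defined above, assuming one row per patient per ICU day with columns 'patient_id' and 'score' (the daily CASUS, SOFA, SAPS II or APACHE II value); this is illustrative only, not the study's actual data pipeline.

import pandas as pd

def derive_scores(daily):
    # Max-score: worst (highest) daily score over the whole ICU stay.
    # Mean-score: sum of daily scores divided by ICU length of stay in days
    # (equal to the number of daily rows under the assumed one-row-per-day layout).
    g = daily.groupby("patient_id")["score"]
    return pd.DataFrame({"max_score": g.max(), "mean_score": g.sum() / g.count()})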
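The evaluation statistics described in the statistical analyses could be computed along the following lines. The decile grouping for the Hosmer-Lemeshow test and the 0.5 probability threshold for the overall correct classification are common conventions assumed here, not details taken from the paper.

import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

def hosmer_lemeshow(y_true, p_pred, n_groups=10):
    # Chi-square over groups ordered by predicted risk; returns (statistic, p-value).
    order = np.argsort(p_pred)
    y, p = np.asarray(y_true)[order], np.asarray(p_pred)[order]
    stat = 0.0
    for idx in np.array_split(np.arange(len(p)), n_groups):
        obs, exp, n = y[idx].sum(), p[idx].sum(), len(idx)
        stat += (obs - exp) ** 2 / (exp * (1.0 - exp / n) + 1e-12)
    return stat, chi2.sf(stat, n_groups - 2)  # high p-value (> 0.05) indicates good calibration

def discrimination_and_occ(y_true, p_pred, threshold=0.5):
    auc = roc_auc_score(y_true, p_pred)                      # area under the ROC curve
    predicted = (np.asarray(p_pred) >= threshold).astype(int)
    occ = float((predicted == np.asarray(y_true)).mean())    # overall correct classification
    return auc, occ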
null
null
null
null
[ "Background", "Statistical analyses", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Scoring systems were introduced into intensive care medicine to provide the physician with an objective tool for judging a patient's condition and likely outcome. These scores can be used to estimate the severity of disease and to aid therapeutic decisions. The acute patho-physiological sequelae of cardiopulmonary bypass are transient and many physiologic changes may be masked by multiple system support devices, such as intra-aortic balloon pumps, ventricular assist devices, hemofiltration and mechanical ventilation. The subset of cardiac surgery patients was, therefore, excluded during the development of many general scoring systems, such as the Acute Physiology and Chronic Health Evaluation (APACHE) and the Simplified Acute Physiology Score (SAPS) [1,2]. Nevertheless, many of these scoring systems are used in cardiac surgery intensive care units (ICU) because of the lack of an appropriate risk index for this specific subgroup of patients. In central Europe, the most commonly used postoperative scoring systems in cardiac ICUs are APACHE II [1], SAPS II [2] and the Sequential Organ Failure Assessment (SOFA) [3]. Recently, the Cardiac Surgery Score (CASUS) [4] was introduced to specifically target cardiac surgery patients, but it is not yet widely used. In this study, we compared the mortality prediction of CASUS and the other well-known ICU scoring systems after cardiac surgery. The variables included in these four scores are shown in Table 1.\nSummary of variables included in the different postoperative scoring systems\nCVP: central venous pressure; IABP: intra-aortic balloon pump; VAD: ventricular assist device; COPD: chronic obstructive pulmonary disease; GI: gastrointestinal; GCS: Glasgow coma scale; AIDS: acquired immunodeficiency disease.", "Statistical analyses were performed with SPSS software version 18 (SPSS Inc, Chicago, IL). Graphics were drawn using Microsoft Excel software. Continuous scale data are presented as mean ± standard deviation (SD) and were analyzed using the two-tailed Student's t-test for independent samples. The Kolmogorov-Smirnov test showed a normal distribution of the continuous data. A p value of < 0.05 was considered as significant. Calibration was performed using the Hosmer-Lemeshow (HL) test (goodness-of-fit-test) to insure the absence of a significant discrepancy between predicted and observed mortality. Calibration was considered good when there was a low χ2 value and a high p value (>0.05). Discrimination (ability of a scoring model to differentiate between survival and death) was evaluated with receiver-operating-characteristic (ROC) curves; the area under the curve (AUC) indicates the discriminative ability of the scores, i.e., the ability to discriminate survivors from non-survivors. AUCs enable direct comparison of different scoring systems: An AUC of 0.5 (a diagonal line) is equivalent to random chance, AUC >0.7 indicates a moderate prognostic model, and AUC >0.8 (a bulbous curve) indicates a good prognostic model. The overall correct classification (OCC) (the ratio of number of correctly predicted survivors and non-survivors to the total number of patients) values of the scores were calculated. The risk of mortality is given as odds ratios for all scores with 95%-confidence intervals. All statistical analyses were performed from ICU day 1 (n = 2801) (operative day) to day 6 (n = 431 patients) only, in order to obtain accurate statistical results and to avoid a small number of patients. 
The preoperative logistic and additive EuroSCORE were also statistically tested.", "The study included 2801 patients who were admitted to the ICU over the two-year period; 29.6% (n = 830) were female, and mean age was 66.9 ± 10.7 years (range of 19-89 years). The types of surgical procedures are shown in Table 2. ICULOS was 4.3 ± 6.8 days (range 1-189 days, median 2.0 days, 75th percentile 4.0 days) and ICU mortality was 5.2% (n = 147). The preoperative collected mean additive EuroSCORE was 6.3 ± 3.6 and the mean logistic EuroSCORE was 9.9 ± 12.9 (median 5.3, 75th percentile 11.3).\nType of surgery in the study population\nCABG: Coronary artery bypass grafting.\nTable 3 summarizes the OCC, calibration and discrimination of all four models from the first ICU day to day 6 and for both preoperative EuroSCORE models. There were no significant differences between expected and observed mortality for CASUS, SOFA and the preoperative additive EuroSCORE using the HL-test, but there were differences for the preoperative logistic EuroSCORE (p = 0.01), SAPS II (p < 0.05 on ICU admission and days 2, 3 and 5) and APACHE II (p < 0.05 on days 2 and 3). Figure 1 shows the ROCs of all the postoperative models for the first six ICU days. The AUC for CASUS (≥ 0.90) was greater than those of the other scoring systems on all studied days; the largest AUC was achieved with CASUS on the second ICU day (AUC = 0.97) (Table 3, Figure 2). SOFA performed better than APACHE II and SAPS II in this statistical analysis. The OCC was greater for CASUS than for the other scores on all days with the best result on the second ICU day (OCC = 96.9%).\nDay 1-6: Logistic regression, OCC, calibration (HL), discrimination (ROC) for EuroSCORE, CASUS, SOFA, SAPSII, APACHEII\n95%-CI: 95%-confidence interval, Add-Euro: additive EuroSCORE, AUC: Area under ROC curve, HL: Hosmer-Lemeshow, Log-Euro: logistic EuroSCORE, OCC: overall correct classification, CC: tio for risk of mortality, OR: Odds ratio for risk of mortality, ROC: receiver operating characteristic.\nDay 1-6: ROC-curves of CASUS, SOFA, APACHE II, SAPS II and their derivatives.\nDay 1-6: Areas under the ROC-curves of CASUS, SOFA, APACHE II and SAPS II.\nTable 4 shows the results for the statistical evaluation of the score-derivatives. There were no significant differences between expected and observed mortality using the HL-test. CASUS again had the best discrimination. In the ROC test, in contrast to the results for the original scores, the derivatives of SAPS II and of APACHE II performed better than the derivatives of SOFA. All derived scores had higher OCCs than the original scores.\nLogistic regression/odds ratio, OCC, calibration (HL), discrimination (ROC) for CASUS, SOFA, SAPSII, APACHEII derivatives\n95%-CI: 95%-confidence interval, AUC: Area under ROC curve, HL: Hosmer-Lemeshow, OCC: overall correct classification, OR: Odds ratio for risk of mortality, ROC: receiver operating characteristic.", "Patients undergoing cardiac surgery show temporary pathophysiological effects related to the heart-lung-machine [5,6] that can influence the values of the postoperative scoring systems [7] and may make them unreliable in this population. These effects include the relatively long mechanical ventilation time needed to stabilize these patients [8,9] and the postoperative sedation that limits the role of the Glasgow Coma Scale (GCS) as a prognostic parameter [10]. Electrolyte- and blood glucose imbalances are also frequent [4]. 
All these factors are temporary and have a limited effect on prognosis. In addition, most currently used scoring systems ignore some of the parameters that can influence outcomes in these patients. The most common examples of this are the use of intra-aortic balloon pumps (IABP) and ventricular assist devices (VAD), and the presence of postoperative low cardiac output syndrome (LCOS) [5,6,8,11]. In 2005, CASUS [4] was suggested as a specialized cardiac surgery scoring system that took into account the special circumstances encountered in the ICU after cardiac surgery. However, many ICUs are still using the general postoperative risk stratification models for cardiac surgery patients, notably, in central Europe, the SOFA, APACHE II and SAPS II scores. Postoperative risk stratification is increasingly used, especially in cardiac surgery, and we believed it was important to compare these widely used scoring systems with the relatively new model (CASUS) to try and identify the optimal tool in this field.\nThe APACHE II model [1], published in 1985, was developed to simplify the original APACHE model and has become the most frequently used general mortality prediction model. APACHE II has been extensively validated, and despite being the oldest system, it still performs well [12]. More recent versions (APACHE III and IV) have not been widely adopted. All the APACHE models are based on the most abnormal values registered during the first 24 h after ICU admission. However, because several studies [13,14] have supported serial daily usage of postoperative risk stratification models, we chose to evaluate APACHE II on all ICU days. In our study, APACHE II had the worst discrimination of the four models studied but its calibration was better than that of SAPS II.\nSAPS II was developed in 1994 [2] based on a European/North American database, which included 13,152 patients. Logistic regression analysis was used to select variables, and for weighting and conversion of the score to give the probability of hospital mortality for ICU patients over the age of 18. Although cardiac surgery patients were originally excluded from the score's target, it is used in many cardiac ICUs. SAPS II has been extensively studied and validated. There seems to be quite convincing evidence of the ability to maintain good discrimination across different populations, but calibration is often poor [15,16]. Our study in cardiac surgery patients, confirmed the poor calibration of SAPS II and its discrimination was worse than that of SOFA and CASUS. SAPS III [17] was introduced in 2005 in an attempt to overcome shortcomings related to different case-mixes and lead-time bias of SAPS II. However, its calibration and discrimination set were shown to vary widely around the world [12] so that many centers in central Europe still use the older version.\nThe SOFA was originally developed in 1996 as a morbidity risk stratification model for patients with sepsis [3]. Because of its good performance and reliability, SOFA is widely used as a scoring model for ICU patients not only for morbidity but also for mortality prediction [7]. In 2003, Ceriani et al. [14] suggested the use of SOFA in cardiac surgery patients. Based on the good results they obtained in 218 patients, they concluded that SOFA was applicable in cardiac surgery without any need for specific modifications. SOFA comprises separate daily scores for respiratory, renal, cardiovascular, central nervous, coagulation, and hepatic systems. 
The scores can be used in several ways, as individual scores (for each organ), as the sum of scores on a single ICU day, or as the sum of the worst scores during the ICU stay.\nCASUS was developed based on retrospective analyses to identify descriptors of mortality and multiorgan dysfunction in postoperative cardiac surgical patients. It was then evaluated prospectively in 3230 patients in a single center study [4]. The main goal was to develop a scoring model that was specific to this type of patient and had a minimum number of descriptors. CASUS is, therefore, a compact score index with only ten, readily available descriptors. This scoring system has not yet been externally validated in multicenter studies, and accordingly, has not yet gained much popularity.\nThe ideal scoring system should not only be simple and reproducible but also reliable. This reliability can be assessed using calibration and discrimination tests, considered by the European Society of Intensive Care Medicine (ESICM) to be the best methods to validate score systems and prognostic parameters [18]. It has been argued that perfect discrimination is important in order to evaluate an individual patient's risk using a scoring system, whereas for clinical trials or comparison of care between ICUs better calibration is needed [19]. Accordingly, validations of scoring systems in the literature have frequently been achieved using good discrimination tests, although the HL test has often resulted in unreliable calibration. The HL test is very sensitive to the size of the study population with large numbers of patients resulting in unreliable calibration [20]. This fact is applicable to study populations larger than 5000 patients [20], which was not the case in our study. In other words, if the HL-test, in studies with more than 5000 patients, is significant this does not necessarily mean that the scoring systems are not useful or are unreliable [20].\nHowever, our study, with a more optimal size of study population, showed that APACHE II and SAPS II are not suitable for use in cardiac surgery patients. CASUS and SOFA had an acceptable performance with the HL-test compared to the other two scores. CASUS was clearly superior in its ability to discriminate between survival and death on all days. This predictive property allows complications to be anticipated in individual patients and should alert residents, especially those with relatively little experience, to ask for help. The OCC (the ratio of correctly predicted number of survivors and non-survivors to the total number of patients) was also better in CASUS than in the other scores. We decided not to compare the different scores using odds ratios, because conclusions from such analyses can be distorted, as the maximum points in the different scoring systems vary significantly. Nevertheless odds ratios are useful tools to estimate the risk of mortality. Hence, for example, results can be influenced by different inotropic regimes or fluid replacement strategies in different hospitals. The assessment of the central nervous system may also affect results because the GCS is affected by sedation, anesthesia and paralysis [10,21,22], and calculation requires clinical evaluation, which may be biased by subjective interpretation [6,10,22]. CASUS is not affected by these problems. Its simple variable, 'neurologic state', can be calculated in less than one minute per patient per day. 
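A minimal sketch of the two discrimination summaries used throughout this comparison, ROC AUC and overall correct classification (OCC). It assumes hypothetical per-patient predicted probabilities and outcomes and uses scikit-learn, which the authors did not use; the 0.5 classification threshold is likewise an assumption.

```python
# Illustrative sketch, assuming per-patient predicted death probabilities
# (`predicted`) and observed outcomes (`died`); the article itself used SPSS.
import numpy as np
from sklearn.metrics import roc_auc_score

def discrimination_and_occ(predicted, died, threshold=0.5):
    """AUC of the ROC curve plus overall correct classification (OCC, in %)."""
    predicted = np.asarray(predicted, dtype=float)
    died = np.asarray(died, dtype=int)
    auc = roc_auc_score(died, predicted)            # 0.5 = chance, >0.8 = good model
    predicted_death = (predicted >= threshold).astype(int)
    occ = (predicted_death == died).mean() * 100    # % of correctly classified patients
    return auc, occ
```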
The parameters included in any scoring system influence its usefulness in different populations of patients. It is, therefore, perhaps not surprising that CASUS, which was specifically constructed for cardiac surgery patients, is superior to general severity systems in this group.\nMean- and max-score derivatives were introduced for SOFA by Moreno et al. [23] in 1999 and Ferreira et al. [24] in 2001. These methods were extended by Ceriani et al. [14] in 2003. We chose to calculate mean- and max-values for all four scores. However, it should be remembered that calculating the mean- and max-values adds some degree of selectivity to the model. The mean-derivative of a model reflects the overall average, whereas the max-derivative highlights the peak of organ dysfunction during the postoperative ICU stay; both are associated with the ICULOS, and thus allow a defined outcome prediction. The mean- and max-derivatives of all scores demonstrated better calibration, discrimination and OCC than the original models.\nSimilar to other studies, we detected a severe decrease in the study population on the third day because uncomplicated cases had been transferred to the general floor (Table 3). It is therefore important that a score is reliable during the first two days so that patients at risk are not discharged too early potentially leading to ICU-readmission and/or prolonged hospital stay, both of which are associated with higher mortality rates [25,26]. The good prognostic abilities of SOFA and CASUS in this study suggest they could be used to identify high-risk patients, enabling certain precautions to be put into place, such as daily monitoring of physiological dysfunction [27], and allowing prognoses and therapeutic choices, including withdrawal of therapy, to be discussed and reconsidered [28]. Nevertheless, no scoring system can replace clinical evaluation at a patient's bedside; they can only serve as an objective tool in decision making. Although scoring systems may provide an indication of disease severity and prognosis in individual patients and assist in overall patient assessment along with full clinical evaluation and other available parameters, they are designed for use in groups of patients and should never be the sole basis for therapeutic decisions [29].", "SOFA and CASUS are reliable tools for risk stratification in cardiac surgery patients. CASUS is more accurate than SOFA in mortality prediction. In contrast, APACHE II and SAPS II are not the tools of choice for this group of patients.", "The authors declare that they have neither a financial nor a non-financial competing interest.", "FD: substantial contributions to conception and design; acquisition, analysis and interpretation of data; drafting the manuscript. AB: substantial contributions to conception and design; revising the manuscript critically for important intellectual content. MH: acquisition and analysis of data; revising the manuscript it critically for important intellectual content. TB: final approval of the version to be published. MR: revising the manuscript critically for important intellectual content; final approval of the version to be published. TL: substantial contributions to statistical methods and analyses. OB: final approval of the version to be published. KH: substantial contributions to conception and design; interpretation of data; revising the manuscript critically for important intellectual content. All authors read and approved the final manuscript." ]
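The mean- and max-score derivatives discussed above are defined in the Methods as the average daily score over the ICU stay and the worst daily score during the stay. A minimal pandas sketch with hypothetical column names:

```python
# Minimal sketch of the score derivatives defined in the Methods, using
# hypothetical column names; `daily` holds one row per patient per ICU day.
import pandas as pd

daily = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "icu_day":    [1, 2, 3, 1, 2],
    "casus":      [5, 8, 6, 3, 2],
})

derivatives = daily.groupby("patient_id")["casus"].agg(
    max_score="max",    # worst daily score during the whole ICU stay
    mean_score="mean",  # sum of daily values / ICU length of stay (days)
)
print(derivatives)
```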
[ null, null, null, null, null, null, null ]
[ "Background", "Methods", "Statistical analyses", "Results", "Discussion", "Conclusion", "Competing interests", "Authors' contributions" ]
[ "Scoring systems were introduced into intensive care medicine to provide the physician with an objective tool for judging a patient's condition and likely outcome. These scores can be used to estimate the severity of disease and to aid therapeutic decisions. The acute patho-physiological sequelae of cardiopulmonary bypass are transient and many physiologic changes may be masked by multiple system support devices, such as intra-aortic balloon pumps, ventricular assist devices, hemofiltration and mechanical ventilation. The subset of cardiac surgery patients was, therefore, excluded during the development of many general scoring systems, such as the Acute Physiology and Chronic Health Evaluation (APACHE) and the Simplified Acute Physiology Score (SAPS) [1,2]. Nevertheless, many of these scoring systems are used in cardiac surgery intensive care units (ICU) because of the lack of an appropriate risk index for this specific subgroup of patients. In central Europe, the most commonly used postoperative scoring systems in cardiac ICUs are APACHE II [1], SAPS II [2] and the Sequential Organ Failure Assessment (SOFA) [3]. Recently, the Cardiac Surgery Score (CASUS) [4] was introduced to specifically target cardiac surgery patients, but it is not yet widely used. In this study, we compared the mortality prediction of CASUS and the other well-known ICU scoring systems after cardiac surgery. The variables included in these four scores are shown in Table 1.\nSummary of variables included in the different postoperative scoring systems\nCVP: central venous pressure; IABP: intra-aortic balloon pump; VAD: ventricular assist device; COPD: chronic obstructive pulmonary disease; GI: gastrointestinal; GCS: Glasgow coma scale; AIDS: acquired immunodeficiency disease.", "This study involved an evaluation of prospectively collected data from all consecutive adult patients admitted to our ICU after cardiac surgery. Patients admitted between January 1st 2007 and December 31st 2008 were included and the study was approved by the Institutional Review Board of Friedrich Schiller University Hospital (approval no.: 2809-05/10). Only the first admission was considered for patients who were readmitted to the ICU during the study period. Data were collected from the quality control system QUIMS 2.0b (University Hospital of Muenster, Germany) and from the intensive care information system COPRA 5.2 (COPRASYSTEM GmbH, Sasbachwalden, Germany), which is interfaced with patient monitors (Philips IntelliVue MP70, Amsterdam, Netherlands), ventilators (Draeger Evita IV, Luebeck, Germany and Hamilton Galileo, Bonaduz, Swizerland), blood gas analyzing devices (ABL 800Flex Radiometer, Copenhagen, Denmark) and the central laboratories.\nThe attending physician collected the study data of all scores for the first postoperative week. Two assigned medical clerks validated the data collection daily. A senior consultant performed a second periodical validation. Inconsistency between the raters was resolved by consensus. There were no missing data. Outcome was defined as ICU mortality. The scores were calculated using the most abnormal value for each variable per day. The maximum derivative of any scoring system (Max-score) was defined as the worst daily score throughout the whole ICU stay. 
Mean-score was calculated by dividing the sum of all daily values during the ICU stay by the ICU length of stay (ICULOS) in days.\n[SUBTITLE] Statistical analyses [SUBSECTION] Statistical analyses were performed with SPSS software version 18 (SPSS Inc, Chicago, IL). Graphics were drawn using Microsoft Excel software. Continuous scale data are presented as mean ± standard deviation (SD) and were analyzed using the two-tailed Student's t-test for independent samples. The Kolmogorov-Smirnov test showed a normal distribution of the continuous data. A p value of < 0.05 was considered as significant. Calibration was performed using the Hosmer-Lemeshow (HL) test (goodness-of-fit-test) to insure the absence of a significant discrepancy between predicted and observed mortality. Calibration was considered good when there was a low χ2 value and a high p value (>0.05). Discrimination (ability of a scoring model to differentiate between survival and death) was evaluated with receiver-operating-characteristic (ROC) curves; the area under the curve (AUC) indicates the discriminative ability of the scores, i.e., the ability to discriminate survivors from non-survivors. AUCs enable direct comparison of different scoring systems: An AUC of 0.5 (a diagonal line) is equivalent to random chance, AUC >0.7 indicates a moderate prognostic model, and AUC >0.8 (a bulbous curve) indicates a good prognostic model. The overall correct classification (OCC) (the ratio of number of correctly predicted survivors and non-survivors to the total number of patients) values of the scores were calculated. The risk of mortality is given as odds ratios for all scores with 95%-confidence intervals. All statistical analyses were performed from ICU day 1 (n = 2801) (operative day) to day 6 (n = 431 patients) only, in order to obtain accurate statistical results and to avoid a small number of patients. The preoperative logistic and additive EuroSCORE were also statistically tested.\nStatistical analyses were performed with SPSS software version 18 (SPSS Inc, Chicago, IL). Graphics were drawn using Microsoft Excel software. Continuous scale data are presented as mean ± standard deviation (SD) and were analyzed using the two-tailed Student's t-test for independent samples. The Kolmogorov-Smirnov test showed a normal distribution of the continuous data. A p value of < 0.05 was considered as significant. Calibration was performed using the Hosmer-Lemeshow (HL) test (goodness-of-fit-test) to insure the absence of a significant discrepancy between predicted and observed mortality. Calibration was considered good when there was a low χ2 value and a high p value (>0.05). Discrimination (ability of a scoring model to differentiate between survival and death) was evaluated with receiver-operating-characteristic (ROC) curves; the area under the curve (AUC) indicates the discriminative ability of the scores, i.e., the ability to discriminate survivors from non-survivors. AUCs enable direct comparison of different scoring systems: An AUC of 0.5 (a diagonal line) is equivalent to random chance, AUC >0.7 indicates a moderate prognostic model, and AUC >0.8 (a bulbous curve) indicates a good prognostic model. The overall correct classification (OCC) (the ratio of number of correctly predicted survivors and non-survivors to the total number of patients) values of the scores were calculated. The risk of mortality is given as odds ratios for all scores with 95%-confidence intervals. 
All statistical analyses were performed from ICU day 1 (n = 2801) (operative day) to day 6 (n = 431 patients) only, in order to obtain accurate statistical results and to avoid a small number of patients. The preoperative logistic and additive EuroSCORE were also statistically tested.", "Statistical analyses were performed with SPSS software version 18 (SPSS Inc, Chicago, IL). Graphics were drawn using Microsoft Excel software. Continuous scale data are presented as mean ± standard deviation (SD) and were analyzed using the two-tailed Student's t-test for independent samples. The Kolmogorov-Smirnov test showed a normal distribution of the continuous data. A p value of < 0.05 was considered as significant. Calibration was performed using the Hosmer-Lemeshow (HL) test (goodness-of-fit-test) to insure the absence of a significant discrepancy between predicted and observed mortality. Calibration was considered good when there was a low χ2 value and a high p value (>0.05). Discrimination (ability of a scoring model to differentiate between survival and death) was evaluated with receiver-operating-characteristic (ROC) curves; the area under the curve (AUC) indicates the discriminative ability of the scores, i.e., the ability to discriminate survivors from non-survivors. AUCs enable direct comparison of different scoring systems: An AUC of 0.5 (a diagonal line) is equivalent to random chance, AUC >0.7 indicates a moderate prognostic model, and AUC >0.8 (a bulbous curve) indicates a good prognostic model. The overall correct classification (OCC) (the ratio of number of correctly predicted survivors and non-survivors to the total number of patients) values of the scores were calculated. The risk of mortality is given as odds ratios for all scores with 95%-confidence intervals. All statistical analyses were performed from ICU day 1 (n = 2801) (operative day) to day 6 (n = 431 patients) only, in order to obtain accurate statistical results and to avoid a small number of patients. The preoperative logistic and additive EuroSCORE were also statistically tested.", "The study included 2801 patients who were admitted to the ICU over the two-year period; 29.6% (n = 830) were female, and mean age was 66.9 ± 10.7 years (range of 19-89 years). The types of surgical procedures are shown in Table 2. ICULOS was 4.3 ± 6.8 days (range 1-189 days, median 2.0 days, 75th percentile 4.0 days) and ICU mortality was 5.2% (n = 147). The preoperative collected mean additive EuroSCORE was 6.3 ± 3.6 and the mean logistic EuroSCORE was 9.9 ± 12.9 (median 5.3, 75th percentile 11.3).\nType of surgery in the study population\nCABG: Coronary artery bypass grafting.\nTable 3 summarizes the OCC, calibration and discrimination of all four models from the first ICU day to day 6 and for both preoperative EuroSCORE models. There were no significant differences between expected and observed mortality for CASUS, SOFA and the preoperative additive EuroSCORE using the HL-test, but there were differences for the preoperative logistic EuroSCORE (p = 0.01), SAPS II (p < 0.05 on ICU admission and days 2, 3 and 5) and APACHE II (p < 0.05 on days 2 and 3). Figure 1 shows the ROCs of all the postoperative models for the first six ICU days. The AUC for CASUS (≥ 0.90) was greater than those of the other scoring systems on all studied days; the largest AUC was achieved with CASUS on the second ICU day (AUC = 0.97) (Table 3, Figure 2). SOFA performed better than APACHE II and SAPS II in this statistical analysis. 
The OCC was greater for CASUS than for the other scores on all days with the best result on the second ICU day (OCC = 96.9%).\nDay 1-6: Logistic regression, OCC, calibration (HL), discrimination (ROC) for EuroSCORE, CASUS, SOFA, SAPSII, APACHEII\n95%-CI: 95%-confidence interval, Add-Euro: additive EuroSCORE, AUC: Area under ROC curve, HL: Hosmer-Lemeshow, Log-Euro: logistic EuroSCORE, OCC: overall correct classification, OR: Odds ratio for risk of mortality, ROC: receiver operating characteristic.\nDay 1-6: ROC-curves of CASUS, SOFA, APACHE II, SAPS II and their derivatives.\nDay 1-6: Areas under the ROC-curves of CASUS, SOFA, APACHE II and SAPS II.\nTable 4 shows the results for the statistical evaluation of the score-derivatives. There were no significant differences between expected and observed mortality using the HL-test. CASUS again had the best discrimination. In the ROC test, in contrast to the results for the original scores, the derivatives of SAPS II and of APACHE II performed better than the derivatives of SOFA. All derived scores had higher OCCs than the original scores.\nLogistic regression/odds ratio, OCC, calibration (HL), discrimination (ROC) for CASUS, SOFA, SAPSII, APACHEII derivatives\n95%-CI: 95%-confidence interval, AUC: Area under ROC curve, HL: Hosmer-Lemeshow, OCC: overall correct classification, OR: Odds ratio for risk of mortality, ROC: receiver operating characteristic.", "Patients undergoing cardiac surgery show temporary pathophysiological effects related to the heart-lung-machine [5,6] that can influence the values of the postoperative scoring systems [7] and may make them unreliable in this population. These effects include the relatively long mechanical ventilation time needed to stabilize these patients [8,9] and the postoperative sedation that limits the role of the Glasgow Coma Scale (GCS) as a prognostic parameter [10]. Electrolyte- and blood glucose imbalances are also frequent [4]. All these factors are temporary and have a limited effect on prognosis. In addition, most currently used scoring systems ignore some of the parameters that can influence outcomes in these patients. The most common examples of this are the use of intra-aortic balloon pumps (IABP) and ventricular assist devices (VAD), and the presence of postoperative low cardiac output syndrome (LCOS) [5,6,8,11]. In 2005, CASUS [4] was suggested as a specialized cardiac surgery scoring system that took into account the special circumstances encountered in the ICU after cardiac surgery. However, many ICUs are still using the general postoperative risk stratification models for cardiac surgery patients, notably, in central Europe, the SOFA, APACHE II and SAPS II scores. Postoperative risk stratification is increasingly used, especially in cardiac surgery, and we believed it was important to compare these widely used scoring systems with the relatively new model (CASUS) to try and identify the optimal tool in this field.\nThe APACHE II model [1], published in 1985, was developed to simplify the original APACHE model and has become the most frequently used general mortality prediction model. APACHE II has been extensively validated, and despite being the oldest system, it still performs well [12]. More recent versions (APACHE III and IV) have not been widely adopted. All the APACHE models are based on the most abnormal values registered during the first 24 h after ICU admission. 
However, because several studies [13,14] have supported serial daily usage of postoperative risk stratification models, we chose to evaluate APACHE II on all ICU days. In our study, APACHE II had the worst discrimination of the four models studied but its calibration was better than that of SAPS II.\nSAPS II was developed in 1994 [2] based on a European/North American database, which included 13,152 patients. Logistic regression analysis was used to select variables, and for weighting and conversion of the score to give the probability of hospital mortality for ICU patients over the age of 18. Although cardiac surgery patients were originally excluded from the score's target, it is used in many cardiac ICUs. SAPS II has been extensively studied and validated. There seems to be quite convincing evidence of the ability to maintain good discrimination across different populations, but calibration is often poor [15,16]. Our study in cardiac surgery patients, confirmed the poor calibration of SAPS II and its discrimination was worse than that of SOFA and CASUS. SAPS III [17] was introduced in 2005 in an attempt to overcome shortcomings related to different case-mixes and lead-time bias of SAPS II. However, its calibration and discrimination set were shown to vary widely around the world [12] so that many centers in central Europe still use the older version.\nThe SOFA was originally developed in 1996 as a morbidity risk stratification model for patients with sepsis [3]. Because of its good performance and reliability, SOFA is widely used as a scoring model for ICU patients not only for morbidity but also for mortality prediction [7]. In 2003, Ceriani et al. [14] suggested the use of SOFA in cardiac surgery patients. Based on the good results they obtained in 218 patients, they concluded that SOFA was applicable in cardiac surgery without any need for specific modifications. SOFA comprises separate daily scores for respiratory, renal, cardiovascular, central nervous, coagulation, and hepatic systems. The scores can be used in several ways, as individual scores (for each organ), as the sum of scores on a single ICU day, or as the sum of the worst scores during the ICU stay.\nCASUS was developed based on retrospective analyses to identify descriptors of mortality and multiorgan dysfunction in postoperative cardiac surgical patients. It was then evaluated prospectively in 3230 patients in a single center study [4]. The main goal was to develop a scoring model that was specific to this type of patient and had a minimum number of descriptors. CASUS is, therefore, a compact score index with only ten, readily available descriptors. This scoring system has not yet been externally validated in multicenter studies, and accordingly, has not yet gained much popularity.\nThe ideal scoring system should not only be simple and reproducible but also reliable. This reliability can be assessed using calibration and discrimination tests, considered by the European Society of Intensive Care Medicine (ESICM) to be the best methods to validate score systems and prognostic parameters [18]. It has been argued that perfect discrimination is important in order to evaluate an individual patient's risk using a scoring system, whereas for clinical trials or comparison of care between ICUs better calibration is needed [19]. Accordingly, validations of scoring systems in the literature have frequently been achieved using good discrimination tests, although the HL test has often resulted in unreliable calibration. 
The HL test is very sensitive to the size of the study population with large numbers of patients resulting in unreliable calibration [20]. This fact is applicable to study populations larger than 5000 patients [20], which was not the case in our study. In other words, if the HL-test, in studies with more than 5000 patients, is significant this does not necessarily mean that the scoring systems are not useful or are unreliable [20].\nHowever, our study, with a more optimal size of study population, showed that APACHE II and SAPS II are not suitable for use in cardiac surgery patients. CASUS and SOFA had an acceptable performance with the HL-test compared to the other two scores. CASUS was clearly superior in its ability to discriminate between survival and death on all days. This predictive property allows complications to be anticipated in individual patients and should alert residents, especially those with relatively little experience, to ask for help. The OCC (the ratio of correctly predicted number of survivors and non-survivors to the total number of patients) was also better in CASUS than in the other scores. We decided not to compare the different scores using odds ratios, because conclusions from such analyses can be distorted, as the maximum points in the different scoring systems vary significantly. Nevertheless odds ratios are useful tools to estimate the risk of mortality. Hence, for example, results can be influenced by different inotropic regimes or fluid replacement strategies in different hospitals. The assessment of the central nervous system may also affect results because the GCS is affected by sedation, anesthesia and paralysis [10,21,22], and calculation requires clinical evaluation, which may be biased by subjective interpretation [6,10,22]. CASUS is not affected by these problems. Its simple variable, 'neurologic state', can be calculated in less than one minute per patient per day. The parameters included in any scoring system influence its usefulness in different populations of patients. It is, therefore, perhaps not surprising that CASUS, which was specifically constructed for cardiac surgery patients, is superior to general severity systems in this group.\nMean- and max-score derivatives were introduced for SOFA by Moreno et al. [23] in 1999 and Ferreira et al. [24] in 2001. These methods were extended by Ceriani et al. [14] in 2003. We chose to calculate mean- and max-values for all four scores. However, it should be remembered that calculating the mean- and max-values adds some degree of selectivity to the model. The mean-derivative of a model reflects the overall average, whereas the max-derivative highlights the peak of organ dysfunction during the postoperative ICU stay; both are associated with the ICULOS, and thus allow a defined outcome prediction. The mean- and max-derivatives of all scores demonstrated better calibration, discrimination and OCC than the original models.\nSimilar to other studies, we detected a severe decrease in the study population on the third day because uncomplicated cases had been transferred to the general floor (Table 3). It is therefore important that a score is reliable during the first two days so that patients at risk are not discharged too early potentially leading to ICU-readmission and/or prolonged hospital stay, both of which are associated with higher mortality rates [25,26]. 
The good prognostic abilities of SOFA and CASUS in this study suggest they could be used to identify high-risk patients, enabling certain precautions to be put into place, such as daily monitoring of physiological dysfunction [27], and allowing prognoses and therapeutic choices, including withdrawal of therapy, to be discussed and reconsidered [28]. Nevertheless, no scoring system can replace clinical evaluation at a patient's bedside; they can only serve as an objective tool in decision making. Although scoring systems may provide an indication of disease severity and prognosis in individual patients and assist in overall patient assessment along with full clinical evaluation and other available parameters, they are designed for use in groups of patients and should never be the sole basis for therapeutic decisions [29].", "SOFA and CASUS are reliable tools for risk stratification in cardiac surgery patients. CASUS is more accurate than SOFA in mortality prediction. In contrast, APACHE II and SAPS II are not the tools of choice for this group of patients.", "The authors declare that they have neither a financial nor a non-financial competing interest.", "FD: substantial contributions to conception and design; acquisition, analysis and interpretation of data; drafting the manuscript. AB: substantial contributions to conception and design; revising the manuscript critically for important intellectual content. MH: acquisition and analysis of data; revising the manuscript critically for important intellectual content. TB: final approval of the version to be published. MR: revising the manuscript critically for important intellectual content; final approval of the version to be published. TL: substantial contributions to statistical methods and analyses. OB: final approval of the version to be published. KH: substantial contributions to conception and design; interpretation of data; revising the manuscript critically for important intellectual content. All authors read and approved the final manuscript." ]
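For the per-score odds ratios of mortality reported in the tables, a univariable logistic regression of ICU death on the daily score is one way to obtain them. The sketch below is illustrative only (the article used SPSS); `score` and `died` are hypothetical per-patient arrays.

```python
# Illustrative sketch (not the authors' SPSS output): odds ratio of ICU death
# per one-point increase in a daily score, from univariable logistic regression.
import numpy as np
import statsmodels.api as sm

def score_odds_ratio(score, died):
    """Return the odds ratio (with 95% CI) per one-point score increase."""
    X = sm.add_constant(np.asarray(score, dtype=float))
    fit = sm.Logit(np.asarray(died, dtype=int), X).fit(disp=0)
    or_per_point = np.exp(fit.params[1])           # e^beta = odds ratio
    ci_low, ci_high = np.exp(fit.conf_int()[1])    # 95% confidence interval
    return or_per_point, (ci_low, ci_high)
```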
[ null, "methods", null, null, null, null, null, null ]
[]
Weight and height z-scores improve after initiating ART among HIV-infected children in rural Zambia: a cohort study.
21362177
Deficits in growth observed in HIV-infected children in resource-poor settings can be reversed with antiretroviral treatment (ART). However, many of the studies have been conducted in urban areas with older pediatric populations. This study was undertaken to evaluate growth patterns after ART initiation in a young pediatric population in rural Zambia with a high prevalence of undernutrition.
BACKGROUND
Between 2007 and 2009, 193 HIV-infected children were enrolled in a cohort study in Macha, Zambia. Children were evaluated every 3 months, at which time a questionnaire was administered, height and weight were measured, and blood specimens were collected. Weight- and height-for-age z-scores were constructed from WHO growth standards. All children receiving ART at enrollment or initiating ART during the study were included in this analysis. Linear mixed effects models were used to model trajectories of weight and height-for-age z-scores.
METHODS
A high proportion of study children were underweight (59%) and stunted (72%) at treatment initiation. Improvements in both weight- and height-for-age z-scores were observed, with weight-for-age z-scores increasing during the first 6 months of treatment and then stabilizing, and height-for-age z-scores increasing consistently over time. Trajectories of weight-for-age z-scores differed by underweight status at treatment initiation, with children who were underweight experiencing greater increases in z-scores in the first 6 months of treatment. Trajectories of height-for-age z-scores differed by age, with children older than 5 years of age experiencing smaller increases over time.
RESULTS
Some of the effects of HIV on growth were reversed with ART initiation, although a high proportion of children remained underweight and stunted after two years of treatment. Partnerships between treatment and nutrition programs should be explored so that HIV-infected children can receive optimal nutritional support.
CONCLUSIONS
[ "Anthropometry", "Anti-HIV Agents", "Antiretroviral Therapy, Highly Active", "Body Height", "Body Weight", "Child", "Child, Preschool", "Cohort Studies", "Female", "HIV Infections", "Humans", "Infant", "Infant, Newborn", "Male", "Rural Population", "Zambia" ]
3056795
null
null
Methods
[SUBTITLE] Study setting and population [SUBSECTION] The study was conducted at Macha Hospital in a rural area of Southern Province, Zambia. The study setting and population have been described in detail elsewhere [22]. Briefly, Macha Hospital is a district-level referral hospital administered by the Zambian Brethren in Christ Church. Since 2005, Macha Hospital has provided care to over 6000 HIV-infected adults and children through the Government of Zambia's antiretroviral treatment program, with additional support from the President's Emergency Plan for AIDS Relief (PEPFAR) through the non-governmental organization, AidsRelief. Children with a positive HIV serologic test are referred to the clinic from voluntary counseling and testing programs, outpatient clinics and rural health centers. Early infant diagnosis has been available since February 2008. Clinical care is provided without charge by medical doctors and clinical officers, and adherence counseling by nurses and trained counselors. ART is initiated according to WHO guidelines [23,24]. The first-line antiretroviral treatment regimen consists of two nucleoside reverse transcriptase inhibitors (lamivudine (3TC) plus zidovudine (AZT) or stavudine (D4T) or abacavir (ABC)) and a non-nucleoside reverse transcriptase inhibitor (efavirenz (EFV) or nevirapine (NVP)). Pediatric and adult fixed dose combinations of D4T and 3TC are available, as well as of D4T, 3TC and NVP. High energy protein supplements are provided to underweight children. [SUBTITLE] Study procedures [SUBSECTION] Beginning in September 2007, HIV-infected children younger than 16 years seeking HIV care were eligible for enrollment into a cohort study. Written informed consent was obtained from parents or guardians and assent was obtained from children 8-16 years of age. Children were evaluated at study visits approximately every three months, at which time a questionnaire was administered, the child was examined and a blood specimen was obtained. 
At each visit CD4+ T-cell counts and percentages were measured using the Guava Easy CD4 system (Guava Technologies, Inc., Hayward, CA). During each physical examination, height and weight were measured. For children who missed study visits, home visits were attempted to ascertain their status. Information recorded before study enrollment was abstracted from medical records. The study was approved by the Ministry of Health in Zambia, the Research Ethics Committee of the University of Zambia and the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health. For the present analysis, all children enrolled in the study and receiving ART between September 2007 and September 2009 were included. The study sample included children already receiving ART at enrollment and children initiating ART during the study period. [SUBTITLE] Statistical analysis [SUBSECTION] Data were entered in duplicate using EpiInfo (Centers for Disease Control and Prevention) and analyses were conducted in STATA, version 9 (StataCorp LP, College Station, Texas). Weight-for-age z-scores (WAZ) among children younger than 10 years of age and height-for-age (HAZ) z-scores among all children were calculated based on the WHO growth standards [25], and children with z-scores below -2 were defined as underweight and stunted, respectively. Severe immunodeficiency was defined by CD4+ T-cell percentage according to the WHO 2006 treatment guidelines [24]. WAZ and HAZ after ART initiation were assessed among children with at least one post-ART measure. Children were followed until they died, were lost to follow-up, or were administratively censored on September 30, 2009. Children who had not returned for at least 6 months were assumed lost to follow-up. For reporting outcomes at specific time points after ART initiation, measurements were aggregated to within 45 days. Treatment outcomes were evaluated using linear mixed effects models with random intercept, exchangeable correlation structure and robust standard error estimation. As changes in WAZ were not linear, a spline term was added at 7.5 months, the upper window around the 6-month measure. 
Covariates of interest included sex, orphan status, education of the primary caregiver, age, underweight, stunting and severe immunodeficiency at ART initiation. Covariates found to be associated (p < 0.10) or known to be associated with either outcome were included in the models. Differences in trajectories of WAZ and HAZ were assessed by each covariate of interest.
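The random-intercept model with a piecewise-linear spline knot at 7.5 months described above could be sketched as follows. This is not the authors' Stata code; it is a simplified statsmodels illustration with hypothetical column names, and it does not reproduce the exchangeable correlation structure or robust standard errors mentioned in the text.

```python
# Illustrative sketch only (the study used Stata): a random-intercept linear
# mixed model for WAZ with a piecewise-linear spline knot at 7.5 months.
# `data` is a hypothetical long-format DataFrame with one row per visit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def fit_waz_model(data: pd.DataFrame):
    """data needs columns: child_id, months_on_art, waz."""
    df = data.copy()
    # slope change after the knot: 0 before 7.5 months, (t - 7.5) afterwards
    df["months_after_knot"] = np.maximum(df["months_on_art"] - 7.5, 0)
    model = smf.mixedlm(
        "waz ~ months_on_art + months_after_knot",  # fixed effects with spline term
        data=df,
        groups=df["child_id"],                      # random intercept per child
    )
    return model.fit()
```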
null
null
null
null
[ "Background", "Study setting and population", "Study procedures", "Statistical analysis", "Results", "Characteristics of the study population at study enrollment and ART initiation", "Weight-for-age z-scores after ART initiation", "Height-for-age z-scores after ART initiation", "Discussion", "Conclusions", "List of Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Children in sub-Saharan Africa have high levels of undernutrition, exhibiting lower weight- and height-for-age than children in high resource settings. Both of these conditions are exacerbated by HIV infection [1-4], and can be used to determine disease status and monitor treatment response [5]. As implementation of antiretroviral treatment (ART) programs in sub-Saharan Africa has increased [6], many HIV-infected children are benefitting from treatment and are experiencing reductions in morbidity and mortality. Several studies have shown that many of the deficits in growth due to HIV infection are reversed with ART, with children exhibiting consistent improvements in weight-for-age [7-19]. Some studies, but not all [12,13,17], have also reported improvements in height-for-age [7-10,14,16,18,19]. Gains in weight have been found to correlate with treatment response [19].\nMany of these studies have been conducted in urban areas, where food security tends to be higher and levels of undernutrition lower than in surrounding rural areas [20]. In addition, many of these studies were conducted in the first years of program implementation when the majority of children initiating ART were older [21]. Both age [11] and level of undernutrition at treatment initiation [7] have the potential to impact growth trajectories and the effect of ART. Consequently, this study was undertaken to evaluate growth after ART initiation in a young pediatric population in rural Zambia and identify characteristics at ART initiation that influence growth trajectories.", "The study was conducted at Macha Hospital in a rural area of Southern Province, Zambia. The study setting and population have been described in detail elsewhere [22]. Briefly, Macha Hospital is a district-level referral hospital administered by the Zambian Brethren in Christ Church. Since 2005, Macha Hospital has provided care to over 6000 HIV-infected adults and children through the Government of Zambia's antiretroviral treatment program, with additional support from the President's Emergency Plan for AIDS Relief (PEPFAR) through the non-governmental organization, AidsRelief.\nChildren with a positive HIV serologic test are referred to the clinic from voluntary counseling and testing programs, outpatient clinics and rural health centers. Early infant diagnosis has been available since February 2008. Clinical care is provided without charge by medical doctors and clinical officers, and adherence counseling by nurses and trained counselors. ART is initiated according to WHO guidelines [23,24]. The first-line antiretroviral treatment regimen consists of two nucleoside reverse transcriptase inhibitors (lamivudine (3TC) plus zidovudine (AZT) or stavudine (D4T) or abacavir (ABC)) and a non-nucleoside reverse transcriptase inhibitor (efavirenz (EFV) or nevirapine (NVP)). Pediatric and adult fixed dose combinations of D4T and 3TC are available, as well as of D4T, 3TC and NVP. High energy protein supplements are provided to underweight children.", "Beginning in September 2007, HIV-infected children younger than 16 years seeking HIV care were eligible for enrollment into a cohort study. Written informed consent was obtained from parents or guardians and assent was obtained from children 8-16 years of age. Children were evaluated at study visits approximately every three months, at which time a questionnaire was administered, the child was examined and a blood specimen was obtained. 
At each visit CD4+ T-cell counts and percentages were measured using the Guava Easy CD4 system (Guava Technologies, Inc., Hayward, CA). During each physical examination, height and weight were measured. For children who missed study visits, home visits were attempted to ascertain their status. Information recorded before study enrollment was abstracted from medical records. The study was approved by the Ministry of Health in Zambia, the Research Ethics Committee of the University of Zambia and the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.\nFor the present analysis, all children enrolled in the study and receiving ART between September 2007 and September 2009 were included. The study sample included children already receiving ART at enrollment and children initiating ART during the study period.", "Data were entered in duplicate using EpiInfo (Centers for Disease Control and Prevention) and analyses were conducted in STATA, version 9 (StataCorp LP, College Station, Texas). Weight-for-age z-scores (WAZ) among children younger than 10 years of age and height-for-age (HAZ) z-scores among all children were calculated based on the WHO growth standards [25], and children with z-scores below -2 were defined as underweight and stunted, respectively. Severe immunodeficiency was defined by CD4+ T-cell percentage according to the WHO 2006 treatment guidelines [24].\nWAZ and HAZ after ART initiation were assessed among children with at least one post-ART measure. Children were followed until they died, were lost to follow-up, or were administratively censored on September 30, 2009. Children who had not returned for at least 6 months were assumed lost to follow-up. For reporting outcomes at specific time points after ART initiation, measurements were aggregated to within 45 days. Treatment outcomes were evaluated using linear mixed effects models with random intercept, exchangeable correlation structure and robust standard error estimation. As changes in WAZ were not linear, a spline term was added at 7.5 months, the upper window around the 6-month measure. Covariates of interest included sex, orphan status, education of the primary caregiver, age, underweight, stunting and severe immunodeficiency at ART initiation. Covariates found to be (p < 0.10) or known to be associated with either outcome were included in the models. Differences in trajectories of WAZ and HAZ were assessed by each covariate of interest.", "[SUBTITLE] Characteristics of the study population at study enrollment and ART initiation [SUBSECTION] Between September 2007 and 2009, 193 children received ART, with 67 entering the study already receiving ART and 126 initiating ART after study enrollment. Children receiving ART at study enrollment entered a median of 8.3 months (IQR: 2.3, 17.7) after initiating ART, while treatment-naïve children initiated ART a median of 2.0 months (IQR: 0.9, 6.0) after study enrollment. The median follow-up time in the study was 13.1 months (IQR: 5.1, 20.0). The median age was 3.0 years (IQR: 1.6, 6.9) at study enrollment and 51.3% were male (Table 1). The majority of children were cared for by a parent (77.5%) or grandparent (13.6%). Sixty-three percent of primary caregivers had no high school education and 9.5% of children were double orphans. Very few mothers (2.6%) had received drugs to prevent mother-to-child transmission. The median age at ART initiation was 2.9 years (IQR: 1.7, 6.8). 
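A simplified sketch of the z-score and undernutrition definitions used in this analysis. The actual WAZ/HAZ calculation relies on age- and sex-specific WHO growth-standard reference tables (LMS parameters), which are not reproduced here; the numbers below are hypothetical.

```python
# Simplified sketch of the z-score logic; the real WHO growth standards use
# age- and sex-specific LMS reference tables, which are not reproduced here.
def z_score(measurement, ref_median, ref_sd):
    """Generic z-score: how many reference SDs a child lies from the median."""
    return (measurement - ref_median) / ref_sd

def classify(waz, haz):
    """Flag undernutrition as defined in the analysis (z-score below -2)."""
    return {
        "underweight": waz < -2,  # weight-for-age z-score below -2
        "stunted": haz < -2,      # height-for-age z-score below -2
    }

# Hypothetical example: a child 1.2 SD below the reference median weight for age
print(z_score(8.4, 9.6, 1.0))        # -> -1.2
print(classify(waz=-2.3, haz=-3.2))  # -> both flags True
```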
The median WAZ and HAZ at ART initiation were -2.3 (IQR: -3.5, -1.4; 58.9% underweight) and -3.2 (IQR: -4.3, -1.9; 71.9% stunted), respectively. The median CD4+ T-cell percentage at ART initiation was 16.3% (IQR: 11.5, 20.1; 59.8% severe immunodeficiency). Children who entered the study already receiving ART were significantly older and more likely to be male. In addition, they were significantly more likely to be underweight and have a lower CD4+ T-cell percentage at ART initiation.\nCharacteristics at study enrollment and ART initiation of HIV-infected children receiving antiretroviral therapy\nART: antiretroviral treatment; IQR: interquartile range; PMTCT: prevention of mother-to-child transmission; WAZ: weight for age z-score; HAZ: height for age z-score;\naamong children <10 years of age\nbdefined by age according to the 2006 WHO guidelines\nThe initial ART regimen was D4T/3TC/NVP for 40.5% of children. Other regimens included AZT/3TC/EFV (24.3%), D4T/3TC/EFV (18.9%), and AZT/3TC/NVP (13.5%). An additional three children received a regimen including ABC (2.0%) and one child received a regimen including emtricitabine and tenofovir (0.5%).\nChildren on ART experienced good immunologic recovery, with median CD4+ T-cell percentage increasing to 28.9% (IQR: 22.6, 37.2), 32.4% (IQR: 25.1, 39.1), and 34.2% (IQR: 30.6, 38.7) 6, 12 and 24 months after ART initiation, respectively.\n[SUBTITLE] Weight-for-age z-scores after ART initiation [SUBSECTION] For WAZ, 128 children younger than 10 years were included in the analysis, of whom 4 (3.1%) died and 4 (3.1%) were lost to follow-up after ART initiation. WAZ increased during the first 6 months of treatment and then stabilized. Mean WAZ increased from -2.4 at treatment initiation to -1.3, -1.5, -1.4 and -1.7 at 6, 12, 18 and 24 months on ART, respectively (Figure 1A). Consequently, the proportion of underweight children decreased from 59.7% at treatment initiation to 28.8%, 35.3%, 26.5% and 45.0% at 6, 12, 18, and 24 months on ART, respectively. Results of the crude longitudinal models indicated that WAZ increased by 0.12 units per month in the first 6 months of ART, and remained stable thereafter (Table 2). Male sex, double orphan status and older age at ART initiation were significantly associated with lower WAZ. Severe immunodeficiency at ART initiation was marginally associated with lower WAZ. Differing patterns of improvement were found only by WAZ at ART initiation, with underweight children experiencing greater increases in WAZ in the first 6 months of ART.\nMean (95% CI) weight-for-age (A) and height-for-age (B) z-scores by time since ART initiation. *sample sizes are smaller due to missing data at ART initiation\nResults of the longitudinal data analysis for WAZ and HAZ after ART initiation\nWAZ: weight for age z-score; HAZ: height for age z-score; SE: standard error; ART: antiretroviral treatment\naamong children < 10 years of age\nbdefined by age according to WHO 2006 guidelines\ncmeasured at ART initiation\ndWAZ model adjusted for sex, double orphan status, severe immunodeficiency and age at ART initiation; HAZ model adjusted for sex, severe immunodeficiency and underweight at ART initiation\n[SUBTITLE] Height-for-age z-scores after ART initiation [SUBSECTION] For HAZ, 152 children were included in the analysis, of whom 4 (2.6%) died and 4 (2.6%) were lost to follow-up after ART initiation. A linear increase in HAZ was observed throughout treatment and mean HAZ increased from -3.5 at treatment initiation to -3.1, -2.6, -2.5, and -2.1 at 6, 12, 18 and 24 months on ART, respectively (Figure 1B). Consequently the proportion of stunted children decreased from 71.6% at treatment initiation to 80.5%, 66.7%, 60.5%, and 46.4% at 6, 12, 18 and 24 months on ART, respectively. Results of the crude longitudinal model indicated that HAZ increased by 0.053 units per month after ART initiation (Table 2). Underweight children at ART initiation had significantly lower HAZ, while older children and females had significantly higher HAZ throughout treatment. Severe immunodeficiency at ART initiation was marginally associated with lower HAZ. Significant differences in the trajectories of HAZ were found only by age at ART initiation, with children older than 5 years at initiation experiencing significantly smaller increases in HAZ per month compared to children younger than 2 years of age.",
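The underweight and stunting percentages quoted above follow the standard anthropometric convention used in this paper, a z-score below -2 relative to the WHO growth standards. As a rough illustration of how a single measurement is converted to a z-score and then classified, the Python sketch below applies the basic LMS formula; the LMS reference values shown are made-up placeholders rather than actual WHO standard values, and a real analysis should use the published WHO reference tables or a dedicated package.

```python
import math


def lms_zscore(measurement, L, M, S):
    """Basic LMS z-score: z = ((x/M)**L - 1) / (L*S) when L != 0.
    (The WHO standards apply further adjustments for |z| > 3, omitted here.)"""
    if L == 0:
        return math.log(measurement / M) / S
    return ((measurement / M) ** L - 1.0) / (L * S)


# Placeholder LMS values for illustration only (NOT real WHO reference data).
L_ref, M_ref, S_ref = -0.35, 12.9, 0.11      # hypothetical weight-for-age reference
waz = lms_zscore(10.2, L_ref, M_ref, S_ref)  # hypothetical child weighing 10.2 kg

print(f"WAZ = {waz:.2f}, classified underweight: {waz < -2}")
```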
"In this study of young HIV-infected children in rural Zambia with good immunologic recovery on ART, both weight and height-for-age improved after initiation of ART. Age and undernutrition at ART initiation impacted both WAZ and HAZ, and differences in the trajectories of WAZ and HAZ were associated with undernutrition and age at ART initiation, respectively.\nImprovements in WAZ and HAZ among HIV-infected children treated with ART were found in other studies throughout sub-Saharan Africa [7-18]. The trajectories for WAZ and HAZ after ART initiation, however, differed in this study. WAZ improved for the first 6 months and then stabilized with only minimal improvements thereafter, whereas HAZ consistently improved over time. Similar trajectories for WAZ and HAZ were reported in one study in South Africa [10], while other studies found linear improvements in WAZ during the first 24 months of treatment [11,26]. Reasons for these differences are unknown but may be due to the higher levels of undernutrition observed in this rural population [26]. Over half of the study population was underweight and three-quarters stunted at ART initiation. Differences in trajectories were found between children who were underweight and those with normal weight, with greater weight improvements in the first 6 months for children underweight at ART initiation. A more consistent increase was found for children with normal weight. Consequently, it is possible that this group of rural children experienced different trajectories than the urban populations in previous studies.\nDue to the relatively young age of the study population, the impact of age at ART initiation on both WAZ and HAZ could be evaluated. Older age was associated with both WAZ and HAZ at ART initiation; however, only age impacted the trajectories for HAZ, with children older than 5 years experiencing less improvement. In other studies, HAZ did not consistently improve, with some studies finding no significant increases [12,13,17]. Discrepancies in HAZ may be due to the different age compositions of the study populations, as many studies were conducted among children with an average age older than 5 years [21]. 
As more infants and young children are diagnosed and started on ART, further evaluation of HAZ over time will be needed.\nThis study was limited by the small sample size beyond two years on ART, and the small number of children with measures available at ART initiation (Figure 1).The role of food supplementation in achieving weight and height gains in this study is unknown, as the criteria used for eligibility were not consistent across clinic staff and children did not receive supplements at every visit. In addition, no information was collected on the child's diet or on comorbidities and therefore the contribution of these factors to growth could not be assessed.", "This study demonstrated that rural Zambian children experienced significant improvements in both weight and height after starting ART. However, even after two years of ART approximately 25% and 50% of children remained underweight and stunted, higher than observed among HIV-negative children in the same region [27]. Consequently, successful treatment with ART was not able to fully reverse the effects of HIV on growth. Partnerships between HIV treatment and nutrition programs should be explored so that children receive an integrated care and treatment approach that includes nutritional support. Further evaluation of the impact of food supplementation on growth after ART initiation is needed.", "3TC: lamivudine; ABC: abacavir; ART: antiretroviral therapy; AZT: zidovudine; D4T: stavudine; EFV: efavirenz; HAZ: height-for-age Z-score; IQR: interquartile range; NVP: nevirapine; PMTCT: prevention of mother-to-child transmission; SE: standard error; WAZ: weight-for-age Z-score", "The authors declare that they have no competing interests.", "CGS conceived of the study, performed the data analysis and participated in the writing of the manuscript. JHvD supervised the implementation of the study in Zambia and participated in the writing of the manuscript. BM was responsible for study recruitment and implementation, and reviewed the final manuscript. FH was responsible for study recruitment and implementation, and reviewed the final manuscript. PS was responsible for study recruitment and implementation, and reviewed the final manuscript. PET supervised the implementation of the study in Zambia and reviewed the final manuscript. WJM supervised the implementation of the study in the US and participated in the writing of the manuscript. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/54/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study setting and population", "Study procedures", "Statistical analysis", "Results", "Characteristics of the study population at study enrollment and ART initiation", "Weight-for-age z-scores after ART initiation", "Height-for-age z-scores after ART initiation", "Discussion", "Conclusions", "List of Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Children in sub-Saharan Africa have high levels of undernutrition, exhibiting lower weight- and height-for-age than children in high resource settings. Both of these conditions are exacerbated by HIV infection [1-4], and can be used to determine disease status and monitor treatment response [5]. As implementation of antiretroviral treatment (ART) programs in sub-Saharan Africa has increased [6], many HIV-infected children are benefitting from treatment and are experiencing reductions in morbidity and mortality. Several studies have shown that many of the deficits in growth due to HIV infection are reversed with ART, with children exhibiting consistent improvements in weight-for-age [7-19]. Some studies, but not all [12,13,17], have also reported improvements in height-for-age [7-10,14,16,18,19]. Gains in weight have been found to correlate with treatment response [19].\nMany of these studies have been conducted in urban areas, where food security tends to be higher and levels of undernutrition lower than in surrounding rural areas [20]. In addition, many of these studies were conducted in the first years of program implementation when the majority of children initiating ART were older [21]. Both age [11] and level of undernutrition at treatment initiation [7] have the potential to impact growth trajectories and the effect of ART. Consequently, this study was undertaken to evaluate growth after ART initiation in a young pediatric population in rural Zambia and identify characteristics at ART initiation that influence growth trajectories.", "[SUBTITLE] Study setting and population [SUBSECTION] The study was conducted at Macha Hospital in a rural area of Southern Province, Zambia. The study setting and population have been described in detail elsewhere [22]. Briefly, Macha Hospital is a district-level referral hospital administered by the Zambian Brethren in Christ Church. Since 2005, Macha Hospital has provided care to over 6000 HIV-infected adults and children through the Government of Zambia's antiretroviral treatment program, with additional support from the President's Emergency Plan for AIDS Relief (PEPFAR) through the non-governmental organization, AidsRelief.\nChildren with a positive HIV serologic test are referred to the clinic from voluntary counseling and testing programs, outpatient clinics and rural health centers. Early infant diagnosis has been available since February 2008. Clinical care is provided without charge by medical doctors and clinical officers, and adherence counseling by nurses and trained counselors. ART is initiated according to WHO guidelines [23,24]. The first-line antiretroviral treatment regimen consists of two nucleoside reverse transcriptase inhibitors (lamivudine (3TC) plus zidovudine (AZT) or stavudine (D4T) or abacavir (ABC)) and a non-nucleoside reverse transcriptase inhibitor (efavirenz (EFV) or nevirapine (NVP)). Pediatric and adult fixed dose combinations of D4T and 3TC are available, as well as of D4T, 3TC and NVP. High energy protein supplements are provided to underweight children.\nThe study was conducted at Macha Hospital in a rural area of Southern Province, Zambia. The study setting and population have been described in detail elsewhere [22]. Briefly, Macha Hospital is a district-level referral hospital administered by the Zambian Brethren in Christ Church. 
Since 2005, Macha Hospital has provided care to over 6000 HIV-infected adults and children through the Government of Zambia's antiretroviral treatment program, with additional support from the President's Emergency Plan for AIDS Relief (PEPFAR) through the non-governmental organization, AidsRelief.\nChildren with a positive HIV serologic test are referred to the clinic from voluntary counseling and testing programs, outpatient clinics and rural health centers. Early infant diagnosis has been available since February 2008. Clinical care is provided without charge by medical doctors and clinical officers, and adherence counseling by nurses and trained counselors. ART is initiated according to WHO guidelines [23,24]. The first-line antiretroviral treatment regimen consists of two nucleoside reverse transcriptase inhibitors (lamivudine (3TC) plus zidovudine (AZT) or stavudine (D4T) or abacavir (ABC)) and a non-nucleoside reverse transcriptase inhibitor (efavirenz (EFV) or nevirapine (NVP)). Pediatric and adult fixed dose combinations of D4T and 3TC are available, as well as of D4T, 3TC and NVP. High energy protein supplements are provided to underweight children.\n[SUBTITLE] Study procedures [SUBSECTION] Beginning in September 2007, HIV-infected children younger than 16 years seeking HIV care were eligible for enrollment into a cohort study. Written informed consent was obtained from parents or guardians and assent was obtained from children 8-16 years of age. Children were evaluated at study visits approximately every three months, at which time a questionnaire was administered, the child was examined and a blood specimen was obtained. At each visit CD4+ T-cell counts and percentages were measured using the Guava Easy CD4 system (Guava Technologies, Inc., Hayward, CA). During each physical examination, height and weight were measured. For children who missed study visits, home visits were attempted to ascertain their status. Information recorded before study enrollment was abstracted from medical records. The study was approved by the Ministry of Health in Zambia, the Research Ethics Committee of the University of Zambia and the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.\nFor the present analysis, all children enrolled in the study and receiving ART between September 2007 and September 2009 were included. The study sample included children already receiving ART at enrollment and children initiating ART during the study period.\nBeginning in September 2007, HIV-infected children younger than 16 years seeking HIV care were eligible for enrollment into a cohort study. Written informed consent was obtained from parents or guardians and assent was obtained from children 8-16 years of age. Children were evaluated at study visits approximately every three months, at which time a questionnaire was administered, the child was examined and a blood specimen was obtained. At each visit CD4+ T-cell counts and percentages were measured using the Guava Easy CD4 system (Guava Technologies, Inc., Hayward, CA). During each physical examination, height and weight were measured. For children who missed study visits, home visits were attempted to ascertain their status. Information recorded before study enrollment was abstracted from medical records. 
The study was approved by the Ministry of Health in Zambia, the Research Ethics Committee of the University of Zambia and the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.\nFor the present analysis, all children enrolled in the study and receiving ART between September 2007 and September 2009 were included. The study sample included children already receiving ART at enrollment and children initiating ART during the study period.\n[SUBTITLE] Statistical analysis [SUBSECTION] Data were entered in duplicate using EpiInfo (Centers for Disease Control and Prevention) and analyses were conducted in STATA, version 9 (StataCorp LP, College Station, Texas). Weight-for-age z-scores (WAZ) among children younger than 10 years of age and height-for-age (HAZ) z-scores among all children were calculated based on the WHO growth standards [25], and children with z-scores below -2 were defined as underweight and stunted, respectively. Severe immunodeficiency was defined by CD4+ T-cell percentage according to the WHO 2006 treatment guidelines [24].\nWAZ and HAZ after ART initiation were assessed among children with at least one post-ART measure. Children were followed until they died, were lost to follow-up, or were administratively censored on September 30, 2009. Children who had not returned for at least 6 months were assumed lost to follow-up. For reporting outcomes at specific time points after ART initiation, measurements were aggregated to within 45 days. Treatment outcomes were evaluated using linear mixed effects models with random intercept, exchangeable correlation structure and robust standard error estimation. As changes in WAZ were not linear, a spline term was added at 7.5 months, the upper window around the 6-month measure. Covariates of interest included sex, orphan status, education of the primary caregiver, age, underweight, stunting and severe immunodeficiency at ART initiation. Covariates found to be (p < 0.10) or known to be associated with either outcome were included in the models. Differences in trajectories of WAZ and HAZ were assessed by each covariate of interest.\nData were entered in duplicate using EpiInfo (Centers for Disease Control and Prevention) and analyses were conducted in STATA, version 9 (StataCorp LP, College Station, Texas). Weight-for-age z-scores (WAZ) among children younger than 10 years of age and height-for-age (HAZ) z-scores among all children were calculated based on the WHO growth standards [25], and children with z-scores below -2 were defined as underweight and stunted, respectively. Severe immunodeficiency was defined by CD4+ T-cell percentage according to the WHO 2006 treatment guidelines [24].\nWAZ and HAZ after ART initiation were assessed among children with at least one post-ART measure. Children were followed until they died, were lost to follow-up, or were administratively censored on September 30, 2009. Children who had not returned for at least 6 months were assumed lost to follow-up. For reporting outcomes at specific time points after ART initiation, measurements were aggregated to within 45 days. Treatment outcomes were evaluated using linear mixed effects models with random intercept, exchangeable correlation structure and robust standard error estimation. As changes in WAZ were not linear, a spline term was added at 7.5 months, the upper window around the 6-month measure. 
Covariates of interest included sex, orphan status, education of the primary caregiver, age, underweight, stunting and severe immunodeficiency at ART initiation. Covariates found to be (p < 0.10) or known to be associated with either outcome were included in the models. Differences in trajectories of WAZ and HAZ were assessed by each covariate of interest.", "The study was conducted at Macha Hospital in a rural area of Southern Province, Zambia. The study setting and population have been described in detail elsewhere [22]. Briefly, Macha Hospital is a district-level referral hospital administered by the Zambian Brethren in Christ Church. Since 2005, Macha Hospital has provided care to over 6000 HIV-infected adults and children through the Government of Zambia's antiretroviral treatment program, with additional support from the President's Emergency Plan for AIDS Relief (PEPFAR) through the non-governmental organization, AidsRelief.\nChildren with a positive HIV serologic test are referred to the clinic from voluntary counseling and testing programs, outpatient clinics and rural health centers. Early infant diagnosis has been available since February 2008. Clinical care is provided without charge by medical doctors and clinical officers, and adherence counseling by nurses and trained counselors. ART is initiated according to WHO guidelines [23,24]. The first-line antiretroviral treatment regimen consists of two nucleoside reverse transcriptase inhibitors (lamivudine (3TC) plus zidovudine (AZT) or stavudine (D4T) or abacavir (ABC)) and a non-nucleoside reverse transcriptase inhibitor (efavirenz (EFV) or nevirapine (NVP)). Pediatric and adult fixed dose combinations of D4T and 3TC are available, as well as of D4T, 3TC and NVP. High energy protein supplements are provided to underweight children.", "Beginning in September 2007, HIV-infected children younger than 16 years seeking HIV care were eligible for enrollment into a cohort study. Written informed consent was obtained from parents or guardians and assent was obtained from children 8-16 years of age. Children were evaluated at study visits approximately every three months, at which time a questionnaire was administered, the child was examined and a blood specimen was obtained. At each visit CD4+ T-cell counts and percentages were measured using the Guava Easy CD4 system (Guava Technologies, Inc., Hayward, CA). During each physical examination, height and weight were measured. For children who missed study visits, home visits were attempted to ascertain their status. Information recorded before study enrollment was abstracted from medical records. The study was approved by the Ministry of Health in Zambia, the Research Ethics Committee of the University of Zambia and the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.\nFor the present analysis, all children enrolled in the study and receiving ART between September 2007 and September 2009 were included. The study sample included children already receiving ART at enrollment and children initiating ART during the study period.", "Data were entered in duplicate using EpiInfo (Centers for Disease Control and Prevention) and analyses were conducted in STATA, version 9 (StataCorp LP, College Station, Texas). 
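The longitudinal model described just above, a random-intercept linear mixed model with a linear spline knot at 7.5 months, can be sketched as follows. This is not the authors' Stata code: it is a minimal Python/statsmodels approximation under assumed data, the file and column names are hypothetical, and the robust standard errors and covariate adjustments of the original analysis are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child-visit, with columns
# child_id, months_on_art and waz (names are assumptions, not from the paper).
df = pd.read_csv("growth_visits.csv")

# Linear spline in time with a knot at 7.5 months, so the slope is allowed
# to change after the 6-month visit window.
knot = 7.5
df["t_early"] = np.minimum(df["months_on_art"], knot)
df["t_late"] = np.maximum(df["months_on_art"] - knot, 0.0)

# Random-intercept linear mixed model (one intercept per child).
model = smf.mixedlm("waz ~ t_early + t_late", data=df, groups=df["child_id"])
result = model.fit()
print(result.summary())
```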
"[SUBTITLE] Characteristics of the study population at study enrollment and ART initiation [SUBSECTION] Between September 2007 and 2009, 193 children received ART, with 67 entering the study already receiving ART and 126 initiating ART after study enrollment. Children receiving ART at study enrollment entered a median of 8.3 months (IQR: 2.3, 17.7) after initiating ART, while treatment-naïve children initiated ART a median of 2.0 months (IQR: 0.9, 6.0) after study enrollment. The median follow-up time in the study was 13.1 months (IQR: 5.1, 20.0). The median age was 3.0 years (IQR: 1.6, 6.9) at study enrollment and 51.3% were male (Table 1). The majority of children were cared for by a parent (77.5%) or grandparent (13.6%). Sixty-three percent of primary caregivers had no high school education and 9.5% of children were double orphans. Very few mothers (2.6%) had received drugs to prevent mother-to-child transmission. The median age at ART initiation was 2.9 years (IQR: 1.7, 6.8). The median WAZ and HAZ at ART initiation were -2.3 (IQR: -3.5, -1.4; 58.9% underweight) and -3.2 (IQR: -4.3, -1.9; 71.9% stunted), respectively. The median CD4+ T-cell percentage at ART initiation was 16.3% (IQR: 11.5, 20.1; 59.8% severe immunodeficiency). Children who entered the study already receiving ART were significantly older and more likely to be male. In addition, they were significantly more likely to be underweight and have a lower CD4+ T-cell percentage at ART initiation.\nCharacteristics at study enrollment and ART initiation of HIV-infected children receiving antiretroviral therapy\nART: antiretroviral treatment; IQR: interquartile range; PMTCT: prevention of mother-to-child transmission; WAZ: weight for age z-score; HAZ: height for age z-score;\naamong children <10 years of age\nbdefined by age according to the 2006 WHO guidelines\nThe initial ART regimen was D4T/3TC/NVP for 40.5% of children. Other regimens included AZT/3TC/EFV (24.3%), D4T/3TC/EFV (18.9%), and AZT/3TC/NVP (13.5%). An additional three children received a regimen including ABC (2.0%) and one child received a regimen including emtricitabine and tenofovir (0.5%).\nChildren on ART experienced good immunologic recovery, with median CD4+ T-cell percentage increasing to 28.9% (IQR: 22.6, 37.2), 32.4% (IQR: 25.1, 39.1), and 34.2% (IQR: 30.6, 38.7) 6, 12 and 24 months after ART initiation, respectively.\n[SUBTITLE] Weight-for-age z-scores after ART initiation [SUBSECTION] For WAZ, 128 children younger than 10 years were included in the analysis, of whom 4 (3.1%) died and 4 (3.1%) were lost to follow-up after ART initiation. WAZ increased during the first 6 months of treatment and then stabilized. Mean WAZ increased from -2.4 at treatment initiation to -1.3, -1.5, -1.4 and -1.7 at 6, 12, 18 and 24 months on ART, respectively (Figure 1A). Consequently, the proportion of underweight children decreased from 59.7% at treatment initiation to 28.8%, 35.3%, 26.5% and 45.0% at 6, 12, 18, and 24 months on ART, respectively. Results of the crude longitudinal models indicated that WAZ increased by 0.12 units per month in the first 6 months of ART, and remained stable thereafter (Table 2). Male sex, double orphan status and older age at ART initiation were significantly associated with lower WAZ. Severe immunodeficiency at ART initiation was marginally associated with lower WAZ. Differing patterns of improvement were found only by WAZ at ART initiation, with underweight children experiencing greater increases in WAZ in the first 6 months of ART.\nMean (95% CI) weight-for-age (A) and height-for-age (B) z-scores by time since ART initiation. *sample sizes are smaller due to missing data at ART initiation\nResults of the longitudinal data analysis for WAZ and HAZ after ART initiation\nWAZ: weight for age z-score; HAZ: height for age z-score; SE: standard error; ART: antiretroviral treatment\naamong children < 10 years of age\nbdefined by age according to WHO 2006 guidelines\ncmeasured at ART initiation\ndWAZ model adjusted for sex, double orphan status, severe immunodeficiency and age at ART initiation; HAZ model adjusted for sex, severe immunodeficiency and underweight at ART initiation\n[SUBTITLE] Height-for-age z-scores after ART initiation [SUBSECTION] For HAZ, 152 children were included in the analysis, of whom 4 (2.6%) died and 4 (2.6%) were lost to follow-up after ART initiation. A linear increase in HAZ was observed throughout treatment and mean HAZ increased from -3.5 at treatment initiation to -3.1, -2.6, -2.5, and -2.1 at 6, 12, 18 and 24 months on ART, respectively (Figure 1B). Consequently the proportion of stunted children decreased from 71.6% at treatment initiation to 80.5%, 66.7%, 60.5%, and 46.4% at 6, 12, 18 and 24 months on ART, respectively. Results of the crude longitudinal model indicated that HAZ increased by 0.053 units per month after ART initiation (Table 2). Underweight children at ART initiation had significantly lower HAZ, while older children and females had significantly higher HAZ throughout treatment. Severe immunodeficiency at ART initiation was marginally associated with lower HAZ. Significant differences in the trajectories of HAZ were found only by age at ART initiation, with children older than 5 years at initiation experiencing significantly smaller increases in HAZ per month compared to children younger than 2 years of age.",
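Taken at face value, the crude slopes reported above (about 0.12 WAZ units per month for roughly the first 6 months and no further change thereafter, and about 0.053 HAZ units per month throughout follow-up) describe a simple piecewise-linear mean trajectory. The snippet below only re-expresses those published coefficients as change from the value at ART initiation; it is illustrative, approximates the model's 7.5-month knot with 6 months, and does not reproduce the fitted models, their covariate adjustments or their uncertainty.

```python
def waz_change(months_on_art, early_slope=0.12, knot=6.0):
    """Approximate change in mean WAZ since ART initiation implied by the
    reported crude slope (0.12 units/month for ~6 months, then flat)."""
    return early_slope * min(months_on_art, knot)


def haz_change(months_on_art, slope=0.053):
    """Approximate change in mean HAZ: a single slope of 0.053 units/month."""
    return slope * months_on_art


for m in (3, 6, 12, 24):
    print(f"{m:>2} months on ART: dWAZ ~ +{waz_change(m):.2f}, dHAZ ~ +{haz_change(m):.2f}")
```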
"In this study of young HIV-infected children in rural Zambia with good immunologic recovery on ART, both weight and height-for-age improved after initiation of ART. 
Age and undernutrition at ART initiation impacted both WAZ and HAZ, and differences in the trajectories of WAZ and HAZ were associated with undernutrition and age at ART initiation, respectively.\nImprovements in WAZ and HAZ among HIV-infected children treated with ART were found in other studies throughout sub-Saharan Africa [7-18]. The trajectories for WAZ and HAZ after ART initiation, however, differed in this study. WAZ improved for the first 6 months and then stabilized with only minimal improvements thereafter, whereas HAZ consistently improved over time. Similar trajectories for WAZ and HAZ were reported in one study in South Africa [10], while other studies found linear improvements in WAZ during the first 24 months of treatment [11,26]. Reasons for these differences are unknown but may be due to the higher levels of undernutrition observed in this rural population [26]. Over half of the study population was underweight and three-quarters stunted at ART initiation. Differences in trajectories were found between children who were underweight and those with normal weight, with greater weight improvements in the first 6 months for children underweight at ART initiation. A more consistent increase was found for children with normal weight. Consequently, it is possible that this group of rural children experienced different trajectories than the urban populations in previous studies.\nDue to the relatively young age of the study population, the impact of age at ART initiation on both WAZ and HAZ could be evaluated. Older age was associated with both WAZ and HAZ at ART initiation; however, only age impacted the trajectories for HAZ, with children older than 5 years experiencing less improvement. In other studies, HAZ did not consistently improve, with some studies finding no significant increases [12,13,17]. Discrepancies in HAZ may be due to the different age compositions of the study populations, as many studies were conducted among children with an average age older than 5 years [21]. As more infants and young children are diagnosed and started on ART, further evaluation of HAZ over time will be needed.\nThis study was limited by the small sample size beyond two years on ART, and the small number of children with measures available at ART initiation (Figure 1).The role of food supplementation in achieving weight and height gains in this study is unknown, as the criteria used for eligibility were not consistent across clinic staff and children did not receive supplements at every visit. In addition, no information was collected on the child's diet or on comorbidities and therefore the contribution of these factors to growth could not be assessed.", "This study demonstrated that rural Zambian children experienced significant improvements in both weight and height after starting ART. However, even after two years of ART approximately 25% and 50% of children remained underweight and stunted, higher than observed among HIV-negative children in the same region [27]. Consequently, successful treatment with ART was not able to fully reverse the effects of HIV on growth. Partnerships between HIV treatment and nutrition programs should be explored so that children receive an integrated care and treatment approach that includes nutritional support. 
Further evaluation of the impact of food supplementation on growth after ART initiation is needed.", "3TC: lamivudine; ABC: abacavir; ART: antiretroviral therapy; AZT: zidovudine; D4T: stavudine; EFV: efavirenz; HAZ: height-for-age Z-score; IQR: interquartile range; NVP: nevirapine; PMTCT: prevention of mother-to-child transmission; SE: standard error; WAZ: weight-for-age Z-score", "The authors declare that they have no competing interests.", "CGS conceived of the study, performed the data analysis and participated in the writing of the manuscript. JHvD supervised the implementation of the study in Zambia and participated in the writing of the manuscript. BM was responsible for study recruitment and implementation, and reviewed the final manuscript. FH was responsible for study recruitment and implementation, and reviewed the final manuscript. PS was responsible for study recruitment and implementation, and reviewed the final manuscript. PET supervised the implementation of the study in Zambia and reviewed the final manuscript. WJM supervised the implementation of the study in the US and participated in the writing of the manuscript. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/54/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Endosialin expression in relation to clinicopathological and biological variables in rectal cancers with a Swedish clinical trial of preoperative radiotherapy.
21362178
The importance of changes in tumour-associated stroma for tumour initiation and progression has been established. Endosialin is expressed in fibroblasts and pericytes of blood vessels in several types of tumours and is involved in the progression of colorectal cancer. To determine whether endosialin was related to radiotherapy (RT) response and to clinicopathological and biological variables, we investigated endosialin expression in rectal cancers from patients who participated in a Swedish clinical trial of preoperative RT.
BACKGROUND
Endosialin was immunohistochemically examined in normal mucosa, including distant (n = 72) and adjacent (n = 112) normal mucosa, and primary tumours (n = 135). Seventy-three of 135 patients received surgery alone and 62 received additional preoperative RT.
METHODS
Endosialin expression in the stroma increased from normal mucosa to tumour (p < 0.0001) in both the RT and non-RT groups. In the RT group, endosialin expression in the stroma was positively associated with expression of cyclooxygenase-2 (Cox-2) (p = 0.03), p73 (p = 0.01) and phosphatase of regenerating liver (PRL) (p = 0.002). Endosialin expression in tumour cells, in both the RT group (p = 0.01) and the non-RT group (p = 0.06), was observed more often in tumours with an infiltrative growth pattern than in tumours with an expansive growth pattern. In the RT group, endosialin expression in tumour cells was positively related to PRL expression (p = 0.02), whereas in the non-RT group, endosialin expression in tumour cells was positively related to p73 expression (p = 0.01).
RESULTS
Endosialin expression may be involved in the progression of rectal cancers, and was related to Cox-2, p73 and PRL expression. However, a direct relationship between endosialin expression and RT responses in patients was not found.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Antigens, CD", "Antigens, Neoplasm", "Biomarkers, Tumor", "Carcinoma", "Clinical Trials as Topic", "Disease Progression", "Female", "Humans", "Male", "Middle Aged", "Preoperative Period", "Prognosis", "Radiotherapy, Adjuvant", "Rectal Neoplasms", "Sweden" ]
3056834
null
null
Methods
[SUBTITLE] Patients [SUBSECTION] Endosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. The distant normal mucosal samples were taken from a resected distant margin that was histologically free from tumours, and adjacent normal mucosa was adjacent to the primary tumour on the same histologic section. The study was approved by the ethical committee of the Faculty of Health Sciences, Universities of Linköping and Uppsala, Sweden. All participants gave informed consents. Among 135 patients, 73 patients received surgery alone and 62 received additional preoperative RT. A total of 25 Gy of radiation was administered in five fractions before surgery, over a median of 8 days (range, 6-15 days). Surgery was performed at a median of 3 days (range, 0-8 days) after RT. None of the patients received adjuvant chemotherapy before or after surgery. The mean age of the patients was 67 years (range, 36-85 years; median, 69 years). All patients were included in the follow-up, with mean and median follow-up periods of 86 and 75 months (range, 0-193 months), respectively. Follow-up sessions were scheduled at the end of 2004, by which time 49 patients had died from rectal cancer. The growth pattern of the tumours was classified (by two pathologists) as either expansive or infiltrative pattern, based on their patterns of growth and invasiveness. Tumours were graded as well, moderately or poorly differentiated. Other patient and tumour characteristics are presented in Table 1. No statistically significant differences between the non-RT and RT groups regarding gender, age, TNM stage, grade of differentiation, and number of other tumours, surgical type, resection margin and mean distance to the anal verge were found (p > 0.05). Characteristics of patients and rectal cancers * Other colorectal cancer and/or other type of tumour besides the present rectal cancer # For T-test, and the other p values for X2 test The data for expression of cyclooxygenase-2 (Cox-2) [16], p73 [17] and phosphates of regenerating liver (PRL) [18] determined by immunohistochemistry, were obtained from the previous studies carried out at our laboratory. Endosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. The distant normal mucosal samples were taken from a resected distant margin that was histologically free from tumours, and adjacent normal mucosa was adjacent to the primary tumour on the same histologic section. The study was approved by the ethical committee of the Faculty of Health Sciences, Universities of Linköping and Uppsala, Sweden. All participants gave informed consents. Among 135 patients, 73 patients received surgery alone and 62 received additional preoperative RT. A total of 25 Gy of radiation was administered in five fractions before surgery, over a median of 8 days (range, 6-15 days). 
Surgery was performed at a median of 3 days (range, 0-8 days) after RT. None of the patients received adjuvant chemotherapy before or after surgery. The mean age of the patients was 67 years (range, 36-85 years; median, 69 years). All patients were included in the follow-up, with mean and median follow-up periods of 86 and 75 months (range, 0-193 months), respectively. Follow-up sessions were scheduled at the end of 2004, by which time 49 patients had died from rectal cancer. The growth pattern of the tumours was classified (by two pathologists) as either expansive or infiltrative pattern, based on their patterns of growth and invasiveness. Tumours were graded as well, moderately or poorly differentiated. Other patient and tumour characteristics are presented in Table 1. No statistically significant differences between the non-RT and RT groups regarding gender, age, TNM stage, grade of differentiation, and number of other tumours, surgical type, resection margin and mean distance to the anal verge were found (p > 0.05). Characteristics of patients and rectal cancers * Other colorectal cancer and/or other type of tumour besides the present rectal cancer # For T-test, and the other p values for X2 test The data for expression of cyclooxygenase-2 (Cox-2) [16], p73 [17] and phosphates of regenerating liver (PRL) [18] determined by immunohistochemistry, were obtained from the previous studies carried out at our laboratory. [SUBTITLE] Immunohistochemical staining [SUBSECTION] The mouse monoclonal antibody (B1/35), which is directed against human endosialin, was provided by Prof. Clare Isacke (Institute of Cancer Research, Sutton, UK), and described previously [11,12]. Five micrometer sections obtained from paraffin-embedded tissue blocks were incubated overnight at 60 °C, deparaffinized in xylene, and rehydrated in graded ethanol and distilled water. Sections were boiled in 0.01M citrate buffer (pH 6.0) in a high-pressure cooker for 1 min at 120 °C and then kept at room temperature for 30 min, followed by washing in phosphate-buffered saline (PBS; pH 7.4) buffer. The sections were then incubated overnight at 4°C with the primary antibody diluted to 1:250 in PBS. Sections were rinsed with PBS and incubated with polymerized horseradish peroxidise (HRP) -anti mouse/rabbit IgG for 30 min at room temperature (Real™ Envision™ HRP (rabbit/mouse) kit, Dako). After washing with PBS, the peroxidase reaction was run for 8 min with 3, 3'- diaminobenzididine (DAB). Sections were then counterstained with hematoxylin and mounted for microscopic examination. The normal mucosa and primary tumour samples were stained in the same immunostaining run to avoid biases in the staining pattern and intensity. Sections known to stain positively were included as negative and positive controls. For negative controls, sections incubated with universal mouse IgG (Dako), instead of the primary antibody, were not stained, whereas, the positive controls were stained with the primary antibody. The slides were examined with a microscope and scored independently by two pathologists who were given no clinical or pathological information. To avoid artifacts, areas with poor morphology, section margins, and any necrotic regions were not considered. The staining intensity in the entire stroma was scored as either weak (including negative) or strong. The staining intensity of tumour cells over the entire slide area was scored as negative or positive (if positive cells >5% of tumour cells). 
[SUBTITLE] Statistical analysis [SUBSECTION] The significance of differences in endosialin expression between normal mucosa and primary tumours, as well as between stroma and tumour cells, was examined with the χ2 test or McNemar's test. The relationship between endosialin expression and clinicopathological/biological factors was examined with the χ2 test. The relationship between endosialin expression and survival was tested with Cox's proportional hazards model. All p values were two-sided, and p < 0.05 was considered statistically significant.
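As a rough illustration of how such an analysis could be reproduced, the sketch below runs a χ2 test and a Cox proportional hazards model in Python. The software actually used is not stated in the paper, and the data frame, column names and values here are entirely hypothetical; this is a sketch of the analysis strategy, not the study's code.

```python
# Hypothetical re-analysis sketch (pandas, scipy and lifelines are assumptions, not named in the paper).
import pandas as pd
from scipy.stats import chi2_contingency
from lifelines import CoxPHFitter

# Chi-square test: endosialin (weak/strong) versus a dichotomized clinicopathological factor.
# The counts below are invented purely to make the example run.
contingency = pd.DataFrame(
    {"weak": [10, 7], "strong": [9, 22]},
    index=["TNM stage I", "TNM stage II-IV"],
)
chi2_stat, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")

# Cox proportional hazards model: follow-up in months, event = death from rectal cancer,
# covariate = endosialin coded 0 (weak) / 1 (strong). All values are hypothetical.
survival_df = pd.DataFrame(
    {
        "months": [12, 75, 86, 40, 150, 95, 20, 60],
        "died_of_rectal_cancer": [1, 0, 0, 1, 1, 1, 0, 0],
        "endosialin_strong": [1, 0, 1, 1, 0, 1, 1, 0],
    }
)
cph = CoxPHFitter()
cph.fit(survival_df, duration_col="months", event_col="died_of_rectal_cancer")
cph.print_summary()  # hazard ratio and p value for endosialin_strong
```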
[ "Background", "Patients", "Immunohistochemical staining", "Statistical analysis", "Results", "Endosialin expression in the stroma of distant normal mucosa, adjacent normal mucosa and tumour", "Endosialin expression in the stroma in relation to clinicopathological and biological variables", "Endosialin expression in the epithelial cells of distant normal mucosa, adjacent normal mucosa and carcinoma cells of tumours", "Endosialin expression in tumour cells in relation to clinicopathological and biological variables", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Colorectal cancer is one of the most common malignant diseases in western countries. Rectal cancer is a frequent presentation, with an estimated 35% of cases found situated in the rectum [1]. New surgical techniques [2] and preoperative radiotherapy (RT) [3] have improved the local control and disease-free survival of patients with rectal cancer. However the incidence of recurrence and mortality are still high, even following RT. Therefore, it is important to gain a better understanding of the changes induced in tumours following RT of rectal cancer patients and search for new biological markers in order to evaluate their therapeutic effects.\nThe initiation and progression of tumours are influenced by the behaviour of the tumour microenvironment, consisting of the extracellular matrix (ECM), the newly formed vasculature, inflammatory cells and fibroblasts [4,5]. Tumour-associated fibroblasts (activated fibroblasts and myofibroblasts) have a well-recognized role as a source of paracrine (cell-to-cell) growth factors that influence the growth, migration and invasion of cancer cells during the carcinogenic process. Activated fibroblasts are also responsible for the synthesis, deposition and remodelling of the ECM in tumour-associated stroma [4]. Angiogenesis is a multistep process in tumour progression that involves both endothelial cells and pericytes. Alternative potential targets for inhibiting tumours may be involved in the tumour-associated stroma that contains newly formed blood vessels, active fibroblasts and ECM proteins.\nOne of these ECM proteins is endosialin. The gene for this protein is located in chromosome 11q13 [6], and its product is a type I transmembrane protein, which is a highly sialylated cell surface receptor found conserved in humans and in mice. Its extracellular portion consists of five globular domains, which are N-terminal C-type lectin domain, a sushi-like domain, and three epidermal growth factor (EGF)-like repeats, followed by a mucin-like region [7,8]. Endosialin was first reported to be selectively expressed in tumour-associated endothelium, which results in an alternate designation of tumour endothelial marker 1 (TEM1) [9]. Recently, this designation was challenged by a series of studies in which endosialin was shown to be expressed in pericytes (periendothelial mural cells) and activated fibroblasts [10-14]. Thus, endosialin plays an important role in overall tumour vasculature [15]. Targeting on endosialin or its related pathways may therefore offer an attractive therapeutic opportunity for cancer patients [12].\nIn the present study, we examined endosialin expression in distant and adjacent normal mucosa, as well as in primary tumours, from rectal cancer patients, with or without preoperative RT. We aimed to investigate the relationships of endosialin expression with RT responses, and clinicopathological and biological variables associated with rectal cancers.", "Endosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. 
[SUBTITLE] Results [SUBSECTION]
[SUBTITLE] Endosialin expression in the stroma of distant normal mucosa, adjacent normal mucosa and tumour [SUBSECTION] Endosialin was expressed in fibroblasts and blood vessels in the stroma of normal mucosa, both distant and adjacent, and of tumour (Figure 1A-D). In fibroblasts, endosialin expression in intratumoural or peritumoural areas ranged from a prominent to a diffuse pattern. In blood vessels (mainly microvessels), endosialin was present in the cells surrounding the vessels, in the stroma of both normal mucosa and tumour.
Figure 1. Endosialin expression in rectal normal mucosa and in rectal cancer (representative images from different groups). (A and B) Weak expression in the fibroblasts and capillaries of the stroma, and lack of expression in epithelial cells of normal mucosa ((A) ×100 and (B) ×400). (C) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and weak expression in fibroblasts of normal mucosa (green arrow); normal epithelial cells are negative (yellow arrow), and a few tumour cells are positive (red arrow) or negative (pink arrow) (×100). (D) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and in tumour cells (red arrow), whereas some tumour cells are negative (pink arrow) (×400).
Endosialin expression in the stroma increased significantly from distant or adjacent normal mucosa to tumour (p < 0.0001) in both the non-RT group (3%, 5% and 63%, respectively) and the RT group (12%, 5% and 65%, respectively; Figure 2). No significant difference was observed between distant and adjacent normal mucosa in either subgroup (p > 0.05; Figure 2).
Figure 2. Endosialin expression in the stroma of distant normal mucosa, adjacent normal mucosa and tumour in the non-RT and RT groups. A significant increase was observed from distant or adjacent normal mucosa to tumour (p < 0.0001).
[SUBTITLE] Endosialin expression in the stroma in relation to clinicopathological and biological variables [SUBSECTION] As shown in Table 2, in the RT group the frequencies of strong endosialin expression in the stroma in TNM stages I to IV were 45%, 70%, 87% and 57%, respectively (p = 0.07). When TNM stage I (45%) was compared with the other stages combined (II+III+IV, 74%), the difference was statistically significant (p = 0.03). In the RT group, endosialin expression was also positively associated with expression of Cox-2 (p = 0.03), p73 (p = 0.01) and PRL (p = 0.002), whereas no such relationships were found in the non-RT group (p > 0.05; Table 2). No significant associations were observed between endosialin expression in the stroma and the other clinicopathological variables, including gender, age, tumour location, differentiation, complications, local/distant recurrence, overall survival and disease-free survival, in either the RT or the non-RT group (p > 0.05; data not shown).
Table 2. Endosialin expression in tumour-associated stroma in relation to clinicopathological and biological variables in rectal cancer patients. p values from the χ2 test; Cox-2: cyclooxygenase-2; PRL: phosphatase of regenerating liver.
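To make the stage comparison above concrete, the snippet below sets up a 2×2 χ2 test of strong stromal expression in stage I versus stages II-IV. Only the published percentages (45% versus 74%) are taken from the paper; the absolute counts are invented for illustration (Table 2 is not reproduced here), so the resulting p value will not necessarily match the reported one.

```python
# Illustrative 2x2 chi-square test for stage I vs stages II-IV in the RT group.
# Counts are hypothetical; only the percentages (about 45% vs 74% strong) come from the text.
from scipy.stats import chi2_contingency

#         strong  weak
table = [[9, 11],   # TNM stage I      (~45% strong)
         [31, 11]]  # TNM stages II-IV (~74% strong)

chi2_stat, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, p = {p:.3f}")
```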
[SUBTITLE] Endosialin expression in the epithelial cells of distant normal mucosa, adjacent normal mucosa and carcinoma cells of tumours [SUBSECTION] There was no positive endosialin expression in the epithelial cells of distant or adjacent normal mucosa in either the non-RT or the RT group (Figure 1A-C). In contrast, some tumour cells showed positive endosialin expression in 25 cases in the non-RT group and 21 cases in the RT group (Figure 1C-D).
We further analysed the relationship between endosialin expression in the stroma and in the tumour cells in the non-RT and RT groups (Table 3). In the non-RT group, expression in the stroma and in the tumour cells was concordant in 24 weak/negative cases (33%) and 22 strong/positive cases (30%), and discordant in 27 cases (37%): 3 tumours (4%) showed positive expression in tumour cells but weak expression in the stroma, and 24 tumours (33%) showed strong expression in the stroma but negative expression in tumour cells. In the RT group, expression was concordant in 20 weak/negative cases (32%) and 19 strong/positive cases (31%), and discordant in 23 cases (37%): 2 tumours (3%) showed positive expression in tumour cells but weak expression in the stroma, and 21 tumours (34%) showed strong expression in the stroma but negative expression in tumour cells. The difference between expression in the stroma and in the tumour cells was significant in both the non-RT group (p = 0.0001) and the RT group (p = 0.0003).
Table 3. Relationship of endosialin expression in stroma and tumour cells of rectal cancer patients. * McNemar's test.
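Because the discordant counts are given in the text, McNemar's test for the stroma versus tumour-cell comparison can be reproduced approximately. The sketch below uses the published discordant pairs (3 versus 24 in the non-RT group, 2 versus 21 in the RT group); the paper does not state whether an exact or a continuity-corrected variant of the test was used, so the computed p values may differ slightly from those reported.

```python
# Approximate reproduction of McNemar's test from the discordant counts reported above.
# b = positive tumour cells but weak stroma; c = strong stroma but negative tumour cells.
from scipy.stats import chi2


def mcnemar_cc(b: int, c: int) -> float:
    """Continuity-corrected McNemar chi-square p value (one common variant of the test)."""
    statistic = (abs(b - c) - 1) ** 2 / (b + c)
    return chi2.sf(statistic, df=1)


print(f"non-RT group: p = {mcnemar_cc(3, 24):.5f}")  # reported in the paper as p = 0.0001
print(f"RT group:     p = {mcnemar_cc(2, 21):.5f}")  # reported in the paper as p = 0.0003
```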
[SUBTITLE] Endosialin expression in tumour cells in relation to clinicopathological and biological variables [SUBSECTION] In the non-RT group, endosialin expression in tumour cells was more frequent in tumours with an infiltrative growth pattern than in those with an expansive growth pattern (p = 0.01; Table 4) and was positively related to p73 expression (p = 0.01; Table 4). In the RT group, a similar trend towards more frequent endosialin expression in tumours with an infiltrative growth pattern was observed (p = 0.06; Table 4), and endosialin expression was positively related to PRL expression (p = 0.02; Table 4).
Table 4. Endosialin expression in tumour cells in relation to clinicopathological and biological variables in rectal cancer patients. p values from the χ2 test; Cox-2: cyclooxygenase-2; PRL: phosphatase of regenerating liver.
No significant correlations were found between endosialin expression in tumour cells and survival, recurrence or the other pathological/biological variables in either the non-RT or the RT group (p > 0.05; data not shown).
[SUBTITLE] Discussion [SUBSECTION] Endosialin, also referred to as TEM1, was originally discovered as a human embryonic fibroblast-specific antigen and was later reported to be expressed in the endothelium. Endosialin is barely detectable in normal tissues apart from moderate expression in the smooth muscle of the colon and prostate [13]. TEM1 was therefore once considered an important candidate vascular target [6,9,19]. However, recent studies have demonstrated that endosialin is expressed in pericytes of breast tumours and brain gliomas, and not selectively in tumour endothelium [10-12]. In the present study, endosialin was prominently expressed in the tumour-associated stroma, especially in fibroblasts and blood vessels, and only weakly expressed in the stroma of distant or adjacent normal mucosa in both the non-RT and the RT group. Compared with normal mucosa, endosialin expression was much higher in tumour, in agreement with other reports on colon and breast cancers [20,21]. Up-regulated endosialin in the tumour-associated stroma may therefore play a role in the tumorigenesis of rectal cancer.
In the present study, endosialin expression in the stroma was more frequently observed in advanced TNM stages and was positively related to tumour-cell expression of Cox-2, p73 and PRL in the RT group, whereas there were no such associations in the non-RT group. The level of endosialin expression in the tumour-associated stroma has been reported to be significantly higher in breast cancers with nodal involvement than in those with negative nodes [21]. Endosialin has also been found in glioblastoma multiforme, anaplastic astrocytomas and metastatic carcinomas, which are highly invasive [12,22], and it is more abundant in melanoma metastases than in primary tumours [13]. In colorectal cancer, endosialin was up-regulated in Dukes' B compared with Dukes' A tumours [20]. Cox-2 is an inducible isoenzyme of cyclooxygenase that is undetectable in normal colonic mucosa but is overexpressed in 80% of colonic tumours [23]. Cox-2 is involved in several steps of colorectal tumorigenesis, including inhibition of apoptosis, enhancement of cellular proliferation and angiogenesis, and tumour cell invasion and differentiation. One of our previous studies has shown that Cox-2 expression is higher in more advanced tumours [24].
It would be interesting to examine in further studies whether endosialin and Cox-2 interact during tumour development, particularly in enhancing angiogenesis. We have also found that p73 expression increases during the development of colorectal cancer and that its overexpression is associated with poor prognosis [25]. PRL, which stimulates the Rho signalling pathway to promote cell motility and invasion [26], is up-regulated in colorectal cancer and associated with tumour invasion and metastasis [18]. In the same series of patients, our previous studies demonstrated that tumours with p53-negative expression (wild-type p53), p73-negative expression or weak Cox-2 expression had less local recurrence after RT [16,17,27], and that PRL expression is related to distant recurrence and poor survival after RT [18]. In further studies of colon cancer cell lines, we found that the antiapoptotic ΔNp73 and the mitosis factor PRL-3 increase after radiation [28], that overexpression of ΔNp73β increases cell-line viability, and that cisplatin induces degradation of ΔNp73β in a dose-dependent manner (unpublished data). All these results indicate that certain biological factors may be involved in the response to therapy in rectal cancer patients. If the relationships of these biological factors to the clinicopathological features above, and their roles in therapeutic pathways, could be confirmed, it might become possible to offer rectal cancer patients individualized therapy based on their biological profile. For example, targeting several biological factors rather than only one may yield greater therapeutic responses.
Irradiated stromal fibroblasts have been shown to induce sub-lethal DNA damage during carcinogenesis, and radiation-induced alterations of the stroma have been found to produce more mammary carcinomas than non-irradiated stroma [29]. These findings suggest that up-regulated endosialin expression in the tumour-associated stroma may be linked to other biological variables associated with aggressive behaviour after radiation. However, we did not find that endosialin expression was related to survival, either in the whole cohort or in the subgroups with or without RT. This may be partly due to the limited number of patients, or endosialin may influence survival only in certain subgroups. It would be interesting to determine the prognostic significance of endosialin in the subgroups in which it is related to clinicopathological variables such as TNM stage, Cox-2, p73 and PRL. We did observe different results for endosialin in each subgroup, but firm conclusions are difficult to draw at this point because of the limited numbers.
In the present study, we also observed endosialin expression in tumour cells. Positive endosialin expression in tumour cells was more frequent in tumours with an infiltrative growth pattern than in those with an expansive growth pattern, regardless of RT. Endosialin was positively related to p73 expression in the non-RT group and to PRL expression in the RT group. Why was stromal endosialin positively related to p73, PRL and Cox-2 in the RT group but not in the non-RT group? One speculation is that radiation influences these biological factors by up-regulating or down-regulating their expression.
After radiation, \"bad factors,\" such as Cox-2, p73, and PRL-3 increased their expression. This indicates that there may be some mechanism by which tumors try to protect themselves by increasing the expression of \"bad factors\" against the damage of radiation. Furthermore, why did stromal endosialin and tumour cellular endosialin have different relationships with biological factors (p73 and PRL)? For example, stromal endosialin was related to p73 expression in the RT group, whereas tumour cellular endosialin was related to p73 in the non-RT group. One possible explanation may be that the effects of radiation on stroma differ on different types of tumour cells, resulting in different expression and relationships to the biological factors investigated. In addition, the p73 gene contains two promoter regions, giving rise to a p53-like protein named TAp73, and the N-terminally truncated ΔNp73, which lacks the transactivation domain and p53 homology. ΔNp73 is thought to have a regulatory function, down-regulating both TAp73 by protein-protein interaction and p53 by competitive binding with DNA. In this autoregulatory loop, ΔNp73 protein is also up-regulated by both TAp73 and p53. The functional cooperation among these family members seems to vary depending on cell types, stimuli and p53 status [30]. In other words, the roles of the two isoforms may depend on their locations in the stroma or tumour cells.\nRecent studies have raised the concept of the coevolution of tumour cells with tumour-associated stroma. The stromal environment of tumours appears to be a leading factor, and not just a supporting one in the initiation of tumours [31]. The tumour microenvironment and interactions between tumour and stromal cells have a reciprocal relationship in tumour development and progression. Another interesting concept is epithelial-mesenchymal transition (EMT), a process by which cells lose their polarized epithelial structures and concomitantly acquire a migratory or mesenchymal phenotype. EMT is essential for normal embryonic development and progression of non-invasive adenomas into malignant, metastatic carcinomas. Alterations in cell-cell adhesion, cell-substrate interaction, extracellular matrix degradation and cytoskeleton organization are the major events that occur during EMT [32]. Overexpressed thymosin β4 (Tβ4) induces EMT in colorectal carcinoma by increasing integrin-linked kinase (ILK) complex formation with particularly interesting new cysteine-histidine rich protein (PINCH) [32]. We have studied PINCH expression in colorectal carcinomas and found that PINCH overexpressed on fibroblasts in the tumour-associated stroma compared to its expression in normal mucosa [33], similar to endosialin expression in tumour versus normal mucosa. In the present study, endosialin was expressed in the both tumour-associated stroma and tumour cells. Furthermore, endosialin expression in the tumour-associated stroma was positively correlated with that in tumour cells, giving more information regarding the interactions of endosialin between tumour microenvironments and tumours. Changes in signal conduction related to endosialin appear to play an important role in enhancing tumour progression after radiation. There was evidence that antiangiogenic therapies targeting both endothelial cells and pericytes were more effective than single-agent therapies [34]. In the present study, endosialin was not only obviously up-regulated in stroma but also up-regulated in tumour cells when compared to normal mucosa. 
Therapies targeting endosialin in both the stroma and the tumour cells may therefore provide more efficient treatment strategies for patients with rectal cancer.
Several factors may account for the discrepancies between studies regarding the localization of endosialin in the stroma and/or in tumour cells, including the number and clinicopathological characteristics of the patients studied and the methods used. For example, when tissue arrays are used to stain endosialin [13], only a limited sample of tissue is obtained from the tumour blocks, and because of tumour heterogeneity the selected cores may not be representative of the whole tumour. In the present study, we used conventional whole sections for endosialin staining and observed that only 34% of cases showed positive endosialin expression in tumour cells; indeed, in some positive cases only a few positive tumour cells were found in the entire section. In addition, studies employing real-time PCR or quantitative real-time PCR cannot determine the location of expression [20,21]. In the present study we used immunohistochemical staining, which directly identifies the location of endosialin expression.
[SUBTITLE] Conclusions [SUBSECTION] Endosialin expression may be involved in the progression of rectal cancer and is related to Cox-2, p73 and PRL expression. However, a direct relationship between endosialin expression and the response to RT was not found.
[SUBTITLE] Competing interests [SUBSECTION] The authors declare that they have no competing interests.
[SUBTITLE] Authors' contributions [SUBSECTION] ZYZ conducted the experiments, analyzed the results with XFS, and wrote the drafts of the manuscript. ZYZ and HZ read the immunohistochemical slides. AG provided the samples for the experiments and information regarding the therapy. XFS designed the experiments and helped write the manuscript. All authors have read and approved the final manuscript.
[SUBTITLE] Pre-publication history [SUBSECTION] The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1471-2407/11/89/prepub
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Patients", "Immunohistochemical staining", "Statistical analysis", "Results", "Endosialin expression in the stroma of distant normal mucosa, adjacent normal mucosa and tumour", "Endosialin expression in the stroma in relation to clinicopathological and biological variables", "Endosialin expression in the epithelial cells of distant normal mucosa, adjacent normal mucosa and carcinoma cells of tumours", "Endosialin expression in tumour cells in relation to clinicopathological and biological variables", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Colorectal cancer is one of the most common malignant diseases in western countries. Rectal cancer is a frequent presentation, with an estimated 35% of cases found situated in the rectum [1]. New surgical techniques [2] and preoperative radiotherapy (RT) [3] have improved the local control and disease-free survival of patients with rectal cancer. However the incidence of recurrence and mortality are still high, even following RT. Therefore, it is important to gain a better understanding of the changes induced in tumours following RT of rectal cancer patients and search for new biological markers in order to evaluate their therapeutic effects.\nThe initiation and progression of tumours are influenced by the behaviour of the tumour microenvironment, consisting of the extracellular matrix (ECM), the newly formed vasculature, inflammatory cells and fibroblasts [4,5]. Tumour-associated fibroblasts (activated fibroblasts and myofibroblasts) have a well-recognized role as a source of paracrine (cell-to-cell) growth factors that influence the growth, migration and invasion of cancer cells during the carcinogenic process. Activated fibroblasts are also responsible for the synthesis, deposition and remodelling of the ECM in tumour-associated stroma [4]. Angiogenesis is a multistep process in tumour progression that involves both endothelial cells and pericytes. Alternative potential targets for inhibiting tumours may be involved in the tumour-associated stroma that contains newly formed blood vessels, active fibroblasts and ECM proteins.\nOne of these ECM proteins is endosialin. The gene for this protein is located in chromosome 11q13 [6], and its product is a type I transmembrane protein, which is a highly sialylated cell surface receptor found conserved in humans and in mice. Its extracellular portion consists of five globular domains, which are N-terminal C-type lectin domain, a sushi-like domain, and three epidermal growth factor (EGF)-like repeats, followed by a mucin-like region [7,8]. Endosialin was first reported to be selectively expressed in tumour-associated endothelium, which results in an alternate designation of tumour endothelial marker 1 (TEM1) [9]. Recently, this designation was challenged by a series of studies in which endosialin was shown to be expressed in pericytes (periendothelial mural cells) and activated fibroblasts [10-14]. Thus, endosialin plays an important role in overall tumour vasculature [15]. Targeting on endosialin or its related pathways may therefore offer an attractive therapeutic opportunity for cancer patients [12].\nIn the present study, we examined endosialin expression in distant and adjacent normal mucosa, as well as in primary tumours, from rectal cancer patients, with or without preoperative RT. We aimed to investigate the relationships of endosialin expression with RT responses, and clinicopathological and biological variables associated with rectal cancers.", "[SUBTITLE] Patients [SUBSECTION] Endosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. 
The distant normal mucosal samples were taken from a resected distant margin that was histologically free from tumours, and adjacent normal mucosa was adjacent to the primary tumour on the same histologic section. The study was approved by the ethical committee of the Faculty of Health Sciences, Universities of Linköping and Uppsala, Sweden. All participants gave informed consents.\nAmong 135 patients, 73 patients received surgery alone and 62 received additional preoperative RT. A total of 25 Gy of radiation was administered in five fractions before surgery, over a median of 8 days (range, 6-15 days). Surgery was performed at a median of 3 days (range, 0-8 days) after RT. None of the patients received adjuvant chemotherapy before or after surgery. The mean age of the patients was 67 years (range, 36-85 years; median, 69 years). All patients were included in the follow-up, with mean and median follow-up periods of 86 and 75 months (range, 0-193 months), respectively. Follow-up sessions were scheduled at the end of 2004, by which time 49 patients had died from rectal cancer.\nThe growth pattern of the tumours was classified (by two pathologists) as either expansive or infiltrative pattern, based on their patterns of growth and invasiveness. Tumours were graded as well, moderately or poorly differentiated. Other patient and tumour characteristics are presented in Table 1. No statistically significant differences between the non-RT and RT groups regarding gender, age, TNM stage, grade of differentiation, and number of other tumours, surgical type, resection margin and mean distance to the anal verge were found (p > 0.05).\nCharacteristics of patients and rectal cancers\n* Other colorectal cancer and/or other type of tumour besides the present rectal cancer\n# For T-test, and the other p values for X2 test\nThe data for expression of cyclooxygenase-2 (Cox-2) [16], p73 [17] and phosphates of regenerating liver (PRL) [18] determined by immunohistochemistry, were obtained from the previous studies carried out at our laboratory.\nEndosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. The distant normal mucosal samples were taken from a resected distant margin that was histologically free from tumours, and adjacent normal mucosa was adjacent to the primary tumour on the same histologic section. The study was approved by the ethical committee of the Faculty of Health Sciences, Universities of Linköping and Uppsala, Sweden. All participants gave informed consents.\nAmong 135 patients, 73 patients received surgery alone and 62 received additional preoperative RT. A total of 25 Gy of radiation was administered in five fractions before surgery, over a median of 8 days (range, 6-15 days). Surgery was performed at a median of 3 days (range, 0-8 days) after RT. None of the patients received adjuvant chemotherapy before or after surgery. The mean age of the patients was 67 years (range, 36-85 years; median, 69 years). All patients were included in the follow-up, with mean and median follow-up periods of 86 and 75 months (range, 0-193 months), respectively. 
Follow-up sessions were scheduled at the end of 2004, by which time 49 patients had died from rectal cancer.\nThe growth pattern of the tumours was classified (by two pathologists) as either expansive or infiltrative pattern, based on their patterns of growth and invasiveness. Tumours were graded as well, moderately or poorly differentiated. Other patient and tumour characteristics are presented in Table 1. No statistically significant differences between the non-RT and RT groups regarding gender, age, TNM stage, grade of differentiation, and number of other tumours, surgical type, resection margin and mean distance to the anal verge were found (p > 0.05).\nCharacteristics of patients and rectal cancers\n* Other colorectal cancer and/or other type of tumour besides the present rectal cancer\n# For T-test, and the other p values for X2 test\nThe data for expression of cyclooxygenase-2 (Cox-2) [16], p73 [17] and phosphates of regenerating liver (PRL) [18] determined by immunohistochemistry, were obtained from the previous studies carried out at our laboratory.\n[SUBTITLE] Immunohistochemical staining [SUBSECTION] The mouse monoclonal antibody (B1/35), which is directed against human endosialin, was provided by Prof. Clare Isacke (Institute of Cancer Research, Sutton, UK), and described previously [11,12].\nFive micrometer sections obtained from paraffin-embedded tissue blocks were incubated overnight at 60 °C, deparaffinized in xylene, and rehydrated in graded ethanol and distilled water. Sections were boiled in 0.01M citrate buffer (pH 6.0) in a high-pressure cooker for 1 min at 120 °C and then kept at room temperature for 30 min, followed by washing in phosphate-buffered saline (PBS; pH 7.4) buffer. The sections were then incubated overnight at 4°C with the primary antibody diluted to 1:250 in PBS. Sections were rinsed with PBS and incubated with polymerized horseradish peroxidise (HRP) -anti mouse/rabbit IgG for 30 min at room temperature (Real™ Envision™ HRP (rabbit/mouse) kit, Dako). After washing with PBS, the peroxidase reaction was run for 8 min with 3, 3'- diaminobenzididine (DAB). Sections were then counterstained with hematoxylin and mounted for microscopic examination.\nThe normal mucosa and primary tumour samples were stained in the same immunostaining run to avoid biases in the staining pattern and intensity. Sections known to stain positively were included as negative and positive controls. For negative controls, sections incubated with universal mouse IgG (Dako), instead of the primary antibody, were not stained, whereas, the positive controls were stained with the primary antibody.\nThe slides were examined with a microscope and scored independently by two pathologists who were given no clinical or pathological information. To avoid artifacts, areas with poor morphology, section margins, and any necrotic regions were not considered. The staining intensity in the entire stroma was scored as either weak (including negative) or strong. The staining intensity of tumour cells over the entire slide area was scored as negative or positive (if positive cells >5% of tumour cells).\nThe mouse monoclonal antibody (B1/35), which is directed against human endosialin, was provided by Prof. Clare Isacke (Institute of Cancer Research, Sutton, UK), and described previously [11,12].\nFive micrometer sections obtained from paraffin-embedded tissue blocks were incubated overnight at 60 °C, deparaffinized in xylene, and rehydrated in graded ethanol and distilled water. 
Sections were boiled in 0.01M citrate buffer (pH 6.0) in a high-pressure cooker for 1 min at 120 °C and then kept at room temperature for 30 min, followed by washing in phosphate-buffered saline (PBS; pH 7.4) buffer. The sections were then incubated overnight at 4°C with the primary antibody diluted to 1:250 in PBS. Sections were rinsed with PBS and incubated with polymerized horseradish peroxidise (HRP) -anti mouse/rabbit IgG for 30 min at room temperature (Real™ Envision™ HRP (rabbit/mouse) kit, Dako). After washing with PBS, the peroxidase reaction was run for 8 min with 3, 3'- diaminobenzididine (DAB). Sections were then counterstained with hematoxylin and mounted for microscopic examination.\nThe normal mucosa and primary tumour samples were stained in the same immunostaining run to avoid biases in the staining pattern and intensity. Sections known to stain positively were included as negative and positive controls. For negative controls, sections incubated with universal mouse IgG (Dako), instead of the primary antibody, were not stained, whereas, the positive controls were stained with the primary antibody.\nThe slides were examined with a microscope and scored independently by two pathologists who were given no clinical or pathological information. To avoid artifacts, areas with poor morphology, section margins, and any necrotic regions were not considered. The staining intensity in the entire stroma was scored as either weak (including negative) or strong. The staining intensity of tumour cells over the entire slide area was scored as negative or positive (if positive cells >5% of tumour cells).\n[SUBTITLE] Statistical analysis [SUBSECTION] The significance of the difference in the intensity of endosialin expression between normal mucosa and primary tumours, as well as between stroma and tumour cells, was examined by X2 or McNemar's test. The relationship between endosialin expression and clinicopathological/biological factors was examined by the X2 method. The relationship between endosialin expression and survival was tested by using Cox's proportional hazard model. All p values were two sided, and values of p < 0.05 were considered as statistically significant.\nThe significance of the difference in the intensity of endosialin expression between normal mucosa and primary tumours, as well as between stroma and tumour cells, was examined by X2 or McNemar's test. The relationship between endosialin expression and clinicopathological/biological factors was examined by the X2 method. The relationship between endosialin expression and survival was tested by using Cox's proportional hazard model. All p values were two sided, and values of p < 0.05 were considered as statistically significant.", "Endosialin was immunohistochemically examined in distant mucosa samples (n = 72, in which 65 cases were matched with primary tumours), adjacent normal mucosa samples (n = 112) and primary tumours (n = 135) from the patients with rectal adenocarcinoma. The patients were from the Southeast Swedish Health Care region and participated in a Swedish clinical trial of preoperative RT between 1987 and 1990 [3]. The distant normal mucosal samples were taken from a resected distant margin that was histologically free from tumours, and adjacent normal mucosa was adjacent to the primary tumour on the same histologic section. The study was approved by the ethical committee of the Faculty of Health Sciences, Universities of Linköping and Uppsala, Sweden. 
All participants gave informed consents.\nAmong 135 patients, 73 patients received surgery alone and 62 received additional preoperative RT. A total of 25 Gy of radiation was administered in five fractions before surgery, over a median of 8 days (range, 6-15 days). Surgery was performed at a median of 3 days (range, 0-8 days) after RT. None of the patients received adjuvant chemotherapy before or after surgery. The mean age of the patients was 67 years (range, 36-85 years; median, 69 years). All patients were included in the follow-up, with mean and median follow-up periods of 86 and 75 months (range, 0-193 months), respectively. Follow-up sessions were scheduled at the end of 2004, by which time 49 patients had died from rectal cancer.\nThe growth pattern of the tumours was classified (by two pathologists) as either expansive or infiltrative pattern, based on their patterns of growth and invasiveness. Tumours were graded as well, moderately or poorly differentiated. Other patient and tumour characteristics are presented in Table 1. No statistically significant differences between the non-RT and RT groups regarding gender, age, TNM stage, grade of differentiation, and number of other tumours, surgical type, resection margin and mean distance to the anal verge were found (p > 0.05).\nCharacteristics of patients and rectal cancers\n* Other colorectal cancer and/or other type of tumour besides the present rectal cancer\n# For T-test, and the other p values for X2 test\nThe data for expression of cyclooxygenase-2 (Cox-2) [16], p73 [17] and phosphates of regenerating liver (PRL) [18] determined by immunohistochemistry, were obtained from the previous studies carried out at our laboratory.", "The mouse monoclonal antibody (B1/35), which is directed against human endosialin, was provided by Prof. Clare Isacke (Institute of Cancer Research, Sutton, UK), and described previously [11,12].\nFive micrometer sections obtained from paraffin-embedded tissue blocks were incubated overnight at 60 °C, deparaffinized in xylene, and rehydrated in graded ethanol and distilled water. Sections were boiled in 0.01M citrate buffer (pH 6.0) in a high-pressure cooker for 1 min at 120 °C and then kept at room temperature for 30 min, followed by washing in phosphate-buffered saline (PBS; pH 7.4) buffer. The sections were then incubated overnight at 4°C with the primary antibody diluted to 1:250 in PBS. Sections were rinsed with PBS and incubated with polymerized horseradish peroxidise (HRP) -anti mouse/rabbit IgG for 30 min at room temperature (Real™ Envision™ HRP (rabbit/mouse) kit, Dako). After washing with PBS, the peroxidase reaction was run for 8 min with 3, 3'- diaminobenzididine (DAB). Sections were then counterstained with hematoxylin and mounted for microscopic examination.\nThe normal mucosa and primary tumour samples were stained in the same immunostaining run to avoid biases in the staining pattern and intensity. Sections known to stain positively were included as negative and positive controls. For negative controls, sections incubated with universal mouse IgG (Dako), instead of the primary antibody, were not stained, whereas, the positive controls were stained with the primary antibody.\nThe slides were examined with a microscope and scored independently by two pathologists who were given no clinical or pathological information. To avoid artifacts, areas with poor morphology, section margins, and any necrotic regions were not considered. 
The staining intensity in the entire stroma was scored as either weak (including negative) or strong. The staining intensity of tumour cells over the entire slide area was scored as negative or positive (positive if >5% of tumour cells stained).", "The significance of the difference in the intensity of endosialin expression between normal mucosa and primary tumours, as well as between stroma and tumour cells, was examined by the χ2 or McNemar's test. The relationship between endosialin expression and clinicopathological/biological factors was examined by the χ2 method. The relationship between endosialin expression and survival was tested by using Cox's proportional hazards model. All p values were two sided, and values of p < 0.05 were considered statistically significant.", "[SUBTITLE] Endosialin expression in the stroma of distant normal mucosa, adjacent normal mucosa and tumour [SUBSECTION] Endosialin was expressed in fibroblasts and blood vessels in the stroma of normal mucosa, including distant and adjacent normal mucosa, and tumour (Figure 1A-D). In the fibroblasts, endosialin expression in intratumoural or peritumoural areas ranged from a prominent to a diffuse pattern. In the blood vessels (mainly microvessels), on the other hand, endosialin was present in the cells around vessels, either in the stroma of normal mucosa or tumour.\nEndosialin expression in rectal normal mucosa and in rectal cancer (representative images from different groups). (A and B) Weak expression in the fibroblasts and capillaries in the stroma, and lack of expression in epithelial cells of normal mucosa ((A) ×100 and (B) ×400).
(C) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and weak expression in fibroblasts of normal mucosa (green arrow), normal epithelial cells are negative (yellow arrow), and a few tumour cells are positive (red arrow) and negative (pink arrow) (×100). (D) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and tumour cells (red arrow), whereas some tumour cells are negative (pink arrow) (×400).\nEndosialin expression in the stroma significantly increased from distant or adjacent normal mucosa to the tumour (p < 0.0001) in both the non-RT group (3%, 5%, and 63%, respectively) and the RT groups (12%, 5%, and 65%, respectively, Figure 2). No significant difference was observed between distant and adjacent normal mucosa in the two subgroups (p > 0.05; Figure 2).\nEndosialin expression in the stroma in distant normal mucosa, adjacent normal mucosa and tumour in non-RT and RT groups. A significant increase was observed from the distant normal mucosa or adjacent normal mucosa to the tumour (p < 0.0001).\n[SUBTITLE] Endosialin expression in the stroma in relation to clinicopathological and biological variables [SUBSECTION] As shown in Table 2, in the RT group, the frequencies of strong endosialin expression in the stroma in TNM stages I to IV were 45%, 70%, 87%, and 57%, respectively (p = 0.07). If TNM stage I (45%) was compared with other stages (II+III+IV, 74%) the difference was statistically significant (p = 0.03). Endosialin expression was positively associated with expression of Cox-2 (p = 0.03), p73 (p = 0.01), and PRL (p = 0.002), whereas there was no such relationship in the non-RT group (p > 0.05; Table 2). No significant associations of endosialin expression in the stroma with other clinicopathological variables, including gender, age, tumour location, differentiation, complication, local/distant recurrence, overall survival and disease-free survival, in either the RT group or the non-RT group (p > 0.05; data not shown) were also observed.\nEndosialin expression in tumour-associated stroma in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.\nAs shown in Table 2, in the RT group, the frequencies of strong endosialin expression in the stroma in TNM stages I to IV were 45%, 70%, 87%, and 57%, respectively (p = 0.07). If TNM stage I (45%) was compared with other stages (II+III+IV, 74%) the difference was statistically significant (p = 0.03). Endosialin expression was positively associated with expression of Cox-2 (p = 0.03), p73 (p = 0.01), and PRL (p = 0.002), whereas there was no such relationship in the non-RT group (p > 0.05; Table 2). 
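The stage comparison just described (45% strong stromal expression in TNM stage I versus 74% in stages II+III+IV of the RT group, p = 0.03) is an ordinary chi-square contingency comparison. A minimal sketch follows; the cell counts are illustrative placeholders, since the underlying Table 2 counts are not reproduced in the text, and scipy is assumed to be available.

```python
# Sketch of the chi-square comparison of strong stromal endosialin expression
# between TNM stage I and stages II-IV in the RT group. The counts below are
# hypothetical placeholders chosen only to mimic the reported percentages
# (45% vs 74%); they are NOT the study's data.
import numpy as np
from scipy.stats import chi2_contingency

#                   strong  weak/negative
table = np.array([[  9,      11],   # hypothetical TNM stage I
                  [ 37,      13]])  # hypothetical TNM stages II-IV
chi2_stat, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2_stat:.2f}, p = {p_value:.3f}")
```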
No significant associations of endosialin expression in the stroma with other clinicopathological variables, including gender, age, tumour location, differentiation, complication, local/distant recurrence, overall survival and disease-free survival, in either the RT group or the non-RT group (p > 0.05; data not shown) were also observed.\nEndosialin expression in tumour-associated stroma in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.\n[SUBTITLE] Endosialin expression in the epithelial cells of distant normal mucosa, adjacent normal mucosa and carcinoma cells of tumours [SUBSECTION] There was no positive expression in epithelial cells of distant and adjacent normal mucosa in either the non-RT group or the RT group (Figure 1A-C). While in 25 cases of non-RT group and 21 cases of RT group, some tumour cells had positive endosialin expression (Figure 1C-D).\nWe further analyzed the relationship between endosialin expression in the stroma and that in the tumour cells in non-RT and RT group (Table 3). The expression in the stroma and in the tumour cells was concordant in 24 weak/negative cases (33%) and 22 strong/positive cases (30%) in non-RT group. Only 27 cases (37%) showed different expression in the two locations (3 tumours (4%) showed positive expression in tumour cells but weak expression in stroma, and 24 tumours (33%) showed strong expression in stroma but negative expression in tumour cells). In the RT group, the concordance of endosialin expression in stroma and in tumour cells was 20 weak/negative cases (32%) and 19 strong/positive cases (31%). The discordance was 23 cases (37%), which showed different expression in the two locations (2 tumours (3%) showed positive expression in tumour cells but weak expression in stroma, and 21 tumours (34%) showed strong expression in stroma but negative expression in tumour cells). There was a significant difference between the expression in the stroma and in the tumour cells in the non-RT (p = 0.0001) and RT groups (p = 0.0003).\nRelationship of endosialin expression in stroma and tumour cells of rectal cancer patients\n* For McNemar's test.\nThere was no positive expression in epithelial cells of distant and adjacent normal mucosa in either the non-RT group or the RT group (Figure 1A-C). While in 25 cases of non-RT group and 21 cases of RT group, some tumour cells had positive endosialin expression (Figure 1C-D).\nWe further analyzed the relationship between endosialin expression in the stroma and that in the tumour cells in non-RT and RT group (Table 3). The expression in the stroma and in the tumour cells was concordant in 24 weak/negative cases (33%) and 22 strong/positive cases (30%) in non-RT group. Only 27 cases (37%) showed different expression in the two locations (3 tumours (4%) showed positive expression in tumour cells but weak expression in stroma, and 24 tumours (33%) showed strong expression in stroma but negative expression in tumour cells). In the RT group, the concordance of endosialin expression in stroma and in tumour cells was 20 weak/negative cases (32%) and 19 strong/positive cases (31%). The discordance was 23 cases (37%), which showed different expression in the two locations (2 tumours (3%) showed positive expression in tumour cells but weak expression in stroma, and 21 tumours (34%) showed strong expression in stroma but negative expression in tumour cells). 
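The paired stroma-versus-tumour-cell comparison in Table 3 is what McNemar's test evaluates, and the four counts per group are quoted in the text (non-RT: 24, 3, 24, 22; RT: 20, 2, 21, 19). The sketch below, assuming statsmodels is available, rebuilds those 2×2 tables and runs the test; depending on the exact variant used (continuity-corrected versus exact), the p-values come out close to, but not necessarily identical with, the published 0.0001 and 0.0003.

```python
# McNemar's test on the paired stroma vs tumour-cell classification (Table 3),
# using the counts quoted in the text. Rows = stroma (weak/negative, strong),
# columns = tumour cells (negative, positive).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

non_rt = np.array([[24,  3],    # 24 concordant weak/negative, 3 discordant
                   [24, 22]])   # 24 discordant, 22 concordant strong/positive
rt     = np.array([[20,  2],
                   [21, 19]])

for name, tbl in (("non-RT", non_rt), ("RT", rt)):
    res = mcnemar(tbl, exact=False, correction=True)  # chi-square form
    print(f"{name}: statistic = {res.statistic:.2f}, p = {res.pvalue:.4f}")
```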
There was a significant difference between the expression in the stroma and in the tumour cells in the non-RT (p = 0.0001) and RT groups (p = 0.0003).\nRelationship of endosialin expression in stroma and tumour cells of rectal cancer patients\n* For McNemar's test.\n[SUBTITLE] Endosialin expression in tumour cells in relation to clinicopathological and biological variables [SUBSECTION] In the non-RT group, endosialin expression was more frequent in tumours with infiltrative growth patterns than those with expansive growth patterns (p = 0.01; Table 4) and was positively related to p73 expression (p = 0.01; Table 4). In the RT group, a similar trend of endosialin expression was observed in tumours with infiltrative growth patterns when compared to those with expansive growth patterns (p = 0.06; Table 4). Endosialin expression was positively related to PRL expression (p = 0.02; Table 4).\nEndosialin expression in tumour cells in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.\nNo significant correlation between endosialin expression in tumour cells and survival, recurrence and other pathological/biological variables were found in either the non-RT group or in the RT group (p >0.05; data not shown).\nIn the non-RT group, endosialin expression was more frequent in tumours with infiltrative growth patterns than those with expansive growth patterns (p = 0.01; Table 4) and was positively related to p73 expression (p = 0.01; Table 4). In the RT group, a similar trend of endosialin expression was observed in tumours with infiltrative growth patterns when compared to those with expansive growth patterns (p = 0.06; Table 4). Endosialin expression was positively related to PRL expression (p = 0.02; Table 4).\nEndosialin expression in tumour cells in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.\nNo significant correlation between endosialin expression in tumour cells and survival, recurrence and other pathological/biological variables were found in either the non-RT group or in the RT group (p >0.05; data not shown).", "Endosialin was expressed in fibroblasts and blood vessels in the stroma of normal mucosa, including distant and adjacent normal mucosa, and tumour (Figure 1A-D). In the fibroblasts, endosialin expression of intratumoural or peritumoural areas ranged from a prominent to a diffuse pattern. In the blood vessels (mainly mircrovessels), on the other hand, endosialin presented in the cells around vessels, either in the stroma of normal mucosa or tumour.\nEndosialin expression in rectal normal mucosa and in rectal cancer (representative images from different groups). (A and B) Weak expression in the fibroblasts and capillaries in the stroma, and lack of expression in epithelial cells of normal mucosa ((A) ×100 and (B) ×400). (C) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and weak expression in fibroblasts of normal mucosa (green arrow), normal epithelial cells are negative (yellow arrow), and a few tumour cells are positive (red arrow) and negative (pink arrow) (×100). 
(D) Strong expression in fibroblasts of tumour-associated stroma (black arrow) and tumour cells (red arrow), whereas some tumour cells are negative (pink arrow) (×400).\nEndosialin expression in the stroma significantly increased from distant or adjacent normal mucosa to the tumour (p < 0.0001) in both the non-RT group (3%, 5%, and 63%, respectively) and the RT groups (12%, 5%, and 65%, respectively, Figure 2). No significant difference was observed between distant and adjacent normal mucosa in the two subgroups (p > 0.05; Figure 2).\nEndosialin expression in the stroma in distant normal mucosa, adjacent normal mucosa and tumour in non-RT and RT groups. A significant increase was observed from the distant normal mucosa or adjacent normal mucosa to the tumour (p < 0.0001).", "As shown in Table 2, in the RT group, the frequencies of strong endosialin expression in the stroma in TNM stages I to IV were 45%, 70%, 87%, and 57%, respectively (p = 0.07). If TNM stage I (45%) was compared with other stages (II+III+IV, 74%) the difference was statistically significant (p = 0.03). Endosialin expression was positively associated with expression of Cox-2 (p = 0.03), p73 (p = 0.01), and PRL (p = 0.002), whereas there was no such relationship in the non-RT group (p > 0.05; Table 2). No significant associations of endosialin expression in the stroma with other clinicopathological variables, including gender, age, tumour location, differentiation, complication, local/distant recurrence, overall survival and disease-free survival, in either the RT group or the non-RT group (p > 0.05; data not shown) were also observed.\nEndosialin expression in tumour-associated stroma in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.", "There was no positive expression in epithelial cells of distant and adjacent normal mucosa in either the non-RT group or the RT group (Figure 1A-C). While in 25 cases of non-RT group and 21 cases of RT group, some tumour cells had positive endosialin expression (Figure 1C-D).\nWe further analyzed the relationship between endosialin expression in the stroma and that in the tumour cells in non-RT and RT group (Table 3). The expression in the stroma and in the tumour cells was concordant in 24 weak/negative cases (33%) and 22 strong/positive cases (30%) in non-RT group. Only 27 cases (37%) showed different expression in the two locations (3 tumours (4%) showed positive expression in tumour cells but weak expression in stroma, and 24 tumours (33%) showed strong expression in stroma but negative expression in tumour cells). In the RT group, the concordance of endosialin expression in stroma and in tumour cells was 20 weak/negative cases (32%) and 19 strong/positive cases (31%). The discordance was 23 cases (37%), which showed different expression in the two locations (2 tumours (3%) showed positive expression in tumour cells but weak expression in stroma, and 21 tumours (34%) showed strong expression in stroma but negative expression in tumour cells). 
There was a significant difference between the expression in the stroma and in the tumour cells in the non-RT (p = 0.0001) and RT groups (p = 0.0003).\nRelationship of endosialin expression in stroma and tumour cells of rectal cancer patients\n* For McNemar's test.", "In the non-RT group, endosialin expression was more frequent in tumours with infiltrative growth patterns than those with expansive growth patterns (p = 0.01; Table 4) and was positively related to p73 expression (p = 0.01; Table 4). In the RT group, a similar trend of endosialin expression was observed in tumours with infiltrative growth patterns when compared to those with expansive growth patterns (p = 0.06; Table 4). Endosialin expression was positively related to PRL expression (p = 0.02; Table 4).\nEndosialin expression in tumour cells in relation to clinicopathological and biological variables in rectal cancer patients\np values for X2 test; Cox-2: cyclooxygenase-2; PRL: phosphates of regenerating liver.\nNo significant correlation between endosialin expression in tumour cells and survival, recurrence and other pathological/biological variables were found in either the non-RT group or in the RT group (p >0.05; data not shown).", "Endosialin, also referred to as TEM1, was originally discovered as a human embryonic fibroblast-specific antigen and later reported to be expressed in the endothelium. Endosialin is barely detectable in normal tissues other than its moderate expression in the smooth muscle of colon and prostate [13]. Therefore, TEM1 was once considered as an important candidate as a vascular target [6,9,19]. However, recent studies have demonstrated that endosialin is expressed in pericytes of breast tumours and brain gliomas, and not selectively in tumour endothelium [10-12]. In the present study, endosialin was prominently expressed in the tumour-associated stroma, especially in fibroblasts and blood vessels, and only weakly expressed in the stroma of distant or adjacent normal mucosa in both the non-RT group and in the RT groups. When compared to the normal mucosa, endosialin expression was much higher in tumour, thus agreeing with other reports on colon and breast cancers [20,21]. Up-regulated endosialin in the tumour-associated stroma may play a role in the tumorigenesis of rectal cancers.\nIn the present study, endosialin expression in the stroma was more frequently observed in advanced TNM stages, and positively related to the tumour cellular expression of Cox-2, p73, and PRL in the RT group, whereas there were no such associations in the non-RT group. The level of endosialin expression in the tumour-associated stroma was significantly higher in breast cancers with nodal involvement compared to those with negative nodes [21]. Endosialin has also been found in glioblastoma multiforme, anaplastic astrocytomas, and metastatic carcinomas that are of highly invasive activity [12,22]. Endosialin is also more abundant in melanoma metastases than in the primary tumours [13]. In colorectal cancer, endosialin was up-regulated in Dukes' B compared to Dukes' A [20]. Cox-2 is an inducible isoenzyme of cyclooxygenase that is undetectable in normal colonic mucosa but is overexpressed in 80% of colonic tumours [23]. Cox-2 is involved in a multistep process of colorectal tumorigenesis, such as apoptosis inhibition of cellular proliferation and angiogenesis enhancement, tumour cell invasion and differentiation. One of our previous studies has shown that Cox-2 expression is higher in more advanced tumours [24]. 
It is interesting to see, in the further study, whether or not endosialin and Cox-2 have interactions in the tumour development, especially in enhancing angiogenesis. We have also found that p73 expression is increased during the development of colorectal cancers and its overexpression is further associated with poor prognosis in patients [25]. PRL, which stimulates the Rho signalling pathway to promote cell motility and invasion [26], is up-regulated in colorectal cancer and associated with tumour invasion and metastasis [18]. In the same series of the patients, our previous studies have demonstrated that tumours with p53-negative expression (wild type p53), or p73-negative expression, or weak Cox-2 expression had less local recurrence after RT [16,17,27]. PRL expression is related to distant recurrence and poor survival after RT [18]. In further studies of colon cancer cell lines, we found that, after radiation, the antiapoptotic ΔNp73 and mitosis factor PRL-3 increase [28] and the overexpression of ΔNp73β increases the viability of cell lines and cisplatin induces the degradation of ΔNp73β in a dose-dependent manner (unpublished data). All these results indicate that certain biological factors may be involved in response to therapy in rectal cancer patients. If we could further confirm the data obtained from studies of these biological factors in relation to the clinicopathological issues above as well as their pathways in therapies, we may be able to apply them to clinical practices where rectal cancer patients may receive individual therapies based on their biological profile. For example, the targeting of multiple biological factors instead of only one in certain therapies may yield greater responses to them.\nRadiated stromal fibroblasts in the carcinogenic process have been shown to induce sub-lethal DNA damage. Radiation-induced alterations of the stroma have been found to produce more mammary carcinomas compared to non-radiated stroma [29]. All of these results hinted that the patients with up-regulated expression of endosialin in a tumour-associated stroma may be linked to other biological variables that are related to aggressive characterization after radiation. However, we did not find that endosialin expression was further related to survival in either the whole group or subgroups (with or without RT) of patients. This may be partly due to the limited number of the patients and/or the hypothesis that endosialin may play a survival role in certain groups of the patients. It would be interesting to determine the survival significance of endosialin in subgroups where the endosialin is related to the clinicopathological variables, such as TNM, Cox-2, p73 and PRL. We did find that endosialin presented different results in each subgroup, but firm conclusions are difficult to draw at this point due to the limited number.\nIn the present study, we also observed endosialin expression in tumour cells. Positive endosialin expression in tumour cells was more frequently observed in tumours with infiltrative growth pattern compared to expansive growth pattern, regardless of RT. Endosialin was positively related to p73 expression in the non-RT group, and to PRL expression in the RT group. Why was stromal endosialin positively related to p73, PRL and Cox-2 in the RT-group, but not in the non-RT group? One speculation may be that radiation influenced biological factors by up-regulating or down-regulating expression. 
After radiation, \"bad factors,\" such as Cox-2, p73, and PRL-3 increased their expression. This indicates that there may be some mechanism by which tumors try to protect themselves by increasing the expression of \"bad factors\" against the damage of radiation. Furthermore, why did stromal endosialin and tumour cellular endosialin have different relationships with biological factors (p73 and PRL)? For example, stromal endosialin was related to p73 expression in the RT group, whereas tumour cellular endosialin was related to p73 in the non-RT group. One possible explanation may be that the effects of radiation on stroma differ on different types of tumour cells, resulting in different expression and relationships to the biological factors investigated. In addition, the p73 gene contains two promoter regions, giving rise to a p53-like protein named TAp73, and the N-terminally truncated ΔNp73, which lacks the transactivation domain and p53 homology. ΔNp73 is thought to have a regulatory function, down-regulating both TAp73 by protein-protein interaction and p53 by competitive binding with DNA. In this autoregulatory loop, ΔNp73 protein is also up-regulated by both TAp73 and p53. The functional cooperation among these family members seems to vary depending on cell types, stimuli and p53 status [30]. In other words, the roles of the two isoforms may depend on their locations in the stroma or tumour cells.\nRecent studies have raised the concept of the coevolution of tumour cells with tumour-associated stroma. The stromal environment of tumours appears to be a leading factor, and not just a supporting one in the initiation of tumours [31]. The tumour microenvironment and interactions between tumour and stromal cells have a reciprocal relationship in tumour development and progression. Another interesting concept is epithelial-mesenchymal transition (EMT), a process by which cells lose their polarized epithelial structures and concomitantly acquire a migratory or mesenchymal phenotype. EMT is essential for normal embryonic development and progression of non-invasive adenomas into malignant, metastatic carcinomas. Alterations in cell-cell adhesion, cell-substrate interaction, extracellular matrix degradation and cytoskeleton organization are the major events that occur during EMT [32]. Overexpressed thymosin β4 (Tβ4) induces EMT in colorectal carcinoma by increasing integrin-linked kinase (ILK) complex formation with particularly interesting new cysteine-histidine rich protein (PINCH) [32]. We have studied PINCH expression in colorectal carcinomas and found that PINCH overexpressed on fibroblasts in the tumour-associated stroma compared to its expression in normal mucosa [33], similar to endosialin expression in tumour versus normal mucosa. In the present study, endosialin was expressed in the both tumour-associated stroma and tumour cells. Furthermore, endosialin expression in the tumour-associated stroma was positively correlated with that in tumour cells, giving more information regarding the interactions of endosialin between tumour microenvironments and tumours. Changes in signal conduction related to endosialin appear to play an important role in enhancing tumour progression after radiation. There was evidence that antiangiogenic therapies targeting both endothelial cells and pericytes were more effective than single-agent therapies [34]. In the present study, endosialin was not only obviously up-regulated in stroma but also up-regulated in tumour cells when compared to normal mucosa. 
If the therapies target endosialin in both stroma and tumour cells, they may provide more efficient strategies of therapy for the patients with rectal cancers.\nRegarding the discrepancies of endosialin localization, in stroma and/or in tumour cells found in different studies, several factors may be responsible, including the number and clinicopathological characteristics of the patients included in such studies, as well as the methods used in the same. For example, if tissue arrays were used for staining endosialin [13], it is possible that a limited sample of tissue was obtained from the tumour blocks and the selected arrays may not be representative of the complete characteristics of the tumour because of tumour heterogeneity. In the present study, we used ordinary sections for staining endosialin and observed that only 34% of the cases showed positive endosialin expression in tumour cells. In fact, in some positive cases, only a few positive tumour cells were determined in the entire tumour sections. In addition, some studies which employed real-time PCR or quantitative real-time PCR methods could not determine the location of the expression [20,21]. In the present study, we used immunohistochemical staining, which is one of the best methods of identifying the location of endosialin expression.", "Endosialin expression may be involved in the progression of rectal cancers. It is also related to Cox-2, p73, and PRL expression. However, a direct relationship between endosialin expression and RT responses in patients was not found.", "The authors declare that they have no competing interests.", "ZYZ conducted the experiments, analyzed the results with XFS, and wrote the drafts of the manuscript. ZYZ and HZ read the immunohistochemical slides. AG provided the samples for the experiments and information regarding the therapy. XFS designed the experiments and helped write the manuscript. All authors have read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2407/11/89/prepub\n" ]
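The survival analysis named in the statistical-analysis section of this study (Cox's proportional hazards model) could be set up as sketched below. This is only a sketch: the data frame, its column names and its values are hypothetical and do not come from the study; lifelines is assumed as the fitting library.

```python
# Hedged sketch of a Cox proportional hazards fit, as named in the statistical
# analysis section. Every value and column name below is hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_months":   [75, 12, 98, 33, 120, 54, 8, 86, 40, 66],
    "died_rectal_ca":    [0, 1, 0, 1, 0, 1, 1, 0, 1, 0],   # event indicator
    "endosialin_strong": [1, 1, 0, 1, 0, 0, 1, 0, 0, 1],   # stromal score, strong = 1
    "preop_rt":          [0, 1, 0, 0, 1, 1, 0, 1, 0, 1],   # preoperative radiotherapy
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_months", event_col="died_rectal_ca")
cph.print_summary()  # hazard ratios with 95% confidence intervals
```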
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Prevalence of systemic immunoreactivity to Aggregatibacter actinomycetemcomitans leukotoxin in relation to the incidence of myocardial infarction.
21362180
Chronic infections and associated inflammatory markers are suggested risk factors for cardiovascular disease (CVD). The proinflammatory cytokine, interleukin (IL)-1β, is suggested to play a role in the regulation of local inflammatory responses in both CVD and periodontitis. The leukotoxin from the periodontal pathogen Aggregatibacter actinomycetemcomitans has recently been shown to cause abundant secretion of IL-1β from macrophages. The aim of the present study was to compare the prevalence of systemic immunoreactivity to A. actinomycetemcomitans leukotoxin in myocardial infarction (MI) cases (n = 532) and matched controls (n = 1,000) in a population-based case-referent study in northern Sweden.
BACKGROUND
Capacity to neutralize A. actinomycetemcomitans leukotoxin was analyzed in a bioassay with leukocytes, purified leukotoxin, and plasma. Plasma samples that inhibited lactate-dehydrogenase release from leukotoxin-lysed cells by ≥ 50% were classified as positive.
METHODS
Neutralizing capacity against A. actinomycetemcomitans leukotoxin was detected in 53.3% of the plasma samples. The ability to neutralize leukotoxin was correlated to increasing age in men (n = 1,082) but not in women (n = 450). There was no correlation between presence of systemic leukotoxin-neutralization capacity and the incidence of MI, except for women (n = 146). Women with a low neutralizing capacity had a significantly higher incidence of MI than those who had a high neutralizing capacity.
RESULTS
Systemic immunoreactivity against A. actinomycetemcomitans leukotoxin was found at a high prevalence in the analyzed population of adults from northern Sweden. The results from the present study do not support the hypothesis that systemic leukotoxin-neutralizing capacity can decrease the risk for MI.
CONCLUSION
[ "Adult", "Aged", "Antibodies, Bacterial", "Antibodies, Neutralizing", "Case-Control Studies", "Comorbidity", "Exotoxins", "Female", "Humans", "Incidence", "Male", "Middle Aged", "Myocardial Infarction", "Neutralization Tests", "Pasteurellaceae", "Pasteurellaceae Infections", "Seroepidemiologic Studies", "Sweden" ]
3053232
null
null
Methods
[SUBTITLE] Study population [SUBSECTION] The study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS. Incident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event. Fatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent. The study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS. Incident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. 
Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event. Fatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent. [SUBTITLE] Leukotoxin-neutralization assay [SUBSECTION] The A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population. Briefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 106 cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019). For detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96 well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. Plasma without leukotoxin-neutralization capacity was classified as "null", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as "low", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as "high". The A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. 
The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population. Briefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 10⁶ cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019). For detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96-well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. Plasma without leukotoxin-neutralization capacity was classified as "null", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as "low", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as "high".\n[SUBTITLE] Statistical analyses [SUBSECTION] The Mantel-Haenszel χ2-test for trend was used to analyze the association between age and antibodies. To investigate whether the presence of leukotoxin antibodies (categorized into null, low or high) affects the risk of having an MI, conditional logistic regression, appropriate for the matched design, was used. A multivariable model was used to adjust for smoking, self-reported diabetes, systolic blood pressure and apoB/apoA1. Results are presented as p-values, odds ratios (ORs) and corresponding 95% confidence intervals (CIs). No correction for multiple testing was performed. SAS version 9.1 was used for the statistical analyses.
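The null/low/high grading described in the assay section above reduces to a two-threshold rule (at least 50% inhibition of leukotoxin-induced LDH release, tested first at 10% and then at 1% plasma). A minimal helper, with a function name and arguments of our own choosing, might look like this:

```python
# Two-threshold classification of leukotoxin-neutralizing capacity, transcribed
# from the assay description above. The >= 50% inhibition threshold and the
# 10%/1% plasma steps are taken from the text; everything else is hypothetical.
def classify_neutralization(inhibition_at_10pct: float, inhibition_at_1pct: float) -> str:
    """Return 'null', 'low' or 'high' neutralizing capacity."""
    if inhibition_at_10pct < 50.0:
        return "null"   # not positive even at 10% plasma
    if inhibition_at_1pct < 50.0:
        return "low"    # positive at 10% plasma but not at 1%
    return "high"       # positive at both 10% and 1% plasma

print(classify_neutralization(72.0, 18.0))  # -> low
```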
null
null
null
null
[ "Background", "Study population", "Leukotoxin-neutralization assay", "Statistical anaylses", "Results", "Prevalence of leukotoxin-neutralizing capacity", "Prevalence and age", "Prevalence in relation to incidence of MI", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Chronic inflammations, such as periodontitis, are suggested to be risk factors for the development of cardiovascular diseases [1]. It has been suggested that the total pathogenic burden from the oral cavity, and possibly also from the gut, correlates with disease markers of atherosclerosis [2]. Periodontitis is a bacteria-induced inflammatory condition that causes degradation of the tooth-supporting tissues, bone and connective tissue [3,4]. Bioactive molecules released from pathogenic microorganisms located in the subgingival biofilm cause imbalance in the inflammatory response, which results in loss of the tooth-supporting tissues [5]. For the host to maintain homeostasis within the periodontal tissues, the immune response system contributes to controlling the microbial colonization and invasion [6]. This immune response includes local and systemic production of antibodies induced by antigens from the microorganisms that are localized in this biofilm [7]. There are more than 700 different microbial species found in the oral cavity of humans [8]. A recent report using the pyrosequencing methodology to analyze the composition of the oral microbiota indicate a substantial increase in that number [9]. Among the different species found, Aggregatibacter actinomycetemcomitans is a bacterium associated with aggressive forms of periodontitis, and it produces a leukotoxin that specifically affects human leukocytes [10]. Individuals infected with a specific, highly leukotoxic clone (JP2) of this bacterium have a significantly increased risk for periodontitis [11]. The proinflammatory response induced by the leukotoxin is a cellular response associated with the pathogenesis of periodontitis [10,12,13] and atherosclerosis [14].\nThe proportion in a population that harbor A. actinomycetemcomitans varies depending on geographic origin and periodontal condition of the subjects [10]. It has been shown that systemic leukotoxin-neutralization is correlated to the presence of this bacterium in the oral subgingival biofilm [15-17]. Data from a previous study showed that women with systemic neutralizing capacity against A. actinomycetemcomitans' leukotoxin had a significantly decreased incidence of stroke [18]. This systemic neutralizing capacity has been shown to correlate (p-value < 0.001) to the presence of leukotoxin-specific antibodies, as well as to antibodies against whole A. actinomyctemcomitans bacteria [19]. We hypothesized that a virgin A. actinomycetemcomitans infection late in life might be a risk factor for stroke and contribute to the negative association between stroke and the presence of these neutralizing antibodies. The aim of the present study was to analyze if the presence of systemic immunoreactivity to A. actinomycetemcomitans leukotoxin also interferes with the incidence of a future myocardial infarction (MI).", "The study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. 
The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS.\nIncident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event.\nFatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent.", "The A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population.\nBriefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 106 cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019).\nFor detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96 well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. 
Plasma without leukotoxin-neutralization capacity was classified as \"null\", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as \"low\", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as \"high\".", "The Mantel-Haenszel χ2-test for trend was used to analyze the association between age and antibodies. To investigate if the presence of leukotoxin antibodies (categorized into null, low or high) affects the risk of having an MI, conditional logistic regression appropriate for the matched design, was used. A multivariable model was used to adjust for smoking, self-reported diabetes, systolic blood pressure and apoB/apoA1. Results are presented as p-values, odds ratios (ORs) and corresponding 95% confidence intervals (CIs). No correction for multiple testing was performed. SAS version 9.1 was used for the statistical analyses.", "[SUBTITLE] Prevalence of leukotoxin-neutralizing capacity [SUBSECTION] The study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.\nThe study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.\n[SUBTITLE] Prevalence and age [SUBSECTION] The proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). 
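The age-trend result reported just above (Mantel-Haenszel chi-square test for trend, p ≤ 0.001 for men and p = 0.170 for women) can be sketched as a Cochran-Armitage-style trend test. The group sizes for men are taken from the Figure 3 legend, but the numbers with neutralizing capacity are hypothetical placeholders, so the printed p-value is illustrative only.

```python
# Cochran-Armitage-style chi-square test for trend across ordered age groups,
# in the spirit of the Mantel-Haenszel trend test named in the methods.
import numpy as np
from scipy.stats import chi2

scores    = np.array([1, 2, 3, 4])           # ordered age groups 29-44 ... 65-77
n_total   = np.array([179, 397, 503, 6])     # men per age group (Figure 3 legend)
n_neutral = np.array([70, 200, 300, 4])      # hypothetical counts with capacity

p_bar = n_neutral.sum() / n_total.sum()
t_obs = (n_neutral * scores).sum()
t_exp = p_bar * (n_total * scores).sum()
var_t = p_bar * (1 - p_bar) * ((n_total * scores ** 2).sum()
                               - (n_total * scores).sum() ** 2 / n_total.sum())
chi2_trend = (t_obs - t_exp) ** 2 / var_t
print(f"chi2 trend = {chi2_trend:.2f}, p = {chi2.sf(chi2_trend, df=1):.4f}")
```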
In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.\nThe proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.\n[SUBTITLE] Prevalence in relation to incidence of MI [SUBSECTION] Women with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1\nWomen with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1", "The study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. 
including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.", "The proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.", "Women with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1", "Results from the present study showed that 53.3% of the plasma samples from a Swedish adult cohort of 1,532 subjects had the capacity to neutralize A. actinomycetemcomitans leukotoxin. It has recently been demonstrated that this leukotoxin-neutralizing capacity correlates with the presence of leukotoxin-specific antibodies [19]. The high prevalence of subjects with this leukotoxin-neutralizing capacity was not expected, however, in line with results from some previous studies [19,27]. Both of these studies were based on Swedish study populations from a similar age group as examined in the present study. In one of these reports the study population consisted of a total of 197 subjects from a case control study of myocardial infarction and matched healthy controls, and the prevalence of systemic leukotoxin-neutralizing capacity was 57% without significant differences between cases and controls [19,28]. 
The other study consisted of 50 subjects with periodontitis and 41 healthy referents, and in this population the prevalence of systemic leukotoxin-neutralizing capacity was 45% without significant differences between the two groups [27]. Another study showed lower prevalence (15.2%) of leukotoxin-neutralizing capacity [18]. In this study [18] a target cell line (HL-60) with lower sensitivity to leukotoxin than the PMNs was used in the neutralization assay [29], which resulted in a need for enhanced leukotoxin concentration to obtain cell lyses and subsequently more antibodies for neutralization. This difference in leukotoxin sensitivity makes this assay with PMNs more sensitive to detect leukotoxin neutralization than the previous used assay with HL-60 cells [15]. The proportion of samples (19.9%) with high leukotoxin-neutralizing capacity in the present study might be comparable with the results from Johansson et al., 2005 [18].\nWe have previously shown that the systemic leukotoxin-neutralizing capacity correlated to a decreased incidence of stroke in women [18]. The mechanisms behind this phenomenon are not known. We speculate that a virgin infection with A. actinomycetemcomitans in middle-aged and elderly subjects might be a risk factor for stroke and that the capacity to neutralize leukotoxin might be protective. The leukotoxin has been shown to induce a rapid proinflammatory reaction in human macrophages, already at a ratio of 1 bacterium/macrophage [13], and therefore the leukotoxin is a possible risk factor in atherosclerosis. The common etiology of both stroke and MI with inflammatory processes and atherosclerosis [14], indicates that the presence of systemic leukotoxin-neutralizing capacity also might interfere with the incidence of MI. The results of the present study showed that systemic presence of leukotoxin-neutralizing capacity did not affect the incidence of MI, except for women classified as low for this neutralizing capacity. Our main finding refutes the hypothesis that systemic immunoreactivity to leukotoxin has a protective effect against MI. The significant association found for women might be an effect of multiple testing or a type-1 error, but further studies on this finding are warranted to confirm this association.\nThe periodontal status of the analyzed subjects is not known, but it could be expected to be in line with a similar recently examined Swedish population [28]. In that population a correlation between MI and periodontitis was observed. In addition, both periodontitis and MI correlated to the presence of systemic immunoreactivity against Porphyromonas gingivalis but not against A. actinomycetemcomitans [19,28].\nThe proportion of subjects with systemic neutralizing capacity to leukotoxin increased with increasing age, significantly for men but not for women. This age-related increase is in line with previous findings [19] and indicates that a virgin infection with A. actinomycetemcomitans can take place late in life. A virgin A. actinomycetemcomitans infection might decrease the risk for stroke in middle-aged and elderly subjects without systemic leukotoxin-neutralizing capacity [18]. Even though the role of leukotoxin-neutralizing antibodies in the pathogenesis of periodontal disease is unknown [7], the antibodies might help mitigate the systemic effects of A. actinomycetemcomitans infections. The leukotoxin produced by A. actinomycetemcomitans is a unique virulence factor with the capacity to cause a rapid proinflammatory reaction [10]. 
However, to fully investigate a potential role of leukotoxin in the pathogenesis of stroke, the presence of systemic leukotoxin antibodies has to be analyzed both before and after the disease incidence. We therefore still look at the results that showed a negative correlation between systemic leukotoxin antibodies and stroke as preliminary [18].", "The results from the present study do not support the hypothesis that systemic leukotoxin-neutralizing capacity can decrease the risk for MI. In addition, the prevalence of systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity is high (53.3%) in adults from northern Sweden. The prevalence of leukotoxin-neutralizing capacity increased with increasing age, significantly for men but not for women.", "The authors declare that they have no competing interests.", "AJ conceptualized the study, conducted the analyses, and wrote the first draft of the manuscript. A-MÅ and AJ performed the analyses and made a first draft of the result calculations. J-HJ and KB planned and supervised data collection. GH and IJ coordinated data collection, and ME performed the statistical calculations. All approved the final version of the submitted manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/55/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study population", "Leukotoxin-neutralization assay", "Statistical analyses", "Results", "Prevalence of leukotoxin-neutralizing capacity", "Prevalence and age", "Prevalence in relation to incidence of MI", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "Chronic inflammations, such as periodontitis, are suggested to be risk factors for the development of cardiovascular diseases [1]. It has been suggested that the total pathogenic burden from the oral cavity, and possibly also from the gut, correlates with disease markers of atherosclerosis [2]. Periodontitis is a bacteria-induced inflammatory condition that causes degradation of the tooth-supporting tissues, bone and connective tissue [3,4]. Bioactive molecules released from pathogenic microorganisms located in the subgingival biofilm cause imbalance in the inflammatory response, which results in loss of the tooth-supporting tissues [5]. For the host to maintain homeostasis within the periodontal tissues, the immune response system contributes to controlling the microbial colonization and invasion [6]. This immune response includes local and systemic production of antibodies induced by antigens from the microorganisms that are localized in this biofilm [7]. There are more than 700 different microbial species found in the oral cavity of humans [8]. A recent report using the pyrosequencing methodology to analyze the composition of the oral microbiota indicate a substantial increase in that number [9]. Among the different species found, Aggregatibacter actinomycetemcomitans is a bacterium associated with aggressive forms of periodontitis, and it produces a leukotoxin that specifically affects human leukocytes [10]. Individuals infected with a specific, highly leukotoxic clone (JP2) of this bacterium have a significantly increased risk for periodontitis [11]. The proinflammatory response induced by the leukotoxin is a cellular response associated with the pathogenesis of periodontitis [10,12,13] and atherosclerosis [14].\nThe proportion in a population that harbor A. actinomycetemcomitans varies depending on geographic origin and periodontal condition of the subjects [10]. It has been shown that systemic leukotoxin-neutralization is correlated to the presence of this bacterium in the oral subgingival biofilm [15-17]. Data from a previous study showed that women with systemic neutralizing capacity against A. actinomycetemcomitans' leukotoxin had a significantly decreased incidence of stroke [18]. This systemic neutralizing capacity has been shown to correlate (p-value < 0.001) to the presence of leukotoxin-specific antibodies, as well as to antibodies against whole A. actinomyctemcomitans bacteria [19]. We hypothesized that a virgin A. actinomycetemcomitans infection late in life might be a risk factor for stroke and contribute to the negative association between stroke and the presence of these neutralizing antibodies. The aim of the present study was to analyze if the presence of systemic immunoreactivity to A. actinomycetemcomitans leukotoxin also interferes with the incidence of a future myocardial infarction (MI).", "[SUBTITLE] Study population [SUBSECTION] The study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. 
In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS.\nIncident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event.\nFatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent.\nThe study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS.\nIncident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event.\nFatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. 
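To make the matched sampling concrete, the sketch below (Python/pandas) shows one way such 1:2 individual matching on sex, age and baseline date could be performed. The file name, column names and the exact-age/±4-month criteria are assumptions made for illustration only, not the study's actual matching procedure, which is described in the surrounding text and in reference [24].

```python
# Illustrative sketch only (hypothetical file and column names): for each incident
# MI case, draw two controls individually matched on sex, age and a baseline
# examination date within roughly +/- 4 months, echoing the design described above.
# Sparse strata (fewer than two eligible controls) are not handled in this sketch.
import pandas as pd

cohort = pd.read_csv("nshds_baseline.csv", parse_dates=["baseline_date"])
cases = cohort[cohort["incident_mi"] == 1]
pool = cohort[cohort["incident_mi"] == 0].copy()

matched_sets = []
for set_id, case in enumerate(cases.itertuples()):
    date_diff = (pool["baseline_date"] - case.baseline_date).abs()
    eligible = pool[
        (pool["sex"] == case.sex)
        & (pool["age"] == case.age)
        & (date_diff <= pd.Timedelta(days=122))  # roughly +/- 4 months
    ]
    controls = eligible.sample(n=2, random_state=set_id)  # 1:2 matching
    pool = pool.drop(controls.index)                      # each control used once
    matched_sets.append({
        "matched_set": set_id,
        "case_id": case.participant_id,
        "control_ids": list(controls["participant_id"]),
    })

print(len(matched_sets), "matched sets formed")
```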
Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent.\n[SUBTITLE] Leukotoxin-neutralization assay [SUBSECTION] The A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population.\nBriefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 106 cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019).\nFor detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96 well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. Plasma without leukotoxin-neutralization capacity was classified as \"null\", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as \"low\", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as \"high\".\nThe A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population.\nBriefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 106 cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. 
Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019).\nFor detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96 well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. Plasma without leukotoxin-neutralization capacity was classified as \"null\", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as \"low\", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as \"high\".\n[SUBTITLE] Statistical analyses [SUBSECTION] The Mantel-Haenszel χ2-test for trend was used to analyze the association between age and antibodies. To investigate if the presence of leukotoxin antibodies (categorized into null, low or high) affects the risk of having an MI, conditional logistic regression, appropriate for the matched design, was used. A multivariable model was used to adjust for smoking, self-reported diabetes, systolic blood pressure and apoB/apoA1. Results are presented as p-values, odds ratios (ORs) and corresponding 95% confidence intervals (CIs). No correction for multiple testing was performed. SAS version 9.1 was used for the statistical analyses.", "The study population was derived from the Northern Sweden Health and Disease Study (NSHDS), which consists of three sub-cohorts: The Västerbotten Intervention Programme (VIP) [20], the WHO's Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study in northern Sweden [21] and the Mammography Screening Project (MSP) [22]. Both VIP and MONICA are health examination programmes for CVD and diabetes. Participation rates were 59 and 77%, respectively. The VIP was designed to be as similar as possible to the MONICA study. In order to increase the number of female cases, participants in the MSP were included in sex specific analyses. The participation rate in the MSP was 85% in the screening phase, of which 57% donated blood samples. 
By December 31, 1999 approximately 73,000 unique subjects had been screened in these 3 sub-cohorts in NSHDS.\nIncident cases and matching controls were identified through 13 years of follow-up (1985-1999) from the Västerbotten Intervention Program and the Multinational Monitoring of Trends and Determinants in Cardiovascular Disease (MONICA) study. Study participants were from the Västerbotten and Norrbotten counties in northern Sweden. Participants with a history of MI, stroke or cancer were excluded from this study. Participants were followed from baseline examination until first MI or death. There was an average time period of 4 years between the time of inclusion and the MI event.\nFatal and nonfatal cases of MI occurring from October 1, 1994 to December 31, 1999 were identified through the Northern Sweden Monica Incidence Registry [23]. Throughout the follow-up period, 532 incident first events of MI (cases) were identified. For each case, two controls were individually matched for age, sex and ± 4 months of case occurrence. Thus, the study population consisted of 1,532 participants, 532 cases (382 men and 156 women) and 1,000 sex and age-matched controls (706 men and 294 women), aged 30-77 years at baseline. This study population has previously been described in detail [24]. The study was approved by the Ethics Committee of Umeå University and was conducted in accordance with the Helsinki Declaration. All participants gave informed consent.", "The A. actinomycetemcomitans leukotoxin-neutralizing capacity in the plasma samples was detected as a reduction of leukocyte damage and subsequent leakage of lactate dehydrogenase (LDH) upon exposure to purified leukotoxin, as described previously [18]. This assay quantifies the activity of the LDH enzyme and does not allow freezing and thawing of the supernatants. The neutralization assay also limits the possible influence from LDH present in the different plasma samples from the study population.\nBriefly, human polymorphonuclear leukocytes (PMNs) were isolated from human peripheral blood as described previously [25]. The isolated PMNs were suspended in RPMI 1640 (Sigma-Aldrich, St Louis, MI, USA) with 10% fetal bovine serum (FBS) (Sigma-Aldrich) at a density of 3 × 106 cells/ml. The blood was taken from donors visiting the University Hospital blood bank in Umeå, Sweden. Informed written approval was given by all subjects, and authorization for the study was granted by the Human Studies Ethical Committee of Umeå University, Sweden (§67/3, dnr 03-019).\nFor detection of leukotoxin-neutralizing capacity, purified leukotoxin (450 ng/mL) [26] was mixed with each plasma sample (10%) in RPMI 1640. One hundred μl portions of the plasma-leukotoxin mixtures were added in duplicate to a 96 well culture plate (Nunc, Roskilde, Denmark) and incubated for 15 min at room temperature. Then 50 μl of PMN was added and the mixtures were incubated for 60 min at 37°C in 5% CO2. Activity of the released LDH into the culture supernatant was quantified as described previously [25]. Plasma samples that inhibited the leukotoxin-induced LDH release by ≥50% were classified as positive and were further analyzed in the assay diluted to 1% of the final volume. 
Plasma without leukotoxin-neutralization capacity was classified as \"null\", plasma that neutralized leukotoxin at 10% plasma concentration but not at 1% was classified as \"low\", and plasma that neutralized the leukotoxin at both 10% and 1% concentrations was classified as \"high\".", "The Mantel-Haenszel χ2-test for trend was used to analyze the association between age and antibodies. To investigate if the presence of leukotoxin antibodies (categorized into null, low or high) affects the risk of having an MI, conditional logistic regression appropriate for the matched design, was used. A multivariable model was used to adjust for smoking, self-reported diabetes, systolic blood pressure and apoB/apoA1. Results are presented as p-values, odds ratios (ORs) and corresponding 95% confidence intervals (CIs). No correction for multiple testing was performed. SAS version 9.1 was used for the statistical analyses.", "[SUBTITLE] Prevalence of leukotoxin-neutralizing capacity [SUBSECTION] The study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.\nThe study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.\n[SUBTITLE] Prevalence and age [SUBSECTION] The proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). 
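As a rough illustration of the dilution-based scoring described in the Methods above, the following Python sketch classifies a plasma sample as "null", "low" or "high" from its percentage inhibition of leukotoxin-induced LDH release at the 10% and 1% plasma concentrations. The ≥50% cut-off and the two dilution steps come from the text; the data layout and example values are hypothetical.

```python
# Illustrative sketch only: classify plasma samples into "null", "low" or "high"
# leukotoxin-neutralizing capacity following the rule described above
# (>= 50% inhibition of leukotoxin-induced LDH release at 10% plasma -> positive;
#  positive samples re-assayed at 1% plasma; positive at both dilutions -> "high").
# Sample records and field names are hypothetical.

NEUTRALIZING_THRESHOLD = 50.0  # percent inhibition of LDH release

def classify_sample(inhibition_at_10pct, inhibition_at_1pct=None):
    """Return 'null', 'low' or 'high' for one plasma sample."""
    if inhibition_at_10pct < NEUTRALIZING_THRESHOLD:
        return "null"
    # Sample was positive at 10% plasma, so it was re-assayed at 1% plasma.
    if inhibition_at_1pct is not None and inhibition_at_1pct >= NEUTRALIZING_THRESHOLD:
        return "high"
    return "low"

samples = [
    {"id": "P001", "inhib_10pct": 12.0, "inhib_1pct": None},  # never neutralized
    {"id": "P002", "inhib_10pct": 71.5, "inhib_1pct": 18.0},  # lost at higher dilution
    {"id": "P003", "inhib_10pct": 88.2, "inhib_1pct": 64.3},  # neutralized at both
]

for s in samples:
    s["capacity"] = classify_sample(s["inhib_10pct"], s["inhib_1pct"])
    print(s["id"], s["capacity"])
```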
In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.\nThe proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.\n[SUBTITLE] Prevalence in relation to incidence of MI [SUBSECTION] Women with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1\nWomen with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1", "The study population was classified into 4 different age groups, and the distribution in relation to age and gender is shown in Figure 1.\nDistribution of men and women in the age groups for the whole study cohort, i.e. 
including both cases and referents.\nAmong the 1,532 analyzed plasma samples, 817 (53.3%) could neutralize A. actinomycetemcomitans leukotoxicity in the neutralization assay. Further dilution of the plasma samples resulted in loss of the capacity to neutralize leukotoxin in 526 of these samples, and they were classified as samples with low neutralizing capacity. The 291 samples that neutralized leukotoxin also at the higher dilution were classified as high. The distribution of the study population in relation to their systemic capacity to neutralize leukotoxin was 46.7% negative, 34.3% low and 19.9% high. There were no significant differences between men and woman in capacity to neutralize the leukotoxin (Figure 2).\nProportion of men and women with different systemic capacity to neutralize A. actinomycetemcomitans leukotoxin.", "The proportion of subjects with capacity to neutralize leukotoxin increased with increasing age (Figure 3). This age-related increase was significant for men (p-values ≤ 0.001) but not for women (p-values = 0.170). In order to avoid combinations with no or few observations, the two lowest age groups (29-44 and 45-54) were merged for women, and the two highest age groups (55-64 and 65-77) were merged for men in the formal analysis.\nProportion of men and women (low + high) with systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity in relation to age. The distributions of men and women in the different groups were: 29-44 yr, 179 men and 21 women; 45-54 yr, 397 men and 114 women; 55-64 yr, 503 men and 221 women; and 65-77 yr, 6 men and 94 women.", "Women with low capacity to neutralize leukotoxin had a significantly (p-value = 0.031) higher incidence of MI than women without the capacity to neutralize leukotoxin (Table 1). The odds ratio (OR) of having an MI in this group was 1. 8 (95% CI: 1.13-2.8). No other significant differences were seen between the incidence of MI and the different groups classified in relation to systemic leukotoxin neutralization and gender. After adjustments for other known risk-factors for MI (smoking, diabetes, systolic blood pressure and ApoB/ApoA1) there were no significant differences (Table 1).\nProportion of controls and cases with plasma that neutralized leukotoxic activity.\nSignificant differences between controls and cases (*), number of subjects (n) and the odds ratio (OR) of having an MI compared with the antibody-negative group (null) are indicated.\n#) Adjusted for smoking, self-reported diabetes, systolic blood pressure and ApoB/ApoA1", "Results from the present study showed that 53.3% of the plasma samples from a Swedish adult cohort of 1,532 subjects had the capacity to neutralize A. actinomycetemcomitans leukotoxin. It has recently been demonstrated that this leukotoxin-neutralizing capacity correlates with the presence of leukotoxin-specific antibodies [19]. The high prevalence of subjects with this leukotoxin-neutralizing capacity was not expected, however, in line with results from some previous studies [19,27]. Both of these studies were based on Swedish study populations from a similar age group as examined in the present study. In one of these reports the study population consisted of a total of 197 subjects from a case control study of myocardial infarction and matched healthy controls, and the prevalence of systemic leukotoxin-neutralizing capacity was 57% without significant differences between cases and controls [19,28]. 
The other study consisted of 50 subjects with periodontitis and 41 healthy referents, and in this population the prevalence of systemic leukotoxin-neutralizing capacity was 45% without significant differences between the two groups [27]. Another study showed lower prevalence (15.2%) of leukotoxin-neutralizing capacity [18]. In this study [18] a target cell line (HL-60) with lower sensitivity to leukotoxin than the PMNs was used in the neutralization assay [29], which resulted in a need for enhanced leukotoxin concentration to obtain cell lyses and subsequently more antibodies for neutralization. This difference in leukotoxin sensitivity makes this assay with PMNs more sensitive to detect leukotoxin neutralization than the previous used assay with HL-60 cells [15]. The proportion of samples (19.9%) with high leukotoxin-neutralizing capacity in the present study might be comparable with the results from Johansson et al., 2005 [18].\nWe have previously shown that the systemic leukotoxin-neutralizing capacity correlated to a decreased incidence of stroke in women [18]. The mechanisms behind this phenomenon are not known. We speculate that a virgin infection with A. actinomycetemcomitans in middle-aged and elderly subjects might be a risk factor for stroke and that the capacity to neutralize leukotoxin might be protective. The leukotoxin has been shown to induce a rapid proinflammatory reaction in human macrophages, already at a ratio of 1 bacterium/macrophage [13], and therefore the leukotoxin is a possible risk factor in atherosclerosis. The common etiology of both stroke and MI with inflammatory processes and atherosclerosis [14], indicates that the presence of systemic leukotoxin-neutralizing capacity also might interfere with the incidence of MI. The results of the present study showed that systemic presence of leukotoxin-neutralizing capacity did not affect the incidence of MI, except for women classified as low for this neutralizing capacity. Our main finding refutes the hypothesis that systemic immunoreactivity to leukotoxin has a protective effect against MI. The significant association found for women might be an effect of multiple testing or a type-1 error, but further studies on this finding are warranted to confirm this association.\nThe periodontal status of the analyzed subjects is not known, but it could be expected to be in line with a similar recently examined Swedish population [28]. In that population a correlation between MI and periodontitis was observed. In addition, both periodontitis and MI correlated to the presence of systemic immunoreactivity against Porphyromonas gingivalis but not against A. actinomycetemcomitans [19,28].\nThe proportion of subjects with systemic neutralizing capacity to leukotoxin increased with increasing age, significantly for men but not for women. This age-related increase is in line with previous findings [19] and indicates that a virgin infection with A. actinomycetemcomitans can take place late in life. A virgin A. actinomycetemcomitans infection might decrease the risk for stroke in middle-aged and elderly subjects without systemic leukotoxin-neutralizing capacity [18]. Even though the role of leukotoxin-neutralizing antibodies in the pathogenesis of periodontal disease is unknown [7], the antibodies might help mitigate the systemic effects of A. actinomycetemcomitans infections. The leukotoxin produced by A. actinomycetemcomitans is a unique virulence factor with the capacity to cause a rapid proinflammatory reaction [10]. 
However, to fully investigate a potential role of leukotoxin in the pathogenesis of stroke, the presence of systemic leukotoxin antibodies has to be analyzed both before and after the disease incidence. We therefore still look at the results that showed a negative correlation between systemic leukotoxin antibodies and stroke as preliminary [18].", "The results from the present study do not support the hypothesis that systemic leukotoxin-neutralizing capacity can decrease the risk for MI. In addition, the prevalence of systemic A. actinomycetemcomitans leukotoxin-neutralizing capacity is high (53.3%) in adults from northern Sweden. The prevalence of leukotoxin-neutralizing capacity increased with increasing age, significantly for men but not for women.", "The authors declare that they have no competing interests.", "AJ conceptualized the study, conducted the analyses, and wrote the first draft of the manuscript. A-MÅ and AJ performed the analyses and made a first draft of the result calculations. J-HJ and KB planned and supervised data collection. GH and IJ coordinated data collection, and ME performed the statistical calculations. All approved the final version of the submitted manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/55/prepub\n" ]
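The matched analysis described under "Statistical analyses" above (conditional logistic regression with the antibody-negative group as reference, then adjustment for smoking, diabetes, systolic blood pressure and ApoB/ApoA1) was run in SAS 9.1. A minimal Python sketch of an equivalent analysis is given below, assuming statsmodels' ConditionalLogit is available; the data frame, column names and matched-set identifier are hypothetical and not taken from the study.

```python
# Minimal sketch (assumed data layout): conditional logistic regression for a
# 1:2 matched case-control design, as in the matched analysis described above.
# Each matched set (1 case + 2 controls) shares one value of "matched_set";
# "mi" is 1 for cases and 0 for controls. All column names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

df = pd.read_csv("matched_cohort.csv")  # hypothetical file

# Indicator variables for low and high neutralizing capacity ("null" is reference).
df["ltx_low"] = (df["ltx_capacity"] == "low").astype(int)
df["ltx_high"] = (df["ltx_capacity"] == "high").astype(int)

# Crude model: exposure indicators only, conditioning on matched set.
crude = ConditionalLogit(
    df["mi"], df[["ltx_low", "ltx_high"]], groups=df["matched_set"]
).fit()

# Adjusted model: add smoking, diabetes, systolic blood pressure and ApoB/ApoA1.
covars = ["ltx_low", "ltx_high", "smoker", "diabetes", "sbp", "apob_apoa1"]
adjusted = ConditionalLogit(df["mi"], df[covars], groups=df["matched_set"]).fit()

# Odds ratios and 95% confidence intervals (exponentiated coefficients).
for name, res in [("crude", crude), ("adjusted", adjusted)]:
    or_table = pd.concat([np.exp(res.params), np.exp(res.conf_int())], axis=1)
    or_table.columns = ["OR", "2.5%", "97.5%"]
    print(name)
    print(or_table)
```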
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Lessons learnt from comprehensive evaluation of community-based education in Uganda: a proposal for an ideal model community-based education for health professional training institutions.
21362181
Community-based education (CBE) can provide contextual learning that addresses manpower scarcity by enabling trainees acquire requisite experiences, competence, confidence and values. In Uganda, many health professional training institutions conduct some form of community-based education (CBE). However, there is scanty information on the nature of the training: whether a curriculum exists (objectives, intended outcomes, content, implementation strategy), administration and constraints faced. The objective was to make a comprehensive assessment of CBE as implemented by Ugandan health professional training institutions to document the nature of CBE conducted and propose an ideal model with minimum requirements for health professional training institutions in Uganda.
BACKGROUND
We employed several methods: documentary review of curricula of 22 institutions, so as to assess the nature, purpose, outcomes, and methods of instruction and assessment; site visits to these institutions and their CBE sites, to assess the learning environment (infrastructure and resources); in-depth interviews with key people involved in running CBE at the institutions and community, to evaluate CBE implementation, challenges experienced and perceived solutions.
METHODS
CBE was perceived differently ranging from a subject, a course, a program or a project. Despite having similar curricula, institutions differ in the administration, implementation and assessment of CBE. Objectives of CBE, the curricula content and implementation strategies differ in similar institutions. On collaborative and social learning, most trainees do not reside in the community, though they work on group projects and write group reports. Lectures and skills demonstrations were the main instruction methods. Assessment involved mainly continuous assessment, oral or written reports and summative examination.
RESULTS
This assessment identified deficiencies in the design and implementation of CBE at several health professional training institutions, with major flaws identified in curriculum content, supervision of trainees, inappropriate assessment, trainee welfare, and underutilization of opportunities for contextual and collaborative learning. Since CBE showed potential to benefit the trainees, community and institutions, we propose a model that delivers a minimum package of CBE and overcomes the wide variation in the concept, conduct and implementation of CBE.
CONCLUSION
[ "Clinical Competence", "Curriculum", "Evaluation Studies as Topic", "Health Knowledge, Attitudes, Practice", "Health Personnel", "Humans", "Learning", "Models, Educational", "Needs Assessment", "Residence Characteristics", "Teaching", "Uganda" ]
3056836
null
null
Methods
Through the twinning grant with Johns Hopkins University, Makerere University conducted a comprehensive assessment of CBE implemented by 22 health professional training institutions in Uganda, from October 2009 through May 2010. This assessment involved documentary review of available curricula and other documents related to CBE at 22 health professional training institutions, so as to assess the nature, purpose, intended outcomes, instruction methods and methods of assessment. Site visits to these institutions and two of their CBE sites was also conducted to assess the available infrastructure and learning resources available at these sites. In analyzing whether learning occurs during CBE, we assessed the following: i) whether learning is competence-based or outcome-based (that is, students are able to demonstrate certain competences, and focusing or organizing the learning in the educational system around what is essential for all students to be able to demonstrate successfully at the end of their learning experiences, for instance, critical reflection, communication skills and clinical skills) ii) whether learning during CBE is problem-based (whether real-life situations related to patient care are used to stimulate learning); iii) whether learning during CBE is contextual (reality-based, outside-of-the-classroom experiences, which serve as a catalyst for students to utilize their disciplinary knowledge, are employed in real-life settings similar to where trainees will eventually work, for instance health centres or the community); iv) whether CBE promotes collaborative learning (whether learning occurs in situations in which two or more students learn, or attempt to learn, something together, as pairs, small groups or whole class), and perform learning activities such as problem-solving using individual effort, joint effort or external effort; v) whether reflection (critical appraisal of incidents and experiences) occurs; vi) whether CBE promotes lifelong learning (stimulates trainees to keep up-to-date with current clinical management or professional care guidelines). vii) Since students contribute to service during community placements, we assessed whether such service contributes to trainees' learning (that is, whether service improves critical reflection whereby critical incidents are noted, appraised, analyzed and used to develop individual or group learning plans). viii) Lastly, in-depth and key-informant interviews were conducted with key people involved in running CBE at the institutions and CBE sites to assess how CBE was implemented, challenges experienced and perceived solutions. Interviews with alumni of these institutions and community members at the CBE sites offered insight into the impact of CBE, challenges in CBE implementation and how these challenges may be addressed. Content analysis [17] was applied to analysis of these interviews: familiarization; identifying a thematic framework; indexing; mapping and interpretation. The data was then clearly coded and the text was indexed by using descriptors alongside various passages in the transcriptions. Ethics and research committees of Makerere University College of Health Sciences and Johns Hopkins School of Public Health approved the research. Approval to conduct the research was given by the administrators of all the health institutions that were assessed.
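The indexing step of the content analysis described above was done manually by the researchers. Purely as an illustration of the idea, the sketch below tags transcript passages with descriptors from a thematic framework using simple keyword matching; the framework, keywords and example passages are hypothetical and much cruder than the qualitative coding actually performed.

```python
# Very simplified illustration of the "indexing" step of a framework-based
# content analysis: each transcript passage is tagged with descriptors from a
# thematic framework. Framework, keywords and passages are hypothetical; in the
# study itself, coding and indexing were done manually by the researchers.

# Thematic framework: descriptor -> indicative keywords (all hypothetical).
FRAMEWORK = {
    "curriculum_content": ["curriculum", "objectives", "competence", "content"],
    "supervision": ["supervisor", "supervision", "preceptor"],
    "student_welfare": ["accommodation", "transport", "allowance", "meals"],
    "community_benefit": ["outreach", "service", "community diagnosis"],
}

def index_passage(passage):
    """Return the list of descriptors whose keywords appear in a passage."""
    text = passage.lower()
    return [code for code, keywords in FRAMEWORK.items()
            if any(k in text for k in keywords)]

# Example passages (invented for illustration).
passages = [
    "Students lacked accommodation and transport to the community sites.",
    "The curriculum objectives for CBE were unclear to most tutors.",
]

for p in passages:
    print(index_passage(p), "-", p)
```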
null
null
null
null
[ "Background", "Results", "Discussion", "Limitations", "Conclusion", "Competing interests", "Authors' contributions", "Appendix", "Proposal for a model CBE curriculum for training health professionals", "Steps in developing and implementing the ideal CBE curriculum", "Perceived Ideal objectives of CBE for health professionals", "The curriculum components", "Competences of the trainee", "Curriculum content related to CBE", "Roles of tutors, supervisors and faculty", "Pre-publication history" ]
[ "Community-based education (CBE) has several definitions [1,2], but the core definition refers to learning that takes place in a setting external to the higher education institution. In the context of health professional education, CBE refers to instruction whereby trainees learn and acquire professional competencies in community settings. Such settings include general practices, communities, community health centres or rural hospitals [2] with the focus being learning about health services in the community, methods of health promotion, as well as social and economic aspects of illness [2]. Community-oriented programmes have several goals: to create more appropriate knowledge, skills and attitudes; to deepen trainees' understanding of health and illness; to enable trainees understand the health and social services; to promote interpersonal skills and multidisciplinary teamwork; and to deepen trainees' understanding of the contribution of social and environmental factors to causation and prevention of ill-health [2].\nCBE goes beyond cognitive capacities and encompasses the social and emotional aspects of learning, increasing trainee' experience, confidence and competence [2-4] and improving awareness of community values and lifestyles of health workers in rural areas [5,6]. CBE increases trainees' interest in uptake of careers in rural practice [5]. Provision of support for community site tutors and faculty improves the quality of medical students' learning experiences during rural rotations [6]. Indeed, such exposure to the communities through community placements during CBE shapes trainees' values and perceptions of rural practice, eventually promoting ethics, professionalism and health professionals uptake of rural practice [2,7,8].\nIn Uganda, many health professional training institutions conduct some form of CBE. However, there is scanty information on the nature of the training: whether a curriculum exists, the objectives related to CBE, intended outcomes, curriculum content. There is also scanty information on the implementation of the CBE curriculum (training sites, instruction methods, activities conducted by trainees, roles played by training site supervisors and institution faculty); administration of CBE (financial, human and material resources involved); and challenges faced by the implementation of CBE. Most reports of community-based programmes have been conducted in developed countries, have been largely descriptive, and rarely have assessed outcomes such as learning and rural deployment [9-11]. Others have only analyzed a specific component of CBE: limited to trainees' experience of the training during community placement [12], students' evaluation of the teaching and learning environment [13], specific contexts such as medical emergencies [14], evaluation of CBE conducted by a single academic institution [15], or evaluation of the role of community preceptors [16]. No similar studies have been documented on CBE for health professionals in Uganda. Specifically, no studies have documented the effectiveness of CBE in ensuring contextual learning for health professional trainees. 
Though medical schools and other health professional training institutions adopted CBE as a strategy for medical education, and community placements as one of the training settings, there is very little published research that explores the actual training that is conducted, the resources that are needed to sustain training, or the learning activities undertaken by students in different environments in Uganda.\nOur aim was to answer the following questions: What is the nature of CBE conducted? Does CBE promote learning? What challenges do institutions face in implementing CBE and what are potential solutions? The objective of the comprehensive assessment of CBE conducted in health professional training institutions in Uganda was two-fold: 1) To analyze the nature of CBE conducted by different institutions as well as challenges involved; 2) To analyze effectiveness of CBE in promoting learning and acquisition of competences for cadres at different levels (certificate, diploma, advanced diploma and degree). As a spin-off of this comprehensive evaluation, we wish to utilize lessons to propose an ideal model with minimum requirements for health professional training institutions in Uganda. Documentation of this information as well as potential solutions to identified challenges in running CBE programs might not only improve the training of health professionals but also assist in the long-term goal of improving the recruitment, deployment or retention of health workers to rural areas.", "Table 1 shows the findings from review of the curriculum documents, the implementation of the curriculum, the instruction methods used and evaluation of collaborative learning. Some of the institutions had no curriculum or the curricula documents were deficient with regard to CBE. Even where CBE was indicated in the curricula, there was wide variation in its perception, conduct and implementation. Activities conducted, instruction methods used and learning environment (community sites used to provide opportunity for learning) varied greatly in similar institutions. Lectures, assignments and skills demonstrations were the main methods of instruction.\nAssessment of CBE at the 22 institutions evaluated\nTable 2 shows the activities which students are involved in to foster learning. Self-directed learning and group discussions were the main activities which trainees used to enhance learning. The findings show that considering the variety of learning sites to which trainees are exposed, CBE had the potential to benefit trainees with experiential and contextual learning, while providing a service to the CBE sites as well as the community. However, less than half of the institutions had a research component as part of the activities students are involved in during CBE, though trainees had prior skills and resources for conducting research. This indicates a missed opportunity for CBE programs.\nLearning, research and assessment of learning\nTable 3 shows the overall social environment in the community and health facilities. The evaluation shows that there is wide variation in available resources to support CBE, which were shared by different institutions despite their being inadequate. Most deficits were in accommodation facilities, welfare, sundries and supplies. Transport to and within the community sites was a problem. 
Our findings demonstrate the challenges that need to be addressed (especially financial, supervisory and administrative constraints) and the missed opportunities to be rectified if the CBE learning environment is to be enhanced.\nFacilities at sites to facilitate CBE activities\nThough most interviewees felt the concept of CBE was right, there were many challenges in its conduct, which hinder adequate preparation and implementation. Some of the beneficial aspects of CBE identified were early contact with the community, improved team work in the trainees who leave, work and learn in small groups, improved interpersonal relationships and improved communication skills. That the CBE was outcome-based is indicated by some curricula having clear objectives, learning outcomes, desired competencies and clear implementation strategy. Collaborative learning is indicated by the individual, joint and external collaborations by students to promote learning. CBE offered opportunities for improved clinical, leadership, self-directed learning and research skills. As an outcome of this comprehensive assessment, we identified a critical need to propose an ideal CBE curriculum whose content and implementation would provide a minimum package of CBE. Indeed, such an ideal model might make available human resources with competence to address Uganda's current and evolving priority health problems.\nWe found that though many health professional training institutions in Uganda conduct some form of CBE, the content of the curricula and the extent to which they are implemented is variable. For institutions without a written curriculum, we found no clear explanation why this was so from interviewing the faculty. The tutors, faculty and community members interviewed were in agreement that CBE has the potential to benefit trainees (through providing a real-life training environment where the graduates may eventually work, such in health centres, rural hospitals or the community); the community (whose members benefit from the services of the trainees during the community placements and outreach activities); the institutions (which get a better training context for their students); and the staff at the community training sites (who gain from update of knowledge and skills from interaction and collaboration with staff from the training institutions). CBE also offered opportunities to conduct research on conditions that affect the community (through community diagnosis). Different descriptions were given to CBE in the different institutions, consequently, CBE was administered the differently depending on how it was defined and the resources available. The key personnel ranged from individuals (acting as coordinators), a committee of persons, or an administrative office (usually under the dean, academic registrar or principal tutor). Factors considered before CBE site selection and preparation included proximity, available infrastructure, uniqueness of the site in terms of services offered (such as maternal and child health and mental health), receptivity of health facility staff and community leadership, ability of the potential site to cater for trainee's welfare, and availability of willing personnel at health facilities to supervise trainees. For the latter, 18 (70%) of the institutions had neither a formal criteria for selection of community supervisors nor formal training of these people.", "Our findings evaluated the potential effectiveness of CBE in enhancing training of health professions. 
The methods of evaluation used enable evaluation of the ''curriculum on paper'' (what is written about the curriculum in documents and what the implementers understand about curriculum goals and objectives), the ''curriculum in action'' (how the curriculum is implemented) and the ''curriculum as experienced''(what students actually do, how they study and outcomes of the learning) [15]. Our findings therefore indicate that CBE provides contextual learning for most of the institutions (reality-based, outside-of-the-classroom experience, such as in the health centres, the community, schools or at the district health office as well as working with NGOs at community level). Contextual learning provides learning experiences in contexts in which trainees are interested and motivated and therefore achieve more [18-20]. The learning process of CBE includes the learner's ability to gain and utilize acquired knowledge as well as solve problems, while developing collaborative skills, innovation skills, communication skills, critical reflection, teamwork and interpersonal relationships [2,18,20].\nIn our assessment, there were wide variations in the curriculum content even for public institutions that offer the same academic award. It was also unclear to some institutions how the curriculum was developed, implemented or evaluated. Developing a generic CBE curriculum should follow a systematic approach that is capable of integrating content area with educational theory and methodology, and indicate evaluation of the impact of this process [21-24]. When completed, the CBE needs assessment should make a strong argument for the need for the curriculum, identifying previously developed or validated methods.\nSome of the CBE curricula documents were unclear about the learning outcomes or desired competences, so it was unclear if the actualized curriculum included them. From the interviews, this was still unclear as some tutors were not conversant with the curriculum content or implementation. Regarding content and outcomes of the ideal CBE curriculum, the professional competences should focus on theoretical and applied knowledge associated with practice, emphasizing values, skills and critical appraisal. This is essential as training and practice/service often occur in situations characterized by complexity, uniqueness, uncertainty and conflicting values [23]. In contrast to the curriculum on paper, the curriculum in action is how the intended curriculum is implemented in practice [25], while the actualised curriculum refers to what students actually do, how they study, what they believe they should be doing, the learning that occurs and the outcome of their learning [25].\nWhile pre-placement orientation is essential for success of CBE, it was unclear whether all institutions conducted it. Most of the CBE curricula emphasized knowledge and professional skills. Whereas specialized knowledge and skills are clearly essential for practice, self consciousness (reflection) and continual self critique (critical reflection) are crucial to acquisition of competences [23]. Reflective and critically reflective practice, communication skills and interpersonal relations are vital professional skills for effective and efficient professional practice [24].\nFew of the institutions employed Problem-based learning (PBL) as an instruction method. Regarding implementation strategies, PBL is ideal for CBE as it effectively brings out the desired competences of the ideal CBE curriculum. 
Though the exact format of PBL varies considerably, two key features are consistent: an emphasis on learner-focused exploration of case-based problems and the use of case histories to help students identify learning issues that become the focus of individual or group problem-solving. During CBE, trainees should be exposed to professionally meaningful situations or problems that bear a strong resemblance to the situations they will be confronted with in their future profession, which enables acquisition of knowledge, clinical skills and problem-solving skills. Development of critical reflection and critical thinking as professional skills during CBE requires adoption of PBL as the ideal model of instruction [22-24]. Indeed, during CBE, individual trainees interpret their varied experiences in their own way and from their own understanding, based on what happens to them and what they see, hear and read [26]. CBE provides a holistic picture of health and health systems in the community.\nThe institutions evaluated conducted assessment of CBE using different and diverse methods, with variations in reliability. For CBE, there is a challenge of designing assessment such that it adequately reflects the desired competences, the instruction methods and the eventual professional role, while retaining reliability and validity across different settings and assessors [27,28]. During CBE, students are taught in diverse contexts by diverse groups of tutors with varied competences and skills as educators. In the curricula, institutions should be clear about what they need to assess, why, and who will do the assessment [2]. Assessment strategies need to be developed that enhance and support learning as well as reliably measure performance, with evidence of learning having taken place [29,30]. While both formative and summative assessment are essential, assessment of CBE requires the development of portfolios by the trainees [29]. Portfolios facilitate evaluation of integrated and complex abilities, taking into account the learning context [29]. Indeed, portfolios personalize the assessment process, incorporating important educational values, while supporting the important principle that assessment drives learning. Other suitable methods include: peer evaluation, which may ably assess individual student effort, interaction with the community, leadership, and knowledge of the subject matter; supervisors' checklists; feedback from community leaders; and individual and group reports.\n[SUBTITLE] Limitations [SUBSECTION] We acknowledge several limitations. Firstly, the institutions were purposively sampled in order to try to achieve representativeness of rural versus urban institutions, private versus public, the different cadres trained and the different academic levels of training. There is no guarantee that the institutions that were not sampled have similar constraints or challenges. However, all the government-supported institutions are supposed to follow and implement a similar curriculum. Secondly, the assessment was conducted regionally by four different teams. Since the teams piloted the instruments, there may have been slight variations (across the four teams) in the way interviews were conducted, checklists completed or documents reviewed. Thirdly, no direct observation of the conduct of CBE was performed, yet this should be done in an ideal CBE assessment [31,32]. However, the methods employed (documentary review, checklists and interviews) are reliable [31] and have been used by others [15]. Lastly, the ideal curriculum would require validation of its objectives and content [31]. This step was conducted at a stakeholder meeting of the Ministry of Health, Ministry of Local Government, all the training institutions and community representatives.", "This assessment identified deficiencies in the design and implementation of CBE at several health professional training institutions, with major flaws identified in curriculum content, supervision of trainees, inappropriate assessment, trainee welfare, and underutilization of opportunities for contextual and collaborative learning. Despite similar goals, there is wide variation in the concept, conduct and implementation of CBE conducted by different health institutions in Uganda, even in the presence of similar curricula.
CBE, if well implemented and assessed, has the potential to ensure contextual learning in medical education.", "All authors (except GB, LWC and SG, who are from Johns Hopkins University) are faculty of Makerere University College of Health Sciences, where community-based education is part of the curricula for undergraduate training of health professionals, and all community sites assessed are used by the university. GB, LWC and SG are team members of the project: Partnership for building the capacity of Makerere University to improve priority health outcomes in Uganda, but have no competing interests.", "All co-authors were involved in the design of the study on the comprehensive assessment of community-based education. All except LWC and GB participated in the data collection and data analysis. DKK wrote the first draft of the manuscript. All co-authors contributed by reviewing the subsequent drafts of the manuscript and approved the final version.", "[SUBTITLE] Proposal for a model CBE curriculum for training health professionals [SUBSECTION] [SUBTITLE] Steps in developing and implementing the ideal CBE curriculum [SUBSECTION] Step 1: Problem identification and general needs assessment: Identification and critical analysis of the educational need or problem that will be addressed by the curriculum. This step requires substantial research to analyze what is currently being done by practitioners (such as alumni of the training institutions) and educators (that is, the current approach), what should ideally be done by practitioners and educators to address the health care problem (the ideal approach), and therefore the performance gap. The general needs assessment is usually stated as the knowledge, attitude, and performance deficits that the curriculum will address.\nStep 2: Needs assessment of targeted learners.\nStep 3: Goals and objectives: Overall goals and aims for the curriculum are written. Specific measurable knowledge, skill/performance, attitude, and process objectives are written for the curriculum.\nStep 4: Educational strategies: A plan to maximize the impact of the curriculum, including content and educational methods congruent with the objectives, is prepared.\nStep 5: Implementation: A plan for implementation, including timelines and resources required, is created. A plan for faculty development should be made to ensure consistent implementation.\nStep 6: Evaluation and feedback: Learner and program evaluation plans are created. A plan for dissemination of the curriculum is made.\n[SUBTITLE] Perceived Ideal objectives of CBE for health professionals [SUBSECTION] The primary aims should be:\n1 Trainees to acquire an understanding of health and disease, positioning the prevention and management of disease in the context of the whole individual in his or her place in the family, community and society.\n2 To develop health professionals with an attitude to learning that is based on curiosity and knowledge exploration rather than passive knowledge acquisition. This enables trainees to become reflective learners who can identify their learning needs, seek the desired information and eventually become lifelong learners.\n3 To enable trainees to integrate the theoretical and practical aspects, as well as the basic science and clinical/professional aspects, of their training. This necessitates introduction of components of problem-based learning in CBE, whereby learning is based on real-life contexts.\n4 To ensure early direct contact with patients/clients and critical appraisal/analysis of their problems, so that trainees attain problem-solving skills.\n[SUBTITLE] The curriculum components [SUBSECTION] The curriculum should:\n1. Provide an ideal balance of factual information and practical skills in addition to providing core knowledge, skills and attitudes\n2. Develop general competences (e.g. critical thinking, problem-solving, communication, leadership, teamwork, management)\n3. Ensure early clinical contact while strengthening inter-professional collaboration, thereby providing a balance between hospital and community, and between curative and preventive aspects\n4. Cover the wider aspects of health care (e.g. medico-legal issues, health economics, the political aspects of health systems and medical audit)\n5. Ensure that the methods of learning/teaching support the aims of the curriculum, and likewise that the assessment methods employed support and align with the aims of the curriculum (are they reliable in assessing the competences?)\n[SUBTITLE] Competences of the trainee [SUBSECTION] The minimum competences of the trainees should include:\n1. Knowledge and skills in caring for the community's health, with emphasis on primary care, disease prevention and promotion of healthy lifestyles, and in involving patients, families and communities in healthcare decision-making\n2. Understanding the factors that influence health and disease at household level, as well as the roles and responsibilities of different players in the healthcare system\n3. Innovation and problem-solving skills necessary to provide cost-effective care using technology appropriately\n4. Skills in the collection, analysis and utilization of information on health systems\n5. Understanding the role of the physical, social and political environment in health\n6. Lifelong learning (continuous acquisition of knowledge, depending on critical reflection and self-criticism to identify learning needs)\n7. Professionalism (learning the values of how a professional is expected to perform and relate with the community and all other players in the healthcare system)\n[SUBTITLE] Curriculum content related to CBE [SUBSECTION] The curriculum content should include:\n1. Broad education in both the clinical and basic sciences and integration of CBE into the science of the profession\n2. Clearly defined objectives and intended outcomes of CBE\n3. Emphasis on the methods and processes of acquisition of knowledge, professional skills, lifelong learning skills, values and attitudes (such as indicating examples of what activities trainees should be involved in)\n4. Emphasis on utilization of educational sites beyond the training institution (such as schools, health centres, community health projects or working with non-government organizations)\n5. Emphasis on adapting training to meet the challenges produced by ongoing changes in the organization, structure, financing and delivery of health care\n[SUBTITLE] Roles of tutors, supervisors and faculty [SUBSECTION] 1. To identify suitable teaching resources and to conduct feedback and trainee appraisal\n2. To design educational programs for trainees as well as identify suitable problems to deliver effective educational events for learning\n3. To utilize appropriate teaching methods that employ large and small groups\n4. To provide individual assistance depending on student needs\n5. To assess the trainees using a variety of assessment methods that include portfolios\n6. To evaluate the CBE program in the context of the health professional training course", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6920/11/7/prepub\n" ]
[ "Background", "Methods", "Results", "Discussion", "Limitations", "Conclusion", "Competing interests", "Authors' contributions", "Appendix", "Proposal for a model CBE curriculum for training health professionals", "Steps in developing and implementing the ideal CBE curriculum", "Perceived Ideal objectives of CBE for health professionals", "The curriculum components", "Competences of the trainee", "Curriculum content related to CBE", "Roles of tutors, supervisors and faculty", "Pre-publication history" ]
[ "Community-based education (CBE) has several definitions [1,2], but the core definition refers to learning that takes place in a setting external to the higher education institution. In the context of health professional education, CBE refers to instruction whereby trainees learn and acquire professional competencies in community settings. Such settings include general practices, communities, community health centres or rural hospitals [2] with the focus being learning about health services in the community, methods of health promotion, as well as social and economic aspects of illness [2]. Community-oriented programmes have several goals: to create more appropriate knowledge, skills and attitudes; to deepen trainees' understanding of health and illness; to enable trainees understand the health and social services; to promote interpersonal skills and multidisciplinary teamwork; and to deepen trainees' understanding of the contribution of social and environmental factors to causation and prevention of ill-health [2].\nCBE goes beyond cognitive capacities and encompasses the social and emotional aspects of learning, increasing trainee' experience, confidence and competence [2-4] and improving awareness of community values and lifestyles of health workers in rural areas [5,6]. CBE increases trainees' interest in uptake of careers in rural practice [5]. Provision of support for community site tutors and faculty improves the quality of medical students' learning experiences during rural rotations [6]. Indeed, such exposure to the communities through community placements during CBE shapes trainees' values and perceptions of rural practice, eventually promoting ethics, professionalism and health professionals uptake of rural practice [2,7,8].\nIn Uganda, many health professional training institutions conduct some form of CBE. However, there is scanty information on the nature of the training: whether a curriculum exists, the objectives related to CBE, intended outcomes, curriculum content. There is also scanty information on the implementation of the CBE curriculum (training sites, instruction methods, activities conducted by trainees, roles played by training site supervisors and institution faculty); administration of CBE (financial, human and material resources involved); and challenges faced by the implementation of CBE. Most reports of community-based programmes have been conducted in developed countries, have been largely descriptive, and rarely have assessed outcomes such as learning and rural deployment [9-11]. Others have only analyzed a specific component of CBE: limited to trainees' experience of the training during community placement [12], students' evaluation of the teaching and learning environment [13], specific contexts such as medical emergencies [14], evaluation of CBE conducted by a single academic institution [15], or evaluation of the role of community preceptors [16]. No similar studies have been documented on CBE for health professionals in Uganda. Specifically, no studies have documented the effectiveness of CBE in ensuring contextual learning for health professional trainees. 
Though medical schools and other health professional training institutions adopted CBE as a strategy for medical education, and community placements as one of the training settings, there is very little published research that explores the actual training that is conducted, the resources that are needed to sustain training, or the learning activities undertaken by students in different environments in Uganda.\nOur aim was to answer the following questions: What is the nature of CBE conducted? Does CBE promote learning? What challenges do institutions face in implementing CBE and what are potential solutions? The objective of the comprehensive assessment of CBE conducted in health professional training institutions in Uganda was two-fold: 1) To analyze the nature of CBE conducted by different institutions as well as challenges involved; 2) To analyze effectiveness of CBE in promoting learning and acquisition of competences for cadres at different levels (certificate, diploma, advanced diploma and degree). As a spin-off of this comprehensive evaluation, we wish to utilize lessons to propose an ideal model with minimum requirements for health professional training institutions in Uganda. Documentation of this information as well as potential solutions to identified challenges in running CBE programs might not only improve the training of health professionals but also assist in the long-term goal of improving the recruitment, deployment or retention of health workers to rural areas.", "Through the twinning grant with Johns Hopkins University, Makerere University conducted a comprehensive assessment of CBE implemented by 22 health professional training institutions in Uganda, from October 2009 through May 2010. This assessment involved documentary review of available curricula and other documents related to CBE at 22 health professional training institutions, so as to assess the nature, purpose, intended outcomes, instruction methods and methods of assessment. 
Site visits to these institutions and two of their CBE sites was also conducted to assess the available infrastructure and learning resources available at these sites.\nIn analyzing whether learning occurs during CBE, we assessed the following:\ni) whether learning is competence-based or outcome-based (that is, students are able to demonstrate certain competences, and focusing or organizing the learning in the educational system around what is essential for all students to be able to demonstrate successfully at the end of their learning experiences, for instance, critical reflection, communication skills and clinical skills)\nii) whether learning during CBE is problem-based (whether real-life situations related to patient care are used to stimulate learning);\niii) whether learning during CBE is contextual (reality-based, outside-of-the-classroom experiences, which serve as a catalyst for students to utilize their disciplinary knowledge, are employed in real-life settings similar to where trainees will eventually work, for instance health centres or the community);\niv) whether CBE promotes collaborative learning (whether learning occurs in situations in which two or more students learn, or attempt to learn, something together, as pairs, small groups or whole class), and perform learning activities such as problem-solving using individual effort, joint effort or external effort;\nv) whether reflection (critical appraisal of incidents and experiences) occurs;\nvi) whether CBE promotes lifelong learning (stimulates trainees to keep up-to-date with current clinical management or professional care guidelines).\nvii) Since students contribute to service during community placements, we assessed whether such service contributes to trainees' learning (that is, whether service improves critical reflection whereby critical incidents are noted, appraised, analyzed and used to develop individual or group learning plans).\nviii) Lastly, in-depth and key-informant interviews were conducted with key people involved in running CBE at the institutions and CBE sites to assess how CBE was implemented, challenges experienced and perceived solutions. Interviews with alumni of these institutions and community members at the CBE sites offered insight into the impact of CBE, challenges in CBE implementation and how these challenges may be addressed. Content analysis [17] was applied to analysis of these interviews: familiarization; identifying a thematic framework; indexing; mapping and interpretation. The data was then clearly coded and the text was indexed by using descriptors alongside various passages in the transcriptions.\nEthics and research committees of Makerere University College of Health Sciences and Johns Hopkins School of Public Health approved the research. Approval to conduct the research was given by the administrators of all the health institutions that were assessed.", "Table 1 shows the findings from review of the curriculum documents, the implementation of the curriculum, the instruction methods used and evaluation of collaborative learning. Some of the institutions had no curriculum or the curricula documents were deficient with regard to CBE. Even where CBE was indicated in the curricula, there was wide variation in its perception, conduct and implementation. Activities conducted, instruction methods used and learning environment (community sites used to provide opportunity for learning) varied greatly in similar institutions. 
Lectures, assignments and skills demonstrations were the main methods of instruction.\nAssessment of CBE at the 22 institutions evaluated\nTable 2 shows the activities which students are involved in to foster learning. Self-directed learning and group discussions were the main activities which trainees used to enhance learning. The findings show that considering the variety of learning sites to which trainees are exposed, CBE had the potential to benefit trainees with experiential and contextual learning, while providing a service to the CBE sites as well as the community. However, less than half of the institutions had a research component as part of the activities students are involved in during CBE, though trainees had prior skills and resources for conducting research. This indicates a missed opportunity for CBE programs.\nLearning, research and assessment of learning\nTable 3 shows the overall social environment in the community and health facilities. The evaluation shows that there is wide variation in available resources to support CBE, which were shared by different institutions despite their being inadequate. Most deficits were in accommodation facilities, welfare, sundries and supplies. Transport to and within the community sites was a problem. Our findings demonstrate the challenges that need to be addressed (especially financial, supervisory and administrative constraints) and the missed opportunities to be rectified if the CBE learning environment is to be enhanced.\nFacilities at sites to facilitate CBE activities\nThough most interviewees felt the concept of CBE was right, there were many challenges in its conduct, which hinder adequate preparation and implementation. Some of the beneficial aspects of CBE identified were early contact with the community, improved team work in the trainees who leave, work and learn in small groups, improved interpersonal relationships and improved communication skills. That the CBE was outcome-based is indicated by some curricula having clear objectives, learning outcomes, desired competencies and clear implementation strategy. Collaborative learning is indicated by the individual, joint and external collaborations by students to promote learning. CBE offered opportunities for improved clinical, leadership, self-directed learning and research skills. As an outcome of this comprehensive assessment, we identified a critical need to propose an ideal CBE curriculum whose content and implementation would provide a minimum package of CBE. Indeed, such an ideal model might make available human resources with competence to address Uganda's current and evolving priority health problems.\nWe found that though many health professional training institutions in Uganda conduct some form of CBE, the content of the curricula and the extent to which they are implemented is variable. For institutions without a written curriculum, we found no clear explanation why this was so from interviewing the faculty. 
The tutors, faculty and community members interviewed were in agreement that CBE has the potential to benefit trainees (through providing a real-life training environment where the graduates may eventually work, such as health centres, rural hospitals or the community); the community (whose members benefit from the services of the trainees during the community placements and outreach activities); the institutions (which get a better training context for their students); and the staff at the community training sites (who gain updated knowledge and skills through interaction and collaboration with staff from the training institutions). CBE also offered opportunities to conduct research on conditions that affect the community (through community diagnosis). Different descriptions were given to CBE in the different institutions; consequently, CBE was administered differently depending on how it was defined and the resources available. The key personnel ranged from individuals (acting as coordinators) to a committee of persons or an administrative office (usually under the dean, academic registrar or principal tutor). Factors considered before CBE site selection and preparation included proximity, available infrastructure, uniqueness of the site in terms of services offered (such as maternal and child health and mental health), receptivity of health facility staff and community leadership, ability of the potential site to cater for trainees' welfare, and availability of willing personnel at health facilities to supervise trainees. For the latter, 18 (70%) of the institutions had neither formal criteria for selection of community supervisors nor formal training of these supervisors.", "Our assessment evaluated the potential effectiveness of CBE in enhancing the training of health professionals. The methods of evaluation used enable evaluation of the ''curriculum on paper'' (what is written about the curriculum in documents and what the implementers understand about curriculum goals and objectives), the ''curriculum in action'' (how the curriculum is implemented) and the ''curriculum as experienced'' (what students actually do, how they study and the outcomes of the learning) [15]. Our findings therefore indicate that CBE provides contextual learning for most of the institutions (reality-based, outside-of-the-classroom experience, such as in the health centres, the community, schools or at the district health office as well as working with NGOs at community level). Contextual learning provides learning experiences in contexts in which trainees are interested and motivated and therefore achieve more [18-20]. The learning process of CBE includes the learner's ability to gain and utilize acquired knowledge as well as solve problems, while developing collaborative skills, innovation skills, communication skills, critical reflection, teamwork and interpersonal relationships [2,18,20].\nIn our assessment, there were wide variations in the curriculum content even for public institutions that offer the same academic award. It was also unclear to some institutions how the curriculum was developed, implemented or evaluated. Developing a generic CBE curriculum should follow a systematic approach that is capable of integrating content area with educational theory and methodology, and should include evaluation of the impact of this process [21-24]. 
When completed, the CBE needs assessment should make a strong argument for the need for the curriculum, identifying previously developed or validated methods.\nSome of the CBE curricula documents were unclear about the learning outcomes or desired competences, so it was unclear if the actualized curriculum included them. From the interviews, this was still unclear as some tutors were not conversant with the curriculum content or implementation. Regarding content and outcomes of the ideal CBE curriculum, the professional competences should focus on theoretical and applied knowledge associated with practice, emphasizing values, skills and critical appraisal. This is essential as training and practice/service often occur in situations characterized by complexity, uniqueness, uncertainty and conflicting values [23]. In contrast to the curriculum on paper, the curriculum in action is how the intended curriculum is implemented in practice [25], while the actualised curriculum refers to what students actually do, how they study, what they believe they should be doing, the learning that occurs and the outcome of their learning [25].\nWhile pre-placement orientation is essential for success of CBE, it was unclear whether all institutions conducted it. Most of the CBE curricula emphasized knowledge and professional skills. Whereas specialized knowledge and skills are clearly essential for practice, self consciousness (reflection) and continual self critique (critical reflection) are crucial to acquisition of competences [23]. Reflective and critically reflective practice, communication skills and interpersonal relations are vital professional skills for effective and efficient professional practice [24].\nFew of the institutions employed Problem-based learning (PBL) as an instruction method. Regarding implementation strategies, PBL is ideal for CBE as it effectively brings out the desired competences of the ideal CBE curriculum. Though the exact format of PBL varies considerably, two key features are consistent: emphasis on learner-focused exploration of case-based problems and use of case histories to help students identify learning issues that become the focus of individual or group problem-solving. During CBE, trainees should be exposed to professionally meaningful situations or problems that bear strong resemblance to the situations they will be confronted with in their future profession, which enables acquisition of knowledge, clinical skills and problem-solving skills. Development of critical reflection and critical thinking as professional skills during CBE requires adoption of PBL as the ideal model of instruction [22-24]. Indeed, during CBE, individual trainees interpret their varied experiences in their own way and understanding, basing on what happens to them and what they see, hear and read. [26]. CBE provides a holistic picture of health and health systems in the community.\nThe institutions evaluated conducted assessment of CBE using different and diverse methods, with variations in reliability. For CBE, there is a challenge of designing assessment such that it adequately reflects the desired competences, the instruction methods and the eventual professional role, while making it retain reliability and validity across different settings and assessors [27,28]. During CBE, students are taught in diverse contexts by diverse groups of tutors with varied competences and skills as educators. 
In the curricula, institutions should be clear about what and why they need to assess and who will do the assessment [2]. Assessment strategies need to be developed to enhance and support learning as well as reliably measure performance with evidence of learning having taken place [29,30]. While both formative and summative assessment are essential, assessment of CBE requires development of portfolios by the trainees [29]. Portfolios facilitate evaluation of integrated and complex abilities taking into account the learning context [29]. Indeed, portfolios personalize the assessment process, incorporating important educational values, while supporting the important principle that assessment drives learning. Other suitable methods include: peer evaluation, which may ably assess individual student effort, interaction with the community, leadership, knowledge or subject matter; supervisors' checklists; feedback from community leaders; and individual and group reports.\n[SUBTITLE] Limitations [SUBSECTION] We acknowledge several limitations. Firstly, the institutions were purposively sampled in order to try to achieve representativeness of rural versus urban institutions, private versus public, the different cadres trained and different academic level of training. There is no guarantee that the institutions that were not sampled have similar constraints or challenges. However, all the government-supported institutions are supposed to follow and implement a similar curriculum. Secondly, the assessment was conducted regionally by four different teams. Since the teams piloted the instruments, there may have been slight variations (across the four teams) in the way interviews were conducted, checklists performed or documents reviewed. Thirdly, no direct observation of conduct of CBE was performed, yet this should be done in an ideal CBE assessment [31,32]. However, the methods (documentary review, checklists and interviews) employed are reliable [31], and have been used by others [15]. Lastly, the ideal curriculum would require validation of its objective and content [31]. This step was conducted at a stakeholder meeting of the Ministry of Health, Ministry of Local Government, all the training institutions and community representatives.", 
"This assessment identified deficiencies in the design and implementation of CBE at several health professional training institutions, with major flaws identified in curriculum content, supervision of trainees, inappropriate assessment, trainee welfare, and underutilization of opportunities for contextual and collaborative learning. Despite similar goals, there is wide variation in the concept, conduct and implementation of CBE conducted by different health institutions in Uganda, even in presence of similar curricula. CBE, if well implemented and assessed, has the potential to ensure contextual learning in medical education.", "All authors (except GB, LWC and SG who are from Johns Hopkins University) are faculty of Makerere University College of Health Sciences where community-based education is part of the curricula for undergraduate training of health professionals, and all community sites assessed are used by the university. GB, LWC and SG are team members of the project: Partnership for building the capacity of Makerere University to improve priority health outcomes in Uganda, but have no competing interests.", "All co-authors were involved in the design of the study on comprehensive assessment of community-based education. All except LWC and GB participated in the data collection and data analysis. DKK wrote the first draft of the manuscript. All co-authors contributed by reviewing the subsequent drafts of the manuscript and approved the final version", "[SUBTITLE] Proposal for a model CBE curriculum for training health professionals [SUBSECTION] [SUBTITLE] Steps in developing and implementing the ideal CBE curriculum [SUBSECTION] Step 1: Problem identification and general needs assessment: Identification and critical analysis of the educational need or problem that will be addressed by the curriculum. 
This step requires substantial research to analyze what is currently being done by practitioners (such as alumni of the training institutions) and educators (that is, the current approach) and ideally what should be done by practitioners and educators to address the health care problem (the ideal approach) and therefore the performance gap. The general needs assessment is usually stated as the knowledge, attitude, and performance deficits that the curriculum will address.\nStep 2: Needs assessment of targeted learners\nStep 3: Goals and objectives: Overall goals and aims for the curriculum are written. Specific measurable knowledge, skill/performance, attitude, and process objectives are written for the curriculum.\nStep 4: Educational strategies: A plan to maximize the impact of the curriculum, including content and educational methods congruent with the objectives, is prepared.\nStep 5: Implementation: A plan for implementation, including timelines and resources required, is created. A plan for faculty development should be made to assure consistent implementation\nStep 6: Evaluation and feedback: Learner and program evaluation plans are created. A plan for dissemination of the curriculum is made.
\n[SUBTITLE] Perceived Ideal objectives of CBE for health professionals [SUBSECTION] The primary aim should be:\n1 Trainees to acquire an understanding of health and disease, positioning the prevention and management of disease in the context of the whole individual in his or her place in the family, community and society.\n2 To develop health professionals with an attitude to learning that is based on curiosity and knowledge exploration rather than passive knowledge acquisition. This enables trainees become reflexive learners who can identify their learning needs, seek the desired information and eventually become life long learners\n3 To enable trainees integrate the theory and practical aspects, as well as the basic science and clinical/professional aspects of their training. This necessitates introduction of components of problem-based learning in CBE, whereby learning is based on real-life contexts\n4 To ensure early direct contact with patients/clients and critical appraisal/analysis of their problems, and thereby trainees attain problem-solving skills
\n[SUBTITLE] The curriculum components [SUBSECTION] The curriculum should:\n1. Provide an ideal balance of factual information and practical skills in addition to providing core knowledge, skills and attitudes\n2. Develop general competence (e.g. critical thinking, problem-solving, communication, leadership, teamwork, management)\n3. Ensure early clinical contact while strengthening inter-professional collaboration, thereby providing a balance between hospital/community; curative/preventive aspects.\n4. Cover the wider aspects of health care (e.g. medico-legal issues, health economics, political aspect of health systems and medical audit)\n5. Ensure that methods of learning/teaching support aims of the curriculum, and likewise, ensure that assessment methods employed support and rhyme with aims of the curriculum (are they reliable in assessing the competences?)
\n[SUBTITLE] Competences of the trainee [SUBSECTION] The minimum competences of the trainees should include:\n1. Knowledge and skills in caring for the community's health with emphasis on primary care, disease prevention and promotion of healthy lifestyle, and in involving patients, families and communities in healthcare decision-making\n2. Understanding the factors that influence health and disease at household level, as well the roles and responsibilities of different players in the healthcare system\n3. Innovation and problem solving skills necessary to provide cost-effective care using technology appropriately\n4. Skills in collection, analysis and utilization information on health systems\n5. Understanding the role of the physical, social and political environment in health\n6. Life long learning (continuous acquisition of knowledge depending on critical reflection and self-criticism to identify learning needs)\n7. Professionalism (learning values of how a professional is expected to perform and relate with the community and all other players in the healthcare system)
\n[SUBTITLE] Curriculum content related to CBE [SUBSECTION] The curriculum content should include:\n1. Broad education in both the clinical and basic sciences and integration of CBE into the science of profession\n2. Clearly defined objectives and intended outcomes of CBE\n3. Emphasis on methods and processes of acquisition of knowledge, professional skills, lifelong learning skills, values and attitudes (such as indicating examples of what activities trainees should be involved in)\n4. Emphasis on utilization of educational sites beyond the training institution (such as schools, health centres, community health projects or working with non-government organizations)\n5. Emphasis on adapting training to meet the challenges produced by ongoing changes in the organization, structure, financing and delivery of health care
\n[SUBTITLE] Roles of tutors, supervisors and faculty [SUBSECTION] 1. To identify suitable teaching resources and to conduct feedback and trainee appraisal\n2. To design educational programs for trainees as well as identify suitable problems to deliver effective educational events for learning\n3. To utilize appropriate teaching methods that employ large and small groups\n4. To provide individual assistance depending on student needs\n5. To assess the trainees using a variety of assessment methods that include portfolios\n6. To evaluate the CBE program in context of the health professional training course", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6920/11/7/prepub\n" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Validation of a case definition to define chronic dialysis using outpatient administrative data.
21362182
Administrative health care databases offer an efficient and accessible, though as-yet unvalidated, approach to studying outcomes of patients with chronic kidney disease and end-stage renal disease (ESRD). The objective of this study is to determine the validity of outpatient physician billing derived algorithms for defining chronic dialysis compared to a reference standard ESRD registry.
BACKGROUND
A cohort of incident dialysis patients (Jan. 1-Dec. 31, 2008) and prevalent chronic dialysis patients (Jan 1, 2008) was selected from a geographically inclusive ESRD registry and administrative database. Four administrative data definitions were considered: at least 1 outpatient claim, at least 2 outpatient claims, at least 2 outpatient claims at least 90 days apart, and continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. Measures of agreement of the four administrative data definitions were compared to a reference standard (ESRD registry). Basic patient characteristics were compared between all 5 patient groups.
METHODS
1,118,097 individuals formed the overall population and 2,227 chronic dialysis patients were included in the ESRD registry. The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60 and 0.80, indicating "substantial" agreement. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81.
RESULTS
Of the four definitions, the simplest (at least 1 outpatient claim) performed comparably to the other definitions. The limitations of this work are that the billing codes used were developed in Canada; however, other countries use similar billing practices and thus the codes could easily be mapped to other systems. Our reference standard ESRD registry may not capture all dialysis patients, resulting in some misclassification. The registry is linked to ongoing care, so this is likely to be minimal. The definition utilized will vary with the research objective.
CONCLUSIONS
[ "Algorithms", "Ambulatory Care", "Cohort Studies", "Databases, Factual", "Female", "Health Services Research", "Humans", "Insurance Claim Review", "Kidney Failure, Chronic", "Male", "Middle Aged", "Outcome Assessment, Health Care", "Reference Standards", "Registries", "Renal Dialysis" ]
3055853
null
null
Methods
[SUBTITLE] Study Population [SUBSECTION] A cohort was identified from the Alberta Kidney Disease Network (AKDN - http://www.akdn.info) laboratory database to form the study population. The AKDN is a prospective data collection initiative of routine laboratory tests on all patients in the province of Alberta (population approx. 3 million) Canada, resulting in a population-based geographically inclusive database [16]. Patients identified from laboratory data are followed prospectively with linkage to administrative and other computerized sources to obtain detailed information including socio-demographic data, clinical data including comorbidities, health care encounters, health care costs, death, and kidney-related outcomes. The study cohort included patients aged 18 and older who had at least 1 outpatient serum creatinine between Jan 1 2008 and Dec 31 2008. Although a general population cohort would be optimal, our selected study population introduces minimal, if any, bias as anyone "at-risk" of ESRD or evaluated for or receiving chronic dialysis was expected to have received serum creatinine measurement as part of their routine clinical assessment.
[SUBTITLE] Data sources [SUBSECTION] Patients treated for ESRD in Alberta are cared for by the Northern Alberta (NARP) and Southern Alberta (SARP) Renal Programs [17]. These programs are responsible for providing ESRD care including chronic dialysis within their geographic area. Each program maintains a prospective patient registry of all chronic dialysis patients, and captures detailed demographic and clinic data, including date of initiation of dialysis. Patients are enrolled at the time of first dialysis for ESRD (first hemodialysis session or first flushing of peritoneal dialysis catheter), or, for patients who initiate dialysis for acute kidney injury, when the attending nephrologist deems that dialysis will be chronic. The NARP and SARP registries were used to identify prevalent and incident dialysis patients from January 1, 2008 to December 31, 2008 (considered the reference standard). Prevalent cases were first identified on Jan 1 1999, with additional incident dialysis patients identified from that date forward. Non-Alberta residents were excluded.
Physicians in Alberta submit claims for reimbursement of services to Alberta Health and Wellness, the provincial health ministry (the universal health care provider for the province of Alberta); claims are stored in a database which contains information on patients' personal health number, physician unique identifier, up to 3 ICD-9 diagnosis codes and 1 procedure code. Procedure codes are captured using the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP), which was developed by Statistics Canada to accompany the International Classification of Diseases version 9 (ICD-9) [18]. Physician claims capture all of the outpatient physician services and the majority of the inpatient services. All chronic dialysis patients in the province of Alberta are cared for by nephrologists, who are compensated either using a fee-for-service or salaried model. Regardless of compensation method, physicians are required to submit claims for all patient encounters.
[SUBTITLE] Defining Chronic Dialysis using Administrative Data [SUBSECTION] We identified all patients with outpatient dialysis physician claims (Table 1) occurring from Jan 1 2008 to Dec 31 2008. We evaluated 4 different case definitions for chronic dialysis patients based on varying the number and timing of physician claims for dialysis: 1) At least 1 outpatient claim, 2) At least 2 outpatient claims, 3) At least 2 outpatient claims at least 90 days apart and 4) Continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. We evaluated algorithms employing a 90 day period of claims to be congruent with other current administrative data definitions developed using inpatient data [19,20].
Administrative billing codes used to define outpatient chronic dialysis
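To make the four claims-based definitions concrete, the following is a minimal illustrative sketch in Python (the study itself used SAS); the function name, the idea of passing one patient's de-duplicated outpatient dialysis claim dates, and the example dates are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch, not the study's SAS code: apply the four candidate
# chronic dialysis case definitions to one patient's outpatient claim dates.
from datetime import date, timedelta

def classify_claims(claim_dates: list[date]) -> dict[str, bool]:
    """Return which of the four claims-based definitions the patient meets."""
    dates = sorted(set(claim_dates))
    at_least_1 = len(dates) >= 1
    at_least_2 = len(dates) >= 2
    # Definition 3: at least 2 claims separated by at least 90 days
    # (equivalent to the first and last claims spanning >= 90 days).
    two_claims_90d = at_least_2 and (dates[-1] - dates[0]) >= timedelta(days=90)
    # Definition 4: claims spanning >= 90 days with no gap between
    # consecutive claims greater than 21 days.
    continuous_90d = False
    if two_claims_90d:
        gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
        continuous_90d = max(gaps) <= 21
    return {
        "at_least_1_claim": at_least_1,
        "at_least_2_claims": at_least_2,
        "two_claims_90_days_apart": two_claims_90d,
        "continuous_claims_90_days": continuous_90d,
    }

# Example: weekly claims over about four months satisfy all four definitions.
claims = [date(2008, 1, 7) + timedelta(weeks=w) for w in range(18)]
print(classify_claims(claims))
```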
[SUBTITLE] Comorbidities and other outcomes [SUBSECTION] Demographic data were determined from the provincial administrative data files. Diabetes mellitus and hypertension were identified from hospital discharge records and physician claims based on validated algorithms [4,21]. The Charlson comorbidities were calculated using the validated algorithms applied to physician claims and hospitalization data [22,23]. Any comorbidity identified during the 3 year period prior to cohort entry was included. To ascertain death, patients were followed up from their start date of dialysis, defined either by the first recorded date in the registry or the date of the second of the outpatient claims when the administrative data definition was used, until March 31, 2009 to ensure a minimum of 90 days of follow-up for all patients. Patients who met the case definition and subsequently died or moved out of the province (lost to follow-up) were included in analyses.
[SUBTITLE] Statistical Analysis [SUBSECTION] Basic descriptive statistics were used to describe demographic features and comorbidities for the overall cohort and the NARP/SARP dialysis cohort. Table 2 outlines the analytic framework adopted. We subsequently calculated positive agreement, sensitivity and positive predictive value (PPV) for each case definition, using the NARP/SARP registry data as the reference standard [24]. Positive agreement is the conditional probability that, given the reference standard is positive, the administrative data definition is also positive [25]. Thus, positive agreement explores whether there is an imbalance between the likelihood of agreeing on positive and negative cases. The kappa statistic was used to assess overall agreement between the registry and the billing data. Landis and Koch categorize kappa into five categories: less than 0.2 indicating "poor agreement", 0.21 to 0.40 indicating "fair agreement", 0.41 to 0.60 indicating "moderate agreement", 0.61 to 0.80 indicating "substantial agreement" and greater than 0.81 indicating "near perfect agreement" [26]. We did not report specificity, negative agreement, or negative predictive value (NPV) as the large size of the non-diseased population (n = 1.11 million) and the low incidence of ESRD in the general population make these measures insensitive to changes in the case definitions. SAS version 9.2 was used for all analyses. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.
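The analytic framework of Table 2 rests on cross-classifying every patient against the reference-standard registry and a given administrative definition. The sketch below shows one hedged way the a/b/c/d cell counts could be tabulated from sets of patient identifiers; the identifier values, variable names and toy counts are hypothetical, and the study performed this step in SAS rather than Python.

```python
# Illustrative sketch with hypothetical identifiers: form the 2x2 validation
# counts (Table 2 notation) for one administrative case definition.
def cross_classify(registry_ids: set, admin_ids: set, all_ids: set) -> dict:
    """Return the a/b/c/d cell counts and total N for one definition."""
    a = len(registry_ids & admin_ids)             # true positives
    b = len(admin_ids - registry_ids)             # false positives
    c = len(registry_ids - admin_ids)             # false negatives
    d = len(all_ids - registry_ids - admin_ids)   # true negatives
    return {"a": a, "b": b, "c": c, "d": d, "N": len(all_ids)}

# Toy example with made-up identifiers (the real cohort had N = 1,118,097).
all_ids = set(range(1000))
registry_ids = set(range(20))      # patients in the ESRD registry
admin_ids = set(range(5, 28))      # patients meeting a claims-based definition
print(cross_classify(registry_ids, admin_ids, all_ids))
```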
Reported measures of agreement: analytic framework
Abbreviations: a: true positives; b: false positives; c: false negatives; d: true negatives; N: total validation sample
Positive agreement = 2a/[2a+b+c]
The measure of agreement in positive cases; the number of true positives divided by all of the positives defined by both data sources. Positive agreement will show imbalance in the agreement between positive and negative responses.
Sensitivity = a/(a+c)
The proportion of those on dialysis according to the administrative data among those positive cases identified in the reference standard. The ability of administrative data to correctly identify individuals on dialysis according to the reference standard.
Positive Predictive Value (PPV) = a/(a+b)
The proportion of those on dialysis according to the reference standard among those positive cases identified in the administrative data.
Kappa = [(a+d)/N - ((a+b)(a+c) + (c+d)(b+d))/N²] / [1 - ((a+b)(a+c) + (c+d)(b+d))/N²]
The agreement between the administrative data and the reference standard above what is expected by chance.
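The formulas above translate directly into code. The following sketch computes positive agreement, sensitivity, PPV and kappa from the a/b/c/d cell counts and applies the Landis and Koch bands quoted in the text; the numbers in the usage example are arbitrary toy counts, not the study's results, and the function names are illustrative only.

```python
# Illustrative sketch of the agreement measures defined in the text;
# the study itself computed these in SAS 9.2.
def agreement_measures(a: int, b: int, c: int, d: int) -> dict:
    N = a + b + c + d
    positive_agreement = 2 * a / (2 * a + b + c)
    sensitivity = a / (a + c)
    ppv = a / (a + b)
    # Kappa: observed agreement minus chance agreement, rescaled.
    p_observed = (a + d) / N
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / N**2
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return {"positive_agreement": positive_agreement,
            "sensitivity": sensitivity,
            "ppv": ppv,
            "kappa": kappa}

def interpret_kappa(kappa: float) -> str:
    """Landis and Koch bands as quoted in the text (boundaries handled continuously)."""
    if kappa <= 0.20:
        return "poor agreement"
    if kappa <= 0.40:
        return "fair agreement"
    if kappa <= 0.60:
        return "moderate agreement"
    if kappa <= 0.80:
        return "substantial agreement"
    return "near perfect agreement"

# Arbitrary toy counts, not the study's actual cell counts.
m = agreement_measures(a=80, b=20, c=15, d=9885)
print({k: round(v, 3) for k, v in m.items()}, interpret_kappa(m["kappa"]))
```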
null
null
null
null
[ "Background", "Study Population", "Data sources", "Defining Chronic Dialysis using Administrative Data", "Comorbidities and other outcomes", "Statistical Analysis", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The global prevalence of end-stage renal disease (ESRD) requiring treatment with dialysis or kidney transplantation continues to increase [1,2]. Patients with ESRD experience far greater morbidity, mortality and health care costs than members of the general population, and studies evaluating health outcomes in this high-risk population are required worldwide [1,2].\nAdministrative health care databases offer an efficient and accessible approach to studying outcomes in large populations[3]. Physician billing claims data are one data source for identifying cases of ESRD because they are routinely collected for physician reimbursement, often span wide geographic areas, and have the potential to capture both in-hospital and outpatient encounters within a healthcare system[4]. However, before such data sources can be widely adopted for use in research where identification of cases of ESRD is critical, the validity of algorithms used to define case definitions of ESRD requires evaluation.\nLimited data demonstrate the validity of administrative data algorithms for identifying patients requiring chronic hemodialysis or peritoneal dialysis. Prior studies have assessed acute kidney injury [5-7], as well as the validity of using inpatient administrative data to identify chronic dialysis patients [8-13]. The two previous studies considering chronic dialysis in the outpatient setting have considered diagnostic codes [14,15], not procedural codes as are considered in this study. This is of particular importance as the majority of contemporary ESRD patients receive chronic dialysis as outpatients. We therefore did this study to determine the validity of algorithms derived from outpatient physician billing claims for defining chronic dialysis, compared to the reference standard of an ESRD registry.", "A cohort was identified from the Alberta Kidney Disease Network (AKDN - http://www.akdn.info) laboratory database to form the study population. The AKDN is a prospective data collection initiative of routine laboratory tests on all patients in the province of Alberta (population approx. 3 million) Canada, resulting in a population-based geographically inclusive database [16]. Patients identified from laboratory data are followed prospectively with linkage to administrative and other computerized sources to obtain detailed information including socio-demographic data, clinical data including comorbidities, health care encounters, health care costs, death, and kidney-related outcomes. The study cohort included patients aged 18 and older who had at least 1 outpatient serum creatinine between Jan 1 2008 and Dec 31 2008. Although a general population cohort would be optimal, our selected study population introduces minimal, if any, bias as anyone \"at-risk\" of ESRD or evaluated for or receiving chronic dialysis was expected to have received serum creatinine measurement as part of their routine clinical assessment.", "Patients treated for ESRD in Alberta are cared for by the Northern Alberta (NARP) and Southern Alberta (SARP) Renal Programs [17]. These programs are responsible for providing ESRD care including chronic dialysis within their geographic area. Each program maintains a prospective patient registry of all chronic dialysis patients, and captures detailed demographic and clinic data, including date of initiation of dialysis. 
Patients are enrolled at the time of first dialysis for ESRD (first hemodialysis session or first flushing of peritoneal dialysis catheter), or, for patients who initiate dialysis for acute kidney injury, when the attending nephrologist deems that dialysis will be chronic. The NARP and SARP registries were used to identify prevalent and incident dialysis patients from January 1, 2008 to December 31, 2008 (considered the reference standard). Prevalent cases were first identified on Jan 1 1999, with additional incident dialysis patients identified from that date forward. Non- Alberta residents were excluded.\nPhysicians in Alberta submit claims for reimbursement of services to Alberta Health and Wellness, the provincial health ministry, (the universal health care provider for the province of Alberta); claims are stored in a database which contains information on patients' personal health number, physician unique identifier, up to 3 ICD-9 diagnosis codes and 1 procedure code. Procedure codes are captured using the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP, which was developed by Statistics Canada to accompany the International Classification of Diseases version 9 (ICD-9) [18]. Physician claims capture all of the outpatient physician services and the majority of the inpatient services. All chronic dialysis patients in the province of Alberta are cared for by nephrologists, who are compensated either using a fee-for-service or salaried model. Regardless of compensation method, physicians are required to submit claims for all patient encounters.", "We identified all patients with outpatient dialysis physician claims (Table 1) occurring from Jan 1 2008 to Dec 31 2008. We evaluated 4 different case definitions for chronic dialysis patients based on varying the number and timing of physicians claims for dialysis: 1) At least 1 outpatient claim, 2) At least 2 outpatient claims, 3) At least 2 outpatient claims at least 90 days apart and 4) Continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. We evaluated algorithms employing a 90 day period of claims to be congruent with other current administrative data definitions developed using inpatient data [19,20].\nAdministrative billing codes used to define outpatient chronic dialysis", "Demographic data were determined from the provincial administrative data files. Diabetes mellitus and hypertension were identified from hospital discharge records and physician claims based on validated algorithms [4,21]. The Charlson comorbidities were calculated using the validated algorithms applied to physician claims and hospitalization data [22,23]. Any comorbidity identified during the 3 year period prior to cohort entry was included. To ascertain death, patients were followed up from their start date of dialysis, defined either by the first recorded date in the registry or the date of the second of the outpatient claims when the administrative data definition was used, until March 31, 2009 to ensure a minimum of 90 days of follow-up for all patients. Patients who met the case definition and subsequently died or moved out of the province (lost to follow-up) were included in analyses", "Basic descriptive statistics were used to describe demographic features and comorbidities for the overall cohort, the NARP/SARP dialysis cohort. Table 2 outlines the analytic framework adopted. 
We subsequently calculated positive agreement, sensitivity, positive predictive value (PPV), for each case definition, using the NARP/SARP registry data as the reference standard [24]. Positive agreement is the conditional probability, given the reference standard is positive, the administrative data definition is also positive [25]. Thus, the positive agreement will explore if there is an imbalance between the likelihood of agreeing on positive and negative cases. The kappa-statistic was used to assess overall agreement between the registry and the billing data. Landis and Koch categorize Kappa into five categories: less than 0.2 indicating \"poor agreement\", 0.21 to 0.40 indicating \"fair agreement\", 0.41 to 0.60 indicating \"moderate agreement, 0.61 to 0.80 indicating \"substantial agreement\" and greater than 0.81 indicating \"near perfect agreement\"[26]. We did not report specificity, negative agreement, or negative predictive value (NPV) as the large size of the non-diseased population (n = 1.11 million) and low incidence of ESRD in the general population makes these measures insensitive to changes in the case definitions. SAS version 9.2 was used for all analyses. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.\nReported measures of agreement: analytic framework\nAbbreviations: a: true positives; b: false positives; c: false negatives; d: true negatives; N: total validation sample\nPositive agreement = 2a/[2a+b+c]\nThe measure of agreement in positive cases; the number of true positives divided by all of the positives defined by both data sources. Positive agreement will show imbalance in the agreement between positive and negative responses.\nSensitivity = a/(a+c)\nThe proportion of those on dialysis according to the administrative data among those positive cases identified in reference standard. The ability of administrative data to correctly identify individuals on dialysis according to the reference standard\nPositive Predictive Value (PPV) = a/(a+b)\nThe proportion of those on dialysis according to the reference standard among those positive cases identified in the administrative data\nKappa = [(a+d)/N - ((a+b)(a+c) + (c+d)(b+d))/N2]/[1-((a+b)(a+c)+(c+d)(b+d))/N2]\nThe agreement between the administrative data and reference standard above what is expected by chance", "In total 1,118,097 individuals had at least 1 out-patient serum creatinine measure from Jan 1 2008 to Dec 31 2008. During that period 2,227 chronic dialysis patients (0.20% of the total study population) were registered in the ESRD registry. Table 3 presents the baseline characteristics of the overall population, the reference standard dialysis cohort and the cohort resulting from each of the administrative data definitions. The characteristics of the overall cohort are similar to the general Alberta population [27]. As expected, the dialysis cohort was older (64.0 vs. 52.6 y), had a higher prevalence of diabetes (54.5% vs. 12.7%), hypertension (89.0% vs. 34.7%) and a higher burden of comorbid disease (median number of Charlson comorbidities 3 vs. 0) compared to the total population. 
As the administrative data definition became more restrictive, the cohort became slightly older with a moderately higher burden of diabetes and hypertension.\nBaseline cohort characteristics\n*Interquartile range\nThe chronic dialysis case definitions based on 1 outpatient claim and 2 outpatient claims resulted in prevalence estimates similar to the reference standard (0.21% and 0.19% respectively). The other two definitions, incorporating claims spanning 90 days, underestimated the prevalence (Table 4). The positive agreement was highest when the definition using 2 outpatient claims was considered. The four coding algorithms for dialysis resulted in sensitivities ranging from 0.58 (continuous outpatient claims) to 0.81 (at least 1 outpatient claim). The PPVs ranged from 0.77 (at least 1 outpatient claim) to 0.86 (continuous outpatient claims). The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60 and 0.80, indicating \"substantial\" agreement [26]. \"At least 1 outpatient claim\" resulted in \"excellent\" agreement with a kappa statistic of 0.81; however, given the size of the true negative population this must be interpreted with caution [24].\nValidity of physician billing chronic dialysis case definitions compared with reference standard registry case definition\nAbbreviations: prev - Prevalence = Admin (+)/total N", "All four physician claims-based case definitions assessed resulted in \"substantial\" agreement with our reference standard registry definition for chronic dialysis. One outpatient claim for dialysis was the most sensitive definition, while more complicated definitions exhibited modest increases in positive predictive value. The optimal administrative data definition may vary with the research objective. For example, when seeking to maximize identification of dialysis as an outcome, an approach based on at least 1 outpatient claim may be preferable. In contrast, when establishing a cohort of patients with ESRD receiving chronic dialysis that captures the fewest non-diseased cases, the use of continuous outpatient claims may be better suited.\nSome of the discrepancies between our registry and physician claims algorithms for chronic dialysis likely relate to differences in the classification of patients who receive temporary dialysis or who die soon after initiating dialysis. Traditionally, administrative algorithms and national registries, such as the USRDS, have required a 90-day timeframe to define chronic dialysis [19,20]. Although this approach avoids identification of patients who receive temporary dialysis then recover renal function within 3 months, it introduces survivor bias and does not capture chronic dialysis patients who begin dialysis but die before meeting the inclusion criteria of the definition. Our study demonstrates that approaches based on 1 or 2 outpatient dialysis claims are substantially more sensitive than definitions based on 90 days of claims, although these definitions may include some patients who would not be classified as receiving chronic dialysis in a registry (false positive cases). Utilizing a definition that does not require the patient to survive a certain amount of time eliminates any potential survival bias and allows studies of the patient group that begins dialysis and dies soon after. 
However, the limitation of this definition is that it may also include patients with acute kidney injury who require dialysis for a short period and subsequently recover their renal function and no longer require dialysis. Furthermore, estimates of disease incidence and outcomes will not be comparable to studies based on most existing national registries.\nEstablishing the validity of an outpatient administrative data definition for chronic dialysis will allow researchers to utilize physician billing claims data to assess outcomes and form cohorts. This is of international relevance, even in countries where established dialysis registries are available. In the United States, not all researchers have the means to access the USRDS. In registries from other countries, often only cross-sectional, regional data with limited outcomes are available. Thus, validated methods for identifying chronic dialysis patients using billing claims data would be useful in health services research.\nWe found that the use of physician claims data resulted in the classification of patients as receiving dialysis who were not identified as such in our registry (false positives). Most of these patients were removed from the case definition when algorithms which required claims to span 90 days were used. This is in keeping with the hypothesis that these events may be acute kidney injury cases or patients who were initiated on dialysis but subsequently recovered renal function; i.e., those not considered chronic dialysis patients and thus not captured in the registry. We also found that physician claims failed to identify some patients captured in the registry (false negatives). As Alberta Health and Wellness does not employ any formal quality assurance or correction process, this may be due to missed billings, billing errors, billings made by physicians on alternative payment plans (shadow billing) or miscoding present in administrative data sources, as the number of such patients decreased when algorithms that required less intensive physician claims were employed.\nTo our knowledge, this is the first study to use procedure codes from outpatient administrative data sources to define chronic dialysis. Others have developed algorithms for acute kidney injury and chronic kidney disease using inpatient administrative data [5-13]. Given that the majority of chronic dialysis patients are treated in the outpatient setting, administrative data algorithms limited to inpatient encounters are likely to perform poorly when compared against a reference standard. Three previous studies have included outpatient claim data [14,15,28]. However, Kern et al. excluded chronic dialysis patients, focusing on the validity of administrative data to define chronic kidney disease defined by eGFR <60 ml/min/1.73 m² [28]. Neither Weintraub et al. nor Wilchesky et al. included procedural codes [14,15]. Their work was limited to ICD-9-CM diagnosis codes for chronic renal failure. Thus, our study is novel, and could facilitate further health services research in a high-risk population with ESRD who experience very high morbidity, mortality, and health care costs.\nOur study does have several limitations. First, the billing codes used are from the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP), a classification system developed and applied in Canada. However, most countries have similar billing practices and billing codes that could be mapped to the CCP codes. 
Second, we used a provincial registry of all chronic dialysis patients as the reference standard. Although this registry is geographically inclusive, some dialysis patients may be omitted from the registry in error, thereby resulting in misclassification. However, as this registry is linked to ongoing dialysis treatment, the number of patients not registered is expected to be small. Third, our study did not distinguish between dialysis modalities (hemodialysis versus peritoneal dialysis, or in-centre versus home dialysis), and the accuracy of patient registry and physician claims in these settings may vary. However, prior research has reported limitations in the accuracy of administrative data for identifying the timing of changes between dialysis modalities suggesting that administrative data sources may be better suited to the general identification of patients receiving chronic dialysis rather than a specific modality [29].", "We found that outpatient physician claims identified patients receiving chronic dialysis with \"substantial\" agreement to a reference standard dialysis registry definition. The use of 1 or 2 outpatient claims was most sensitive; however, had modestly lower positive predictive value than claims spanning 90 days or continuous claims. Given the variation in the way clinicians, researchers, and research tools define chronic dialysis, the optimal physician claims based definition will vary with the research objective.", "The authors declare that they have no competing interests.", "All the authors contributed to the conception and design. FC, MJ, RC and BR contributed to the data analysis and drafted the report. All of the authors contributed to the interpretation of data, critically revised the manuscript for important intellectual content and approved the final version submitted for publication.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2288/11/25/prepub\n" ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study Population", "Data sources", "Defining Chronic Dialysis using Administrative Data", "Comorbidities and other outcomes", "Statistical Analysis", "Results", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The global prevalence of end-stage renal disease (ESRD) requiring treatment with dialysis or kidney transplantation continues to increase [1,2]. Patients with ESRD experience far greater morbidity, mortality and health care costs than members of the general population, and studies evaluating health outcomes in this high-risk population are required worldwide [1,2].\nAdministrative health care databases offer an efficient and accessible approach to studying outcomes in large populations[3]. Physician billing claims data are one data source for identifying cases of ESRD because they are routinely collected for physician reimbursement, often span wide geographic areas, and have the potential to capture both in-hospital and outpatient encounters within a healthcare system[4]. However, before such data sources can be widely adopted for use in research where identification of cases of ESRD is critical, the validity of algorithms used to define case definitions of ESRD requires evaluation.\nLimited data demonstrate the validity of administrative data algorithms for identifying patients requiring chronic hemodialysis or peritoneal dialysis. Prior studies have assessed acute kidney injury [5-7], as well as the validity of using inpatient administrative data to identify chronic dialysis patients [8-13]. The two previous studies considering chronic dialysis in the outpatient setting have considered diagnostic codes [14,15], not procedural codes as are considered in this study. This is of particular importance as the majority of contemporary ESRD patients receive chronic dialysis as outpatients. We therefore did this study to determine the validity of algorithms derived from outpatient physician billing claims for defining chronic dialysis, compared to the reference standard of an ESRD registry.", "[SUBTITLE] Study Population [SUBSECTION] A cohort was identified from the Alberta Kidney Disease Network (AKDN - http://www.akdn.info) laboratory database to form the study population. The AKDN is a prospective data collection initiative of routine laboratory tests on all patients in the province of Alberta (population approx. 3 million) Canada, resulting in a population-based geographically inclusive database [16]. Patients identified from laboratory data are followed prospectively with linkage to administrative and other computerized sources to obtain detailed information including socio-demographic data, clinical data including comorbidities, health care encounters, health care costs, death, and kidney-related outcomes. The study cohort included patients aged 18 and older who had at least 1 outpatient serum creatinine between Jan 1 2008 and Dec 31 2008. Although a general population cohort would be optimal, our selected study population introduces minimal, if any, bias as anyone \"at-risk\" of ESRD or evaluated for or receiving chronic dialysis was expected to have received serum creatinine measurement as part of their routine clinical assessment.\nA cohort was identified from the Alberta Kidney Disease Network (AKDN - http://www.akdn.info) laboratory database to form the study population. The AKDN is a prospective data collection initiative of routine laboratory tests on all patients in the province of Alberta (population approx. 3 million) Canada, resulting in a population-based geographically inclusive database [16]. 
Patients identified from laboratory data are followed prospectively with linkage to administrative and other computerized sources to obtain detailed information including socio-demographic data, clinical data including comorbidities, health care encounters, health care costs, death, and kidney-related outcomes. The study cohort included patients aged 18 and older who had at least 1 outpatient serum creatinine between Jan 1 2008 and Dec 31 2008. Although a general population cohort would be optimal, our selected study population introduces minimal, if any, bias as anyone \"at-risk\" of ESRD or evaluated for or receiving chronic dialysis was expected to have received serum creatinine measurement as part of their routine clinical assessment.\n[SUBTITLE] Data sources [SUBSECTION] Patients treated for ESRD in Alberta are cared for by the Northern Alberta (NARP) and Southern Alberta (SARP) Renal Programs [17]. These programs are responsible for providing ESRD care including chronic dialysis within their geographic area. Each program maintains a prospective patient registry of all chronic dialysis patients, and captures detailed demographic and clinic data, including date of initiation of dialysis. Patients are enrolled at the time of first dialysis for ESRD (first hemodialysis session or first flushing of peritoneal dialysis catheter), or, for patients who initiate dialysis for acute kidney injury, when the attending nephrologist deems that dialysis will be chronic. The NARP and SARP registries were used to identify prevalent and incident dialysis patients from January 1, 2008 to December 31, 2008 (considered the reference standard). Prevalent cases were first identified on Jan 1 1999, with additional incident dialysis patients identified from that date forward. Non- Alberta residents were excluded.\nPhysicians in Alberta submit claims for reimbursement of services to Alberta Health and Wellness, the provincial health ministry, (the universal health care provider for the province of Alberta); claims are stored in a database which contains information on patients' personal health number, physician unique identifier, up to 3 ICD-9 diagnosis codes and 1 procedure code. Procedure codes are captured using the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP, which was developed by Statistics Canada to accompany the International Classification of Diseases version 9 (ICD-9) [18]. Physician claims capture all of the outpatient physician services and the majority of the inpatient services. All chronic dialysis patients in the province of Alberta are cared for by nephrologists, who are compensated either using a fee-for-service or salaried model. Regardless of compensation method, physicians are required to submit claims for all patient encounters.\nPatients treated for ESRD in Alberta are cared for by the Northern Alberta (NARP) and Southern Alberta (SARP) Renal Programs [17]. These programs are responsible for providing ESRD care including chronic dialysis within their geographic area. Each program maintains a prospective patient registry of all chronic dialysis patients, and captures detailed demographic and clinic data, including date of initiation of dialysis. Patients are enrolled at the time of first dialysis for ESRD (first hemodialysis session or first flushing of peritoneal dialysis catheter), or, for patients who initiate dialysis for acute kidney injury, when the attending nephrologist deems that dialysis will be chronic. 
The NARP and SARP registries were used to identify prevalent and incident dialysis patients from January 1, 2008 to December 31, 2008 (considered the reference standard). Prevalent cases were first identified on Jan 1 1999, with additional incident dialysis patients identified from that date forward. Non- Alberta residents were excluded.\nPhysicians in Alberta submit claims for reimbursement of services to Alberta Health and Wellness, the provincial health ministry, (the universal health care provider for the province of Alberta); claims are stored in a database which contains information on patients' personal health number, physician unique identifier, up to 3 ICD-9 diagnosis codes and 1 procedure code. Procedure codes are captured using the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP, which was developed by Statistics Canada to accompany the International Classification of Diseases version 9 (ICD-9) [18]. Physician claims capture all of the outpatient physician services and the majority of the inpatient services. All chronic dialysis patients in the province of Alberta are cared for by nephrologists, who are compensated either using a fee-for-service or salaried model. Regardless of compensation method, physicians are required to submit claims for all patient encounters.\n[SUBTITLE] Defining Chronic Dialysis using Administrative Data [SUBSECTION] We identified all patients with outpatient dialysis physician claims (Table 1) occurring from Jan 1 2008 to Dec 31 2008. We evaluated 4 different case definitions for chronic dialysis patients based on varying the number and timing of physicians claims for dialysis: 1) At least 1 outpatient claim, 2) At least 2 outpatient claims, 3) At least 2 outpatient claims at least 90 days apart and 4) Continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. We evaluated algorithms employing a 90 day period of claims to be congruent with other current administrative data definitions developed using inpatient data [19,20].\nAdministrative billing codes used to define outpatient chronic dialysis\nWe identified all patients with outpatient dialysis physician claims (Table 1) occurring from Jan 1 2008 to Dec 31 2008. We evaluated 4 different case definitions for chronic dialysis patients based on varying the number and timing of physicians claims for dialysis: 1) At least 1 outpatient claim, 2) At least 2 outpatient claims, 3) At least 2 outpatient claims at least 90 days apart and 4) Continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. We evaluated algorithms employing a 90 day period of claims to be congruent with other current administrative data definitions developed using inpatient data [19,20].\nAdministrative billing codes used to define outpatient chronic dialysis\n[SUBTITLE] Comorbidities and other outcomes [SUBSECTION] Demographic data were determined from the provincial administrative data files. Diabetes mellitus and hypertension were identified from hospital discharge records and physician claims based on validated algorithms [4,21]. The Charlson comorbidities were calculated using the validated algorithms applied to physician claims and hospitalization data [22,23]. Any comorbidity identified during the 3 year period prior to cohort entry was included. 
To ascertain death, patients were followed up from their start date of dialysis, defined either by the first recorded date in the registry or the date of the second of the outpatient claims when the administrative data definition was used, until March 31, 2009 to ensure a minimum of 90 days of follow-up for all patients. Patients who met the case definition and subsequently died or moved out of the province (lost to follow-up) were included in analyses\nDemographic data were determined from the provincial administrative data files. Diabetes mellitus and hypertension were identified from hospital discharge records and physician claims based on validated algorithms [4,21]. The Charlson comorbidities were calculated using the validated algorithms applied to physician claims and hospitalization data [22,23]. Any comorbidity identified during the 3 year period prior to cohort entry was included. To ascertain death, patients were followed up from their start date of dialysis, defined either by the first recorded date in the registry or the date of the second of the outpatient claims when the administrative data definition was used, until March 31, 2009 to ensure a minimum of 90 days of follow-up for all patients. Patients who met the case definition and subsequently died or moved out of the province (lost to follow-up) were included in analyses\n[SUBTITLE] Statistical Analysis [SUBSECTION] Basic descriptive statistics were used to describe demographic features and comorbidities for the overall cohort, the NARP/SARP dialysis cohort. Table 2 outlines the analytic framework adopted. We subsequently calculated positive agreement, sensitivity, positive predictive value (PPV), for each case definition, using the NARP/SARP registry data as the reference standard [24]. Positive agreement is the conditional probability, given the reference standard is positive, the administrative data definition is also positive [25]. Thus, the positive agreement will explore if there is an imbalance between the likelihood of agreeing on positive and negative cases. The kappa-statistic was used to assess overall agreement between the registry and the billing data. Landis and Koch categorize Kappa into five categories: less than 0.2 indicating \"poor agreement\", 0.21 to 0.40 indicating \"fair agreement\", 0.41 to 0.60 indicating \"moderate agreement, 0.61 to 0.80 indicating \"substantial agreement\" and greater than 0.81 indicating \"near perfect agreement\"[26]. We did not report specificity, negative agreement, or negative predictive value (NPV) as the large size of the non-diseased population (n = 1.11 million) and low incidence of ESRD in the general population makes these measures insensitive to changes in the case definitions. SAS version 9.2 was used for all analyses. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.\nReported measures of agreement: analytic framework\nAbbreviations: a: true positives; b: false positives; c: false negatives; d: true negatives; N: total validation sample\nPositive agreement = 2a/[2a+b+c]\nThe measure of agreement in positive cases; the number of true positives divided by all of the positives defined by both data sources. Positive agreement will show imbalance in the agreement between positive and negative responses.\nSensitivity = a/(a+c)\nThe proportion of those on dialysis according to the administrative data among those positive cases identified in reference standard. 
The ability of administrative data to correctly identify individuals on dialysis according to the reference standard\nPositive Predictive Value (PPV) = a/(a+b)\nThe proportion of those on dialysis according to the reference standard among those positive cases identified in the administrative data\nKappa = [(a+d)/N - ((a+b)(a+c) + (c+d)(b+d))/N2]/[1-((a+b)(a+c)+(c+d)(b+d))/N2]\nThe agreement between the administrative data and reference standard above what is expected by chance\nBasic descriptive statistics were used to describe demographic features and comorbidities for the overall cohort, the NARP/SARP dialysis cohort. Table 2 outlines the analytic framework adopted. We subsequently calculated positive agreement, sensitivity, positive predictive value (PPV), for each case definition, using the NARP/SARP registry data as the reference standard [24]. Positive agreement is the conditional probability, given the reference standard is positive, the administrative data definition is also positive [25]. Thus, the positive agreement will explore if there is an imbalance between the likelihood of agreeing on positive and negative cases. The kappa-statistic was used to assess overall agreement between the registry and the billing data. Landis and Koch categorize Kappa into five categories: less than 0.2 indicating \"poor agreement\", 0.21 to 0.40 indicating \"fair agreement\", 0.41 to 0.60 indicating \"moderate agreement, 0.61 to 0.80 indicating \"substantial agreement\" and greater than 0.81 indicating \"near perfect agreement\"[26]. We did not report specificity, negative agreement, or negative predictive value (NPV) as the large size of the non-diseased population (n = 1.11 million) and low incidence of ESRD in the general population makes these measures insensitive to changes in the case definitions. SAS version 9.2 was used for all analyses. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.\nReported measures of agreement: analytic framework\nAbbreviations: a: true positives; b: false positives; c: false negatives; d: true negatives; N: total validation sample\nPositive agreement = 2a/[2a+b+c]\nThe measure of agreement in positive cases; the number of true positives divided by all of the positives defined by both data sources. Positive agreement will show imbalance in the agreement between positive and negative responses.\nSensitivity = a/(a+c)\nThe proportion of those on dialysis according to the administrative data among those positive cases identified in reference standard. The ability of administrative data to correctly identify individuals on dialysis according to the reference standard\nPositive Predictive Value (PPV) = a/(a+b)\nThe proportion of those on dialysis according to the reference standard among those positive cases identified in the administrative data\nKappa = [(a+d)/N - ((a+b)(a+c) + (c+d)(b+d))/N2]/[1-((a+b)(a+c)+(c+d)(b+d))/N2]\nThe agreement between the administrative data and reference standard above what is expected by chance", "A cohort was identified from the Alberta Kidney Disease Network (AKDN - http://www.akdn.info) laboratory database to form the study population. The AKDN is a prospective data collection initiative of routine laboratory tests on all patients in the province of Alberta (population approx. 3 million) Canada, resulting in a population-based geographically inclusive database [16]. 
Patients identified from laboratory data are followed prospectively with linkage to administrative and other computerized sources to obtain detailed information including socio-demographic data, clinical data including comorbidities, health care encounters, health care costs, death, and kidney-related outcomes. The study cohort included patients aged 18 and older who had at least 1 outpatient serum creatinine between Jan 1 2008 and Dec 31 2008. Although a general population cohort would be optimal, our selected study population introduces minimal, if any, bias as anyone \"at-risk\" of ESRD or evaluated for or receiving chronic dialysis was expected to have received serum creatinine measurement as part of their routine clinical assessment.", "Patients treated for ESRD in Alberta are cared for by the Northern Alberta (NARP) and Southern Alberta (SARP) Renal Programs [17]. These programs are responsible for providing ESRD care including chronic dialysis within their geographic area. Each program maintains a prospective patient registry of all chronic dialysis patients, and captures detailed demographic and clinic data, including date of initiation of dialysis. Patients are enrolled at the time of first dialysis for ESRD (first hemodialysis session or first flushing of peritoneal dialysis catheter), or, for patients who initiate dialysis for acute kidney injury, when the attending nephrologist deems that dialysis will be chronic. The NARP and SARP registries were used to identify prevalent and incident dialysis patients from January 1, 2008 to December 31, 2008 (considered the reference standard). Prevalent cases were first identified on Jan 1 1999, with additional incident dialysis patients identified from that date forward. Non- Alberta residents were excluded.\nPhysicians in Alberta submit claims for reimbursement of services to Alberta Health and Wellness, the provincial health ministry, (the universal health care provider for the province of Alberta); claims are stored in a database which contains information on patients' personal health number, physician unique identifier, up to 3 ICD-9 diagnosis codes and 1 procedure code. Procedure codes are captured using the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP, which was developed by Statistics Canada to accompany the International Classification of Diseases version 9 (ICD-9) [18]. Physician claims capture all of the outpatient physician services and the majority of the inpatient services. All chronic dialysis patients in the province of Alberta are cared for by nephrologists, who are compensated either using a fee-for-service or salaried model. Regardless of compensation method, physicians are required to submit claims for all patient encounters.", "We identified all patients with outpatient dialysis physician claims (Table 1) occurring from Jan 1 2008 to Dec 31 2008. We evaluated 4 different case definitions for chronic dialysis patients based on varying the number and timing of physicians claims for dialysis: 1) At least 1 outpatient claim, 2) At least 2 outpatient claims, 3) At least 2 outpatient claims at least 90 days apart and 4) Continuous outpatient claims at least 90 days apart with no gap in claims greater than 21 days. 
We evaluated algorithms employing a 90 day period of claims to be congruent with other current administrative data definitions developed using inpatient data [19,20].\nAdministrative billing codes used to define outpatient chronic dialysis", "Demographic data were determined from the provincial administrative data files. Diabetes mellitus and hypertension were identified from hospital discharge records and physician claims based on validated algorithms [4,21]. The Charlson comorbidities were calculated using the validated algorithms applied to physician claims and hospitalization data [22,23]. Any comorbidity identified during the 3 year period prior to cohort entry was included. To ascertain death, patients were followed up from their start date of dialysis, defined either by the first recorded date in the registry or the date of the second of the outpatient claims when the administrative data definition was used, until March 31, 2009 to ensure a minimum of 90 days of follow-up for all patients. Patients who met the case definition and subsequently died or moved out of the province (lost to follow-up) were included in analyses", "Basic descriptive statistics were used to describe demographic features and comorbidities for the overall cohort, the NARP/SARP dialysis cohort. Table 2 outlines the analytic framework adopted. We subsequently calculated positive agreement, sensitivity, positive predictive value (PPV), for each case definition, using the NARP/SARP registry data as the reference standard [24]. Positive agreement is the conditional probability, given the reference standard is positive, the administrative data definition is also positive [25]. Thus, the positive agreement will explore if there is an imbalance between the likelihood of agreeing on positive and negative cases. The kappa-statistic was used to assess overall agreement between the registry and the billing data. Landis and Koch categorize Kappa into five categories: less than 0.2 indicating \"poor agreement\", 0.21 to 0.40 indicating \"fair agreement\", 0.41 to 0.60 indicating \"moderate agreement, 0.61 to 0.80 indicating \"substantial agreement\" and greater than 0.81 indicating \"near perfect agreement\"[26]. We did not report specificity, negative agreement, or negative predictive value (NPV) as the large size of the non-diseased population (n = 1.11 million) and low incidence of ESRD in the general population makes these measures insensitive to changes in the case definitions. SAS version 9.2 was used for all analyses. Ethics approval was obtained from the Conjoint Health Research Ethics Board at the University of Calgary.\nReported measures of agreement: analytic framework\nAbbreviations: a: true positives; b: false positives; c: false negatives; d: true negatives; N: total validation sample\nPositive agreement = 2a/[2a+b+c]\nThe measure of agreement in positive cases; the number of true positives divided by all of the positives defined by both data sources. Positive agreement will show imbalance in the agreement between positive and negative responses.\nSensitivity = a/(a+c)\nThe proportion of those on dialysis according to the administrative data among those positive cases identified in reference standard. 
The ability of administrative data to correctly identify individuals on dialysis according to the reference standard.
Positive Predictive Value (PPV) = a/(a+b)
The proportion of those on dialysis according to the reference standard among those positive cases identified in the administrative data.
Kappa = [(a+d)/N - ((a+b)(a+c) + (c+d)(b+d))/N²]/[1 - ((a+b)(a+c) + (c+d)(b+d))/N²]
The agreement between the administrative data and the reference standard above what is expected by chance (a numerical sketch of these agreement measures is given below).", "In total 1,118,097 individuals had at least 1 outpatient serum creatinine measure from Jan 1 2008 to Dec 31 2008. During that period 2,227 chronic dialysis patients (0.20% of the total study population) were registered in the ESRD registry. Table 3 presents the baseline characteristics of the overall population, the reference standard dialysis cohort and the cohort resulting from each of the administrative data definitions. The characteristics of the overall cohort are similar to those of the general Alberta population [27]. As expected, the dialysis cohort was older (64.0 vs. 52.6 y), had a higher prevalence of diabetes (54.5% vs. 12.7%) and hypertension (89.0% vs. 34.7%), and a higher burden of comorbid disease (median number of Charlson comorbidities 3 vs. 0) compared to the total population. As the administrative data definition became more restrictive, the cohort became slightly older with a moderately higher burden of diabetes and hypertension.
Baseline cohort characteristics
*Interquartile range
The chronic dialysis case definitions based on 1 outpatient claim and 2 outpatient claims resulted in prevalence estimates similar to the reference standard (0.21% and 0.19%, respectively). The other two definitions, incorporating claims spanning 90 days, underestimated the prevalence (Table 4). The positive agreement was highest when the definition using 2 outpatient claims was considered. The four coding algorithms for dialysis resulted in sensitivities ranging from 0.58 (continuous outpatient claims) to 0.81 (at least 1 outpatient claim). The PPVs ranged from 0.77 (at least 1 outpatient claim) to 0.86 (continuous outpatient claims). The three definitions requiring at least 2 outpatient claims resulted in kappa statistics between 0.60 and 0.80, indicating "substantial" agreement [26]. "At least 1 outpatient claim" resulted in "excellent" agreement with a kappa statistic of 0.81; however, given the size of the true negative population this must be interpreted with caution [24].
Validity of physician billing chronic dialysis case definitions compared with reference standard registry case definition
Abbreviations: prev - Prevalence = Admin (+)/total N", "All four physician claims-based case definitions assessed resulted in "substantial" agreement with our reference standard registry definition for chronic dialysis. One outpatient claim for dialysis was the most sensitive definition, while more complicated definitions exhibited modest increases in positive predictive value. The optimal administrative data definition may vary with the research objective. For example, when seeking to maximize identification of dialysis as an outcome, an approach based on at least 1 outpatient claim may be preferable.
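As referenced above, the agreement measures defined in the analytic framework (positive agreement, sensitivity, PPV and kappa) can be illustrated with a small numerical sketch. The 2×2 counts passed to the function are hypothetical, not the study's cell counts (only the total, 1,118,097, matches the study cohort); the study's own calculations were performed in SAS.

```python
def agreement_measures(a, b, c, d):
    """Agreement measures from the 2x2 framework: a = true positives,
    b = false positives, c = false negatives, d = true negatives."""
    n = a + b + c + d
    positive_agreement = 2 * a / (2 * a + b + c)
    sensitivity = a / (a + c)
    ppv = a / (a + b)
    observed_agreement = (a + d) / n
    chance_agreement = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
    return {"positive agreement": positive_agreement, "sensitivity": sensitivity,
            "PPV": ppv, "kappa": kappa}

# Hypothetical counts for one case definition against the registry reference standard
print(agreement_measures(a=1800, b=400, c=427, d=1_115_470))
```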
In contrast, when establishing a cohort of patients with ESRD receiving chronic dialysis that includes the fewest non-diseased cases, the use of continuous outpatient claims may be better suited.
Some of the discrepancies between our registry and the physician claims algorithms for chronic dialysis likely relate to differences in the classification of patients who receive temporary dialysis or who die soon after initiating dialysis. Traditionally, administrative algorithms and national registries, such as the USRDS, have required a 90-day timeframe to define chronic dialysis [19,20]. Although this approach avoids identifying patients who receive temporary dialysis and then recover renal function within 3 months, it introduces survivor bias and does not capture chronic dialysis patients who begin dialysis but die before meeting the inclusion criteria of the definition. Our study demonstrates that approaches based on 1 or 2 outpatient dialysis claims are substantially more sensitive than definitions based on 90 days of claims, although such a definition may include some patients who would not be classified as receiving chronic dialysis in a registry (false positive cases). Utilizing a definition that does not require the patient to survive a certain amount of time eliminates any potential survivor bias and allows study of the patient group that begins dialysis and dies soon after. However, the limitation of this definition is that it may also include patients with acute kidney injury requiring dialysis for a short period who subsequently recover renal function and no longer require dialysis. Furthermore, estimates of disease incidence and outcomes will not be comparable to studies based on most existing national registries.
Establishing the validity of an outpatient administrative data definition for chronic dialysis will allow researchers to utilize physician billing claims data to assess outcomes and form cohorts. This is of international relevance, even in countries where established dialysis registries are available. In the United States, not all researchers have the means to access the USRDS, and registries in other countries often provide only cross-sectional, regional data with limited outcomes. Thus, validated methods for identifying chronic dialysis patients using billing claims data would be useful in health services research.
We found that the use of physician claims data resulted in the classification of patients as receiving dialysis who were not identified as such in our registry (false positives). Most of these patients were removed from the case definition when algorithms requiring claims to span 90 days were used. This is in keeping with the hypothesis that these events may be acute kidney injury cases or patients who were initiated on dialysis but subsequently recovered renal function; i.e., those not considered chronic dialysis patients and thus not captured in the registry. We also found that physician claims failed to identify some patients captured in the registry (false negatives).
As Alberta Health and Wellness does not employ any formal quality assurance or correction process, this may be due to missed billings, billing errors, billings made by physicians on alternative payment plans (shadow billing) or miscoding present in administrative data sources, as the number of such patients decreased when algorithms with less intensive claim requirements were employed.
To our knowledge, this is the first study to use outpatient administrative data sources with procedure codes to define chronic dialysis. Others have developed algorithms for acute kidney injury and chronic kidney disease using inpatient administrative data [5-13]. Given that the majority of chronic dialysis patients are treated in the outpatient setting, administrative data algorithms limited to inpatient encounters are likely to perform poorly when compared against a reference standard. Three previous studies have included outpatient claims data [14,15,28]. However, Kern et al. excluded chronic dialysis patients, focusing on the validity of administrative data to define chronic kidney disease defined by eGFR <60 ml/min/1.73 m² [28]. Neither Weintraub et al. nor Wilchesky et al. included procedural codes [14,15]; their work was limited to ICD-9-CM diagnosis codes for chronic renal failure. Thus, our study is novel and could facilitate further health services research in a high-risk population with ESRD who experience very high morbidity, mortality and health care costs.
Our study does have several limitations. First, the billing codes used are from the Canadian Classification of Diagnostic, Therapeutic and Surgical Procedures (CCP), a classification system developed and applied in Canada. However, most countries have similar billing practices and billing codes that could be mapped to the CCP codes. Second, we used a provincial registry of all chronic dialysis patients as the reference standard. Although this registry is geographically inclusive, some dialysis patients may be omitted from the registry in error, thereby resulting in misclassification. However, as this registry is linked to ongoing dialysis treatment, the number of patients not registered is expected to be small. Third, our study did not distinguish between dialysis modalities (hemodialysis versus peritoneal dialysis, or in-centre versus home dialysis), and the accuracy of the patient registry and physician claims in these settings may vary. However, prior research has reported limitations in the accuracy of administrative data for identifying the timing of changes between dialysis modalities, suggesting that administrative data sources may be better suited to the general identification of patients receiving chronic dialysis rather than a specific modality [29].", "We found that outpatient physician claims identified patients receiving chronic dialysis with "substantial" agreement to a reference standard dialysis registry definition. The use of 1 or 2 outpatient claims was most sensitive but had modestly lower positive predictive value than claims spanning 90 days or continuous claims. Given the variation in the way clinicians, researchers and research tools define chronic dialysis, the optimal physician claims-based definition will vary with the research objective.", "The authors declare that they have no competing interests.", "All the authors contributed to the conception and design. FC, MJ, RC and BR contributed to the data analysis and drafted the report.
All of the authors contributed to the interpretation of data, critically revised the manuscript for important intellectual content and approved the final version submitted for publication.", "The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1471-2288/11/25/prepub" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null ]
[]
Impact on mortality and cancer incidence rates of using random invitation from population registers for recruitment to trials.
21362184
Participants in trials evaluating preventive interventions such as screening are on average healthier than the general population. To decrease this 'healthy volunteer effect' (HVE), women were randomly invited from population registers to participate in the United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) and were not allowed to self refer. This report assesses the extent of the HVE still prevalent in UKCTOCS and considers how certain shortfalls in mortality and incidence can be related to differences in socioeconomic status.
BACKGROUND
Between 2001 and 2005, 202 638 postmenopausal women joined the trial out of 1 243 312 women randomly invited from local health authority registers. The cohort was flagged for deaths and cancer registrations and mean follow up at censoring was 5.55 years for mortality, and 2.58 years for cancer incidence. Overall and cause-specific Standardised Mortality Ratios (SMRs) and Standardised Incidence Ratios (SIRs) were calculated based on national mortality (2005) and cancer incidence (2006) statistics. The Index of Multiple Deprivation (IMD 2007) was used to assess the link between socioeconomic status and mortality/cancer incidence, and differences between the invited and recruited populations.
METHODS
The SMR for all trial participants was 37%. By subgroup, the SMRs were higher in younger age groups, at the extremes of the BMI distribution and with each increasing year in the trial. There was a clear trend between lower socioeconomic status and increased mortality, which was less pronounced for incidence. While the invited population had higher mean IMD scores (more deprived) than the national average, those who joined the trial were less deprived.
RESULTS
Recruitment to screening trials through invitation from population registers does not prevent a pronounced HVE on mortality. The impact on cancer incidence is much smaller. Similar shortfalls can be expected in other screening RCTs and it may be prudent to use the various mortality and incidence rates presented as guides for calculating event rates and power in RCTs involving women.
CONCLUSIONS
[ "Age Factors", "Aged", "Body Mass Index", "Female", "Humans", "Incidence", "Mass Screening", "Middle Aged", "Ovarian Neoplasms", "Patient Selection", "Postmenopause", "Registries", "Risk Assessment", "Risk Factors", "Socioeconomic Factors", "Time Factors", "United Kingdom" ]
3058013
null
null
Methods
[SUBTITLE] Study Design [SUBSECTION] UKCTOCS is an RCT aiming to assess the impact of screening on ovarian cancer mortality while comprehensively evaluating performance characteristics, physical and psychological morbidity, compliance, and cost of the screening strategies. It was set up in 13 NHS Trusts in England, Wales and Northern Ireland. Women living in adjoining Primary Care Trusts (including Local Health Boards in Wales) were invited to participate in the trial. Those who accepted the invitation attended a local recruitment clinic. Detailed description of the invitation and recruitment process, as well as inclusion/exclusion criteria, is given elsewhere [5]. Of relevance to this analysis is that women with an active malignancy were only eligible if they had no documented persistent or recurrent disease, and those with a previous history of ovarian cancer were excluded. All women provided written consent. [SUBTITLE] Follow up [SUBSECTION] Women recruited into the trial were 'flagged' for follow up with the NHS Information Centre for Health and Social Care (ICHSC) in England and Wales (for death and cancer registration) and with the Central Services Agency (CSA, for deaths in Northern Ireland) and Cancer Registry in Northern Ireland (NICR). Almost all women were successfully flagged (n = 202 593). From the received death certificate copies, the 'underlying cause of death' was used for the cause-specific observed counts. Barring an inquest, death certificates were mostly received within 3 months of the death. To ensure completeness of data on deaths, events were censored on the 1st June 2009, eight months prior to the last death certificates update on 1st February 2010. Data provided by the CSA on cause of death was incomplete and therefore women from Northern Ireland were excluded from the calculation of cause-specific SMRs. Information on all incident cancers can take up to 3 years to be recorded with the national registries. In order to ensure completeness of data on cancers, events were censored on 1st June 2006, allowing a time lag of 3.75 years between events and the final cancer registration update from NHS ICHSC and the NICR in February 2010. Unlike the CSA, the NICR provided full data on cancer type, so that all women were included in the cancer-specific incidence analysis. [SUBTITLE] Analysis [SUBSECTION] [SUBTITLE] Mortality and cancer incidence [SUBSECTION] Evidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR), which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44)); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT. The effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings. For each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause and the total female population in 5 year age groups. To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best fitting quadratic or exponential function. For nearly all causes the fit was excellent, with R² always over 0.95 and mostly over 0.99.
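The interpolation step just described might look something like the sketch below, which fits a quadratic to rates placed at the 5-year age-band midpoints and, as an alternative, a log-linear (exponential) curve. The rates shown are invented for illustration; the trial's actual inputs come from the ONS 2005 mortality and 2006 incidence tables.

```python
import numpy as np

# Hypothetical all-cause mortality rates per 1000 woman-years at the midpoints
# of the 5-year age bands (the real inputs come from the ONS 2005 tables).
midpoints = np.array([52.5, 57.5, 62.5, 67.5, 72.5])
rates = np.array([2.9, 4.6, 7.4, 11.8, 19.0])

# Best-fitting quadratic through the band midpoints.
quad_coeffs = np.polyfit(midpoints, rates, deg=2)
def rate_quadratic(age):
    return np.polyval(quad_coeffs, age)

# Exponential alternative: straight-line fit on the log scale.
log_coeffs = np.polyfit(midpoints, np.log(rates), deg=1)
def rate_exponential(age):
    return np.exp(np.polyval(log_coeffs, age))

print(rate_quadratic(64.0), rate_exponential(64.0))
```

Whichever form fits better would play the role of the imputed rate Dzx used in the EMR calculation described next.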
Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence, where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). All confidence intervals for the SMRs and SIRs were based on an assumed Poisson distribution for the observed deaths or cancers. To calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z, if:
• t is the year on trial (t = 1, 2...8)
• xi is the age of woman i at randomisation
• Dzx is the imputed mortality rate for cause of death z at age x
• yti is the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for the most recent/last year on trial)
then
$$\mathrm{EMR}_{i}^{z} = \sum_{t=1}^{8} D_{z}\left(x_{i} + t - 0.5\right)\, y_{ti}$$
and the overall EMR for cause of death z is simply:
$$\mathrm{EMR}^{z} = \sum_{i=1}^{202\,593} \mathrm{EMR}_{i}^{z}$$
Note that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t - 0.5 instead of t - 1). Also note that if women withdraw from attending for screening in the trial at any point, they continue to be followed up through flagging for death and cancer registration. Hence no adjustment is made for withdrawals.
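As a minimal sketch of the EMR formula above (not the trial's actual code), the function below accumulates Dz(xi + t - 0.5) multiplied by yti over each full or partial year a woman spends on the trial. The rate function and the example inputs are hypothetical placeholders for the fitted ONS rates.

```python
def individual_emr(age_at_randomisation, years_on_trial, rate_at):
    """EMR for one woman and one cause: sum over trial years t of
    rate_at(age_at_randomisation + t - 0.5) * y_t, where y_t is the fraction
    of trial year t completed before censoring or death."""
    emr = 0.0
    t = 1
    while years_on_trial > (t - 1):
        y_t = min(1.0, years_on_trial - (t - 1))  # 1 for full years, fractional for the last
        emr += rate_at(age_at_randomisation + t - 0.5) * y_t
        t += 1
    return emr

# Overall expected count for a cause: sum of the individual EMRs (hypothetical inputs).
women = [(61.2, 5.55), (70.0, 7.10), (54.8, 3.25)]  # (age at randomisation, years on trial)
flat_rate = lambda age: 0.004                        # placeholder for the fitted ONS rate
overall_emr = sum(individual_emr(x, y, flat_rate) for x, y in women)
print(round(overall_emr, 3))
```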
[SUBTITLE] Socioeconomic status [SUBSECTION] The PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. It was chosen over the other two census-based available indices (Townsend or Carstairs) as being, firstly, the most up-to-date and, secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published; however, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence. IMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women. [SUBTITLE] Variation in HVE with age/region/BMI/time on trial [SUBSECTION] For all these analyses, the expected and observed mortality rates for all relevant women in each group were summed. Regional variations were compared by summing over the individual recruitment centres. For the age group analysis, the groupings were made by using the age at randomisation to categorise into 50-54, 55-59, 60-64, 65-69 or 70-74 age groups. To assess any change in the SMRs/SIRs over the trial period, the overall EMR was partitioned by year in trial by summing the individual EMRs for each year t. This was done for the individual causes of mortality and incidence, as well as for overall mortality. Mortality versus body mass index (BMI) of the women was explored by separating women who provided height and weight at randomisation into the standard underweight, normal, overweight and obese categories (up to 18.5; 18.5-25; 25-30; over 30, respectively) and comparing the respective SMRs.
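All of the group-wise comparisons described in this subsection reduce to the same calculation: sum the observed and expected events within a group, form the SMR (or SIR) as observed/expected × 100, and attach a confidence interval based on a Poisson assumption for the observed count. The sketch below uses a simple normal approximation to the Poisson interval and hypothetical counts; it illustrates the idea rather than the investigators' exact method.

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """SMR (observed/expected x 100) with an approximate 95% interval obtained
    by treating the observed count as Poisson (normal approximation)."""
    smr = 100.0 * observed / expected
    half_width = z * math.sqrt(observed)
    return (smr,
            100.0 * (observed - half_width) / expected,
            100.0 * (observed + half_width) / expected)

# Hypothetical observed/expected deaths by IMD quintile (not trial data)
quintiles = {"Q1 (least deprived)": (780, 2400), "Q5 (most deprived)": (1150, 2450)}
for label, (obs, exp) in quintiles.items():
    smr, lo, hi = smr_with_ci(obs, exp)
    print(f"{label}: SMR {smr:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")
```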
null
null
null
null
[ "Background", "Study Design", "Follow up", "Analysis", "Mortality and cancer incidence", "Socioeconomic status", "Variation in HVE with age/region/BMI/time on trial", "Results", "Mortality rates", "Socioeconomic status comparison", "Cancer Incidence", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Funding" ]
[ "In clinical studies, mortality and morbidity data from the general population is used to calculate expected death and incidence rates. However, volunteers participating in trials evaluating preventive interventions such as screening are on average healthier than the general population [1-3]. The implication of this 'healthy volunteer effect' (HVE) is that trial participants have lower mortality and morbidity than the general population. In randomised controlled trials (RCTs), this can cause a shortfall in expected event rates, which are the foundation of the trial's power calculations [4]. The latter determine the sample size (number of participants recruited and their time on the trial) and contribute significantly to design, logistics and cost. A deficit could mean a significant fall in power and may require alteration of the design midway through the trial if the primary objective is to be achieved.
The United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) is a large multi-centre randomised controlled trial of 202 638 women recruited between 2001 and 2005 [5]. In order to ensure trial volunteers were as representative of the general population as possible, women were not allowed to self refer. Instead, over 1.2 million women aged 50-74 were randomly invited from the age-sex registers of 27 participating local health authorities [5]. The underlying hypothesis was that the HVE is largely related to socioeconomic status, with participants being more affluent, better educated and more health-conscious than the population as a whole. This bias was thought to be magnified by recruitment using self-referral, which is dependent on publicising the trial through a variety of media such as newspapers, magazines, radio, television, posters at numerous venues and meetings.
In this paper, we report on the impact of population invitation on the HVE in UKCTOCS by comparing observed and expected mortality and cancer incidence rates in the trial, particularly with regard to socioeconomic status. We also consider the differences in deprivation between those invited and those recruited. The results provide vital data to inform trial design and sample size calculations for those seeking to undertake screening studies involving the general population.",
Almost all women were successfully flagged (n = 202 593). From the received death certificate copies, the 'underlying cause of death' was used for the cause-specific observed counts. Barring an inquest, death certificates were mostly received within 3 months of the death. To ensure completeness of data on deaths, events were censored on the 1st June 2009, eight months prior to the last death certificates update on 1st February 2010. Data provided by the CSA on cause of death was incomplete and therefore women from Northern Ireland were excluded from the calculation of cause-specific SMRs.\nInformation on all incident cancers can take up to 3 years to be recorded with the national registries. In order to ensure completeness of data on cancers, events were censored on 1st June 2006, allowing a time lag of 3.75 years between events and the final cancer registration update from NHS ICHSC and the NICR in February 2010. Unlike the CSA, the NICR provided full data on cancer type, so that all women were included in the cancer-specific incidence analysis.", "[SUBTITLE] Mortality and cancer incidence [SUBSECTION] Evidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR) which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT.\nThe effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings.\nFor each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause as well as total female population in 5 year age groups. To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best fitting quadratic or exponential function. For nearly all causes the fit was excellent with R2 always over 0.95 and mostly over 0.99. 
Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). All confidence intervals for the SMRs and SIRs were based on an assumed poisson distribution for the observed deaths or cancers.\nTo calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z if:\n• t is the year on trial (t = 1, 2...8)\n• xi is the age of woman i at randomisation\n• Dzx is the imputed mortality rate for cause of death z at age x\n• yti is the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for most recent/last year on trial)\nthen\n\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n=\n\n\n∑\n\nt\n=\n1\n,\n2...8\n\n\n\n[\n\nD\n\nz\n(\n\nx\ni\n\n+\nt\n−\n0.5\n)\n\n\n]\n\ny\n\nt\ni\n\n\n\n\n\n\n\n\nand the overall EMR for cause of death z is simply:\n\n\n\n\nE\nM\n\nR\nz\n\n=\n\n\n∑\n\ni\n=\n1\n,\n2...202593\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n\n\n\n\n\n\nNote that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t - 0.5 instead of t-1). Also note, if women withdraw from attending for screening in the trial at any point, they continue to be followed up through flagging for death and cancer registration. Hence no adjustment is made for withdrawals.\nEvidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR) which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT.\nThe effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings.\nFor each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause as well as total female population in 5 year age groups. 
To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best fitting quadratic or exponential function. For nearly all causes the fit was excellent with R2 always over 0.95 and mostly over 0.99. Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). All confidence intervals for the SMRs and SIRs were based on an assumed poisson distribution for the observed deaths or cancers.\nTo calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z if:\n• t is the year on trial (t = 1, 2...8)\n• xi is the age of woman i at randomisation\n• Dzx is the imputed mortality rate for cause of death z at age x\n• yti is the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for most recent/last year on trial)\nthen\n\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n=\n\n\n∑\n\nt\n=\n1\n,\n2...8\n\n\n\n[\n\nD\n\nz\n(\n\nx\ni\n\n+\nt\n−\n0.5\n)\n\n\n]\n\ny\n\nt\ni\n\n\n\n\n\n\n\n\nand the overall EMR for cause of death z is simply:\n\n\n\n\nE\nM\n\nR\nz\n\n=\n\n\n∑\n\ni\n=\n1\n,\n2...202593\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n\n\n\n\n\n\nNote that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t - 0.5 instead of t-1). Also note, if women withdraw from attending for screening in the trial at any point, they continue to be followed up through flagging for death and cancer registration. Hence no adjustment is made for withdrawals.\n[SUBTITLE] Socioeconomic status [SUBSECTION] The PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. It was chosen over the other two census based available indices (Townsend or Carstairs) as firstly, the most up-to-date and secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published. However, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence.\nIMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women.\nThe PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. 
It was chosen over the other two census based available indices (Townsend or Carstairs) as firstly, the most up-to-date and secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published. However, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence.\nIMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women.\n[SUBTITLE] Variation in HVE with age/region/BMI/time on trial [SUBSECTION] For all these analyses, the expected and observed mortality rates for all relevant women in each group were summed. Regional variations were compared by summing over the individual recruitment centres. For the age group analysis the groupings were made by using the age at randomisation to categorise into 50-54, 55-59, 60-65, 65-69 or 70-74 age groups. To assess any change in the SMR/SIRs over the trial period, the overall EMR was partitioned into year in trial by summing the individual EMRs for each year t. This was done for the individual causes of mortality and incidence, as well as for overall mortality. Mortality versus body mass index (BMI) of the women was explored by separating women who provided height and weight at randomisation into the standard underweight, normal, overweight and obese categories (up to 18.5; 18.5-25; 25-30; over 30, respectively) and comparing respective SMRs.\nFor all these analyses, the expected and observed mortality rates for all relevant women in each group were summed. Regional variations were compared by summing over the individual recruitment centres. For the age group analysis the groupings were made by using the age at randomisation to categorise into 50-54, 55-59, 60-65, 65-69 or 70-74 age groups. To assess any change in the SMR/SIRs over the trial period, the overall EMR was partitioned into year in trial by summing the individual EMRs for each year t. This was done for the individual causes of mortality and incidence, as well as for overall mortality. Mortality versus body mass index (BMI) of the women was explored by separating women who provided height and weight at randomisation into the standard underweight, normal, overweight and obese categories (up to 18.5; 18.5-25; 25-30; over 30, respectively) and comparing respective SMRs.", "Evidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR) which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). 
"Evidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR), which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44)); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT.\nThe effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings.\nFor each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause and the total female population in 5-year age groups. To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best-fitting quadratic or exponential function. For nearly all causes the fit was excellent, with R² always over 0.95 and mostly over 0.99. Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence, where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). All confidence intervals for the SMRs and SIRs were based on an assumed Poisson distribution for the observed deaths or cancers.\nTo calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z, let:\n• t be the year on trial (t = 1, 2...8)\n• xi be the age of woman i at randomisation\n• Dzx be the imputed mortality rate for cause of death z at age x\n• yti be the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for the most recent/last year on trial)\nthen\n\n$EMR_{iz} = \sum_{t=1}^{8} \left[ D_{z(x_i + t - 0.5)} \right] y_{ti}$\n\nand the overall EMR for cause of death z is simply:\n\n$EMR_{z} = \sum_{i=1}^{202\,593} EMR_{iz}$\n\nNote that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t − 0.5 instead of t − 1). Also note that if women withdraw from attending for screening in the trial at any point, they continue to be followed up through flagging for death and cancer registration. Hence no adjustment is made for withdrawals.",
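A small sketch of the SMR/SIR point estimate with an exact Poisson confidence interval for the observed count, treating the expected count as fixed as described above (this uses SciPy's chi-square quantiles; the worked numbers are simply the rounded overall counts reported in the Results, so the result is approximate):

```python
from scipy.stats import chi2

def smr_with_exact_ci(observed, expected, alpha=0.05):
    """SMR or SIR (x100) with an exact Poisson CI on the observed count.

    The expected count is treated as a fixed constant, as in the analysis above.
    """
    point = 100.0 * observed / expected
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * observed) if observed > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * (observed + 1))
    return point, 100.0 * lower / expected, 100.0 * upper / expected

# Rough check against the overall mortality figures reported below
# (4554 observed vs 12 247 expected): roughly 37% with a CI of about 36-38%.
print(smr_with_exact_ci(4554, 12247))
```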
"The PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. It was chosen over the other two available census-based indices (Townsend or Carstairs) as, firstly, the most up-to-date and, secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published; however, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence.\nIMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women.", "For all these analyses, the expected and observed mortality rates for all relevant women in each group were summed. Regional variations were compared by summing over the individual recruitment centres. For the age group analysis, the groupings were made by using the age at randomisation to categorise women into the 50-54, 55-59, 60-64, 65-69 or 70-74 age groups. To assess any change in the SMR/SIRs over the trial period, the overall EMR was partitioned into year in trial by summing the individual EMRs for each year t. This was done for the individual causes of mortality and incidence, as well as for overall mortality. Mortality versus body mass index (BMI) of the women was explored by separating women who provided height and weight at randomisation into the standard underweight, normal, overweight and obese categories (up to 18.5; 18.5-25; 25-30; over 30, respectively) and comparing respective SMRs.",
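A brief sketch of the deprivation analysis just described, assuming a per-woman table in which an IMD score has already been linked via postcode (all names and numbers here are illustrative, not trial data):

```python
import pandas as pd

# Assumed per-woman data: linked IMD score, observed death flag and expected
# mortality (EMR) as computed earlier; the values are purely illustrative.
df = pd.DataFrame({
    "imd_score":      [8.2, 14.5, 21.0, 33.7, 55.1, 12.3, 40.2, 6.1, 27.9, 18.4],
    "observed_death": [0,   0,    1,    0,    1,    0,    1,    0,   0,    0],
    "expected_emr":   [0.011, 0.013, 0.020, 0.024, 0.035, 0.012, 0.028, 0.010, 0.022, 0.015],
})

# Quintiles of IMD score (1 = least deprived, 5 = most deprived).
df["imd_quintile"] = pd.qcut(df["imd_score"], 5, labels=[1, 2, 3, 4, 5])

smr_by_quintile = (
    df.groupby("imd_quintile", observed=True)[["observed_death", "expected_emr"]].sum()
      .assign(SMR=lambda g: 100 * g["observed_death"] / g["expected_emr"])
)
print(smr_by_quintile)
```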
"1 243 282 women residing in 27 Primary Care Trusts (including Local Health Boards in Wales) adjoining the 13 trial centres in England (10), Wales (2) and Northern Ireland (1) were invited to participate in the trial. Between 17th April 2001 and 29th September 2005, 202 638 women (157 973 England, 31 086 Wales, 13 579 Northern Ireland) were recruited and randomised [5]. Of those recruited from England and Wales, 35 were unsuccessfully matched and 10 refused consent for flagging, leaving 189 014 women from England and Wales undergoing flagging through NHS ICHSC. All 13 579 women recruited from Northern Ireland were successfully matched by the Northern Ireland CSA. The average time on trial when mortality events were censored on 1st June 2009 was 5.55 years, with over 99% of women having been on the trial for over 3 years, and 24% for over 7 years. Mean follow-up for incidence was 2.58 years at 1st June 2006.\n[SUBTITLE] Mortality rates [SUBSECTION] There were only 4554 observed deaths compared to the expected number of 12 247 based on 2005 national mortality rates (Table 1). The SMR for overall mortality was 37.3% (95% CI: 36.2, 38.4%). There was some variation of SMR across the 13 trial centres, with the highest, 48.4%, at Liverpool and the lowest, 30.9%, at Bristol. However, across all centres there was a strong HVE, with less than half the expected deaths (Table 1).\nStandardized Mortality Ratios (SMR) for overall mortality and by subgroup.\nAverage time on trial = 5.55 yrs/woman\nFor age group, there was an apparent decrease in the SMRs as age increased, with the youngest group (50-54; SMR = 47.3%) having a less pronounced HVE than the other groups (Table 1). For BMI, both the normal and overweight categories had similar SMRs of around 34%, whereas the extreme categories had higher mortality rates, particularly the underweight category (70.6%).\nThe overall cancer SMR was 55.9%. There was some variation between the different cancer types, but all were between 42.9% (breast) and 79.8% (pancreatic), with mortality significantly lower than expected (100%). The HVE was even stronger for the other major causes of mortality (Table 2).\nStandardized Mortality Ratios (SMRs) for overall cancer, the 10 leading causes of cancer mortality and 5 other causes of mortality.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman\nThere was a clear increase in the SMRs as time in trial increased (Table 3). The overall SMR was low in the 1st year (18.5%) but rose steadily to 49.0% by the 8th year. With the exception of stomach cancer, all of the cause-specific 1st year SMRs for mortality were significantly below 100%, with some particularly low values such as 5.3% for breast cancer (Table 3). These figures showed an increasing trend as the study progressed, though not nearly as consistently as for overall mortality, and given the lower numbers, with wider confidence intervals. Of the cancers, only lung, breast and colorectal had 6 or more study years where the confidence interval for the SMR did not contain 100.\nStandardized Mortality Ratios for various causes by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman. Note the number of 'Women years' relates to individual causes and not to overall mortality due to exclusion of Northern Ireland women. Italicised values have 95% confidence intervals not containing 100.\n
[SUBTITLE] Socioeconomic status comparison [SUBSECTION] Based on IMD score quintiles, there was a general trend between deprivation and mortality with higher SMRs for increasing levels of deprivation (Table 4). Specifically, the lowest (least deprived) two quintiles had a similar SMR of around 30% with the most deprived having a SMR of 52.3%. The rising trend between overall cancer incidence and increasing deprivation was less obvious. Figure 1 shows the relative frequency distributions for IMD score. Although the distributions show similarity in trend and location, the more peaked distribution of those who joined, and the crossover of distributions at an IMD score of 20, imply that the trial participants were less deprived than the invited population.\nStandardized Mortality Ratios (SMR) and Standardized Incidence Ratios (SIR) by deprivation (IMD) quintile.\nAverage time on trial = 5.55 yrs/woman for SMRs and = 2.58 yrs/woman for SIRs\nRelative frequency distributions of IMD score for those invited to UKCTOCS and those that joined UKCTOCS (English postcodes only).\n
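For readers who want to reproduce the style of comparison shown in Figure 1, the following sketch overlays relative frequency distributions of IMD score for invited versus recruited women; the arrays here are synthetic stand-ins, not trial data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in IMD scores for the invited and the recruited (joined) cohorts.
rng = np.random.default_rng(0)
invited = rng.gamma(shape=2.0, scale=12.0, size=10_000)
joined = rng.gamma(shape=2.4, scale=8.0, size=2_000)

bins = np.linspace(0, 80, 41)
plt.hist(invited, bins=bins, density=True, histtype="step", label="Invited")
plt.hist(joined, bins=bins, density=True, histtype="step", label="Joined")
plt.xlabel("IMD score")
plt.ylabel("Relative frequency")
plt.legend()
plt.show()
```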
[SUBTITLE] Cancer Incidence [SUBSECTION] The situation regarding cancer incidence (Table 5) was rather different. For overall cancer the SIR was 88.1%, higher than the 55.9% for overall cancer mortality. Of the individual cancers, only for lung, pancreatic, oesophageal and colorectal cancers did the confidence interval for the SIR not contain 100, and only for pancreatic and oesophageal cancer was the SMR higher than the SIR.\nStandardized Incidence Ratios (SIRs) for overall cancer and the 10 leading causes of cancer.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman.\nRegarding incidence over time (Table 6), there were far fewer occasions compared to mortality where the whole confidence interval for the SIR was below 100, with only lung cancer having a consistently low SIR over time (between 27.1% and 65.9%). Apart from lung cancer and leukaemia, the cancer specific SIRs were not particularly low in the first year compared to the complete study period. For overall cancer the SIRs were remarkably consistent over time, between 84.6% and 91.7%.\nStandardized Incidence Ratios for 10 leading causes of cancer by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman. 
Italicised values have 95% confidence intervals not containing 100.", "There were only 4554 observed deaths compared to the expected number of 12 247 based on 2005 national mortality rates (Table 1). The SMR for overall mortality was 37.3% (95% CI: 36.2, 38.4%). There was some variation of SMR across the 13 trial centres with the highest, 48.4% at Liverpool and the lowest, 30.9% at Bristol. However across all centres there was a strong HVE with less than half the expected deaths (Table 1).\nStandardized Mortality Ratios (SMR) for overall mortality and by subgroup.\nAverage time on trial = 5.55 yrs/woman\nFor age group, there was an apparent decrease in the SMRs as age increased, with the youngest group (50-54; SMR = 47.3%) having a less pronounced HVE than the other groups (Table 1). For BMI, both normal and overweight categories had similar SMRs of around 34% whereas the extreme categories had higher mortality rates, particularly the underweight category (70.6%)\nThe overall cancer SMR was 55.9%. There was some variation between the different cancer types but all were between 42.9% (breast) and 79.8% (pancreatic), with mortality significantly lower than expected (100%). The HVE was even stronger for the other major causes of mortality (Table 2).\nStandardized Mortality Ratios (SMRs) for overall cancer, the 10 leading causes of cancer mortality and 5 other causes of mortality.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman\nThere was a clear increase in the SMRs as time in trial increased (Table 3). The overall SMR was low in the 1st year (18.5%) but rose steadily to 49.0% by the 8th year. With the exception of stomach cancer all of the cause-specific 1st year SMRs for mortality were significantly below 100% with some particularly low values such as 5.3% for breast cancer (Table 3). These figures showed an increasing trend as the study progressed, though not nearly as consistently as for overall mortality, and given the lower numbers, with wider confidence intervals. Of the cancers only lung, breast and colorectal had 6 or more study years where the confidence interval for the SMR did not contain 100.\nStandardized Mortality Ratios for various causes by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman. Note the number of 'Women years' relates to individual causes and not to overall mortality due to exclusion of Northern Ireland women. Italicised values have 95% confidence intervals not containing 100.", "Based on IMD score quintiles, there was a general trend between deprivation and mortality with higher SMRs for increasing levels of deprivation (Table 4). Specifically, the lowest (least deprived) two quintiles had a similar SMR of around 30% with the most deprived having a SMR of 52.3%. The rising trend between overall cancer incidence and increasing deprivation was less obvious. Figure 1 shows the relative frequency distributions for IMD score. 
Although the distributions show similarity in trend and location, the more peaked distribution of those who joined, and the crossover of distributions at an IMD score of 20, imply that the trial participants were less deprived than the invited population.\nStandardized Mortality Ratios (SMR) and Standardized Incidence Ratios (SIR) by deprivation (IMD) quintile.\nAverage time on trial = 5.55 yrs/woman for SMRs and = 2.58 yrs/woman for SIRs\nRelative frequency distributions of IMD score for those invited to UKCTOCS and those that joined UKCTOCS (English postcodes only).", "The situation regarding cancer incidence (Table 5) was rather different. For overall cancer the SIR was 88.1%, higher than the 55.9% for overall cancer mortality. Of the individual cancers, only for lung, pancreatic, oesophageal and colorectal cancers did the confidence interval for the SIR not contain 100, and only for pancreatic and oesophageal cancer was the SMR higher than the SIR.\nStandardized Incidence Ratios (SIRs) for overall cancer and the 10 leading causes of cancer.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman.\nRegarding incidence over time (Table 6), there were far fewer occasions compared to mortality where the whole confidence interval for the SIR was below 100, with only lung cancer having a consistently low SIR over time (between 27.1% and 65.9%). Apart from lung cancer and leukaemia, the cancer specific SIRs were not particularly low in the first year compared to the complete study period. For overall cancer the SIRs were remarkably consistent over time, between 84.6% and 91.7%.\nStandardized Incidence Ratios for 10 leading causes of cancer by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman. Italicised values have 95% confidence intervals not containing 100.", "This is the first report to explore the impact of a 'healthy volunteer effect' from inviting potential participants randomly from population registers as opposed to self-referral. The overall SMR compared to the 2005 population of England and Wales was 37.3%. The figure is almost identical to the overall SMR of 38% reported for women in the US PLCO screening trial [2] where participants were allowed to self refer or invited through mass mailings using motor-vehicle registrations and health care organization lists which were not generally population based [11]. This introduces additional bias, as the types of advertising media (radio station, website, newspaper or magazine) or mailing lists used, limit those who have access to the information. In contrast, in UKCTOCS over 1.2 million women (1 in 6 of the UK population in the eligible age range) were randomly invited from health authority registers. It was anticipated that such invitation would result in participants being more representative of the general population than those recruited through advertisement and self referral. However, even with this safeguard there continued to be a pronounced HVE on mortality, both overall and cause-specific. The data highlights again the selection bias that occurs in clinical trials and emphasises the need for randomised controlled trials rather than observational studies to determine efficacy of screening and prevention strategies. 
There was a much lesser effect on cancer incidence relative to the general population.\nThe magnitude of the HVE in a trial is dependent on a variety of other factors in addition to the mode of recruitment. Eligibility criteria can play a crucial role. This includes gender: in the PLCO trial the HVE was less pronounced in men, who had a statistically significantly higher SMR than women for all-cause mortality (46% versus 38%), all cancer mortality, respiratory diseases, diabetes, cardiovascular diseases, and non-Hodgkin's lymphoma [2]. Most screening/prevention trials exclude those with an ongoing active malignancy. This will inevitably affect the cancer-specific SMRs in the early years. There was a clear upward trend in the cancer-specific SMRs, despite widening confidence intervals, when examined by 'year in trial'. In the first year, most were below 25%. It is possible that the other exclusion criteria may also have had health implications. Women who had undergone bilateral oophorectomy were ineligible. Recent reports have shown increased mortality in women in this subgroup who do not use oestrogen replacement until the age of 45 [12,13]. Finally, higher participation rates may be expected to reduce the self-selection effect. In UKCTOCS, 25% of women invited replied that they would like to participate in the trial, but finally only 16% were randomised [5].\nThe overall 1st year SMR was 18.5%, rising to 35% in the 2nd year and nearly 50% by the 8th year. The SMRs have been age-adjusted dynamically, so this is not a result of the age-related increase in risk. Similar trends were seen in the PLCO trial. In both studies, by the 7th year the SMR was 48%. A major contributing factor to the SMR trend with time is the health-screening nature of volunteering. The huge shortfalls in SMRs for the first year of UKCTOCS, particularly for causes other than cancer, are strong indicators that women suffering from poor health or chronic non-cancer illness tend not to volunteer [4]. Their health concerns naturally lie with their immediate real problems rather than a future potential issue. Some of these conditions may well predispose to earlier mortality. It is interesting that the younger age groups, specifically 50-54, had higher SMRs. This suggests that women in the younger groups may more closely represent their national counterparts. We were unable to find reports in the literature where differences in (age-adjusted) SMRs were explored by age, but if confirmed, one possible explanation is that there may be less prevalent morbidity that might hinder volunteering at these ages. Mental and behavioural deaths had the lowest SMRs, and the need for informed consent could have been a contributory factor. For general health, the commonly reported U-shaped curve [14-16] relating BMI to mortality was seen, with the highest SMRs belonging to those underweight and obese (Table 1).\nThe HVE is often ascribed to the fact that educated women who are financially better off and have a healthier lifestyle are more likely to volunteer for a screening trial, where health awareness and the means to travel to the trial centre influence a woman's decision. While difficult to substantiate directly, many of these factors are linked to indices of social deprivation, and in UKCTOCS the availability of postcodes for all invited women made it possible to calculate deprivation (IMD) scores for all invitees from England. 
Figure 1 shows that the cohort of invited women was more deprived (higher IMD scores: mean = 24.7) than those who subsequently joined the trial (mean score = 19.6). The reported mean IMD score for England is 21.6 [17]. Our invited population was more deprived than the national average probably as a result of a higher proportion of urban centres. However, as has been shown, those who actually volunteered were less deprived. Of note, Bristol and Portsmouth, which were the least deprived (lowest mean IMD) among the 10 English centres, had the highest acceptance rates of invitations among all 13 centres [5]. This suggests that postal invitation alone will not persuade women from deprived backgrounds to participate.\nSocioeconomic status is known to be linked to most causes of mortality, including cancers. Bristol, which had the 2nd lowest mean IMD score had the lowest SMR, while Liverpool, with the highest mean IMD score, had the highest SMR (48.4%). Further support is provided by the trend of higher SMRs with increasing levels of deprivation across the 5 quintiles, from the least deprived (SMR = 30.4%) to the most deprived (SMR = 52.3%) (Table 5).\nThe most striking aspect on comparing cause-specific incidence and mortality rates was that the SIRs were higher than the SMRs and, for the leading cancers other than lung, pancreas, oesophagus and colorectal (just), the SIR confidence intervals crossed 100. In a recent analysis of US data, while late-stage diagnoses in all cancers (with resultant higher mortality) were associated with lower socioeconomic status, incidence of only certain specific cancers varied with socioeconomic status [18]. Pollock et al noted that while mortality increased with deprivation among patients suffering from lung, breast and colorectal cancers in the South Thames area, for incidence this was only observed in lung cancer [19]. Official national statistics for England and Wales show a mixture of positive (notably lung and cervix), negative (breast, leukaemia) and zero association (colorectal) between different cancer type incidences and deprivation [20]. The three cancers (lung, oesophagus and pancreas) with the largest shortfalls in SIRs in UKCTOCS have a strong link to smoking [21-23]. Individuals with lower socio-economic status are more likely to be current smokers, physically inactive and obese [24]. In all three of these cancers, there are reports of negative correlation between incidence and socioeconomic status [18-20,25,26]. Conversely, in breast cancer the SIR was 102% whilst the SMR was 43%, in keeping with previously reported associations between higher socioeconomic status and higher incidence of localised breast cancer but lower regional breast cancer mortality [25,27]. Women who volunteer for a screening trial are more likely to attend for breast screening and to be diagnosed with early stage disease. Overdiagnosis of breast cancer in the screened population could also contribute to higher incidence but lower mortality [28].\nDespite the strong similarity of results for overall mortality between UKCTOCS and the PLCO trial there is less commonality when cause-specific results are compared. While pancreatic cancer has the highest SMR in both studies, large discrepancies exist for cancers such as uterus (52% UKCTOCS versus 22% PLCO), stomach (75% versus 41%) and oesophagus (76% versus 41%). Given the smaller numbers in these subgroups, some of these differences may be purely random. 
Most of these cancers are also associated with lower SIRs in the PLCO trial compared to UKCTOCS: oesophageal (72% UKCTOCS versus 38% PLCO), stomach cancer (85% versus 48%) and bladder (80% versus 52%). It needs to be noted that there are subtle differences in the PLCO entry criteria when compared to UKCTOCS, such as minimum age (55 versus 50 in UKCTOCS) and inclusion of women who had undergone bilateral oophorectomy.\nThe most recently published statistics for mortality (for 2005, published 2006 [6]) and incidence (for 2006, published 2008 [7]) produced by ONS were used to calculate EMRs for the period 2001-2009 so the data can be considered broadly representative. An additional issue is that the 'national' mortality rates were based on data from England and Wales only but was used to calculate EMRs for the 13 579 women from Northern Ireland. There were also approximations involved in the actual calculations, such as the age-group mortality rates representing the midpoint of that group and specific age-adjusted rates estimated by use of a best-fitting simple model. The EMRs were also assumed to be fixed values when calculating the confidence intervals. They are estimates, as they are based on national data that varies yearly through a random component, in addition to any real change. However comparison of ONS's 2004 and 2005 (logged) mortality rates showed a high level of linear correlation, with all Pearson correlations for the major cancer causes over 0.99, except those for uterus (r = 0.984) and non-Hodgkin's lymphoma (r = 0.981). This suggests any yearly changes in mortality rates (real shifts or random fluctuations) are small and treating them as fixed was not unreasonable.", "The lack of mortality or incidence events can severely harm a clinical trial's ability to demonstrate efficacy. Other ramifications of the HVE inevitably include concerns over external validity of a demonstrated screening benefit, though that would imply some level of interaction between screening and volunteer characteristics. It may be hard to perceive how social factors could influence screening success directly at the point of intervention, though certainly compliance with a screening programme can be dependent upon the level of social deprivation [29]. Either way, one may regard this as a realistic aspect of a national screening programme. In UKCTOCS, the HVE has necessitated revision of the trial design in 2008, with extension of screening in the study arm until 31st Dec 2011 and follow up until 31st Dec 2014 [30]. During planning of this trial in 1999, no published data was available to estimate the impact of the HVE. The various mortality rates presented here are based on over one million study years, and incidence rates on over half a million study years. They provide vital information for investigators on likely event rate shortfalls that might be expected in ongoing and future screening studies/RCTs of similar design.", "IJ has consultancy arrangements with Becton Dickinson, who have an interest in tumour markers and ovarian cancer. They have provided consulting fees, funds for research, and staff but not directly related to this study. SS has received research support from Fujirebio Diagnostics but not in relation to this trial.", "IJ, UM, MP, SS, LF, SC were involved in trial design and concept. UM, MB, AR, MH AG-M and SA were involved in acquisition of data. MB, MP and UM were involved in the statistical analysis. MB and UM were responsible for interpretation of data and drafted the manuscript. 
All authors critically revised the manuscript and approved the final version. UM is the guarantor.", "The trial was core funded by the Medical Research Council (grant no. G990102), Cancer Research UK (grant no. C1479/A2884), and the Department of Health with additional support from the Eve Appeal, Special Trustees of Bart's and the London, and Special Trustees of UCLH. A major portion of this work was done at UCLH/UCL within the \"women's health theme\" of the NIHR UCLH/UCL Comprehensive Biomedical Research Centre supported by the Department of Health. SS has received research support from NCI (grant numbers CA086381 and CA083639). The researchers are independent from the funders.\nEthical approval: The study was approved by the UK North West Multicentre Research Ethics Committees (North West MREC 00/8/34) with site specific approval from the local regional ethics committees and the Caldicott guardians (data controllers) of the primary care trusts." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study Design", "Follow up", "Analysis", "Mortality and cancer incidence", "Socioeconomic status", "Variation in HVE with age/region/BMI/time on trial", "Results", "Mortality rates", "Socioeconomic status comparison", "Cancer Incidence", "Discussion", "Conclusions", "Competing interests", "Authors' contributions", "Funding" ]
[ "In clinical studies, mortality and morbidity data from the general population is used to calculate expected death and incidence rates. However, volunteers participating in trials evaluating preventive interventions such as screening are on average healthier than the general population [1-3]. The implication of this 'healthy volunteer effect (HVE)' is that trial participants have lower mortality and morbidity than the general population. In randomised controlled trials (RCTs), this can cause a shortfall in expected event rates which are the foundation of the trial's power calculations [4]. The latter determine the sample size (number of participants recruited and their time on the trial) and contribute significantly to design, logistics and cost. A deficit could mean a significant fall in power and may require alteration of the design midway through the trial if the primary objective is to be achieved.\nThe United Kingdom Collaborative Trial of Ovarian Cancer Screening (UKCTOCS) is a large multi-centre randomised controlled trial of 202 638 women recruited between 2001 and 2005 [5]. In order to ensure trial volunteers were as representative of the general population as possible, women were not allowed to self refer. Instead over 1.2 million women aged 50-74 were randomly invited from age sex registers of 27 participating local health authority registers [5]. The underlying hypothesis was that the HVE is largely related to socioeconomic status with participants being more affluent, better educated and more health-conscious than the population as a whole. This bias was thought to be magnified by recruitment using self-referral, which is dependent on publicising the trial through a variety of media such as newspapers, magazines, radio, television, posters at numerous venues and meetings.\nIn this paper, we report on the impact of population invitation on the HVE in UKCTOCS by comparing observed and expected mortality and cancer incidence rates in the trial, particularly with regard to socioeconomic status levels. We also consider the differences in deprivation of those invited with those recruited. The results provide vital data to inform trial design and sample size calculations for those seeking to undertake screening studies involving the general population.", "[SUBTITLE] Study Design [SUBSECTION] UKCTOCS is an RCT aiming to assess the impact of screening on ovarian cancer mortality while comprehensively evaluating performance characteristics, physical and psychological morbidity, compliance, and cost of the screening strategies. It was set up in 13 NHS Trusts in England, Wales and Northern Ireland. Women living in adjoining Primary Care Trusts (including Local Health Boards in Wales) were invited to participate in the trial. Those who accepted the invitation attended a local recruitment clinic. Detailed description of the invitation and recruitment process, as well as inclusion/exclusion criteria, are detailed elsewhere [5]. Of relevance to this analysis is that women with an active malignancy were only eligible if they had no documented persistent or recurrent disease, and those with previous history of ovarian cancer were excluded. All women provided written consent.\nUKCTOCS is an RCT aiming to assess the impact of screening on ovarian cancer mortality while comprehensively evaluating performance characteristics, physical and psychological morbidity, compliance, and cost of the screening strategies. It was set up in 13 NHS Trusts in England, Wales and Northern Ireland. 
Women living in adjoining Primary Care Trusts (including Local Health Boards in Wales) were invited to participate in the trial. Those who accepted the invitation attended a local recruitment clinic. Detailed description of the invitation and recruitment process, as well as inclusion/exclusion criteria, are detailed elsewhere [5]. Of relevance to this analysis is that women with an active malignancy were only eligible if they had no documented persistent or recurrent disease, and those with previous history of ovarian cancer were excluded. All women provided written consent.\n[SUBTITLE] Follow up [SUBSECTION] Women recruited into the trial were 'flagged' for follow up with the NHS Information Centre for Health and Social Care (ICHSC) in England and Wales (for death and cancer registration) and with the Central Services Agency (CSA, for deaths in Northern Ireland) and Cancer Registry in Northern Ireland (NICR). Almost all women were successfully flagged (n = 202 593). From the received death certificate copies, the 'underlying cause of death' was used for the cause-specific observed counts. Barring an inquest, death certificates were mostly received within 3 months of the death. To ensure completeness of data on deaths, events were censored on the 1st June 2009, eight months prior to the last death certificates update on 1st February 2010. Data provided by the CSA on cause of death was incomplete and therefore women from Northern Ireland were excluded from the calculation of cause-specific SMRs.\nInformation on all incident cancers can take up to 3 years to be recorded with the national registries. In order to ensure completeness of data on cancers, events were censored on 1st June 2006, allowing a time lag of 3.75 years between events and the final cancer registration update from NHS ICHSC and the NICR in February 2010. Unlike the CSA, the NICR provided full data on cancer type, so that all women were included in the cancer-specific incidence analysis.\nWomen recruited into the trial were 'flagged' for follow up with the NHS Information Centre for Health and Social Care (ICHSC) in England and Wales (for death and cancer registration) and with the Central Services Agency (CSA, for deaths in Northern Ireland) and Cancer Registry in Northern Ireland (NICR). Almost all women were successfully flagged (n = 202 593). From the received death certificate copies, the 'underlying cause of death' was used for the cause-specific observed counts. Barring an inquest, death certificates were mostly received within 3 months of the death. To ensure completeness of data on deaths, events were censored on the 1st June 2009, eight months prior to the last death certificates update on 1st February 2010. Data provided by the CSA on cause of death was incomplete and therefore women from Northern Ireland were excluded from the calculation of cause-specific SMRs.\nInformation on all incident cancers can take up to 3 years to be recorded with the national registries. In order to ensure completeness of data on cancers, events were censored on 1st June 2006, allowing a time lag of 3.75 years between events and the final cancer registration update from NHS ICHSC and the NICR in February 2010. 
Unlike the CSA, the NICR provided full data on cancer type, so that all women were included in the cancer-specific incidence analysis.\n[SUBTITLE] Analysis [SUBSECTION] [SUBTITLE] Mortality and cancer incidence [SUBSECTION] Evidence of a HVE was assessed by calculation of the Standardised Mortality Ratio (SMR) which is defined as the ratio of observed to expected deaths (×100). A value significantly less than 100 would indicate a HVE. SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT.\nThe effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings.\nFor each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause as well as total female population in 5 year age groups. To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best fitting quadratic or exponential function. For nearly all causes the fit was excellent with R2 always over 0.95 and mostly over 0.99. Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). 
All confidence intervals for the SMRs and SIRs were based on an assumed Poisson distribution for the observed deaths or cancers.\nTo calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z, let:\n• t be the year on trial (t = 1, 2...8)\n• xi be the age of woman i at randomisation\n• Dzx be the imputed mortality rate for cause of death z at age x\n• yti be the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for the most recent/last year on trial)\nthen\n\n$EMR_{iz} = \sum_{t=1}^{8} \left[ D_{z(x_i + t - 0.5)} \right] y_{ti}$\n\nand the overall EMR for cause of death z is simply:\n\n$EMR_{z} = \sum_{i=1}^{202\,593} EMR_{iz}$\n\nNote that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t − 0.5 instead of t − 1). Also note that if women withdraw from attending for screening in the trial at any point, they continue to be followed up through flagging for death and cancer registration. Hence no adjustment is made for withdrawals.\n
[SUBTITLE] Socioeconomic status [SUBSECTION] The PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. It was chosen over the other two census based available indices (Townsend or Carstairs) as firstly, the most up-to-date and secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. 
The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published. However, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence.\nIMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women.\n
SMRs were calculated for: overall mortality (including ovarian cancer); overall cancer (excluding ovarian cancer and tubal cancer (ICD10-C56, C57.0) and 'other malignant neoplasms of skin' (ICD10-C44); the 10 leading individual causes of female cancer mortality (excluding ovarian cancer); and the five leading general causes other than cancer (circulatory, respiratory, digestive, nervous system and mental and behavioural). Except for overall mortality, ovarian and fallopian tube cancers were excluded from the analysis as they were the primary outcome measure of the ongoing RCT.\nThe effect on cancer incidence from the HVE was similarly assessed with the Standardised Incidence Ratio (SIR), defined as the ratio of observed to expected cancer incidence (×100). This was calculated again for overall cancer (also excluding ICD10-C56, C57.0 and C44) and the leading 10 female causes of cancer incidence, excluding ovarian cancer. These are the same as for mortality, although there are differences in the rankings.\nFor each trial participant, Expected Mortality Rates (EMRs) were calculated using national mortality rates derived from ONS for 2005 [6]. All individual EMRs were calculated for each year or partial year on the trial, with the values summed over the number of years on the trial. The individual's risk of mortality was adjusted for age at randomisation and also dynamically, so that the risk reflected the ageing woman as the screening progressed. The overall EMR for a cause was the sum of each woman's individual EMR up to the censoring date of 1st June 2009. ONS mortality tables for 2005 [6] provide both the number of female deaths for each cause as well as total female population in 5 year age groups. To estimate an age-specific mortality rate for each year, firstly the midpoint of the age group was taken as representing the mortality rate calculated for that age group. An approximate mortality rate estimate for any given age was then calculated by imputing the age into either a best fitting quadratic or exponential function. For nearly all causes the fit was excellent with R2 always over 0.95 and mostly over 0.99. Similar analysis was performed for cancer incidence using ONS cancer incidence tables for 2006 [7]. The only exception was breast cancer incidence where the effect of a national screening programme meant that after the age of 70 the incidence fell sharply, so that 2 separate functions had to be used (below and above age 70). All confidence intervals for the SMRs and SIRs were based on an assumed poisson distribution for the observed deaths or cancers.\nTo calculate the EMR of each woman i (i = 1, 2...202 593) for cause of death z if:\n• t is the year on trial (t = 1, 2...8)\n• xi is the age of woman i at randomisation\n• Dzx is the imputed mortality rate for cause of death z at age x\n• yti is the fraction of the year of trial t completed at censoring or death by woman i (always = 1, except for most recent/last year on trial)\nthen\n\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n=\n\n\n∑\n\nt\n=\n1\n,\n2...8\n\n\n\n[\n\nD\n\nz\n(\n\nx\ni\n\n+\nt\n−\n0.5\n)\n\n\n]\n\ny\n\nt\ni\n\n\n\n\n\n\n\n\nand the overall EMR for cause of death z is simply:\n\n\n\n\nE\nM\n\nR\nz\n\n=\n\n\n∑\n\ni\n=\n1\n,\n2...202593\n\n\n\nE\nM\n\nR\n\ni\nz\n\n\n\n\n\n\n\n\nNote that the age x imputed in Dzx is slightly adjusted by 0.5 to approximate the average effect of ageing over the year (t - 0.5 instead of t-1). 
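The calculation defined above is simple enough to sketch directly. The fragment below is an illustrative reconstruction in Python rather than the trial's actual analysis code: the function names are invented, numpy/scipy are assumed to be available, only the quadratic option for smoothing the 5-year ONS age-group rates is shown (the authors also used exponential fits for some causes), and the exact chi-square (Garwood) interval is just one standard way of realising the 'assumed Poisson distribution' for the observed counts.

```python
import numpy as np
from scipy.stats import chi2


def fit_age_specific_rate(group_midpoints, group_rates):
    """Fit a quadratic to 5-year age-group rates (each rate attached to the
    group midpoint) and return a callable giving a rate at any exact age."""
    coeffs = np.polyfit(group_midpoints, group_rates, deg=2)
    return np.poly1d(coeffs)


def woman_emr(age_at_randomisation, years_on_trial, rate_at_age):
    """EMR_iz = sum over trial years t of D_z(x_i + t - 0.5) * y_ti, where
    y_ti = 1 for completed years and the leftover fraction for the last,
    partial year on trial."""
    emr = 0.0
    full_years = int(years_on_trial)
    for t in range(1, full_years + 1):
        emr += rate_at_age(age_at_randomisation + t - 0.5)
    fraction = years_on_trial - full_years
    if fraction > 0:
        emr += rate_at_age(age_at_randomisation + full_years + 1 - 0.5) * fraction
    return emr


def smr_with_poisson_ci(observed, expected, alpha=0.05):
    """SMR (x100) with an exact Poisson (Garwood) confidence interval for the
    observed count; the expected count is treated as a fixed quantity."""
    lower = 0.0 if observed == 0 else chi2.ppf(alpha / 2, 2 * observed) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (observed + 1)) / 2
    return (100 * observed / expected,
            100 * lower / expected,
            100 * upper / expected)


# Hypothetical usage for one cause of death (values and names are illustrative):
# rate = fit_age_specific_rate([52.5, 57.5, 62.5, 67.5, 72.5], ons_rates_2005)
# expected = sum(woman_emr(w.age, w.years_on_trial, rate) for w in cohort)
# smr, ci_low, ci_high = smr_with_poisson_ci(observed_deaths, expected)
```

Summing woman_emr over all flagged women reproduces the overall EMR for the cause, and smr_with_poisson_ci then yields the SMR with its 95% confidence interval; SIRs are obtained identically from incidence rates.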
[SUBTITLE] Socioeconomic status [SUBSECTION] The PCT provided postcodes and dates of birth for all women invited to the trial. The former was used to estimate socio-economic class. The Index of Multiple Deprivation 2007 (IMD) [8] provides 32 482 scores at a Super Output Area (SOA) level linked to postcodes for England. It was chosen over the other two census based available indices (Townsend or Carstairs) as firstly, the most up-to-date and secondly, the most precise in ascribing a score to an individual based on postcode, given that it is calculated at a much finer spatial scale. Upon linking, an individual IMD score was derived for 156 620 women recruited in England. The women recruited from centres in Wales and Northern Ireland were omitted from this particular analysis. A Welsh IMD (2008) [9] has been published. However, the differing criteria employed make comparisons between the English and Welsh IMD scores invalid [10]. To explore mortality versus deprivation, the recruited women were separated into quintiles according to IMD score and the respective SMRs compared. This was also repeated for cancer incidence.\nIMD scores for all women who were invited from England were compared with those recruited to the trial by evaluating their relative frequency distributions. No mortality data is available for invited women.\n[SUBTITLE] Variation in HVE with age/region/BMI/time on trial [SUBSECTION] For all these analyses, the expected and observed mortality rates for all relevant women in each group were summed. Regional variations were compared by summing over the individual recruitment centres. For the age group analysis the groupings were made by using the age at randomisation to categorise into 50-54, 55-59, 60-64, 65-69 or 70-74 age groups. To assess any change in the SMR/SIRs over the trial period, the overall EMR was partitioned into year in trial by summing the individual EMRs for each year t. This was done for the individual causes of mortality and incidence, as well as for overall mortality.
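As a purely illustrative sketch of the subgroup comparisons described in this subsection and continued just below (recruitment centre, age band, BMI category, deprivation quintile, or year in trial), the per-woman observed and expected counts only need to be re-summed within each group. This is not the trial's code and the column names are assumptions:

```python
import pandas as pd


def subgroup_smrs(women, group_col):
    """Sum observed deaths and expected deaths (each woman's EMR for the cause)
    within each subgroup and return the subgroup SMRs (x100)."""
    grouped = women.groupby(group_col)[["observed", "expected"]].sum()
    grouped["SMR"] = 100 * grouped["observed"] / grouped["expected"]
    return grouped


# Hypothetical usage with invented column names:
# women["imd_quintile"] = pd.qcut(women["imd_score"], 5, labels=[1, 2, 3, 4, 5])
# women["age_band"] = pd.cut(women["age_at_randomisation"],
#                            bins=[50, 55, 60, 65, 70, 75], right=False,
#                            labels=["50-54", "55-59", "60-64", "65-69", "70-74"])
# print(subgroup_smrs(women, "imd_quintile"))
```

The year-in-trial breakdown follows the same pattern once each woman's EMR is split into the per-year terms of the sum given earlier, and the BMI categorisation described in the next sentence is just another grouping column.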
Mortality versus body mass index (BMI) of the women was explored by separating women who provided height and weight at randomisation into the standard underweight, normal, overweight and obese categories (up to 18.5; 18.5-25; 25-30; over 30, respectively) and comparing respective SMRs.", "UKCTOCS is an RCT aiming to assess the impact of screening on ovarian cancer mortality while comprehensively evaluating performance characteristics, physical and psychological morbidity, compliance, and cost of the screening strategies. It was set up in 13 NHS Trusts in England, Wales and Northern Ireland. Women living in adjoining Primary Care Trusts (including Local Health Boards in Wales) were invited to participate in the trial. Those who accepted the invitation attended a local recruitment clinic. Detailed description of the invitation and recruitment process, as well as inclusion/exclusion criteria, are detailed elsewhere [5]. Of relevance to this analysis is that women with an active malignancy were only eligible if they had no documented persistent or recurrent disease, and those with previous history of ovarian cancer were excluded. All women provided written consent.", "Women recruited into the trial were 'flagged' for follow up with the NHS Information Centre for Health and Social Care (ICHSC) in England and Wales (for death and cancer registration) and with the Central Services Agency (CSA, for deaths in Northern Ireland) and Cancer Registry in Northern Ireland (NICR). Almost all women were successfully flagged (n = 202 593). From the received death certificate copies, the 'underlying cause of death' was used for the cause-specific observed counts. Barring an inquest, death certificates were mostly received within 3 months of the death. To ensure completeness of data on deaths, events were censored on the 1st June 2009, eight months prior to the last death certificates update on 1st February 2010. Data provided by the CSA on cause of death was incomplete and therefore women from Northern Ireland were excluded from the calculation of cause-specific SMRs.\nInformation on all incident cancers can take up to 3 years to be recorded with the national registries. In order to ensure completeness of data on cancers, events were censored on 1st June 2006, allowing a time lag of 3.75 years between events and the final cancer registration update from NHS ICHSC and the NICR in February 2010.
Unlike the CSA, the NICR provided full data on cancer type, so that all women were included in the cancer-specific incidence analysis.", "1 243 282 women residing in 27 Primary Care Trusts (including Local Health Boards in Wales) adjoining the 13 trial centres in England (10), Wales (2) and Northern Ireland (1) were invited to participate in the trial. Between 17th April 2001 and 29th September 2005, 202 638 women (157 973 England, 31 086 Wales, 13 579 Northern Ireland) were recruited and randomised [5]. Of those recruited from England and Wales, 35 were unsuccessfully matched and 10 refused consent for flagging, leaving 189 014 women from England and Wales undergoing flagging through NHS ICHSC. All 13 579 women recruited from Northern Ireland were successfully matched by the Northern Ireland CSA. The average number of years on trial when mortality events were censored on 1st June 2009 for mortality was 5.55 years, with over 99% having been on the trial for over 3 years, and 24% over 7 years. Mean follow-up for incidence was 2.58 years at 1st June 2006.\n[SUBTITLE] Mortality rates [SUBSECTION] There were only 4554 observed deaths compared to the expected number of 12 247 based on 2005 national mortality rates (Table 1). The SMR for overall mortality was 37.3% (95% CI: 36.2, 38.4%). There was some variation of SMR across the 13 trial centres with the highest, 48.4% at Liverpool and the lowest, 30.9% at Bristol. However across all centres there was a strong HVE with less than half the expected deaths (Table 1).\nStandardized Mortality Ratios (SMR) for overall mortality and by subgroup.\nAverage time on trial = 5.55 yrs/woman\nFor age group, there was an apparent decrease in the SMRs as age increased, with the youngest group (50-54; SMR = 47.3%) having a less pronounced HVE than the other groups (Table 1). For BMI, both normal and overweight categories had similar SMRs of around 34% whereas the extreme categories had higher mortality rates, particularly the underweight category (70.6%)\nThe overall cancer SMR was 55.9%.
There was some variation between the different cancer types but all were between 42.9% (breast) and 79.8% (pancreatic), with mortality significantly lower than expected (100%). The HVE was even stronger for the other major causes of mortality (Table 2).\nStandardized Mortality Ratios (SMRs) for overall cancer, the 10 leading causes of cancer mortality and 5 other causes of mortality.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman\nThere was a clear increase in the SMRs as time in trial increased (Table 3). The overall SMR was low in the 1st year (18.5%) but rose steadily to 49.0% by the 8th year. With the exception of stomach cancer all of the cause-specific 1st year SMRs for mortality were significantly below 100% with some particularly low values such as 5.3% for breast cancer (Table 3). These figures showed an increasing trend as the study progressed, though not nearly as consistently as for overall mortality, and given the lower numbers, with wider confidence intervals. Of the cancers only lung, breast and colorectal had 6 or more study years where the confidence interval for the SMR did not contain 100.\nStandardized Mortality Ratios for various causes by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman. Note the number of 'Women years' relates to individual causes and not to overall mortality due to exclusion of Northern Ireland women. Italicised values have 95% confidence intervals not containing 100.\nThere were only 4554 observed deaths compared to the expected number of 12 247 based on 2005 national mortality rates (Table 1). The SMR for overall mortality was 37.3% (95% CI: 36.2, 38.4%). There was some variation of SMR across the 13 trial centres with the highest, 48.4% at Liverpool and the lowest, 30.9% at Bristol. However across all centres there was a strong HVE with less than half the expected deaths (Table 1).\nStandardized Mortality Ratios (SMR) for overall mortality and by subgroup.\nAverage time on trial = 5.55 yrs/woman\nFor age group, there was an apparent decrease in the SMRs as age increased, with the youngest group (50-54; SMR = 47.3%) having a less pronounced HVE than the other groups (Table 1). For BMI, both normal and overweight categories had similar SMRs of around 34% whereas the extreme categories had higher mortality rates, particularly the underweight category (70.6%)\nThe overall cancer SMR was 55.9%. There was some variation between the different cancer types but all were between 42.9% (breast) and 79.8% (pancreatic), with mortality significantly lower than expected (100%). The HVE was even stronger for the other major causes of mortality (Table 2).\nStandardized Mortality Ratios (SMRs) for overall cancer, the 10 leading causes of cancer mortality and 5 other causes of mortality.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman\nThere was a clear increase in the SMRs as time in trial increased (Table 3). The overall SMR was low in the 1st year (18.5%) but rose steadily to 49.0% by the 8th year. With the exception of stomach cancer all of the cause-specific 1st year SMRs for mortality were significantly below 100% with some particularly low values such as 5.3% for breast cancer (Table 3). 
These figures showed an increasing trend as the study progressed, though not nearly as consistently as for overall mortality, and given the lower numbers, with wider confidence intervals. Of the cancers only lung, breast and colorectal had 6 or more study years where the confidence interval for the SMR did not contain 100.\nStandardized Mortality Ratios for various causes by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time on trial at censoring (1st June 2009) = 5.55 yrs/woman. Note the number of 'Women years' relates to individual causes and not to overall mortality due to exclusion of Northern Ireland women. Italicised values have 95% confidence intervals not containing 100.\n[SUBTITLE] Socioeconomic status comparison [SUBSECTION] Based on IMD score quintiles, there was a general trend between deprivation and mortality with higher SMRs for increasing levels of deprivation (Table 4). Specifically, the lowest (least deprived) two quintiles had a similar SMR of around 30% with the most deprived having a SMR of 52.3%. The rising trend between overall cancer incidence and increasing deprivation was less obvious. Figure 1 shows the relative frequency distributions for IMD score. Although the distributions show similarity in trend and location, the more peaked distribution of those who joined, and the crossover of distributions at an IMD score of 20, imply that the trial participants were less deprived than the invited population.\nStandardized Mortality Ratios (SMR) and Standardized Incidence Ratios (SIR) by deprivation (IMD) quintile.\nAverage time on trial = 5.55 yrs/woman for SMRs and = 2.58 yrs/woman for SIRs\nRelative frequency distributions of IMD score for those invited to UKCTOCS and those that joined UKCTOCS (English postcodes only).\nBased on IMD score quintiles, there was a general trend between deprivation and mortality with higher SMRs for increasing levels of deprivation (Table 4). Specifically, the lowest (least deprived) two quintiles had a similar SMR of around 30% with the most deprived having a SMR of 52.3%. The rising trend between overall cancer incidence and increasing deprivation was less obvious. Figure 1 shows the relative frequency distributions for IMD score. Although the distributions show similarity in trend and location, the more peaked distribution of those who joined, and the crossover of distributions at an IMD score of 20, imply that the trial participants were less deprived than the invited population.\nStandardized Mortality Ratios (SMR) and Standardized Incidence Ratios (SIR) by deprivation (IMD) quintile.\nAverage time on trial = 5.55 yrs/woman for SMRs and = 2.58 yrs/woman for SIRs\nRelative frequency distributions of IMD score for those invited to UKCTOCS and those that joined UKCTOCS (English postcodes only).\n[SUBTITLE] Cancer Incidence [SUBSECTION] The situation regarding cancer incidence (Table 5) was rather different. For overall cancer the SIR was 88.1%, higher than the 55.9% for overall cancer mortality. Of the individual cancers, only for lung, pancreatic, oesophageal and colorectal cancers did the confidence interval for the SIR not contain 100, and only for pancreatic and oesophageal cancer was the SMR higher than the SIR.\nStandardized Incidence Ratios (SIRs) for overall cancer and the 10 leading causes of cancer.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. 
Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman.\nRegarding incidence over time (Table 6), there were far fewer occasions compared to mortality where the whole confidence interval for the SIR was below 100, with only lung cancer having a consistently low SIR over time (between 27.1% and 65.9%). Apart from lung cancer and leukaemia, the cancer specific SIRs were not particularly low in the first year compared to the complete study period. For overall cancer the SIRs were remarkably consistent over time, between 84.6% and 91.7%.\nStandardized Incidence Ratios for 10 leading causes of cancer by year in trial.\nOvarian (C56)/tubal cancer (C57.0) and other malignant neoplasm of skin (C44) have been excluded. Average time in trial at censoring (1st June 2006) = 2.58 yrs/woman. Italicised values have 95% confidence intervals not containing 100.", "This is the first report to explore the impact of a 'healthy volunteer effect' from inviting potential participants randomly from population registers as opposed to self-referral. The overall SMR compared to the 2005 population of England and Wales was 37.3%. The figure is almost identical to the overall SMR of 38% reported for women in the US PLCO screening trial [2] where participants were allowed to self refer or invited through mass mailings using motor-vehicle registrations and health care organization lists which were not generally population based [11]. This introduces additional bias, as the types of advertising media (radio station, website, newspaper or magazine) or mailing lists used, limit those who have access to the information. In contrast, in UKCTOCS over 1.2 million women (1 in 6 of the UK population in the eligible age range) were randomly invited from health authority registers. It was anticipated that such invitation would result in participants being more representative of the general population than those recruited through advertisement and self referral. However, even with this safeguard there continued to be a pronounced HVE on mortality, both overall and cause-specific. The data highlights again the selection bias that occurs in clinical trials and emphasises the need for randomised controlled trials rather than observational studies to determine efficacy of screening and prevention strategies. There was a much lesser effect on cancer incidence relative to the general population.\nThe magnitude of the HVE in a trial is dependent on a variety of other factors in addition to mode of recruitment. Eligibility criteria can play a crucial role. This includes gender, where in the PLCO trial the HVE was less pronounced in men, who had a statistically significantly higher SMR than women for all-cause mortality (46% versus 38%), all cancer mortality, respiratory diseases, diabetes, cardiovascular diseases, and non-Hodgkin's lymphoma [2]. Most screening/prevention trials exclude those with an ongoing active malignancy. This will inevitably affect the cancer specific SMRs in the early years. There was a clear upward trend in the cancer specific SMRs, despite widening confidence intervals, when examined by 'year in trial'. In the first year, most were below 25%. It is feasible that the other exclusion criteria may also have had health implications. Women who had undergone bilateral oophorectomy were ineligible. Recent reports have shown increased mortality in women in this subgroup who do not use oestrogen replacement until the age of 45 [12,13]. Finally, higher participation rates may be expected to reduce the self-selection effect. In UKCTOCS 25% of women invited replied that they would like to participate in the trial but finally only 16% were randomised [5].\nThe overall 1st year SMR was 18.5%, rising to 35% in the 2nd year and nearly 50% by the 8th year.
The SMRs have been age-adjusted dynamically so this is not a result of the age-related increase in risk. Similar trends were seen in the PLCO trial. In both studies by the 7th year the SMR was 48%. A major contributing factor to the SMR trend with time is the health-screening nature of volunteering. The huge shortfall in SMRs for the first year of UKCTOCS, particularly causes other than cancer, are strong indicators that women suffering from poor health or chronic non-cancer illness tend not to volunteer [4]. Their health concerns naturally lie with their immediate real problems rather than a future potential issue. Some of these conditions may well predispose to earlier mortality. It is interesting that the younger age groups, specifically 50-54, had higher SMRs. This suggests that women in the younger groups may more closely represent their national counterparts. We were unable to find in the literature reports where differences in (age-adjusted) SMRs were explored by age, but if confirmed, one possible explanation is that there may be less prevalent morbidity that might hinder volunteering at these ages. Mental and behavioural deaths had the lowest SMRs and the need for informed consent could have been a contributory factor. For general health, the commonly reported u-shaped curve [14-16] relating BMI to mortality was seen, with the highest SMRs belonging to those underweight and obese (Table 1).\nThe HVE is often ascribed to the fact that educated women who are financially better off and have a healthier lifestyle are more likely to volunteer for a screening trial, where health awareness and the means to travel to the trial centre influence a women's decision. While difficult to substantiate directly, many of these factors are linked to indices of social deprivation and in UKCTOCS the availability of postcodes for all invited women made it possible to calculate deprivation (IMD) scores for all invitees from England. Figure 1 shows that the cohort of invited women was more deprived (higher IMD scores: mean = 24.7) than those who subsequently joined the trial (mean score = 19.6). The reported mean IMD score for England is 21.6 [17]. Our invited population was more deprived than the national average probably as a result of a higher proportion of urban centres. However, as has been shown, those who actually volunteered were less deprived. Of note, Bristol and Portsmouth, which were the least deprived (lowest mean IMD) among the 10 English centres, had the highest acceptance rates of invitations among all 13 centres [5]. This suggests that postal invitation alone will not persuade women from deprived backgrounds to participate.\nSocioeconomic status is known to be linked to most causes of mortality, including cancers. Bristol, which had the 2nd lowest mean IMD score had the lowest SMR, while Liverpool, with the highest mean IMD score, had the highest SMR (48.4%). Further support is provided by the trend of higher SMRs with increasing levels of deprivation across the 5 quintiles, from the least deprived (SMR = 30.4%) to the most deprived (SMR = 52.3%) (Table 5).\nThe most striking aspect on comparing cause-specific incidence and mortality rates was that the SIRs were higher than the SMRs and, for the leading cancers other than lung, pancreas, oesophagus and colorectal (just), the SIR confidence intervals crossed 100. 
In a recent analysis of US data, while late-stage diagnoses in all cancers (with resultant higher mortality) were associated with lower socioeconomic status, incidence of only certain specific cancers varied with socioeconomic status [18]. Pollock et al noted that while mortality increased with deprivation among patients suffering from lung, breast and colorectal cancers in the South Thames area, for incidence this was only observed in lung cancer [19]. Official national statistics for England and Wales show a mixture of positive (notably lung and cervix), negative (breast, leukaemia) and zero association (colorectal) between different cancer type incidences and deprivation [20]. The three cancers (lung, oesophagus and pancreas) with the largest shortfalls in SIRs in UKCTOCS have a strong link to smoking [21-23]. Individuals with lower socio-economic status are more likely to be current smokers, physically inactive and obese [24]. In all three of these cancers, there are reports of negative correlation between incidence and socioeconomic status [18-20,25,26]. Conversely, in breast cancer the SIR was 102% whilst the SMR was 43%, in keeping with previously reported associations between higher socioeconomic status and higher incidence of localised breast cancer but lower regional breast cancer mortality [25,27]. Women who volunteer for a screening trial are more likely to attend for breast screening and to be diagnosed with early stage disease. Overdiagnosis of breast cancer in the screened population could also contribute to higher incidence but lower mortality [28].\nDespite the strong similarity of results for overall mortality between UKCTOCS and the PLCO trial there is less commonality when cause-specific results are compared. While pancreatic cancer has the highest SMR in both studies, large discrepancies exist for cancers such as uterus (52% UKCTOCS versus 22% PLCO), stomach (75% versus 41%) and oesophagus (76% versus 41%). Given the smaller numbers in these subgroups, some of these differences may be purely random. Most of these cancers are also associated with lower SIRs in the PLCO trial compared to UKCTOCS: oesophageal (72% UKCTOCS versus 38% PLCO), stomach cancer (85% versus 48%) and bladder (80% versus 52%). It needs to be noted that there are subtle differences in the PLCO entry criteria when compared to UKCTOCS, such as minimum age (55 versus 50 in UKCTOCS) and inclusion of women who had undergone bilateral oophorectomy.\nThe most recently published statistics for mortality (for 2005, published 2006 [6]) and incidence (for 2006, published 2008 [7]) produced by ONS were used to calculate EMRs for the period 2001-2009 so the data can be considered broadly representative. An additional issue is that the 'national' mortality rates were based on data from England and Wales only but was used to calculate EMRs for the 13 579 women from Northern Ireland. There were also approximations involved in the actual calculations, such as the age-group mortality rates representing the midpoint of that group and specific age-adjusted rates estimated by use of a best-fitting simple model. The EMRs were also assumed to be fixed values when calculating the confidence intervals. They are estimates, as they are based on national data that varies yearly through a random component, in addition to any real change. 
However comparison of ONS's 2004 and 2005 (logged) mortality rates showed a high level of linear correlation, with all Pearson correlations for the major cancer causes over 0.99, except those for uterus (r = 0.984) and non-Hodgkin's lymphoma (r = 0.981). This suggests any yearly changes in mortality rates (real shifts or random fluctuations) are small and treating them as fixed was not unreasonable.", "The lack of mortality or incidence events can severely harm a clinical trial's ability to demonstrate efficacy. Other ramifications of the HVE inevitably include concerns over external validity of a demonstrated screening benefit, though that would imply some level of interaction between screening and volunteer characteristics. It may be hard to perceive how social factors could influence screening success directly at the point of intervention, though certainly compliance with a screening programme can be dependent upon the level of social deprivation [29]. Either way, one may regard this as a realistic aspect of a national screening programme. In UKCTOCS, the HVE has necessitated revision of the trial design in 2008, with extension of screening in the study arm until 31st Dec 2011 and follow up until 31st Dec 2014 [30]. During planning of this trial in 1999, no published data was available to estimate the impact of the HVE. The various mortality rates presented here are based on over one million study years, and incidence rates on over half a million study years. They provide vital information for investigators on likely event rate shortfalls that might be expected in ongoing and future screening studies/RCTs of similar design.", "IJ has consultancy arrangements with Becton Dickinson, who have an interest in tumour markers and ovarian cancer. They have provided consulting fees, funds for research, and staff but not directly related to this study. SS has received research support from Fujirebio Diagnostics but not in relation to this trial.", "IJ, UM, MP, SS, LF, SC were involved in trial design and concept. UM, MB, AR, MH AG-M and SA were involved in acquisition of data. MB, MP and UM were involved in the statistical analysis. MB and UM were responsible for interpretation of data and drafted the manuscript. All authors critically revised the manuscript and approved the final version. UM is the guarantor.", "The trial was core funded by the Medical Research Council (grant no. G990102), Cancer Research UK (grant no. C1479/A2884), and the Department of Health with additional support from the Eve Appeal, Special Trustees of Bart's and the London, and Special Trustees of UCLH. A major portion of this work was done at UCLH/UCL within the \"women's health theme\" of the NIHR UCLH/UCL Comprehensive Biomedical Research Centre supported by the Department of Health. SS has received research support from NCI (grant numbers CA086381 and CA083639). The researchers are independent from the funders.\nEthical approval: The study was approved by the UK North West Multicentre Research Ethics Committees (North West MREC 00/8/34) with site specific approval from the local regional ethics committees and the Caldicott guardians (data controllers) of the primary care trusts." ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[]
Timely HAART initiation may pave the way for a better viral control.
21362195
When to initiate antiretroviral therapy in HIV-infected patients is a difficult clinical decision. Indeed, it is still a matter of debate whether early highly active antiretroviral therapy (HAART) during primary HIV infection influences the dynamics of the viral rebound after therapy interruption and, more generally, the overall disease course.
BACKGROUND
In this article we use a computational model and clinical data to identify the role of HAART timing in the residual capability to control HIV rebound after treatment suspension. Analyses of clinical data from three groups of patients initiating HAART before seroconversion (very early), during the acute phase (early) or in the chronic phase (late) reveal differences arising from the very early events of the viral infection.
METHODS
The computational model allows a fine-grained assessment of the impact of HAART timing on the disease outcome, from acute to chronic HIV-1 infection. Both patient data and computer simulations reveal that HAART timing may indeed affect the capability to control HIV after treatment discontinuation. In particular, we find that the median time to viral rebound is significantly longer in very early than in late patients.
RESULTS
A timing threshold is identified, corresponding to approximately three weeks post-infection, after which the capability to control HIV replication is lost. Conversely, HAART initiation within three weeks of infection could preserve a significant control capability. This threshold could be related to the global triggering of uncontrolled immune activation, which affects the preservation of residual immune competence and the establishment of the HIV reservoir.
CONCLUSIONS
[ "Anti-HIV Agents", "Antiretroviral Therapy, Highly Active", "Computer Simulation", "Female", "HIV Infections", "Humans", "Male", "Time Factors", "Treatment Outcome", "Viral Load" ]
3058048
null
null
Methods
[SUBTITLE] Clinical studies [SUBSECTION] We analyzed the results of clinical studies performed at the Clinical Department of the National Institute for Infectious Disease "L. Spallanzani" in Rome. A first group of eleven patients (9 male and 2 female) were diagnosed HIV-1 positive between 1998 and 2006. All patients initiated HAART within 14 days from diagnosis, during the very early phase of infection (see Table 1). The very early phase was defined as having a negative or indeterminate western blot for HIV-1 antibodies in combination with a positive test for either p24 antigen or a detectable HIV-1 RNA concentration. These patients were treated with zidovudine/lamivudine (CBV) in combination with either the reverse transcriptase inhibitor efavirenz (EFZ) or a protease inhibitor, lopinavir/ritonavir (KAL) or indinavir (IDV). Because anaemia and neutropenia were diagnosed, in two cases CBV was substituted with lamivudine (3TC) and stavudine (D4T). All these patients underwent a therapy cycle for 2 ± 1 years and remained off HAART for about 48 weeks. Very early subjects with an immediate treatment before seroconversion. Clinical information about the eleven patients selected at the Clinical Department of the National Institute for Infectious Disease "L. Spallanzani" in Rome. All subjects received HAART within six months from primary infection. †CBV, Combivir (AZT Zidovudine plus 3TC Lamivudine); IDV, Indinavir; EFZ, Efavirenz; KAL, Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. ** 4 weeks after treatment interruption. The second group is made up of twenty-two patients (21 male and 1 female) enrolled in the program between 1998 and 2005. Patients in this group underwent HAART during the early phase of HIV-1 infection. In particular, they started HAART about 20 days after diagnosis (see Table 2). Early patients were defined as having a documented seronegative HIV-1 antibody test within the previous 6 months; acute symptomatic seroconversion illness; an evolving HIV-specific antibody response by ELISA; and a positive HIV-DNA PCR in PBMC. These patients were treated with three different drugs, in the majority of cases zidovudine (AZT) plus 3TC plus a protease inhibitor. Further details can be found in Table One of [4]. All these patients underwent a therapy cycle for 3 ± 1 years and remained off HAART for about 88 weeks. Early subjects with an immediate treatment of acute HIV-1 infection. Clinical information about the twenty-two patients selected at the Clinical Department of the National Institute for Infectious Disease "L. Spallanzani" in Rome. All subjects received HAART within six months from primary infection. †D4T, stavudine; 3TC, lamivudine; IDV, Indinavir; AZT, Zidovudine; NFV, nelfinavir; EFV, Efavirenz; NVP, nevirapine; TNF, Tenofovir; Lop, lopinavir; Rit, Ritonavir; LPV (KAL), Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. The third group consists of twenty-one patients (12 male, 9 female). They started HAART during the chronic phase of infection, defined as suggested by the guidelines [10]. In particular, they started HAART about 3.5 years after diagnosis (see Table 3). Their CD4 count at initiation was 400 ± 150 per microlitre of plasma. The range of calendar year for starting HAART among these patients was 1998 ± 3. All these patients underwent a therapy cycle for 4 ± 2 years and remained off HAART for about 41 weeks. The Ethical Committee of the "L. Spallanzani" Institute approved the study and the patients gave written informed consent. Late subjects with deferred treatment of acute HIV-1 infection. Clinical information about the twenty-one patients selected at the Clinical Department of the National Institute for Infectious Disease "L. Spallanzani" in Rome. All subjects received HAART during the chronic phase of infection. †AZT, Retrovir (Zidovudine); 3TC, Epivir (Lamivudine); D4T, Zerit (Stavudine); SQV, Saquinavir (Invirase); Zalcitabine (ddC, Hivid); Rit, Ritonavir (Norvir); ddI, Didanosine (Videx); IDV, Indinavir (Crixivan); EFV, Efavirenz; NVP, Nevirapine (Viramune); LPV (Lop), Lopinavir. ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. [SUBTITLE] Plasma HIV-1 determination [SUBSECTION] Plasma HIV-1 RNA levels were determined by a second-generation assay based on nucleic acid sequence based amplification (NASBA) for samples collected until 2001, and by the branched-chain DNA assay (Versant HIV RNA test, Version 3.0, lower limit of quantification 50 copies/ml; Bayer Diagnostics, Milan, Italy) from 2001 until 2008. [SUBTITLE] Computational model [SUBSECTION] The current version of the model we employ derives from an earlier simulator that has been extensively described in previous publications [11,12]. Recently it has been specialized to simulate HIV-1 infection [6] and the effects of antiretroviral therapy [4]. Briefly, it resorts to bit strings to represent "binding sites" of cells and molecules, for example lymphocyte receptors, MHC, antigen peptides and epitopes, immunocomplexes, etc. [13]. The model includes the major classes of cells of the lymphoid lineage (T helper lymphocytes, cytotoxic T lymphocytes, B lymphocytes and antibody-producing plasma cells) and some of the myeloid lineage (macrophages and dendritic cells). These entities are individually represented. In contrast to cells, cytokines like interleukin-2 are represented in terms of concentrations, and their dynamics are described by a parabolic partial differential equation plus a degradation term accounting for the finite half-life [5,14]. Modeled features of the HIV infection include HIV replication inside infected lymphocytes, impairment of T-cell production, specific responses against HIV strains and HIV mutation. The simulated life cycle of the virus is represented by the following stages: 1) the virus infects CD4+ T cells, macrophages and dendritic cells; 2) reverse transcriptase copies the viral single-stranded RNA genome into a double-stranded viral DNA, which is then integrated into the host chromosomal DNA; 3) the virus remains at rest until an event activates transcription; 4) the replicating virus buds from the cell membrane. Fully assembled virions are then able to infect other cells and restart the life cycle. HAART effects are modeled as follows: reverse transcriptase inhibitors block reverse transcriptase enzymatic functions and prevent completion of the synthesis of the double-stranded viral DNA, thus preventing HIV-1 from replicating (i.e., they prevent the virus in stage 1 from reaching stage 2); protease inhibitors prevent viral replication by inhibiting the activity of HIV-1 protease, an enzyme used by the virus to cleave nascent proteins for the final assembly of new virions (i.e., they prevent virus assembly in stage 4). Further details and parameter settings of the simulations can be found in the Additional file 1. Regarding the therapy-related parameters, we performed computer simulations in which we fixed the immunological parameters at the time of therapy initiation on the basis of the average values measured in patients in vivo: 5.8 ± 0.2 RNA copies/ml (in logarithmic scale), 870 ± 50 CD4 cells/μl and 430 ± 50 CD8 cells/μl. For all simulations we applied a one-year course of HAART. Further details on the tuning of the simulation parameters can be found in the Additional file 2.
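To make the staged life-cycle description above more concrete, here is a minimal Python sketch of a virion stepping through the four stages, with reverse transcriptase inhibitors (RTIs) blocking the transition from stage 1 to stage 2 and protease inhibitors (PIs) blocking assembly at stage 4. This is a toy illustration written for this text, not the agent-based C-ImmSim simulator itself; the function names, the activation probability and the one-day time step are all invented for the example.

```python
# Toy illustration (not the C-ImmSim simulator): a single virion steps through
# the four life-cycle stages described above; RTIs block the stage 1 -> 2
# transition, PIs block assembly at stage 4. Names and probabilities are
# invented for the example.
import random

def advance(stage, rti_active, pi_active, activation_prob=0.1):
    """Return the next stage (1-4, wrapping back to 1), or None if therapy blocks it."""
    if stage == 1:                        # entry into a CD4+ T cell, macrophage or DC
        return None if rti_active else 2  # RTI prevents reverse transcription
    if stage == 2:                        # viral DNA integrated into host DNA
        return 3
    if stage == 3:                        # latent until an activation event occurs
        return 4 if random.random() < activation_prob else 3
    if stage == 4:                        # budding/assembly of new virions
        return None if pi_active else 1   # PI prevents assembly; otherwise a new cycle starts
    raise ValueError(f"unknown stage {stage}")

def completed_cycles(days, rti_active=False, pi_active=False, seed=0):
    """Count complete replication cycles over a simulated period (1 step = 1 day)."""
    random.seed(seed)
    stage, cycles = 1, 0
    for _ in range(days):
        nxt = advance(stage, rti_active, pi_active)
        if nxt is None:                   # replication blocked by therapy
            break
        if stage == 4 and nxt == 1:
            cycles += 1                   # one complete round of infection
        stage = nxt
    return cycles

print("untreated:", completed_cycles(365))
print("on HAART :", completed_cycles(365, rti_active=True, pi_active=True))
```

Even this caricature reproduces the qualitative point of the model description: with either inhibitor class active, no complete replication cycle can occur, whereas the untreated run accumulates repeated rounds of infection.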
null
null
null
null
[ "Background", "Clinical studies", "Plasma HIV-1 determination", "Computational model", "Results", "Discussion", "Conclusions", "Availability and requirements", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "The question of when antiretroviral therapy has to be initiated remains a challenging issue. Recent studies show that the early immune response to HIV-1 infection is likely to be an important factor in determining the clinical course of disease [1]. The first weeks following HIV-1 transmission are extremely dynamic. They are associated with rapid damage to generative immune cell micro-environments and with immune responses that partially control the virus. Following HIV-1 infection, the virus first replicates locally in the mucosa and then is transported to draining lymph nodes where further amplification occurs. This initial phase of infection, until the systemic viral dissemination begins, constitutes the eclipse phase [1]. In general, there is an exponential increase in plasma viremia with a peak 21-28 days after infection. By this time, significant depletion of mucosal CD4+T cells has already occurred. Around the time of peak viremia, patients may become symptomatic and reservoirs of latent virus are established [1,2].\nThe \"window of opportunity\" between the infection and peaking of viremia, prior to massive CD4+ T cell destruction and the establishment of viral reservoirs, seems to be a narrow but crucial period in which an antiretroviral therapy can control viral replication, prevent an extensive CD4+ T cell depletion from occurring and curb generalized immune activation. Thus, thwarting HIV replication by introducing HAART in the early phases of infection could have a substantial impact on the whole disease course. In particular, suggested factors that may contribute to the observed better viral control after treatment interruption in very early treated patients are [3]: i) early arrest of viral escape, leaving the virus vulnerable; ii) preservation or even enhancement of the immune response resulting from the early clearing of antigen; iii) prevention of the establishment of a pool of HIV-specific memory CD4 T cells thus leaving fewer target cells available for viral infection.\nAn ideal clinical model aimed at addressing such issue should compare a number of patients treated starting on different times: from very early to very late. Besides the ethical issues, it is rather difficult to collect enough patients to significantly represent the whole spectrum of possible HAART initiation timings. As a matter of fact, a practical clinical model would compare very early to late-treated patients. While informative on the overall role of HAART timing on disease course, this approach would not allow to verify if there are events in the early infection influenced by the starting time of HAART that affect directly and decisively the course of the disease.\nWe have already shown in [4-6] that an agent-based model of HIV-1 infection could be a valuable tool for the study of the AIDS disease progression and treatment. The computerized simulation allows us to track the effect of HAART timing on the progression of the disease.\nThe aim of the present work is to verify the effect of HAART timing on subsequent events. Indeed, both a clinical model and a computational simulation show that a late initiation of treatment affects HIV-1 replication control. 
Interestingly, the in silico model identifies a significant three-week time threshold as the \"ultimate\" time point beyond which the decisive HIV-induced damages already occurred, affecting the whole disease course.\nIn a previous work [4], we analyzed clinical data of patients initiating HAART within six months from infection (i.e., we called that early phase) and performed computer simulations to predict the differences in viral rebound at therapy interruption between those patients and subjects initiating therapy during the chronic phase (i.e., we called that late phase, corresponding to initiating six or more months after infection). Our conclusion was that early initiation of therapy does not prolong the disease-free period when compared to a treatment started during the late phase. However, other studies suggest that an earlier initiation is preferable [7-9]. This motivated us to better identify the meaning of early initiation. In the present article, we extend the analysis in [4] to get a more complete picture. We analyze clinical data of very early patients (i.e., treated before seroconvertion) against late-treated patients.", "We analyzed the results of clinical studies performed at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome.\nA first group of eleven patients (9 male and 2 female) were diagnosed HIV-1 positive between year 1998 and 2006. All patients initiated HAART within 14 days from diagnosis, during the very early phase of infection (see Table 1). The very early phase was defined as having a negative or indeterminate western blot for HIV-1 antibodies in combination with a positive test for either p24 antigen or a detectable HIV-1 RNA concentration. Those patients were treated with zidovudine/lamivudine (CBV) in combination with either the reverse transcriptase inhibitor efavirenz (EFZ) or one protease inhibitor lopinavir/ritonavir (KAL) or indinavir (IDV). Because anaemia and neutropenia were diagnosed, in two cases CBV has been substituted with lamivudine (3TC) and staduvine (D4T). All those patients underwent a therapy cycle for 2 ± 1 years and remained off HAART for about 48 weeks.\nVery early subjects with an immediate treatment before seroconversion.\nClinical information about the eleven patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†CBV, Combivir (AZT Zidovudine plus 3TC Lamivudine); IDV, Indinavir; EFZ, Efavirenz; KAL, Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. ** 4 weeks after treatment interruption.\nThe second group is made up by twenty-two patients (21 male and 1 female) enrolled in the program between year 1998 and 2005. Patients in this group underwent HAART during the early phase of HIV-1 infection. In particular, they started HAART about 20 days after treatment diagnosis (see Table 2). Early patients were defined as having documented seronegative HIV-1 antibody test within the previous 6 months; acute symptomatic seroconversion illness; evolving HIV-specific antibody response by ELISA; positive HIV-DNA PCR in PBMC. Those patients were treated with three different drugs (in the majority of cases zidovudine (AZT) plus 3TC plus a protease inhibitor. Further details can be found in Table One of [4]. 
All those patients underwent a therapy cycle for 3 ± 1 years and remained o HAART for about 88 weeks.\nEarly subjects with an immediate treatment of acute HIV-1 infection.\nClinical information about the twenty-two patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†D4T, staduvine; 3TC, lamivudine; IDV, Indinavir; AZT, Zidovudine; NFV, nelflnavir; EFV, Efavirenz; NVP, nevirapine; TNF, Tenofovir; Lop, lopinavir; Rit, Ritonavir; LPV (KAL), Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\nThe third group consists of twenty-one patients (12 male, 9 female). They started HAART during the chronic phase of infection defined as suggested by the guidelines [10]. In particular, they started HAART about 3.5 years after treatment diagnosis (see Table 3). Their CD4 count at initiation was 400 ± 150 per microlitre of plasma. The range of calendar year for starting HAART among those patients was 1998 ± 3. All those patients underwent a therapy cycle for 4 ± 2 years and remained off HAART for about 41 weeks. The Ethical Committee of the \"L. Spallanzani\" Institute approved the study and the patients gave a written informed consent.\nLate subjects with deferred treatment of acute HIV-1 infection.\nClinical information about the twenty-one patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART during chronic phase of infection.\n†AZT, Retrovir (Zidovudine); 3TC, Epivir (Lamivudine); D4T, Zerit (Staduvine); SQV, Saquinvir (Invirase); Zalcitabina (ddC, Hivid); Rit, Ritonavir (Norvir); ddl, Didanosine (Videx); IDV, Indinavir (Crix-ivan); EFV, Efavirenz; NVP, Nevirapine (Viramune); LPV (Lop), Lopinavir; ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.", "Plasma HIV-1 RNA levels were determined by a second-generation assay based on nucleic acid sequence based amplification (NASBA), for samples collected until 2001 and by the branched-chain DNA assay (Versant HIV RNA test, Version 3.0, lower limit of quantification 50 copies/ml; Bayer Diagnostics, Milan, Italy) from 2001 until 2008.", "The current version of the model we employ derives from an early simulator that has been quite extensively described in previous publications [11,12]. Recently it has been specialized to simulate the HIV-1 infection [6] and the effects of antiretroviral therapy [4].\nBriefly, it resorts to bit strings to represent \"binding sites\" of cells and molecules, as for example lymphocyte receptors, MHC, antigen peptides and epitopes, immunocomplexes, etc. [13]. The model includes the major classes of cells of the lymphoid lineage (T helper lymphocytes, cytotoxic T lymphocytes, B lymphocytes and antibody-producer plasma cells) and some of the myeloid lineage (macrophages and dendritic cells). These entities are individually represented. In contrast to cells, cytokines like interleukin-2 are represented in terms of concentrations and their dynamics described by a parabolic partial differential equation plus a degradation term accounting for the finite half-life [5,14]. 
Modeling features of the HIV infection include HIV replication inside infected lymphocytes, T production impairment; specific response against HIV strains and HIV mutation.\nThe simulated life cycle of the virus is represented by the following stages: 1) the virus infects CD4+ T cells, macrophages, dendritic cells; 2) reverse transcriptase copies the viral single stranded RNA genome into a double-stranded viral DNA. The viral DNA is then integrated into the host chromosomal DNA; 3) the virus remains at rest until an event activates the transcription; 4) the replicating virus buds from the cell membrane. Fully assembled virions are then able to infect other cells to restart the life cycle. HAAR effects are modeled as follows: Reverse transcriptase inhibitors block reverse transcriptase enzymatic functions and avoid completion of synthesis of the double-stranded viral DNA thus preventing HIV-1 from replicating (i.e., it prevents the virus in stage 1 from reaching stage 2); Protease inhibitors prevent viral replication by inhibiting the activity of HIV-1 protease, an enzyme used by the virus to cleave nascent proteins for final assembly of new virions (i.e., it prevents virus assembly in stage 4). Further details and parameter settings of the simulations can be found in the Additional file 1.\nFor what concerns the setting of the parameters related to the therapy, we performed computer simulations in which we fixed the immunological parameters at the time of therapy initiation on the basis of the average values measured in patients in vivo: 5.8 ± 0.2 RNA copies/ml (in logarithmic scale), 870 ± 50 CD4 cells/μl and 430 ± 50 CD8 cells/μl. For all simulations we applied a one-year course of HAART. Further details on the tuning of the simulation parameters can be found in the Additional file 2.", "We analyze virological data from HIV patients treated during the very early, early and late phase of infection and compare them with computer simulations.\nIn Figure 1 clinical data of all three analyzed groups is shown altogether. In a point-by-point comparison we find no statistical difference in viral rebound between early and late-treated patients (P ≥ 0.05, Mann-Whitney U two-tailed test) confirming the results of [4]. In addition, we observe that a difference does exist for very early initiation of therapy (P < 0.05, Mann-Whitney U two-tailed test).\nPlasma HIV-1 RNA load. The mean plasma HIV-1 RNA load versus time in weeks after interruption of HAART for patients classified in late (21 subjects), early (22 subjects) and very early (11 subjects) groups. Error bars show standard deviations. The features of the three clinical settings are given in section Patients and Methods.\nIn the present work we extend the simulations of [4] to include the new clinical settings corresponding to a very early initiation of therapy. In particular, the very early simulation settings correspond to a beginning of the therapy within the first week whereas the late settings correspond to initiating therapy between week five and six from infection.\nFigure 2 summarizes data of virological rebound (averages) after therapy interruption at different time points (4, 8 and 24 weeks) for very early and late patients for both clinical (empty boxes) and simulation data (filled boxes). Firstly, the figure shows that clinical and simulation data are in good agreement (differences are not statistically significant: P ≥ 0.05, Mann-Whitney U two-tailed test). 
Secondly, the difference in plasma HIV-1 RNA (copies/ml) between very early and late-treated patients decreases with increasing time from therapy interruption. Panel (a) shows a difference of about two logs compared with panel (b) for both clinical data and simulations. These differences vanish after 24 weeks from therapy interruption (cf. panels (e) and (f)). The overall message is that a delay in the initiation of therapy reduces the chances of maintaining a therapy effect at discontinuation.\nVirological rebound. Virological rebound after 4 weeks (top), 8 weeks (middle) and 24 weeks (bottom) from therapy interruption for two groups: very early (green boxes), who started HAART before seroconversion, and late (blue boxes), who started HAART during the chronic phase of primary HIV-1 infection. Filled boxes represent in silico data (resulting from three thousand runs) whereas empty boxes correspond to in vivo data. We calculated the Mann-Whitney U test statistic to assess whether the two independent samples (in silico and in vivo) come from the same distribution. In all cases we did not find a significant difference (P ≥ 0.05, Mann-Whitney U two-tailed test). Black lines indicate standard deviations. Information on parameter settings for the simulations can be found in the Additional file 1.\nIn order to provide a more precise estimate of the time \"limit\" beyond which the benefit of an early initiation of therapy vanishes, we use the simulation to investigate the influence of HAART initiation time (ts) on the viral rebound. The corresponding results are shown in Figure 3, which presents the virological rebound at one week after therapy interruption as a function of ts. We observe that there are two regimes, one for ts < 20 days and one for ts > 30 days, corresponding to what clinicians call, respectively, best controllers (with undetectable HIV RNA levels) and rebounders (whose HIV viremia load returns, approximately, to the pre-HAART level).\nVirological rebound after 1 week. Virological rebound at 1 week after therapy interruption for different starting time points of HAART (ts). The red dots represent the results of thousands of simulations and the fitting line is given by the Richards' curve in equation 1. Standard deviation is about 0.1 log10(copies/ml) for all points. Inset plot is a zoom for ts < 30 days. Parameters of the fitting curve are: a = 1.52, k = 3.79, d = 0.12, and ts* = 23.50.\nThe points in Figure 3 are well fitted by a generalized logistic function (i.e., the Richards' curve, [15]) describing the growth of viremia as a function of ts,\n\nV(ts) = a + (k - a) / (1 + e^{-d (ts - ts*)}),\n\nwhere the parameter k is the carrying capacity or the upper asymptote, a is the lower asymptote, d is the growth rate, and ts* is the time of maximum growth. By moving the time of the measurements beyond one week after therapy interruption, the resulting data still fit the same V(ts) but with a greater a, a smaller d and a greater ts*. In particular, in the limit of d going to zero, V(ts) tends to (a + k)/2, which may lead to the deceiving conclusion that there is no window of opportunity because the viral rebound would be independent of ts.\nWith respect to ts*, the value of ~23.5 days points to the early inflammation as a critical phase of the disease. To bring this facet into focus, we compare two simulated \"markers\" of the inflammation state in untreated (control case), very early and late-treated simulated patients (see Figure 4). 
These virtual markers are given by the cell counts of active macrophages (a) and dendritic cells presenting viral proteins on class II MHC molecules (b). We observe that the late-treated case is comparable to the control case (untreated) whereas the very early case stands on its own. This observation suggests that it is the activation of the immune system, through the set-up of an inflammatory state, that has to be blamed for the increased viral rebound for ts > ts*.\nInflammatory response. Cell counts of active macrophages (a) and dendritic cells presenting viral proteins on class II MHC molecules (b) show that, in the simulation, the late-treated case is comparable to the control case (untreated) whereas the very early case stands on its own. This suggests that the set-up of an inflammatory state affects the viral rebound at therapy discontinuation. Counts are taken at week 8 for all groups. Therapy for the very early group started within the first week and for the late group at about week 6.\nFigure 4 shows clearly that very early initiation of the treatment can down-regulate the immune activation, hence limiting viral replication and spread. Interestingly, this view is supported by the observation that HIV triggers the immune activation directly (e.g., HIV gene products can induce the activation of lymphocytes and macrophages as well as the production of pro-inflammatory cytokines and chemokines [2]) or indirectly (e.g., sustained antigen-mediated immune activation occurs in HIV-1-infected patients due also to other viruses like the cytomegalovirus or the Epstein-Barr virus [2]). In both cases, the result is a high level of pro-inflammatory cytokines, such as tumor necrosis factor alpha, interleukin 6 and interleukin 1 beta, right from the early stages of HIV-1 infection [2].", "Recent analysis (performed by Fiebig et al. [16]) of samples from individuals infected by HIV-1 has revealed that patients can be categorized into six stages on the basis of a sequential gain in positive HIV-1 clinical diagnostic assays (viral RNA measured by PCR, p24 and p31 viral antigens measured by enzyme-linked immunosorbent assay (ELISA), HIV-1-specific antibody detected by ELISA and HIV-1-specific antibodies detected by western blot [16]). Patients progress from acute to early chronic infection at the end of stage V, approximately 100 days following infection, as the plasma viral load begins to stabilize.\nWith respect to the study conducted by clinical data analysis and computer simulation described so far, we identify three regimes, as highlighted in Figure 3. These can be paralleled to the stages of Fiebig et al. [16]. In particular, we observe that patients treated with HAART in the very early stages of the infection (stages I-III) are likely to better control the viremia after treatment interruption [3]. If therapy starts in the acute phase (stages V-VI) then the action of the drug foils the immune response and, as a consequence, at the end of the therapeutic period the virus rebounds undisturbed.\nThese considerations are summarized in Figure 5, where we draw a schematic picture of the importance of an early initiation of HAART with respect to the progression of HIV markers according to the stages of Fiebig et al. In particular, we identified the \"window of opportunity\" corresponding to stages I-III, that is, the first three weeks from primary HIV-1 infection. Patients receiving therapy in this narrow period are likely to turn out to be best controllers. 
Probably, the massive immune activation in the early stage of the disease favors the virus, as it finds more host target cells to exploit for replication. In point of fact, the ensuing massive depletion of CD4+ T cells in mucosal lymphoid tissues, can result in the disruption of the mucosal barrier in the gut. This barrier prevents the translocation of the intestinal flora from the gut to the systemic immune system restricting it to the lamina propia and the mesenteric lymph nodes [2]. HIV-1 infection is indeed associated with a significant increase of plasma lipopolysaccharide levels that is an indicator of microbial translocation, directly correlated with measures of immune activation.\nPhase diagram. Schematic diagram of the importance of an early initiation of HAART with respect to the progression of HIV markers according to the staging in [16].", "A number of studies indicate that interfering with HIV replication by starting the therapy in the early phases of the infection could have a deep impact on the whole disease course. However, HAART is costly, it is onerous for both patient and health care provider, and often brings adverse effects. Its clinical benefit must therefore be weighed against its burden.\nIn the present study, we resorted to a computer model to study the dynamics of the plasma viral load after prolonged treatment interruption in two groups of in silico patients: those who initiate HAART very early and those who start it lately. We evaluated the model comparing the results to clinical data. We found that an opportunity time-window exists for the initiation of HAART (roughly within three weeks before the establishment of viral reservoirs), in which the therapy can control viral replication, preventing generalized immune activation and extensive CD4+ T cell depletion.", "An educational version of the immune system simulator is available on our website:\n- Name: C-ImmSim\n- Home page: http://www.iac.rm.cnr.it/~filippo/C-ImmSim.html\n- Operating system(s): Linux, Unix Mac OS X, Windows\n- Programming language: C\n- Licence: C-ImmSim is available under a LICENSE AGREEMENT that needs to be signed: http://www.iac.rm.cnr.it/~filippo/how-to-get-cimmsim_files/LicenseAgreement.pdf", "The authors declare that they have no competing interests.", "FC and PP designed and performed research; all authors wrote the paper. FM and GDO provided the clinical data. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/56/prepub\n" ]
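The generalized logistic (Richards') relationship between HAART starting time ts and the one-week virological rebound, quoted in the Results text above with fitted parameters a = 1.52, k = 3.79, d = 0.12 and ts* = 23.5 days, can be fitted with standard tools. The snippet below is a generic sketch: the (ts, rebound) points are synthetic, generated from the published parameter values plus noise, and are not the simulation output of the paper.

```python
# Sketch: fit the Richards (generalized logistic) curve
#   V(ts) = a + (k - a) / (1 + exp(-d * (ts - ts_star)))
# to (ts, one-week log10 rebound) points. The data below are synthetic,
# generated from the published parameter values plus noise; they are NOT
# the simulation output of the paper.
import numpy as np
from scipy.optimize import curve_fit

def richards(ts, a, k, d, ts_star):
    return a + (k - a) / (1.0 + np.exp(-d * (ts - ts_star)))

rng = np.random.default_rng(1)
ts = np.linspace(0, 60, 40)                # days from infection to HAART initiation
reported = (1.52, 3.79, 0.12, 23.5)        # a, k, d, ts* as quoted in the Results text
rebound = richards(ts, *reported) + rng.normal(0.0, 0.1, ts.size)

popt, pcov = curve_fit(richards, ts, rebound, p0=(1.0, 4.0, 0.1, 20.0))
perr = np.sqrt(np.diag(pcov))
for name, value, err in zip(("a", "k", "d", "ts*"), popt, perr):
    print(f"{name} = {value:.2f} +/- {err:.2f}")

# Consistency check mentioned in the text: as d -> 0 the curve flattens to (a + k)/2
print("flat limit (a + k)/2 =", round((popt[0] + popt[1]) / 2, 2))
```

For the point-by-point in vivo versus in silico comparisons, the corresponding routine would be scipy.stats.mannwhitneyu with alternative='two-sided', matching the two-tailed Mann-Whitney U tests reported in the Results text.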
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Clinical studies", "Plasma HIV-1 determination", "Computational model", "Results", "Discussion", "Conclusions", "Availability and requirements", "Competing interests", "Authors' contributions", "Pre-publication history", "Supplementary Material" ]
[ "The question of when antiretroviral therapy has to be initiated remains a challenging issue. Recent studies show that the early immune response to HIV-1 infection is likely to be an important factor in determining the clinical course of disease [1]. The first weeks following HIV-1 transmission are extremely dynamic. They are associated with rapid damage to generative immune cell micro-environments and with immune responses that partially control the virus. Following HIV-1 infection, the virus first replicates locally in the mucosa and then is transported to draining lymph nodes where further amplification occurs. This initial phase of infection, until the systemic viral dissemination begins, constitutes the eclipse phase [1]. In general, there is an exponential increase in plasma viremia with a peak 21-28 days after infection. By this time, significant depletion of mucosal CD4+T cells has already occurred. Around the time of peak viremia, patients may become symptomatic and reservoirs of latent virus are established [1,2].\nThe \"window of opportunity\" between the infection and peaking of viremia, prior to massive CD4+ T cell destruction and the establishment of viral reservoirs, seems to be a narrow but crucial period in which an antiretroviral therapy can control viral replication, prevent an extensive CD4+ T cell depletion from occurring and curb generalized immune activation. Thus, thwarting HIV replication by introducing HAART in the early phases of infection could have a substantial impact on the whole disease course. In particular, suggested factors that may contribute to the observed better viral control after treatment interruption in very early treated patients are [3]: i) early arrest of viral escape, leaving the virus vulnerable; ii) preservation or even enhancement of the immune response resulting from the early clearing of antigen; iii) prevention of the establishment of a pool of HIV-specific memory CD4 T cells thus leaving fewer target cells available for viral infection.\nAn ideal clinical model aimed at addressing such issue should compare a number of patients treated starting on different times: from very early to very late. Besides the ethical issues, it is rather difficult to collect enough patients to significantly represent the whole spectrum of possible HAART initiation timings. As a matter of fact, a practical clinical model would compare very early to late-treated patients. While informative on the overall role of HAART timing on disease course, this approach would not allow to verify if there are events in the early infection influenced by the starting time of HAART that affect directly and decisively the course of the disease.\nWe have already shown in [4-6] that an agent-based model of HIV-1 infection could be a valuable tool for the study of the AIDS disease progression and treatment. The computerized simulation allows us to track the effect of HAART timing on the progression of the disease.\nThe aim of the present work is to verify the effect of HAART timing on subsequent events. Indeed, both a clinical model and a computational simulation show that a late initiation of treatment affects HIV-1 replication control. 
Interestingly, the in silico model identifies a significant three-week time threshold as the \"ultimate\" time point beyond which the decisive HIV-induced damages already occurred, affecting the whole disease course.\nIn a previous work [4], we analyzed clinical data of patients initiating HAART within six months from infection (i.e., we called that early phase) and performed computer simulations to predict the differences in viral rebound at therapy interruption between those patients and subjects initiating therapy during the chronic phase (i.e., we called that late phase, corresponding to initiating six or more months after infection). Our conclusion was that early initiation of therapy does not prolong the disease-free period when compared to a treatment started during the late phase. However, other studies suggest that an earlier initiation is preferable [7-9]. This motivated us to better identify the meaning of early initiation. In the present article, we extend the analysis in [4] to get a more complete picture. We analyze clinical data of very early patients (i.e., treated before seroconvertion) against late-treated patients.", "[SUBTITLE] Clinical studies [SUBSECTION] We analyzed the results of clinical studies performed at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome.\nA first group of eleven patients (9 male and 2 female) were diagnosed HIV-1 positive between year 1998 and 2006. All patients initiated HAART within 14 days from diagnosis, during the very early phase of infection (see Table 1). The very early phase was defined as having a negative or indeterminate western blot for HIV-1 antibodies in combination with a positive test for either p24 antigen or a detectable HIV-1 RNA concentration. Those patients were treated with zidovudine/lamivudine (CBV) in combination with either the reverse transcriptase inhibitor efavirenz (EFZ) or one protease inhibitor lopinavir/ritonavir (KAL) or indinavir (IDV). Because anaemia and neutropenia were diagnosed, in two cases CBV has been substituted with lamivudine (3TC) and staduvine (D4T). All those patients underwent a therapy cycle for 2 ± 1 years and remained off HAART for about 48 weeks.\nVery early subjects with an immediate treatment before seroconversion.\nClinical information about the eleven patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†CBV, Combivir (AZT Zidovudine plus 3TC Lamivudine); IDV, Indinavir; EFZ, Efavirenz; KAL, Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. ** 4 weeks after treatment interruption.\nThe second group is made up by twenty-two patients (21 male and 1 female) enrolled in the program between year 1998 and 2005. Patients in this group underwent HAART during the early phase of HIV-1 infection. In particular, they started HAART about 20 days after treatment diagnosis (see Table 2). Early patients were defined as having documented seronegative HIV-1 antibody test within the previous 6 months; acute symptomatic seroconversion illness; evolving HIV-specific antibody response by ELISA; positive HIV-DNA PCR in PBMC. Those patients were treated with three different drugs (in the majority of cases zidovudine (AZT) plus 3TC plus a protease inhibitor. 
Further details can be found in Table One of [4]. All those patients underwent a therapy cycle for 3 ± 1 years and remained o HAART for about 88 weeks.\nEarly subjects with an immediate treatment of acute HIV-1 infection.\nClinical information about the twenty-two patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†D4T, staduvine; 3TC, lamivudine; IDV, Indinavir; AZT, Zidovudine; NFV, nelflnavir; EFV, Efavirenz; NVP, nevirapine; TNF, Tenofovir; Lop, lopinavir; Rit, Ritonavir; LPV (KAL), Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\nThe third group consists of twenty-one patients (12 male, 9 female). They started HAART during the chronic phase of infection defined as suggested by the guidelines [10]. In particular, they started HAART about 3.5 years after treatment diagnosis (see Table 3). Their CD4 count at initiation was 400 ± 150 per microlitre of plasma. The range of calendar year for starting HAART among those patients was 1998 ± 3. All those patients underwent a therapy cycle for 4 ± 2 years and remained off HAART for about 41 weeks. The Ethical Committee of the \"L. Spallanzani\" Institute approved the study and the patients gave a written informed consent.\nLate subjects with deferred treatment of acute HIV-1 infection.\nClinical information about the twenty-one patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART during chronic phase of infection.\n†AZT, Retrovir (Zidovudine); 3TC, Epivir (Lamivudine); D4T, Zerit (Staduvine); SQV, Saquinvir (Invirase); Zalcitabina (ddC, Hivid); Rit, Ritonavir (Norvir); ddl, Didanosine (Videx); IDV, Indinavir (Crix-ivan); EFV, Efavirenz; NVP, Nevirapine (Viramune); LPV (Lop), Lopinavir; ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\nWe analyzed the results of clinical studies performed at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome.\nA first group of eleven patients (9 male and 2 female) were diagnosed HIV-1 positive between year 1998 and 2006. All patients initiated HAART within 14 days from diagnosis, during the very early phase of infection (see Table 1). The very early phase was defined as having a negative or indeterminate western blot for HIV-1 antibodies in combination with a positive test for either p24 antigen or a detectable HIV-1 RNA concentration. Those patients were treated with zidovudine/lamivudine (CBV) in combination with either the reverse transcriptase inhibitor efavirenz (EFZ) or one protease inhibitor lopinavir/ritonavir (KAL) or indinavir (IDV). Because anaemia and neutropenia were diagnosed, in two cases CBV has been substituted with lamivudine (3TC) and staduvine (D4T). All those patients underwent a therapy cycle for 2 ± 1 years and remained off HAART for about 48 weeks.\nVery early subjects with an immediate treatment before seroconversion.\nClinical information about the eleven patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. 
All subjects received HAART within six months from primary infection.\n†CBV, Combivir (AZT Zidovudine plus 3TC Lamivudine); IDV, Indinavir; EFZ, Efavirenz; KAL, Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. ** 4 weeks after treatment interruption.\nThe second group is made up by twenty-two patients (21 male and 1 female) enrolled in the program between year 1998 and 2005. Patients in this group underwent HAART during the early phase of HIV-1 infection. In particular, they started HAART about 20 days after treatment diagnosis (see Table 2). Early patients were defined as having documented seronegative HIV-1 antibody test within the previous 6 months; acute symptomatic seroconversion illness; evolving HIV-specific antibody response by ELISA; positive HIV-DNA PCR in PBMC. Those patients were treated with three different drugs (in the majority of cases zidovudine (AZT) plus 3TC plus a protease inhibitor. Further details can be found in Table One of [4]. All those patients underwent a therapy cycle for 3 ± 1 years and remained o HAART for about 88 weeks.\nEarly subjects with an immediate treatment of acute HIV-1 infection.\nClinical information about the twenty-two patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†D4T, staduvine; 3TC, lamivudine; IDV, Indinavir; AZT, Zidovudine; NFV, nelflnavir; EFV, Efavirenz; NVP, nevirapine; TNF, Tenofovir; Lop, lopinavir; Rit, Ritonavir; LPV (KAL), Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\nThe third group consists of twenty-one patients (12 male, 9 female). They started HAART during the chronic phase of infection defined as suggested by the guidelines [10]. In particular, they started HAART about 3.5 years after treatment diagnosis (see Table 3). Their CD4 count at initiation was 400 ± 150 per microlitre of plasma. The range of calendar year for starting HAART among those patients was 1998 ± 3. All those patients underwent a therapy cycle for 4 ± 2 years and remained off HAART for about 41 weeks. The Ethical Committee of the \"L. Spallanzani\" Institute approved the study and the patients gave a written informed consent.\nLate subjects with deferred treatment of acute HIV-1 infection.\nClinical information about the twenty-one patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART during chronic phase of infection.\n†AZT, Retrovir (Zidovudine); 3TC, Epivir (Lamivudine); D4T, Zerit (Staduvine); SQV, Saquinvir (Invirase); Zalcitabina (ddC, Hivid); Rit, Ritonavir (Norvir); ddl, Didanosine (Videx); IDV, Indinavir (Crix-ivan); EFV, Efavirenz; NVP, Nevirapine (Viramune); LPV (Lop), Lopinavir; ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. 
*CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\n[SUBTITLE] Plasma HIV-1 determination [SUBSECTION] Plasma HIV-1 RNA levels were determined by a second-generation assay based on nucleic acid sequence based amplification (NASBA), for samples collected until 2001 and by the branched-chain DNA assay (Versant HIV RNA test, Version 3.0, lower limit of quantification 50 copies/ml; Bayer Diagnostics, Milan, Italy) from 2001 until 2008.\nPlasma HIV-1 RNA levels were determined by a second-generation assay based on nucleic acid sequence based amplification (NASBA), for samples collected until 2001 and by the branched-chain DNA assay (Versant HIV RNA test, Version 3.0, lower limit of quantification 50 copies/ml; Bayer Diagnostics, Milan, Italy) from 2001 until 2008.\n[SUBTITLE] Computational model [SUBSECTION] The current version of the model we employ derives from an early simulator that has been quite extensively described in previous publications [11,12]. Recently it has been specialized to simulate the HIV-1 infection [6] and the effects of antiretroviral therapy [4].\nBriefly, it resorts to bit strings to represent \"binding sites\" of cells and molecules, as for example lymphocyte receptors, MHC, antigen peptides and epitopes, immunocomplexes, etc. [13]. The model includes the major classes of cells of the lymphoid lineage (T helper lymphocytes, cytotoxic T lymphocytes, B lymphocytes and antibody-producer plasma cells) and some of the myeloid lineage (macrophages and dendritic cells). These entities are individually represented. In contrast to cells, cytokines like interleukin-2 are represented in terms of concentrations and their dynamics described by a parabolic partial differential equation plus a degradation term accounting for the finite half-life [5,14]. Modeling features of the HIV infection include HIV replication inside infected lymphocytes, T production impairment; specific response against HIV strains and HIV mutation.\nThe simulated life cycle of the virus is represented by the following stages: 1) the virus infects CD4+ T cells, macrophages, dendritic cells; 2) reverse transcriptase copies the viral single stranded RNA genome into a double-stranded viral DNA. The viral DNA is then integrated into the host chromosomal DNA; 3) the virus remains at rest until an event activates the transcription; 4) the replicating virus buds from the cell membrane. Fully assembled virions are then able to infect other cells to restart the life cycle. HAAR effects are modeled as follows: Reverse transcriptase inhibitors block reverse transcriptase enzymatic functions and avoid completion of synthesis of the double-stranded viral DNA thus preventing HIV-1 from replicating (i.e., it prevents the virus in stage 1 from reaching stage 2); Protease inhibitors prevent viral replication by inhibiting the activity of HIV-1 protease, an enzyme used by the virus to cleave nascent proteins for final assembly of new virions (i.e., it prevents virus assembly in stage 4). Further details and parameter settings of the simulations can be found in the Additional file 1.\nFor what concerns the setting of the parameters related to the therapy, we performed computer simulations in which we fixed the immunological parameters at the time of therapy initiation on the basis of the average values measured in patients in vivo: 5.8 ± 0.2 RNA copies/ml (in logarithmic scale), 870 ± 50 CD4 cells/μl and 430 ± 50 CD8 cells/μl. For all simulations we applied a one-year course of HAART. 
Further details on the tuning of the simulation parameters can be found in the Additional file 2.", "We analyzed the results of clinical studies performed at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome.\nA first group of eleven patients (9 male and 2 female) were diagnosed HIV-1 positive between year 1998 and 2006. All patients initiated HAART within 14 days from diagnosis, during the very early phase of infection (see Table 1). The very early phase was defined as having a negative or indeterminate western blot for HIV-1 antibodies in combination with a positive test for either p24 antigen or a detectable HIV-1 RNA concentration. 
Those patients were treated with zidovudine/lamivudine (CBV) in combination with either the reverse transcriptase inhibitor efavirenz (EFZ) or one protease inhibitor lopinavir/ritonavir (KAL) or indinavir (IDV). Because anaemia and neutropenia were diagnosed, in two cases CBV has been substituted with lamivudine (3TC) and staduvine (D4T). All those patients underwent a therapy cycle for 2 ± 1 years and remained off HAART for about 48 weeks.\nVery early subjects with an immediate treatment before seroconversion.\nClinical information about the eleven patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†CBV, Combivir (AZT Zidovudine plus 3TC Lamivudine); IDV, Indinavir; EFZ, Efavirenz; KAL, Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma. ** 4 weeks after treatment interruption.\nThe second group is made up by twenty-two patients (21 male and 1 female) enrolled in the program between year 1998 and 2005. Patients in this group underwent HAART during the early phase of HIV-1 infection. In particular, they started HAART about 20 days after treatment diagnosis (see Table 2). Early patients were defined as having documented seronegative HIV-1 antibody test within the previous 6 months; acute symptomatic seroconversion illness; evolving HIV-specific antibody response by ELISA; positive HIV-DNA PCR in PBMC. Those patients were treated with three different drugs (in the majority of cases zidovudine (AZT) plus 3TC plus a protease inhibitor. Further details can be found in Table One of [4]. All those patients underwent a therapy cycle for 3 ± 1 years and remained o HAART for about 88 weeks.\nEarly subjects with an immediate treatment of acute HIV-1 infection.\nClinical information about the twenty-two patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. All subjects received HAART within six months from primary infection.\n†D4T, staduvine; 3TC, lamivudine; IDV, Indinavir; AZT, Zidovudine; NFV, nelflnavir; EFV, Efavirenz; NVP, nevirapine; TNF, Tenofovir; Lop, lopinavir; Rit, Ritonavir; LPV (KAL), Kaletra (lopinavir/ritonavir). ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.\nThe third group consists of twenty-one patients (12 male, 9 female). They started HAART during the chronic phase of infection defined as suggested by the guidelines [10]. In particular, they started HAART about 3.5 years after treatment diagnosis (see Table 3). Their CD4 count at initiation was 400 ± 150 per microlitre of plasma. The range of calendar year for starting HAART among those patients was 1998 ± 3. All those patients underwent a therapy cycle for 4 ± 2 years and remained off HAART for about 41 weeks. The Ethical Committee of the \"L. Spallanzani\" Institute approved the study and the patients gave a written informed consent.\nLate subjects with deferred treatment of acute HIV-1 infection.\nClinical information about the twenty-one patients selected at the Clinical Department of the National Institute for Infectious Disease \"L. Spallanzani\" in Rome. 
All subjects received HAART during chronic phase of infection.\n†AZT, Retrovir (Zidovudine); 3TC, Epivir (Lamivudine); D4T, Zerit (Staduvine); SQV, Saquinvir (Invirase); Zalcitabina (ddC, Hivid); Rit, Ritonavir (Norvir); ddl, Didanosine (Videx); IDV, Indinavir (Crix-ivan); EFV, Efavirenz; NVP, Nevirapine (Viramune); LPV (Lop), Lopinavir; ‡ Days elapsed from diagnosis (enrollment) to initiation of HAART. *CD4 and CD8 are per microlitre of plasma, viremia is per millilitre of plasma.", "Plasma HIV-1 RNA levels were determined by a second-generation assay based on nucleic acid sequence based amplification (NASBA), for samples collected until 2001 and by the branched-chain DNA assay (Versant HIV RNA test, Version 3.0, lower limit of quantification 50 copies/ml; Bayer Diagnostics, Milan, Italy) from 2001 until 2008.", "The current version of the model we employ derives from an early simulator that has been quite extensively described in previous publications [11,12]. Recently it has been specialized to simulate the HIV-1 infection [6] and the effects of antiretroviral therapy [4].\nBriefly, it resorts to bit strings to represent \"binding sites\" of cells and molecules, as for example lymphocyte receptors, MHC, antigen peptides and epitopes, immunocomplexes, etc. [13]. The model includes the major classes of cells of the lymphoid lineage (T helper lymphocytes, cytotoxic T lymphocytes, B lymphocytes and antibody-producer plasma cells) and some of the myeloid lineage (macrophages and dendritic cells). These entities are individually represented. In contrast to cells, cytokines like interleukin-2 are represented in terms of concentrations and their dynamics described by a parabolic partial differential equation plus a degradation term accounting for the finite half-life [5,14]. Modeling features of the HIV infection include HIV replication inside infected lymphocytes, T production impairment; specific response against HIV strains and HIV mutation.\nThe simulated life cycle of the virus is represented by the following stages: 1) the virus infects CD4+ T cells, macrophages, dendritic cells; 2) reverse transcriptase copies the viral single stranded RNA genome into a double-stranded viral DNA. The viral DNA is then integrated into the host chromosomal DNA; 3) the virus remains at rest until an event activates the transcription; 4) the replicating virus buds from the cell membrane. Fully assembled virions are then able to infect other cells to restart the life cycle. HAAR effects are modeled as follows: Reverse transcriptase inhibitors block reverse transcriptase enzymatic functions and avoid completion of synthesis of the double-stranded viral DNA thus preventing HIV-1 from replicating (i.e., it prevents the virus in stage 1 from reaching stage 2); Protease inhibitors prevent viral replication by inhibiting the activity of HIV-1 protease, an enzyme used by the virus to cleave nascent proteins for final assembly of new virions (i.e., it prevents virus assembly in stage 4). Further details and parameter settings of the simulations can be found in the Additional file 1.\nFor what concerns the setting of the parameters related to the therapy, we performed computer simulations in which we fixed the immunological parameters at the time of therapy initiation on the basis of the average values measured in patients in vivo: 5.8 ± 0.2 RNA copies/ml (in logarithmic scale), 870 ± 50 CD4 cells/μl and 430 ± 50 CD8 cells/μl. For all simulations we applied a one-year course of HAART. 
Further details on the tuning of the simulation parameters can be found in the Additional file 2.", "We analyze virological data from HIV patients treated during the very early, early and late phase of infection and compare them with computer simulations.\nIn Figure 1 clinical data of all three analyzed groups is shown altogether. In a point-by-point comparison we find no statistical difference in viral rebound between early and late-treated patients (P ≥ 0.05, Mann-Whitney U two-tailed test) confirming the results of [4]. In addition, we observe that a difference does exist for very early initiation of therapy (P < 0.05, Mann-Whitney U two-tailed test).\nPlasma HIV-1 RNA load. The mean plasma HIV-1 RNA load versus time in weeks after interruption of HAART for patients classified in late (21 subjects), early (22 subjects) and very early (11 subjects) groups. Error bars show standard deviations. The features of the three clinical settings are given in section Patients and Methods.\nIn the present work we extend the simulations of [4] to include the new clinical settings corresponding to a very early initiation of therapy. In particular, the very early simulation settings correspond to a beginning of the therapy within the first week whereas the late settings correspond to initiating therapy between week five and six from infection.\nFigure 2 summarizes data of virological rebound (averages) after therapy interruption at different time points (4, 8 and 24 weeks) for very early and late patients for both clinical (empty boxes) and simulation data (filled boxes). Firstly, the figure shows that clinical and simulation data are in good agreement (differences are not statistically significant: P ≥ 0.05, Mann-Whitney U two-tailed test). Secondly, the difference in plasma HIV-1 RNA (copies/ml) between very early and late-treated patients decreases with increasing time from therapy interruption. Panel (a) shows a difference of about two logs with panel (b) for both clinical data and simulations. These differences vanish after 24 weeks from therapy interruption (cfr. panel (e) and (f)). The overall message is that a delay in the initiation of therapy reduces the chances of maintaining a therapy effect at discontinuation.\nVirological rebound. Virological rebound after 4 weeks (top), 8 weeks (middle) and 24 weeks (bottom) from therapy interruption for two groups: very early (green boxes) that started HAART before seroconversion and late (blue boxes) that started HAART during chronic phase of primary HIV-1 infection. Filled boxes represent in silico data (resulting from three thousands runs) whereas empty boxes correspond to in vivo data. We have calculated the Mann-Whitney U test statistics for assessing whether the two independent samples (in silico and in vivo) come from the same distribution. In all cases we did not find a significant difference (P ≥ 0.05, Mann-Whitney U two-tailed test). Black lines indicate standard deviations. Information on parameter settings for the simulations can be found in the Additional file 1.\nIn order to provide a more precise estimate of the time \"limit\" beyond which the benefit of an early initiation of therapy vanishes, we use the simulation to investigate the influence of HAART initiation time (ts) on the viral rebound. The corresponding results are shown in Figure 3. The virological rebound at one week after therapy interruption as a function of ts is presented. 
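For readers who want to reproduce the kind of group comparison used above (very early versus late-treated patients, or in vivo versus in silico rebound values), the sketch below applies a two-sided Mann-Whitney U test to two independent samples. The numbers in it are invented placeholders expressed in log10 HIV-1 RNA copies/ml, not the patient or simulation data from this study.

```python
# Sketch of the two-sided Mann-Whitney U comparison described above; the arrays
# are made-up placeholder rebound values (log10 copies/ml), not study data.
from scipy.stats import mannwhitneyu

very_early = [1.9, 2.3, 2.8, 1.7, 2.5, 3.0, 2.1]   # hypothetical rebound values
late       = [4.2, 4.8, 3.9, 5.1, 4.5, 4.0, 4.7]

u_stat, p_value = mannwhitneyu(very_early, late, alternative="two-sided")
print(f"U = {u_stat:.1f}, two-tailed P = {p_value:.4f}")
if p_value < 0.05:
    print("reject the null hypothesis that the two samples come from the same distribution")
```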
We observe that there are two regimens, one for ts < 20 days and one for ts > 30 days, corresponding to what clinicians call, respectively, best controllers (with undetectable HIV RNA levels) and rebounders (whose HIV viremia returns, approximately, to the pre-HAART level).\nVirological rebound after 1 week. Virological rebound at 1 week after therapy interruption for different starting time points of HAART (ts). The red dots represent the results of thousands of simulations and the fitting line is given by the Richards' curve in equation 1. Standard deviation is about 0.1 log10(copies/ml) for all points. The inset plot is a zoom for ts < 30 days. Parameters of the fitting curve are: a = 1.52, k = 3.79, d = 0.12, and ts* = 23.50.\nThe points in Figure 3 are well fitted by a generalized logistic function (i.e., the Richards' curve [15]) describing the growth of viremia as a function of ts,\n\nV(t_s) = a + \frac{k - a}{1 + e^{-d\,(t_s - t_s^{*})}},\n\nwhere the parameter k is the carrying capacity or upper asymptote, a is the lower asymptote, d is the growth rate, and ts* is the time of maximum growth. By moving the time of the measurement beyond one week after therapy interruption, the resulting data still fit the same V(ts) but with a greater a, a smaller d and a greater ts*. In particular, in the limit of d going to zero V(ts) tends to (a + k)/2, which may lead to the deceiving conclusion that there is no window of opportunity because the viral rebound would be independent of ts.\nWith respect to ts*, the value of ~23.5 days points to the early inflammation as a critical phase of the disease. To bring this facet into focus, we compare two simulated \"markers\" of the inflammation state in untreated (control case), very early and late-treated simulated patients (see Figure 4). These virtual markers are given by the cell counts of active macrophages (a) and of dendritic cells presenting viral proteins on class II MHC molecules (b). We observe that the late-treated case is comparable to the control case (untreated), whereas the very early case stands on its own. This observation suggests that it is the activation of the immune system through the set-up of an inflammatory state that is to be blamed for the increased viral rebound for ts > ts*.\nInflammatory response. Cell counts of active macrophages (a) and dendritic cells presenting viral proteins on class II MHC molecules (b) show that, in the simulation, the late-treated case is comparable to the control case (untreated) whereas the very early case stands on its own. This suggests that the set-up of an inflammatory state affects the viral rebound at therapy discontinuation. Counts are taken at week 8 for all groups. Therapy for the very early group started within the first week and for the late group at about week 6.\nFigure 4 shows clearly that very early initiation of the treatment can down-regulate immune activation, hence limiting viral replication and spread. Interestingly, this view is supported by the observation that HIV triggers immune activation directly (e.g., HIV gene products can induce the activation of lymphocytes and macrophages as well as the production of pro-inflammatory cytokines and chemokines [2]) or indirectly (e.g., sustained antigen-mediated immune activation occurs in HIV-1-infected patients due also to other viruses such as cytomegalovirus or Epstein-Barr virus [2]). 
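As a brief aside on equation 1 above, the sketch below shows how the Richards' curve can be fitted in practice with a generic least-squares routine. The rebound-versus-ts points are synthetic placeholders generated from the reported fit (a = 1.52, k = 3.79, d = 0.12, ts* = 23.5) plus noise; this is not the fitting procedure or the data actually used in the study.

```python
# Fit the generalized logistic (Richards') curve of equation 1,
# V(ts) = a + (k - a) / (1 + exp(-d (ts - ts*))), to synthetic placeholder points.
import numpy as np
from scipy.optimize import curve_fit

def richards(ts, a, k, d, ts_star):
    return a + (k - a) / (1.0 + np.exp(-d * (ts - ts_star)))

ts = np.linspace(0, 80, 41)
rng = np.random.default_rng(1)
v_obs = richards(ts, 1.52, 3.79, 0.12, 23.5) + rng.normal(0, 0.1, ts.size)

popt, pcov = curve_fit(richards, ts, v_obs, p0=[1.0, 4.0, 0.1, 20.0])
perr = np.sqrt(np.diag(pcov))
for name, val, err in zip(["a", "k", "d", "ts*"], popt, perr):
    print(f"{name:3s} = {val:6.2f} +/- {err:.2f}")
```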
In both case, the result is a high level of pro-inflammatory cytokines, such as tumor necrosis factor alpha, interleukin 6 and interleukin 1 beta, right from the early stages of HIV-1 infection [2].", "Recent analysis (performed by Fiebig et al. [16]) of samples from individuals that have been infected by HIV-1 has revealed that patients can be categorized into six stages on the basis of a sequential gain in positive HIV-1 clinical diagnostic assays (viral RNA measured by PCR, p24 and p31 viral antigens measured by enzyme-linked immunosorbent assay (ELISA), HIV-1-specific antibody detected by ELISA and HIV-1-specific antibodies detected by western blot, [16]). Patients progress from acute to early chronic infection at the end of stage V, approximately 100 days following infection, as the plasma viral load begins to stabilize.\nWith respect to the study conducted by clinical data analysis and computer simulation described so far, we identify three regimens, as highlighted in Figure 3. These can be paralleled to Fiebig et al. stages [16]. In particular, we observe that patients treated with HAART in very early stages of the infection (stage I-III) are likely to better control the viremia after treatment interruption [3]. If therapy starts in the acute phase (stage V-VI) then the action of the drug foils the immune response and, as a consequence, at the end of the therapeutic period, the virus rebounds undisturbed.\nThese considerations are summarized in Figure 5 where we draw a schematic picture of the importance of an early initiation of HAART with respect to the progression of HIV markers according to Fiebig's et al. stages. In particular we identified the \"window of opportunity\" corresponding to stages I-III, that is, the first three weeks from primary HIV-1 infection. Patients receiving therapy in this narrow period are likely to turn out to be best controllers. Probably, the massive immune activation in the early stage of the disease favors the virus, as it finds more host target cells to exploit for replication. In point of fact, the ensuing massive depletion of CD4+ T cells in mucosal lymphoid tissues, can result in the disruption of the mucosal barrier in the gut. This barrier prevents the translocation of the intestinal flora from the gut to the systemic immune system restricting it to the lamina propia and the mesenteric lymph nodes [2]. HIV-1 infection is indeed associated with a significant increase of plasma lipopolysaccharide levels that is an indicator of microbial translocation, directly correlated with measures of immune activation.\nPhase diagram. Schematic diagram of the importance of an early initiation of HAART with respect to the progression of HIV markers according to the staging in [16].", "A number of studies indicate that interfering with HIV replication by starting the therapy in the early phases of the infection could have a deep impact on the whole disease course. However, HAART is costly, it is onerous for both patient and health care provider, and often brings adverse effects. Its clinical benefit must therefore be weighed against its burden.\nIn the present study, we resorted to a computer model to study the dynamics of the plasma viral load after prolonged treatment interruption in two groups of in silico patients: those who initiate HAART very early and those who start it lately. We evaluated the model comparing the results to clinical data. 
We found that an opportunity time-window exists for the initiation of HAART (roughly within three weeks before the establishment of viral reservoirs), in which the therapy can control viral replication, preventing generalized immune activation and extensive CD4+ T cell depletion.", "An educational version of the immune system simulator is available on our website:\n- Name: C-ImmSim\n- Home page: http://www.iac.rm.cnr.it/~filippo/C-ImmSim.html\n- Operating system(s): Linux, Unix Mac OS X, Windows\n- Programming language: C\n- Licence: C-ImmSim is available under a LICENSE AGREEMENT that needs to be signed: http://www.iac.rm.cnr.it/~filippo/how-to-get-cimmsim_files/LicenseAgreement.pdf", "The authors declare that they have no competing interests.", "FC and PP designed and performed research; all authors wrote the paper. FM and GDO provided the clinical data. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1471-2334/11/56/prepub\n", "Mathematical model details. This file lists all the interactions between cells and molecules considered in the model and an accurate description of the parameter setting.\nClick here for file\nParameters tuning for the simulation of HIV infected virtual patients. This file reports details of the parameter setting and tuning.\nClick here for file" ]
[ null, "methods", null, null, null, null, null, null, null, null, null, null, "supplementary-material" ]
[]
Use of 16S ribosomal RNA gene analyses to characterize the bacterial signature associated with poor oral health in West Virginia.
21362199
West Virginia has the worst oral health in the United States, but the reasons for this are unclear. This pilot study explored the etiology of this disparity using culture-independent analyses to identify bacterial species associated with oral disease.
BACKGROUND
Bacteria in subgingival plaque samples from twelve participants in two independent West Virginia dental-related studies were characterized using 16S rRNA gene sequencing and Human Oral Microbe Identification Microarray (HOMIM) analysis. Unifrac analysis was used to characterize phylogenetic differences between bacterial communities obtained from plaque of participants with low or high oral disease, which was further evaluated using clustering and Principal Coordinate Analysis.
METHODS
Statistically different bacterial signatures (P<0.001) were identified in subgingival plaque of individuals with low or high oral disease in West Virginia based on 16S rRNA gene sequencing. Low disease contained a high frequency of Veillonella and Streptococcus, with a moderate number of Capnocytophaga. High disease exhibited substantially increased bacterial diversity and included a large proportion of Clostridiales cluster bacteria (Selenomonas, Eubacterium, Dialister). Phylogenetic trees constructed using 16S rRNA gene sequencing revealed that Clostridiales were repeated colonizers in plaque associated with high oral disease, providing evidence that the oral environment is somehow influencing the bacterial signature linked to disease.
RESULTS
Culture-independent analyses identified an atypical bacterial signature associated with high oral disease in West Virginians and provided evidence that the oral environment influenced this signature. Both findings provide insight into the etiology of the oral disparity in West Virginia.
CONCLUSIONS
[ "Adult", "Aged", "Aged, 80 and over", "Bacterial Typing Techniques", "Cluster Analysis", "DNA, Bacterial", "Dental Plaque", "Humans", "Middle Aged", "Mouth Diseases", "Oligonucleotide Array Sequence Analysis", "Phylogeny", "Pilot Projects", "Principal Component Analysis", "RNA, Ribosomal, 16S", "Tooth Diseases", "West Virginia", "Young Adult" ]
3061962
null
null
Methods
[SUBTITLE] Subject populations [SUBSECTION] Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. Plaque from an older population was obtained through the Oral Health Disparities Among Elders With and Without Cognitive Impairment (Cognitive study), performed in collaboration with the West Virginia University School of Dentistry and the Center on Aging. We chose to include subjects from two independent West Virginia population pools (median age 23-48 or age >70) in our study to help extend an understanding of the relevance of our findings to a broad sample of the West Virginian population. Being funded as a pilot project, relatively few subjects from each group were able to be analyzed. All procedures were performed using Institutional Review Board approved protocols. Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. Plaque from an older population was obtained through the Oral Health Disparities Among Elders With and Without Cognitive Impairment (Cognitive study), performed in collaboration with the West Virginia University School of Dentistry and the Center on Aging. We chose to include subjects from two independent West Virginia population pools (median age 23-48 or age >70) in our study to help extend an understanding of the relevance of our findings to a broad sample of the West Virginian population. Being funded as a pilot project, relatively few subjects from each group were able to be analyzed. All procedures were performed using Institutional Review Board approved protocols. [SUBTITLE] Subject evaluations [SUBSECTION] Criteria for oral health evaluation, methods of plaque collection from participants, and calibration of researchers in the COHRA project have been previously described [12]. Our study examined subgingival plaque from seven COHRA participants, which was obtained from four first molars (#3, 14, 19 and 30). These same sites were assessed for probing depth (PD), recession and bleeding on probing (BOP), as summarized in Table 1. Caries assessment was based on the coronal tooth surfaces, and teeth were classified as sound, decayed, filled or missing. Percent sound teeth for each participant is reported in Table 1. The oral status of COHRA study subjects was determined based on periodontal examinations (bordered by dashed lines in Table 1): low disease defined as <3.5 mm PD, 0-25% BOP sites; high disease defined as >4.5 mm PD, 100% BOP sites. Subgingival plaque was sampled using a curette, suspended in 100-500 μl TE (10 mM Tris, pH 7.5, 1 mM EDTA) containing 20% glycerol and stored at -70°C until processing. Multiple specimens from individual participants were pooled for analyses. COHRA study clinical evaluations 1 Dashed lines border clinical parameters used to determine the oral status of COHRA study participants. 
Cognitive study participants were selected based on: 1) age 70 years and older, 2) resident of West Virginia, 3) community-living, and 4) dentate (having at least four natural teeth). Oral evaluations were performed by calibrated researchers using guidelines and protocols from the National Health and Nutrition Examination Survey (NHANES IV) [13]. Subgingival plaque samples were collected using sterile periodontal curettes from six sites as follows: the buccal surface of the most anterior molar in each quadrant, and the buccal surface of #11 and #31. Plaque samples were stored and pooled for analyses as above. Criteria for periodontal evaluation in the Cognitive study included PD, gingivitis and calculus. Caries evaluation included type of dentures and percent sound, missing, filled, decayed or crowned teeth. Both evaluations are reported in Table 2. The oral status of Cognitive study subjects was determined based on caries examinations (bordered by dashed lines in Table 2): low disease defined as >60% sound/filled teeth present; high disease defined as >40% teeth missing or decayed. Cognitive study clinical evaluations 1No periodontal examination data was acquired for DAA. 2Dashed lines border clinical parameters used to determine the oral status of Cognitive study participants. Criteria for oral health evaluation, methods of plaque collection from participants, and calibration of researchers in the COHRA project have been previously described [12]. Our study examined subgingival plaque from seven COHRA participants, which was obtained from four first molars (#3, 14, 19 and 30). These same sites were assessed for probing depth (PD), recession and bleeding on probing (BOP), as summarized in Table 1. Caries assessment was based on the coronal tooth surfaces, and teeth were classified as sound, decayed, filled or missing. Percent sound teeth for each participant is reported in Table 1. The oral status of COHRA study subjects was determined based on periodontal examinations (bordered by dashed lines in Table 1): low disease defined as <3.5 mm PD, 0-25% BOP sites; high disease defined as >4.5 mm PD, 100% BOP sites. Subgingival plaque was sampled using a curette, suspended in 100-500 μl TE (10 mM Tris, pH 7.5, 1 mM EDTA) containing 20% glycerol and stored at -70°C until processing. Multiple specimens from individual participants were pooled for analyses. COHRA study clinical evaluations 1 Dashed lines border clinical parameters used to determine the oral status of COHRA study participants. Cognitive study participants were selected based on: 1) age 70 years and older, 2) resident of West Virginia, 3) community-living, and 4) dentate (having at least four natural teeth). Oral evaluations were performed by calibrated researchers using guidelines and protocols from the National Health and Nutrition Examination Survey (NHANES IV) [13]. Subgingival plaque samples were collected using sterile periodontal curettes from six sites as follows: the buccal surface of the most anterior molar in each quadrant, and the buccal surface of #11 and #31. Plaque samples were stored and pooled for analyses as above. Criteria for periodontal evaluation in the Cognitive study included PD, gingivitis and calculus. Caries evaluation included type of dentures and percent sound, missing, filled, decayed or crowned teeth. Both evaluations are reported in Table 2. 
The oral status of Cognitive study subjects was determined based on caries examinations (bordered by dashed lines in Table 2): low disease defined as >60% sound/filled teeth present; high disease defined as >40% teeth missing or decayed. Cognitive study clinical evaluations 1No periodontal examination data was acquired for DAA. 2Dashed lines border clinical parameters used to determine the oral status of Cognitive study participants. [SUBTITLE] DNA extraction and 16S rRNA gene sequence analysis [SUBSECTION] DNA was extracted and purified from subgingival plaque using the UltraClean Soil DNA Isolation Kit (MO Bio Labs, Carlsbad, CA). The 16S rRNA gene was PCR amplified using the universal bacterial 16S rRNA primers (forward, 5'-GAGTTTGATYMTGGCTCAG, reverse, 5'-GAAGGAGGTGWTCCADCC [14]). Each PCR reaction contained 1 μl purified DNA, 0.4 μM universal forward and reverse primers, 5 μl 10X platinum PCR buffer, 1.5 mM MgSO4, 0.2 mM dNTPs and 0.5 μl Platinum TAQ DNA Polymerase High Fidelity (Invitrogen, Life Technologies Corp, Carlsbad, CA). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 60°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products were analyzed by 0.8% agarose gel electrophoresis, and reactions yielding products of ~1500 bp were cloned using the TOPO TA Cloning Kit for Sequencing (Invitrogen). Ligation products were electroporated into Escherichia coli (TOP10 Chemically Competent cells, Invitrogen), and transformants were incubated in 250 μl SOC medium (2% tryptone, 0.5% yeast extract, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4, 20 mM glucose) for 1 hour at 37°C, prior to plating on LB agar containing 50 μg/ml kanamycin and a 25 μl overlay of 2% X-gal. Following incubation overnight, ~100 colonies containing inserts (white colonies) were isolated from each sample and cultured at 37°C overnight in 96 well plates containing LB broth with 50 μg/ml kanamycin. A 2 μl volume of bacterial culture from each well was PCR amplified using M13 primers (forward, 5'-TGTAAAACGACGGCCAGT, reverse 5'-CAGGAAACAGCTATGAC). Reactions contained 0.2 μM of each primer, 5 μl 10X Qiagen PCR buffer (Qiagen, Valencia, CA), 0.2 mM dNTPs and 0.2 μl Taq DNA polymerase (Qiagen). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 52°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products from each well were examined by electrophoresis, and products of the appropriate size were sequenced by a commercial facility (SeqWright, Houston, TX). DNA was extracted and purified from subgingival plaque using the UltraClean Soil DNA Isolation Kit (MO Bio Labs, Carlsbad, CA). The 16S rRNA gene was PCR amplified using the universal bacterial 16S rRNA primers (forward, 5'-GAGTTTGATYMTGGCTCAG, reverse, 5'-GAAGGAGGTGWTCCADCC [14]). Each PCR reaction contained 1 μl purified DNA, 0.4 μM universal forward and reverse primers, 5 μl 10X platinum PCR buffer, 1.5 mM MgSO4, 0.2 mM dNTPs and 0.5 μl Platinum TAQ DNA Polymerase High Fidelity (Invitrogen, Life Technologies Corp, Carlsbad, CA). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 60°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products were analyzed by 0.8% agarose gel electrophoresis, and reactions yielding products of ~1500 bp were cloned using the TOPO TA Cloning Kit for Sequencing (Invitrogen). 
Ligation products were electroporated into Escherichia coli (TOP10 Chemically Competent cells, Invitrogen), and transformants were incubated in 250 μl SOC medium (2% tryptone, 0.5% yeast extract, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4, 20 mM glucose) for 1 hour at 37°C, prior to plating on LB agar containing 50 μg/ml kanamycin and a 25 μl overlay of 2% X-gal. Following incubation overnight, ~100 colonies containing inserts (white colonies) were isolated from each sample and cultured at 37°C overnight in 96 well plates containing LB broth with 50 μg/ml kanamycin. A 2 μl volume of bacterial culture from each well was PCR amplified using M13 primers (forward, 5'-TGTAAAACGACGGCCAGT, reverse 5'-CAGGAAACAGCTATGAC). Reactions contained 0.2 μM of each primer, 5 μl 10X Qiagen PCR buffer (Qiagen, Valencia, CA), 0.2 mM dNTPs and 0.2 μl Taq DNA polymerase (Qiagen). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 52°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products from each well were examined by electrophoresis, and products of the appropriate size were sequenced by a commercial facility (SeqWright, Houston, TX). [SUBTITLE] 16S rRNA gene sequence analysis [SUBSECTION] Each DNA sequence was scanned for a single segment of the original primer sequence (ATCAAACT) between bp 440 and 520 to identify the 16S rRNA gene and to help exclude chimeras. The initial part of each sequence containing uncalled bases (N) was removed, the distal part (vector sequence) was trimmed at the primer, and the sequence was reversed to standard orientation. Nucleotide sequences generated in this study have been deposited in GenBank, accession numbers HQ894465 - HQ895588. Sequences were classified by BLASTN analysis against both a local database and the GenBank non-redundant (nr) database. The local database was assembled in an interative process: if a new experimental sequence did not match with >98% identity, a new database was assembled adding matching type sequences obtained from GenBank and the Ribosomal Database Project (RDP) [15]. The current local database contains 375 sequences organized into 122 'groups'. Each group contains closely related sequences and has been validated to be non-overlapping by BLAST analysis of individual groups against the entire database. This approach provided a level of detail that was informative for classifying bacterial sequences. Software used in sequence analyses included: 1) BLAST, using the NCBI server [16] with results formatted as XML; 2) UniFrac, using its server [17,18]; 3) phylogenetic tree construction, using the RDP server; 4) local BLAST, using the 'legacy' executable blastall from NCBI [19]. Other software run locally included: R [20] and the Analysis of Phylogenetics and Evolution (APE) package for constructing and plotting phylogenetic trees and plotting the Principal Coordinate Analysis (PCoA); ClustalW2 [21] and Muscle [22] for multiple sequence alignment; and Python [23] to automate steps in analyses and produce the heatmaps (generated using matplotlib). Each DNA sequence was scanned for a single segment of the original primer sequence (ATCAAACT) between bp 440 and 520 to identify the 16S rRNA gene and to help exclude chimeras. The initial part of each sequence containing uncalled bases (N) was removed, the distal part (vector sequence) was trimmed at the primer, and the sequence was reversed to standard orientation. 
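The read clean-up steps just described (primer-segment check in the bp 440-520 window, removal of leading uncalled bases, trimming of the distal vector sequence at the primer, and re-orientation) can be sketched roughly as follows. The trimming rules are one plausible interpretation of the text, "reversed to standard orientation" is implemented here as a reverse complement, and the example read is obviously synthetic.

```python
# Rough sketch of the 16S read clean-up described above; the exact trimming rules
# are an interpretation of the text, not the authors' script.
COMPLEMENT = str.maketrans("ACGTN", "TGCAN")

def clean_read(seq):
    seq = seq.upper()
    hit = seq.find("ATCAAACT", 440, 520)         # primer segment expected at bp 440-520
    if hit == -1:
        return None                              # not 16S, or a likely chimera -> discard
    seq = seq.lstrip("N")                        # drop the leading run of uncalled bases
    end = seq.rfind("ATCAAACT")                  # trim distal (vector) side at the primer
    seq = seq[: end + len("ATCAAACT")]
    return seq.translate(COMPLEMENT)[::-1]       # reverse complement into standard orientation

# toy example with synthetic sequence content
raw = "N" * 20 + "ACGT" * 110 + "ATCAAACT" + "ACGT" * 200 + "GATTACA"
cleaned = clean_read(raw)
print(len(raw), "->", len(cleaned) if cleaned else "discarded")
```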
Nucleotide sequences generated in this study have been deposited in GenBank, accession numbers HQ894465 - HQ895588. Sequences were classified by BLASTN analysis against both a local database and the GenBank non-redundant (nr) database. The local database was assembled in an interative process: if a new experimental sequence did not match with >98% identity, a new database was assembled adding matching type sequences obtained from GenBank and the Ribosomal Database Project (RDP) [15]. The current local database contains 375 sequences organized into 122 'groups'. Each group contains closely related sequences and has been validated to be non-overlapping by BLAST analysis of individual groups against the entire database. This approach provided a level of detail that was informative for classifying bacterial sequences. Software used in sequence analyses included: 1) BLAST, using the NCBI server [16] with results formatted as XML; 2) UniFrac, using its server [17,18]; 3) phylogenetic tree construction, using the RDP server; 4) local BLAST, using the 'legacy' executable blastall from NCBI [19]. Other software run locally included: R [20] and the Analysis of Phylogenetics and Evolution (APE) package for constructing and plotting phylogenetic trees and plotting the Principal Coordinate Analysis (PCoA); ClustalW2 [21] and Muscle [22] for multiple sequence alignment; and Python [23] to automate steps in analyses and produce the heatmaps (generated using matplotlib). [SUBTITLE] Human Oral Microbe Identification Microarray (HOMIM) analysis [SUBSECTION] For HOMIM 16S rRNA gene microarray analysis, purified DNA from subgingival plaque samples was sent to the HOMIM Core Facility at the Forsyth Institute (Boston, MA). This facility offers high throughput analysis of ~300 of the most prevalent oral bacterial species and provides a comprehensive report of bacterial profiles within DNA samples [24]. For HOMIM 16S rRNA gene microarray analysis, purified DNA from subgingival plaque samples was sent to the HOMIM Core Facility at the Forsyth Institute (Boston, MA). This facility offers high throughput analysis of ~300 of the most prevalent oral bacterial species and provides a comprehensive report of bacterial profiles within DNA samples [24].
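The heatmap step mentioned above (Python plus matplotlib, cf. Figure 1 of the Results) might look roughly like the following sketch, which converts per-sample clone counts into percentages of clones sequenced and renders them as a colored matrix. The taxa, sample names and counts shown are invented for illustration and are not the study data.

```python
# Minimal sketch of a per-sample heatmap of clone percentages; counts are invented.
import matplotlib
matplotlib.use("Agg")                 # render off-screen; we only save the figure
import matplotlib.pyplot as plt
import numpy as np

taxa = ["Streptococcus", "Veillonella", "Capnocytophaga", "Selenomonas", "Eubacterium"]
samples = ["low_1", "low_2", "high_1", "high_2"]
counts = np.array([[40, 35, 2, 1],     # rows = taxa, columns = samples (hypothetical)
                   [30, 38, 5, 3],
                   [10,  8, 1, 0],
                   [ 0,  0, 25, 30],
                   [ 0,  0, 15, 20]], dtype=float)

percent = 100.0 * counts / counts.sum(axis=0)   # percentage of clones per sample

fig, ax = plt.subplots(figsize=(5, 4))
im = ax.imshow(percent, cmap="viridis", aspect="auto")
ax.set_xticks(range(len(samples)))
ax.set_xticklabels(samples)
ax.set_yticks(range(len(taxa)))
ax.set_yticklabels(taxa)
fig.colorbar(im, ax=ax, label="% of clones sequenced")
fig.tight_layout()
fig.savefig("clone_percentage_heatmap.png", dpi=150)
```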
null
null
null
null
[ "Background", "Subject populations", "Subject evaluations", "DNA extraction and 16S rRNA gene sequence analysis", "16S rRNA gene sequence analysis", "Human Oral Microbe Identification Microarray (HOMIM) analysis", "Results and Discussion", "Identification of bacterial populations in subgingival plaque that differentiate individuals with high or low oral disease in West Virginia", "Statistical analyses of bacterial types identified in subgingival plaque", "Analysis of the bacterial populations in plaque using HOMIM analyses", "Origin of bacterial types in high disease plaque", "Conclusions", "List of Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "West Virginians have the worst oral health in the nation, with almost twice the national average (48.2%) of adults aged 65 or more having all their natural teeth extracted [1]. These statistics become more alarming knowing that infections of the oral cavity have been associated with chronic diseases, such as diabetes, cardiovascular disease and atherosclerosis [2-4]. Neither the origin of poor oral health in West Virginia nor its relationship with systemic disease is understood. Central to this problem is a determination of the microbial populations responsible for oral infections. Historically this has been difficult because of the complexity of the microbiome within oral biofilms, and difficulties in cultivating bacteria obtained from the oral environment.\nBiofilms can play either a protective (probiotic) or pathogenic role in oral health depending upon their microbial composition. Oral biofilms are initiated by colonization of probiotic Gram-positive cocci, primarily streptococci, adhering to the tooth surface, along with coaggregating Actinomyces and Veillonella [5]. Coaggregation is a common property in plaque development, and early colonizing bacteria are bridged through bacteria, such as fusobacteria, to late colonizers. The ecological succession of microbial populations from early colonizing Gram-positive cocci to late colonizing Gram-negative anaerobes of diverse morphotypes leads to a shift in biofilm composition that correlates with the appearance of gingivitis and periodontitis [6]. Specific organisms have been linked with oral diseases. Dental caries occurs as a result of a shift in the biofilm community towards acidogenic and acid-tolerant bacteria, specifically Streptococcus mutans and lactobacilli [7]. In subgingival plaque, Porphyromonas gingivalis, Tannerella forsythia and Aggregatibacter actinomycetemcomitans have been strongly associated with periodontal disease [8,9]. Until recently, associations of microbes with oral disease have been based on in vitro cultivation. As it is now recognized that only about 60% of the species in oral biofilms are cultivable [10], the use of culture-independent analyses has led to a new level of understanding of oral associated microbes [11].\nMolecular analyses of periodontal microflora had not previously been used to examine the bacterial profile of subgingival plaque of West Virginians. The goal of this study was to use 16S rRNA gene analyses to gain insight into the etiology of the oral health disparity observed in this population. In this preliminary study we were able to identify significantly different 16S rRNA bacterial phylogenetic signatures in plaque from individuals having high or low oral disease, and the high disease signature was evident in two independent populations that span a wide range of age groups. Overall we found that communities rich in Veillonella and streptococci shifted to communities rich in Selenomonas and other Clostridiales in association with a decline in oral health, potentially linking an atypical bacterial signature with oral disease in West Virginians. The finding that an atypical bacterial signature may be linked to oral health disparities observed in West Virginia highlights the need for further analyses of bacterial species associated with high and low oral disease in this population in order to understand the origin of this disparity.", "Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. 
Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. Plaque from an older population was obtained through the Oral Health Disparities Among Elders With and Without Cognitive Impairment (Cognitive study), performed in collaboration with the West Virginia University School of Dentistry and the Center on Aging. We chose to include subjects from two independent West Virginia population pools (median age 23-48 or age >70) in our study to help extend an understanding of the relevance of our findings to a broad sample of the West Virginian population. Being funded as a pilot project, relatively few subjects from each group were able to be analyzed. All procedures were performed using Institutional Review Board approved protocols.", "Criteria for oral health evaluation, methods of plaque collection from participants, and calibration of researchers in the COHRA project have been previously described [12]. Our study examined subgingival plaque from seven COHRA participants, which was obtained from four first molars (#3, 14, 19 and 30). These same sites were assessed for probing depth (PD), recession and bleeding on probing (BOP), as summarized in Table 1. Caries assessment was based on the coronal tooth surfaces, and teeth were classified as sound, decayed, filled or missing. Percent sound teeth for each participant is reported in Table 1. The oral status of COHRA study subjects was determined based on periodontal examinations (bordered by dashed lines in Table 1): low disease defined as <3.5 mm PD, 0-25% BOP sites; high disease defined as >4.5 mm PD, 100% BOP sites. Subgingival plaque was sampled using a curette, suspended in 100-500 μl TE (10 mM Tris, pH 7.5, 1 mM EDTA) containing 20% glycerol and stored at -70°C until processing. Multiple specimens from individual participants were pooled for analyses.\nCOHRA study clinical evaluations\n1 Dashed lines border clinical parameters used to determine the oral status of COHRA study participants.\nCognitive study participants were selected based on: 1) age 70 years and older, 2) resident of West Virginia, 3) community-living, and 4) dentate (having at least four natural teeth). Oral evaluations were performed by calibrated researchers using guidelines and protocols from the National Health and Nutrition Examination Survey (NHANES IV) [13]. Subgingival plaque samples were collected using sterile periodontal curettes from six sites as follows: the buccal surface of the most anterior molar in each quadrant, and the buccal surface of #11 and #31. Plaque samples were stored and pooled for analyses as above. Criteria for periodontal evaluation in the Cognitive study included PD, gingivitis and calculus. Caries evaluation included type of dentures and percent sound, missing, filled, decayed or crowned teeth. Both evaluations are reported in Table 2. 
The oral status of Cognitive study subjects was determined based on caries examinations (bordered by dashed lines in Table 2): low disease defined as >60% sound/filled teeth present; high disease defined as >40% teeth missing or decayed.\nCognitive study clinical evaluations\n1No periodontal examination data was acquired for DAA.\n2Dashed lines border clinical parameters used to determine the oral status of Cognitive study participants.", "DNA was extracted and purified from subgingival plaque using the UltraClean Soil DNA Isolation Kit (MO Bio Labs, Carlsbad, CA). The 16S rRNA gene was PCR amplified using the universal bacterial 16S rRNA primers (forward, 5'-GAGTTTGATYMTGGCTCAG, reverse, 5'-GAAGGAGGTGWTCCADCC [14]). Each PCR reaction contained 1 μl purified DNA, 0.4 μM universal forward and reverse primers, 5 μl 10X platinum PCR buffer, 1.5 mM MgSO4, 0.2 mM dNTPs and 0.5 μl Platinum TAQ DNA Polymerase High Fidelity (Invitrogen, Life Technologies Corp, Carlsbad, CA). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 60°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products were analyzed by 0.8% agarose gel electrophoresis, and reactions yielding products of ~1500 bp were cloned using the TOPO TA Cloning Kit for Sequencing (Invitrogen). Ligation products were electroporated into Escherichia coli (TOP10 Chemically Competent cells, Invitrogen), and transformants were incubated in 250 μl SOC medium (2% tryptone, 0.5% yeast extract, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4, 20 mM glucose) for 1 hour at 37°C, prior to plating on LB agar containing 50 μg/ml kanamycin and a 25 μl overlay of 2% X-gal. Following incubation overnight, ~100 colonies containing inserts (white colonies) were isolated from each sample and cultured at 37°C overnight in 96 well plates containing LB broth with 50 μg/ml kanamycin. A 2 μl volume of bacterial culture from each well was PCR amplified using M13 primers (forward, 5'-TGTAAAACGACGGCCAGT, reverse 5'-CAGGAAACAGCTATGAC). Reactions contained 0.2 μM of each primer, 5 μl 10X Qiagen PCR buffer (Qiagen, Valencia, CA), 0.2 mM dNTPs and 0.2 μl Taq DNA polymerase (Qiagen). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 52°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products from each well were examined by electrophoresis, and products of the appropriate size were sequenced by a commercial facility (SeqWright, Houston, TX).", "Each DNA sequence was scanned for a single segment of the original primer sequence (ATCAAACT) between bp 440 and 520 to identify the 16S rRNA gene and to help exclude chimeras. The initial part of each sequence containing uncalled bases (N) was removed, the distal part (vector sequence) was trimmed at the primer, and the sequence was reversed to standard orientation. Nucleotide sequences generated in this study have been deposited in GenBank, accession numbers HQ894465 - HQ895588.\nSequences were classified by BLASTN analysis against both a local database and the GenBank non-redundant (nr) database. The local database was assembled in an interative process: if a new experimental sequence did not match with >98% identity, a new database was assembled adding matching type sequences obtained from GenBank and the Ribosomal Database Project (RDP) [15]. The current local database contains 375 sequences organized into 122 'groups'. 
Each group contains closely related sequences and has been validated to be non-overlapping by BLAST analysis of individual groups against the entire database. This approach provided a level of detail that was informative for classifying bacterial sequences. Software used in sequence analyses included: 1) BLAST, using the NCBI server [16] with results formatted as XML; 2) UniFrac, using its server [17,18]; 3) phylogenetic tree construction, using the RDP server; 4) local BLAST, using the 'legacy' executable blastall from NCBI [19]. Other software run locally included: R [20] and the Analysis of Phylogenetics and Evolution (APE) package for constructing and plotting phylogenetic trees and plotting the Principal Coordinate Analysis (PCoA); ClustalW2 [21] and Muscle [22] for multiple sequence alignment; and Python [23] to automate steps in analyses and produce the heatmaps (generated using matplotlib).", "For HOMIM 16S rRNA gene microarray analysis, purified DNA from subgingival plaque samples was sent to the HOMIM Core Facility at the Forsyth Institute (Boston, MA). This facility offers high throughput analysis of ~300 of the most prevalent oral bacterial species and provides a comprehensive report of bacterial profiles within DNA samples [24].", "[SUBTITLE] Identification of bacterial populations in subgingival plaque that differentiate individuals with high or low oral disease in West Virginia [SUBSECTION] Previous studies found bacterial species in the 'red complex' to be strongly linked to periodontal disease [25]. To examine if these same bacteria were associated with oral disease in West Virginia, we used 16S rRNA gene sequencing to characterize bacteria in subgingival plaque of COHRA participants diagnosed as having low or high oral disease based on periodontal examinations (Table 1, bordered by dashed lines). Analyses were performed in 96-well plate format, and due to the relatively high cost of DNA sequencing, our practical goal was to sequence 96 clones per sample, realizing that we would not be able to detect bacterial types present at low frequency. In actuality the number of clones sequenced per sample ranged from 55 to 133, with low numbers relating to difficulties obtaining high quality sequences for some samples. Sequences obtained were analyzed by BLAST against the GenBank database that includes all known bacterial 16S rRNA gene sequences, and against a local database that facilitated classification of bacterial 'types' based on ≥95% sequence identity. Subsequent evaluation of sequence data using the Human Oral Microbiome Database (HOMD) BLAST server [26] yielded nearly identical results, except for a few changes caused by reclassification of Clostridiales type strains, which affected some assignments to Eubacterium and Lachnospiraceae.\nThe initial phase of sequencing examined 2 low (DB and DC) and 5 high (DF, DL, DG, DI, DA) disease COHRA participants. An additional low disease sample (DM), evaluated based on periodontal criteria, was obtained through the West Virginia University Dental Clinic and served as a non-COHRA low disease control. The percentage of a bacterial type in each sample relative to the total number of clones sequenced (indicated at the bottom of each column) is shown in the heatmap in Figure 1. As previously reported [6], increased bacterial diversity was evident in plaque from individuals diagnosed with high oral disease. 
However, unexpectedly a large number of high disease bacterial types were classified in the Clostridiales cluster (Selenomonas, Eubacterium, Dialister), which were completely absent from low disease plaque. Also notable in high disease was the low frequency of bacterial types traditionally linked to periodontitis (Porphyromonas, Tannerella, Treponema and Aggregatibacter). Plaque from low disease exhibited a distinctly different bacterial population, including mainly Streptococcus and Veillonella with a moderate number of Capnocytophaga. The bacterial diversity in low disease COHRA samples was highly consistent with a recent study, which identified Streptococcus, Veillonella and Capnocytophaga in 100% of the plaque samples obtained from individuals with healthy oral cavities [27]. Thus, while bacteria associated with oral health in West Virginia was highly reflective of other regions in the United States, an atypical pathogenic phylogenetic signature that includes Selenomonas and other Clostridiales cluster bacteria appeared to be associated with the decline in oral health in West Virginia.\nIdentification of bacterial populations in subgingival plaque of West Virginians. Bacterial composition in plaque samples was determined using 16S rRNA gene sequencing in 2 low and 5 high oral disease COHRA participants, ranked based on periodontal exams, and 5 Cognitive study participants, ranked based on caries index. DM is a low periodontal disease control sample obtained through the West Virginia University Dental Clinic. Each numbered box indicates the percentage of clones of the type-specific bacterial 16S rRNA gene relative to the total number of clones sequenced, which is indicated at the bottom of each column. The color of the box reflects observed counts.\nCOHRA study participants ranged in age from 23 to 48, and Cognitive study participants (ages 73 to 93) provided a means of assessing how bacterial types related to tooth status later in life. The periodontal health of the 5 Cognitive study participants included in this study was exceptionally good, as evident in probing depths of 3 mm or less, shown in Table 2. However, as expected for this age, considerable variation was observed in caries health (Table 2, bordered by dashed lines), with DT having the highest percentage of sound teeth, and DAA and DZ having the highest percentage of missing/decayed teeth. Examination of bacterial types in subgingival plaque of DT revealed mainly Veillonella, Streptococcus and Capnocytophaga (Figure 1), as observed for low disease COHRA participants. In comparison, DAA and DZ had bacterial patterns that included increased diversity and species richness of Selenomonas, Eubacterium and Dialister, in addition to many of the other bacterial species observed in high oral disease COHRA participants. These data are consistent with the bacterial signature of aged West Virginians with tooth loss being similar to that of younger West Virginians with periodontal disease.\nPrevious studies found bacterial species in the 'red complex' to be strongly linked to periodontal disease [25]. To examine if these same bacteria were associated with oral disease in West Virginia, we used 16S rRNA gene sequencing to characterize bacteria in subgingival plaque of COHRA participants diagnosed as having low or high oral disease based on periodontal examinations (Table 1, bordered by dashed lines). 
[SUBTITLE] Statistical analyses of bacterial types identified in subgingival plaque [SUBSECTION] Unifrac analysis [17], which takes into account phylogenetic distances, has become the method of choice for analyzing differences between microbial communities and was used in our studies to provide statistical validation of differences in low and high disease plaque environments. The method requires a rooted phylogenetic tree as input, and we used the RDP server to generate the tree, which is especially suited for rRNA alignments as it employs Infernal [28] that is guided by predicted secondary structure. The tree was constructed by the neighbor-joining method [29], using APE in R. It was rooted by including Thermotoga maritima SL7 as an outgroup. The Unifrac algorithm was used to compute the unique fraction of branch length for each sample. These results (which are phylogenetic distances separating the individual samples) were analyzed using cluster analysis (UPGMA) and PCoA and ignored duplicate sequences. As shown in Figure 2A, PCoA separated low disease from diseased samples into two clusters along the first eigenvector. Cluster analysis also split low disease and diseased samples into different clades, with sample DT (labeled T in Figure 2), the least diseased sample based on caries outcome in the Cognitive study, splitting to the low disease clade. Support for the clusters was evaluated by jackknife tests (1000 replicates; Figure 2B). When analyses were again performed with all the sequences relabeled as diseased or healthy, Unifrac analysis found the two environments to be statistically different (P < 0.001), whether or not duplicate sequences were considered.\nStatistical analyses of bacterial populations in low and high disease plaque. A) Principal coordinate analysis of bacterial communities from subgingival plaque of West Virginians with low oral disease (blue) as compared to plaque of West Virginians with varying degrees of oral disease (red). B) The Unifrac algorithm was used to compute the unique branch length for a given sub-sample. Cluster analysis split low (blue) and diseased samples (red) into different clades. Support for the clusters was evaluated by jackknife tests (1000 replicates). T was the least diseased sample in the Cognitive study, as ranked in Table 2. The 'D' has been removed from sample notations in these figures for clarity.\n
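The ordination step described above (PCoA of the sample-to-sample UniFrac distances) was performed with the UniFrac server and the APE package in R. As an illustrative equivalent only, the following Python sketch implements classical PCoA from a precomputed symmetric distance matrix; the 4 x 4 matrix shown is hypothetical and stands in for exported UniFrac distances.

```python
# Sketch of classical PCoA (metric MDS) from a precomputed pairwise distance matrix.
# The study used APE in R for this step; this Python version is an assumed equivalent.
import numpy as np

def pcoa(dist):
    """dist: (n, n) symmetric matrix of pairwise distances.
    Returns sample coordinates (n, k), axes ordered by decreasing eigenvalue."""
    d = np.asarray(dist, dtype=float)
    n = d.shape[0]
    # Gower double-centering of the squared distance matrix.
    a = -0.5 * d ** 2
    centering = np.eye(n) - np.ones((n, n)) / n
    b = centering @ a @ centering
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1e-10          # drop null and negative axes
    return eigvecs[:, keep] * np.sqrt(eigvals[keep])

# Hypothetical 4-sample distance matrix (values invented for illustration).
d = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])
coords = pcoa(d)
print(coords[:, 0])  # the first principal coordinate separates the two pairs of samples
```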
[SUBTITLE] Analysis of the bacterial populations in plaque using HOMIM analyses [SUBSECTION] Since rRNA gene sequencing revealed an unexpected bacterial signature in plaque of West Virginians with high oral disease, we asked whether this pattern could be detected using an alternative method of 16S rRNA gene analysis, HOMIM. In HOMIM, multiple primers are initially used to amplify 16S rRNA genes within a DNA sample, which are then assayed for the presence of specific 16S rRNA genes using probes designed to optimize detection of 272 different bacterial species [30]. Unlike 16S rRNA sequencing, which allows quantification of the frequency of bacterial clones, the read-out for HOMIM is a relative signal intensity for each detected 16S rRNA sequence on a scale from 0 to 5. In our studies HOMIM results are presented as the sum of the signals for all oral taxa within each genus. Comparisons of HOMIM and DNA sequence analyses of plaque samples from 4 low disease (DB, DC, DM, DT) and the 5 highest disease (DA, DI, DG, DZ, DAA) participants in our study are shown in Figure 3. 
HOMIM analyses of plaque from low disease detected a high frequency of Veillonella, Streptococcus and Capnocytophaga, closely paralleling results of 16S rRNA sequencing (Figure 3A). Less prevalent bacterial types detected in 16S rRNA sequencing, such as Gemella, Fusobacterium, Actinomyces, Granulicatella, Neisseria and Haemophilus, were confirmed by HOMIM. Additional bacterial types, presumably at too low a frequency to detect by sequencing, were also detected in low disease by HOMIM, the most evident of which were Campylobacter, Prevotella, Leptotrichia, Lautropia and Aggregatibacter. These results highlight differences that can be observed between DNA sequencing and HOMIM due to methodologies employed in HOMIM that can increase the sensitivity of detection of specific bacterial species by 10 fold.\nComparison of 16S rRNA gene analyses using sequencing and HOMIM. The frequency of bacterial types (by percentage), determined by 16S rRNA sequencing, of 4 plaque samples from low disease and 5 plaque samples from high disease West Virginians, ranked based on criteria defined in Materials and Methods, was compared with the microarray signal intensity obtained from the same samples in HOMIM analyses. Bacterial types above the red line were more frequent in low disease.\nIn plaque from individuals with high disease, HOMIM analyses confirmed the presence of genera from the Clostridiales order, including Selenomonas, Eubacterium and Dialister (Figure 3B). Notably, as with sequencing data, bacteria in the 'red complex' were either absent or present at low levels in HOMIM analyses. As with low disease plaque, disproportionately higher signals for certain bacteria, specifically Campylobacter, Prevotella and Leptotrichia, were observed in HOMIM analysis as compared with sequencing, again explained by the increased sensitivity of microarray detection methodologies. Importantly, conclusions from HOMIM analyses confirmed those from 16S rRNA sequencing, and identified increased genus richness and high intensity signals for Selenomonas, Eubacterium and Dialister in association with a decline in oral health.\n
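Because the HOMIM read-out is a 0-5 signal per detected taxon and the genus-level values reported here are sums of those signals, the roll-up can be expressed in a few lines. The sketch below is illustrative rather than part of the HOMIM pipeline, and the taxon names and signal values are hypothetical.

```python
# Sketch of the genus-level roll-up described above: sum per-taxon HOMIM signals
# (0-5 scale) within each genus. Input names and values are invented examples.
from collections import defaultdict

def genus_sums(taxon_signals):
    """taxon_signals: dict mapping 'Genus species' -> signal intensity (0-5).
    Returns dict mapping genus -> summed signal."""
    sums = defaultdict(float)
    for taxon, signal in taxon_signals.items():
        genus = taxon.split()[0]   # genus is the first word of the taxon name
        sums[genus] += signal
    return dict(sums)

example = {
    "Veillonella parvula": 4, "Veillonella dispar": 3,
    "Selenomonas noxia": 2, "Selenomonas sputigena": 5,
}
print(genus_sums(example))   # {'Veillonella': 7.0, 'Selenomonas': 7.0}
```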
[SUBTITLE] Origin of bacterial types in high disease plaque [SUBSECTION] Phylogenetic trees of bacterial types developed using 16S rRNA sequencing also provided insight into the origin of bacterial signatures associated with high oral disease. For example, comparisons of 16S rRNA gene sequences revealed that diverse Selenomonas spp. were actually repeated colonizers of the high disease oral environment. The 16S rRNA tree represented in Figure 4 shows the broad diversity of Selenomonas phylotypes recovered in a single plaque sample from a high disease subject (DA). In contrast, the same clade in a sample from a low disease subject (DB) contained only low diversity Veillonella phylotypes. The order Clostridiales has its phylogenetic assignment in the Firmicutes (typically Gram-positive), but it includes a large family, the Veillonellaceae, which are obligate anaerobes that stain Gram-negative due to a porous pseudo-outer membrane. Low disease Veillonella have nearly identical sequences typical of a single species (> 97% 16S rRNA gene sequence identity), as represented in Figure 4. In our studies, we found the tight cluster of Veillonella observed in low disease was replaced in plaque from high disease by a broad expansion of other members of the family Veillonellaceae, such as Selenomonas and Dialister. At the same time in high disease there was an expansion of Gram-positives from the order Clostridiales, primarily in the families Eubacteriaceae and Lachnospiraceae. The breadth of this bacterial group is large (minimum ~85% identity), and this allowed us to recognize that Clostridiales were repeated colonizers in plaque of individuals having high oral disease, as represented by the phylogenetic diversity of Selenomonas in Figure 4. 
This finding is significant because it highlights the role of the oral environment (as opposed to in situ evolution due to horizontal gene transfer) in the generation of bacterial signatures associated with high oral disease in West Virginia.\nPhylogenetic trees of Selenomonas and Veillonella isolated from individual plaque samples. 16S rRNA gene sequences from a single high disease plaque sample (DA, red text) and a single low disease plaque sample (DB, blue text) were used to generate phylogenetic trees of Selenomonas and Veillonella, respectively. The x-axis indicates percent difference in 16S rRNA gene sequence. The separated clusters of Selenomonas show that this population descended from independently colonizing but phylogenetically related bacteria. Veillonella from DB exhibited limited diversity.",
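The contrast drawn above between the tight low disease Veillonella cluster (> 97% identity) and the broad Clostridiales expansion (minimum ~85% identity) comes down to pairwise 16S rRNA gene sequence identity within a clade. The study derived these values from ClustalW2/Muscle alignments and RDP/APE trees; the helper below is only a simplified sketch of the identity calculation, and the aligned fragments are invented.

```python
# Illustrative sketch: pairwise percent identity between aligned 16S rRNA gene
# sequences (equal length, '-' for gaps) and the minimum identity within a group,
# used here as a rough proxy for the "breadth" of a clade. Sequences are invented.
from itertools import combinations

def percent_identity(a, b):
    """Percent identity over aligned positions where neither sequence has a gap."""
    compared = matches = 0
    for x, y in zip(a, b):
        if x == '-' or y == '-':
            continue
        compared += 1
        matches += (x == y)
    return 100.0 * matches / compared if compared else 0.0

def min_identity(seqs):
    """Minimum pairwise identity within a set of aligned sequences."""
    return min(percent_identity(a, b) for a, b in combinations(seqs, 2))

# Hypothetical aligned fragments (not real 16S data).
veillonella_like = ["ACGTACGTAC", "ACGTACGTAC", "ACGTACGAAC"]
clostridiales_like = ["ACGTACGTAC", "AGGTTCGAAC", "ACCTACGTTC"]
print(min_identity(veillonella_like))    # high, as in the tight low disease cluster
print(min_identity(clostridiales_like))  # lower, reflecting a broader clade
```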
"The etiology of the poor oral health leading to the high tooth loss statistics in West Virginia is unknown. The goal of this study was to gain insight into this problem using 16S rRNA gene analyses to characterize the bacterial species in subgingival plaque of West Virginians having high oral disease. The most striking finding of our analyses was the predominance of Clostridiales cluster bacteria, including genera such as Selenomonas, Eubacterium and Dialister, in plaque of West Virginians having high oral disease. The oral microbiota is complex and dynamic, which compromises conclusions derived from this preliminary study, but factors that strengthen our interpretation that an atypical bacterial signature contributes to the oral health disparity in West Virginia include: 1) characterization of similar microbiological patterns in high and low disease plaque using two methods of 16S rRNA gene analysis, and 2) detection of a similar relationship between bacterial types and oral disease in two separate West Virginia populations. 
The breadth of the Clostridiales also allowed us to recognize that bacteria linked to oral disease were repeated colonizers of individuals with high disease, suggesting a functional relationship between the disease environment and these disease-associated bacteria. The possibility that an atypical bacterial phylogenetic signature might be linked to the decline in oral health in West Virginia is a novel idea and may prove integral to understanding the etiology of the dental problems in this state. The results from this pilot project highlight the need for an expanded study that uses culture independent approaches to analyze bacterial populations in a larger number of West Virginians having low or high oral disease to confirm the relationship between bacterial profiles and disease. The knowledge derived from such a study has the potential of being applied to the development of targeted intervention strategies that modify environmental factors to preclude colonization by disease-associated bacteria.", "APE: Analysis of Phylogenetics and Evolution; BOP: bleeding on probing; COHRA: Center for Oral Health Research in Appalachia; HOMIM: Human Oral Microbe Identification Microarray; HOMD: Human Oral Microbiome Database; NHANES: National Health and Nutrition Examination Survey; nr: non-redundant; PCoA: Principal Coordinate Analysis; PD: probing depth; RDP: Ribosomal Database Project; rRNA: ribosomal RNA; UPGMA: Unweighted Pair Group Method with Arithmetic Mean; WVU: West Virginia University.", "BJP is the director of the HOMIM Core Facility at The Forsyth Institute, which performed the bacterial microarray analyses.", "JCO organized the project and drafted the manuscript. CFC assisted with project organization and drafting of the manuscript. SL developed 16S rRNA gene amplification and cloning methods. EL performed 16S rRNA gene amplification and cloning. YC organized Cognitive study clinical data. BW organized acquisition of specimens and clinical data for the Cognitive study. RJC organized the alignment of our project with the COHRA and Cognitive studies and assisted with clinical analyses and project development. JGT organized and facilitated acquisition of COHRA subgingival plaque samples. DWM assisted with the design of the COHRA and Cognitive studies and with the acquisition of clinical samples. RJW and MLM assisted with the design of the COHRA project and organization of clinical data. BJP performed HOMIM analyses and helped with developing methods and analyses of 16S rRNA gene sequences. TE performed all 16S rRNA sequence data analyses. All authors read and approved the final manuscript.", "The pre-publication history for this paper can be accessed here:\n\nhttp://www.biomedcentral.com/1472-6831/11/7/prepub\n" ]
[ "Background", "Methods", "Subject populations", "Subject evaluations", "DNA extraction and 16S rRNA gene sequence analysis", "16S rRNA gene sequence analysis", "Human Oral Microbe Identification Microarray (HOMIM) analysis", "Results and Discussion", "Identification of bacterial populations in subgingival plaque that differentiate individuals with high or low oral disease in West Virginia", "Statistical analyses of bacterial types identified in subgingival plaque", "Analysis of the bacterial populations in plaque using HOMIM analyses", "Origin of bacterial types in high disease plaque", "Conclusions", "List of Abbreviations", "Competing interests", "Authors' contributions", "Pre-publication history" ]
[ "West Virginians have the worst oral health in the nation, with almost twice the national average (48.2%) of adults aged 65 or more having all their natural teeth extracted [1]. These statistics become more alarming knowing that infections of the oral cavity have been associated with chronic diseases, such as diabetes, cardiovascular disease and atherosclerosis [2-4]. Neither the origin of poor oral health in West Virginia nor its relationship with systemic disease is understood. Central to this problem is a determination of the microbial populations responsible for oral infections. Historically this has been difficult because of the complexity of the microbiome within oral biofilms, and difficulties in cultivating bacteria obtained from the oral environment.\nBiofilms can play either a protective (probiotic) or pathogenic role in oral health depending upon their microbial composition. Oral biofilms are initiated by colonization of probiotic Gram-positive cocci, primarily streptococci, adhering to the tooth surface, along with coaggregating Actinomyces and Veillonella [5]. Coaggregation is a common property in plaque development, and early colonizing bacteria are bridged through bacteria, such as fusobacteria, to late colonizers. The ecological succession of microbial populations from early colonizing Gram-positive cocci to late colonizing Gram-negative anaerobes of diverse morphotypes leads to a shift in biofilm composition that correlates with the appearance of gingivitis and periodontitis [6]. Specific organisms have been linked with oral diseases. Dental caries occurs as a result of a shift in the biofilm community towards acidogenic and acid-tolerant bacteria, specifically Streptococcus mutans and lactobacilli [7]. In subgingival plaque, Porphyromonas gingivalis, Tannerella forsythia and Aggregatibacter actinomycetemcomitans have been strongly associated with periodontal disease [8,9]. Until recently, associations of microbes with oral disease have been based on in vitro cultivation. As it is now recognized that only about 60% of the species in oral biofilms are cultivable [10], the use of culture-independent analyses has led to a new level of understanding of oral associated microbes [11].\nMolecular analyses of periodontal microflora had not previously been used to examine the bacterial profile of subgingival plaque of West Virginians. The goal of this study was to use 16S rRNA gene analyses to gain insight into the etiology of the oral health disparity observed in this population. In this preliminary study we were able to identify significantly different 16S rRNA bacterial phylogenetic signatures in plaque from individuals having high or low oral disease, and the high disease signature was evident in two independent populations that span a wide range of age groups. Overall we found that communities rich in Veillonella and streptococci shifted to communities rich in Selenomonas and other Clostridiales in association with a decline in oral health, potentially linking an atypical bacterial signature with oral disease in West Virginians. 
The finding that an atypical bacterial signature may be linked to oral health disparities observed in West Virginia highlights the need for further analyses of bacterial species associated with high and low oral disease in this population in order to understand the origin of this disparity.", "[SUBTITLE] Subject populations [SUBSECTION] Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. Plaque from an older population was obtained through the Oral Health Disparities Among Elders With and Without Cognitive Impairment (Cognitive study), performed in collaboration with the West Virginia University School of Dentistry and the Center on Aging. We chose to include subjects from two independent West Virginia population pools (median age 23-48 or age >70) in our study to help extend an understanding of the relevance of our findings to a broad sample of the West Virginian population. Because this was a pilot project, relatively few subjects from each group could be analyzed. All procedures were performed using Institutional Review Board approved protocols.\n[SUBTITLE] Subject evaluations [SUBSECTION] Criteria for oral health evaluation, methods of plaque collection from participants, and calibration of researchers in the COHRA project have been previously described [12]. Our study examined subgingival plaque from seven COHRA participants, which was obtained from four first molars (#3, 14, 19 and 30). These same sites were assessed for probing depth (PD), recession and bleeding on probing (BOP), as summarized in Table 1. Caries assessment was based on the coronal tooth surfaces, and teeth were classified as sound, decayed, filled or missing. Percent sound teeth for each participant is reported in Table 1. The oral status of COHRA study subjects was determined based on periodontal examinations (bordered by dashed lines in Table 1): low disease defined as <3.5 mm PD, 0-25% BOP sites; high disease defined as >4.5 mm PD, 100% BOP sites. 
Subgingival plaque was sampled using a curette, suspended in 100-500 μl TE (10 mM Tris, pH 7.5, 1 mM EDTA) containing 20% glycerol and stored at -70°C until processing. Multiple specimens from individual participants were pooled for analyses.\nCOHRA study clinical evaluations\n1 Dashed lines border clinical parameters used to determine the oral status of COHRA study participants.\nCognitive study participants were selected based on: 1) age 70 years and older, 2) resident of West Virginia, 3) community-living, and 4) dentate (having at least four natural teeth). Oral evaluations were performed by calibrated researchers using guidelines and protocols from the National Health and Nutrition Examination Survey (NHANES IV) [13]. Subgingival plaque samples were collected using sterile periodontal curettes from six sites as follows: the buccal surface of the most anterior molar in each quadrant, and the buccal surface of #11 and #31. Plaque samples were stored and pooled for analyses as above. Criteria for periodontal evaluation in the Cognitive study included PD, gingivitis and calculus. Caries evaluation included type of dentures and percent sound, missing, filled, decayed or crowned teeth. Both evaluations are reported in Table 2. The oral status of Cognitive study subjects was determined based on caries examinations (bordered by dashed lines in Table 2): low disease defined as >60% sound/filled teeth present; high disease defined as >40% teeth missing or decayed.\nCognitive study clinical evaluations\n1No periodontal examination data was acquired for DAA.\n2Dashed lines border clinical parameters used to determine the oral status of Cognitive study participants.\n[SUBTITLE] DNA extraction and 16S rRNA gene sequence analysis [SUBSECTION] DNA was extracted and purified from subgingival plaque using the UltraClean Soil DNA Isolation Kit (MO Bio Labs, Carlsbad, CA). The 16S rRNA gene was PCR amplified using the universal bacterial 16S rRNA primers (forward, 5'-GAGTTTGATYMTGGCTCAG, reverse, 5'-GAAGGAGGTGWTCCADCC [14]). Each PCR reaction contained 1 μl purified DNA, 0.4 μM universal forward and reverse primers, 5 μl 10X platinum PCR buffer, 1.5 mM MgSO4, 0.2 mM dNTPs and 0.5 μl Platinum TAQ DNA Polymerase High Fidelity (Invitrogen, Life Technologies Corp, Carlsbad, CA).
PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 60°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products were analyzed by 0.8% agarose gel electrophoresis, and reactions yielding products of ~1500 bp were cloned using the TOPO TA Cloning Kit for Sequencing (Invitrogen). Ligation products were electroporated into Escherichia coli (TOP10 Chemically Competent cells, Invitrogen), and transformants were incubated in 250 μl SOC medium (2% tryptone, 0.5% yeast extract, 10 mM NaCl, 2.5 mM KCl, 10 mM MgCl2, 10 mM MgSO4, 20 mM glucose) for 1 hour at 37°C, prior to plating on LB agar containing 50 μg/ml kanamycin and a 25 μl overlay of 2% X-gal. Following incubation overnight, ~100 colonies containing inserts (white colonies) were isolated from each sample and cultured at 37°C overnight in 96 well plates containing LB broth with 50 μg/ml kanamycin. A 2 μl volume of bacterial culture from each well was PCR amplified using M13 primers (forward, 5'-TGTAAAACGACGGCCAGT, reverse 5'-CAGGAAACAGCTATGAC). Reactions contained 0.2 μM of each primer, 5 μl 10X Qiagen PCR buffer (Qiagen, Valencia, CA), 0.2 mM dNTPs and 0.2 μl Taq DNA polymerase (Qiagen). PCR conditions were: 94°C for 4 minutes, followed by 30 cycles of 94°C for 45 seconds, 52°C for 45 seconds and 72°C for 90 seconds, with a final extension at 72°C for 15 minutes. PCR products from each well were examined by electrophoresis, and products of the appropriate size were sequenced by a commercial facility (SeqWright, Houston, TX).\n[SUBTITLE] 16S rRNA gene sequence analysis [SUBSECTION] Each DNA sequence was scanned for a single segment of the original primer sequence (ATCAAACT) between bp 440 and 520 to identify the 16S rRNA gene and to help exclude chimeras. The initial part of each sequence containing uncalled bases (N) was removed, the distal part (vector sequence) was trimmed at the primer, and the sequence was reversed to standard orientation. Nucleotide sequences generated in this study have been deposited in GenBank, accession numbers HQ894465 - HQ895588.\nSequences were classified by BLASTN analysis against both a local database and the GenBank non-redundant (nr) database. The local database was assembled in an iterative process: if a new experimental sequence did not match with >98% identity, a new database was assembled adding matching type sequences obtained from GenBank and the Ribosomal Database Project (RDP) [15]. The current local database contains 375 sequences organized into 122 'groups'. Each group contains closely related sequences and has been validated to be non-overlapping by BLAST analysis of individual groups against the entire database. This approach provided a level of detail that was informative for classifying bacterial sequences. Software used in sequence analyses included: 1) BLAST, using the NCBI server [16] with results formatted as XML; 2) UniFrac, using its server [17,18]; 3) phylogenetic tree construction, using the RDP server; 4) local BLAST, using the 'legacy' executable blastall from NCBI [19]. 
Other software run locally included: R [20] and the Analysis of Phylogenetics and Evolution (APE) package for constructing and plotting phylogenetic trees and plotting the Principal Coordinate Analysis (PCoA); ClustalW2 [21] and Muscle [22] for multiple sequence alignment; and Python [23] to automate steps in analyses and produce the heatmaps (generated using matplotlib).\nEach DNA sequence was scanned for a single segment of the original primer sequence (ATCAAACT) between bp 440 and 520 to identify the 16S rRNA gene and to help exclude chimeras. The initial part of each sequence containing uncalled bases (N) was removed, the distal part (vector sequence) was trimmed at the primer, and the sequence was reversed to standard orientation. Nucleotide sequences generated in this study have been deposited in GenBank, accession numbers HQ894465 - HQ895588.\nSequences were classified by BLASTN analysis against both a local database and the GenBank non-redundant (nr) database. The local database was assembled in an interative process: if a new experimental sequence did not match with >98% identity, a new database was assembled adding matching type sequences obtained from GenBank and the Ribosomal Database Project (RDP) [15]. The current local database contains 375 sequences organized into 122 'groups'. Each group contains closely related sequences and has been validated to be non-overlapping by BLAST analysis of individual groups against the entire database. This approach provided a level of detail that was informative for classifying bacterial sequences. Software used in sequence analyses included: 1) BLAST, using the NCBI server [16] with results formatted as XML; 2) UniFrac, using its server [17,18]; 3) phylogenetic tree construction, using the RDP server; 4) local BLAST, using the 'legacy' executable blastall from NCBI [19]. Other software run locally included: R [20] and the Analysis of Phylogenetics and Evolution (APE) package for constructing and plotting phylogenetic trees and plotting the Principal Coordinate Analysis (PCoA); ClustalW2 [21] and Muscle [22] for multiple sequence alignment; and Python [23] to automate steps in analyses and produce the heatmaps (generated using matplotlib).\n[SUBTITLE] Human Oral Microbe Identification Microarray (HOMIM) analysis [SUBSECTION] For HOMIM 16S rRNA gene microarray analysis, purified DNA from subgingival plaque samples was sent to the HOMIM Core Facility at the Forsyth Institute (Boston, MA). This facility offers high throughput analysis of ~300 of the most prevalent oral bacterial species and provides a comprehensive report of bacterial profiles within DNA samples [24].\nFor HOMIM 16S rRNA gene microarray analysis, purified DNA from subgingival plaque samples was sent to the HOMIM Core Facility at the Forsyth Institute (Boston, MA). This facility offers high throughput analysis of ~300 of the most prevalent oral bacterial species and provides a comprehensive report of bacterial profiles within DNA samples [24].", "Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. 
Subgingival plaque samples used in this study were obtained in conjunction with two research projects conducted in West Virginia. Plaque from an age 23 to 48 population was obtained through the Center for Oral Health Research in Appalachia (COHRA study), a partnership between West Virginia University and University of Pittsburgh examining genetic, epidemiologic, microbiological, and behavioral factors contributing to oral health. Plaque from an older population was obtained through the Oral Health Disparities Among Elders With and Without Cognitive Impairment (Cognitive study), performed in collaboration with the West Virginia University School of Dentistry and the Center on Aging. We chose to include subjects from two independent West Virginia population pools (age 23-48 or age >70) to help extend an understanding of the relevance of our findings to a broad sample of the West Virginian population. Because this was funded as a pilot project, relatively few subjects from each group could be analyzed. All procedures were performed using Institutional Review Board approved protocols.

Criteria for oral health evaluation, methods of plaque collection from participants, and calibration of researchers in the COHRA project have been previously described [12]. Our study examined subgingival plaque from seven COHRA participants, which was obtained from four first molars (#3, 14, 19 and 30). These same sites were assessed for probing depth (PD), recession and bleeding on probing (BOP), as summarized in Table 1. Caries assessment was based on the coronal tooth surfaces, and teeth were classified as sound, decayed, filled or missing. Percent sound teeth for each participant is reported in Table 1. The oral status of COHRA study subjects was determined based on periodontal examinations (bordered by dashed lines in Table 1): low disease defined as <3.5 mm PD, 0-25% BOP sites; high disease defined as >4.5 mm PD, 100% BOP sites. Subgingival plaque was sampled using a curette, suspended in 100-500 μl TE (10 mM Tris, pH 7.5, 1 mM EDTA) containing 20% glycerol and stored at -70°C until processing. Multiple specimens from individual participants were pooled for analyses.

Table 1. COHRA study clinical evaluations.
1 Dashed lines border clinical parameters used to determine the oral status of COHRA study participants.

Cognitive study participants were selected based on: 1) age 70 years and older, 2) resident of West Virginia, 3) community-living, and 4) dentate (having at least four natural teeth). Oral evaluations were performed by calibrated researchers using guidelines and protocols from the National Health and Nutrition Examination Survey (NHANES IV) [13]. Subgingival plaque samples were collected using sterile periodontal curettes from six sites as follows: the buccal surface of the most anterior molar in each quadrant, and the buccal surface of #11 and #31. Plaque samples were stored and pooled for analyses as above. Criteria for periodontal evaluation in the Cognitive study included PD, gingivitis and calculus. Caries evaluation included type of dentures and percent sound, missing, filled, decayed or crowned teeth. Both evaluations are reported in Table 2. The oral status of Cognitive study subjects was determined based on caries examinations (bordered by dashed lines in Table 2): low disease defined as >60% sound/filled teeth present; high disease defined as >40% teeth missing or decayed.

Table 2. Cognitive study clinical evaluations.
1 No periodontal examination data was acquired for DAA.
2 Dashed lines border clinical parameters used to determine the oral status of Cognitive study participants.

DNA was extracted and purified from subgingival plaque using the UltraClean Soil DNA Isolation Kit (MO Bio Labs, Carlsbad, CA). The 16S rRNA gene was PCR amplified using the universal bacterial 16S rRNA primers (forward, 5'-GAGTTTGATYMTGGCTCAG; reverse, 5'-GAAGGAGGTGWTCCADCC [14]).
Each PCR reaction contained 1 μl purified DNA, 0.4 μM universal forward and reverse primers, 5 μl 10X platinum PCR buffer, 1.5 mM MgSO4, 0.2 mM dNTPs and 0.5 μl Platinum Taq DNA Polymerase High Fidelity (Invitrogen, Life Technologies Corp, Carlsbad, CA). Cycling, cloning of the ~1500 bp products and sequencing of individual clones were performed as described above.
Identification of bacterial populations in subgingival plaque that differentiate individuals with high or low oral disease in West Virginia

Previous studies found bacterial species in the 'red complex' to be strongly linked to periodontal disease [25]. To examine if these same bacteria were associated with oral disease in West Virginia, we used 16S rRNA gene sequencing to characterize bacteria in subgingival plaque of COHRA participants diagnosed as having low or high oral disease based on periodontal examinations (Table 1, bordered by dashed lines). Analyses were performed in 96-well plate format, and due to the relatively high cost of DNA sequencing, our practical goal was to sequence 96 clones per sample, realizing that we would not be able to detect bacterial types present at low frequency. In actuality the number of clones sequenced per sample ranged from 55 to 133, with low numbers relating to difficulties obtaining high quality sequences for some samples. Sequences obtained were analyzed by BLAST against the GenBank database that includes all known bacterial 16S rRNA gene sequences, and against a local database that facilitated classification of bacterial 'types' based on ≥95% sequence identity. Subsequent evaluation of sequence data using the Human Oral Microbiome Database (HOMD) BLAST server [26] yielded nearly identical results, except for a few changes caused by reclassification of Clostridiales type strains, which affected some assignments to Eubacterium and Lachnospiraceae.

The initial phase of sequencing examined 2 low (DB and DC) and 5 high (DF, DL, DG, DI, DA) disease COHRA participants. An additional low disease sample (DM), evaluated based on periodontal criteria, was obtained through the West Virginia University Dental Clinic and served as a non-COHRA low disease control. The percentage of a bacterial type in each sample relative to the total number of clones sequenced (indicated at the bottom of each column) is shown in the heatmap in Figure 1. As previously reported [6], increased bacterial diversity was evident in plaque from individuals diagnosed with high oral disease. However, unexpectedly a large number of high disease bacterial types were classified in the Clostridiales cluster (Selenomonas, Eubacterium, Dialister), which were completely absent from low disease plaque. Also notable in high disease was the low frequency of bacterial types traditionally linked to periodontitis (Porphyromonas, Tannerella, Treponema and Aggregatibacter). Plaque from low disease exhibited a distinctly different bacterial population, including mainly Streptococcus and Veillonella with a moderate number of Capnocytophaga.
The bacterial diversity in low disease COHRA samples was highly consistent with a recent study, which identified Streptococcus, Veillonella and Capnocytophaga in 100% of the plaque samples obtained from individuals with healthy oral cavities [27]. Thus, while the bacteria associated with oral health in West Virginia were highly reflective of other regions in the United States, an atypical pathogenic phylogenetic signature that includes Selenomonas and other Clostridiales cluster bacteria appeared to be associated with the decline in oral health in West Virginia.

Figure 1. Identification of bacterial populations in subgingival plaque of West Virginians. Bacterial composition in plaque samples was determined using 16S rRNA gene sequencing in 2 low and 5 high oral disease COHRA participants, ranked based on periodontal exams, and 5 Cognitive study participants, ranked based on caries index. DM is a low periodontal disease control sample obtained through the West Virginia University Dental Clinic. Each numbered box indicates the percentage of clones of the type-specific bacterial 16S rRNA gene relative to the total number of clones sequenced, which is indicated at the bottom of each column. The color of the box reflects observed counts.

COHRA study participants ranged in age from 23 to 48, and Cognitive study participants (ages 73 to 93) provided a means of assessing how bacterial types related to tooth status later in life. The periodontal health of the 5 Cognitive study participants included in this study was exceptionally good, as evident in probing depths of 3 mm or less, shown in Table 2. However, as expected for this age, considerable variation was observed in caries health (Table 2, bordered by dashed lines), with DT having the highest percentage of sound teeth, and DAA and DZ having the highest percentage of missing/decayed teeth. Examination of bacterial types in subgingival plaque of DT revealed mainly Veillonella, Streptococcus and Capnocytophaga (Figure 1), as observed for low disease COHRA participants. In comparison, DAA and DZ had bacterial patterns that included increased diversity and species richness of Selenomonas, Eubacterium and Dialister, in addition to many of the other bacterial species observed in high oral disease COHRA participants. These data are consistent with the bacterial signature of aged West Virginians with tooth loss being similar to that of younger West Virginians with periodontal disease.
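Figure 1 reports, for each sample, the percentage of sequenced clones assigned to each bacterial type, with the clone total given per column. A minimal sketch of that calculation and heatmap is shown below using matplotlib (which the authors used for their heatmaps); the taxa, samples and counts are invented for illustration and are not the study's data.

```python
# Invented-data illustration of a Figure 1 style heatmap: clone counts per sample
# are converted to percentages of that sample's sequenced clones and plotted.
import numpy as np
import matplotlib.pyplot as plt

samples = ["DB", "DC", "DA", "DI"]                       # columns (subjects)
taxa = ["Streptococcus", "Veillonella", "Capnocytophaga", "Selenomonas"]
counts = np.array([[40, 35,  2,  1],                     # rows = taxa
                   [30, 25,  3,  2],
                   [10, 12,  0,  0],
                   [ 0,  0, 25, 30]], dtype=float)

percent = 100.0 * counts / counts.sum(axis=0, keepdims=True)   # column-wise %

fig, ax = plt.subplots(figsize=(5, 3))
im = ax.imshow(percent, cmap="YlOrRd", aspect="auto")
ax.set_xticks(range(len(samples)))
ax.set_xticklabels(samples)
ax.set_yticks(range(len(taxa)))
ax.set_yticklabels(taxa)
for i in range(len(taxa)):                               # print the percentage in each box
    for j in range(len(samples)):
        ax.text(j, i, f"{percent[i, j]:.0f}", ha="center", va="center", fontsize=8)
cbar = fig.colorbar(im)
cbar.set_label("% of clones")
fig.tight_layout()
plt.show()
```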
Statistical analyses of bacterial types identified in subgingival plaque

Unifrac analysis [17], which takes into account phylogenetic distances, has become the method of choice for analyzing differences between microbial communities and was used in our studies to provide statistical validation of differences in low and high disease plaque environments. The method requires a rooted phylogenetic tree as input, and we used the RDP server to generate the tree, which is especially suited for rRNA alignments as it employs Infernal [28], which is guided by predicted secondary structure. The tree was constructed by the neighbor-joining method [29], using APE in R. It was rooted by including Thermotoga maritima SL7 as an outgroup. The Unifrac algorithm was used to compute the unique fraction of branch length for each sample. These results (which are phylogenetic distances separating the individual samples) were analyzed using cluster analysis (UPGMA) and PCoA, ignoring duplicate sequences. As shown in Figure 2A, PCoA separated low disease from diseased samples into two clusters along the first eigenvector. Cluster analysis also split low disease and diseased samples into different clades, with sample DT (labeled T in Figure 2), the least diseased sample based on caries outcome in the Cognitive study, splitting to the low disease clade. Support for the clusters was evaluated by jackknife tests (1000 replicates; Figure 2B). When analyses were again performed with all the sequences relabeled as diseased or healthy, Unifrac analysis found the two environments to be statistically different (P < 0.001), whether or not duplicate sequences were considered.

Figure 2. Statistical analyses of bacterial populations in low and high disease plaque. A) Principal coordinate analysis of bacterial communities from subgingival plaque of West Virginians with low oral disease (blue) as compared to plaque of West Virginians with varying degrees of oral disease (red). B) The Unifrac algorithm was used to compute the unique branch length for a given sub-sample. Cluster analysis split low (blue) and diseased samples (red) into different clades. Support for the clusters was evaluated by jackknife tests (1000 replicates). T was the least diseased sample in the Cognitive study, as ranked in Table 2. The 'D' has been removed from sample notations in these figures for clarity.
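The authors computed UniFrac distances on the UniFrac server and produced the UPGMA clustering and PCoA with APE in R. Purely as an illustration of those downstream steps, the sketch below takes an already-computed pairwise distance matrix (values invented) and performs average-linkage (UPGMA) clustering with SciPy and a classical-MDS PCoA with NumPy; it does not reproduce the UniFrac calculation itself.

```python
# Invented-distance illustration of the downstream steps: UPGMA clustering and a
# classical-MDS PCoA from a pairwise (e.g. UniFrac) distance matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

samples = ["B", "C", "M", "A", "I", "G"]
D = np.array([[0.00, 0.20, 0.25, 0.70, 0.65, 0.68],
              [0.20, 0.00, 0.22, 0.72, 0.66, 0.69],
              [0.25, 0.22, 0.00, 0.71, 0.67, 0.70],
              [0.70, 0.72, 0.71, 0.00, 0.30, 0.28],
              [0.65, 0.66, 0.67, 0.30, 0.00, 0.26],
              [0.68, 0.69, 0.70, 0.28, 0.26, 0.00]])

# UPGMA is average-linkage hierarchical clustering on the condensed distances;
# the result can be drawn with scipy.cluster.hierarchy.dendrogram.
upgma = linkage(squareform(D), method="average")

# Classical MDS (PCoA): double-centre the squared distances, then eigendecompose.
n = len(D)
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (D ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
top = np.argsort(eigval)[::-1][:2]
coords = eigvec[:, top] * np.sqrt(np.maximum(eigval[top], 0.0))

for name, (x, y) in zip(samples, coords):
    print(f"{name}: PCo1 = {x:+.3f}, PCo2 = {y:+.3f}")
print(upgma)   # linkage matrix encoding the UPGMA clades
```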
Analysis of the bacterial populations in plaque using HOMIM analyses

Since rRNA gene sequencing revealed an unexpected bacterial signature in plaque of West Virginians with high oral disease, we asked whether this pattern could be detected using an alternative method of 16S rRNA gene analysis, HOMIM. In HOMIM, multiple primers are initially used to amplify 16S rRNA genes within a DNA sample, which are then assayed for the presence of specific 16S rRNA genes using probes designed to optimize detection of 272 different bacterial species [30]. Unlike 16S rRNA sequencing, which allows quantification of the frequency of bacterial clones, the read-out for HOMIM is a relative signal intensity for each detected 16S rRNA sequence on a scale from 0 to 5. In our studies HOMIM results are presented as the sum of the signals for all oral taxa within each genus. Comparisons of HOMIM and DNA sequence analyses of plaque samples from 4 low disease (DB, DC, DM, DT) and the 5 highest disease (DA, DI, DG, DZ, DAA) participants in our study are shown in Figure 3. HOMIM analyses of plaque from low disease detected a high frequency of Veillonella, Streptococcus and Capnocytophaga, closely paralleling results of 16S rRNA sequencing (Figure 3A). Less prevalent bacterial types detected in 16S rRNA sequencing, such as Gemella, Fusobacterium, Actinomyces, Granulicatella, Neisseria and Haemophilus, were confirmed by HOMIM. Additional bacterial types, presumably at too low a frequency to detect by sequencing, were also detected in low disease by HOMIM, the most evident of which were Campylobacter, Prevotella, Leptotrichia, Lautropia and Aggregatibacter.
These results highlight differences that can be observed between DNA sequencing and HOMIM due to methodologies employed in HOMIM that can increase the sensitivity of detection of specific bacterial species by 10-fold.

Figure 3. Comparison of 16S rRNA gene analyses using sequencing and HOMIM. The frequency of bacterial types (by percentage), determined by 16S rRNA sequencing, of 4 plaque samples from low disease and 5 plaque samples from high disease West Virginians, ranked based on criteria defined in Materials and Methods, was compared with the microarray signal intensity obtained from the same samples in HOMIM analyses. Bacterial types above the red line were more frequent in low disease.

In plaque from individuals with high disease, HOMIM analyses confirmed the presence of genera from the Clostridiales order, including Selenomonas, Eubacterium and Dialister (Figure 3B). Notably, as with sequencing data, bacteria in the 'red complex' were either absent or present at low levels in HOMIM analyses. As with low disease plaque, disproportionately higher signals for certain bacteria, specifically Campylobacter, Prevotella and Leptotrichia, were observed in HOMIM analysis as compared with sequencing, again explained by the increased sensitivity of microarray detection methodologies. Importantly, conclusions from HOMIM analyses confirmed those from 16S rRNA sequencing, and identified increased genus richness and high intensity signals for Selenomonas, Eubacterium and Dialister in association with a decline in oral health.
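Because HOMIM reports a 0-5 signal for each detected taxon, presenting results as the sum of the signals for all oral taxa within each genus is a simple roll-up. A minimal sketch is below; the probe names and signal values are invented, and the genus is taken, for illustration only, as the first word of the taxon name.

```python
# Invented-data illustration of the genus-level roll-up of HOMIM probe signals
# (each detected taxon scored 0-5); the genus is taken as the first word of the
# taxon name for this example.
from collections import defaultdict

probe_signals = {
    "Selenomonas noxia": 3,
    "Selenomonas sputigena": 2,
    "Veillonella parvula": 5,
    "Veillonella dispar": 4,
    "Dialister invisus": 1,
}

genus_totals = defaultdict(int)
for taxon, signal in probe_signals.items():
    genus_totals[taxon.split()[0]] += signal

for genus, total in sorted(genus_totals.items(), key=lambda kv: -kv[1]):
    print(f"{genus}: summed signal {total}")
```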
Origin of bacterial types in high disease plaque

Phylogenetic trees of bacterial types developed using 16S rRNA sequencing also provided insight into the origin of bacterial signatures associated with high oral disease. For example, comparisons of 16S rRNA gene sequences revealed that diverse Selenomonas spp. were actually repeated colonizers of the high disease oral environment. The 16S rRNA tree represented in Figure 4 shows the broad diversity of Selenomonas phylotypes recovered in a single plaque sample from a high disease subject (DA). In contrast, the same clade in a sample from a low disease subject (DB) contained only low diversity Veillonella phylotypes. The order Clostridiales has its phylogenetic assignment in the Firmicutes (typically Gram-positive), but it includes a large family, the Veillonellaceae, which are obligate anaerobes that stain Gram-negative due to a porous pseudo-outer membrane. Low disease Veillonella have nearly identical sequences typical of a single species (>97% 16S rRNA gene sequence identity), as represented in Figure 4. In our studies, we found the tight cluster of Veillonella observed in low disease was replaced in plaque from high disease by a broad expansion of other members of the family Veillonellaceae, such as Selenomonas and Dialister. At the same time in high disease there was an expansion of Gram-positives from the order Clostridiales, primarily in the families Eubacteriaceae and Lachnospiraceae. The breadth of this bacterial group is large (minimum ~85% identity), and this allowed us to recognize that Clostridiales were repeated colonizers in plaque of individuals having high oral disease, as represented by the phylogenetic diversity of Selenomonas in Figure 4. This finding is significant because it highlights the role of the oral environment (as opposed to in situ evolution due to horizontal gene transfer) in the generation of bacterial signatures associated with high oral disease in West Virginia.

Figure 4. Phylogenetic trees of Selenomonas and Veillonella isolated from individual plaque samples.
16S rRNA gene sequences from a single high disease plaque sample (DA, red text) and a single low disease plaque sample (DB, blue text) were used to generate phylogenetic trees of Selenomonas and Veillonella, respectively. The x-axis indicates percent difference in 16S rRNA gene sequence. The separated clusters of Selenomonas show that this population descended from independently colonizing but phylogenetically related bacteria. Veillonella from DB exhibited limited diversity.
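The contrast drawn in Figure 4 rests on pairwise 16S identity: low disease Veillonella reads exceed 97% identity, whereas the Clostridiales recovered from high disease plaque span down to roughly 85% identity. The toy sketch below shows the underlying percent-identity calculation over pre-aligned, equal-length fragments; the sequences are invented and far shorter than real 16S reads.

```python
# Toy percent-identity calculation over pre-aligned, equal-length fragments:
# the quantity behind the >97% (single species) vs ~85% (broad clade) contrast.
# Sequences are invented and far shorter than real 16S reads.
from itertools import combinations

aligned = {
    "Veillonella_1": "ACGTACGTACGTACGTACGT",
    "Veillonella_2": "ACGTACGTACGTACGTACGA",
    "Selenomonas_1": "ACGTTCGAACGGACTTACCT",
}

def percent_identity(a, b):
    matches = sum(1 for x, y in zip(a, b) if x == y and x != "-")
    return 100.0 * matches / len(a)

for (name1, seq1), (name2, seq2) in combinations(aligned.items(), 2):
    print(f"{name1} vs {name2}: {percent_identity(seq1, seq2):.1f}% identity")
```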
The etiology of the poor oral health leading to the high tooth loss statistics in West Virginia is unknown. The goal of this study was to gain insight into this problem using 16S rRNA gene analyses to characterize the bacterial species in subgingival plaque of West Virginians having high oral disease. The most striking finding of our analyses was the predominance of Clostridiales cluster bacteria, including genera such as Selenomonas, Eubacterium and Dialister, in plaque of West Virginians having high oral disease. The oral microbiota is complex and dynamic, which compromises conclusions derived from this preliminary study, but factors that strengthen our interpretation that an atypical bacterial signature contributes to the oral health disparity in West Virginia include: 1) characterization of similar microbiological patterns in high and low disease plaque using two methods of 16S rRNA gene analysis, and 2) detection of a similar relationship between bacterial types and oral disease in two separate West Virginia populations. The breadth of the Clostridiales also allowed us to recognize that bacteria linked to oral disease were repeated colonizers of individuals with high disease, suggesting a functional relationship between the disease environment and these disease-associated bacteria.
The possibility that an atypical bacterial phylogenetic signature might be linked to the decline in oral health in West Virginia is a novel idea and may prove integral to understanding the etiology of the dental problems in this state. The results from this pilot project highlight the need for an expanded study that uses culture-independent approaches to analyze bacterial populations in a larger number of West Virginians having low or high oral disease to confirm the relationship between bacterial profiles and disease. The knowledge derived from such a study has the potential of being applied to the development of targeted intervention strategies that modify environmental factors to preclude colonization by disease-associated bacteria.

APE: Analysis of Phylogenetics and Evolution; BOP: bleeding on probing; COHRA: Center for Oral Health Research in Appalachia; HOMIM: Human Oral Microbe Identification Microarray; HOMD: Human Oral Microbiome Database; NHANES: National Health and Nutrition Examination Survey; nr: non-redundant; PCoA: Principal Coordinate Analysis; PD: probing depth; RDP: Ribosomal Database Project; rRNA: ribosomal RNA; UPGMA: Unweighted Pair Group Method with Arithmetic Mean; WVU: West Virginia University.

BJP is the director of the HOMIM Core Facility at The Forsyth Institute, which performed the bacterial microarray analyses.

JCO organized the project and drafted the manuscript. CFC assisted with project organization and drafting of the manuscript. SL developed 16S rRNA gene amplification and cloning methods. EL performed 16S rRNA gene amplification and cloning. YC organized Cognitive study clinical data. BW organized acquisition of specimens and clinical data for the Cognitive study. RJC organized the alignment of our project with the COHRA and Cognitive studies and assisted with clinical analyses and project development. JGT organized and facilitated acquisition of COHRA subgingival plaque samples. DWM assisted with the design of the COHRA and Cognitive studies and with the acquisition of clinical samples. RJW and MLM assisted with the design of the COHRA project and organization of clinical data. BJP performed HOMIM analyses and helped with developing methods and analyses of 16S rRNA gene sequences. TE performed all 16S rRNA sequence data analyses. All authors read and approved the final manuscript.

The pre-publication history for this paper can be accessed here:
http://www.biomedcentral.com/1472-6831/11/7/prepub
Reducing risks for infant mortality in the Midlands, UK: a qualitative study identifying areas for improvement in the delivery of key public health messages in the perinatal period.
36253719
The Midlands has amongst the highest rates of neonatal and infant mortality in the UK. A public health parent education and empowerment programme, aimed at reducing key risks associated with this mortality, was established and evaluated in the region. This was undertaken in an attempt to identify areas for optimal delivery of the public health messages around reducing risks for neonatal and infant mortality.
BACKGROUND
Qualitative assessment, using the software package Dedoose®, was undertaken. This involved analysis of reflections by the programme trainers, after the delivery of their training sessions to parents, families and carers, between 01 January and 31 December 2021. These were intended to capture insights from the trainers on parent, family, carer and staff perspectives and perceptions/misperceptions around reducing risks for infant mortality. Potential areas for improvement in delivery of the programme were identified from this analysis.
METHOD
A total of 323 programmes, comprising 524 parents, family members and carers, were offered the programme. Analysis of 167 reflections around these interactions and those of staff (n = 29) is reported. The programme was positively received across parents, families, carers and staff. Four overall themes were identified: (a) reach and inclusion, (b) knowledge, (c) practical and emotional support and (d) challenges for delivery of the programme. Recommendations for improved delivery of the programme were identified, based on qualitative analysis.
RESULTS
This novel approach to empowerment and education around neonatal public health messaging is a valuable tool for parents, families, carers and staff in the Midlands. Key practical recommendations for enhancing delivery of these critical public health messages were identified from this qualitative research. These are likely to be of value in other parts of the UK and globally.
CONCLUSION
[ "Humans", "Infant", "Infant, Newborn", "Empowerment", "Health Education", "Infant Mortality", "Parents", "Public Health", "Qualitative Research", "Risk Assessment", "United Kingdom", "Program Evaluation" ]
9574805
Introduction
Infant mortality is a key indicator of the health of a nation. In the United Kingdom (UK), the vast majority of infant mortality is due to deaths within the neonatal period. Infant mortality ranges from 1.7 deaths per 1000 live births in some parts of the UK to a high of almost 7 deaths (6.8) per 1000 live births in the Midlands [1]. The Midlands has long held notoriously high rates of infant mortality, and there have been numerous reports presented to cabinet and published outlining the key associations with infant mortality in the region. These include smoking, lack of breast feeding, prematurity, low birth weight, extremes of maternal age at delivery, domestic abuse, congenital anomalies, ethnicity, social deprivation, lack of maternal education, sudden infant death syndrome and infection [2–4]. Child death overview panels, local and regional perinatal mortality reviews and National Mothers and Babies confidential enquiries (MBRRACE: Reducing Risk through Audits and Confidential Enquiries) are conducted regularly and intensively in England, alongside other supportive agencies addressing potentially avoidable factors relating to medical and social support. However, there has been no analysis of population perspectives (parents, families and carers of those with new babies or those expecting a new baby) around risks associated with infant mortality, and the impact of their carer education and empowerment in the antenatal and postnatal period, in understanding and modifying behaviours in response to their knowledge of these risks.

Our objective was to qualitatively assess perspectives on education and empowerment around risks for infant mortality in the region in families that were expecting or had just delivered a new baby, so that it could (a) provide a base for local knowledge, (b) inform how better to deliver our messages, and (c) over time, define which aspects to focus on moving forward that would best benefit the local population. We report here on qualitative analysis of the perspectives shared by parents, families and carers, through reflections from trainers within a neonatal public health education and empowerment programme (the STORK Programme) in a local district general hospital in the Midlands.

Background of the local neonatal public health programme

In stretching the boundaries of perinatal care, a public health programme, aimed at reducing the risks associated with infant mortality through education and empowerment to families in parts of the Midlands, began in 2017 [5, 6]. This programme [7] (STORK for parents, families and carers) addresses key public health messaging around the chief preventable risks associated with infant mortality and comprises (a) basic bystander life support education, (b) how to deal with a choking child, (c) recognising signs of illness in baby, (d) safe sleeping advice, (e) breast feeding support and advice, (f) understanding the risks associated with smoking in and after pregnancy, around baby, and smoking cessation advice, and (g) managing a crying baby, together with signposting to healthy lifestyle and perinatal mental health support in relevant participating regions. Delivery of 'one' programme to a parent, family or carer means education on all aspects (a) to (g) above, with specific focus on areas of greatest need. Its focus is to highlight key risks and strategies to reduce the risks around infant mortality in the region, using modern aids such as mobile applications, and linking in to NHS and other relevant online resources.
Parents and families with newborn babies, and expectant parents, are taught the key components of the programme using a mobile application [8] and, in so doing, are informed about risks associated with infant mortality in the region. The programme is modified in participating units, as per available resources, to offer either individualised 1:1 learning with families, group sessions or signposting sessions. STORK is the acronym for Supportive Training Offering Reassurance and Knowledge, and parent feedback on the programme has previously been published. For STORK education sessions with parents, family members or carers, staff members were routinely encouraged to be observers. This was part of the overall delivery of the STORK programme at the hospital, to promote awareness amongst staff so that they could better support the parents, family members and carers whilst in hospital. Delivery of 'one programme' comprised multiple sessions with parents, families and carers to ensure all aspects were covered, and included follow-up conversations with parents, families and carers after discharge from the neonatal unit.

Anchored at the University of Wolverhampton, Research Institute and Faculty of Science and Engineering, the programme has been supported in participating local regions variably by individual NHS Trusts, in which it has been embedded into discharge services for neonatal units, or through Public Health, Local Maternity and Neonatal Systems and City Councils. The programme focuses on parent, family and carer education and empowerment, utilising NHS and other resources. Its strength lies in the collation of material around reducing risks for infant mortality into a single platform: a user-friendly mobile application and 1:1 parent, family and carer training delivered by support workers in this district general hospital. Its success with parents has been previously described from another local district hospital [5]. Here, in a survey of 218 parents or carers in Wolverhampton, 215 (98.6%) indicated that it instilled confidence in them caring for their babies after discharge from the neonatal unit, that they would recommend the programme to others, and that the programme was useful. The remaining three did not comment as English was not their first language. Free-text feedback from parents included, amongst other comments, that 'it was a great programme', that 'all parents should be offered this', and that 'we learnt a lot of what we didn't know'. The intention of the programme is, over time, to work towards behavioural changes that cascade multi-directionally to friends and relatives, subsequent children, parents and grandparents, and carers.
Results
During the study period (January to December 2021), 323 STORK programmes (i.e. distinct sessions of education with either individual parents, or with parents together with other family members such as grandparents, or carers) were offered to parents, families and carers at the Dudley Group of Hospitals NHS Trust. As the programme was offered during the COVID-19 period, the focus of the education and empowerment packages was heavily weighted towards parents, who had access to the hospital's neonatal unit. A total of 524 individuals, comprising 309 mothers, 198 fathers and 17 other family members including foster carers, were included in the training programme during this period. In addition, 29 staff members and students observed these education sessions, including neonatal staff nurses, senior sisters, student nurses (including trainee and nursing associates), doctors and interpreters. Twelve declined the STORK programme. The trainers recorded 167 reflective entries during this period, from which four overall themes were identified: (a) reach and inclusion; (b) knowledge; (c) practical and emotional support; and (d) challenges. Each of these four themes is discussed below, covering different aspects of the STORK programme including basic life support, safe sleep, breastfeeding, smoking and infant crying. Illustrative comments were taken verbatim from the practitioner notes; the initial at the start of each quote identifies the practitioner who made the comment (C, L or H).
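As a quick sanity check on the figures reported above, the breakdown of trained individuals can be re-derived and expressed as proportions. The short sketch below is purely illustrative and uses only the counts stated in this paragraph.

```python
# Illustrative re-check of the participant figures reported above;
# counts are taken directly from the text.
participants = {
    "mothers": 309,
    "fathers": 198,
    "other family members (incl. foster carers)": 17,
}

total = sum(participants.values())
assert total == 524  # total individuals included in the training programme

for group, n in participants.items():
    print(f"{group}: {n} ({n / total:.1%})")
```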
Reach and inclusion: reaching parents and carers at the right time and place, with the right tools

The degree of inclusion of different participants in the STORK programme was a theme that emerged from trainer reflections. This included the challenges of reaching fathers and other carers, as well as 'time and place' dimensions – reaching mothers at the right time, and opportunities to extend the programme beyond the neonatal unit. The provision of multi-language, digital and audio-visual tools was proposed by the trainers to facilitate greater inclusion.

The importance and challenge of engaging fathers and other carers in the programme was highlighted in the practitioner reflections. This represented a particular challenge during the COVID-19 pandemic, since hospital visits were restricted, limiting opportunities to engage with fathers and wider family members.

L: 'It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn't done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome'.

The trainers highlighted the importance of reaching mothers and families at the right time, especially first-time mothers, to ensure they receive vital information on safe sleep, life support and breastfeeding. At times, trainers working in the neonatal unit reported feeling they had reached the mothers too late when providing breastfeeding support. On the other hand, given the limited time to provide support to families in the hospital, trainers commented on the opportunity to extend support to normal care as well as into the community, or to establish a STORK support network for breastfeeding mothers.

C: '[T]his mom said that she had never had this type of information before with any of her children as she's normally "kicked out quickly" after birth and she's never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples' lives, not in the same way as people being motivated to do it themselves as many aren't, especially the most vulnerable'.

Language barriers were reported as an obstacle to the successful delivery of the programme across all community groups. The provision of multi-language content and visual aids was mentioned by trainers as a potential solution to overcome this challenge, especially when interpreters are unavailable, and to aid parents with learning difficulties. The trainers made use of digital and audio-visual tools such as mobile apps and videos to facilitate learning. However, they encountered problems with a lack of reliable Wi-Fi connection and access to tablets in the hospital, and cited a need for information leaflets in a range of languages.

Knowledge: filling knowledge gaps and dealing with conflicting knowledge claims

The reflections revealed how the STORK programme supported parents from a range of cultural and socio-economic backgrounds, with different personal circumstances and varying levels of baseline knowledge. The provision of information tailored to the participant helped to fill knowledge gaps.

When it came to safe sleep practices, ICON, smoking awareness and knowledge of basic life support for babies, the trainer reflections revealed a wide spectrum of knowledge levels among STORK participants, from no prior knowledge to some knowledge to good knowledge. Even experienced parents may never have received information on these topics, and by asking questions to establish baseline knowledge, the trainers filled in gaps and provided critical information.

H: 'When covering safe sleep and SIDs I asked the parents what they already knew, both of them had no knowledge of any safe sleep guidelines. I found this really surprising this being their 6th baby! They said that nobody had ever spoken to them about safe sleep and they had heard of cot death but associated this with babies passing away in the night, in their cots'.

C: 'Parents appreciated everything, they came from war country and talked about war crimes and violence which they had scars… They especially appreciated the choking and resus. They said they'd never seen anything like it with the information I gave them, and took my hand in appreciation for my time to them'.

The trainers described how even participants with considerable knowledge went away with new information after taking part in STORK.

C: 'They were experienced parents, however had never seen ICON, evaluating this for the audit is a really useful way of seeing what parents' knowledge base is and what they can gain from it. Discussed this at great length, including social isolation and coping strategies, parents took a lot from this as could understand having been in that position before. They said this was more useful than anything else in the session'.
When participants received new information that contradicted their existing knowledge, attitudes or cultural practices, this produced a range of reactions, from surprise and acceptance to resistance and dismissal. A recurring topic was the safety of baby products such as sleeping pods and prep machines.

L: 'I explained to them about how (artificial formula feed) prep machines are not recommended, and explained the reasons why… They really didn't seem very open to hearing this and I felt like everything I was saying was falling on deaf ears.… I got the impression they still fully intended on using the prep machines. This felt quite disappointing because I just couldn't seem to get through to them'.

H: 'After we completed the safe sleep and sudden infant death syndrome (SIDS) section mom said "Actually I get it now like why people are telling me not to do some of the stuff like putting her on her side and on the bed with pillows and stuff. I realise now that actually it's really dangerous. I co-slept with all my other kids and used pods. Nobody has told us with our other kids not to use them and you just think stuff is safe."'

There were cases where the trainers reported a change of heart among participants after taking part in the STORK programme. For example, mothers who had no intention to breastfeed later decided to try it, or parents decided not to use certain baby products after being informed about safety concerns.

L: 'I feel quite sure that had I not intervened with STORK, this mum would never have even considered breastfeeding and no one would have encouraged her to try as whenever anyone asked she had said she was not interested. I think that everyone else involved in her care had not had the time to just sit with her for a while to talk it all through, and this is what she needed'.

Practical and emotional support: providing reassurance and empowerment

Aside from filling knowledge gaps, the trainers described how providing practical and emotional support helped to reassure and empower parents along their journey. The trainers gave practical demonstrations of first aid, life support and breastfeeding techniques, as well as offering encouragement and reassurance. Even participants with existing knowledge of the topics covered reported feeling reassured and more confident after refreshing their knowledge and having the opportunity to ask questions and clarify any doubts. One mother had first-hand experience of life support, having previously had to resuscitate one of her children, but was keen to do STORK for reassurance and to refresh her knowledge. The excerpts below reveal the benefits of the dedicated time and counselling approach offered by trainers.

L: 'She herself had also got a one year old child who had previously stopped breathing, and she had had to do basic life support to resuscitate them. Because of this, she was really interested in STORK, I think mostly because she wanted reassurance... For her, I think it wasn't so much about learning from scratch, it was more about reassuring her that she would know what to do if it happened again and giving her more confidence'.

The trainers reflected on the approaches they used to empower parents. This included using a gentle approach and actively listening, especially when dealing with vulnerable participants and those who had been through complicated births.

L: 'What I will take away from this is how important it is to just actively listen, especially when a parent is finding the neonatal unit a particularly difficult experience. I would definitely try to use this approach when speaking to parents feeling a similar way, possibly doing the goal setting with them too as this seemed to be really beneficial here and empowered her to identify her own strengths and coping mechanisms'.
Challenges for the programme

The analysis of trainer reflections highlighted a number of key challenges for the implementation of the STORK programme, including staffing shortages, the need for mental health support for trainers and a lack of cohesion across healthcare staff.

Staffing shortages, compounded by additional pressures as a result of the COVID-19 pandemic, resulted in healthcare staff being overstretched and in additional workload and pressure on the STORK trainers.

C: 'Having reflected on ways to include dads in this situation, until I have support with staffing I'm finding I can't change the situation, and it's unfortunate people are being missed due to these barriers on the unit'.

A general lack of awareness of, and support for, the topics covered by the STORK programme, in particular breastfeeding, among medical staff such as nurses and midwives was highlighted in the reflections. This was attributed to a lack of capacity and training and to regimented feeding practices in the neonatal unit, compounded by staff being overstretched due to COVID-19:

C: 'Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here'.

This resulted in incidents of miscommunication, contradictory information being provided to patients and a general lack of coherence among STORK trainers and other staff.

C: 'I also feel that nurses should be facilitating mom's wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn't really want to. As our unit is not currently BFI (Baby Friendly Initiative) [accredited], I hope that this type of training will support nurses to do this in future'.

The trainers cited difficult experiences when dealing with challenging participants and those with mental health problems. They reported that previous counselling training had been beneficial and that they would have benefitted from further training in this area.

H: 'During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not "just" delivering STORK but also providing support around SG [safeguarding], mental health and well-being'.

C: 'On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. […] The confidence and reassurance is already part of STORK, but the counselling skills and mental health support perhaps not so widely known, this is time consuming and emotionally consuming for staff, I need to ensure that I have some space to reflect as I have not been doing this recently'.
Conclusion
The success of the programme is evident in the positive feedback and in the accounts of changes the programme has effected in the behavioural practices of its participants [7]. Demonstrating behavioural adaptation as a consequence of effective education and empowerment at population level will, however, be the greatest challenge; it will require further study, refinement of the messaging, and repeated evaluation over time. All of this is contingent on getting the messaging, and its delivery, reach and inclusion, right for the local population.
[ "Background of the local neonatal public health programme", "Methods", "Setting", "Data collection", "Data analysis", "Reach and inclusion: reaching parents and carers at the right time and place, with the right tools", "Knowledge: filling knowledge gaps and dealing with conflicting knowledge claims", "Practical and emotional support: providing reassurance and empowerment", "Challenges for the programme", "Strengths and limitations", "The next steps and key challenges" ]
[ "In stretching the boundaries of perinatal care, a public health programme, aimed at reducing the risks associated with infant mortality through education and empowerment to families in parts of the Midlands began in 2017 [5, 6]. This programme [7] (STORK for parents, families and carers) addresses key public health messaging around chief preventable risks associated with infant mortality comprises (a) basic bystander life support education, (b) how to deal with a choking child, (c) recognising signs of illness in baby, (d) safe sleeping advice, (e) breast feeding support and advice, (f) understanding the risks associated with smoking in and after pregnancy, around baby, and smoking cessation advice, (g) managing a crying baby, and signposting to healthy lifestyle and perinatal mental health support in relevant participating regions. Delivery of ‘one’ programme to a parent, family or carer means education on all aspects a) to g) above, with specific focus on areas of greatest need.\nIts focus is to highlight key risks and strategies to reduce the risks around infant mortality in the region, using modern aids such as mobile applications, and linking in to NHS and other relevant online resources. Parents and families with new born babies or those expectant parents, are taught the key components of the programme, using a mobile application [8] and in so doing are informed around risks associated with infant mortality in the region. The programme is modified in participating units, as per available resources, to either offer individualised 1:1 learning with families, group or signposting sessions. STORK is the acronym for Supportive Training Offering Reassurance and Knowledge, and parent feedback on the programme has previously been published. For sessions of education of STORK programme education with parents, family members or carers, staff members were routinely encouraged to be observers. This was part of the overall delivery of the STORK programme at the hospital, to promote awareness amongst staff so that they could better support the parents, family members and carers whilst in hospital. Multiple sessions with parents, families and carers to ensure all aspects were delivered comprise delivery of ‘one programme’. This included follow up conversations with parents, families and carers after discharge from the neonatal unit.\nAnchored at the University of Wolverhampton, Research Institute and Faculty of Science and Engineering, it has been supported in participating local regions variably by individual NHS Trusts, in which the programme has been embedded into discharge services for neonatal units, or through Public Health, Local Maternity and Neonatal Systems and City Councils. The programme focuses on parent, family and carer education and empowerment, utilising NHS and other resources. Its strength lies in the collation of material into a single platform around reducing risks for infant mortality, in a user-friendly mobile application, 1:1 parent, family and carer training, and delivered by support workers in this district general hospital. Its success with parents has been previously described from another local district hospital [5]. Here, in a survey of 218 parents or carers in Wolverhampton, 215 (98.6%) indicated it instilled confidence in them caring for their babies after discharge from the neonatal unit, that they would recommend the programme to others, and that the programme was useful. The remaining three did not comment as English was not their first language. 
Free text feedback from parents included amongst other comments, that ‘it was a great programme’, that ‘all parents should be offered this’, ‘. and we learnt a lot of what we didn’t know’. The intention of the programme is to, over time, work towards behavioural changes that cascade multi-directionally, to friends and relatives, subsequent children, parents and grandparents and carers.", "[SUBTITLE] Setting [SUBSECTION] This project was undertaken at a Dudley Group of Hospitals NHS Trust, local district general hospital catering for approximately 4100 births per year in the Midlands. With the support of its Nurture and Resilience Steering Group, local Council, its local department of Public Health funded two part-time posts for support workers to work within the hospital’s neonatal in-patient setting, providing 1:1 education and empowerment for parents whose babies were admitted to its neonatal services. Additional support where needed for vulnerable families in the paediatric and maternity setting was offered, in delivering the messages around reducing the risks for infant mortality. Vulnerable families included those with a previous neonatal or infant death, recent immigrants or refugees to the UK, mothers under 18 years of age who had declined the family nurse partnership established to support teenage pregnancies, those mothers with learning difficulties, were smokers in pregnancy, experiences substance misuse, domestic violence, mental health issues, where there were safeguarding concerns, or concerns raised by health care professionals during pregnancy or after birth about parenting capabilities such as complex social lives or unsafe sleep practices. Approximately a third of the mothers and families who received the STORK programme in our hospital were classified as vulnerable, and a quarter, ethnic minority.\nThe support workers (STORK Trainers) were employed at the equivalent grade of band 4 nurses at the Trust. They had healthcare backgrounds, were educated to a minimum of degree level, and had previous experience in data capture in research. They were all educators prior to employment with the Hospital and had hands-on experience in patient counselling. A one day STORK programme-specific full day training was delivered at induction into the post, by the developer of the programme (TP) and supported by training for its individual components which are referenced in the app [8] and based on online NHS material for safe sleep, recognising signs of illness, and smoking cessation. Online training resources from the national centre for smoking cessation were utilised. Bystander life support and choking training were based on the algorithm supported by our BLISS our national charity for babies born preterm and ill [9], and St Johns Ambulance guidelines. A 2 day maternity-led breastfeeding training session, supporting the Unicef Baby Friendly Initiative [10], for which the hospital’s maternity services has received full accreditation, was also undertaken. Training to manage the crying baby (ICON) [11] was also included. All of these are referenced in the mobile app. STORK trainers observed an established STORK trainer for around 4 weeks before actively starting own teaching sessions.\nThis project was undertaken at a Dudley Group of Hospitals NHS Trust, local district general hospital catering for approximately 4100 births per year in the Midlands. 
With the support of its Nurture and Resilience Steering Group, local Council, its local department of Public Health funded two part-time posts for support workers to work within the hospital’s neonatal in-patient setting, providing 1:1 education and empowerment for parents whose babies were admitted to its neonatal services. Additional support where needed for vulnerable families in the paediatric and maternity setting was offered, in delivering the messages around reducing the risks for infant mortality. Vulnerable families included those with a previous neonatal or infant death, recent immigrants or refugees to the UK, mothers under 18 years of age who had declined the family nurse partnership established to support teenage pregnancies, those mothers with learning difficulties, were smokers in pregnancy, experiences substance misuse, domestic violence, mental health issues, where there were safeguarding concerns, or concerns raised by health care professionals during pregnancy or after birth about parenting capabilities such as complex social lives or unsafe sleep practices. Approximately a third of the mothers and families who received the STORK programme in our hospital were classified as vulnerable, and a quarter, ethnic minority.\nThe support workers (STORK Trainers) were employed at the equivalent grade of band 4 nurses at the Trust. They had healthcare backgrounds, were educated to a minimum of degree level, and had previous experience in data capture in research. They were all educators prior to employment with the Hospital and had hands-on experience in patient counselling. A one day STORK programme-specific full day training was delivered at induction into the post, by the developer of the programme (TP) and supported by training for its individual components which are referenced in the app [8] and based on online NHS material for safe sleep, recognising signs of illness, and smoking cessation. Online training resources from the national centre for smoking cessation were utilised. Bystander life support and choking training were based on the algorithm supported by our BLISS our national charity for babies born preterm and ill [9], and St Johns Ambulance guidelines. A 2 day maternity-led breastfeeding training session, supporting the Unicef Baby Friendly Initiative [10], for which the hospital’s maternity services has received full accreditation, was also undertaken. Training to manage the crying baby (ICON) [11] was also included. All of these are referenced in the mobile app. STORK trainers observed an established STORK trainer for around 4 weeks before actively starting own teaching sessions.\n[SUBTITLE] Data collection [SUBSECTION] Prior to commencement of the project the trainers were given direction on documentation of reflections around their work, as and when they considered it appropriate, i.e. while in hospital or after multiple sessions with parents, families and carers.The structure of the reflection was free text, intended to capture the following broad themes: (a) what did you learn from your engagement with the parents/families/carers/staff, i.e. what did parents generally know (b) what went well, (c) what could have been done differently, or better. Qualitative data were collected in the form of written reflections and voice memos by the STORK trainers teaching the reducing the risks associated with infant mortality messaging to parents, families and carers. 
Three trainers (occupying two posts) recorded their reflections following sessions they conducted with participants of the STORK programme between January and December 2021.\nAll participant data was anonymised. All data remained confidential. The anonymity of the parents, families and carers who participated was maintained. No identifying information on participants was shared. The written reflections were typed up by the trainers and the voice memos were later transcribed using automated transcription software and edited for accuracy.\nWe selected trainer reflections as the most robust way of capturing perspectives for this study. We considered the option of selecting parents for interview, but felt that greater value could be obtained through larger scale reflections, across a wider mix of patients, and suited our objective better.\nPrior to commencement of the project the trainers were given direction on documentation of reflections around their work, as and when they considered it appropriate, i.e. while in hospital or after multiple sessions with parents, families and carers.The structure of the reflection was free text, intended to capture the following broad themes: (a) what did you learn from your engagement with the parents/families/carers/staff, i.e. what did parents generally know (b) what went well, (c) what could have been done differently, or better. Qualitative data were collected in the form of written reflections and voice memos by the STORK trainers teaching the reducing the risks associated with infant mortality messaging to parents, families and carers. Three trainers (occupying two posts) recorded their reflections following sessions they conducted with participants of the STORK programme between January and December 2021.\nAll participant data was anonymised. All data remained confidential. The anonymity of the parents, families and carers who participated was maintained. No identifying information on participants was shared. The written reflections were typed up by the trainers and the voice memos were later transcribed using automated transcription software and edited for accuracy.\nWe selected trainer reflections as the most robust way of capturing perspectives for this study. We considered the option of selecting parents for interview, but felt that greater value could be obtained through larger scale reflections, across a wider mix of patients, and suited our objective better.\n[SUBTITLE] Data analysis [SUBSECTION] The reflections (written notes and transcription files) were uploaded to Dedoose®, a software application for analysing qualitative and mixed methods research. An experienced social scientist coded the reflections using thematic analysis during two stages [12]. First, we generated initial codes. A second phase of coding was then undertaken involving organising the codes into overall themes and a process of collaboration with the principal researcher, who was a clinician (consultant neonatologist). A combination of inductive and deductive coding was carried out in two stages. 
[SUBTITLE] Data analysis [SUBSECTION] The reflections (written notes and transcription files) were uploaded to Dedoose®, a software application for analysing qualitative and mixed-methods research. An experienced social scientist coded the reflections using thematic analysis in two stages [12]. First, we generated initial codes. A second phase of coding was then undertaken, organising the codes into overall themes in collaboration with the principal researcher, a clinician (consultant neonatologist). A combination of inductive and deductive coding was carried out across the two stages. Involving a clinician with experience in the development of the programme and in parent counselling, together with an experienced social scientist, was intended to ensure optimal derivation of codes and themes.\nFirst, overall categories were identified before the data analysis process: baseline knowledge around basic life support, choking and the shaken baby (ICON, https://iconcope.org/ [9]); and cultural perceptions around safe sleep, breastfeeding, artificial feeding and smoking. The data were analysed line by line and codes were added. The list of codes was then organised and refined, resulting in an initial list of categories and codes (Table 1).\nSecond, the data were analysed for cross-cutting themes across all the categories. This stage of analysis resulted in a final list of four overall themes (reach and inclusion; knowledge; practical and emotional support; and challenges) and sub-themes, visualised in Fig. 1.\n\nTable 1 Summary of initial categories and codes\nBasic knowledge / Basic life support knowledge: Lack of knowledge - life support; Previous knowledge - life support\nBasic knowledge / Breastfeeding benefits knowledge: Artificial feed misperceptions; Breastfeeding - change of perception; Breastfeeding - lack of knowledge (general); Challenges of knowledge gained through online groups; Did not know about weaning; Did not understand benefits for all babies, not only neonatal; Doubts, worries and misinformation; Feeding from the breast perceptions vs. expressing; Knowledge gaps - when baby is hungry/full, positioning; Lack of support and information on feeding from medical staff; Practical and emotional support with breastfeeding/expressing\nCultural practices, beliefs and myths: Breastfeeding cultural practice; If baby gets jaundice, put them by the window; Rocking baby on feet; Sterilising goes against their beliefs; Swaddling to straighten babies bodies; Using pillow to avoid baby getting a flat head; Whiskey on dummy\nICON knowledge: Cry it out method; ICON - lack of knowledge\nProducts and safety: Sleep pods/nests; blankets/pillows; milk preparation machines\nSafe sleep knowledge: Basic safe sleep knowledge; Co-sleeping; Lack of information about multiple sleep options; Safe sleep - lack of knowledge\nSmoking knowledge and perceptions: Did not like vaping; Did not realise difference between vaping and smoking; Need support to quit smoking; Surprised that vaping is classed as non-smoker; Understanding the importance of smoke/vape-free home\nSuccess stories: Positive feedback/success stories; Providing reassurance/refresher\nTraining challenges and opportunities: Additional support to quit smoking; Breastfeeding – reaching mothers earlier and community follow-up; Extend programme to all parents/carers beyond neonatal; Improve awareness among healthcare staff of STORK; Inclusion and reaching dads and other carers; Language barriers - multi-language content and visual aids; Need for counselling/mental health training for staff; Staffing shortages and impacts of COVID-19; Unreliable Wi-Fi connection\n\nFig. 1 Diagrammatic representation of final themes (centre) and sub-themes (peripheral)",
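Dedoose handled the coding itself; however, if coded excerpts were later exported for simple tallying outside the software, the Table 1 grouping maps naturally onto a small data structure. The Python sketch below is illustrative only, assuming a hypothetical export of (excerpt, code) pairs and mapping just a handful of the Table 1 codes to their categories:

```python
from collections import Counter

# Hypothetical mapping of a few Table 1 codes to their categories.
CODE_TO_CATEGORY = {
    "Co-sleeping": "Safe sleep knowledge",
    "Safe sleep - lack of knowledge": "Safe sleep knowledge",
    "Need support to quit smoking": "Smoking knowledge and perceptions",
    "Unreliable Wi-Fi connection": "Training challenges and opportunities",
}

# Hypothetical coded excerpts, e.g. exported from the qualitative analysis software.
coded_excerpts = [
    ("Mum co-slept with her previous children.", "Co-sleeping"),
    ("Parents had never heard of safe sleep guidance.", "Safe sleep - lack of knowledge"),
    ("Dad asked where to get help stopping smoking.", "Need support to quit smoking"),
]

# Count how often each category was applied across the excerpts.
category_counts = Counter(CODE_TO_CATEGORY[code] for _, code in coded_excerpts)
for category, count in category_counts.most_common():
    print(f"{category}: {count}")
```

Counts of this kind are only a reporting convenience; the interpretive work of deriving the four cross-cutting themes remains a manual, collaborative step as described above.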
"The degree of inclusion of different participants in the STORK programme was a theme that emerged from the trainer reflections. This included the challenges of reaching fathers and other carers, as well as ‘time and place’ dimensions – reaching mothers at the right time and opportunities to extend the programme beyond the neonatal unit. The provision of multi-language, digital and audio-visual tools was proposed by the trainers to facilitate greater inclusion. The importance and challenge of engaging fathers and other carers in the programme was highlighted in the practitioner reflections. This represented a particular challenge during the COVID-19 pandemic, since hospital visits were restricted, limiting opportunities to engage with fathers and wider family members.\nL: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.
The trainers highlighted the importance of reaching mothers and families at the right time, especially first-time mothers, to ensure they receive vital information on safe sleep, life support and breastfeeding. At times, trainers working in the neonatal unit reported feeling that they had reached mothers too late when providing breastfeeding support. On the other hand, given the limited time available to support families in the hospital, trainers commented on the opportunity to extend support to normal care as well as into the community, or to establish a STORK support network for breastfeeding mothers.\nC: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.\nLanguage barriers were reported as an obstacle to the successful delivery of the programme across all community groups. The provision of multi-language content and visual aids was mentioned by trainers as a potential solution to overcome this challenge, especially when interpreters are unavailable, and to aid parents with learning difficulties. The trainers made use of digital and audio-visual tools such as mobile apps and videos to facilitate learning. However, they encountered problems with unreliable Wi-Fi connection and access to tablets in the hospital, and cited a need for information leaflets in a range of languages.", "The reflections revealed how the STORK programme supported parents from a range of cultural and socio-economic backgrounds, with different personal circumstances and varying levels of baseline knowledge. The provision of information tailored to the participant helped to fill knowledge gaps. When it came to safe sleep practices, ICON, smoking awareness and knowledge of basic life support for babies, the trainer reflections revealed a wide spectrum of knowledge levels among STORK participants, from no prior knowledge to some knowledge to good knowledge. Even experienced parents may never have received information on these topics, and by asking questions to establish baseline knowledge, the trainers filled in gaps and provided critical information.\nH: ‘When covering safe sleep and SIDs I asked the parents what they already knew, both of them had no knowledge of any safe sleep guidelines.
I found this really surprising, this being their 6th baby! They said that nobody had ever spoken to them about safe sleep and they had heard of cot death but associated this with babies passing away in the night, in their cots’.\nC: ‘Parents appreciated everything, they came from war country and talked about war crimes and violence which they had scars… They especially appreciated the choking and resus. They said they’d never seen anything like it with the information I gave them, and took my hand in appreciation for my time to them’\nThe trainers described how even participants with considerable knowledge went away with new information after taking part in STORK.\nC: ‘They were experienced parents, however had never seen ICON, evaluating this for the audit is a really useful way of seeing what parents’ knowledge base is and what they can gain from it. Discussed this at great length, including social isolation and coping strategies, parents took a lot from this as could understand having been in that position before. They said this was more useful than anything else in the session’.\nWhen participants received new information that contradicted their existing knowledge, attitudes or cultural practices, this produced a range of reactions, from surprise and acceptance to resistance and dismissal. A recurring topic was the safety of baby products such as sleeping pods and formula preparation machines.\nL: ‘I explained to them about how (artificial formula feed) prep machines are not recommended, and explained the reasons why… They really didn’t seem very open to hearing this and I felt like everything I was saying was falling on deaf ears… I got the impression they still fully intended on using the prep machines. This felt quite disappointing because I just couldn’t seem to get through to them’.\nH: ‘After we completed the safe sleep and sudden infant death syndrome (SIDS) section mom said “Actually I get it now like why people are telling me not to do some of the stuff like putting her on her side and on the bed with pillows and stuff. I realise now that actually it’s really dangerous. I co-slept with all my other kids and used pods. Nobody has told us with our other kids not to use them and you just think stuff is safe.”’\nThere were cases where the trainers reported a change of heart among participants after taking part in the STORK programme: for example, mothers who had had no intention to breastfeed later decided to try it, and parents decided not to use certain baby products after being informed about safety concerns.\nL: ‘I feel quite sure that had I not intervened with STORK, this mum would never have even considered breastfeeding and no one would have encouraged her to try as whenever anyone asked she had said she was not interested. I think that everyone else involved in her care had not had the time to just sit with her for a while to talk it all through, and this is what she needed’.", "Aside from filling knowledge gaps, the trainers described how providing practical and emotional support helped to reassure and empower parents along their journey. The trainers gave practical first aid and life support demonstrations and demonstrated breastfeeding techniques, as well as offering encouragement and reassurance. Even participants with existing knowledge of the topics covered reported feeling reassured and more confident after refreshing their knowledge and having the opportunity to ask questions and clarify any doubts. One mother had first-hand experience of life support, since she had previously had to resuscitate one of her children, but was keen to do STORK for reassurance and to refresh her knowledge. The excerpts reveal the benefits of the dedicated time and counselling approach offered by the trainers.\nL: ‘She herself had also got a one year old child who had previously stopped breathing, and she had had to do basic life support to resuscitate them. Because of this, she was really interested in STORK, I think mostly because she wanted reassurance...
For her, I think it wasn’t so much about learning from scratch, it was more about reassuring her that she would know what to do if it happened again and giving her more confidence’.\nThe trainers reflected on the approaches they used to empower parents. These included using a gentle approach and actively listening, especially when dealing with vulnerable participants and those who had been through complicated births.\nL: ‘What I will take away from this is how important it is to just actively listen, especially when a parent is finding the neonatal unit a particularly difficult experience. I would definitely try to use this approach when speaking to parents feeling a similar way, possibly doing the goal setting with them too as this seemed to be really beneficial here and empowered her to identify her own strengths and coping mechanisms’", "The analysis of trainer reflections highlighted a number of key challenges for the implementation of the STORK programme, including staffing shortages, the need for mental health support for trainers and a lack of cohesion across healthcare staff.\nStaffing shortages, compounded by additional pressures as a result of the COVID-19 pandemic, resulted in healthcare staff being overstretched and in additional workload and pressure on the STORK trainers.\nC: ‘Having reflected on ways to include dads in this situation, until I have support with staffing I’m finding I can’t change the situation, and it’s unfortunate people are being missed due to these barriers on the unit’.\nA general lack of awareness of, and support for, the topics covered by the STORK programme, in particular breastfeeding, among medical staff such as nurses and midwives was highlighted in the reflections. This was attributed to a lack of capacity and training and to regimented feeding practices in the neonatal unit, compounded by staff being overstretched due to COVID-19:\nC: ‘Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here’.
This resulted in incidents of miscommunication, contradictory information being provided to patients and a general lack of coherence among STORK trainers and other staff.\nC: ‘I also feel that nurses should be facilitating mom’s wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn’t really want to. As our unit is not currently BFI (Breastfeeding Feeding Initiative) [accredited]. I hope that this type of training will support nurses to do this in future’.\nThe trainers cited difficult experiences when dealing with challenging participants and those with mental health problems. They reported that previous counselling training had been beneficial and that they would have benefitted from further training in this area.\nH: ‘During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not “just” delivering STORK but also providing support around SG [safeguarding], mental health and well-being’.\nC: ‘On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. […] The confidence and reassurance is already part of STORK, but the counselling skills and mental health support perhaps not so widely known, this is time consuming and emotionally consuming for staff, I need to ensure that I have some space to reflect as I have not been doing this recently’.", "One of the limitations of this project is that it was mainly conducted in the post-natal period, with vulnerable families and with families whose babies had been admitted to the neonatal unit.
We believe that this work should be extended to midwifery services, with provision of support in the antenatal period, in the post-delivery period for all women and families with a new baby, and in the community through engagement with all midwives, health visitors and the voluntary sector. A second limitation is that the perspectives here are solely those of the trainers.\nThe strengths of this project are, firstly, that we have compiled the key risks associated with infant mortality in our region into an easy-to-use empowerment package. Secondly, we have utilised social media tools through a mobile platform, which almost all our parents have access to, in a format that is easily understandable and quick to use for new parents who may have leaflet fatigue. Thirdly, we have utilised free-text reflections of key trained workers on their interactions with parents, families and carers. This has enabled us to identify multiple potential avenues for improvement in the programme, based on the four themes identified in the qualitative analysis. These are described in Table 2.\n\nTable 2 Potential avenues for improvement\nReach and inclusion: Devise strategies to promote greater engagement of fathers and other carers in the STORK programme. Consider opportunities to extend the STORK programme beyond the neonatal unit to normal care. Examine opportunities to reach mothers before and after childbirth, during pregnancy as well as through follow-up support in the community, to maximise the impact of the programme. Develop training and informational content in multiple languages and formats (print, digital and audio-visual). Trial different approaches in training delivery (e.g. one-on-one sessions with each parent in certain situations). Improve access to interpreters, where possible with relevant dialects and knowledge of the content being covered in the programme.\nKnowledge: Produce informational materials on topics such as baby product safety, breastfeeding and sleeping practices.\nTraining and capacity building: Seek measures to improve support for the STORK programme among the wider medical wards and staff, such as providing augmented breastfeeding training to midwives and nurses, including information on appropriate signposting and the utility of the mobile app for this purpose. Provide counselling and mental health training for STORK trainers to equip them with the skills for dealing with challenging cases.", "Our focus now is to fine-tune the programme based on the potential areas for improvement that have been identified. A key priority is expanding this into a multilingual format for all to be able to use, and expanding the work to evaluate birth partner perspectives. Our challenges will be how best to promote the programme and app in the antenatal period, and devising strategies for roll-out within the community with general practitioners, public health and health visiting as partners, to ensure that the programme is more uniformly delivered. This is important if we are to allow greater opportunity for any potential impact the programme may have in bringing about behavioural change within the community. As part of the STORK Programme in the region, our added focus must be working towards the objective of peer-to-peer education and ownership of the programme by the community. Enhanced training for STORK trainers, to equip them with skills to deal with more complex interactions, will need to be supported.
In addition, the study findings provide a sound foundation for surveys and focus group discussions evaluating users’ perspectives and experiences, as well as evidence for the utility of the mobile app and for how its use might be expanded." ]
[ "Introduction", "Background of the local neonatal public health programme", "Methods", "Setting", "Data collection", "Data analysis", "Results", "Reach and inclusion: reaching parents and carers at the right time and place, with the right tools", "Knowledge: filling knowledge gaps and dealing with conflicting knowledge claims", "Practical and emotional support: providing reassurance and empowerment", "Challenges for the programme", "Discussion", "Strengths and limitations", "The next steps and key challenges", "Conclusion" ]
[ "Infant mortality is a key indicator of the health of a nation. In the United Kingdom (UK), the vast majority of infant mortality is due to deaths within the neonatal period. Infant mortality ranges from 1.7 deaths per 1000 live births in some parts of the UK, to a high of almost 7 deaths (6.8) per 1000 live births in the Midlands [1]. The Midlands has notoriously held high rates of infant mortality, and there have been numerous reports presented to cabinet and published outlining the key associations with infant mortality in the region. These include smoking, lack of breast feeding, prematurity, low birth weight, smoking, extremes of maternal age at delivery, domestic abuse, congenital anomalies, ethnicity, social deprivation, lack of maternal education, sudden infant death syndrome and infection [2–4].\nIntense child death overview panels, local and regional perinatal mortality reviews, National Mothers and Babies confidential enquiries (MBRRACE Reducing Risk through Audits and Confidential Enquiries) and other supportive agencies addressing potentially avoidable factors relating to medical and social support are regularly and intensely conducted in England. However there has been no analysis of population perspectives (parents, families and carers of those with new babies or those expecting a new baby) around risks associated with infant mortality, and the impact of their carer education and empowerment in the antenatal and postnatal period, in understanding and modifying behaviours in response to their knowledge of these risks.\nOur objective was to qualitatively assess perspectives on education and empowerment around risks for infant mortality in the region in families that were expecting or had just delivered a new baby, so that it could (a) provide a base for local knowledge, (b) inform how better to deliver our messages, (c) and over time, define which aspects to focus on moving forward that would best benefit the local population. We report here on qualitative analysis of the perspectives shared by parents, families and carers, through reflections from trainers within a neonatal public health education and empowerment Programme (STORK Programme) in a local district general hospital in The Midlands.\n[SUBTITLE] Background of the local neonatal public health programme [SUBSECTION] In stretching the boundaries of perinatal care, a public health programme, aimed at reducing the risks associated with infant mortality through education and empowerment to families in parts of the Midlands began in 2017 [5, 6]. This programme [7] (STORK for parents, families and carers) addresses key public health messaging around chief preventable risks associated with infant mortality comprises (a) basic bystander life support education, (b) how to deal with a choking child, (c) recognising signs of illness in baby, (d) safe sleeping advice, (e) breast feeding support and advice, (f) understanding the risks associated with smoking in and after pregnancy, around baby, and smoking cessation advice, (g) managing a crying baby, and signposting to healthy lifestyle and perinatal mental health support in relevant participating regions. Delivery of ‘one’ programme to a parent, family or carer means education on all aspects a) to g) above, with specific focus on areas of greatest need.\nIts focus is to highlight key risks and strategies to reduce the risks around infant mortality in the region, using modern aids such as mobile applications, and linking in to NHS and other relevant online resources. 
Parents and families with new born babies or those expectant parents, are taught the key components of the programme, using a mobile application [8] and in so doing are informed around risks associated with infant mortality in the region. The programme is modified in participating units, as per available resources, to either offer individualised 1:1 learning with families, group or signposting sessions. STORK is the acronym for Supportive Training Offering Reassurance and Knowledge, and parent feedback on the programme has previously been published. For sessions of education of STORK programme education with parents, family members or carers, staff members were routinely encouraged to be observers. This was part of the overall delivery of the STORK programme at the hospital, to promote awareness amongst staff so that they could better support the parents, family members and carers whilst in hospital. Multiple sessions with parents, families and carers to ensure all aspects were delivered comprise delivery of ‘one programme’. This included follow up conversations with parents, families and carers after discharge from the neonatal unit.\nAnchored at the University of Wolverhampton, Research Institute and Faculty of Science and Engineering, it has been supported in participating local regions variably by individual NHS Trusts, in which the programme has been embedded into discharge services for neonatal units, or through Public Health, Local Maternity and Neonatal Systems and City Councils. The programme focuses on parent, family and carer education and empowerment, utilising NHS and other resources. Its strength lies in the collation of material into a single platform around reducing risks for infant mortality, in a user-friendly mobile application, 1:1 parent, family and carer training, and delivered by support workers in this district general hospital. Its success with parents has been previously described from another local district hospital [5]. Here, in a survey of 218 parents or carers in Wolverhampton, 215 (98.6%) indicated it instilled confidence in them caring for their babies after discharge from the neonatal unit, that they would recommend the programme to others, and that the programme was useful. The remaining three did not comment as English was not their first language. Free text feedback from parents included amongst other comments, that ‘it was a great programme’, that ‘all parents should be offered this’, ‘. and we learnt a lot of what we didn’t know’. The intention of the programme is to, over time, work towards behavioural changes that cascade multi-directionally, to friends and relatives, subsequent children, parents and grandparents and carers.\nIn stretching the boundaries of perinatal care, a public health programme, aimed at reducing the risks associated with infant mortality through education and empowerment to families in parts of the Midlands began in 2017 [5, 6]. 
This programme [7] (STORK for parents, families and carers) addresses key public health messaging around chief preventable risks associated with infant mortality comprises (a) basic bystander life support education, (b) how to deal with a choking child, (c) recognising signs of illness in baby, (d) safe sleeping advice, (e) breast feeding support and advice, (f) understanding the risks associated with smoking in and after pregnancy, around baby, and smoking cessation advice, (g) managing a crying baby, and signposting to healthy lifestyle and perinatal mental health support in relevant participating regions. Delivery of ‘one’ programme to a parent, family or carer means education on all aspects a) to g) above, with specific focus on areas of greatest need.\nIts focus is to highlight key risks and strategies to reduce the risks around infant mortality in the region, using modern aids such as mobile applications, and linking in to NHS and other relevant online resources. Parents and families with new born babies or those expectant parents, are taught the key components of the programme, using a mobile application [8] and in so doing are informed around risks associated with infant mortality in the region. The programme is modified in participating units, as per available resources, to either offer individualised 1:1 learning with families, group or signposting sessions. STORK is the acronym for Supportive Training Offering Reassurance and Knowledge, and parent feedback on the programme has previously been published. For sessions of education of STORK programme education with parents, family members or carers, staff members were routinely encouraged to be observers. This was part of the overall delivery of the STORK programme at the hospital, to promote awareness amongst staff so that they could better support the parents, family members and carers whilst in hospital. Multiple sessions with parents, families and carers to ensure all aspects were delivered comprise delivery of ‘one programme’. This included follow up conversations with parents, families and carers after discharge from the neonatal unit.\nAnchored at the University of Wolverhampton, Research Institute and Faculty of Science and Engineering, it has been supported in participating local regions variably by individual NHS Trusts, in which the programme has been embedded into discharge services for neonatal units, or through Public Health, Local Maternity and Neonatal Systems and City Councils. The programme focuses on parent, family and carer education and empowerment, utilising NHS and other resources. Its strength lies in the collation of material into a single platform around reducing risks for infant mortality, in a user-friendly mobile application, 1:1 parent, family and carer training, and delivered by support workers in this district general hospital. Its success with parents has been previously described from another local district hospital [5]. Here, in a survey of 218 parents or carers in Wolverhampton, 215 (98.6%) indicated it instilled confidence in them caring for their babies after discharge from the neonatal unit, that they would recommend the programme to others, and that the programme was useful. The remaining three did not comment as English was not their first language. Free text feedback from parents included amongst other comments, that ‘it was a great programme’, that ‘all parents should be offered this’, ‘. and we learnt a lot of what we didn’t know’. 
The intention of the programme is to, over time, work towards behavioural changes that cascade multi-directionally, to friends and relatives, subsequent children, parents and grandparents and carers.", "In stretching the boundaries of perinatal care, a public health programme, aimed at reducing the risks associated with infant mortality through education and empowerment to families in parts of the Midlands began in 2017 [5, 6]. This programme [7] (STORK for parents, families and carers) addresses key public health messaging around chief preventable risks associated with infant mortality comprises (a) basic bystander life support education, (b) how to deal with a choking child, (c) recognising signs of illness in baby, (d) safe sleeping advice, (e) breast feeding support and advice, (f) understanding the risks associated with smoking in and after pregnancy, around baby, and smoking cessation advice, (g) managing a crying baby, and signposting to healthy lifestyle and perinatal mental health support in relevant participating regions. Delivery of ‘one’ programme to a parent, family or carer means education on all aspects a) to g) above, with specific focus on areas of greatest need.\nIts focus is to highlight key risks and strategies to reduce the risks around infant mortality in the region, using modern aids such as mobile applications, and linking in to NHS and other relevant online resources. Parents and families with new born babies or those expectant parents, are taught the key components of the programme, using a mobile application [8] and in so doing are informed around risks associated with infant mortality in the region. The programme is modified in participating units, as per available resources, to either offer individualised 1:1 learning with families, group or signposting sessions. STORK is the acronym for Supportive Training Offering Reassurance and Knowledge, and parent feedback on the programme has previously been published. For sessions of education of STORK programme education with parents, family members or carers, staff members were routinely encouraged to be observers. This was part of the overall delivery of the STORK programme at the hospital, to promote awareness amongst staff so that they could better support the parents, family members and carers whilst in hospital. Multiple sessions with parents, families and carers to ensure all aspects were delivered comprise delivery of ‘one programme’. This included follow up conversations with parents, families and carers after discharge from the neonatal unit.\nAnchored at the University of Wolverhampton, Research Institute and Faculty of Science and Engineering, it has been supported in participating local regions variably by individual NHS Trusts, in which the programme has been embedded into discharge services for neonatal units, or through Public Health, Local Maternity and Neonatal Systems and City Councils. The programme focuses on parent, family and carer education and empowerment, utilising NHS and other resources. Its strength lies in the collation of material into a single platform around reducing risks for infant mortality, in a user-friendly mobile application, 1:1 parent, family and carer training, and delivered by support workers in this district general hospital. Its success with parents has been previously described from another local district hospital [5]. 
Setting

This project was undertaken at the Dudley Group of Hospitals NHS Trust, a local district general hospital catering for approximately 4100 births per year in the Midlands. With the support of its Nurture and Resilience Steering Group and local Council, the local department of Public Health funded two part-time posts for support workers to work within the hospital’s neonatal in-patient setting, providing 1:1 education and empowerment for parents whose babies were admitted to its neonatal services. Additional support was offered, where needed, to vulnerable families in the paediatric and maternity setting in delivering the messages around reducing the risks for infant mortality. Vulnerable families included those with a previous neonatal or infant death; recent immigrants or refugees to the UK; mothers under 18 years of age who had declined the family nurse partnership established to support teenage pregnancies; mothers with learning difficulties; mothers who smoked in pregnancy; families experiencing substance misuse, domestic violence or mental health issues; and families where there were safeguarding concerns, or concerns raised by health care professionals during pregnancy or after birth about parenting capabilities such as complex social lives or unsafe sleep practices. Approximately a third of the mothers and families who received the STORK programme in our hospital were classified as vulnerable, and a quarter were from an ethnic minority.

The support workers (STORK Trainers) were employed at the equivalent grade of band 4 nurses at the Trust. They had healthcare backgrounds, were educated to a minimum of degree level, and had previous experience in data capture in research. They were all educators prior to employment with the hospital and had hands-on experience in patient counselling. A full-day, STORK programme-specific training session was delivered at induction into the post by the developer of the programme (TP), supported by training for the programme’s individual components, which are referenced in the app [8] and based on online NHS material for safe sleep, recognising signs of illness, and smoking cessation. Online training resources from the national centre for smoking cessation were utilised. Bystander life support and choking training were based on the algorithm supported by BLISS, our national charity for babies born preterm and ill [9], and St John Ambulance guidelines. A two-day maternity-led breastfeeding training session, supporting the Unicef Baby Friendly Initiative [10], for which the hospital’s maternity services have received full accreditation, was also undertaken. Training to manage the crying baby (ICON) [11] was also included. All of these are referenced in the mobile app. STORK trainers observed an established STORK trainer for around 4 weeks before actively starting their own teaching sessions.

Data collection

Prior to commencement of the project, the trainers were given direction on documenting reflections around their work as and when they considered it appropriate, i.e. while in hospital or after multiple sessions with parents, families and carers. The structure of the reflection was free text, intended to capture the following broad themes: (a) what did you learn from your engagement with the parents/families/carers/staff, i.e. what did parents generally know; (b) what went well; and (c) what could have been done differently, or better.
Qualitative data were collected in the form of written reflections and voice memos by the STORK trainers delivering the reducing-the-risks-associated-with-infant-mortality messaging to parents, families and carers. Three trainers (occupying two posts) recorded their reflections following sessions they conducted with participants of the STORK programme between January and December 2021.

All participant data were anonymised and remained confidential. The anonymity of the parents, families and carers who participated was maintained, and no identifying information on participants was shared. The written reflections were typed up by the trainers, and the voice memos were later transcribed using automated transcription software and edited for accuracy.

We selected trainer reflections as the most robust way of capturing perspectives for this study. We considered the option of selecting parents for interview, but felt that greater value could be obtained through larger-scale reflections across a wider mix of patients, and that this suited our objective better.

Data analysis

The reflections (written notes and transcription files) were uploaded to Dedoose®, a software application for analysing qualitative and mixed methods research. An experienced social scientist coded the reflections using thematic analysis in two stages [12]. First, we generated initial codes. A second phase of coding was then undertaken, involving organising the codes into overall themes in collaboration with the principal researcher, a clinician (consultant neonatologist). A combination of inductive and deductive coding was carried out across the two stages.
The involvement of a clinician with experience in the development of the programme and in parent counselling, together with an experienced social scientist, was intended to ensure optimal derivation of codes and themes.

First, overall categories were identified before the data analysis process: baseline knowledge around basic life support, choking training and the shaken baby (ICON, https://iconcope.org/ [9]); and cultural perceptions around safe sleep, breastfeeding, artificial feed and smoking. The data were analysed line-by-line and codes were added. The list of codes was then organised and refined, resulting in an initial list of categories and codes (Table 1).

Second, the data were analysed for cross-cutting themes across all the categories. This stage of analysis resulted in a final list of four overall themes (reach and inclusion; knowledge; practical and emotional support; and challenges) and sub-themes, visualised in Fig. 1.

Table 1 Summary of initial categories and codes
Basic knowledge
  Basic life support knowledge: Lack of knowledge - life support; Previous knowledge - life support
  Breastfeeding benefits knowledge: Artificial feed misperceptions; Breastfeeding - change of perception; Breastfeeding - lack of knowledge (general); Challenges of knowledge gained through online groups; Did not know about weaning; Did not understand benefits for all babies, not only neonatal; Doubts, worries and misinformation; Feeding from the breast perceptions vs. expressing; Knowledge gaps - when baby is hungry/full, positioning; Lack of support and information on feeding from medical staff; Practical and emotional support with breastfeeding/expressing
Cultural practices, beliefs and myths: Breastfeeding cultural practice; If baby gets jaundice, put them by the window; Rocking baby on feet; Sterilising goes against their beliefs; Swaddling to straighten babies’ bodies; Using pillow to avoid baby getting a flat head; Whiskey on dummy
ICON knowledge: Cry it out method; ICON - lack of knowledge
Products and safety: Sleep pods/nests; Blankets/pillows; Milk preparation machines
Safe sleep knowledge: Basic safe sleep knowledge; Co-sleeping; Lack of information about multiple sleep options; Safe sleep - lack of knowledge
Smoking knowledge and perceptions: Did not like vaping; Did not realise difference between vaping and smoking; Need support to quit smoking; Surprised that vaping is classed as non-smoker; Understanding the importance of smoke/vape-free home
Success stories: Positive feedback/success stories; Providing reassurance/refresher
Training challenges and opportunities: Additional support to quit smoking; Breastfeeding – reaching mothers earlier and community follow-up; Extend programme to all parents/carers beyond neonatal; Improve awareness among healthcare staff of STORK; Inclusion and reaching dads and other carers; Language barriers - multi-language content and visual aids; Need for counselling/mental health training for staff; Staffing shortages and impacts of COVID-19; Unreliable Wi-Fi connection

Fig. 1 Diagrammatic representation of final themes (centre) and sub-themes (peripheral)
expressing\nKnowledge gaps - when baby is hungry/full, positioning\nLack of support and information on feeding from medical staff\nPractical and emotional support with breastfeeding/expressing\nBreastfeeding cultural practice\nIf baby gets jaundice, put them by the window\nRocking baby on feet\nSterilising goes against their beliefs\nSwaddling to straighten babies bodies\nUsing pillow to avoid baby getting a flat head\nWhiskey on dummy\nCry it out method\nICON - Lack of knowledge\nsleep pods/nests\nblankets/pillows\nmilk preparation machines\nBasic safe sleep knowledge\nCo-sleeping\nLack of information about multiple sleep options\nSafe sleep - lack of knowledge\nDid not like vaping\nDid not realise difference between vaping and smoking\nNeed support to quit smoking\nSurprised that vaping is classed as non-smoker\nUnderstanding the importance of smoke/vape-free home\nPositive feedback/success stories\nProviding reassurance/refresher\nAdditional support to quit smoking\nBreastfeeding – reaching mothers earlier and community follow-up\nExtend programme to all parents/carers beyond neonatal\nImprove awareness among healthcare staff of STORK\nInclusion and reaching dads and other carers\nLanguage barriers - multi-language content and visual aids\nNeed for counselling/mental health training for staff\nStaffing shortages and impacts of COVID-19\nUnreliable Wi-Fi connection\n\nFig. 1Diagrammatic representation of final themes (centre) and sub-themes (peripheral)\n\nDiagrammatic representation of final themes (centre) and sub-themes (peripheral)", "During this time period 323 STORK programmes (i.e. distinct sessions of education with either individual parents, or in with parents and other family members such as grandparents, or carers) were offered to parents, families and carers at the Dudley Group of Hospitals NHS Trust. As this was offered during the COVID-19 period, the focus of the education and empowerment packages was heavily weighted towards parents, who had access to the hospital’s neonatal unit. A total of 524 individuals comprising 309 mothers, 198 fathers and 17 other family members including foster carers were included in the training programme during this time period.\nIn addition 29 staff members and students observed these sessions of education. These included including neonatal staff nurses, senior sisters, student nurses including trainee and nursing associates, doctors as well as interpreters. 12 declined the STORK programme.\nThe Trainers recorded 167 reflective entries during this time period. Of these, four overall themes were identified from the data: (a) reach and inclusion; (b) knowledge; (c) practical and emotional support; and (d) challenges. Each of these four themes is discussed below, covering different aspects of the STORK programme including basic life support, safe sleep, breastfeeding, smoking and infant crying. Illustrative comments were taken verbatim from the practitioner notes. The initial at the start of each quote symbolises the practitioner who made the comment (C, L or H).\n[SUBTITLE] Reach and inclusion: reaching parents and carers at the right time and place, with the right tools [SUBSECTION] The degree of inclusion of different participants in the STORK programme was a theme that emerged from trainer reflections. This included the challenges of reaching fathers and other carers, as well as ‘time and place’ dimensions – reaching mothers at the right time and opportunities to extend the programme beyond the neonatal unit. 
The provision of multi-language and digital and audio-visual tools was proposed by the trainers to facilitate greater inclusion. The importance and challenge of engaging fathers and other carers in the programme was highlighted in the practitioner reflections. This represented a particular challenge during the COVID-19 pandemic since hospital visits were restricted, limiting opportunities to engage with fathers and wider family members.L: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.\nL: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.\nThe trainers highlight the importance of reaching mothers and families at the right time, especially first-time mothers, to ensure they receive vital information on safe sleep, life support and breastfeeding. At times, trainers working in the neonatal unit reported feeling they had reached the mothers too late when providing breastfeeding support. On the other hand, due to the limited time to provide support to families in the hospital, trainers commented on the opportunity to extend support to normal care as well as into the community or establishing a STORK support network for breastfeeding mothers.C: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.\nC: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.\nLanguage barriers were reported as a barrier to the successful delivery of the programme across all community groups. 
The provision of multi-language content and visual aids was mentioned by trainers as a potential solution to overcome this challenge, especially when interpreters are unavailable, and to aid parents with learning difficulties. The trainers make use of digital and audio-visual tools such as mobile apps and videos to facilitate learning. However, they encountered problems with a lack of reliable Wi-Fi connection and access to tablets in the hospital and cited a need for information leaflets in a range of languages.\nThe degree of inclusion of different participants in the STORK programme was a theme that emerged from trainer reflections. This included the challenges of reaching fathers and other carers, as well as ‘time and place’ dimensions – reaching mothers at the right time and opportunities to extend the programme beyond the neonatal unit. The provision of multi-language and digital and audio-visual tools was proposed by the trainers to facilitate greater inclusion. The importance and challenge of engaging fathers and other carers in the programme was highlighted in the practitioner reflections. This represented a particular challenge during the COVID-19 pandemic since hospital visits were restricted, limiting opportunities to engage with fathers and wider family members.L: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.\nL: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.\nThe trainers highlight the importance of reaching mothers and families at the right time, especially first-time mothers, to ensure they receive vital information on safe sleep, life support and breastfeeding. At times, trainers working in the neonatal unit reported feeling they had reached the mothers too late when providing breastfeeding support. On the other hand, due to the limited time to provide support to families in the hospital, trainers commented on the opportunity to extend support to normal care as well as into the community or establishing a STORK support network for breastfeeding mothers.C: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. 
I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.\nC: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.\nLanguage barriers were reported as a barrier to the successful delivery of the programme across all community groups. The provision of multi-language content and visual aids was mentioned by trainers as a potential solution to overcome this challenge, especially when interpreters are unavailable, and to aid parents with learning difficulties. The trainers make use of digital and audio-visual tools such as mobile apps and videos to facilitate learning. However, they encountered problems with a lack of reliable Wi-Fi connection and access to tablets in the hospital and cited a need for information leaflets in a range of languages.\n[SUBTITLE] Knowledge: filling knowledge gaps and dealing with conflicting knowledge claims [SUBSECTION] The reflections revealed how the STORK programme supported parents from a range of cultural and socio-economic backgrounds, with different personal circumstances and varying levels of baseline knowledge. The provision of information tailored to the participant helped to fill knowledge gaps. When it came to safe sleep practices, ICON, smoking awareness and knowledge of basic life support for babies, the trainer reflections revealed a wide spectrum of knowledge levels among STORK participants, from no prior knowledge to some knowledge to good knowledge. Even experienced parents may have never received information on these topics and by asking questions to establish baseline knowledge, the trainers filled in gaps and provided critical information.H: ‘When covering safe sleep and SIDs I asked the parents what they already knew, both of them had no knowledge of any safe sleep guidelines. I found this really surprising this being their 6th baby! They said that nobody had ever spoken to them about safe sleep and they had heard of cot death but associated this with babies passing away in the night, in their cots’.C: ‘Parents appreciated everything, they came from war country and talked about war crimes and violence which they had scars… They especially appreciated the choking and resus. They said they’d never seen anything like it with the information I gave them, and took my hand in appreciation for my time to them’\nH: ‘When covering safe sleep and SIDs I asked the parents what they already knew, both of them had no knowledge of any safe sleep guidelines. I found this really surprising this being their 6th baby! 
They said that nobody had ever spoken to them about safe sleep and they had heard of cot death but associated this with babies passing away in the night, in their cots’.\nC: ‘Parents appreciated everything, they came from war country and talked about war crimes and violence which they had scars… They especially appreciated the choking and resus. They said they’d never seen anything like it with the information I gave them, and took my hand in appreciation for my time to them’\nThe trainers described how even participants with considerable knowledge went away with new information after taking part in STORK.C: ‘They were experienced parents, however had never seen ICON, evaluating this for the audit is a really useful way of seeing what parents’ knowledge base is and what they can gain from it. Discussed this at great length, including social isolation and coping strategies, parents took a lot from this as could understand having been in that position before. They said this was more useful than anything else in the session’.\nC: ‘They were experienced parents, however had never seen ICON, evaluating this for the audit is a really useful way of seeing what parents’ knowledge base is and what they can gain from it. Discussed this at great length, including social isolation and coping strategies, parents took a lot from this as could understand having been in that position before. They said this was more useful than anything else in the session’.\nWhen participants received new information that contradicted their existing knowledge, attitudes or cultural practices, this produced a range of reactions - from surprise and acceptance to resistance and dismissal. A reoccurring topic was the safety of baby products such as sleeping pods and prep machines.L: ‘I explained to them about how (artificial formula feed) prep machines are not recommended, and explained the reasons why… They really didn’t seem very open to hearing this and I felt like everything I was saying was falling on deaf ears.… I got the impression they still fully intended on using the prep machines. This felt quite disappointing because I just couldn’t seem to get through to them’.H: ‘After we completed the safe sleep and sudden infant death syndrome (SIDS) section mom said “Actually I get it now like why people are telling me not to do some of the stuff like putting her on her side and on the bed with pillows and stuff. I realise now that actually it’s really dangerous. I co-slept with all my other kids and used pods. Nobody has told us with our other kids not to use them and you just think stuff is safe.”’\nL: ‘I explained to them about how (artificial formula feed) prep machines are not recommended, and explained the reasons why… They really didn’t seem very open to hearing this and I felt like everything I was saying was falling on deaf ears.… I got the impression they still fully intended on using the prep machines. This felt quite disappointing because I just couldn’t seem to get through to them’.\nH: ‘After we completed the safe sleep and sudden infant death syndrome (SIDS) section mom said “Actually I get it now like why people are telling me not to do some of the stuff like putting her on her side and on the bed with pillows and stuff. I realise now that actually it’s really dangerous. I co-slept with all my other kids and used pods. 
There were cases when the trainers reported a change of heart among participants after taking part in the STORK programme, for example mothers who had no intention to breastfeed but later decided to try it, or parents who decided not to use certain baby products after being informed about safety concerns.

L: ‘I feel quite sure that had I not intervened with STORK, this mum would never have even considered breastfeeding and no one would have encouraged her to try as whenever anyone asked she had said she was not interested. I think that everyone else involved in her care had not had the time to just sit with her for a while to talk it all through, and this is what she needed’.
Practical and emotional support: providing reassurance and empowerment

Aside from filling knowledge gaps, the trainers described how providing practical and emotional support helped to reassure and empower parents along their journey. The trainers gave practical demonstrations of first aid, life support and breastfeeding techniques, as well as offering encouragement and reassurance. Even participants with existing knowledge of the topics covered reported feeling reassured and more confident after refreshing their knowledge and having the opportunity to ask questions and clarify any doubts. One mother had first-hand experience of life support, having previously had to resuscitate one of her children, but was keen to do STORK for reassurance and to refresh her knowledge. The excerpts reveal the benefits of the dedicated time and counselling approach offered by the trainers.

L: ‘She herself had also got a one year old child who had previously stopped breathing, and she had had to do basic life support to resuscitate them. Because of this, she was really interested in STORK, I think mostly because she wanted reassurance... For her, I think it wasn’t so much about learning from scratch, it was more about reassuring her that she would know what to do if it happened again and giving her more confidence’.

The trainers also reflected on the approaches they used to empower parents. These included using a gentle approach and actively listening, especially when dealing with vulnerable participants and those who had been through complicated births.

L: ‘What I will take away from this is how important it is to just actively listen, especially when a parent is finding the neonatal unit a particularly difficult experience. I would definitely try to use this approach when speaking to parents feeling a similar way, possibly doing the goal setting with them too as this seemed to be really beneficial here and empowered her to identify her own strengths and coping mechanisms’.
I would definitely try to use this approach when speaking to parents feeling a similar way, possibly doing the goal setting with them too as this seemed to be really beneficial here and empowered her to identify her own strengths and coping mechanisms’\n[SUBTITLE] Challenges for the programme [SUBSECTION] The analysis of trainer reflections highlighted a number of key challenges for the implementation of the STORK programme, including staffing shortages, the need for mental health support for trainers and lack of cohesion across healthcare staff.\nStaffing shortages, compounded by additional pressures as a result of the COVID-19 pandemic, resulted in healthcare staff being overstretched and additional workload and pressure on the STORK trainers.C: ‘Having reflected on ways to include dads in this situation, until I have support with staffing I’m finding I can’t change the situation, and it’s unfortunate people are being missed due to these barriers on the unit’.\nC: ‘Having reflected on ways to include dads in this situation, until I have support with staffing I’m finding I can’t change the situation, and it’s unfortunate people are being missed due to these barriers on the unit’.\nA general lack of awareness and support on the topics covered by the STORK programme, in particular breastfeeding, among medical staff such as nurses and midwives was highlighted in the reflections. This was attributed to a lack of capacity and training and regimented feeding practices in the neonatal unit, compounded by being overstretched due to COVID-19:C: ‘Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here’.\nC: ‘Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here’.\nThis resulted in incidents of miscommunication, contradictory information being provided to patients and a general lack of coherence among STORK trainers and other staff.C: ‘I also feel that nurses should be facilitating mom’s wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn’t really want to. As our unit is not currently BFI (Breastfeeding Feeding Initiative) [accredited]. I hope that this type of training will support nurses to do this in future’.\nC: ‘I also feel that nurses should be facilitating mom’s wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn’t really want to. As our unit is not currently BFI (Breastfeeding Feeding Initiative) [accredited]. I hope that this type of training will support nurses to do this in future’.\nThe trainers cited difficult experiences when dealing with challenging participants and those with mental health problems. They report that previous counselling training had been beneficial and that they would have benefitted from further training in this area.H: ‘During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. 
She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not “just” delivering STORK but also providing support around SG [safeguarding], mental health and well-being’.C: ‘On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. […] The confidence and reassurance is already part of STORK, but the counselling skills and mental health support perhaps not so widely known, this is time consuming and emotionally consuming for staff, I need to ensure that I have some space to reflect as I have not been doing this recently’.\nH: ‘During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not “just” delivering STORK but also providing support around SG [safeguarding], mental health and well-being’.\nC: ‘On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. […] The confidence and reassurance is already part of STORK, but the counselling skills and mental health support perhaps not so widely known, this is time consuming and emotionally consuming for staff, I need to ensure that I have some space to reflect as I have not been doing this recently’.\nThe analysis of trainer reflections highlighted a number of key challenges for the implementation of the STORK programme, including staffing shortages, the need for mental health support for trainers and lack of cohesion across healthcare staff.\nStaffing shortages, compounded by additional pressures as a result of the COVID-19 pandemic, resulted in healthcare staff being overstretched and additional workload and pressure on the STORK trainers.C: ‘Having reflected on ways to include dads in this situation, until I have support with staffing I’m finding I can’t change the situation, and it’s unfortunate people are being missed due to these barriers on the unit’.\nC: ‘Having reflected on ways to include dads in this situation, until I have support with staffing I’m finding I can’t change the situation, and it’s unfortunate people are being missed due to these barriers on the unit’.\nA general lack of awareness and support on the topics covered by the STORK programme, in particular breastfeeding, among medical staff such as nurses and midwives was highlighted in the reflections. This was attributed to a lack of capacity and training and regimented feeding practices in the neonatal unit, compounded by being overstretched due to COVID-19:C: ‘Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. 
This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here’.\nC: ‘Mom really appreciated this as no feeding support given by postnatal midwives, midwives currently running at ratio 1:9 because of sickness and self-isolating due to COVID-19 and no ward breastfeeding support. This is a real barrier to parents at the moment and I feel my impact is making a real difference to these moms feeding experiences as seen here’.\nThis resulted in incidents of miscommunication, contradictory information being provided to patients and a general lack of coherence among STORK trainers and other staff.C: ‘I also feel that nurses should be facilitating mom’s wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn’t really want to. As our unit is not currently BFI (Breastfeeding Feeding Initiative) [accredited]. I hope that this type of training will support nurses to do this in future’.\nC: ‘I also feel that nurses should be facilitating mom’s wishes if they ask for support to breastfeed/express/pump and should not just decide they feel mom doesn’t really want to. As our unit is not currently BFI (Breastfeeding Feeding Initiative) [accredited]. I hope that this type of training will support nurses to do this in future’.\nThe trainers cited difficult experiences when dealing with challenging participants and those with mental health problems. They report that previous counselling training had been beneficial and that they would have benefitted from further training in this area.H: ‘During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not “just” delivering STORK but also providing support around SG [safeguarding], mental health and well-being’.C: ‘On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. […] The confidence and reassurance is already part of STORK, but the counselling skills and mental health support perhaps not so widely known, this is time consuming and emotionally consuming for staff, I need to ensure that I have some space to reflect as I have not been doing this recently’.\nH: ‘During the session Nan started to open up about her past, being a victim of long term DV [domestic violence] herself. She expressed how hard she found it watching her daughter go through the same things. […] This situation highlighted the importance of our role not only for parents, but also the people who surround the babies we look after. It also highlights the amount of input we have not “just” delivering STORK but also providing support around SG [safeguarding], mental health and well-being’.\nC: ‘On reflection I feel counselling skills are so important for this role, nearly every parent I work with has huge amounts of emotional anxiety and this has to be unpicked before learning can take place. 
", "The degree of inclusion of different participants in the STORK programme was a theme that emerged from trainer reflections. This included the challenges of reaching fathers and other carers, as well as ‘time and place’ dimensions – reaching mothers at the right time and opportunities to extend the programme beyond the neonatal unit. The provision of multi-language and digital and audio-visual tools was proposed by the trainers to facilitate greater inclusion. The importance and challenge of engaging fathers and other carers in the programme was highlighted in the practitioner reflections. This represented a particular challenge during the COVID-19 pandemic since hospital visits were restricted, limiting opportunities to engage with fathers and wider family members.L: ‘It was really useful to have been able to do it with the aunt, as the dad was only coming in to get the baby and would not have stayed to do STORK, meaning nobody would have had STORK if I hadn’t done it with the aunt. I think it also highlighted how STORK is not just for parents; it can be really useful for any member of the family who will be helping to look after baby. As this aunt also helped to look after other babies in the family, what she learned from STORK will not just have benefitted this baby but also probably many other babies in their family, which I feel is a really positive outcome’.
The trainers highlight the importance of reaching mothers and families at the right time, especially first-time mothers, to ensure they receive vital information on safe sleep, life support and breastfeeding. At times, trainers working in the neonatal unit reported feeling they had reached the mothers too late when providing breastfeeding support. On the other hand, due to the limited time to provide support to families in the hospital, trainers commented on the opportunity to extend support to normal care as well as into the community or establishing a STORK support network for breastfeeding mothers.C: ‘[T]his mom said that she had never had this type of information before with any of her children as she’s normally “kicked out quickly” after birth and she’s never actively signed up to any classes. I felt that this really captured the importance of continuing with this service in the longer term so that we catch all first time moms, that way that education is following them through their subsequent babies and beyond and making differences to peoples’ lives, not in the same way as people being motivated to do it themselves as many aren’t, especially the most vulnerable’.
Language barriers were reported as a barrier to the successful delivery of the programme across all community groups. The provision of multi-language content and visual aids was mentioned by trainers as a potential solution to overcome this challenge, especially when interpreters are unavailable, and to aid parents with learning difficulties. The trainers make use of digital and audio-visual tools such as mobile apps and videos to facilitate learning. However, they encountered problems with a lack of reliable Wi-Fi connection and access to tablets in the hospital and cited a need for information leaflets in a range of languages.", "The reflections revealed how the STORK programme supported parents from a range of cultural and socio-economic backgrounds, with different personal circumstances and varying levels of baseline knowledge. The provision of information tailored to the participant helped to fill knowledge gaps. When it came to safe sleep practices, ICON, smoking awareness and knowledge of basic life support for babies, the trainer reflections revealed a wide spectrum of knowledge levels among STORK participants, from no prior knowledge to some knowledge to good knowledge. Even experienced parents may have never received information on these topics and by asking questions to establish baseline knowledge, the trainers filled in gaps and provided critical information.H: ‘When covering safe sleep and SIDs I asked the parents what they already knew, both of them had no knowledge of any safe sleep guidelines. I found this really surprising this being their 6th baby! They said that nobody had ever spoken to them about safe sleep and they had heard of cot death but associated this with babies passing away in the night, in their cots’.C: ‘Parents appreciated everything, they came from war country and talked about war crimes and violence which they had scars… They especially appreciated the choking and resus. They said they’d never seen anything like it with the information I gave them, and took my hand in appreciation for my time to them’
The trainers described how even participants with considerable knowledge went away with new information after taking part in STORK.C: ‘They were experienced parents, however had never seen ICON, evaluating this for the audit is a really useful way of seeing what parents’ knowledge base is and what they can gain from it. Discussed this at great length, including social isolation and coping strategies, parents took a lot from this as could understand having been in that position before. They said this was more useful than anything else in the session’.
When participants received new information that contradicted their existing knowledge, attitudes or cultural practices, this produced a range of reactions - from surprise and acceptance to resistance and dismissal. A reoccurring topic was the safety of baby products such as sleeping pods and prep machines.L: ‘I explained to them about how (artificial formula feed) prep machines are not recommended, and explained the reasons why… They really didn’t seem very open to hearing this and I felt like everything I was saying was falling on deaf ears.… I got the impression they still fully intended on using the prep machines. This felt quite disappointing because I just couldn’t seem to get through to them’.H: ‘After we completed the safe sleep and sudden infant death syndrome (SIDS) section mom said “Actually I get it now like why people are telling me not to do some of the stuff like putting her on her side and on the bed with pillows and stuff. I realise now that actually it’s really dangerous. I co-slept with all my other kids and used pods. Nobody has told us with our other kids not to use them and you just think stuff is safe.”’
There were cases when the trainers reported a change of heart among participants after taking part in the STORK programme. For example, mothers who had no intention to breastfeed who later decided to try it, or parents who decided not to use certain baby products after being informed about safety concerns.L: ‘I feel quite sure that had I not intervened with STORK, this mum would never have even considered breastfeeding and no one would have encouraged her to try as whenever anyone asked she had said she was not interested. I think that everyone else involved in her care had not had the time to just sit with her for a while to talk it all through, and this is what she needed’.
", "This novel study provides valuable insights into the current level of local understanding, limitations and challenges in key neonatal and perinatal public health messages around reducing risks for mortality in babies and infants, i.e. promoting a healthy baby and infant. We did this through trainer reflections of three STORK trainers, who had undertaken multiple STORK programme educational sessions with parents, family members and carers, with additional staff members observing. This work highlights the knowledge, successes and challenges of implementing a neonatal and infant public health STORK programme at a local district general hospital in The West Midlands.
Its success with parents has been previously described from another local district hospital [5], as well as from participant evaluations, which were uniformly positive and are therefore not additionally described in this report.
Key themes and codes identified in our project (Table 1) reveal that there is still much work to be done to educate our local population around reducing risks, specifically breast feeding, safe sleeping, bystander life support, coping with a crying baby, and the risks of smoking in and after pregnancy. It is important to note that the messages we offer as part of the reducing the risk programme are not restricted to the STORK Programme, but are part of multiple NHS initiatives aimed at improving outcomes for neonates, infants and children in the UK, such as the NHS Maternity Neonatal Safety Improvement Programme (promoting smoke free homes, breast feeding, early recognition of illness in baby) [10], the Maternity Transformation Programme [11], Implementing Better Births [12], The 0–19 Healthy Child Programme [13], the ICON programme, Saving Babies Lives Care Bundle [14], the NHS Long Term Care Plan [15] and NHS Core20PLUS5 [16], and are partly incorporated into midwifery support for families. These programmes have been in place in different formats over the years. The STORK Programme utilises these NHS resources and brings together components that target the key risks associated with infant mortality.
It is apparent from our results that despite these varied modalities of delivery to the local population, including midwifery, health visiting, neonatal, paediatric and GP services, the messages have not consistently reached the childbearing population attending the local hospital Trust. Lack of cohesive messaging and limited use of modern media, coupled with the pressure of work within the NHS, especially in the midwifery and health visiting sectors, may be responsible for this. These gaps could partly be addressed through neonatal and perinatal public health programmes such as the STORK Programme. Such programmes need to be expanded to the entire region, and not just to families whose babies are resident on the neonatal unit.
The value of the programme is evident in the 1:1 support, dedicated time and active listening provided to mothers and families by the STORK trainers. Utilising their previous experience in counselling, the trainers are skilled in enabling positive experiences for the participants, using critical thinking, emotional intelligence and thought exploration.
The utility of electronic mobile platforms for training health care workers in developing economies has been reviewed and has diverse potential [17]. There are no equivalent composite mobile applications for reducing the risks for infant mortality reported in the literature; however, the individual components of our reducing risks for infant mortality programme are key parts of many health systems internationally [19, 20], including our own [12–18]. Empowering society through education and training using mobile applications, with readily available resources for parent, family or carer self-learning and updates, appears to be positively received in our region, with short-term gains as described [5, 7]. 
Its benefit potentially lies in education and empowering in a multidirectional method, from parents, to grandparents and other carers of the generation before, and cascaded through discussions with friends and other family members, for the benefit of current and future generations, using modern media that is easily adopted and suited to the lifestyles of our local population. It is recognised that short term statistical reductions in infant mortality are unrealistic, considering that the associations with it span across socio-cultural, economic and generational divides. Reduction in infant mortality risk is the primary objective of the STORK programme. This perspective has been borne out in the observations from a nurse led home visiting programme, The Ohio Infant Mortality Reduction Initiative [21]. Here, this could not singlehandedly make a difference to infant mortality rates, but did contribute to prevention of some infant mortality risks.\nThis public health programme, utilising an easily accessible, free online mobile application unifies the public health and NHS messages important to families with a new or expectant baby and support for this is being continued through the regions’ Public Health and City Council initiatives. While further development is contingent on an effective and potentially expanded workforce, we have demonstrated through this and other local work [5, 6] that the education, empowerment and support for families to understand the risks associated with infant mortality, do not necessarily need to be delivered by highly banded nurses or doctors. Community engagement, utility of peer support workers and trainers are likely to be important in delivery of these messages, and we believe should be actively supported by local units in the region.\nInfant mortality is a health inequality, as some of the risks associated with infant mortality can be linked to social inequality. Ethnicity, lack of maternal education, coupled with social deprivation mean that emphasis on providing the material in the appropriate language and at the correct pitch for families in our region is a key priority going forward [16]. Provision of appropriate resources, ensuring that the ‘reach and inclusion’ of the programme is appropriate, is critical. Utility of modern mobile applications instead of leaflets, inclusion of partners and staff in the education and empowerment package are necessary components if we are to take this public health initiative going forwards. This includes spreading the message to the population upstream of pregnancy itself. Work towards development of a senior schools programme is underway in this regard [7].\n[SUBTITLE] Strengths and limitations [SUBSECTION] One of the limitations of this project is that it was mainly conducted in the post-natal period for vulnerable families and those families who had babies admitted in the neonatal unit. We believe that this work should be extended to midwifery services and provision of support in the antenatal period, the post delivery period for all women and families with a new baby and the community through engagement with all midwives, health visitors and the voluntary sector. A second limitation is that the perspectives here are solely from the trainers.\nThe strengths of this project are firstly that we have compiled key risks based on the associations with infant mortality in our region into an easy to use empowerment package. 
Secondly that we have utilised social media tools through a mobile platform, which almost all our parents have access to, and in a format that is easily understandable and quick to use for new parents who may have leaflet-fatigue. Thirdly, we have utilised reflections of key trained workers on their interactions with parents, families and carers in a free text style. This has enabled us to identify multiple, potential avenues for improvement in the programme, based on the four themes identified in the qualitative analysis. These are described in Table 2.
Table 2  Potential avenues for improvement
Reach and inclusion: Devise strategies to promote greater engagement of fathers and other carers in the STORK programme. Consider opportunities to extend the STORK programme beyond the neonatal unit to normal care. Examine opportunities to reach mothers before and after childbirth, during pregnancy as well as follow-up support in the community, to maximise the impact of the programme. Develop training and informational content in multiple languages and formats (print, digital and audio-visual). Trial different approaches in training delivery (e.g. one-on-one sessions with each parent in certain situations). Improve access to interpreters, where possible with relevant dialects and knowledge of the content being covered in the programme.
Knowledge: Produce informational materials on topics such as baby product safety, breastfeeding and sleeping practices.
Training and capacity building: Seek measures to improve support for the STORK programme among the wider medical wards and staff, such as providing augmented breastfeeding training to midwives and nurses, including information on appropriate signposting and utility of the mobile app for this purpose. Provide counselling and mental health training for STORK trainers to equip them with the skills for dealing with challenging cases.
[SUBTITLE] The next steps and key challenges [SUBSECTION] 
Our focus now is to fine tune the programme based on potential areas for improvement that have been identified. A key priority is expanding this into a multilingual format for all to be able to use, and expand the work to evaluate birth partner perspectives. 
Our challenges will be how best to promote the programme and app in the antenatal period, devising strategies for roll out within the community with general practitioners, public health and health visiting as partners, to ensure that the programme is more uniformly delivered. This is important if we are to allow greater opportunity for any potential impact the programme may have in bringing about behavioural change within the community. As part of the STORK Programme in the region, our added focus must be working towards the objective of peer to peer education and ownership of the programme by the community. Enhanced training for STORK trainers to equip them with skills to deal with more complex interactions will need to be supported. In addition the study findings here provide a sound foundation for surveys and focus group discussions evaluating users’ perspectives and experiences, as well as evidence for utility and how to expand utility of the mobile app.", "The success of the programme is evident in the positive feedback and success stories of changes that the programme has effected in behavioural practices in its participants [7]. But behavioural adaptation as a consequence of effective education and empowerment will be the greatest challenge in defining at population level, and will require further study, followed by refinement of messaging and further study over time. All of this is contingent on getting the messaging, its delivery, reach and inclusion appropriate for the local population." ]
[ "introduction", null, null, null, null, null, "results", null, null, null, null, "discussion", null, null, "conclusion" ]
[ "Parent education", "Parent empowerment", "Infant mortality", "Reducing risks", "Neonatal public health" ]
Exploring the psychological and religious perspectives of cancer patients and their future financial planning: a Q-methodological approach.
36253745
Cancer patients are often hesitant to talk about their mental health, their religious beliefs regarding the disease, and the financial issues that drain them physically and psychologically, but this taboo needs to be broken to understand patients' perceptions and behaviours. Previous studies identified many psychological factors that bother cancer patients. However, new elements affecting their mental and physical health still need to be explored, and new coping strategies introduced, to address patients' concerns.
BACKGROUND
The current study aims to identify cancer patients' perceived attitudes towards the severity of their illness, to understand their fears, their tendency to turn towards religion to overcome the disease, and their future financial planning, using a Q-methodological approach. Data were collected in three steps from January to June 2020, and 51 cancer patients participated in the final stage of Q-sorting.
METHODS
The findings of the study are based on the principal component factor analysis that highlighted three essential factors: (1) feelings, (2) religious beliefs about the acceptance of death, and (3) their future personal and financial planning. Further, the analysis shows that the patients differ in their beliefs, causes and support that they received as a coping mechanism.
RESULTS
This study explains cancer patients' psychological discomfort and physical pain but cannot relate them to co-morbidities. Q methodology allows the contextualization of their thoughts and future planning in different sets, such as acceptance of death, combating the disease with religion's help, and sharing experiences through various platforms. This study will help health professionals derive new coping strategies for treating patients and will help financial managers design insurance policies that share patients' financial burdens.
CONCLUSION
[ "Adaptation, Psychological", "Fear", "Humans", "Mental Health", "Neoplasms", "Religion" ]
9578276
null
null
null
null
Results
The sample characteristics, which varied within the groups, are shown in Table 1. Of the 51 respondents, 55% were female, 57% were married, and the dominant age group was 36 to 45 (25%). Breast cancer is the most prevalent type of cancer among the study participants. All cancer types mentioned in the table prevail in Pakistan [31], but the number of patients with breast cancer is the highest of all [33]. Most of the participants are employed (51%) and have completed at least their college education (47%). The remaining characteristics of the respondents are shown in Table 1.
Table 1  Patients Demographics (Cancer Patients, n = 51), n (%)
Gender: Male 23 (45); Female 28 (55)
Marital Status: Single 14 (27); Married 29 (57); Separated/Divorced 3 (06); Widow 5 (10)
Age: 15–25 10 (20); 26–35 12 (24); 36–45 13 (25); 46+ 16 (31)
Education: High School or Less 7 (14); College or more 24 (47); University or more 20 (39)
Employment Status: Employed 19 (51); Unemployed 15 (41); Retired 3 (08)
Cancer Type: Breast 14 (27); Lip, oral cavity 10 (20); Lung 7 (14); Oesophagus 5 (10); Leukaemia 4 (08); Cervix uteri 4 (08); Ovary 5 (10); Other 2 (04)
The study is exploratory, so there is a need to assess the validity of the data. Most Q-methodology studies are exploratory and qualitative and tend not to use random sample designs; questions of research validity were therefore assessed differently from quantitative research methods [34, 35]. Item validity, as understood in more conventional survey research, does not apply to the study of subjectivity. In Q-methodology, one expects the meaning of an item to be interpreted individually, and the contextual meaning of how each item was interpreted becomes apparent in the rank-ordering and follow-up interviews. Internal consistency was assessed with an average reliability coefficient: the reliability of any given measurement refers to the extent to which it is a consistent measure of a concept, and Cronbach's alpha is one way of measuring the strength of that consistency. For this reason, appropriate statistical techniques were used to achieve the objectives of the study. Reliability analysis was performed to check the quality of the survey, with Cronbach's alpha used as the estimate of reliability [36]. If the value of Cronbach's alpha is between 0.60 and 0.90, the data are considered highly reliable and consistent [36]. Our Cronbach's alpha score is 0.774, which shows that the data are reliable and consistent. Table 2 shows the summary statistics of the Q-sort items in the form of mean, standard deviation and Z-score values. We first rank all the statements based on Z-scores in descending order and then rank them according to the mean and standard deviation values, respectively. All the sample statements were sub-categorized into the three main factors presented in Table 2, reflecting the fear and psyche that cancer patients recognize: psychological and emotional needs (17 statements), fear of death and dependency on religion (16 statements), and future financial planning (13 statements). We analyzed the data by multiple correspondence analysis (MCA), and noise was removed from the data to obtain good results.
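For readers who want to check the internal-consistency figure reported above (Cronbach's alpha = 0.774), the coefficient can be computed directly from its definition. The following is a minimal Python sketch for a generic respondents-by-items score matrix; it is not the software used in the study, and the example data are hypothetical.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                              # number of items
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of the summed scale
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Hypothetical example: 51 respondents rating 46 statements on a Likert-type scale
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(51, 46))
print(round(cronbach_alpha(scores), 3))
```

A value in the 0.60 to 0.90 band, such as the 0.774 reported here, is read as acceptable internal consistency.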
Table 2  Summary statistics of the Q-sort items, shown as item number (mean, Z-score)
Factor I (Feelings of Cancer Patients): 2 (4.04, 2.21); 9 (3.24, 2.15); 3 (4.16, 2.14); 10 (3.25, 2.06); 5 (3.1, 2.05); 7 (3.65, 2.03); 14 (3.63, 2.03); 4 (4.18, 1.94); 12 (3.96, 1.90); 16 (4.10, 1.89); 8 (3.59, 1.89); 6 (3.67, 1.88); 1 (4.06, 1.82); 11 (3.45, 1.76); 13 (3.16, 1.71); 15 (3.20, 1.67); 17 (3.84, 1.41)
Factor II (Religious Beliefs about the Acceptance of Death): 30 (3.24, 2.10); 20 (3.37, 2.02); 27 (2.98, 1.91); 32 (5.25, 1.90); 24 (3.18, 1.90); 28 (2.57, 1.82); 21 (3.53, 1.81); 22 (3.39, 1.80); 25 (3.18, 1.80); 29 (2.78, 1.74); 23 (3.25, 1.71); 19 (3.55, 1.70); 31 (5.71, 1.69); 26 (2.78, 1.68); 18 (3.71, 1.62); 33 (5.41, 1.51)
Factor III (Future Personal and Financial Planning): 44 (4.57, 1.98); 38 (5.00, 1.80); 35 (4.98, 1.79); 37 (4.78, 1.78); 43 (4.63, 1.75); 34 (5.02, 1.71); 42 (5.04, 1.60); 36 (5.29, 1.59); 45 (5.24, 1.56); 40 (5.18, 1.56); 46 (5.65, 1.51); 39 (5.43, 1.47); 41 (5.24, 1.45)
MCA consequently played an essential role in data screening, so our selected Q-factors are simpler and more accurate. Applying MCA to the data, Table 3 shows that the total inertia is 0.79 (45% of the inertia is due to the first axis and 34% to the second axis). Total inertia values indicate how much variability is in the model, and each dimension's inertia value refers to the amount of variance explained by that dimension [34]. Through MCA, we selected the factors with the highest interactions and ignored those with weak relationships. Data were collected from 51 participants to check cancer patients' views on how they are combating their disease, e.g. by improving their mental health with the help of religion, and whether they have any financial planning. Cochran's Q test determined a statistically significant difference in the proportion of patients coping with their disease by different means over time, χ2(2) = 493.46, p < .05; see Table 4.
Table 3  Impact of All Variables (Model Summary)
Dimension 1: Cronbach's Alpha 0.95; Eigenvalue 5.819; Inertia 0.45
Dimension 2: Cronbach's Alpha 0.94; Eigenvalue 3.763; Inertia 0.34
Total: Eigenvalue 9.582; Inertia 0.79
Table 4  Cochran's Q Test
Between Factors: Sum of Squares 672.9; df 50; Mean Square 13.45
Within Factors (Between factors): Sum of Squares 1877.8; df 45; Mean Square 41.73; Cochran's Q 493.45; Sig. 0.001
Within Factors (Residual): Sum of Squares 6855.6; df 2250; Mean Square 3.047
Total: Sum of Squares 8733.5; df 2295; Mean Square 3.805
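The Cochran's Q statistic in Table 4 can likewise be reproduced from the textbook formula. The sketch below assumes a binary subjects-by-conditions matrix (1 = the patient reported using that coping means, 0 = did not), which is the layout the test is defined for; the study does not report its exact data layout, so this is an illustration rather than a reconstruction of the authors' computation.

```python
import numpy as np
from scipy.stats import chi2

def cochrans_q(x: np.ndarray):
    """Cochran's Q test for a binary subjects x repeated-conditions matrix."""
    x = np.asarray(x, dtype=float)
    k = x.shape[1]                      # number of repeated conditions
    col = x.sum(axis=0)                 # successes per condition
    row = x.sum(axis=1)                 # successes per subject
    grand = x.sum()                     # grand total of successes
    q = (k - 1) * (k * (col ** 2).sum() - grand ** 2) / (k * grand - (row ** 2).sum())
    df = k - 1
    return q, df, chi2.sf(q, df)        # statistic, degrees of freedom, p-value

# Hypothetical example: 51 patients, three coping means (religion, social support, planning)
rng = np.random.default_rng(1)
use = rng.integers(0, 2, size=(51, 3))
print(cochrans_q(use))
```

With three conditions the test has two degrees of freedom, matching the χ2(2) value reported above.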
The critical statements from each of the three factors were sorted through PQ Method 2.11 (with multiple correspondence analysis used as the statistical method for selecting the high-interaction terms), which gives the dimensions and insight into the Eigenvalues; we selected our Q-factors based on these insights. The most acceptable factors were those with Eigenvalues of at least 1.0. We have rearranged the selected Q-sorts based on Z-scores in Table 5. The resultant factors are divided into three main categories: feelings of cancer patients, religious beliefs about accepting death, and future personal and financial planning.
Table 5  Descending Array of Z-scores Presenting Feelings of Cancer Patients towards Illness and Their Future Planning (item number, statement, Z-score)
Factor I: Feelings of Cancer Patients
9. I was not mentally ready for all this (2.15)
8. When this news broke, I was in a state of shock and disbelief and felt numb. (2.03)
14. I often think, why me? Why did God let that happen to me? (2.02)
20. I am worried about the cost of treatment. (2.02)
12. Thoughts came to my mind that people feel pity and grief when they came to know about my disease. (1.90)
7. I started getting panic attacks when the painful treatment process came to my mind. (1.88)
Factor II: Religious Beliefs about the Acceptance of Death
27. A person's body will die but not the spirit. (1.91)
24. Death is inevitable, so we should not worry about it (1.90)
21. We should not think about death; we have to live fully and enjoy every moment of life (1.81)
31. Social and family support lowers feelings of anxiety and depression. (1.69)
26. Only religion can help a person overcome the fear of death and console the mind and body. (1.68)
33. My willpower is giving me the strength to combat the disease. (1.51)
Factor III: Future Personal and Financial Planning
44. I will donate my organs (eyeballs, cornea, heart, kidney, etc.) to other people. (1.98)
38. I will purchase investment plans for my family. (1.80)
35. I will write a will regarding the distribution of my assets and unfulfilled wishes. (1.79)
34. This disease has changed my retirement, travelling, or parenthood plans. (1.71)
19. I am worried that I am causing trouble for my family and friends (emotionally and financially). (1.70)
36. I will clearly instruct my family regarding my social responsibilities. (1.59)
45. I will add a specific portion of my wealth to a charitable institution. (1.56)
40. I will make diversified investments to minimize risk. (1.56)
39. I prefer risk-free investments to secure my family's future. (1.47)
41. I will take the consultancy from financial experts (brokers, fund managers, bankers, and real estate agents) before finalizing my investment plans (1.45)
[SUBTITLE] Feelings of cancer patients [SUBSECTION] Cancer patients in this factor appear to be in a challenging situation; Table 5 lists the statements with which they mostly strongly agreed or agreed. They were in great shock and psychologically disturbed over the fact that God had chosen them for this disease. According to their statements, they were distraught when this news was revealed. The results showed the perceived feelings of cancer patients: when they first received the news, they were in a state of shock. They felt panic and started questioning God, "why has he selected them for this disease? Why can he not go for any other person?". The statistical results are significant for their feeling that they perceive pity and jealousy from other people. Some people reported increased anxiety and panic attacks and started feeling depressed about their finances. A patient said, "when I received the news that I have cancer, I was shocked and could not utter a single word for some moments". Feelings differed by gender: women were more emotional, whereas men were more composed.
Feelings of cancer patients

The cancer patients loading on this factor appear to be in a challenging situation. Table 5 lists the statements with which they mostly agreed or strongly agreed. They were deeply shocked and psychologically disturbed, asking why God had chosen them for this disease, and according to their statements they were distraught when the news was revealed. The results show that when patients first received the news they were in a state of shock; they felt panic and started questioning God: "Why has he selected me for this disease? Why could it not be any other person?" The statistical results also indicate that they began to sense pity and, at times, jealousy from other people. Some patients reported increased anxiety and panic attacks and started feeling depressed about their finances. One patient said, "When I received the news that I have cancer, I was shocked and could not utter a single word for some moments." Feelings also differed by gender: women were more emotional, whereas men were more composed.

Religious beliefs about the acceptance of death

In Factor II, the respondents identified as most realistic the belief that death is certain. Everyone believes this, yet an untimely death, or knowing the likely time of one's death, provokes considerable anxiety and distress; counting down to one's death is an even more harrowing situation. Participants described their death-related thoughts, their acceptance of death, and how religion helped them overcome this fear. Elderly patients believed in the comfort of religion; they stated that religion helped them greatly in fighting this fear and that God is gracious and will ease their pain. Older patients showed a stronger tendency towards religion than younger ones. In light of the results, patients accepted the certainty of death through the lens of religion. One patient said he had not been religious before, but after the disease he started following his religion, and that change helped him cope with its pressure.

Future personal and financial planning

Factor III highlights how strongly the respondents feel about future financial planning. Cancer patients already bear the high cost of treatment, and patients, particularly older ones, are worried about their family's future and want to secure it; they emphasised financial planning for themselves and their families. A few participants wished to donate their organs after death to help humanity. They were worried about the cost of treatment, because cancer treatment is costly, and the Z-scores indicate that patients felt they were a burden to family and friends. Some patients said they had contacted financial institutions about future financial policies but could not find a suitable plan. Some wanted to donate their property to charitable institutions. The patients started planning for their families' futures: younger people were more optimistic, expecting to recover soon and make a fresh start in life, while some participants wanted to buy investment plans and write a will for their families.
Conclusion
This study was conducted to identify cancer patients' perceived behaviour towards their disease using the Q-methodological technique. The participants shared their experiences of illness, including psychological distress, fear of dying, concerns about treatment costs, uncertainty about the future, and the ways they combat it. The findings revealed three key factors: the feelings of cancer patients, their religious and spiritual beliefs, and their future personal and financial planning. Responses varied according to age, gender, disease severity, and recovery expectations. Younger people were more enthusiastic about their future, while older patients, particularly those with stage III and IV cancer, were far more uncertain about their lives. The results showed that all patients experienced a degree of stress and anxiety on learning of their disease and initially found it difficult to accept this reality, but their religious beliefs, social support, and health practitioners played a positive role, keeping them hopeful and serving as a coping strategy. Younger patients who were married and had family responsibilities faced greater financial distress, such as fear of losing their jobs, and married women were more worried about their children. The patients also discussed their future personal and financial planning. The present study will help practitioners improve their treatment strategies and design customised plans according to patients' needs and behaviours, helping to create a trusted atmosphere that improves patients' mental health, peace of mind, and physical health.
[ "Background", "Method", "Data collection procedure", "Construction of concourse (Q population)", "Q-sample", "Selection of participants", "Q sorting", "Feelings of cancer patients", "Religious beliefs about the acceptance of death", "Future personal and financial planning", "Study implications", "Study limitations" ]
[ "Cancer is the uncontrolled growth of abnormal body cells. Its diagnosis affects not only the physical condition of patients but also emotionally drains their families. It is a life-changing experience. Depression and anxiety are the most common side effects [1, 2]. The whole life turns upwards down, and it is crucial to identify those changes and provide needed help [3]. The persons experiencing cancer not only bear the physical pain of surgery, chemotherapy, radiation, bone marrow transplant, and immunotherapy but also pass through psychological trauma that can badly affect their physical and mental health [4, 5]. Patients considered themselves a burden to their family and friends, often resulting in self-harm and suicidal thoughts [6, 7]. The interpersonal-psychological theory of attempted and completed suicide also regarded a sense that considering oneself as a burden on others is one of the essential components of ending their life by suicide [8].\nPsychology and its theories help us to understand patients’ behaviour [9]. Behavioural sciences theories describe the feelings of individuals and how they define and interpret disease. It also explains their acceptance and fears of death, future planning, and remedial actions towards it. These factors are shaped by sociocultural and psychological behaviour rather than cognitive, physiological, genetic, or other biological reasoning for the disease [10]. Thus, illness behaviour reflects complex reactions toward changing bodily sensations that represent the psychological predisposition of the person and the broader socio-economic context within which the individual lives [11].\nAlthough previous studies reported many psychological problems, confronting cancer patients such as thread to life, anxiety, body image concerns, financial crisis, increased marital stress, fear of being unemployed, not capable of fulfilling the social roles in life etc. that badly impact their mental and physical health [9]. But still, there is a need to study the perceptions and behaviour of cancer patients to explore some new factors that are bothering them. Much research is conducted to analyze the reactions of cancer patients towards the severity of their disease and their co-existing worries about adverse psychological long-term consequences of treatment [12]. These factors also slow down their recovery process and become a hurdle to obtaining the desired results.\nHealth practitioners often use psychological theories to design coping strategies for cancer patients. Bandura’s self-efficacy theory helped to develop an effective psychological treatment framework for cancer patients [13]. These treatment strategies are useful in dealing with the emotional distress of the patients through psychological intervention. The social cognitive theory provides a support mechanism that improves patients’ overall quality of life [9]. Religious beliefs and spirituality also play a significant role in the treatment process by creating a ray of light among the patients that positively impacts their lives [14]. Religious beliefs act as a coping-up strategy that supports the illness and positively deals with it [15, 16]. It serves as a long-term therapy that results in maintaining the self-esteem of patients, restoring their confidence, giving them emotional comfort, and creating a sense of meaning in life [17, 18].\nFamily and social support are also considered essential for the psychological well-being of the patients. 
However, the finest moral and psychosocial support demands understanding individual and family-level perceptions at the time of cancer diagnosis and throughout the treatment trajectory [19]. The patient’s willpower and spiritual therapy play a vital role in cancer treatment [20]. Previously most of the research involved the caretakers asking about the patient’s feelings which did not directly depict the feelings of patients [4, 8]. The current study targeted cancer patients and directly explored their feelings and opinions.\nSimilarly, positive patient-doctor communications provide undue support to patients to come out of the trauma [21]. A positive patient-doctor relationship helps adaptation to illness, reduces treatment pain, and provides hope to fight against the disease. The nursing interventions also support building an empathetic relationship with the patient and their family members that help in fostering mutual trust and facilitating coping mechanisms during the care process [22]. But patients with antisocial personality traits have more psychological order and face difficulty in handling it [23].\nFinances are a big question mark for patients to bear the cost of treatment besides the psychological issues. Scholars believe that cancer treatment costs have a profound, long-lasting impact on the pockets of patients and caretakers [24]. Families often become indebted or bankrupt as they do not want to compromise their patients’ health and functional outcomes [25, 26]. So, financial issues are considered the highest risk factor in psychosocial oncology for patients and their families during treatment [27]. Patients are also worried about the future of their families. They must re-evaluate their priorities and take strict actions for their family security. Previous studies focused on the bidirectional impact of family-reported positive (resilience) or negative (distress) psychosocial well-being. Still, none have explicitly focused on the patient’s feelings, fears and coping strategies and particularly their future financial planning to secure their family’s future [3].\nThe study aims to identify cancer patients’ perceived attitudes towards the disease severity and understand their fears and future financial planning. Previous researchers explored various psychological issues, religiosity and spirituality factors and the economic burden of the disease either through qualitative methods or quantitative techniques. However, the current study explores the perceived feelings of cancer patients on these issues jointly by using the Q-methodological technique, a combination of qualitative and quantitative approaches. In this way, it will provide a comprehensive view of the patients and further contribute to the current knowledge in psychology, oncology, and behavioural sciences studies.", "This study aims to identify the perceived attitude of cancer patients towards their disease by applying a Q-methodological approach and describing their resultant actions regarding future planning. Q methodology is a novel approach and gives the foundation for the analytical study of people’s opinions, attitudes, feelings and viewpoints [28]. It combines qualitative and quantitative techniques that depict a comprehensive viewpoint of the respondents [29].\n[SUBTITLE] Data collection procedure [SUBSECTION] Data were collected in three steps. 
In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.\nData were collected in three steps. In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.\n[SUBTITLE] Construction of concourse (Q population) [SUBSECTION] The first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.\nThe first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. 
The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.\n[SUBTITLE] Q-sample [SUBSECTION] In the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.\nIn the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.\n[SUBTITLE] Selection of participants [SUBSECTION] In the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. 
An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.\nIn the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.\n[SUBTITLE] Q sorting [SUBSECTION] The researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.\nThe researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. 
In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.", "Data were collected in three steps. In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.", "The first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.", "In the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. 
They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.", "In the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.", "The researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.", "Cancer patients in this factor appear in a challenging situation. Table 5 of statements where they mainly were strongly agreed or agreed. They were in a big shock and disturbed psychologically over the fact of why God had chosen them for this disease. According to their statements, they were distraught when this news was revealed. Results showed the perceived feelings of cancer patients; when they first received the news, they were in a state of shock. They felt panic and started questioning God, “why has he selected them for this disease? Why cannot he go for any other person”. Statistical results are significant about their feelings that they start feeling pity and jealousy from other people. 
Some people reported increased anxiety and panic attacks and started feeling depressed about their finances. A patient said, “when I received the news that I have cancer, I was shocked and could not utter a single word for some moments”. Feelings are different gender-wise; women were more emotional than men and were more composed.", "From Factor II, the most realistic statement is identified by the respondent that their belief in death is a certain thing. We all believe in that, but untimely or when you know about the time of your death, you feel pretty anxious, distressed, etc. This situation is more harrowing that counting the death at your fingertips. Participants classified their death-related thoughts, acceptance of death, and how religion helped them overcome this fear. Elderly patients believed in religion’s comfort; they stated that religion helped them a lot to fight with this fear, and God is gracious, and he will ease their pain. Old-aged persons had an increased tendency towards religion than young ones. In the light of the results, people believed in the certainty of death in the light of religion. A patient said, “He was not religious before, but after the disease, he started following the religion and that change helped him cope with the pressure of disease”.", "Factor III highlights the intensity of the respondents towards future financial planning. Cancer patients are already bearing the high cost of treatment, and patients, particularly older ones, are worried about their family’s future and want to secure it. They emphasized future financial planning for them and their families. Few participants wished to donate their organs after death to help humanity. They were worried about the cost of treatment because cancer treatment is costly. Z scores explained that patients felt a burden to family and friends. Some patients said, “They contacted the financial institutes for their future financial policies but found any suitable plan”. Some people wanted to donate their property to charitable institutes. The patients started planning the future of their families. Young people are more optimistic about their future, planning that they will recover soon and take a fresh start in their life. Some participants wanted to buy the investment plans and write the will for their families.", "The current study benefits the scholars, psychologists, oncologists and managers in multiple ways. Firstly, it will help the families of cancer patients to understand and cope with the feelings of their suffering loved ones. Secondly, it will be beneficial to understand the psyche of cancer patients and observe the changes in their behaviour and uncertainty about future accomplishments during the painful process of treatment that affects their daily activities. Thirdly, it will help the oncologists and psychologists work in a team to plan medication with counselling services for cancer patients and implement treatment plans more effectively. Depression remains highly predominant in cancer patients and dramatically impacts their quality of life; perhaps utilizing its impact on observance, physical activity, social support, etc., will highlight the need to address the new health policies. Psychologists and oncologists can make new policies with their mutual discussion with the help of this study. 
Fourthly, it will help financial institutions to deal with the mortality fears of cancer patients and design their policies in light of the study’s findings.", "Some limitations should be used to evaluate this study correctly. First, it discusses psychological discomfort and physical pain. Still, we cannot find its relation with the co-morbidities effect, which is a new research direction for future studies to investigate the appropriate psychosocial care for cancer patients. Second, data were collected from a single country, where cultural and socio-economic conditions are diversified from other countries. Third, the study’s religious beliefs, family backgrounds, and social responsibilities may also vary, influencing the findings little. Also, when this data was collected, a pandemic in the form of Covid-19 had hit Pakistan, and due to this pandemic, people’s beliefs and thoughts changed, and they turned towards religion more than ever. That is because cancer patients were away from their loved ones and not allowed to meet them due to the SOPs followed by the hospital’s administration to keep them safe from COVID-19 has impacted the patients. They felt lonely in those hospital beds and found relief in religion and coping with the disease during those hard times with their disease [42]. Fourthly, we have used the minimum sample size that fulfils all the properties of the excellent estimator for selecting the sample size, and it works well with the Q methodology." ]
[ null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Method", "Data collection procedure", "Construction of concourse (Q population)", "Q-sample", "Selection of participants", "Q sorting", "Results", "Feelings of cancer patients", "Religious beliefs about the acceptance of death", "Future personal and financial planning", "Discussion", "Study implications", "Study limitations", "Conclusion" ]
[ "Cancer is the uncontrolled growth of abnormal body cells. Its diagnosis affects not only the physical condition of patients but also emotionally drains their families. It is a life-changing experience. Depression and anxiety are the most common side effects [1, 2]. The whole life turns upwards down, and it is crucial to identify those changes and provide needed help [3]. The persons experiencing cancer not only bear the physical pain of surgery, chemotherapy, radiation, bone marrow transplant, and immunotherapy but also pass through psychological trauma that can badly affect their physical and mental health [4, 5]. Patients considered themselves a burden to their family and friends, often resulting in self-harm and suicidal thoughts [6, 7]. The interpersonal-psychological theory of attempted and completed suicide also regarded a sense that considering oneself as a burden on others is one of the essential components of ending their life by suicide [8].\nPsychology and its theories help us to understand patients’ behaviour [9]. Behavioural sciences theories describe the feelings of individuals and how they define and interpret disease. It also explains their acceptance and fears of death, future planning, and remedial actions towards it. These factors are shaped by sociocultural and psychological behaviour rather than cognitive, physiological, genetic, or other biological reasoning for the disease [10]. Thus, illness behaviour reflects complex reactions toward changing bodily sensations that represent the psychological predisposition of the person and the broader socio-economic context within which the individual lives [11].\nAlthough previous studies reported many psychological problems, confronting cancer patients such as thread to life, anxiety, body image concerns, financial crisis, increased marital stress, fear of being unemployed, not capable of fulfilling the social roles in life etc. that badly impact their mental and physical health [9]. But still, there is a need to study the perceptions and behaviour of cancer patients to explore some new factors that are bothering them. Much research is conducted to analyze the reactions of cancer patients towards the severity of their disease and their co-existing worries about adverse psychological long-term consequences of treatment [12]. These factors also slow down their recovery process and become a hurdle to obtaining the desired results.\nHealth practitioners often use psychological theories to design coping strategies for cancer patients. Bandura’s self-efficacy theory helped to develop an effective psychological treatment framework for cancer patients [13]. These treatment strategies are useful in dealing with the emotional distress of the patients through psychological intervention. The social cognitive theory provides a support mechanism that improves patients’ overall quality of life [9]. Religious beliefs and spirituality also play a significant role in the treatment process by creating a ray of light among the patients that positively impacts their lives [14]. Religious beliefs act as a coping-up strategy that supports the illness and positively deals with it [15, 16]. It serves as a long-term therapy that results in maintaining the self-esteem of patients, restoring their confidence, giving them emotional comfort, and creating a sense of meaning in life [17, 18].\nFamily and social support are also considered essential for the psychological well-being of the patients. 
However, the finest moral and psychosocial support demands understanding individual and family-level perceptions at the time of cancer diagnosis and throughout the treatment trajectory [19]. The patient’s willpower and spiritual therapy play a vital role in cancer treatment [20]. Previously most of the research involved the caretakers asking about the patient’s feelings which did not directly depict the feelings of patients [4, 8]. The current study targeted cancer patients and directly explored their feelings and opinions.\nSimilarly, positive patient-doctor communications provide undue support to patients to come out of the trauma [21]. A positive patient-doctor relationship helps adaptation to illness, reduces treatment pain, and provides hope to fight against the disease. The nursing interventions also support building an empathetic relationship with the patient and their family members that help in fostering mutual trust and facilitating coping mechanisms during the care process [22]. But patients with antisocial personality traits have more psychological order and face difficulty in handling it [23].\nFinances are a big question mark for patients to bear the cost of treatment besides the psychological issues. Scholars believe that cancer treatment costs have a profound, long-lasting impact on the pockets of patients and caretakers [24]. Families often become indebted or bankrupt as they do not want to compromise their patients’ health and functional outcomes [25, 26]. So, financial issues are considered the highest risk factor in psychosocial oncology for patients and their families during treatment [27]. Patients are also worried about the future of their families. They must re-evaluate their priorities and take strict actions for their family security. Previous studies focused on the bidirectional impact of family-reported positive (resilience) or negative (distress) psychosocial well-being. Still, none have explicitly focused on the patient’s feelings, fears and coping strategies and particularly their future financial planning to secure their family’s future [3].\nThe study aims to identify cancer patients’ perceived attitudes towards the disease severity and understand their fears and future financial planning. Previous researchers explored various psychological issues, religiosity and spirituality factors and the economic burden of the disease either through qualitative methods or quantitative techniques. However, the current study explores the perceived feelings of cancer patients on these issues jointly by using the Q-methodological technique, a combination of qualitative and quantitative approaches. In this way, it will provide a comprehensive view of the patients and further contribute to the current knowledge in psychology, oncology, and behavioural sciences studies.", "This study aims to identify the perceived attitude of cancer patients towards their disease by applying a Q-methodological approach and describing their resultant actions regarding future planning. Q methodology is a novel approach and gives the foundation for the analytical study of people’s opinions, attitudes, feelings and viewpoints [28]. It combines qualitative and quantitative techniques that depict a comprehensive viewpoint of the respondents [29].\n[SUBTITLE] Data collection procedure [SUBSECTION] Data were collected in three steps. 
In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.\nData were collected in three steps. In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.\n[SUBTITLE] Construction of concourse (Q population) [SUBSECTION] The first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.\nThe first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. 
The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.\n[SUBTITLE] Q-sample [SUBSECTION] In the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.\nIn the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.\n[SUBTITLE] Selection of participants [SUBSECTION] In the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. 
An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.\nIn the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.\n[SUBTITLE] Q sorting [SUBSECTION] The researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.\nThe researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. 
In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.", "Data were collected in three steps. In step 1, a Q sort pack of statements was developed through literature review, asking a global single-item question from the relevant stakeholders and in-depth interviews. The second step involves finalizing the Q-sample statements from Q-population based on the expert’s opinions in the field. In the last step, questionnaire items were finalized by the experts, and data were collected from the final respondents. Data was taken from cancer patients in Pakistan from January-June 2020.", "The first step in Q methodology is to develop the Q sort pack, preferably a set of 40 to 80 statements relating to the topic of study [30]. Q-concourse of statements were developed through Global Single-Item Questions and in-depth interviews. The initial Q-concourse (collection of opinion statements to represent possible reactions towards the severity of disease) was assembled after reviewing relevant literature [31, 32]. The interviews and written narratives are based on the Global Single-Item Question: “what are the feelings of cancer patients towards illness and their future planning?” The question was asked from 31 adults, 12 immediate family members of cancer patients, 5 oncologists, 10 caretaking nurses, 3 psychologists and 2 general physicians, who were not the study participants.\nFurther, 5 in-depth interviews were conducted with cancer patients. These interviews unveil their feelings, reactions, and experiences about the disease, their journey from fear to acceptance of death and their future planning (domestic, financial, and personal satisfaction to the soul). Finally, we ended up with a total of 121 statements as a Q population. We carefully selected samples by keeping the margin of error (confidence interval) by +/- 5%. We chose a confidence level of 95%, and variability (standard deviation) among the sample was 0.5, and we calculated a sample size of 51 using Eq. (1)\n\\documentclass[12pt]{minimal}\n\t\t\t\t\\usepackage{amsmath}\n\t\t\t\t\\usepackage{wasysym} \n\t\t\t\t\\usepackage{amsfonts} \n\t\t\t\t\\usepackage{amssymb} \n\t\t\t\t\\usepackage{amsbsy}\n\t\t\t\t\\usepackage{mathrsfs}\n\t\t\t\t\\usepackage{upgreek}\n\t\t\t\t\\setlength{\\oddsidemargin}{-69pt}\n\t\t\t\t\\begin{document}$$Necessary\\,Sample\\,Size\\, = \\,N\\, = \\,{Z^2}\\,.\\,\\sigma \\,(1\\, - \\,\\sigma )/{e^2}$$\\end{document} (1)\nwhere e = margin of error.", "In the second step, the Q sample was finalized, a set of selected statements from the Q population based on the experts’ opinions in the field. The experts (4 professors and one methodologist) analyzed 121 statements and rank-ordered them according to their meanings and context. 
They ended up with 46 statements as a Q sample, divided into 3 main categories: feelings of cancer patients (17), religious beliefs about the acceptance of death (16), and future personal and financial planning (13). This sample is based on the most representative and distinctive statements that are considered best for use in the Q sorting process.", "In the third step, the study participants were selected who were cancer patients admitted or taking treatment from the local cancer hospitals in Pakistan. This study is conducted keeping in view the cultural and social norms of Pakistani society. The health system is entirely different here. The government and private sectors provide no health insurance. Chronic diseases like cancer may dig a hole in the pocket of the common person, which affects their emotional and financial state. The family also suffers a lot, and depression is quite common in this scenario. An essential advantage of Q-methodology is using a small sample of purposively selected respondents, which is more helpful in predicting intra-individual differences rather than inter-individual [25]. Therefore, a sample of 60 participants was employed based on their agreement to contribute to this study. Further, participants were ensured that the provided information would be used anonymously for research purposes only. Researchers maintained a high level of confidentially during the study’s complete process. Nine participants withdrew because they were too demanding (2); had changed their mind (4); were not comfortable (1); or were so tired (2). Finally, 51 participants (85%) attended and completed the Q sorting process.", "The researchers have done multiple meetings with the study participants who were agree to participate. During the initial meetings, we elaborated the objective of study, how they can contribute to our study, listened to their concerns and those who gave their verbal consent to we took data from them. The data were collected in two stages. In stage I, 51 study participants answered the survey with 46 Q statements on a likert scale of 1 (strongly disagree) to 7 (strongly agree). These statements were finalized after following a approved process mentioned in the paper. In stage 2, respondents were asked to explain the preferences which they made in the survey to make sure they fully understand the concept of study. What is the reason behind their choices?. The researchers have taken all the notes and tried to provide an easy and convenient environment for them as they were going through an emotional phase. All these responses are reported in the results section. However, the resultant Q sorts representing the participant’s operant subjectivity on the issue under consideration are presented.", "The sample characteristics are shown in Table 1, which varied within the groups. Of the 51 respondents, 55% were female, 57% were married, and the dominant age group was 36 to 45 (25%). Breast cancer is the most prevalent type of cancer among the study participants. However, all cancer types mentioned in the table prevail in Pakistan [31]. But, the number of patients with breast cancer is the highest of all [33]. Most of the participants are employed (51%) and have completed at least their college education (47%). 
Further, the characteristics of the respondents are shown in Table 1.\n\nTable 1Patients DemographicsCancer Patients (n = 51)n (%)\nGender\nMale23 (45)Female28 (55)\nMarital Status\nSingle14 (27)Married29 (57)Separated/Divorced3 (06)Widow5 (10)\nAge\n15–2510 (20)26–3512 (24)36–4513 (25)46+16 (31)\nEducation\nHigh School or Less7 (14)College or more24 (47)University or more20 (39)\nEmployment Status\nEmployed19 (51)Unemployed15 (41)Retired3 (08)\nCancer Type\nBreast14 (27)Lip, oral cavity10 (20)Lung7 (14)Oesophagus5 (10)Leukaemia4 (08)Cervix uteri4 (08)Ovary5 (10)Other2 (04)\n\nPatients Demographics\nThe study is exploratory, so there is a need to assess the validity of the data. Most Q-methodology studies are exploratory and qualitative and tend not to use random sample designs. That is why questions of the research validity were assessed differently from quantitative research methods [34, 35]. As understood in more conventional survey research, item validity does not apply to the study of subjectivity. In Q-methodology, one expects the meaning of an item to be interpreted individually. The contextual meaning of how each item was individually interpreted becomes apparent in the rank-ordering and follow-up interviews.\nIt shows the factor characteristics explaining the average reliability coefficient used to assess the reliability, or internal consistency, of a set of scale or test items. In other words, the reliability of any given measurement refers to the extent to which it is a consistent measure of a concept. Cronbach’s alpha is one way of measuring the strength of that consistency. Due to this reason, the appropriate statistical techniques are used to achieve the objectives of the study. Reliability analysis was done to check the quality of the survey, which is suggested as an estimate of reliability [36]. If the value of Cronbach’s alpha is between 0.60 and 0.90, data is considered highly reliable and consistent [36]. Our Cronbach’s alpha score is 0.774, which shows that the data is reliable and consistent.\nTable 2 shows the results of summary statistics of the Q sort items in the form of mean, standard deviations and Z score values. We first rank all the statements based on Z scores in descending order and then rank them according to the mean and standard deviation values, respectively. All the sample statements were sub-categorized into three main factors presented in Table 2. It presents three significant factors about the fear and psyche that cancer patients recognize: psychological and emotional needs (17 statements), fear of death and dependency on religion (16 statements), and future financial planning (13 statements). 
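Before turning to the correspondence analysis, the reliability figure quoted above can be made concrete: Cronbach’s alpha is computed directly from the 51 × 46 matrix of Likert scores. The Python sketch below is purely illustrative (the software actually used for this step is not specified in the text); the scores are randomly generated stand-ins, and the last two lines merely standardize the item means as a simplified stand-in for the factor-weighted Z-scores reported in Table 2.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    k = items.shape[1]                          # number of items (here, 46 Q statements)
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical stand-in data: 51 respondents x 46 statements scored 1-7.
rng = np.random.default_rng(0)
scores = pd.DataFrame(rng.integers(1, 8, size=(51, 46)))
print(round(cronbach_alpha(scores), 3))

# Simplified Z-score ranking of statements (Table 2 in the study uses factor-weighted scores).
z = (scores.mean() - scores.mean().mean()) / scores.mean().std(ddof=1)
print(z.sort_values(ascending=False).head())
```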
We analyzed the data by multiple correspondence analysis (MCA), and all the noises from the data were removed to obtain good results.\n\nTable 2Summary StatisticsItem no.MeanZ-score\nFactor I: Feelings of Cancer Patients\n\n2\n4.042.21\n9\n3.242.15\n3\n4.162.14\n10\n3.252.06\n5\n3.12.05\n7\n3.652.03\n14\n3.632.03\n4\n4.181.94\n12\n3.961.90\n16\n4.101.89\n8\n3.591.89\n6\n3.671.88\n1\n4.061.82\n11\n3.451.76\n13\n3.161.71\n15\n3.201.67\n17\n3.841.41\nFactor II: Religious Beliefs about the Acceptance of Death\n\n30\n3.242.10\n20\n3.372.02\n27\n2.981.91\n32\n5.251.90\n24\n3.181.90\n28\n2.571.82\n21\n3.531.81\n22\n3.391.80\n25\n3.181.80\n29\n2.781.74\n23\n3.251.71\n19\n3.551.70\n31\n5.711.69\n26\n2.781.68\n18\n3.711.62\n33\n5.411.51\nFactors III: Future Personal and Financial Planning\n\n44\n4.571.98\n38\n5.001.80\n35\n4.981.79\n37\n4.781.78\n43\n4.631.75\n34\n5.021.71\n42\n5.041.60\n36\n5.291.59\n45\n5.241.56\n40\n5.181.56\n46\n5.651.51\n39\n5.431.47\n41\n5.241.45\n\nSummary Statistics\nMCA consequently played an essential role in data screening, so our selected Q-factors are simpler and more accurate. Applying MCA to data, Table 3 shows that total inertia is 0.79 (percent of inertia 45% is due to the first axis & 34% is due to the second axis). Total inertia values indicate how much variability is in the model. Each dimension’s inertia values refer to the amount of variance by each dimension [34]. We have selected the highest interaction factors and ignored the weak relationship factors through MCA. Data were collected from 51 participants to check cancer patients’ views on how they are combating their disease, e.g. by improving their mental health with the help of religion and if they have any financial planning. Cochran’s Q test determined a statistical significance in the proportion of patients coping with their disease by different means over the time χ2(2) = 493.46, p < .05, see Table 4.\n\nTable 3Impact of All VariablesModel Summary\nDimension\n\nCronbach’s Alpha\n\nVariance Accounted For\nEigenvalueInertia\n1\n0.955.8190.45\n2\n0.943.7630.34\nTotal\n9.5820.79\n\nImpact of All Variables\n\nTable 4Cochran’s Q TestSum of SquaresdfMean SquareCochran’s QSig\nBetween Factors\n672.95013.45\nWithin Factors\n\nBetween factors\n1877.84541.73493.450.001\nResidual\n6855.622503.047\nTotal\n8733.522953.805\n\nCochran’s Q Test\nThe critical statements from each of the three factors were sorted through PQ method 2.11 (statistical method: Multiple correspondence analysis to select the high interaction terms), which gives us the dimensions and insight of the Eigenvalues; we selected our Q factors based on these insights. The most acceptable factors were decided based on Eigenvalues which are at least 1.0. We have rearranged the selected Q-sorts based on Z scores in Table 5. The resultant factors are divided into three main categories: feelings of cancer patients, religious beliefs about accepting death, and future personal and financial planning.\n\nTable 5Descending Array of Z-scores Presenting Feelings of Cancer Patients towards Illness and Their Future PlanningItem No.StatementsZ-score\nFactor I: Feelings of Cancer Patients\n\n9\nI was not mentally ready for all this2.15\n8\nWhen this news broke, I was in a state of shock and disbelief and felt numb.2.03\n14\nI often think, why me? 
Why did God let that happen to me?2.02\n20\nI am worried about the cost of treatment.2.02\n12\nThoughts came to my mind that people feel pity and grief when they came to know about my disease.1.90\n7\nI started getting panic attacks when the painful treatment process came to my mind.1.88\nFactor II: Religious Beliefs about the Acceptance of Death\n\n27\nA person’s body will die but not the spirit.1.91\n24\nDeath is inevitable, so we should not worry about it1.90\n21\nWe should not think about death; we have to live fully and enjoy every moment of life1.81\n31\nSocial and family support lowers feelings of anxiety and depression.1.69\n26\nOnly religion can help a person overcome the fear of death and console the mind and body.1.68\n33\nMy willpower is giving me the strength to combat the disease.1.51\nFactor III: Future Personal and Financial Planning\n\n44\nI will donate my organs (eyeballs, cornea, heart, kidney, etc.) to other people.1.98\n38\nI will purchase investment plans for my family.1.80\n35\nI will write a will regarding the distribution of my assets and unfulfilled wishes.1.79\n34\nThis disease has changed my retirement, travelling, or parenthood plans.1.71\n19\nI am worried that I am causing trouble for my family and friends (emotionally and financially).1.70\n36\nI will clearly instruct my family regarding my social responsibilities.1.59\n45\nI will add a specific portion of my wealth to a charitable institution.1.56\n40\nI will make diversified investments to minimize risk.1.56\n39\nI prefer risk-free investments to secure my family’s future.1.47\n41\nI will take the consultancy from financial experts (brokers, fund managers, bankers, and real estate agents) before finalizing my investment plans1.45\n\nDescending Array of Z-scores Presenting Feelings of Cancer Patients towards Illness and Their Future Planning\n[SUBTITLE] Feelings of cancer patients [SUBSECTION] Cancer patients in this factor appear in a challenging situation. Table 5 of statements where they mainly were strongly agreed or agreed. They were in a big shock and disturbed psychologically over the fact of why God had chosen them for this disease. According to their statements, they were distraught when this news was revealed. Results showed the perceived feelings of cancer patients; when they first received the news, they were in a state of shock. They felt panic and started questioning God, “why has he selected them for this disease? Why cannot he go for any other person”. Statistical results are significant about their feelings that they start feeling pity and jealousy from other people. Some people reported increased anxiety and panic attacks and started feeling depressed about their finances. A patient said, “when I received the news that I have cancer, I was shocked and could not utter a single word for some moments”. Feelings are different gender-wise; women were more emotional than men and were more composed.\nCancer patients in this factor appear in a challenging situation. Table 5 of statements where they mainly were strongly agreed or agreed. They were in a big shock and disturbed psychologically over the fact of why God had chosen them for this disease. According to their statements, they were distraught when this news was revealed. Results showed the perceived feelings of cancer patients; when they first received the news, they were in a state of shock. They felt panic and started questioning God, “why has he selected them for this disease? Why cannot he go for any other person”. 
Statistical results are significant about their feelings that they start feeling pity and jealousy from other people. Some people reported increased anxiety and panic attacks and started feeling depressed about their finances. A patient said, “when I received the news that I have cancer, I was shocked and could not utter a single word for some moments”. Feelings are different gender-wise; women were more emotional than men and were more composed.\n[SUBTITLE] Religious beliefs about the acceptance of death [SUBSECTION] From Factor II, the most realistic statement is identified by the respondent that their belief in death is a certain thing. We all believe in that, but untimely or when you know about the time of your death, you feel pretty anxious, distressed, etc. This situation is more harrowing that counting the death at your fingertips. Participants classified their death-related thoughts, acceptance of death, and how religion helped them overcome this fear. Elderly patients believed in religion’s comfort; they stated that religion helped them a lot to fight with this fear, and God is gracious, and he will ease their pain. Old-aged persons had an increased tendency towards religion than young ones. In the light of the results, people believed in the certainty of death in the light of religion. A patient said, “He was not religious before, but after the disease, he started following the religion and that change helped him cope with the pressure of disease”.\nFrom Factor II, the most realistic statement is identified by the respondent that their belief in death is a certain thing. We all believe in that, but untimely or when you know about the time of your death, you feel pretty anxious, distressed, etc. This situation is more harrowing that counting the death at your fingertips. Participants classified their death-related thoughts, acceptance of death, and how religion helped them overcome this fear. Elderly patients believed in religion’s comfort; they stated that religion helped them a lot to fight with this fear, and God is gracious, and he will ease their pain. Old-aged persons had an increased tendency towards religion than young ones. In the light of the results, people believed in the certainty of death in the light of religion. A patient said, “He was not religious before, but after the disease, he started following the religion and that change helped him cope with the pressure of disease”.\n[SUBTITLE] Future personal and financial planning [SUBSECTION] Factor III highlights the intensity of the respondents towards future financial planning. Cancer patients are already bearing the high cost of treatment, and patients, particularly older ones, are worried about their family’s future and want to secure it. They emphasized future financial planning for them and their families. Few participants wished to donate their organs after death to help humanity. They were worried about the cost of treatment because cancer treatment is costly. Z scores explained that patients felt a burden to family and friends. Some patients said, “They contacted the financial institutes for their future financial policies but found any suitable plan”. Some people wanted to donate their property to charitable institutes. The patients started planning the future of their families. Young people are more optimistic about their future, planning that they will recover soon and take a fresh start in their life. 
Some participants wanted to buy the investment plans and write the will for their families.\nFactor III highlights the intensity of the respondents towards future financial planning. Cancer patients are already bearing the high cost of treatment, and patients, particularly older ones, are worried about their family’s future and want to secure it. They emphasized future financial planning for them and their families. Few participants wished to donate their organs after death to help humanity. They were worried about the cost of treatment because cancer treatment is costly. Z scores explained that patients felt a burden to family and friends. Some patients said, “They contacted the financial institutes for their future financial policies but found any suitable plan”. Some people wanted to donate their property to charitable institutes. The patients started planning the future of their families. Young people are more optimistic about their future, planning that they will recover soon and take a fresh start in their life. Some participants wanted to buy the investment plans and write the will for their families.", "Cancer patients in this factor appear in a challenging situation. Table 5 of statements where they mainly were strongly agreed or agreed. They were in a big shock and disturbed psychologically over the fact of why God had chosen them for this disease. According to their statements, they were distraught when this news was revealed. Results showed the perceived feelings of cancer patients; when they first received the news, they were in a state of shock. They felt panic and started questioning God, “why has he selected them for this disease? Why cannot he go for any other person”. Statistical results are significant about their feelings that they start feeling pity and jealousy from other people. Some people reported increased anxiety and panic attacks and started feeling depressed about their finances. A patient said, “when I received the news that I have cancer, I was shocked and could not utter a single word for some moments”. Feelings are different gender-wise; women were more emotional than men and were more composed.", "From Factor II, the most realistic statement is identified by the respondent that their belief in death is a certain thing. We all believe in that, but untimely or when you know about the time of your death, you feel pretty anxious, distressed, etc. This situation is more harrowing that counting the death at your fingertips. Participants classified their death-related thoughts, acceptance of death, and how religion helped them overcome this fear. Elderly patients believed in religion’s comfort; they stated that religion helped them a lot to fight with this fear, and God is gracious, and he will ease their pain. Old-aged persons had an increased tendency towards religion than young ones. In the light of the results, people believed in the certainty of death in the light of religion. A patient said, “He was not religious before, but after the disease, he started following the religion and that change helped him cope with the pressure of disease”.", "Factor III highlights the intensity of the respondents towards future financial planning. Cancer patients are already bearing the high cost of treatment, and patients, particularly older ones, are worried about their family’s future and want to secure it. They emphasized future financial planning for them and their families. Few participants wished to donate their organs after death to help humanity. 
They were worried about the cost of treatment because cancer treatment is costly. Z scores explained that patients felt a burden to family and friends. Some patients said, “They contacted the financial institutes for their future financial policies but found any suitable plan”. Some people wanted to donate their property to charitable institutes. The patients started planning the future of their families. Young people are more optimistic about their future, planning that they will recover soon and take a fresh start in their life. Some participants wanted to buy the investment plans and write the will for their families.", "This study explored cancer patients’ behaviours and attitudes towards their death and future financial planning. It employs Q-methodology, which helps to identify the conflicting priorities of patients. The findings explain three main factors. Firstly, they feel financially drained over the cost of treatment because these treatments dig a hole in patients’ pockets. So there should be enough financial policies to help them and their families after death. The second difference in beliefs was noted about illness and death. Most people found that religion is helping them with medicine to cope with the disease. They believed God commands their lives, and everything happens according to his will. However, some did not but were willing to accept it and have different views. They preferred self-management as well as accepting medical treatments. Thirdly, some patients differed on the importance of supporting networks and not feeling shame in seeking help, which appeared we could protect them from suicidal thoughts or feelings.\nIn comparison, some patients felt unsupported and embarrassed and had to consider suicide to stop the distress [10]. Identifying depression in patients is crucial, and one should introduce the detection and treatment strategies in primary & aftercare. The patients emphasize the need to make those policies according to their personalized needs so they can recover physically and emotionally. Cancer patients have started feeling self-pity and burden on their families. They caught themselves in their thoughts about why this disease came to them, their lives changed upwards, and their energies were low in panic attacks. These thoughts are alarming because they reflect the psychological state of a cancer patient and how they are mentally disturbed when they become aware of their disease. It further highlights the need for a psycho-oncologist to handle cancer patients’ emotions and save them from depression and negative thoughts.\nIf we see the religious factor, some of the patients quoted that their painful thoughts often lead them to self-harm or have suicidal thoughts to ease the pain. Certain patients were hopeful for their life. They wanted to enjoy every bit of their life even though they were going to die [37]. They quoted that death is the new beginning, so there is no need to be frightened of it; on the other side, some feel comfortable with their families, which helps them ease the pain [20, 38]. A few relations stand out based on the data obtained from the questionnaire the respondents completed. They also had attitudes in common. Most of the elderly patients agreed with the statements about dying. 
A person dying should be given a chance to talk openly about their death and their psychological and physical needs to their families and doctors without being judgemental [39–41] because the pain of unfulfilled things and wishes can be seen in their eyes.\nRegarding future planning, patients appeared to be in a difficult situation; some participants wanted to donate their wealth to charity because they believed it would soothe their souls and help them even after death [37]. Some participants wanted to buy the investment plans and write the will for their families. But few of them had clear goals. They want to address their family so that they will know about their future unfinished work, social responsibilities, and hidden personality traits. This thought usually prevails among young dying patients who have kids. They wanted to nourish the psychological needs of their growing-up kids by doing so.\n[SUBTITLE] Study implications [SUBSECTION] The current study benefits the scholars, psychologists, oncologists and managers in multiple ways. Firstly, it will help the families of cancer patients to understand and cope with the feelings of their suffering loved ones. Secondly, it will be beneficial to understand the psyche of cancer patients and observe the changes in their behaviour and uncertainty about future accomplishments during the painful process of treatment that affects their daily activities. Thirdly, it will help the oncologists and psychologists work in a team to plan medication with counselling services for cancer patients and implement treatment plans more effectively. Depression remains highly predominant in cancer patients and dramatically impacts their quality of life; perhaps utilizing its impact on observance, physical activity, social support, etc., will highlight the need to address the new health policies. Psychologists and oncologists can make new policies with their mutual discussion with the help of this study. Fourthly, it will help financial institutions to deal with the mortality fears of cancer patients and design their policies in light of the study’s findings.\nThe current study benefits the scholars, psychologists, oncologists and managers in multiple ways. Firstly, it will help the families of cancer patients to understand and cope with the feelings of their suffering loved ones. Secondly, it will be beneficial to understand the psyche of cancer patients and observe the changes in their behaviour and uncertainty about future accomplishments during the painful process of treatment that affects their daily activities. Thirdly, it will help the oncologists and psychologists work in a team to plan medication with counselling services for cancer patients and implement treatment plans more effectively. Depression remains highly predominant in cancer patients and dramatically impacts their quality of life; perhaps utilizing its impact on observance, physical activity, social support, etc., will highlight the need to address the new health policies. Psychologists and oncologists can make new policies with their mutual discussion with the help of this study. Fourthly, it will help financial institutions to deal with the mortality fears of cancer patients and design their policies in light of the study’s findings.\n[SUBTITLE] Study limitations [SUBSECTION] Some limitations should be used to evaluate this study correctly. First, it discusses psychological discomfort and physical pain. 
Still, we cannot find its relation with the co-morbidities effect, which is a new research direction for future studies to investigate the appropriate psychosocial care for cancer patients. Second, data were collected from a single country, where cultural and socio-economic conditions are diversified from other countries. Third, the study’s religious beliefs, family backgrounds, and social responsibilities may also vary, influencing the findings little. Also, when this data was collected, a pandemic in the form of Covid-19 had hit Pakistan, and due to this pandemic, people’s beliefs and thoughts changed, and they turned towards religion more than ever. That is because cancer patients were away from their loved ones and not allowed to meet them due to the SOPs followed by the hospital’s administration to keep them safe from COVID-19 has impacted the patients. They felt lonely in those hospital beds and found relief in religion and coping with the disease during those hard times with their disease [42]. Fourthly, we have used the minimum sample size that fulfils all the properties of the excellent estimator for selecting the sample size, and it works well with the Q methodology.\nSome limitations should be used to evaluate this study correctly. First, it discusses psychological discomfort and physical pain. Still, we cannot find its relation with the co-morbidities effect, which is a new research direction for future studies to investigate the appropriate psychosocial care for cancer patients. Second, data were collected from a single country, where cultural and socio-economic conditions are diversified from other countries. Third, the study’s religious beliefs, family backgrounds, and social responsibilities may also vary, influencing the findings little. Also, when this data was collected, a pandemic in the form of Covid-19 had hit Pakistan, and due to this pandemic, people’s beliefs and thoughts changed, and they turned towards religion more than ever. That is because cancer patients were away from their loved ones and not allowed to meet them due to the SOPs followed by the hospital’s administration to keep them safe from COVID-19 has impacted the patients. They felt lonely in those hospital beds and found relief in religion and coping with the disease during those hard times with their disease [42]. Fourthly, we have used the minimum sample size that fulfils all the properties of the excellent estimator for selecting the sample size, and it works well with the Q methodology.", "The current study benefits the scholars, psychologists, oncologists and managers in multiple ways. Firstly, it will help the families of cancer patients to understand and cope with the feelings of their suffering loved ones. Secondly, it will be beneficial to understand the psyche of cancer patients and observe the changes in their behaviour and uncertainty about future accomplishments during the painful process of treatment that affects their daily activities. Thirdly, it will help the oncologists and psychologists work in a team to plan medication with counselling services for cancer patients and implement treatment plans more effectively. Depression remains highly predominant in cancer patients and dramatically impacts their quality of life; perhaps utilizing its impact on observance, physical activity, social support, etc., will highlight the need to address the new health policies. Psychologists and oncologists can make new policies with their mutual discussion with the help of this study. 
Fourthly, it will help financial institutions to deal with the mortality fears of cancer patients and design their policies in light of the study’s findings.", "Some limitations should be used to evaluate this study correctly. First, it discusses psychological discomfort and physical pain. Still, we cannot find its relation with the co-morbidities effect, which is a new research direction for future studies to investigate the appropriate psychosocial care for cancer patients. Second, data were collected from a single country, where cultural and socio-economic conditions are diversified from other countries. Third, the study’s religious beliefs, family backgrounds, and social responsibilities may also vary, influencing the findings little. Also, when this data was collected, a pandemic in the form of Covid-19 had hit Pakistan, and due to this pandemic, people’s beliefs and thoughts changed, and they turned towards religion more than ever. That is because cancer patients were away from their loved ones and not allowed to meet them due to the SOPs followed by the hospital’s administration to keep them safe from COVID-19 has impacted the patients. They felt lonely in those hospital beds and found relief in religion and coping with the disease during those hard times with their disease [42]. Fourthly, we have used the minimum sample size that fulfils all the properties of the excellent estimator for selecting the sample size, and it works well with the Q methodology.", "This study is conducted to identify cancer patients’ perceived behaviour towards their disease using the Q-methodological technique. The participants shared their experiences with illness, including psychological distress, fear of dying, concerns about treatment cost, future uncertainties, and combating it. The findings reported three key factors: feelings of the cancer patients, their religious and spiritual beliefs, and future personal and financial planning. Their responses also varied according to age, gender, disease severity, and recovery expectations. Young people are more enthusiastic about their future, while older ones, particularly cancer patients of stages III and IV, are pretty uncertain about their lives.\nResults showed that they all face a specific degree of stress and anxiety when they know about their disease, and it was difficult for them to accept this reality initially. But their religious beliefs, social support, and health practitioners play a positive role in their lives, keeping them hopeful and serving as a coping-up strategy. Young people, who are married and have family responsibilities, face more financial distress, like fear of losing jobs. Married women were more worried about their kids. The patients also discussed their future personal and financial planning. The present study will help practitioners to improve their treatment strategies, and design customized plans according to patients’ needs and behaviours. It will help to create a trusted atmosphere which will improve their mental health, peace of mind, and physical health." ]
[ null, null, null, null, null, null, null, "results", null, null, null, "discussion", null, null, "conclusion" ]
[ "Cancer Patients", "Q-methodology", "Religious beliefs", "Psychological impacts", "Financial Burden", "Personal and Financial Planning", "Fear and Acceptance of Death" ]
Blood transcriptomics to facilitate diagnosis and stratification in pediatric rheumatic diseases - a proof of concept study.
36253751
Transcriptome profiling of blood cells is an efficient tool to study the gene expression signatures of rheumatic diseases. This study aims to improve the early diagnosis of pediatric rheumatic diseases by investigating patients' blood gene expression and applying machine learning on the transcriptome data to develop predictive models.
BACKGROUND
RNA sequencing was performed on whole blood collected from children with rheumatic diseases. Random Forest classification models were developed based on the transcriptome data of 48 rheumatic patients, 46 children with viral infection, and 35 controls to classify different disease groups. The performance of these classifiers was evaluated by leave-one-out cross-validation. Analyses of differentially expressed genes (DEG), gene ontology (GO), and interferon-stimulated gene (ISG) score were also conducted.
METHODS
Our first classifier could differentiate pediatric rheumatic patients from controls and infection cases with high area-under-the-curve (AUC) values (AUC = 0.8 ± 0.1 and 0.7 ± 0.1, respectively). Three other classifiers could distinguish chronic recurrent multifocal osteomyelitis (CRMO), juvenile idiopathic arthritis (JIA), and interferonopathies (IFN) from control and infection cases with AUC ≥ 0.8. DEG and GO analyses reveal that the pathophysiology of CRMO, IFN, and JIA involves innate immune responses including myeloid leukocyte and granulocyte activation, neutrophil activation and degranulation. IFN is specifically mediated by antibacterial and antifungal defense responses, CRMO by cellular response to cytokine, and JIA by cellular response to chemical stimulus. IFN patients particularly had the highest mean ISG score among all disease groups.
RESULTS
Our data show that blood transcriptomics combined with machine learning is a promising diagnostic tool for pediatric rheumatic diseases and may assist physicians in making data-driven and patient-specific decisions in clinical practice.
CONCLUSION
[ "Child", "Humans", "Arthritis, Juvenile", "Cytokines", "Interferons", "Osteomyelitis", "Proof of Concept Study", "Rheumatic Diseases", "Transcriptome" ]
9575227
null
null
null
null
Results
[SUBTITLE] Transcriptome profiles of rheumatic diseases, viral infection, and convalescent controls [SUBSECTION] We compared the transcriptome profiles of six rheumatic disease groups (i.e., JIA, AID, CRMO, HLA-B51, IFN, and vasculitis) with viral infection and convalescent controls. Clustering analyses using t-SNE and hierarchical algorithms were displayed in Fig. 1 and Figure S1, respectively. As shown in the t-SNE plot in Fig. 1, most controls were gathered in cluster 1 while infection cases were grouped into a separate cluster 2, which implies that the gene expression of actively infected cases and remission cases (i.e., controls) is substantially independent despite coming from the same participants. However, patients with different rheumatic diseases were not well distinguished and assigned mostly to cluster 3, while cluster 4 contained a mixture of different categories. Fig. 1t-SNE plot of 4 different clusters t-SNE plot of 4 different clusters We compared the transcriptome profiles of six rheumatic disease groups (i.e., JIA, AID, CRMO, HLA-B51, IFN, and vasculitis) with viral infection and convalescent controls. Clustering analyses using t-SNE and hierarchical algorithms were displayed in Fig. 1 and Figure S1, respectively. As shown in the t-SNE plot in Fig. 1, most controls were gathered in cluster 1 while infection cases were grouped into a separate cluster 2, which implies that the gene expression of actively infected cases and remission cases (i.e., controls) is substantially independent despite coming from the same participants. However, patients with different rheumatic diseases were not well distinguished and assigned mostly to cluster 3, while cluster 4 contained a mixture of different categories. Fig. 1t-SNE plot of 4 different clusters t-SNE plot of 4 different clusters [SUBTITLE] Classifier development [SUBSECTION] The Random Forest algorithm was used for classifier development because it uses the ensemble learning technique that is robust to outliers, stable with new data, and can handle non-linear correlations. The first classifier was developed to distinguish between control, infection, and pediatric rheumatic cases based on normalized transcriptome data. Leave-one-out cross-validation results in Fig. 2a confirm that the classifier could differentiate pediatric rheumatic patients from negative controls (AUC = 0.8 ± 0.1) and from viral infection cases (AUC = 0.7 ± 0.1). The Boruta algorithm selected 349 genes out of 31,319 initial genes (Table S2) for the training of this classifier between control, infection, and pediatric rheumatic cases. Some of the notable selected genes were CD3G, CD96, and CD200R1 (CD200 receptor 1). The gene CD3G encodes the CD3γ polypeptide, which forms a part of the CD3-TCR (T-cell receptor) complex. This complex plays an important role in antigen recognition and several intracellular signal-transduction pathways. This finding indicates that some of the rheumatic diseases are specifically connected to the alteration and malfunction of γ T-cells. Previous studies have also reported the association of γ and δ T-cells with (immunodeficiency and) autoimmune diseases [12]. CD96 is expressed on T-cells and natural killer cells. It belongs to a family of molecules that provide costimulatory and coinhibitory signals during T-cell activation. It was shown to inhibit the expansion and IL-9 production of Th17 cells and thus, reduce inflammation and pathogenicity [13]. CD200R1 is also expressed on T-cells, as well as myeloid cells. 
It was reported to alter the balance between Th17 cells and regulatory T-cells in SLE patients [14] and has also been confirmed as one of the genetic factors susceptible to JIA, especially oligoarticular JIA [15]. Aberrant expression of CD200R1 was shown to contribute to abnormal Th17 cell differentiation and chemotaxis in patients with rheumatoid arthritis [15]. Fig. 2ROC curves and AUC values from leave-one-out cross-validation of classifier between (a) negative controls (i.e., control), viral infected subjects (i.e., infection) and subjects with rheumatic diseases (i.e., Pedrheum); and more specifically between (b) CRMO, IFN, JIA and control/infection cases ROC curves and AUC values from leave-one-out cross-validation of classifier between (a) negative controls (i.e., control), viral infected subjects (i.e., infection) and subjects with rheumatic diseases (i.e., Pedrheum); and more specifically between (b) CRMO, IFN, JIA and control/infection cases More specific classifiers were then developed per disease group. As the number of rheumatic patients in our dataset was limited, these classifiers focused only on CRMO, IFN, and JIA groups, which had more subjects for model training and validation than the other disease groups. Three classifiers were developed to distinguish patients with CRMO (n = 6), IFN (n = 6), and JIA (n = 20) from control (n = 35) and infection (n = 46) cases. They worked quite well as their AUC values are above or equal to 0.8 (Fig. 2b). Since CRMO, IFN, and JIA were differentiated well from control and infection cases, it was subsequently important to examine how they could be distinguished from one another. ROC curves and AUC values of a classifier between CRMO, IFN, and JIA (Figure S2) indicated that IFN could be distinguished relatively well from CRMO and JIA (AUC = 0.7 ± 0.2/0.3), however CRMO is not easily differentiated from JIA (AUC = 0.5 ± 0.3), likely explained by the limited sample size. The Boruta-identified genes for these classifiers are also presented in Table S2. There were 349 selected genes from the CRMO-control-infection classifier, 247 genes from the IFN-control-infection classifier, and 286 genes from the JIA-control-infection classifier. As expected, more interferon-related genes were selected for the IFN classifier compared to those of CRMO and JIA. The Random Forest algorithm was used for classifier development because it uses the ensemble learning technique that is robust to outliers, stable with new data, and can handle non-linear correlations. The first classifier was developed to distinguish between control, infection, and pediatric rheumatic cases based on normalized transcriptome data. Leave-one-out cross-validation results in Fig. 2a confirm that the classifier could differentiate pediatric rheumatic patients from negative controls (AUC = 0.8 ± 0.1) and from viral infection cases (AUC = 0.7 ± 0.1). The Boruta algorithm selected 349 genes out of 31,319 initial genes (Table S2) for the training of this classifier between control, infection, and pediatric rheumatic cases. Some of the notable selected genes were CD3G, CD96, and CD200R1 (CD200 receptor 1). The gene CD3G encodes the CD3γ polypeptide, which forms a part of the CD3-TCR (T-cell receptor) complex. This complex plays an important role in antigen recognition and several intracellular signal-transduction pathways. This finding indicates that some of the rheumatic diseases are specifically connected to the alteration and malfunction of γ T-cells. 
Previous studies have also reported the association of γ and δ T-cells with (immunodeficiency and) autoimmune diseases [12]. CD96 is expressed on T-cells and natural killer cells. It belongs to a family of molecules that provide costimulatory and coinhibitory signals during T-cell activation. It was shown to inhibit the expansion and IL-9 production of Th17 cells and thus, reduce inflammation and pathogenicity [13]. CD200R1 is also expressed on T-cells, as well as myeloid cells. It was reported to alter the balance between Th17 cells and regulatory T-cells in SLE patients [14] and has also been confirmed as one of the genetic factors susceptible to JIA, especially oligoarticular JIA [15]. Aberrant expression of CD200R1 was shown to contribute to abnormal Th17 cell differentiation and chemotaxis in patients with rheumatoid arthritis [15]. Fig. 2ROC curves and AUC values from leave-one-out cross-validation of classifier between (a) negative controls (i.e., control), viral infected subjects (i.e., infection) and subjects with rheumatic diseases (i.e., Pedrheum); and more specifically between (b) CRMO, IFN, JIA and control/infection cases ROC curves and AUC values from leave-one-out cross-validation of classifier between (a) negative controls (i.e., control), viral infected subjects (i.e., infection) and subjects with rheumatic diseases (i.e., Pedrheum); and more specifically between (b) CRMO, IFN, JIA and control/infection cases More specific classifiers were then developed per disease group. As the number of rheumatic patients in our dataset was limited, these classifiers focused only on CRMO, IFN, and JIA groups, which had more subjects for model training and validation than the other disease groups. Three classifiers were developed to distinguish patients with CRMO (n = 6), IFN (n = 6), and JIA (n = 20) from control (n = 35) and infection (n = 46) cases. They worked quite well as their AUC values are above or equal to 0.8 (Fig. 2b). Since CRMO, IFN, and JIA were differentiated well from control and infection cases, it was subsequently important to examine how they could be distinguished from one another. ROC curves and AUC values of a classifier between CRMO, IFN, and JIA (Figure S2) indicated that IFN could be distinguished relatively well from CRMO and JIA (AUC = 0.7 ± 0.2/0.3), however CRMO is not easily differentiated from JIA (AUC = 0.5 ± 0.3), likely explained by the limited sample size. The Boruta-identified genes for these classifiers are also presented in Table S2. There were 349 selected genes from the CRMO-control-infection classifier, 247 genes from the IFN-control-infection classifier, and 286 genes from the JIA-control-infection classifier. As expected, more interferon-related genes were selected for the IFN classifier compared to those of CRMO and JIA. [SUBTITLE] Differential expression and gene ontology enrichment analyses [SUBSECTION] We analyzed the differentially expressed genes (DEGs) of CRMO, IFN, and JIA versus controls. The resulting DEGs were translated to corresponding Gene Ontology (GO) categories to understand which pathways were involved in the disease pathophysiology. Many of the top 10 GO categories of CRMO, IFN, and JIA groups are related to innate immunity including myeloid leukocyte and granulocyte activation, neutrophil activation and degranulation (Fig. 3a and Table S3). In IFN particularly, the immunity is largely mediated by antibacterial and antifungal defense responses. 
Results from GO analyses of CRMO, IFN, and JIA against the other Pedrheum groups are displayed in Figure S3 and Table S3. Although the classifiers could not adequately differentiate between CRMO and JIA, we noted that 1,106 DEGs could be found between CRMO and all other Pedrheum groups, 1,730 DEGs in the case of IFN, and 1,216 DEGs for JIA (Table S5). Additionally, more than 170 DEGs were found between CRMO and IFN, CRMO and JIA, as well as between IFN and JIA (Table S5). We analyzed the differentially expressed genes (DEGs) of CRMO, IFN, and JIA versus controls. The resulting DEGs were translated to corresponding Gene Ontology (GO) categories to understand which pathways were involved in the disease pathophysiology. Many of the top 10 GO categories of CRMO, IFN, and JIA groups are related to innate immunity including myeloid leukocyte and granulocyte activation, neutrophil activation and degranulation (Fig. 3a and Table S3). In IFN particularly, the immunity is largely mediated by antibacterial and antifungal defense responses. Results from GO analyses of CRMO, IFN, and JIA against the other Pedrheum groups are displayed in Figure S3 and Table S3. Although the classifiers could not adequately differentiate between CRMO and JIA, we noted that 1,106 DEGs could be found between CRMO and all other Pedrheum groups, 1,730 DEGs in the case of IFN, and 1,216 DEGs for JIA (Table S5). Additionally, more than 170 DEGs were found between CRMO and IFN, CRMO and JIA, as well as between IFN and JIA (Table S5). [SUBTITLE] ISG scores [SUBSECTION] Using the whole blood gene expression obtained from 3’ mRNA sequencing, we calculated the ISG scores of IFN patients and compared them with those from other disease groups. As displayed in Fig. 3b, IFN patients had the highest mean score of 18. Other disease groups, although displaying lower mean scores than IFN (6.0 for AID, 4.0 for CRMO, 9.0 for HLA-B51, 3.9 for JIA, 7.8 for vasculitis, and 14 for infection cases), did include some patients with particularly high scores: one AID patient had a score of 58, one HLA-B51 patient had score 48, and one vasculitis patient had score 44. Interestingly, longitudinal tracking of ISG scores was proven feasible using 3’ mRNA sequencing. Indeed, we showed that one patient with Aicardi-Goutières syndrome had significantly high ISG scores at early presentations that decreased following initiation of JAK-inhibition via tofacitinib (see Figure S4). Using the whole blood gene expression obtained from 3’ mRNA sequencing, we calculated the ISG scores of IFN patients and compared them with those from other disease groups. As displayed in Fig. 3b, IFN patients had the highest mean score of 18. Other disease groups, although displaying lower mean scores than IFN (6.0 for AID, 4.0 for CRMO, 9.0 for HLA-B51, 3.9 for JIA, 7.8 for vasculitis, and 14 for infection cases), did include some patients with particularly high scores: one AID patient had a score of 58, one HLA-B51 patient had score 48, and one vasculitis patient had score 44. Interestingly, longitudinal tracking of ISG scores was proven feasible using 3’ mRNA sequencing. Indeed, we showed that one patient with Aicardi-Goutières syndrome had significantly high ISG scores at early presentations that decreased following initiation of JAK-inhibition via tofacitinib (see Figure S4).
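For reference, the ISG score discussed in this section follows the formula given in the methods: a per-gene relative expression RE = 2^-(count ISG − count GAPDH) is standardized against the control group and summed over the 28 selected ISGs. The Python sketch below is a minimal illustration of that computation; the gene names, sample labels and counts are hypothetical toy values, not study data.

```python
import pandas as pd

def isg_scores(counts: pd.DataFrame, isg_genes: list, controls: list) -> pd.Series:
    """ISG score per sample: sum over ISGs of (RE - mean_control) / sd_control,
    with RE = 2 ** -(count_ISG - count_GAPDH), as defined in the methods."""
    re = 2.0 ** -(counts.loc[isg_genes] - counts.loc["GAPDH"])   # genes x samples
    mu = re[controls].mean(axis=1)
    sd = re[controls].std(axis=1, ddof=1)
    return re.sub(mu, axis=0).div(sd, axis=0).sum(axis=0)

# Toy example: three ISGs plus GAPDH, two control samples and two patient samples.
counts = pd.DataFrame(
    {"ctrl1": [10, 11, 9, 14], "ctrl2": [11, 10, 10, 14],
     "pat1": [6, 7, 5, 14], "pat2": [7, 6, 6, 14]},
    index=["IFI27", "IFI44L", "ISG15", "GAPDH"],
)
print(isg_scores(counts, ["IFI27", "IFI44L", "ISG15"], ["ctrl1", "ctrl2"]))
```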
Conclusion
Overall, our study indicates that blood transcriptomics is a promising tool to improve the diagnosis of pediatric rheumatic diseases. The ease of sample collection as well as the continuous enhancement and affordability of sequencing techniques can overcome the challenges of patient heterogeneity and allow for further fruitful research.
[ "Background", "Methods", "Patients and controls", "RNA extraction", "3’ mRNA library preparation and sequencing", "Raw data processing", "Cluster identification", "Differential gene expression and gene ontology enrichment analyses", "Classifier development", "Calculation of interferon-stimulated genes’ scores", "Transcriptome profiles of rheumatic diseases, viral infection, and convalescent controls", "Classifier development", "Differential expression and gene ontology enrichment analyses", "ISG scores", "" ]
[ "Pediatric rheumatic diseases encompass a spectrum of autoimmune and autoinflammatory diseases that can affect the joints, muscles, bones, and other organs in children under the age of 16–18 years. Although many pediatric rheumatic diseases typically present with joint manifestations, other organs, including the eyes, skin, muscles, and gastrointestinal tract, may also be affected. Children who present with rheumatic symptoms often pose several challenges to their physicians. First, typical symptoms such as fever, rash, redness, pain and/or swelling at joints are common in many rheumatic diseases: (i) a transient/self-limiting process such as (reactive) infectious arthritis, (ii) a relapsing process like auto-inflammatory diseases (AID), (iii) a chronic condition like vasculitis, chronic recurrent multifocal osteomyelitis (CRMO), or juvenile idiopathic arthritis (JIA), (iv) an interferonopathy (IFN) such as dermatomyositis or systemic lupus erythematosus (SLE) characterized by dysregulation of type I interferon, or (v) diseases related to the human leukocyte antigen B51 serotype (HLA-B51). Second, once a disease has been confirmed, it is still difficult to make a rapid and definitive classification amongst the different subtypes within the disease due to its rarity and clinical presentation variability. This has challenged physicians’ efforts in making specific diagnoses and assigning proper treatments.\nTranscriptome profiling of blood cells has proven to be useful and efficacious in identifying gene expression signatures in rheumatic diseases [1, 2], enabling physicians to make data-driven and patient-specific decisions. Thanks to the use of transcriptomics, the participation of type I interferons participate in the pathophysiology of SLE [3] and dermatomyositis [4] was discovered and has become one of the paramount findings in rheumatology. Psoriatic arthritis is another exemplary disease demonstrating the usefulness of applying transcriptomics to identify important pathways related to interleukins IL-12, IL-17 and IL-23 [1]. Large datasets of gene expression generated from such studies are highly interesting, yet their systematic analysis and interpretation is quite challenging. Thus, there has been considerable interest in applying machine learning on blood cells’ gene expression in order to obtain new insights into the pathophysiology of rheumatic diseases which in turn may have important implications for their clinical management [5, 6].\nBy investigating the whole blood gene expression of children with rheumatic diseases in comparison with reactive/post-infection controls, we aim to develop computational classifiers based on the obtained transcriptome data that allow us to identify pediatric patients with rheumatic diseases and distinguish different rheumatic groups (e.g., CRMO, JIA, and IFN), and thus, ultimately, to improve the diagnosis of future patients.", "[SUBTITLE] Patients and controls [SUBSECTION] After obtaining written consent, 48 children (1 to 16 years old) with rheumatic diseases (i.e., AID, CRMO, IFN, JIA, vasculitis, and HLA-B51 related rheumatic diseases) were recruited, prior to any treatment except non-steroidal anti-inflammatory drugs (NSAIDs), from May 2016 until August 2020, at the Divisions of Pediatric Rheumatology of four hospitals in Belgium (Antwerp Hospital Network, Antwerp University Hospital, Brussels University Hospital, and Ghent University Hospital). Venous blood was collected into PAXgene® Blood RNA tubes (PreAnalytiX, Switzerland). 
Clinical details of the patients can be found in Table S1. As controls, 46 children with PCR-confirmed viral (mainly enterovirus) infections were also recruited and requested to provide blood twice: first while being actively infected, and second following remission [7]. Only 35 children agreed to a second venapunction as it was not obligatory.\nAfter obtaining written consent, 48 children (1 to 16 years old) with rheumatic diseases (i.e., AID, CRMO, IFN, JIA, vasculitis, and HLA-B51 related rheumatic diseases) were recruited, prior to any treatment except non-steroidal anti-inflammatory drugs (NSAIDs), from May 2016 until August 2020, at the Divisions of Pediatric Rheumatology of four hospitals in Belgium (Antwerp Hospital Network, Antwerp University Hospital, Brussels University Hospital, and Ghent University Hospital). Venous blood was collected into PAXgene® Blood RNA tubes (PreAnalytiX, Switzerland). Clinical details of the patients can be found in Table S1. As controls, 46 children with PCR-confirmed viral (mainly enterovirus) infections were also recruited and requested to provide blood twice: first while being actively infected, and second following remission [7]. Only 35 children agreed to a second venapunction as it was not obligatory.\n[SUBTITLE] RNA extraction [SUBSECTION] PAXgene® Blood RNA tubes were kept at -80 °C within 72 h after blood collection until use. RNA extraction was performed via a column-based RNA extraction using the PAXgene® Blood RNA extraction kit (Qiagen, Germany). To optimize RNA concentrations, we used the RNA Clean & Concentrator™-5 kit (Zymo Research, USA). We verified the RNA quality using the RNA ScreenTape Analysis on the Tapestation (Agilent, USA).\nPAXgene® Blood RNA tubes were kept at -80 °C within 72 h after blood collection until use. RNA extraction was performed via a column-based RNA extraction using the PAXgene® Blood RNA extraction kit (Qiagen, Germany). To optimize RNA concentrations, we used the RNA Clean & Concentrator™-5 kit (Zymo Research, USA). We verified the RNA quality using the RNA ScreenTape Analysis on the Tapestation (Agilent, USA).\n[SUBTITLE] 3’ mRNA library preparation and sequencing [SUBSECTION] All RNA samples were prepared with the QuantSeq3′ mRNA-Seq Library Prep Kit for Illumina (Lexogen GmbH, Austria) following the standard supplier’s protocol for long fragments. During the RNA removal step, we also added globin blockers, so none of the abundant globin mRNA was copied to double stranded cDNA. The resulting amplified cDNA libraries were equimolarly pooled and sequenced on NextSeq 500 (high output v2,5 kit, 150 cycli, single read) (Illumina, USA) with up to 40 samples per batch. This gave us an optimum of 10 million reads for each sample. All samples were prepared and sequenced in 4 batches.\nAll RNA samples were prepared with the QuantSeq3′ mRNA-Seq Library Prep Kit for Illumina (Lexogen GmbH, Austria) following the standard supplier’s protocol for long fragments. During the RNA removal step, we also added globin blockers, so none of the abundant globin mRNA was copied to double stranded cDNA. The resulting amplified cDNA libraries were equimolarly pooled and sequenced on NextSeq 500 (high output v2,5 kit, 150 cycli, single read) (Illumina, USA) with up to 40 samples per batch. This gave us an optimum of 10 million reads for each sample. 
All samples were prepared and sequenced in 4 batches.\n[SUBTITLE] Raw data processing [SUBSECTION] Raw data from the NextSeq was demultiplexed and further processed per batch through an in-house pipeline. The quality of all reads was evaluated using FastQC (v0.11.9) before and after processing with Trimmomatic (v0.36). Trimmomatic removed the leading 20 bases from reads, ensure a minimum quality score of 15 over a sliding window of 4 bases and required a minimum read length of 30 bases. As usage of oligodT primers might cause poly-A stretches at the 3′ end, the latter end was trimmed with our own in-house poly-A removal script. All sequences that remained after trimming were mapped against the human reference genome build 38 (polymorph variants excluded) with HISAT2 (v2.0.4). HTseq (v0.6.1) was used to count all reads for each gene and generate a readcount table.\nRaw data from the NextSeq was demultiplexed and further processed per batch through an in-house pipeline. The quality of all reads was evaluated using FastQC (v0.11.9) before and after processing with Trimmomatic (v0.36). Trimmomatic removed the leading 20 bases from reads, ensure a minimum quality score of 15 over a sliding window of 4 bases and required a minimum read length of 30 bases. As usage of oligodT primers might cause poly-A stretches at the 3′ end, the latter end was trimmed with our own in-house poly-A removal script. All sequences that remained after trimming were mapped against the human reference genome build 38 (polymorph variants excluded) with HISAT2 (v2.0.4). HTseq (v0.6.1) was used to count all reads for each gene and generate a readcount table.\n[SUBTITLE] Cluster identification [SUBSECTION] Clustering is an unsupervised learning algorithm that is used to discover patterns in high-dimensional data that would not be easily identified by conventional statistics. The readcount table obtained after raw data processing was normalized by median scaling. Then, the t-distributed stochastic neighbor embedding (t-SNE) method and hierarchical clustering algorithms were applied on the normalized readcount data to assign data into different clusters based on the (dis)similarity in the gene expression between samples.\nClustering is an unsupervised learning algorithm that is used to discover patterns in high-dimensional data that would not be easily identified by conventional statistics. The readcount table obtained after raw data processing was normalized by median scaling. Then, the t-distributed stochastic neighbor embedding (t-SNE) method and hierarchical clustering algorithms were applied on the normalized readcount data to assign data into different clusters based on the (dis)similarity in the gene expression between samples.\n[SUBTITLE] Differential gene expression and gene ontology enrichment analyses [SUBSECTION] Preliminary filtering of the normalized readcount data was performed by removing genes with fewer than 10 readcounts over all samples. Differential gene expression analyses were performed using the DESeq2 Bioconductor package in the open-source statistical software R. To account for batch effects, the batch number was included in the DESeq2 design. Differentially expressed genes with log2 fold change ≥ 2 (either up- or down-regulated) and p-value < 0.01 (adjusted for multiple testing by the False Discovery Rate method) were passed on to gene ontology enrichment analysis using the topGO package in R. 
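The preceding sentences state the thresholds used to call differentially expressed genes. Since DESeq2 itself runs in R, the short pandas sketch below only illustrates applying those stated thresholds (|log2 fold change| ≥ 2 and FDR-adjusted p < 0.01) to an exported results table; the table contents and gene names are toy stand-ins, with columns following DESeq2's usual naming.

```python
import pandas as pd

# Toy stand-in for a DESeq2 results table; 'log2FoldChange' and 'padj' are DESeq2's
# standard column names for the fold change and the FDR-adjusted p-value.
res = pd.DataFrame(
    {"log2FoldChange": [2.4, -3.1, 0.6, 1.9], "padj": [0.001, 0.004, 0.20, 0.03]},
    index=["MMP9", "IFI44L", "ACTB", "CD3G"],
)

# Thresholds stated in the methods: |log2FC| >= 2 (up- or down-regulated) and padj < 0.01.
degs = res[(res["log2FoldChange"].abs() >= 2) & (res["padj"] < 0.01)]
print(degs.index.tolist())   # the gene list handed on to the GO enrichment step
```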
[SUBTITLE] Differential gene expression and gene ontology enrichment analyses [SUBSECTION] Preliminary filtering of the normalized readcount data removed genes with fewer than 10 readcounts over all samples. Differential gene expression analyses were performed with the DESeq2 Bioconductor package in the open-source statistical software R; to account for batch effects, the batch number was included in the DESeq2 design. Differentially expressed genes with log2 fold change ≥ 2 (either up- or down-regulated) and p-value < 0.01 (adjusted for multiple testing by the False Discovery Rate method) were passed on to gene ontology enrichment analysis with the topGO package in R, and Fisher's exact test was used to identify significantly enriched or depleted gene ontology terms relating to biological processes.
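A hedged sketch of how this filtering, DESeq2, and topGO workflow could look in R is given below; the object names (`counts`, `sample_info`), the example contrast, and the use of `org.Hs.eg.db` for GO mapping are illustrative assumptions, not the study's actual code.

```r
library(DESeq2)
library(topGO)

keep <- rowSums(counts) >= 10                      # drop genes with <10 reads over all samples
dds  <- DESeqDataSetFromMatrix(countData = counts[keep, ],
                               colData   = sample_info,          # has columns `group` and `batch`
                               design    = ~ batch + group)      # batch included in the design
dds  <- DESeq(dds)
res  <- results(dds, contrast = c("group", "JIA", "control"))    # example contrast

# DEGs: |log2 fold change| >= 2 and FDR-adjusted p < 0.01
deg <- rownames(res)[which(abs(res$log2FoldChange) >= 2 & res$padj < 0.01)]

# GO enrichment (biological process) with Fisher's exact test
gene_list        <- factor(as.integer(rownames(res) %in% deg))
names(gene_list) <- rownames(res)
go_data <- new("topGOdata", ontology = "BP", allGenes = gene_list,
               annot = annFUN.org, mapping = "org.Hs.eg.db", ID = "symbol")
go_test <- runTest(go_data, algorithm = "classic", statistic = "fisher")
GenTable(go_data, fisher = go_test, topNodes = 10)
```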
[SUBTITLE] Classifier development [SUBSECTION] Because of the large number of genes in the dataset, a feature selection step was performed with the Boruta package in R [8], using a Bonferroni correction, to identify genes with good predictive power for disease classification. The Boruta algorithm was applied to the normalized gene expression values obtained from DESeq2. With the Boruta-selected genes, classification models were trained with the Random Forest algorithm [9], also in R. The trained classifiers were validated with a leave-one-out cross-validation strategy: a single sample was removed from the dataset, a classification model was trained on the remaining samples, and the model was then used to classify the left-out sample; this process was repeated for every sample in the dataset. Receiver-operating-characteristic (ROC) curves and the area under the curve (AUC), computed with the pROC package [10] in R, were used to evaluate classifier performance. AUC values are displayed in the figures as mean ± 95% confidence interval.
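The following is a minimal sketch of the feature-selection and leave-one-out loop described above, assuming an `expr` samples × genes data frame and a factor `labels` with a level named "Pedrheum"; these names, and the use of Boruta's Bonferroni-style adjustment (`mcAdj = TRUE`), are assumptions made for illustration only.

```r
library(Boruta)
library(randomForest)
library(pROC)

set.seed(1)
# Boruta feature selection with multiple-testing (Bonferroni-style) adjustment
bor      <- Boruta(x = expr, y = labels, mcAdj = TRUE)
sel_gene <- getSelectedAttributes(bor, withTentative = FALSE)

# leave-one-out cross-validation of a Random Forest classifier
prob <- numeric(nrow(expr))
for (i in seq_len(nrow(expr))) {
  rf      <- randomForest(x = expr[-i, sel_gene], y = labels[-i])
  prob[i] <- predict(rf, expr[i, sel_gene, drop = FALSE], type = "prob")[, "Pedrheum"]
}

# one-vs-rest ROC curve and AUC with its 95% confidence interval
roc_obj <- roc(response = labels == "Pedrheum", predictor = prob)
auc(roc_obj)
ci.auc(roc_obj)
plot(roc_obj)
```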
[SUBTITLE] Calculation of interferon-stimulated genes' scores [SUBSECTION] First, relative expression (RE) was calculated from the readcounts of each interferon-stimulated gene (ISG) and the housekeeping gene GAPDH as $\text{RE} = 2^{-(\text{count}_{\text{ISG}} - \text{count}_{\text{GAPDH}})}$. The ISG score was then obtained by summing the individual RE values per gene after normalization to the control group: $\sum \left(\text{RE}_{\text{subject}} - \text{Mean}_{\text{control}}\right)/\text{SD}_{\text{control}}$. Twenty-eight ISG were selected for the ISG score calculation (Table S4) [11]. Statistical significance was assessed with the Mann-Whitney test (* P < 0.05; ** P < 0.005).
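A small R sketch implementing these two formulas literally is shown below; `counts` (a genes × samples matrix with gene symbols as row names), `isg_genes` (the 28 selected ISG), `ctrl` (control-sample index), and `group` are hypothetical placeholders, and in practice the counts may need to be normalized or log-scaled before this calculation.

```r
# RE = 2^-(count_ISG - count_GAPDH), computed per sample for one gene
relative_expr <- function(gene, counts) {
  2^-(counts[gene, ] - counts["GAPDH", ])
}

# 28 x n_samples matrix of relative expression values
re <- t(sapply(isg_genes, relative_expr, counts = counts))

# z-score each ISG against the control group, then sum over the 28 genes
ctrl_mean <- rowMeans(re[, ctrl])
ctrl_sd   <- apply(re[, ctrl], 1, sd)
isg_score <- colSums((re - ctrl_mean) / ctrl_sd)

# compare groups, e.g. interferonopathy patients vs. controls (Mann-Whitney)
wilcox.test(isg_score[group == "IFN"], isg_score[ctrl])
```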
[SUBTITLE] Transcriptome profiles of rheumatic diseases, viral infection, and convalescent controls [SUBSECTION] We compared the transcriptome profiles of six rheumatic disease groups (JIA, AID, CRMO, HLA-B51, IFN, and vasculitis) with viral infection and convalescent controls. Clustering analyses using the t-SNE and hierarchical algorithms are displayed in Fig. 1 and Figure S1, respectively. As shown in the t-SNE plot in Fig. 1, most controls gathered in cluster 1 while infection cases grouped into a separate cluster 2, implying that the gene expression of actively infected cases and remission cases (i.e., controls) is substantially independent despite coming from the same participants. Patients with different rheumatic diseases, however, were not well distinguished and were mostly assigned to cluster 3, while cluster 4 contained a mixture of different categories.
Fig. 1. t-SNE plot of the 4 different clusters.

[SUBTITLE] Classifier development [SUBSECTION] The Random Forest algorithm was used for classifier development because its ensemble learning approach is robust to outliers, stable with new data, and can handle non-linear correlations. The first classifier was developed to distinguish between control, infection, and pediatric rheumatic cases based on normalized transcriptome data. Leave-one-out cross-validation results in Fig. 2a confirm that the classifier could differentiate pediatric rheumatic patients from negative controls (AUC = 0.8 ± 0.1) and from viral infection cases (AUC = 0.7 ± 0.1). The Boruta algorithm selected 349 of the 31,319 initial genes (Table S2) for training this classifier between control, infection, and pediatric rheumatic cases. Notable selected genes included CD3G, CD96, and CD200R1 (CD200 receptor 1). CD3G encodes the CD3γ polypeptide, which forms part of the CD3-TCR (T-cell receptor) complex; this complex plays an important role in antigen recognition and in several intracellular signal-transduction pathways. This finding indicates that some rheumatic diseases are specifically connected to the alteration and malfunction of γ T-cells, and previous studies have also reported the association of γ and δ T-cells with (immunodeficiency and) autoimmune diseases [12]. CD96 is expressed on T-cells and natural killer cells and belongs to a family of molecules that provide costimulatory and coinhibitory signals during T-cell activation; it was shown to inhibit the expansion and IL-9 production of Th17 cells and thus to reduce inflammation and pathogenicity [13]. CD200R1 is also expressed on T-cells as well as myeloid cells. It was reported to alter the balance between Th17 cells and regulatory T-cells in SLE patients [14] and has been confirmed as one of the genetic susceptibility factors for JIA, especially oligoarticular JIA [15]; aberrant expression of CD200R1 was also shown to contribute to abnormal Th17 cell differentiation and chemotaxis in patients with rheumatoid arthritis [15].

Fig. 2. ROC curves and AUC values from leave-one-out cross-validation of classifiers between (a) negative controls (control), virally infected subjects (infection), and subjects with rheumatic diseases (Pedrheum); and, more specifically, between (b) CRMO, IFN, JIA and control/infection cases.

More specific classifiers were then developed per disease group. As the number of rheumatic patients in our dataset was limited, these classifiers focused only on the CRMO, IFN, and JIA groups, which had more subjects available for model training and validation than the other disease groups. Three classifiers were developed to distinguish patients with CRMO (n = 6), IFN (n = 6), and JIA (n = 20) from control (n = 35) and infection (n = 46) cases; these performed well, with AUC values of 0.8 or higher (Fig. 2b). Since CRMO, IFN, and JIA were differentiated well from control and infection cases, it was subsequently important to examine how they could be distinguished from one another.
ROC curves and AUC values of a classifier between CRMO, IFN, and JIA (Figure S2) indicated that IFN could be distinguished relatively well from CRMO and JIA (AUC = 0.7 ± 0.2/0.3), whereas CRMO was not easily differentiated from JIA (AUC = 0.5 ± 0.3), which is likely explained by the limited sample size. The Boruta-identified genes for these classifiers are also presented in Table S2: 349 genes were selected for the CRMO-control-infection classifier, 247 for the IFN-control-infection classifier, and 286 for the JIA-control-infection classifier. As expected, more interferon-related genes were selected for the IFN classifier than for the CRMO and JIA classifiers.

[SUBTITLE] Differential expression and gene ontology enrichment analyses [SUBSECTION] We analyzed the differentially expressed genes (DEGs) of CRMO, IFN, and JIA versus controls. The resulting DEGs were translated to corresponding Gene Ontology (GO) categories to understand which pathways are involved in the disease pathophysiology. Many of the top 10 GO categories of the CRMO, IFN, and JIA groups relate to innate immunity, including myeloid leukocyte and granulocyte activation and neutrophil activation and degranulation (Fig. 3a and Table S3). In IFN particularly, the immunity is largely mediated by antibacterial and antifungal defense responses. Results from GO analyses of CRMO, IFN, and JIA against the other Pedrheum groups are displayed in Figure S3 and Table S3. Although the classifiers could not adequately differentiate between CRMO and JIA, we noted 1,106 DEGs between CRMO and all other Pedrheum groups, 1,730 for IFN, and 1,216 for JIA (Table S5). Additionally, more than 170 DEGs were found between CRMO and IFN, between CRMO and JIA, and between IFN and JIA (Table S5).

[SUBTITLE] ISG scores [SUBSECTION] Using the whole blood gene expression obtained from 3′ mRNA sequencing, we calculated the ISG scores of IFN patients and compared them with those of the other disease groups. As displayed in Fig. 3b, IFN patients had the highest mean score (18). The other disease groups, although displaying lower mean scores than IFN (6.0 for AID, 4.0 for CRMO, 9.0 for HLA-B51, 3.9 for JIA, 7.8 for vasculitis, and 14 for infection cases), did include some patients with particularly high scores: one AID patient had a score of 58, one HLA-B51 patient a score of 48, and one vasculitis patient a score of 44. Interestingly, longitudinal tracking of ISG scores proved feasible using 3′ mRNA sequencing: one patient with Aicardi-Goutières syndrome had markedly elevated ISG scores at early presentations that decreased following initiation of JAK inhibition with tofacitinib (Figure S4).

[SUBTITLE] Electronic supplementary material [SUBSECTION] Below is the link to the electronic supplementary material: Supplementary Material 1, Supplementary Material 2, Supplementary Material 3, Supplementary Material 4, Supplementary Material 5, Supplementary Material 6.
[ "Background", "Methods", "Patients and controls", "RNA extraction", "3’ mRNA library preparation and sequencing", "Raw data processing", "Cluster identification", "Differential gene expression and gene ontology enrichment analyses", "Classifier development", "Calculation of interferon-stimulated genes’ scores", "Results", "Transcriptome profiles of rheumatic diseases, viral infection, and convalescent controls", "Classifier development", "Differential expression and gene ontology enrichment analyses", "ISG scores", "Discussion", "Conclusion", "Electronic supplementary material", "" ]
[ "Pediatric rheumatic diseases encompass a spectrum of autoimmune and autoinflammatory diseases that can affect the joints, muscles, bones, and other organs in children under the age of 16–18 years. Although many pediatric rheumatic diseases typically present with joint manifestations, other organs, including the eyes, skin, muscles, and gastrointestinal tract, may also be affected. Children who present with rheumatic symptoms often pose several challenges to their physicians. First, typical symptoms such as fever, rash, redness, pain and/or swelling at joints are common in many rheumatic diseases: (i) a transient/self-limiting process such as (reactive) infectious arthritis, (ii) a relapsing process like auto-inflammatory diseases (AID), (iii) a chronic condition like vasculitis, chronic recurrent multifocal osteomyelitis (CRMO), or juvenile idiopathic arthritis (JIA), (iv) an interferonopathy (IFN) such as dermatomyositis or systemic lupus erythematosus (SLE) characterized by dysregulation of type I interferon, or (v) diseases related to the human leukocyte antigen B51 serotype (HLA-B51). Second, once a disease has been confirmed, it is still difficult to make a rapid and definitive classification amongst the different subtypes within the disease due to its rarity and clinical presentation variability. This has challenged physicians’ efforts in making specific diagnoses and assigning proper treatments.\nTranscriptome profiling of blood cells has proven to be useful and efficacious in identifying gene expression signatures in rheumatic diseases [1, 2], enabling physicians to make data-driven and patient-specific decisions. Thanks to the use of transcriptomics, the participation of type I interferons participate in the pathophysiology of SLE [3] and dermatomyositis [4] was discovered and has become one of the paramount findings in rheumatology. Psoriatic arthritis is another exemplary disease demonstrating the usefulness of applying transcriptomics to identify important pathways related to interleukins IL-12, IL-17 and IL-23 [1]. Large datasets of gene expression generated from such studies are highly interesting, yet their systematic analysis and interpretation is quite challenging. Thus, there has been considerable interest in applying machine learning on blood cells’ gene expression in order to obtain new insights into the pathophysiology of rheumatic diseases which in turn may have important implications for their clinical management [5, 6].\nBy investigating the whole blood gene expression of children with rheumatic diseases in comparison with reactive/post-infection controls, we aim to develop computational classifiers based on the obtained transcriptome data that allow us to identify pediatric patients with rheumatic diseases and distinguish different rheumatic groups (e.g., CRMO, JIA, and IFN), and thus, ultimately, to improve the diagnosis of future patients.", "[SUBTITLE] Patients and controls [SUBSECTION] After obtaining written consent, 48 children (1 to 16 years old) with rheumatic diseases (i.e., AID, CRMO, IFN, JIA, vasculitis, and HLA-B51 related rheumatic diseases) were recruited, prior to any treatment except non-steroidal anti-inflammatory drugs (NSAIDs), from May 2016 until August 2020, at the Divisions of Pediatric Rheumatology of four hospitals in Belgium (Antwerp Hospital Network, Antwerp University Hospital, Brussels University Hospital, and Ghent University Hospital). Venous blood was collected into PAXgene® Blood RNA tubes (PreAnalytiX, Switzerland). 
Three classifiers were developed to distinguish patients with CRMO (n = 6), IFN (n = 6), and JIA (n = 20) from control (n = 35) and infection (n = 46) cases. They worked quite well as their AUC values are above or equal to 0.8 (Fig. 2b). Since CRMO, IFN, and JIA were differentiated well from control and infection cases, it was subsequently important to examine how they could be distinguished from one another. ROC curves and AUC values of a classifier between CRMO, IFN, and JIA (Figure S2) indicated that IFN could be distinguished relatively well from CRMO and JIA (AUC = 0.7 ± 0.2/0.3), however CRMO is not easily differentiated from JIA (AUC = 0.5 ± 0.3), likely explained by the limited sample size. The Boruta-identified genes for these classifiers are also presented in Table S2. There were 349 selected genes from the CRMO-control-infection classifier, 247 genes from the IFN-control-infection classifier, and 286 genes from the JIA-control-infection classifier. As expected, more interferon-related genes were selected for the IFN classifier compared to those of CRMO and JIA.", "We analyzed the differentially expressed genes (DEGs) of CRMO, IFN, and JIA versus controls. The resulting DEGs were translated to corresponding Gene Ontology (GO) categories to understand which pathways were involved in the disease pathophysiology. Many of the top 10 GO categories of CRMO, IFN, and JIA groups are related to innate immunity including myeloid leukocyte and granulocyte activation, neutrophil activation and degranulation (Fig. 3a and Table S3). In IFN particularly, the immunity is largely mediated by antibacterial and antifungal defense responses. Results from GO analyses of CRMO, IFN, and JIA against the other Pedrheum groups are displayed in Figure S3 and Table S3. Although the classifiers could not adequately differentiate between CRMO and JIA, we noted that 1,106 DEGs could be found between CRMO and all other Pedrheum groups, 1,730 DEGs in the case of IFN, and 1,216 DEGs for JIA (Table S5). Additionally, more than 170 DEGs were found between CRMO and IFN, CRMO and JIA, as well as between IFN and JIA (Table S5).", "Using the whole blood gene expression obtained from 3’ mRNA sequencing, we calculated the ISG scores of IFN patients and compared them with those from other disease groups. As displayed in Fig. 3b, IFN patients had the highest mean score of 18. Other disease groups, although displaying lower mean scores than IFN (6.0 for AID, 4.0 for CRMO, 9.0 for HLA-B51, 3.9 for JIA, 7.8 for vasculitis, and 14 for infection cases), did include some patients with particularly high scores: one AID patient had a score of 58, one HLA-B51 patient had score 48, and one vasculitis patient had score 44. Interestingly, longitudinal tracking of ISG scores was proven feasible using 3’ mRNA sequencing. Indeed, we showed that one patient with Aicardi-Goutières syndrome had significantly high ISG scores at early presentations that decreased following initiation of JAK-inhibition via tofacitinib (see Figure S4).", "Application of transcriptomics techniques such as microarray or sequencing on blood or synovial fluid of rheumatic patients has been a key transforming factor in rheumatology [1]. In the current study, we showed that 3’ mRNA from whole blood can provide adequate information for the differentiation between controls, viral infections, and pediatric rheumatic diseases. 
We demonstrated that the Random Forest algorithm can be applied on blood transcriptome data to determine whether a pediatric patient has reactive/post-infection phenomena or an autoimmune/autoinflammatory disease. Furthermore, studies on autoimmune and autoinflammatory diseases have largely focused on adaptive immunity, that is, regulatory and autoreactive T-cells [16]. Our study hereby provides evidence that innate immunity may also have an important role in the pathophysiology of pediatric rheumatic autoimmune and autoinflammatory diseases, including CRMO, IFN, and JIA. Moreover, we showed that the activation and immune response of myeloid cells form participate in the biological pathways underlying JIA, CRMO and IFN – three of the most common rheumatic diseases in children. Via the DEG and GO analyses, we found that the immunological activities of innate cells, such as neutrophils and granulocytes, were highly associated with CRMO, IFN, and JIA compared to the viral convalescent controls. Interestingly, in the IFN group, the immunity seems to be largely mediated by antibacterial and antifungal defense responses. This could be a sign of molecular mimicry where self-derived peptides resemble foreign antigens and thus stimulate the antigen-specific autoreactive T-cells or B-cells, which in turn results in the production of pro-inflammatory cytokines [17, 18]. One of them is type I interferon, which plays an important role in the pathophysiology of IFN disease and which also has been reported to participate in the immune response against viral, bacterial, fungal pathogens, and parasites [19]. GO analyses comparing CRMO, IFN, and JIA with the other Pedrheum groups (Figure S3 and Table S3) reveal that CRMO is specifically driven by cellular response to cytokine, JIA by cellular response to chemical stimulus, and IFN by the activation of myeloid leukocytes, neutrophils, and granulocytes. The dysregulated cytokine expression from innate immune cells has been concluded to have central contribution to the inflammatory phenotype of CRMO by Hofmann et al. [20]. JIA on the other hand has been found to be associated with antibiotics exposure in a dose- and time- dependent fashion in a large pediatric population by Horton et al. [21].\n\nFig. 3(a) Gene ontology enrichment analysis of CRMO-, IFN-, and JIA-associated genes. Bar charts the top 10 GO terms for biological process. (b) ISG score by disease group; horizontal lines represent median values of each group; Mann-Whitney test for statistical significance: * P < 0.05; ** P < 0.005\n\n(a) Gene ontology enrichment analysis of CRMO-, IFN-, and JIA-associated genes. Bar charts the top 10 GO terms for biological process. (b) ISG score by disease group; horizontal lines represent median values of each group; Mann-Whitney test for statistical significance: * P < 0.05; ** P < 0.005\nThe over-expression of ISG is a useful biomarker of IFN diseases, including SLE. Although the interferon signature was first defined in SLE patients, different ISG are investigated to classify pathological conditions of other interferonopathies and lupus-like disorders (e.g., dermatomyositis, Aicardi-Goutières syndrome) to guide molecular diagnostics and to formulate targeted therapy approaches [22]. 
Quantitative polymerase chain reaction (qPCR) has been the method of choice to measure the expression of ISG and estimate the ISG score, but this approach has low throughput as qPCR can only analyze pre-determined genes and the number of genes to be analyzed simultaneously is limited [23]. Using whole blood gene expression obtained from 3’ mRNA sequencing, we were able to calculate the ISG scores without gene pre-selection or gene number limitation. Our data show that the average score of IFN group was the highest among all disease categories, thereby confirming that the highly expressed ISG are signatures of interferonopathies. More importantly, we showed that robust longitudinal tracking of ISG scores is possible with the use of 3’ mRNA sequencing. Beside the IFN group, viral infection cases also displayed high average ISG scores due to the participation of type I interferon in the immune responses against viral infection [19].\nDespite demonstrating that blood RNA sequencing can distinguish autoimmune/autoinflammatory diseases from viral infection/post-infection cases, as well as reveal their key genes and pathways, our study is limited by the sample size of each rheumatic disease, due to which cross-validation was done to validate the classifiers’ performance instead of training and testing on independent datasets. The limited number of rheumatic cases may also be responsible for the modest cross-validation performance of the CRMO-IFN-JIA classifier although rheumatic diseases in general could be identified well from controls and infections. Another limitation that challenges the predictive performance of our classifiers is disease heterogeneity. IFN and JIA are examples of heterogeneous groups that pose great difficulties for the classifiers to differentiate from other groups because they contain several disease subgroups (e.g., dermatomyositis and systemic lupus erythematosus were grouped together as IFN, and JIA included polyarticular, oligoarticular types, as well as spondyloarthropathies). The heterogeneity of diseases often obstructs explicit modelling of underlying distributions of individual features, which can be even more problematic when the sample population is small [24]. Finally, although efforts were made to minimize batch effects, these cannot always be completely avoided.\nTo incorporate the expression-based assessment into clinical use, it is necessary to demonstrate that the reliability and accuracy of this approach is comparable or superior to the currently used methods. The development of diagnostic criteria is challenging in rheumatology due to the heterogeneity of many rheumatic diseases, variable clinical presentations, and complex pathophysiology. Given the lack of optimal diagnostic criteria, physicians must rely on a complicated decision-making process based on a combination of symptoms, physical examination, exclusion of competing diagnoses, and geographic prevalence [25]. Moreover, there is an ongoing concern that physical examination is insensitive in detecting subtle, smoldering synovitis [26]. Since the completion of the Human Genome Project in 2003, next-generation sequencing has seen great improvements in technique and decline in cost. In addition, the whole protocol from RNA extraction, library preparation, and sequencing until data pre-processing and disease classification using machine learning only takes 4–5 days in our experience, given that a classifier is already developed and validated. 
This is a similar average amount of time it would take to complete a blood test and other assessments for the diagnosis of rheumatic diseases. Thus, the approach proposed in this study, where machine learning is developed based on blood transcriptome data, should be highly affordable and applicable in clinical practice. Since gene expression variations in blood cells of rheumatic patients can predate the clinical manifestations [27], blood gene expression profiling is useful in identifying new biomarkers of pediatric rheumatic diseases and, together with the machine learning classifiers presented in this study – after further development and validation – will be an efficient tool for early diagnosis and heterogeneity exploration of pediatric rheumatic diseases. We believe that adding more cases of pediatric rheumatic diseases to the database will provide more data for the classifier to be trained on, allowing it to capture more distinct transcriptome features and variances of each disease, as well as get validated on an independent test set. It would also be useful to collect blood from the same patients over fixed time periods or before and after therapy to obtain longitudinal transcriptome data so that we can update and improve the model to foresee patients’ clinical course or treatment response.", "Overall, our study indicates that blood transcriptomics is a promising tool to improve the diagnosis of pediatric rheumatic diseases. The ease of sample collection as well as the continuous enhancement and affordability of sequencing techniques can overcome the challenges of patient heterogeneity and allow for further fruitful research.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1\n\nSupplementary Material 2\n\nSupplementary Material 2\n\nSupplementary Material 3\n\nSupplementary Material 3\n\nSupplementary Material 4\n\nSupplementary Material 4\n\nSupplementary Material 5\n\nSupplementary Material 5\n\nSupplementary Material 6\n\nSupplementary Material 6\nBelow is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1\n\nSupplementary Material 2\n\nSupplementary Material 2\n\nSupplementary Material 3\n\nSupplementary Material 3\n\nSupplementary Material 4\n\nSupplementary Material 4\n\nSupplementary Material 5\n\nSupplementary Material 5\n\nSupplementary Material 6\n\nSupplementary Material 6", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 1\n\nSupplementary Material 2\n\nSupplementary Material 2\n\nSupplementary Material 3\n\nSupplementary Material 3\n\nSupplementary Material 4\n\nSupplementary Material 4\n\nSupplementary Material 5\n\nSupplementary Material 5\n\nSupplementary Material 6\n\nSupplementary Material 6" ]
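Relating to the ISG scores discussed in this record: the exact scoring formula is not spelled out in the text, so the sketch below assumes one common approach, z-scoring a fixed panel of interferon-stimulated genes against the healthy controls and taking the per-sample median. The six-gene panel and all identifiers are illustrative, not the study's actual panel.

```python
# Sketch: ISG score from log-normalized whole-blood expression (rows = samples, columns = genes).
# The panel and the aggregation (median of control-referenced z-scores) are assumptions.
import pandas as pd

ISG_PANEL = ["IFI27", "IFI44L", "IFIT1", "ISG15", "RSAD2", "SIGLEC1"]  # illustrative panel

def isg_scores(expr: pd.DataFrame, control_ids) -> pd.Series:
    panel = [g for g in ISG_PANEL if g in expr.columns]
    ctrl = expr.loc[control_ids, panel]
    z = (expr[panel] - ctrl.mean()) / ctrl.std(ddof=1)  # z-score relative to healthy controls
    return z.median(axis=1)                             # one ISG score per sample
```

Because a score is produced per sample, repeated samples from the same patient can be scored over time, which is the kind of longitudinal tracking illustrated for the Aicardi-Goutières case in Figure S4.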
[ null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, "discussion", "conclusion", "supplementary-material", null ]
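For the DEG-to-GO step described in the results of this record, a plain over-representation test is sketched here using the hypergeometric distribution; the study's actual enrichment tool and gene-set source are not named in this excerpt, so go_sets, the DEG list and the background set are placeholders.

```python
# Sketch: GO over-representation of a DEG list via a hypergeometric test.
# go_sets: dict mapping GO term -> set of member genes; background: all genes tested for DE.
from scipy.stats import hypergeom

def go_enrichment(degs, go_sets, background):
    background = set(background)
    degs = set(degs) & background
    results = []
    for term, members in go_sets.items():
        members = set(members) & background
        k = len(members & degs)                   # DEGs falling in this GO term
        if k == 0:
            continue
        # P(X >= k) when drawing len(degs) genes from the background
        p = hypergeom.sf(k - 1, len(background), len(members), len(degs))
        results.append((term, k, p))
    return sorted(results, key=lambda r: r[2])    # multiple-testing correction not shown
```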
[ "Pediatric rheumatic diseases", "RNA sequencing", "Blood transcriptomics", "Classification model" ]
Importance of oxidative stress in the evaluation of acute pulmonary embolism severity.
36253755
Pulmonary embolism (PE) is a common and potentially life-threatening disorder. Our study aimed to investigate whether oxidative stress markers can be used as clinical markers in the evaluation of acute PE (APE) severity.
BACKGROUND
Forty-seven patients with an objectively documented diagnosis of APE were recorded. Of these patients, 14 had low-risk PE, 16 had moderate-risk PE, and 17 had high-risk PE. Twenty-one healthy subjects were also enrolled in this study. Ischemia-modified albumin (IMA), prooxidant-antioxidant balance (PAB), advanced protein oxidation products (AOPPs), and ferric reducing antioxidant power (FRAP) were measured as oxidative stress parameters to evaluate the role of oxidative stress.
METHODS
In the low-risk and moderate-risk APE groups, AOPPs and PAB levels were significantly higher and FRAP levels were significantly lower than those in the control group. AOPPs and IMA levels in the patients with high-risk PE were significantly higher than those in both the low-risk and moderate-risk APE patients. There was a significant correlation between levels of AOPPs and the levels of both IMA (r: 0.462, p < 0.001) and PAB (r: 0.378, p < 0.005). Serum FRAP levels were negatively correlated with PAB (r: −0.683, p < 0.001) and AOPPs levels (r: −0.384, p < 0.001). There was also a significant positive correlation between the serum IMA and PAB levels.
RESULTS
We clearly demonstrated that reactive oxygen species formation is significantly enhanced in APE. IMA and AOPPs may be used as clinical markers in the evaluation of APE severity in clinical practice. However, further studies with larger patient populations and longer follow-up periods are required to confirm the mechanisms underlying these findings.
CONCLUSIONS
[ "Humans", "Advanced Oxidation Protein Products", "Antioxidants", "Biomarkers", "Oxidative Stress", "Pulmonary Embolism", "Reactive Oxygen Species", "Serum Albumin" ]
9575210
Introduction
Pulmonary embolism (PE) is a relatively common cardiovascular emergency. By occluding the pulmonary arterial bed, it may lead to acute life-threatening situations. PE is a difficult diagnosis that may be missed because of its non-specific clinical presentation [1]. Acute PE (APE) is a life-threatening disease leading to reperfusion of previously ischemic lung parenchyma. Pulmonary infarction occurs with the development of hemorrhagic necrosis in the lung parenchyma distal to the pulmonary artery occluded in PE [2]. Oxidative stress accompanies this phenomenon. Previous studies have shown that hypoxia-reoxygenation and ischemia-reperfusion cause oxidative stress, with the production of oxygen free radicals exceeding the endogenous antioxidant capacity [3, 4]. In addition, during ischemia, a modified form of albumin known as ischemia-modified albumin (IMA) is generated as a result of structural changes at the metal-binding amino-terminal end of serum albumin [5]. An increase in IMA concentration is currently used as a marker of myocardial ischemia in the evaluation of patients with acute coronary syndrome [6]. There are also studies evaluating IMA measurement in the diagnosis of PE [7, 8]. Our clinical study aimed to investigate changes in oxidative and antioxidant markers, namely IMA, advanced oxidation protein products (AOPPs), prooxidant-antioxidant balance (PAB) and ferric reducing antioxidant power (FRAP), in patients diagnosed with APE who were classified as high-risk, moderate-risk or low-risk before the initiation of anticoagulant/thrombolytic therapy.
null
null
Results
General and clinical characteristics of PE patients grouped into three categories were given in Table 1. The high-risk PE group had significantly lower platelet count, serum albumin and protein levels when compared to both the low-risk (p < 0.05, p < 0.05 and p < 0.01) and moderate-risk PE groups (for each p < 0.05). Table 1General and clinical characteristics of patients with acute pulmonary embolism (APE)Controls(n:21)Low-risk(n:14)Intermediate-risk(n:16)High-risk(n:17) p Gender (M/F) 11/107/74/1211/6– BMI (mm2/kg)25.34 ± 2.5127.83 ± 5.5429.99 ± 4.1127.87 ± 4.95> 0.05 Hypertension (n)– 587 Diabetes mellitus (n) –231COPD (n) –315 Immobilization (n) –101012 Trauma (n) –020 Malignancy (n) –456 Pregnancy (n) –111 Major general surgery (n) –355 Pulse rate per minute –94.79 ± 10.887.88 ± 14.1998.71 ± 19.41< 0.05 Number of breaths per minute –22.21 ± 5.1223.75 ± 4.9125.18 ± 5.26< 0.05 Systolic blood pressure (mmHg) 120 ± 8.15119.29 ± 11.91120.31 ± 14.6692.65 ± 13.71< 0.05 Diastolic blood pressure (mmHg) 82.34 ± 3.6472.86 ± 8.2575.31 ± 9.5759.71 ± 11.79< 0.01  C-reactive protein (mg/dL) 1.87 ± 0.0944.64 ± 39.1654.62 ± 43.9355.04 ± 44.73< 0.01 Leukocyte counts ( /mm³) 7896 ± 14988945 ± 38129281 ± 34669730 ± 4815< 0.05 Hemoglobin (g/dL) 12.46 ± 1.7412.44 ± 1.510.89 ± 3.311.81 ± 2.76> 0.05 Hematocrit (%) 41.85 ± 3.7635.73 ± 4.2234.24 ± 5.5335.98 ± 8.27> 0.05 Platelet counts ( /mm³) 325.87 ± 48.98273.79 ± 93.67288.5 ± 164.3199.29 ± 91.27< 0.01 Albumin (g/dL) 4.12 ± 0.343.59 ± 0.293.52 ± 0.453.01 ± 0.91> 0.05 Total protein (g/dL) 7.83 ± 0.26.76 ± 0.356.67 ± 0.46.36 ± 0.33> 0.05 NT-proBNP (pg/mL) 35.72 ± 24.6575.21 ± 67.2274.32 ± 490.56809.76 ± 705.12< 0.001 cTnT (ng/mL) 0 ± 0.010 ± 0.010.03 ± 0.090.04 ± 0.09> 0.05 COPD, chronic obstructive pulmonary disease; NT-proBNP, N-terminal prohormone of brain natriuretic peptide General and clinical characteristics of patients with acute pulmonary embolism (APE) COPD, chronic obstructive pulmonary disease; NT-proBNP, N-terminal prohormone of brain natriuretic peptide The serum oxidative stress marker levels of the groups were given in Table 2. In the low-risk and moderate-risk PE groups, serum AOPPs and PAB levels were significantly higher (p < 0.001 and p < 0.001; p < 0.001 and p < 0.001) and FRAP levels were significantly lower (p < 0.001; p < 0.001) than those in the control group. There was no significant difference between the low-risk and moderate-risk PE groups concerning the oxidative marker levels. Patients with high-risk PE had significantly higher AOPPs, IMA and PAB (for each p < 0.001) and significantly lower FRAP level (p < 0.001) than controls. AOPPs and IMA levels were significantly higher in the patients with high-risk PE when compared to both the low-risk (p < 0.001 and p < 0.001) and moderate-risk PE patients (p < 0.01 and p < 0.05). Although the mean serum PAB level was higher in the high-risk PE group when compared to both the moderate-risk and high-risk PE groups, differences did not reach statistically significant levels. 
Table 2. Serum advanced protein oxidation products (AOPPs), ferric reducing antioxidant power (FRAP), pro-oxidant-antioxidant balance (PAB) and ischemia-modified albumin (IMA) levels of the patients with pulmonary embolism (PE) and the controls

Parameter | Controls (n = 21) | Low-risk (n = 14) | Intermediate-risk (n = 16) | High-risk (n = 17) | p
IMA (U/mL) | 10.43 ± 1.96 | 10.12 ± 2.23 | 10.87 ± 2.67 | 14.66 ± 5.54 | < 0.01
AOPPs (µM chloramine T) | 39.6 ± 6.8 | 50.7 ± 13.4 | 57.7 ± 18.8 | 81.4 ± 22.9 | < 0.001
FRAP (M uric acids) | 0.20 ± 0.03 | 0.10 ± 0.03 | 0.10 ± 0.03 | 0.11 ± 0.05 | > 0.05
PAB (% H2O2) | 40.2 ± 3.9 | 89.1 ± 26.7 | 96.2 ± 19.9 | 105.3 ± 21.8 | < 0.01

There was a significant correlation between serum AOPPs level and both IMA (r: 0.462, p < 0.001) and PAB levels (r: 0.378, p < 0.005). Serum FRAP level was negatively correlated with PAB (r: −0.683, p < 0.001) and AOPPs levels (r: −0.384, p < 0.001). There was also a significant positive correlation between serum IMA and PAB levels (r: 0.380, p < 0.005).
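The correlations reported above are rank correlations (Spearman, per the statistical analysis subsection), so a small sketch of how such a marker-by-marker correlation table can be produced is given below; the DataFrame layout and column names are illustrative.

```python
# Sketch: pairwise Spearman correlations between oxidative-stress markers,
# analogous to the r and p values reported above. Column names are illustrative.
from itertools import combinations
import pandas as pd
from scipy.stats import spearmanr

def marker_correlations(df: pd.DataFrame, markers=("IMA", "AOPPs", "FRAP", "PAB")) -> pd.DataFrame:
    rows = []
    for a, b in combinations(markers, 2):
        rho, p = spearmanr(df[a], df[b], nan_policy="omit")
        rows.append({"pair": f"{a} vs {b}", "rho": round(rho, 3), "p": p})
    return pd.DataFrame(rows)
```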
null
null
[ "Sample collection and preparation", "Measurements of the concentrations of plasma advanced protein oxidation products (AOPPs)", "\n\nMeasurement of serum ferric reducing antioxidant power (FRAP) concentrations\n\n", "Measurement of serum prooxidant-antioxidant balance (PAB) concentrations", "Measurement of serum ischemia-modified albumin (IMA) concentrations", "Statistical analysis", "Limitation of study" ]
[ "Blood samples were taken from the brachial vein by using the venipuncture technique at the time of presentation. Vacutainer tubes without anticoagulants were used to obtain serum. Serum specimens were obtained after 15 min of centrifugation at 3000 rpm. Specimens to be used for measuring oxidative stress markers were pipetted into Eppendorf tubes and stored at − 80 °C.", "Spectrophotometric determinations of AOPPs concentrations were performed using a modification of the Gelisgen et al. method [13]. The coefficients of intra- and inter-assay variation were 3.1% (n = 20) and 3.5% (n = 20), respectively.", "The FRAP assay was performed according to the protocol of Benzie and Strain [14], with minor modifications. The coefficients of intra- and inter-assay variations were 4.2% (n = 20) and 5.1% (n = 20), respectively.", "The PAB assay was performed according to the method of Alamdari et al. [15] with minor modifications. The coefficients of intra- and inter-assay variation were 5.0% (n = 20) and 6.1% (n = 20), respectively.", "IMA concentration was assessed by a modification of the Bar-Or et al. method [6]. The coefficients of intra- and inter-assay variations were 4.1% (n = 20) and 5.2% (n = 20), respectively.\nThe other biochemical parameters were measured using routine methods with commercial kits and an autoanalyzer.", "Statistical analysis was performed using SPSS 17.0 version for Windows Statistical Program (SPSS, Chicago, IL, USA). All data were expressed as means ± standard deviation (SD). Power analysis was performed to determine the sample size, α = 0.05 β = 0.20, 1 − β = 0.80 and the power of the test was calculated as 0.80. Descriptive statistics were obtained, and data were tested for normality using the Kolmogorov-Smirnov test for Gaussian distribution. To compare the parameters of the various patients group, first the nonparametric Kruskal–Wallis test. The particular groups were compared by the Mann-Whitney U test. For correlation analysis, Spearman’s Rho Correlation Coefficient was determined. Values of p < 0.05 were considered significant.", "Although this study is prospective and seeks to answer an important clinical question, the study has its own limitations. Chief among these is the lack of information about whether treatments that may have normalized vital values were given, and if so, what these treatments were. Again, the fact that the sample size is not large enough and that it is a single-centered study can be counted among the limitations of the study.\nWe clearly demonstrated that ROS formation is significantly enhanced in PE. Since they are also measured by a rapid and low-cost technique, we suggest that IMA and AOPPs may be used as clinical markers in the evaluation of PE severity in clinical practice. Although serum IMA levels are affected by many physiological variables such as exercise and diseases such as heart failure, renal failure or liver failure, determination of serum IMA levels may be helpful to clinicians in cases in which the selection of treatment or imaging method is required. However, further studies with larger patient populations and longer follow-up periods are required to confirm the mechanisms underlying these findings." ]
[ null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Sample collection and preparation", "Measurements of the concentrations of plasma advanced protein oxidation products (AOPPs)", "\n\nMeasurement of serum ferric reducing antioxidant power (FRAP) concentrations\n\n", "Measurement of serum prooxidant-antioxidant balance (PAB) concentrations", "Measurement of serum ischemia-modified albumin (IMA) concentrations", "Statistical analysis", "Results", "Discussion", "Limitation of study" ]
[ "Pulmonary embolism (PE) is a relatively common cardiovascular emergency. By occluding the pulmonary arterial bed, it may lead to acute life-threatening situations. PE is a difficult diagnosis that may be missed because of its non-specific clinical presentation [1].\nOn the other hand, acute PE (APE) is a life-threatening disease leading to reperfusion of previously ischemia of the lung parenchyma. Pulmonary infarction occurs with the development of hemorrhagic necrosis in the lung parenchyma distal to the pulmonary artery occluded in PE [2]. Oxidative stress accompanies this phenomenon. Previous studies have proven that hypoxia-reoxygenation and ischemia-reperfusion cause oxidative stress accompanied by the production of oxygen free radicals exceeding the endogenous antioxidant capacity [3, 4]. In addition, in the case of ischemia, an albumin molecule called ischemia-modified albumin (IMA) is formed as a result of structural changes in the last amino terminal that binds metal in the serum albumin structure [5]. The increase in IMA concentration is currently used in the evaluation of patients with coronary syndrome as a marker of myocardial ischemia [6]. There are also studies evaluating IMA measurements in the diagnosis of PE [7, 8].\nIn our clinical research, it was aimed to investigate the changes in oxidative and antioxidant markers such as IMA, advanced oxidation protein products (AOPPs), prooxidants-antioxidants balance (PAB) and ferric reducing antioxidant power (FRAP) levels in patients diagnosed with APE were classified as high-risk, moderate-risk, low-risk before the initiation of anticoagulant / thrombolytic therapy.", " The study protocol was carried out by the ethical guidelines of the 1975 Declaration of Helsinki and the permission of the Institutional Scientific and Human Research Ethics Committee of the Istanbul Medical Faculty. Forty seven patients with PE (22 male, 25 female; mean age: 56.7 ± 15.42 years) and 21 healthy subjects (11 male, 10 female; mean age: 57.14 ± 10.59 years) were enrolled in the study. Each subject provided written informed consent and was informed clearly about the details of the study and blood sampling.\nTwenty-one healthy volunteers with no previous history of disease and normal physical examination findings were recruited into the control group and included in the study. APE was diagnosed by spiral CT angiography in all 47 patients with suspected PE by clinical and laboratory evaluation. Data were collected on the clinical and laboratory findings of the patients at diagnosis including arterial blood gas values and chest roentgenograms, the presence of underlying diseases or predisposing factors for pulmonary thromboembolism, the results of all diagnostic procedures including ECG, echocardiography, nuclear imaging studies, right heart catheterization, and pulmonary angiography. Both the simplified Well’s and the modified Geneva scores were used for clinical scoring [9, 10].\nCardiopulmonary resuscitation, hemodynamic instability (systolic blood pressure < 90 mmHg or a drop in systolic blood pressure by > 40 mmHg for > 15 min with signs of end-organ hypoperfusion), inotrope use, recurrent or diffuse APE, development of lower limb thrombosis, the use of mechanical ventilation within one month of admission, newly developed chest pain and dyspnea, and bleeding episode due to anticoagulant use were regarded as the adverse events [11]. 
All consecutive patients with suspicious clinical symptoms (acute onset dyspnea, chest pain, tachycardia, and tachypnea with the need for oxygen supplementation) associated with APE admitted to the emergency department were prospectively evaluated. All patients were stratified into risk classes according to the simplified Pulmonary Embolism Severity Index (sPESI) (high- and low-risk) and the algorithm proposed by the European Society of Cardiology (ESC) 2019 guidelines [12]. Severity assessment was carried out classifying patients into four groups: low risk (LR), intermediate risk (IMR), and high risk (HR). The principal criteria for categorizing PE as high risk (HR) were arterial hypotension and cardiogenic shock. Arterial hypotension was defined as a systolic arterial blood pressure < 90 mm Hg or a drop in systolic arterial blood pressure of at least 40 mm Hg for at least 15 min. Shock was defined as manifestation of tissue hypoperfusion and hypoxia, including an altered level of consciousness, oliguria or cool, clammy extremities. Intermediate risk (IMR) patients had preserved systemic arterial pressure and right ventricular dysfunction. Low risk (LR) patients had normal both systemic blood pressure and right ventricular function. Of our 47 patients, 14 (29.7%) had LR, 16 (34%) had IMR and 17 (%36.2) had HR.\nExclusion criteria of the patient group were as follows: (1) Other acute ischemic diseases such as acute coronary syndrome, acute ischemic cerebrovascular disease, acute peripheral arterial occlusion, or acute mesenteric ischemia newly diagnosed when questioned during the admission to the emergency department, (2) Advanced liver, kidney or heart failure, (3) PE treatment already initiated, (4) Refusal to participate in the study. Patients who died or were hospitalized before they applied to the emergency department and completed the necessary procedures, and patients who did not have sufficient information in the hospital information system were not included in the study. The exclusion criteria applied during the enrollment of the healthy control group were similar to those in the patient group. Also, healthy subjects did not have any endocrine, vascular, cardiac or inflammatory diseases were chosen as the control group.\n[SUBTITLE] Sample collection and preparation [SUBSECTION] Blood samples were taken from the brachial vein by using the venipuncture technique at the time of presentation. Vacutainer tubes without anticoagulants were used to obtain serum. Serum specimens were obtained after 15 min of centrifugation at 3000 rpm. Specimens to be used for measuring oxidative stress markers were pipetted into Eppendorf tubes and stored at − 80 °C.\nBlood samples were taken from the brachial vein by using the venipuncture technique at the time of presentation. Vacutainer tubes without anticoagulants were used to obtain serum. Serum specimens were obtained after 15 min of centrifugation at 3000 rpm. Specimens to be used for measuring oxidative stress markers were pipetted into Eppendorf tubes and stored at − 80 °C.\n[SUBTITLE] Measurements of the concentrations of plasma advanced protein oxidation products (AOPPs) [SUBSECTION] Spectrophotometric determinations of AOPPs concentrations were performed using a modification of the Gelisgen et al. method [13]. The coefficients of intra- and inter-assay variation were 3.1% (n = 20) and 3.5% (n = 20), respectively.\nSpectrophotometric determinations of AOPPs concentrations were performed using a modification of the Gelisgen et al. method [13]. 
The coefficients of intra- and inter-assay variation were 3.1% (n = 20) and 3.5% (n = 20), respectively.\n[SUBTITLE] \n\nMeasurement of serum ferric reducing antioxidant power (FRAP) concentrations\n\n [SUBSECTION] The FRAP assay was performed according to the protocol of Benzie and Strain [14], with minor modifications. The coefficients of intra- and inter-assay variations were 4.2% (n = 20) and 5.1% (n = 20), respectively.\nThe FRAP assay was performed according to the protocol of Benzie and Strain [14], with minor modifications. The coefficients of intra- and inter-assay variations were 4.2% (n = 20) and 5.1% (n = 20), respectively.\n[SUBTITLE] Measurement of serum prooxidant-antioxidant balance (PAB) concentrations [SUBSECTION] The PAB assay was performed according to the method of Alamdari et al. [15] with minor modifications. The coefficients of intra- and inter-assay variation were 5.0% (n = 20) and 6.1% (n = 20), respectively.\nThe PAB assay was performed according to the method of Alamdari et al. [15] with minor modifications. The coefficients of intra- and inter-assay variation were 5.0% (n = 20) and 6.1% (n = 20), respectively.\n[SUBTITLE] Measurement of serum ischemia-modified albumin (IMA) concentrations [SUBSECTION] IMA concentration was assessed by a modification of the Bar-Or et al. method [6]. The coefficients of intra- and inter-assay variations were 4.1% (n = 20) and 5.2% (n = 20), respectively.\nThe other biochemical parameters were measured using routine methods with commercial kits and an autoanalyzer.\nIMA concentration was assessed by a modification of the Bar-Or et al. method [6]. The coefficients of intra- and inter-assay variations were 4.1% (n = 20) and 5.2% (n = 20), respectively.\nThe other biochemical parameters were measured using routine methods with commercial kits and an autoanalyzer.\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was performed using SPSS 17.0 version for Windows Statistical Program (SPSS, Chicago, IL, USA). All data were expressed as means ± standard deviation (SD). Power analysis was performed to determine the sample size, α = 0.05 β = 0.20, 1 − β = 0.80 and the power of the test was calculated as 0.80. Descriptive statistics were obtained, and data were tested for normality using the Kolmogorov-Smirnov test for Gaussian distribution. To compare the parameters of the various patients group, first the nonparametric Kruskal–Wallis test. The particular groups were compared by the Mann-Whitney U test. For correlation analysis, Spearman’s Rho Correlation Coefficient was determined. Values of p < 0.05 were considered significant.\nStatistical analysis was performed using SPSS 17.0 version for Windows Statistical Program (SPSS, Chicago, IL, USA). All data were expressed as means ± standard deviation (SD). Power analysis was performed to determine the sample size, α = 0.05 β = 0.20, 1 − β = 0.80 and the power of the test was calculated as 0.80. Descriptive statistics were obtained, and data were tested for normality using the Kolmogorov-Smirnov test for Gaussian distribution. To compare the parameters of the various patients group, first the nonparametric Kruskal–Wallis test. The particular groups were compared by the Mann-Whitney U test. For correlation analysis, Spearman’s Rho Correlation Coefficient was determined. Values of p < 0.05 were considered significant.", "Blood samples were taken from the brachial vein by using the venipuncture technique at the time of presentation. 
Vacutainer tubes without anticoagulants were used to obtain serum. Serum specimens were obtained after 15 min of centrifugation at 3000 rpm. Specimens to be used for measuring oxidative stress markers were pipetted into Eppendorf tubes and stored at − 80 °C.", "Spectrophotometric determinations of AOPPs concentrations were performed using a modification of the Gelisgen et al. method [13]. The coefficients of intra- and inter-assay variation were 3.1% (n = 20) and 3.5% (n = 20), respectively.", "The FRAP assay was performed according to the protocol of Benzie and Strain [14], with minor modifications. The coefficients of intra- and inter-assay variations were 4.2% (n = 20) and 5.1% (n = 20), respectively.", "The PAB assay was performed according to the method of Alamdari et al. [15] with minor modifications. The coefficients of intra- and inter-assay variation were 5.0% (n = 20) and 6.1% (n = 20), respectively.", "IMA concentration was assessed by a modification of the Bar-Or et al. method [6]. The coefficients of intra- and inter-assay variations were 4.1% (n = 20) and 5.2% (n = 20), respectively.\nThe other biochemical parameters were measured using routine methods with commercial kits and an autoanalyzer.", "Statistical analysis was performed using SPSS 17.0 version for Windows Statistical Program (SPSS, Chicago, IL, USA). All data were expressed as means ± standard deviation (SD). Power analysis was performed to determine the sample size, α = 0.05 β = 0.20, 1 − β = 0.80 and the power of the test was calculated as 0.80. Descriptive statistics were obtained, and data were tested for normality using the Kolmogorov-Smirnov test for Gaussian distribution. To compare the parameters of the various patients group, first the nonparametric Kruskal–Wallis test. The particular groups were compared by the Mann-Whitney U test. For correlation analysis, Spearman’s Rho Correlation Coefficient was determined. Values of p < 0.05 were considered significant.", "General and clinical characteristics of PE patients grouped into three categories were given in Table 1. 
The high-risk PE group had significantly lower platelet count, serum albumin and protein levels when compared to both the low-risk (p < 0.05, p < 0.05 and p < 0.01) and moderate-risk PE groups (for each p < 0.05).\n\nTable 1General and clinical characteristics of patients with acute pulmonary embolism (APE)Controls(n:21)Low-risk(n:14)Intermediate-risk(n:16)High-risk(n:17)\np\n\nGender (M/F)\n11/107/74/1211/6– BMI (mm2/kg)25.34 ± 2.5127.83 ± 5.5429.99 ± 4.1127.87 ± 4.95> 0.05\nHypertension (n)–\n587\nDiabetes mellitus (n)\n–231COPD (n)\n–315\nImmobilization (n)\n–101012\nTrauma (n)\n–020\nMalignancy (n)\n–456\nPregnancy (n)\n–111\nMajor general surgery (n)\n–355\nPulse rate per minute\n–94.79 ± 10.887.88 ± 14.1998.71 ± 19.41< 0.05\nNumber of breaths per minute\n–22.21 ± 5.1223.75 ± 4.9125.18 ± 5.26< 0.05\nSystolic blood pressure (mmHg)\n120 ± 8.15119.29 ± 11.91120.31 ± 14.6692.65 ± 13.71< 0.05\nDiastolic blood pressure (mmHg)\n82.34 ± 3.6472.86 ± 8.2575.31 ± 9.5759.71 ± 11.79< 0.01\n C-reactive protein (mg/dL)\n1.87 ± 0.0944.64 ± 39.1654.62 ± 43.9355.04 ± 44.73< 0.01\nLeukocyte counts ( /mm³)\n7896 ± 14988945 ± 38129281 ± 34669730 ± 4815< 0.05\nHemoglobin (g/dL)\n12.46 ± 1.7412.44 ± 1.510.89 ± 3.311.81 ± 2.76> 0.05\nHematocrit (%)\n41.85 ± 3.7635.73 ± 4.2234.24 ± 5.5335.98 ± 8.27> 0.05\nPlatelet counts ( /mm³)\n325.87 ± 48.98273.79 ± 93.67288.5 ± 164.3199.29 ± 91.27< 0.01\nAlbumin (g/dL)\n4.12 ± 0.343.59 ± 0.293.52 ± 0.453.01 ± 0.91> 0.05\nTotal protein (g/dL)\n7.83 ± 0.26.76 ± 0.356.67 ± 0.46.36 ± 0.33> 0.05\nNT-proBNP (pg/mL)\n35.72 ± 24.6575.21 ± 67.2274.32 ± 490.56809.76 ± 705.12< 0.001\ncTnT (ng/mL)\n0 ± 0.010 ± 0.010.03 ± 0.090.04 ± 0.09> 0.05\nCOPD, chronic obstructive pulmonary disease; NT-proBNP, N-terminal prohormone of brain natriuretic peptide\nGeneral and clinical characteristics of patients with acute pulmonary embolism (APE)\n\nCOPD, chronic obstructive pulmonary disease; NT-proBNP, N-terminal prohormone of brain natriuretic peptide\nThe serum oxidative stress marker levels of the groups were given in Table 2. In the low-risk and moderate-risk PE groups, serum AOPPs and PAB levels were significantly higher (p < 0.001 and p < 0.001; p < 0.001 and p < 0.001) and FRAP levels were significantly lower (p < 0.001; p < 0.001) than those in the control group. There was no significant difference between the low-risk and moderate-risk PE groups concerning the oxidative marker levels. Patients with high-risk PE had significantly higher AOPPs, IMA and PAB (for each p < 0.001) and significantly lower FRAP level (p < 0.001) than controls. AOPPs and IMA levels were significantly higher in the patients with high-risk PE when compared to both the low-risk (p < 0.001 and p < 0.001) and moderate-risk PE patients (p < 0.01 and p < 0.05). 
Although the mean serum PAB level was higher in the high-risk PE group when compared to both the moderate-risk and high-risk PE groups, differences did not reach statistically significant levels.\n\nTable 2Serum advanced protein oxidation products (AOPPs), ferric reducing antioxidant power (FRAP), pro-oxidant-antioxidant balance (PAB) and ischemia modified albumin (IMA) levels of the patients with pulmonary embolism (PE) and the controlsControls(n:21)Low-risk(n:14)Intermediate-risk(n:16)High-risk(n:17)\np\n\nIMA\n\n(U/mL)\n10.43 ± 1.9610.12 ± 2.2310.87 ± 2.6714.66 ± 5.54< 0.01\nAOPPs\n\n(µM chloramine T)\n39.6 ± 6.850.7 ± 13.457.7 ± 18.881.4 ± 22.9< 0.001\nFRAP\n\n(M uric acids)\n0.20 ± 0.030.10 ± 0.030.10 ± 0.030.11 ± 0.05> 0.05 PAB(% H2O2)40.2 ± 3.989.1 ± 26.796.2 ± 19.9105.3 ± 21.8< 0.01\nSerum advanced protein oxidation products (AOPPs), ferric reducing antioxidant power (FRAP), pro-oxidant-antioxidant balance (PAB) and ischemia modified albumin (IMA) levels of the patients with pulmonary embolism (PE) and the controls\n\nIMA\n\n\n(U/mL)\n\n\nAOPPs\n\n\n(µM chloramine T)\n\n\nFRAP\n\n\n(M uric acids)\n\n PAB\n(% H2O2)\nThere was a significant correlation between serum AOPPs level and both IMA (r: 0.462, p < 0.001) and PAB levels (r: 0.378, p < 0.005). Serum FRAP level was negatively correlated with PAB (r: − 0.683, p < 0.001) and AOPPs levels (r: − 0,384, p < 0.001). There was also a significant positive correlation between serum IMA and PAB levels (r: 0.380, p < 0.005).", "Increasing evidence from both experimental and clinical studies has suggested that oxidative stress plays a major role in the pathogenesis of PE [16, 17]. Pulmonary vascular resistance (PVR) increases as a result of mechanical obstruction in the vessels and vasoconstriction caused by inflammatory mediators and hypoxia. The blood that passes through the lungs without gas exchange with the alveoli is called an intrapulmonary shunt. The progression of pulmonary hypertension and the intrapulmonary shunt circulation induce an arterial hypoxia and the reduction of oxygen saturation. It is widely known that hypoxia-reoxygenation and ischemia-reperfusion induce oxidative stress with a concomitant imbalance between the production of oxygen free radicals and endogenously available antioxidants [18, 19]. We studied the levels of oxidative stress markers named IMA, PAB, AOPPs and FRAP in PE and their roles in the determination of the severity of PE. Under acute ischemic conditions, the metal binding capacity of albumin to transition metals such as copper, nickel and cobalt is reduced, generating a metabolic variant of the protein referred to as IMA [20] IMA levels in the patients with high-risk PE were significantly higher when compared to those in both the low-risk and moderate-risk PE patients. “IMA” is one of the earliest markers of ischemia. These data represent our preliminary results, and we continue to follow our patients with measuring these parameters with six-month intervals to see the status of ischemia. There are several human and experimental studies about the diagnostic value of IMA in the diagnosis of PE, which were conducted by Türedi et al. and published in the literature [7–22]. They showed that the level of IMA might be of use in the diagnosis of PE. The authors also suggested that IMA was a good alternative to D-dimer in PE diagnosis regarding both the cost and the efficiency. 
However, D-dimer may be a more relevant marker than IMA which has been proposed as a new marker in the biochemical determination of severity of PE based on radiological findings [7]. Hogg et al. [23] evaluated the ability of the IMA assay to diagnose DVT and PE in a prospective cohort. They found that excluding these patient groups marginally increased the AUC for IMA and IMA/albumin in the diagnosis of PE (excluding the patients with extreme values) The investigators believe that if a new diagnostic marker is found, it should not only be simple to use but also be used on all patients, not just a small patient subset. On the other hand, another study of theirs [24] showed that IMA levels are not specific for deaths related with PE. In comparison, no such relation was detected in this study, probably due to the severity of the right ventricular dysfunction.\nOur clinical investigation revealed that PE had led to increased IMA levels that had been augmented following PE severity. IMA may also be an end-product of tissue ischemia. Further studies are required to confirm the mechanisms underlying this effect.\nAOPPs, which are proteins damaged by the oxidative stress, most notably albumin and its aggregates, have recently begun to attract the attention of various investigators [25]. In the general population, plasma level of AOPPs was found as an independent risk factor for coronary artery disease (CAD) [26], whereas high plasma AOPPs was found to be related with atherosclerotic cardiovascular events in nondiabetic patients with chronic kidney disease (CKD) [27] Ours is the first study to investigate whether there is a correlation between PE severity and AOPPs. In all patient groups, AOPPs levels were significantly higher when compared to the control group. AOPPs level in the patient group with high-risk PE was also significantly higher than that of both the low-risk and moderate-risk PE groups. There was a significant correlation between serum level of AOPPs and both IMA and PAB levels. In the present study, a novel marker (AOPPs assay) was analyzed for providing information about the level of oxidative damage in the proteins found in plasma for determining the severity of PE. The obtained results corroborate the IMA and PAB data.\nOxidative stress is an imbalance between the production of prooxidants and antioxidant defenses, being in favor of prooxidants. It is usually related to the increased formation of reactive oxygen species (ROS) and is considered to play a pivotal role in the pathogenesis and development of ischemia and its complications [28] The PAB assay may be able to provide valuable information regarding the oxidant-antioxidant status. Likewise, to the best of our knowledge, the serum PAB method employed in the present study has not been previously used to investigate the oxidant-antioxidant status in PE. In the low-risk and moderate-risk PE groups, PAB levels were significantly higher than that of the control group. Although the mean serum PAB level was higher in the high-risk PE group than those in the moderate-risk and low-risk PE groups, the differences were not able to reach statistically significant levels. There was also a significant positive correlation between serum IMA and PAB levels. Therefore, our study and the other studie [29–31] confirm that the PAB levels are increased in patients with CAD. We suggest that the PAB assay may be useful for a CAD risk predictor in diseases such as PE. 
It may also help to identify patients with high levels of oxidative stress, may be useful in the early diagnosis of vascular diseases and early interventions. Further studies are required to elucidate the contribution of PAB to PE severity.\nThe effect of the prooxidant or the antioxidant molecules in serum is additive. Various methods for the separate measurement of the total oxidant or antioxidant status have been proposed. For the evaluation of the prooxidant-antioxidant balance, we assayed the PAB and FRAP levels with the purpose of determining both the oxidant and the antioxidant status. FRAP can be defined as the cumulative action of all antioxidants present in the serum, thus providing a composed parameter rather than the sum of measurable antioxidants [14]. In our study, all patients had significantly lower FRAP levels than controls. There was no significant difference in oxidative marker levels between patients with low-risk and moderate-risk PE. Serum FRAP levels were negatively correlated with PAB and AOPPs levels. Sousa-Santos et al. [32] investigated whether pretreatment with tempol (a general ROS scavenger) prevented matrix metalloproteinase (MMP) activation and protected against cardiomyocyte injury of acute pulmonary thromboembolism (APT). They also investigated the possible therapeutic effects of tempol administration after APT. They showed that antioxidant treatment might prevent MMP activation and might protect against cardiomyocyte injury after APT. Neto-Neves et al. [33] demonstrated that pretreatment with atorvastatin protected against pulmonary hypertension associated with APT and that sildenafil improved this response. These findings may reflect antioxidant effects and inhibited neutrophils/MMPs activation. Clinical studies should be carried out to validate the beneficial effects exerted by this combination of drugs during APT. The validation of the total antioxidant capacity for monitoring the efficiency of antioxidant supplementation during PE should be further investigated.\nIn this study, NT-proBNP was found to be the most potent predictor of unfavorable outcomes in APE. This finding may be regarded as evidence for the presence of right ventricular dysfunction in the study group.\n[SUBTITLE] Limitation of study [SUBSECTION] Although this study is prospective and seeks to answer an important clinical question, the study has its own limitations. Chief among these is the lack of information about whether treatments that may have normalized vital values were given, and if so, what these treatments were. Again, the fact that the sample size is not large enough and that it is a single-centered study can be counted among the limitations of the study.\nWe clearly demonstrated that ROS formation is significantly enhanced in PE. Since they are also measured by a rapid and low-cost technique, we suggest that IMA and AOPPs may be used as clinical markers in the evaluation of PE severity in clinical practice. Although serum IMA levels are affected by many physiological variables such as exercise and diseases such as heart failure, renal failure or liver failure, determination of serum IMA levels may be helpful to clinicians in cases in which the selection of treatment or imaging method is required. However, further studies with larger patient populations and longer follow-up periods are required to confirm the mechanisms underlying these findings.\nAlthough this study is prospective and seeks to answer an important clinical question, the study has its own limitations. 
Chief among these is the lack of information about whether treatments that may have normalized vital values were given, and if so, what these treatments were. Again, the fact that the sample size is not large enough and that it is a single-centered study can be counted among the limitations of the study.\nWe clearly demonstrated that ROS formation is significantly enhanced in PE. Since they are also measured by a rapid and low-cost technique, we suggest that IMA and AOPPs may be used as clinical markers in the evaluation of PE severity in clinical practice. Although serum IMA levels are affected by many physiological variables such as exercise and diseases such as heart failure, renal failure or liver failure, determination of serum IMA levels may be helpful to clinicians in cases in which the selection of treatment or imaging method is required. However, further studies with larger patient populations and longer follow-up periods are required to confirm the mechanisms underlying these findings.", "Although this study is prospective and seeks to answer an important clinical question, the study has its own limitations. Chief among these is the lack of information about whether treatments that may have normalized vital values were given, and if so, what these treatments were. Again, the fact that the sample size is not large enough and that it is a single-centered study can be counted among the limitations of the study.\nWe clearly demonstrated that ROS formation is significantly enhanced in PE. Since they are also measured by a rapid and low-cost technique, we suggest that IMA and AOPPs may be used as clinical markers in the evaluation of PE severity in clinical practice. Although serum IMA levels are affected by many physiological variables such as exercise and diseases such as heart failure, renal failure or liver failure, determination of serum IMA levels may be helpful to clinicians in cases in which the selection of treatment or imaging method is required. However, further studies with larger patient populations and longer follow-up periods are required to confirm the mechanisms underlying these findings." ]
[ "introduction", "materials|methods", null, null, null, null, null, null, "results", "discussion", null ]
[ "Pulmonary thromboembolism", "Ischemia-modified albumin", "Advanced protein oxidation products", "Total antioxidant capacity", "Pro-oxidant-antioxidant balance" ]
Comparison of transvaginal cervical cerclage versus laparoscopic abdominal cervical cerclage in cervical insufficiency: a retrospective study from a single centre.
36253759
Cervical cerclage has been proposed as an effective treatment for cervical insufficiency, but there has been controversy regarding the surgical options of cervical cerclage in singleton and twin pregnancies. This study aimed to compare the pregnancy outcomes between transvaginal cervical cerclage (TVC) and laparoscopic abdominal cervical cerclage (LAC) in patients with cervical insufficiency. We also aimed to evaluate the efficacy and safety, and provide more evidence to support the application of cervical cerclage in twin pregnancies.
BACKGROUND
A retrospective study was carried out from January 2015 to December 2021. The primary outcomes were the incidence of spontaneous preterm birth (sPTB) < 24, < 28, < 32, < 34, and < 37 weeks, gestational age at delivery, and the incidence of admission for threatened abortion or preterm birth after cervical cerclage. The secondary outcomes included admission to the Neonatal Intensive Care Unit, adverse neonatal outcomes and neonatal death. We also analysed the pregnancy outcomes of twin pregnancies after cervical cerclage.
METHODS
A total of 289 patients were identified as eligible for inclusion. The LAC group (n = 56) had a very low incidence of sPTB < 34 weeks, and it was associated with a significant decrease in sPTB < 28 weeks, < 32 weeks, < 34 weeks and < 37 weeks, and in admission to the hospital during pregnancy for threatened abortion or preterm birth after cervical cerclage (0 vs. 27%; 1.8% vs. 40.3%; 7.1% vs. 46.8%; 14% vs. 63.5%; 8.9% vs. 62.2%, respectively; P < 0.001), as well as with a higher gestational age at delivery compared with the TVC group (n = 233) (38.3 weeks vs. 34.4 weeks, P < 0.001). Neonatal outcomes in the LAC group were significantly better than those in the TVC group. In twin pregnancies with LAC, the mean gestational age at delivery was 34.3 ± 1.8 weeks, with a total foetal survival rate of 100% and no serious neonatal complications.
RESULTS
In patients with cervical insufficiency, LAC appears to have better pregnancy outcomes than TVC. For some patients, LAC is a recommended option and may be selected as the first choice. Even in twin pregnancies, cervical cerclage can improve pregnancy outcomes with a longer latency period, especially in the LAC group.
CONCLUSION
[ "Abortion, Threatened", "Cerclage, Cervical", "Female", "Humans", "Infant", "Infant, Newborn", "Laparoscopy", "Pregnancy", "Pregnancy Outcome", "Pregnancy, Twin", "Premature Birth", "Retrospective Studies", "Uterine Cervical Incompetence" ]
9575299
null
null
null
null
Results
Patient characteristics

A total of 289 patients were identified as eligible for inclusion during the study period, of whom 233 underwent TVC and 56 underwent LAC. Most baseline characteristics did not differ between the two groups (Table 1); the mean age was 30.96 ± 4.12 years in the TVC group and 32.27 ± 3.89 years in the LAC group (P = 0.28). Most patients (79.9%) had a typical history of ≥ 1 spontaneous second-trimester abortion and/or preterm birth < 34 weeks. Some patients received CC because of progressive cervical shortening on ultrasound during pregnancy after prior operative hysteroscopy or prior cervical surgery, and/or on routine examination in twin pregnancies. The LAC group had more prior failed transvaginal cerclage procedures (10.7% vs. 1.7%, P = 0.005) and more prior spontaneous second-trimester abortions and/or preterm births < 34 weeks (median 1 (1–2) vs. 1 (0.5–2), P < 0.001) than the TVC group. A total of 160 patients in the TVC group underwent emergency cervical cerclage, compared with none in the LAC group. In the TVC group, the median length of the cervical canal was 1.2 cm in the emergency subgroup and 2.12 cm in the prophylactic subgroup, and the median gestational age at cerclage placement was 22.4 weeks. In the LAC group, 47 patients underwent LAC before pregnancy and nine during the first trimester (8–10 weeks). No perioperative complications (e.g., bleeding > 100 mL, infection, injury to bowel or bladder) occurred in either group. Six patients in the TVC group received a repeat cerclage because of membrane bulging or funnelling below the knots (Table 2), whereas all patients in the LAC group received only one transabdominal cerclage.

Table 1. Clinical characteristics of the study population

Characteristic | TVC group (n = 233) | LAC group (n = 56) | P
Age (years) | 30.96 ± 4.12 | 32.27 ± 3.89 | 0.28
Gravidity | 3 (2–4) | 3 (2–5) | 0.044
Twin pregnancies (n [%]) | 46 [19.7%] | 7 [12.5%] | 0.209
Prior ssA-PTB | 1 (0.5–2) | 1 (1–2) | < 0.001
Prior failed transvaginal cerclage (n [%]) | 4 [1.7%] | 6 [10.7%] | 0.005
Prior operative hysteroscopy (n [%]) | 53 [23%] | 8 [14%] | 0.164
Prior cervical surgery (n [%]) | 11 [5%] | 5 [10%] | 0.216
Conception by IVF (n [%]) | 68 [29%] | 19 [34%] | 0.487
Emergency cervical cerclage (n [%]) | 160 [68%] | 0 [0] | < 0.001
GA at cerclage placement (weeks) | 22.4 (17–24.7) | before pregnancy or during the first trimester | -
Bleeding during the CC (mL) | 15.6 ± 6.8 | 20.3 ± 5.2 | 0.37
Length of the cervix (cm) | 1.2 (0.8–2) | - | -
Length of the cervix, emergency CC (cm) | 1 (0.6–1.5) | - | -
Length of the cervix, prophylactic CC (cm) | 2.12 (1.5–3.0) | - | -
Cervical dilation at diagnosis (cm) | 0 (0–0.5) | - | -

TVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; ssA-PTB: spontaneous second-trimester abortions and/or preterm births < 34 weeks; IVF: in vitro fertilization; CC: cervical cerclage; GA: gestational age.

Table 2. Detailed data of the six patients who received a repeat cerclage in the TVC group

Patient | GA at first CC | GA at repeat CC | Gravidity | Prior ssA-PTB | Prior hysteroscopy | GA at delivery | Delivery mode | Complications | Apgar score (1-5-10 min)
1 | 25+2 | 28+2 | 6 | 2 | 0 | 28+5 | VD | / | 8-9-9
2 | 16+4 | 25+3 | 3 | 2 | 1 | 37+4 | VD | CL, PPH | 10-10-10
3 | 14+5 | 22+4 | 4 | 2 | 0 | 29+3 | CD | / | 8-9-9
4 | 20+3 | 23+5 | 6 | 1 | 1 | 26+6 | VD | CL | 7-8-9
5 | 18 | 24+1 | 3 | 1 | 6 | 28 | VD | CL, PPH | 8-9-9
6 | 20+5 | 25+1 | 3 | 1 | 1 | 28+4 | CD | CL | 8-9-10 / 7-8-9

TVC: transvaginal cervical cerclage; GA: gestational age; CC: cervical cerclage; ssA-PTB: spontaneous second-trimester abortions and/or preterm births; VD: vaginal delivery; CD: caesarean delivery; CL: cervical laceration; PPH: postpartum haemorrhage.
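The categorical comparisons in Table 1 were run in SPSS. As a purely illustrative addition (not part of the original analysis), the sketch below shows how two of them could be re-checked from the published counts with scipy; the resulting P values may differ slightly from those reported, depending on the exact test variant used.

```python
# Illustrative re-computation of two categorical comparisons from Table 1.
# Counts are taken from the published table; the original analysis used SPSS,
# so the reported P values may reflect a different test variant.
from scipy.stats import fisher_exact, chi2_contingency

# Prior failed transvaginal cerclage: 4/233 (TVC) vs. 6/56 (LAC), reported P = 0.005
failed_cerclage = [[4, 233 - 4], [6, 56 - 6]]
odds_ratio, p_failed = fisher_exact(failed_cerclage)  # small expected counts -> Fisher's exact test
print(f"Prior failed cerclage: OR = {odds_ratio:.2f}, P = {p_failed:.3f}")

# Emergency cervical cerclage: 160/233 (TVC) vs. 0/56 (LAC), reported P < 0.001
emergency = [[160, 233 - 160], [0, 56]]
chi2, p_emergency, dof, _ = chi2_contingency(emergency)
print(f"Emergency cerclage: chi2 = {chi2:.1f}, P = {p_emergency:.1e}")
```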
Conclusion
In conclusion, LAC appears to be a more effective treatment for CI than TVC, with better pregnancy outcomes in our cohort. Our limited experience also suggests that LAC performed before pregnancy or during the first trimester is a reasonable option for CI patients. Even in twin pregnancies, CC improved pregnancy outcomes and prolonged the latency period, particularly in the LAC group. Given its high efficacy and low rate of harm, LAC may be considered a first-line treatment for selected indications, such as patients with a typical history of ≥ 2 prior spontaneous second-trimester abortions and/or preterm births < 34 weeks, or patients with a typical history of ≥ 1 who choose LAC after being informed of its advantages and limitations. Larger prospective controlled studies are needed for confirmation.
Surgical techniques

Both TVC and LAC were carried out in our hospital, and no complications occurred during the procedures. The decision to perform cerclage and the treatment plan were made in consultation with the patients. Before the procedures, vaginal and cervical swabs were obtained for microbiological analysis to rule out infection, and contraindications such as active labour, placental abruption and PPROM were excluded.

TVC was performed by experienced obstetricians using the standardized transvaginal McDonald technique. Patients undergoing emergency TVC at 17–25 weeks received broad-spectrum antibiotics, prophylactic intravenous tocolysis (ritodrine or atosiban) and, if necessary, magnesium sulphate before the procedure and for an additional 24 h. If they recovered well after surgery, patients were discharged home two days after CC and continued prenatal care in the high-risk clinic. Prophylactic TVC was performed at 12–18 weeks without antibiotic or tocolytic prophylaxis. The onset of regular contractions, premature rupture of membranes and/or suspected sepsis were indications for emergency removal of the cerclage. If the pregnancy proceeded uneventfully, the cerclage was removed electively at 35–37 weeks of gestation.

LAC was performed electively by gynaecologists before pregnancy or during the first trimester, following previously published surgical procedures [12]. A non-absorbable Mersilene tape (5 mm) with double needles was used for the cervical cerclage. Irrespective of the timing of placement, the surgical procedure was identical, except that in non-pregnant women a uterine probe (3 mm in diameter) was placed and fixed in the uterine cavity to delineate the cervicovaginal junction. With or without opening of the bladder peritoneum, each needle was inserted anteriorly to posteriorly at the level of the uterine isthmus, using the uterosacral ligaments as landmarks, between the outer edge of the uterine isthmus and medial to the uterine vessels. The tape was tied behind the uterine isthmus, and the uterine probe was removed at the end of the operation. Patients with LAC required caesarean delivery to terminate the pregnancy. The transabdominal cerclage can be left in situ for future pregnancies or removed according to the patient's wishes.

Primary and secondary outcomes

The primary outcomes were the incidence of sPTB < 24, < 28, < 32, < 34 and < 37 weeks, GA at delivery, and the incidence of admission for threatened abortion or preterm birth after CC (admission after CC). The main secondary outcomes included newborn birth weight, delivery-related complications (such as cervical laceration, postpartum haemorrhage and/or clinical chorioamnionitis), 1-min Apgar score < 7, 5-min Apgar score < 7, admission to the NICU, adverse neonatal outcomes (respiratory distress syndrome, necrotizing enterocolitis, intraventricular haemorrhage, sepsis, retinopathy of prematurity requiring laser therapy and bronchopulmonary dysplasia) and neonatal death. We also analysed the pregnancy outcomes of twin pregnancies with CI after CC.

Statistical analysis

Data are described as mean ± standard deviation, number [%] or median (interquartile range). Continuous variables were compared using Student's t test (for normally distributed data) or the Mann-Whitney U test (for non-normally distributed data). Categorical outcomes were analysed with the chi-square test or Fisher's exact test. A P value < 0.05 was considered statistically significant. Data analysis was performed with SPSS version 24.0 for Windows (SPSS Inc., Chicago, IL, USA).
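The analysis plan described above maps onto a simple decision rule. The helper below is a hypothetical sketch added for illustration only: the study used SPSS 24.0, and the function names, the Shapiro-Wilk normality check and the expected-count threshold of 5 are our assumptions rather than details reported by the authors.

```python
# Hypothetical helpers mirroring the stated analysis plan (illustration only; the study used SPSS 24.0).
import numpy as np
from scipy import stats

def compare_groups(tvc, lac, alpha_normality=0.05):
    """Return (test_name, p_value) for a continuous variable, per the stated decision rule."""
    tvc, lac = np.asarray(tvc, float), np.asarray(lac, float)
    normal = (stats.shapiro(tvc).pvalue > alpha_normality and
              stats.shapiro(lac).pvalue > alpha_normality)
    if normal:
        return "Student t test", stats.ttest_ind(tvc, lac).pvalue
    return "Mann-Whitney U test", stats.mannwhitneyu(tvc, lac, alternative="two-sided").pvalue

def compare_proportions(events_tvc, n_tvc, events_lac, n_lac):
    """Chi-square test, falling back to Fisher's exact test when expected counts are small."""
    table = np.array([[events_tvc, n_tvc - events_tvc],
                      [events_lac, n_lac - events_lac]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any():
        return "Fisher exact test", stats.fisher_exact(table)[1]
    return "Chi-square test", p
```

For example, compare_proportions(160, 233, 0, 56) reproduces the emergency-cerclage comparison, while compare_groups would require the per-patient values, which are not published.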
Pregnancy outcomes

For the primary outcomes, 15 patients (6.4%) in the TVC group had an sPTB < 24 weeks of gestation because of failure of the cerclage, while one patient in the LAC group had an intrauterine foetal death at 21 weeks. sPTB < 28, < 32, < 34 and < 37 weeks was significantly less common in the LAC group than in the TVC group (Table 3). The median GA at delivery was 38.3 weeks in the LAC group, significantly higher than in the TVC group. After CC, 148 patients (63.5%) in the TVC group were admitted to hospital for threatened abortion or preterm birth, compared with only eight in the LAC group (P < 0.001).

Table 3. Primary outcomes in the TVC and LAC groups

Outcome | TVC group (n = 233) | LAC group (n = 56) | P
sPTB < 24 weeks | 15 [6.4%] | 0 | 0.08
sPTB < 28 weeks | 63 [27%] | 0 | < 0.001
sPTB < 32 weeks | 94 [40.3%] | 1 [1.8%] | < 0.001
sPTB < 34 weeks | 109 [46.8%] | 4 [7.1%] | < 0.001
sPTB < 37 weeks | 145 [62.2%] | 15 [8.9%] | < 0.001
GA at delivery (weeks) | 34.4 (27.1–38.7) | 38.3 (36–39.6) | < 0.001
Admission after CC (n [%]) | 148 [63.5%] | 8 [14%] | < 0.001

TVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; sPTB: spontaneous preterm birth; GA: gestational age; CC: cervical cerclage.

In the LAC group, the patient with an intrauterine foetal death at 21 weeks underwent laparoscopic removal of the tape to allow vaginal delivery. The remaining LAC patients underwent caesarean delivery in the third trimester without complications, and 58.2% (32/55) chose to keep the cerclage in situ for future pregnancies. In the TVC group, most patients delivered vaginally, while 93 (39.9%, 93/233) underwent caesarean delivery. However, 26 patients in the TVC group had delivery-related complications such as cervical laceration, postpartum haemorrhage and/or clinical chorioamnionitis, significantly more than in the LAC group (15.9% vs. 0, P = 0.009; Table 4).

Table 4. Secondary outcomes in the TVC and LAC groups

Outcome | TVC group (n = 233) | LAC group (n = 56) | P
Caesarean delivery (n [%]) | 93 [39.9%] | 55 [98.2%] | < 0.001
Duration of hospital stay after delivery (days) | 3 (2–4) | 3 (3–4) | < 0.001
Removal of the cerclage (n [%]) | 233 [100%] | 24 [42.8%] | < 0.001
Delivery-related complications (n [%]) | 37 [15.9%] | 0 | < 0.001
Infection and clinical chorioamnionitis | 12 [5.2%] | 0 | 0.132
Neonate births | 274 + 5* | 60 + 3* | -
Stillbirths (n [%]) | 29 [10.6%] | 1 [1.7%] | 0.029
Live births (n [%]) | 245 [89.4%] | 59 [98.3%] | 0.029
Birth weight (kg) | 2.21 ± 0.99 | 3.01 ± 0.63 | < 0.001
NICU admission (n [%]) | 141 [57.6%] | 10 [16.9%] | < 0.001
1-min Apgar score | 10 (8–10) | 10 (10–10) | < 0.001
5-min Apgar score | 10 (9–10) | 10 (10–10) | < 0.001
1-min Apgar score < 7 | 35 [14.3%] | 0 | 0.002
5-min Apgar score < 7 | 9 [3.7%] | 0 | 0.214
Duration of neonatology stay (days) | 29 (9–55) | 9 (5.75–19) | 0.017
Neonate death | 14 [5.7%] | 0 | 0.08

*Selective reduction of multifoetal pregnancies. TVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; NICU: neonatal intensive care unit.

Apart from eight patients with selective reduction of multifoetal pregnancies, one patient in the LAC group had an intrauterine foetal death in the second trimester and 29 patients in the TVC group had a stillbirth due to TVC failure. The total foetal survival rate across the two groups was 91.0% (304/334); the rate was higher in the LAC group than in the TVC group, although the difference was not significant (98.3% vs. 89.4%, P = 0.183; Table 4).
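The overall foetal survival figure quoted above is simple arithmetic on the counts in Table 4; the lines below, added purely as an illustration, make the calculation explicit.

```python
# Foetal survival arithmetic from the counts in Table 4 (illustration only).
live_tvc, births_tvc = 245, 274   # live births / neonate births, TVC group (excluding selective reductions)
live_lac, births_lac = 59, 60     # live births / neonate births, LAC group (excluding selective reductions)

total_live = live_tvc + live_lac          # 304
total_births = births_tvc + births_lac    # 334
print(f"Overall foetal survival: {total_live}/{total_births} = {total_live / total_births:.1%}")  # 91.0%
print(f"TVC group: {live_tvc / births_tvc:.1%}; LAC group: {live_lac / births_lac:.1%}")          # 89.4% vs. 98.3%
```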
The mean newborn birth weight and the 1- and 5-min Apgar scores were significantly higher, and the NICU admission rate significantly lower, in the LAC group than in the TVC group. Similarly, the LAC group had better neonatal outcomes and fewer neonatal complications than the TVC group (Table 4, Supplemental Table 1). However, there was no difference between the groups in 5-min Apgar score < 7, necrotizing enterocolitis, sepsis or neonatal death. To reduce confounding by cervical length, we performed a subgroup analysis comparing the prophylactic TVC and LAC groups and found similar results (Table 5, Supplemental Table 2).

Table 5. Subgroup analysis: prophylactic TVC vs. LAC

Outcome | Prophylactic TVC group (n = 73) | LAC group (n = 56) | P
sPTB < 24 weeks | 4 | 0 | 0.132
sPTB < 28 weeks | 11 | 0 | 0.002
sPTB < 32 weeks | 17 | 1 | < 0.001
sPTB < 34 weeks | 22 | 4 | 0.001
GA at delivery (weeks) | 36.4 (32.6–37.7) | 38.3 (36–39.6) | < 0.001
Admission after CC (n [%]) | 48 [65.7%] | 8 [14%] | < 0.001
Caesarean delivery (n) | 28 | 55 | < 0.001
Length of hospital stay after delivery (days) | 2 (2–3) | 3 (3–4) | < 0.001
Delivery-related complications (n [%]) | 12 [16.4%] | 0 [0] | 0.001
Infection and clinical chorioamnionitis | 3 [4.1%] | 0 | 0.257
Neonate births | 76 | 60 + 3* | -
Stillbirths (n [%]) | 12 [15.8%] | 1 [1.7%] | 0.006
Live births (n [%]) | 64 [84.2%] | 59 [98.3%] | 0.001
Birth weight (kg) | 2.43 ± 1.00 | 3.01 ± 0.63 | < 0.001
NICU admission (n [%]) | 28 [43.8%] | 10 [16.9%] | 0.001
1-min Apgar score | 10 (8–10) | 10 (10–10) | < 0.001
5-min Apgar score | 10 (8–10) | 10 (10–10) | < 0.001
1-min Apgar score < 7 | 8 [12.5%] | 0 | 0.006
5-min Apgar score < 7 | 3 [4.7%] | 0 | 0.245
Duration of neonatology stay (days) | 37 (7–52) | 9 (5.75–19) | 0.04
Neonate death | 3 [4.7%] | 0 | 0.245

*Selective reduction of multifoetal pregnancies. CC: cervical cerclage; TVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; sPTB: spontaneous preterm birth; GA: gestational age; NICU: neonatal intensive care unit.
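The subgroup comparisons in Table 5 are reported as P values only. As an added illustration (not an analysis performed in the study), the snippet below computes the corresponding risk difference and relative risk for sPTB < 34 weeks from the published counts.

```python
# Illustrative effect-size calculation for one Table 5 comparison (sPTB < 34 weeks).
# Counts are taken from the table; the study itself reports only P values.
events_tvc, n_tvc = 22, 73   # prophylactic TVC group
events_lac, n_lac = 4, 56    # LAC group

risk_tvc = events_tvc / n_tvc
risk_lac = events_lac / n_lac
print(f"Risk (prophylactic TVC): {risk_tvc:.1%}")            # ~30.1%
print(f"Risk (LAC):              {risk_lac:.1%}")            # ~7.1%
print(f"Risk difference:         {risk_tvc - risk_lac:.1%}")
print(f"Relative risk:           {risk_tvc / risk_lac:.2f}")
```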
Twin pregnancies

Among the twin pregnancies in the TVC group, most patients (95.7%, 44/46) underwent cerclage at a cervical length of < 1.5 cm. Forty-one of the 46 patients had an emergency CC because of progressive shortening of the cervix, and some had ≥ 1 prior spontaneous second-trimester abortion and/or preterm birth < 34 weeks; 97.8% had a history of prior operative hysteroscopy. GA at delivery ranged from 23.6 to 38 weeks: three patients delivered at < 24 weeks, 31 between 24 and 33.9 weeks, and 12 at ≥ 34 weeks (four at ≥ 37 weeks). The mean GA at delivery was 30.7 ± 4.5 weeks, with a median gestational latency of 7.8 weeks. Apart from six neonatal deaths due to extremely preterm birth and three losses due to selective reduction, there were 83 live newborns with a mean birth weight of 1.58 kg.

The LAC group included seven women with twin pregnancies (two monochorionic/diamniotic and five dichorionic/diamniotic) who underwent prophylactic cerclage for CI before pregnancy (n = 4) or during the first trimester (n = 3). The obstetric and neonatal outcomes are presented in Table 6. GA at delivery ranged from 32 to 37 weeks (six patients delivered at < 37 weeks), and the mean GA at delivery was 34.3 ± 1.8 weeks. There were 12 live newborns with a mean birth weight of 2.1 kg. The NICU admission rate in the LAC group was 58.3%, mostly because of GA < 34 weeks and/or birth weight < 2.0 kg, but no serious neonatal complications were reported in any newborn.

Table 6. Pregnancy outcomes of the twin pregnancies

Outcome | TVC group (n = 46) | LAC group (n = 7)
Prior sPTB | 0 (0–1) | 1 (1–2)
Prior operative hysteroscopy (n [%]) | 45 [97.8%] | 0
Conception by IVF (n [%]) | 36 [78.3%] | 5 [71.4%]
Rescue cervical cerclage (n [%]) | 41 [89.1%] | -
Length of the cervix < 1.5 cm (n [%]) | 44 [95.7%] | -
Length of the cervix (cm) | 0.8 (0.5–1.5) | -
Cervical dilation at diagnosis (cm) | 0 (0–1) | -
Gestational latency (weeks) | 7.8 (4.9–11.5) | -
GA at delivery (weeks) | 30.7 ± 4.5 | 34.3 ± 1.8
sPTB < 24 weeks | 3 | 0
sPTB < 28 weeks | 13 | 0
sPTB < 32 weeks | 25 | 0
sPTB < 34 weeks | 34 | 3
sPTB < 37 weeks | 42 | 6
Live births (n [%]) | 83 [90.2%] | 12 [100%] + 2*
Birth weight (kg) | 1.58 ± 0.65 | 2.1 ± 0.44
NICU admission (n [%]) | 69 [83.1%] | 7 [58.3%]

*Selective reduction of multifoetal pregnancies. TVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; sPTB: spontaneous preterm birth; IVF: in vitro fertilization; GA: gestational age; NICU: neonatal intensive care unit.

Electronic supplementary material

Supplementary Material 1 and Supplementary Material 2 are available online.
The NICU admission rate in the LAC group was 58.3%, mostly due to GA < 34 weeks and/or birth weight < 2.0 kg, but no serious neonatal complications were reported in all new-borns.\n\nTable 6The pregnancy outcomes of twin pregnanciesTVC groupLAC grouppatients (n)467Prior sPTB0 (0–1)1 (1–2)Prior operative hysteroscopy (n[%])45 [97.8%]0Conception by IVF(n[%])36 [78.3%]5 [71.4%]Rescue cervical cerclage (n[%])41 [89.1%]-The length of the cervix < 1.5 cm44 [95.7%]-The length of the cervix (cm)0.8 (0.5–1.5)-Cervical dilation at diagnosis (cm)0 (0–1)-Gestational latency (weeks)7.8 (4.9–11.5)-GA at delivery30.7 ± 4.534.3 ± 1.8sPTB < 24 weeks30sPTB < 28 weeks130sPTB < 32 weeks250sPTB < 34 weeks343sPTB < 37 weeks426Live births (n[%])83 [90.2%]12 [100%] + 2*Birth weight (kg)1.58 ± 0.652.1 ± 0.44NICU admission (n[%])69 [83.1%]7 [58.3%]*means selective reduction of multifetal pregnanciesTVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; sPTB: spontaneous preterm birth; IVF: In-Vitro Fertilization; GA: gestational age; NICU: Neonatal Intensive Care Unit\n\nThe pregnancy outcomes of twin pregnancies\n*means selective reduction of multifetal pregnancies\nTVC: transvaginal cervical cerclage; LAC: laparoscopic abdominal cervical cerclage; sPTB: spontaneous preterm birth; IVF: In-Vitro Fertilization; GA: gestational age; NICU: Neonatal Intensive Care Unit", "Cervical insufficiency (CI) is one of the main reasons for late abortion or preterm birth, which increases the burden on families and society as a whole. In women with CI, the recurrence rate of second trimester delivery was 72% in women with no cerclage, 30% in women with prophylactic TVC, and 5% in women with TAC [13], which means CC has been shown to be effective in the treatment of CI. Previous research found that CC prevents preterm birth in singleton women with a previous preterm birth once a short cervix has been detected on ultrasound imaging [14].\nIn the present study, we analysed 289 patients with CI, in which 233 women underwent TVC during pregnancy and 56 received LAC. The choice of procedure was dependent on the clinician on duty/performing the procedure and the patients after they were informed about both procedures. Most of the patients who had a typical history of ≥ 2 prior spontaneous second trimester abortions and/or preterm births < 34 weeks would like to seek medical advice before pregnancy or during the first trimester and then receive LAC or prophylactic TVC. The traditional concept is the preference for TVC for CI patients as first-line treatment while considering the suitability of TAC, either open TAC or LAC, for patients who were diagnosed with refractory cervical insufficiency or who had a prior failed TVC suture [5, 15]. It is known that the main disadvantage of TAC is the caesarean delivery to terminate the pregnancy and the complications during the TAC procedure. Nevertheless, with the development of laparoscopic techniques, some new approaches for LAC may be considered as an acceptable alternative to traditional LAC due to its superiority in terms of tranvaginal removal [16]. Surgeons have had extensive experience in ensuring that the procedure is successful and minimally invasive [17]. In our study, there were no adverse events, and all patients were discharged within 24-48 h in the TAC group. Moreover, TAC avoids the infection risk of vaginal surgery and movement restrictions after surgery, and patients can take care of their duration of pregnancy by themselves. 
A recent systematic review extrapolated that LAC is a reasonable alternative to open TAC and may be preferable because of benefits such as cosmesis and recovery [18]. Therefore, an increasing number of researchers prefer to concentrate on the benefits of LAC before pregnancy or during the first trimester.\nThe currently available literature provides much evidence that indicates the superiority of pregnancy outcomes and neonatal outcomes in the LAC group to TVC. As a regional tertiary referral centre, we perform many cases with cervical cerclage, including TVC and LAC. However, there has been little analysis of the data in our hospital in recent years. Several findings in our study are notable. (1) Because previous studies have shown that TAC is suitable for patients who had a prior failed TVC suture, and may be associated with a lower risk of perinatal death or delivery at < 24 weeks [15, 19], such patients are prone to choose TAC. Therefore, the LAC group had more prior failed transvaginal cerclage procedures than the TVC group. (2) The LAC group had a very low incidence of sPTB < 34 weeks, and it was associated with a significant decrease in sPTB < 28 weeks, < 32, < 34 and < 37 weeks, and admission to the hospital during pregnancy for threatened abortion or preterm birth after CC and high in GA at delivery in comparison to the TVC group. (3) Neonatal outcomes in the LAC group were significantly better than those in the TVC group. These results in our retrospective study suggest that LAC is more effective than TVC. We speculate that the reasons for this result include the following. (1) The cerclage was positioned in the cervico-isthmic area, in a more proximal position in the LAC group, resulting in its stability. There were six patients in the TVC group who received a repeat cerclage (three prophylactic CCs at the first cerclage), while all patients in the LAC group received only one transabdominal cerclage, which is consistent with the retrospective study demonstrating that LAC is associated with better preservation of the cervical length throughout pregnancy than TVC [20]. (2) The timing of the LAC before pregnancy period avoids adverse effects of surgical stimulation on pregnancy. (3) Cervical shortness is one of the causes of sPTB. The patients in the TVC group had a shorter cervical length at cerclage placement than those in the LAC group, which may be associated with poorer pregnancy outcomes. However, it is worth noting that we obtained the same outcomes when we performed a subgroup analysis between the prophylactic CC and LAC groups without the interference of cervical length. (4) Transvaginal surgery has a higher risk of infection than transabdominal surgery, and infection is an essential indicator for termination of pregnancy. Although there was no significant difference between groups, there were 12 (5.2%) patients with infection and clinical chorioamnionitis in the TVC group, which is much greater than that in the LAC group where none occurred.\nA number of retrospective studies have reported pregnancy and neonatal outcomes for LAC before pregnancy or during the first trimester [21, 22]. Whittle et al. reported data for 65 patients with LAC and found that the foetal salvage rate (n = 67 pregnancies) was 89% with a mean gestational age of 35.8 ± 2.9 weeks [23]. Chen et al. showed a series of 101 LAC cases with an average GA at delivery of 36.2 weeks, a 95% foetal survival rate and no complications [24]. Ades et al. 
demonstrated that all patients with LAC in their study delivered via caesarean delivery with an average gestational age of 37.1 weeks [25]. In our report, the median GA at delivery in the LAC group was 38.3 weeks, with a 93.6% foetal survival rate and low NICU admission. Moreover, the finding that only a small portion of patients were admitted to the hospital for threatened abortion or preterm birth after CC during pregnancy in the LAC group can reduce the financial and emotional burden on patients and families and solve the problem of increased risk of venous thromboembolism due to movement restrictions. Meanwhile, prior research reported that when left in situ for subsequent pregnancies, laparoscopic transabdominal cerclage is associated with a high rate of neonatal survival [7]. Therefore, for some patients, could LAC be selected as the first choice?\nThere are some different opinions on whether TVC can be recommended in multiple pregnancies for preventing preterm birth. Some studies have shown no current evidence of a benefit for TVC was found in multiple pregnancies, and an increased risk of PTB, very low birth weight and respiratory distress [1, 26−27]. However, in another retrospective study, it was concluded that in twin pregnancies with a cervix ≤ 15 mm before 24 weeks, CC was associated with a significant prolongation of pregnancy by almost 4 more weeks and significantly decreased preterm birth < 34 weeks and admission to the NICU compared with controls [28]. In a recent study, Wu et al. found that ultrasound-indicated TVC in dichorionic/diamniotic twin pregnancies could decrease PTB, especially for women with a cervical length < 15 mm [29]. Zeng et al. reported that emergency cerclage with the standardized transvaginal McDonald’s technique in twin women with cervical dilation and prolapsed membranes was associated with better pregnancy outcomes [10]. We briefly analysed the pregnancy outcome of CC in twin pregnancies. In our study, 44 patients had cervical lengths < 15 mm with a median 7.8-week latency period from CC to delivery in the TVC group. Our results, including GA at delivery and neonatal outcomes, compare favourably to previously reported series that reported that the mean (min.- max.) gestational age at delivery was 27.3 (21–34) weeks, and the median time between cervical cerclage and delivery was 6.4 weeks [30]. The results indicating the potential to consider offering CC to twin pregnancies with cervical shortening (< 15 mm) corroborate those of previous reports.\nA case series and literature review showed 80% of pregnancies with transabdominal cerclage delivered beyond 32 weeks and 35% after 37 weeks gestation with an overall perinatal survival of 91% and without adverse events [9]. in the LAC group, all of our cases delivered ≥ 32 weeks with a total foetal survival rate of 100%, providing good obstetric results without increasing perioperative morbidity and good evidence for the view that TAC efficiently suppresses the risk of sPTB for patients with CI in twin pregnancies.\nOne of the limitations in our study is the unbalanced patients number between the two groups (233 vs. 56). We need larger prospective controlled studies about laparoscopic abdominal cervical cerclage to further confirm its superiority, especially in twin pregnancies.", "To conclude, we found that LAC, as a more effective treatment for CI patients, appears to have better pregnancy outcomes than TVC. 
Moreover, our limited experience suggests that LAC performed before pregnancy or during the first trimester is a recommended option for CI patients. Even in twin pregnancies, CC can improve pregnancy outcomes with a longer latency period, especially in the LAC group. Given its high efficacy and low morbidity, LAC may be used as a first-line treatment in certain indications, for example in patients with a typical history of ≥ 2 prior spontaneous second trimester abortions and/or preterm births < 34 weeks, or in patients who, after being informed of its advantages and limitations, choose LAC even with a typical history of ≥ 1. Of course, larger prospective controlled studies are needed for confirmation.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 2\n", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 2\n" ]
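The between-group rates reported in the cerclage results above (for example, sPTB < 34 weeks in 109/233 TVC patients versus 4/56 LAC patients, P < 0.001) are simple two-by-two comparisons. The statistical methods of the cerclage study are not included in this excerpt, so the choice of test below is an assumption rather than the authors' procedure; the sketch only re-checks one reported comparison with Fisher's exact test for illustration.

```python
# Illustrative re-check of one comparison from the tables above:
# sPTB < 34 weeks occurred in 109/233 TVC patients vs. 4/56 LAC patients.
# The original study's own test is not shown in this excerpt, so Fisher's
# exact test is used here as an assumption, not as the authors' method.
from scipy.stats import fisher_exact

table = [
    [109, 233 - 109],  # TVC group: events, non-events
    [4, 56 - 4],       # LAC group: events, non-events
]

odds_ratio, p_value = fisher_exact(table)
print(f"OR = {odds_ratio:.1f}, two-sided p = {p_value:.1e}")
```

With these counts the two-sided p-value falls far below 0.001, consistent with the "< 0.001" reported in the table, although the exact figure depends on the test actually used by the authors.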
[ null, null, null, null, null, null, "results", null, null, null, "discussion", "conclusion", "supplementary-material", null ]
[ "Cervical insufficiency", "Laparoscopic cervical cerclage", "Transvaginal cervical cerclage", "Pregnancy outcomes", "Preterm birth", "Twin pregnancy" ]
Evaluation of two different semi-automated homogenization techniques in microbiological diagnosis of periprosthetic joint infection: disperser vs. bead milling method.
36253761
In the microbiological diagnosis of periprosthetic joint infection (PJI), there is no consensus regarding the most suitable specimens, the optimal number to be cultured, or the most effective technique of tissue processing. This comparative study analysed the accuracy of two semi-automated homogenization methods, with special focus on the volume and exact origin of each sample.
BACKGROUND
We investigated a total of 722 periprosthetic tissue samples. PJI was defined according to the new scoring system for preoperative and intraoperative criteria. We compared the performance of our routine procedure, in which single tissue samples are processed with a disposable high-frequency disperser, with the bead milling method.
METHODS
Eighty patients were included. Among the forty cases classified as PJI, 34 yielded positive culture results. In 23 cases (68%), both techniques generated exactly concordant results. In seven cases (20%) processing by the disperser, and in four cases (12%) processing by bead milling, yielded additional positive samples; this difference was not significant, since the major definition criteria were met in all cases. The percentage of positive results was influenced by the volume and origin of the tissue samples. Results for small tissue samples tended to be better with the bead milling method, which might improve preoperative arthroscopic diagnosis, as the volume of biopsies is generally limited. Six patients had negative results due to previous antimicrobial therapy. The other forty patients were classified as aseptic failures. Neither procedure resulted in any contamination.
RESULTS
Both methods enable reliable processing of tissue samples for diagnosis of PJI and are suitable for routine use.
CONCLUSION
[ "Arthritis, Infectious", "Arthroplasty, Replacement, Hip", "Biopsy", "Humans", "Prosthesis-Related Infections", "Sensitivity and Specificity" ]
9575308
Introduction
Microbiological investigations play a key role in the diagnosis of periprosthetic joint infection (PJI). In contrast to many organ-related infections which cause acute symptoms, PJI often has a chronically insidious course. Depending on the joint and patient collective, these cases can account for up to 50% of the total number of infections (own data). The consequences for the patient are considerable, since almost every case sooner or later requires surgical intervention. The development of the infection is closely related to the variable growth behaviour of the pathogens. Many microorganisms are able to colonize the surface of a foreign body, creating a biofilm to protect them from their environment. If they cause infections in the tissue surrounding the devices, bacteria can survive as sessile or slow-growing variants, making diagnostics and therapy a challenge [1]. Furthermore, chronic inflammation is histologically characterized by predominant fibrous granulation tissue, while the proportion of neutrophils, the hallmark of an acute infection process, is usually very low. This places special demands on the laboratory in terms of processing and culture methods. Unfortunately, there are still no standard procedures for processing or cultivation. We have recently published data on the significance of culture media for diagnostics in PJI [2, 3]. It is undisputed that semi-automated homogenization of tissue samples is superior to any manual method [4]. However, these methods are still compared with one another in various publications. To our knowledge, this is the first study that has evaluated the performance of two different semi-automated homogenization techniques and their effect on the yield of bacteria, additionally taking into account the number, volume and origin of the samples. We compared our routine procedure in which we process single tissue samples by disposable high-frequency disperser with the bead milling method (mechanized agitation) which enables simultaneous handling of several samples.
null
null
Results
[SUBTITLE] Patient data [SUBSECTION] Ten cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1. Table 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. [%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection; Clinical and microbiological characteristics of all study cases Explantation of prosthesis components (femoral, acetabular, tibial, inlay) Average number of positive samples per patient/ Average total number of samples per patient 1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection; Table 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC* Cutibacterium spp. 4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture Culture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method 1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture Ten cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). 
Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1. Table 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. [%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection; Clinical and microbiological characteristics of all study cases Explantation of prosthesis components (femoral, acetabular, tibial, inlay) Average number of positive samples per patient/ Average total number of samples per patient 1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection; Table 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC* Cutibacterium spp. 4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture Culture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method 1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture [SUBTITLE] Microbiological diagnosis [SUBSECTION] In the PJI group, 34 patients yielded positive culture results. We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. 
In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods. In the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3). Table 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure The weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method Disperser (no. of positive cultures/ total no. of specimens [%]) 1Periprosthetic joint infection; 2Aseptic failure Table 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. (%)(n = 51)Staphylococcus species19 (37) Staphylococcus aureus 8 (16) Staphylococcus epidermidis* 6 (12) Staphylococcus lugdunensis 3 (6) Staphylococcus capitis 2 (4)Enterococcus species5 (10) Enterococcus faecalis 5 (10)Streptococcus species4 (8) Streptococcus dysgalactiae 1 (2) Streptococcus agalactiae 1 (2) Streptococcus constellatus 1 (2) Streptococcus oralis 1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4) Corynebacterium amycolatum 1 (2)Gram-negative bacilli14 (28) Enterobacter cloacae 4 (8) Escherichia coli 3 (6) Klebsiella pneumoniae 2 (4) Proteus mirabilis 2 (4) Pseudomonas aeruginosa 2 (4) Morganella morganii 1 (2)*in 2 cases associated with small colony variants (SCV) Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJI *in 2 cases associated with small colony variants (SCV) Nevertheless, both methods yielded negative culture results for six patients in the PJI group. But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B). 
Specific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B). In the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B. A further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences. Due to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results. For an overview of the distribution of positive samples, see Fig. 1. Fig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17) Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17) In the PJI group, 34 patients yielded positive culture results. We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods. In the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. 
According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3). Table 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure The weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method Disperser (no. of positive cultures/ total no. of specimens [%]) 1Periprosthetic joint infection; 2Aseptic failure Table 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. (%)(n = 51)Staphylococcus species19 (37) Staphylococcus aureus 8 (16) Staphylococcus epidermidis* 6 (12) Staphylococcus lugdunensis 3 (6) Staphylococcus capitis 2 (4)Enterococcus species5 (10) Enterococcus faecalis 5 (10)Streptococcus species4 (8) Streptococcus dysgalactiae 1 (2) Streptococcus agalactiae 1 (2) Streptococcus constellatus 1 (2) Streptococcus oralis 1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4) Corynebacterium amycolatum 1 (2)Gram-negative bacilli14 (28) Enterobacter cloacae 4 (8) Escherichia coli 3 (6) Klebsiella pneumoniae 2 (4) Proteus mirabilis 2 (4) Pseudomonas aeruginosa 2 (4) Morganella morganii 1 (2)*in 2 cases associated with small colony variants (SCV) Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJI *in 2 cases associated with small colony variants (SCV) Nevertheless, both methods yielded negative culture results for six patients in the PJI group. But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B). Specific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B). In the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B. 
A further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences. Due to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results. For an overview of the distribution of positive samples, see Fig. 1. Fig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17) Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)
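According to the statistical analysis described later in the methods, the weight-stratified comparisons in Table 2B (e.g., 71/87 vs. 72/90 positive cultures for samples > 1.5 g, P = 0.79) were analysed with a two-proportion z-test with Yates's continuity correction in RStudio. As an illustration only, the following sketch re-implements such a test from first principles; the function name is ours, and since the published P values appear closest to the uncorrected variant, both corrected and uncorrected values are printed.

```python
# Minimal re-implementation of a two-proportion z-test, optionally with a
# Yates-style continuity correction, applied to the weight-stratified culture
# results reported in Table 2B. Illustrative only; not the authors' R code.
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2, continuity=True):
    """Two-sided p-value for H0: p1 == p2, using the pooled-variance z-test."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    diff = abs(p1 - p2)
    if continuity:                          # Yates-style continuity correction
        diff = max(0.0, diff - 0.5 * (1 / n1 + 1 / n2))
    z = diff / se
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))   # 2 * (1 - Phi(z))

# (positive cultures, total samples) for disperser vs. bead milling, by weight
comparisons = {
    "> 1.5 g":   ((71, 87), (72, 90)),
    "0.5-1.5 g": ((41, 53), (37, 48)),
    "< 0.5 g":   ((8, 13),  (12, 17)),
}

for label, ((x1, n1), (x2, n2)) in comparisons.items():
    p_raw = two_proportion_z(x1, n1, x2, n2, continuity=False)
    p_corr = two_proportion_z(x1, n1, x2, n2, continuity=True)
    print(f"{label:>9}: p = {p_raw:.2f} (uncorrected), {p_corr:.2f} (corrected)")
```

Running this with the Table 2B counts gives uncorrected p-values of roughly 0.79, 0.97 and 0.60, matching the published figures; applying the continuity correction yields somewhat larger values.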
Conclusion
With this study we have demonstrated that two different semi-automated systems enable reliable processing of tissue samples for the diagnosis of PJI. These techniques should replace the still widely used manual methods, which are less sensitive and more susceptible to contamination. According to the literature, the performance and usability of the devices available on the market differ considerably; comparative studies are therefore urgently required. This study has also shown that the rate of positive tissue samples is influenced by the volume and exact origin of each sample. These parameters should therefore always be recorded and communicated in the laboratory report, as they affect clinical relevance. There can be no doubt that standardized procedures are required to make microbiological results predictable and comparable and to give surgeons the highest level of certainty for their decision-making.
[ "Study population/definition of infection", "Ethics", "Specimen processing", "Culture conditions", "Statistical analysis", "Patient data", "Microbiological diagnosis" ]
[ "This comparative study was conducted between 2019 and 2020 and included patients from three different hospitals with which our laboratory has a cooperation agreement for microbiological diagnostics. We investigated about 800 tissue samples from a total of 90 patients, almost equally distributed among the hospitals. The patients had undergone revision arthroplasty of the hip or knee because of presumed infection or aseptic failure (AF). We based our definition of PJI on the new scoring system for preoperative and intraoperative criteria published by Parvizi et al. [5]. Two positive tissue cultures with the same microorganisms and/or the presence of a sinus tract communicating with the prosthesis were considered as major criteria for infection. The following parameters were regarded as preoperative minor criteria: elevated serum CRP (> 1 mg/dL), D-dimer (> 860 ng/mL), and erythrocyte sedimentation rate (> 30 mm/h) assigned with 2, 2 and 1 points. Furthermore, elevated synovial fluid white blood cell count (> 3000 cells/µL), alpha-defensin (signal-to-cut off ratio > 1), leucocyte esterase (++), polymorphonuclear percentage (> 80%), and synovial CRP (> 6.9 mg/L) received 3, 3, 3, 2, and 1 points, respectively. Patients with an aggregate score of greater than or equal to 6 were considered to be infected. For patients with a lower score, intraoperative findings of positive histology, purulence, and a single positive culture were included and assigned 3, 3, and 2 points. Combined with the preoperative score, a total of greater than or equal to 6 was considered infected, a final score between 4 and 5 was inconclusive, and a score of 3 or less was considered not infected. Histopathological analysis was interpreted according to the classification by Krenn et al. [6].", "The study was approved by the ethics committee of the General Medical Council of North-Rhine, Düsseldorf, Germany. All patients gave their consent to participate in this study.", "A minimum set of four tissue samples per patient and method was a prerequisite for participation in the study. The specimens were taken from the neo-synovium, the area around the acetabulum and diverse suspicious sites in the periprosthetic membrane. On the basis of previous experience, samples ≥ 1 cm3 in size, corresponding to a weight of ≥ 1.5 g, were required, provided that the operating processes allowed this. Each sample was taken with a separate, sterile instrument. All samples were collected in the operating room using different transport vials depending on the method. For routine diagnostics, each sample was transferred to an individually packaged, sterile 25 ml tube (Sarstedt, Australia) with a screw top, and for the bead milling method a 15 ml tube filled with 50 ceramic beads of 2.8/5.0 mm provided by the supplier (Bertin Technologies, USA) was used. This was individually packaged and sterilized by steam. Sterilization was controlled and documented with process indicators as well as bioindicators. To prevent tissue samples drying up, each vial was covered (3–5 ml depending on the size) in the operating room with single-use, separate, sterile 0.9% sodium chloride solution. All samples were transferred to the laboratory within four hours.\nIn the laboratory the 25 ml tubes were directly homogenized at a laminar air flow bench within two hours after arrival using the disperser T18 Ultra Turrax with disposable dispersing elements (IKA-Werke, Staufen, Germany). 
Depending on the tissue structure, the speed range varied from 5,000 to 10,000 rpm for 30 s. For the bead milling method, Precellys Evolution homogenizer was used (Bertin Technologies, Rockville, Washington D.C., USA). The tubes were directly processed using two cycles at 7,200 rpm for 20 s each, interrupted by a pause of 20 s.", "The homogenized periprosthetic tissue samples were applied for cultivation onto sheep-blood agar and chocolate agar (Oxoid, Basingstoke, Hampshire, United Kingdom). For anaerobic cultures, Schaedler agar, Schaedler KV agar (Oxoid, Basingstoke, Hampshire, United Kingdom) and Columbia blood agar (biomerieux, Marcy l’Etoile, France) were inoculated and incubated for five days. All specimens were also incubated for fourteen days using brain-heart infusion broth (BHI, Oxoid, Basingstoke, Hampshire, United Kingdom) and thioglycollate broth medium additionally incorporated with liver digest and finally supplemented with hemin and horse serum (LT, SIFIN, Berlin, Germany). For more details about this approach, see the literature [2, 3]. As an adjunct, the results of all investigated prostheses and components had no influence on the study and are therefore not discussed in more detail here.\nThe organisms were identified by Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS; BrukerDaltonics, Bremen, Germany) using the direct transfer method according to the recommendation of the manufacturer.", "The data for homogenization by disperser and bead milling were statistically analysed by the two-proportion z-test using the RStudio (version 1.2. 5042) software. Yate’s continuity correction was applied for all databases. P values of less than 0.05 should be considered statistically significant.", "Ten cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1.\n\nTable 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. 
[%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nClinical and microbiological characteristics of all study cases\nExplantation of prosthesis components\n(femoral, acetabular, tibial, inlay)\nAverage number of positive samples per patient/\nAverage total number of samples per patient\n1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nTable 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC*\nCutibacterium spp.\n4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\n\nCulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method\n1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture", "In the PJI group, 34 patients yielded positive culture results. We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods.\nIn the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3).\n\nTable 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. 
Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure\n\nThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method\nDisperser (no. of positive cultures/\ntotal no. of specimens [%])\n1Periprosthetic joint infection; 2Aseptic failure\n\nTable 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. (%)(n = 51)Staphylococcus species19 (37)\nStaphylococcus aureus\n8 (16)\nStaphylococcus epidermidis*\n6 (12)\nStaphylococcus lugdunensis\n3 (6)\nStaphylococcus capitis\n2 (4)Enterococcus species5 (10)\nEnterococcus faecalis\n5 (10)Streptococcus species4 (8)\nStreptococcus dysgalactiae\n1 (2)\nStreptococcus agalactiae\n1 (2)\nStreptococcus constellatus\n1 (2)\nStreptococcus oralis\n1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4)\nCorynebacterium amycolatum\n1 (2)Gram-negative bacilli14 (28)\nEnterobacter cloacae\n4 (8)\nEscherichia coli\n3 (6)\nKlebsiella pneumoniae\n2 (4)\nProteus mirabilis\n2 (4)\nPseudomonas aeruginosa\n2 (4)\nMorganella morganii\n1 (2)*in 2 cases associated with small colony variants (SCV)\n\nMicrobiological findings by positive tissue cultures from 34 patients meeting the definition of PJI\n*in 2 cases associated with small colony variants (SCV)\nNevertheless, both methods yielded negative culture results for six patients in the PJI group. But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B).\nSpecific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B).\nIn the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B.\nA further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. 
Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences.\nDue to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results.\nFor an overview of the distribution of positive samples, see Fig. 1.\n\nFig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)\n\nOverview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)" ]
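The "Study population/definition of infection" section above spells out the 2018 scoring system used to classify cases: major criteria (≥ 2 positive cultures with the same organism, or a sinus tract) define infection outright, while weighted minor preoperative and intraoperative criteria are otherwise summed against thresholds (≥ 6 infected, 4–5 inconclusive, ≤ 3 not infected). As a reading aid only, the sketch below encodes that aggregation logic; the function and key names are ours, the point values and thresholds are taken from the text, and this is in no way a clinical decision tool.

```python
# Hedged sketch of the scoring logic described under "Study population /
# definition of infection". Point values and thresholds follow the text above;
# all identifiers are illustrative. Not a substitute for the published definition.

PREOP_POINTS = {
    "serum_crp_gt_1_mg_dl": 2,
    "d_dimer_gt_860_ng_ml": 2,
    "esr_gt_30_mm_h": 1,
    "synovial_wbc_gt_3000": 3,
    "alpha_defensin_ratio_gt_1": 3,
    "leukocyte_esterase_2plus": 3,
    "pmn_gt_80_percent": 2,
    "synovial_crp_gt_6_9_mg_l": 1,
}
INTRAOP_POINTS = {
    "positive_histology": 3,
    "purulence": 3,
    "single_positive_culture": 2,
}

def classify_pji(major_criterion, preop, intraop):
    """Return 'infected', 'inconclusive' or 'not infected' per the scoring above."""
    if major_criterion:   # >= 2 positive cultures with the same organism, or sinus tract
        return "infected"
    score = sum(points for key, points in PREOP_POINTS.items() if preop.get(key, False))
    if score >= 6:        # preoperative score alone is conclusive
        return "infected"
    # otherwise intraoperative findings are added to the preoperative score
    score += sum(points for key, points in INTRAOP_POINTS.items() if intraop.get(key, False))
    if score >= 6:
        return "infected"
    return "inconclusive" if score >= 4 else "not infected"

# Example: no major criterion; serum CRP (2) + synovial WBC (3) preoperatively,
# plus a single positive culture (2) intraoperatively -> 7 -> "infected"
print(classify_pji(False,
                   {"serum_crp_gt_1_mg_dl": True, "synovial_wbc_gt_3000": True},
                   {"single_positive_culture": True}))
```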
[ null, null, null, null, null, null, null ]
[ "Introduction", "Materials and methods", "Study population/definition of infection", "Ethics", "Specimen processing", "Culture conditions", "Statistical analysis", "Results", "Patient data", "Microbiological diagnosis", "Discussion", "Conclusion" ]
[ "Microbiological investigations play a key role in the diagnosis of periprosthetic joint infection (PJI). In contrast to many organ-related infections which cause acute symptoms, PJI often has a chronically insidious course. Depending on the joint and patient collective, these cases can account for up to 50% of the total number of infections (own data). The consequences for the patient are considerable, since almost every case sooner or later requires surgical intervention. The development of the infection is closely related to the variable growth behaviour of the pathogens. Many microorganisms are able to colonize the surface of a foreign body, creating a biofilm to protect them from their environment. If they cause infections in the tissue surrounding the devices, bacteria can survive as sessile or slow-growing variants, making diagnostics and therapy a challenge [1]. Furthermore, chronic inflammation is histologically characterized by predominant fibrous granulation tissue, while the proportion of neutrophils, the hallmark of an acute infection process, is usually very low. This places special demands on the laboratory in terms of processing and culture methods. Unfortunately, there are still no standard procedures for processing or cultivation. We have recently published data on the significance of culture media for diagnostics in PJI [2, 3].\nIt is undisputed that semi-automated homogenization of tissue samples is superior to any manual method [4]. However, these methods are still compared with one another in various publications. To our knowledge, this is the first study that has evaluated the performance of two different semi-automated homogenization techniques and their effect on the yield of bacteria, additionally taking into account the number, volume and origin of the samples. We compared our routine procedure in which we process single tissue samples by disposable high-frequency disperser with the bead milling method (mechanized agitation) which enables simultaneous handling of several samples.", "[SUBTITLE] Study population/definition of infection [SUBSECTION] This comparative study was conducted between 2019 and 2020 and included patients from three different hospitals with which our laboratory has a cooperation agreement for microbiological diagnostics. We investigated about 800 tissue samples from a total of 90 patients, almost equally distributed among the hospitals. The patients had undergone revision arthroplasty of the hip or knee because of presumed infection or aseptic failure (AF). We based our definition of PJI on the new scoring system for preoperative and intraoperative criteria published by Parvizi et al. [5]. Two positive tissue cultures with the same microorganisms and/or the presence of a sinus tract communicating with the prosthesis were considered as major criteria for infection. The following parameters were regarded as preoperative minor criteria: elevated serum CRP (> 1 mg/dL), D-dimer (> 860 ng/mL), and erythrocyte sedimentation rate (> 30 mm/h) assigned with 2, 2 and 1 points. Furthermore, elevated synovial fluid white blood cell count (> 3000 cells/µL), alpha-defensin (signal-to-cut off ratio > 1), leucocyte esterase (++), polymorphonuclear percentage (> 80%), and synovial CRP (> 6.9 mg/L) received 3, 3, 3, 2, and 1 points, respectively. Patients with an aggregate score of greater than or equal to 6 were considered to be infected. 
For patients with a lower score, intraoperative findings of positive histology, purulence, and a single positive culture were included and assigned 3, 3, and 2 points. Combined with the preoperative score, a total of greater than or equal to 6 was considered infected, a final score between 4 and 5 was inconclusive, and a score of 3 or less was considered not infected. Histopathological analysis was interpreted according to the classification by Krenn et al. [6].\nThis comparative study was conducted between 2019 and 2020 and included patients from three different hospitals with which our laboratory has a cooperation agreement for microbiological diagnostics. We investigated about 800 tissue samples from a total of 90 patients, almost equally distributed among the hospitals. The patients had undergone revision arthroplasty of the hip or knee because of presumed infection or aseptic failure (AF). We based our definition of PJI on the new scoring system for preoperative and intraoperative criteria published by Parvizi et al. [5]. Two positive tissue cultures with the same microorganisms and/or the presence of a sinus tract communicating with the prosthesis were considered as major criteria for infection. The following parameters were regarded as preoperative minor criteria: elevated serum CRP (> 1 mg/dL), D-dimer (> 860 ng/mL), and erythrocyte sedimentation rate (> 30 mm/h) assigned with 2, 2 and 1 points. Furthermore, elevated synovial fluid white blood cell count (> 3000 cells/µL), alpha-defensin (signal-to-cut off ratio > 1), leucocyte esterase (++), polymorphonuclear percentage (> 80%), and synovial CRP (> 6.9 mg/L) received 3, 3, 3, 2, and 1 points, respectively. Patients with an aggregate score of greater than or equal to 6 were considered to be infected. For patients with a lower score, intraoperative findings of positive histology, purulence, and a single positive culture were included and assigned 3, 3, and 2 points. Combined with the preoperative score, a total of greater than or equal to 6 was considered infected, a final score between 4 and 5 was inconclusive, and a score of 3 or less was considered not infected. Histopathological analysis was interpreted according to the classification by Krenn et al. [6].\n[SUBTITLE] Ethics [SUBSECTION] The study was approved by the ethics committee of the General Medical Council of North-Rhine, Düsseldorf, Germany. All patients gave their consent to participate in this study.\nThe study was approved by the ethics committee of the General Medical Council of North-Rhine, Düsseldorf, Germany. All patients gave their consent to participate in this study.\n[SUBTITLE] Specimen processing [SUBSECTION] A minimum set of four tissue samples per patient and method was a prerequisite for participation in the study. The specimens were taken from the neo-synovium, the area around the acetabulum and diverse suspicious sites in the periprosthetic membrane. On the basis of previous experience, samples ≥ 1 cm3 in size, corresponding to a weight of ≥ 1.5 g, were required, provided that the operating processes allowed this. Each sample was taken with a separate, sterile instrument. All samples were collected in the operating room using different transport vials depending on the method. 
For routine diagnostics, each sample was transferred to an individually packaged, sterile 25 ml tube (Sarstedt, Australia) with a screw top, and for the bead milling method a 15 ml tube filled with 50 ceramic beads of 2.8/5.0 mm provided by the supplier (Bertin Technologies, USA) was used. This was individually packaged and sterilized by steam. Sterilization was controlled and documented with process indicators as well as bioindicators. To prevent tissue samples drying up, each vial was covered (3–5 ml depending on the size) in the operating room with single-use, separate, sterile 0.9% sodium chloride solution. All samples were transferred to the laboratory within four hours.\nIn the laboratory the 25 ml tubes were directly homogenized at a laminar air flow bench within two hours after arrival using the disperser T18 Ultra Turrax with disposable dispersing elements (IKA-Werke, Staufen, Germany). Depending on the tissue structure, the speed range varied from 5,000 to 10,000 rpm for 30 s. For the bead milling method, Precellys Evolution homogenizer was used (Bertin Technologies, Rockville, Washington D.C., USA). The tubes were directly processed using two cycles at 7,200 rpm for 20 s each, interrupted by a pause of 20 s.\nA minimum set of four tissue samples per patient and method was a prerequisite for participation in the study. The specimens were taken from the neo-synovium, the area around the acetabulum and diverse suspicious sites in the periprosthetic membrane. On the basis of previous experience, samples ≥ 1 cm3 in size, corresponding to a weight of ≥ 1.5 g, were required, provided that the operating processes allowed this. Each sample was taken with a separate, sterile instrument. All samples were collected in the operating room using different transport vials depending on the method. For routine diagnostics, each sample was transferred to an individually packaged, sterile 25 ml tube (Sarstedt, Australia) with a screw top, and for the bead milling method a 15 ml tube filled with 50 ceramic beads of 2.8/5.0 mm provided by the supplier (Bertin Technologies, USA) was used. This was individually packaged and sterilized by steam. Sterilization was controlled and documented with process indicators as well as bioindicators. To prevent tissue samples drying up, each vial was covered (3–5 ml depending on the size) in the operating room with single-use, separate, sterile 0.9% sodium chloride solution. All samples were transferred to the laboratory within four hours.\nIn the laboratory the 25 ml tubes were directly homogenized at a laminar air flow bench within two hours after arrival using the disperser T18 Ultra Turrax with disposable dispersing elements (IKA-Werke, Staufen, Germany). Depending on the tissue structure, the speed range varied from 5,000 to 10,000 rpm for 30 s. For the bead milling method, Precellys Evolution homogenizer was used (Bertin Technologies, Rockville, Washington D.C., USA). The tubes were directly processed using two cycles at 7,200 rpm for 20 s each, interrupted by a pause of 20 s.\n[SUBTITLE] Culture conditions [SUBSECTION] The homogenized periprosthetic tissue samples were applied for cultivation onto sheep-blood agar and chocolate agar (Oxoid, Basingstoke, Hampshire, United Kingdom). For anaerobic cultures, Schaedler agar, Schaedler KV agar (Oxoid, Basingstoke, Hampshire, United Kingdom) and Columbia blood agar (biomerieux, Marcy l’Etoile, France) were inoculated and incubated for five days. 
All specimens were also incubated for fourteen days using brain-heart infusion broth (BHI, Oxoid, Basingstoke, Hampshire, United Kingdom) and thioglycollate broth medium additionally incorporated with liver digest and finally supplemented with hemin and horse serum (LT, SIFIN, Berlin, Germany). For more details about this approach, see the literature [2, 3]. As an adjunct, the results of all investigated prostheses and components had no influence on the study and are therefore not discussed in more detail here.\nThe organisms were identified by Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS; BrukerDaltonics, Bremen, Germany) using the direct transfer method according to the recommendation of the manufacturer.\nThe homogenized periprosthetic tissue samples were applied for cultivation onto sheep-blood agar and chocolate agar (Oxoid, Basingstoke, Hampshire, United Kingdom). For anaerobic cultures, Schaedler agar, Schaedler KV agar (Oxoid, Basingstoke, Hampshire, United Kingdom) and Columbia blood agar (biomerieux, Marcy l’Etoile, France) were inoculated and incubated for five days. All specimens were also incubated for fourteen days using brain-heart infusion broth (BHI, Oxoid, Basingstoke, Hampshire, United Kingdom) and thioglycollate broth medium additionally incorporated with liver digest and finally supplemented with hemin and horse serum (LT, SIFIN, Berlin, Germany). For more details about this approach, see the literature [2, 3]. As an adjunct, the results of all investigated prostheses and components had no influence on the study and are therefore not discussed in more detail here.\nThe organisms were identified by Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS; BrukerDaltonics, Bremen, Germany) using the direct transfer method according to the recommendation of the manufacturer.\n[SUBTITLE] Statistical analysis [SUBSECTION] The data for homogenization by disperser and bead milling were statistically analysed by the two-proportion z-test using the RStudio (version 1.2. 5042) software. Yate’s continuity correction was applied for all databases. P values of less than 0.05 should be considered statistically significant.\nThe data for homogenization by disperser and bead milling were statistically analysed by the two-proportion z-test using the RStudio (version 1.2. 5042) software. Yate’s continuity correction was applied for all databases. P values of less than 0.05 should be considered statistically significant.", "This comparative study was conducted between 2019 and 2020 and included patients from three different hospitals with which our laboratory has a cooperation agreement for microbiological diagnostics. We investigated about 800 tissue samples from a total of 90 patients, almost equally distributed among the hospitals. The patients had undergone revision arthroplasty of the hip or knee because of presumed infection or aseptic failure (AF). We based our definition of PJI on the new scoring system for preoperative and intraoperative criteria published by Parvizi et al. [5]. Two positive tissue cultures with the same microorganisms and/or the presence of a sinus tract communicating with the prosthesis were considered as major criteria for infection. The following parameters were regarded as preoperative minor criteria: elevated serum CRP (> 1 mg/dL), D-dimer (> 860 ng/mL), and erythrocyte sedimentation rate (> 30 mm/h) assigned with 2, 2 and 1 points. 
Furthermore, elevated synovial fluid white blood cell count (> 3000 cells/µL), alpha-defensin (signal-to-cut off ratio > 1), leucocyte esterase (++), polymorphonuclear percentage (> 80%), and synovial CRP (> 6.9 mg/L) received 3, 3, 3, 2, and 1 points, respectively. Patients with an aggregate score of greater than or equal to 6 were considered to be infected. For patients with a lower score, intraoperative findings of positive histology, purulence, and a single positive culture were included and assigned 3, 3, and 2 points. Combined with the preoperative score, a total of greater than or equal to 6 was considered infected, a final score between 4 and 5 was inconclusive, and a score of 3 or less was considered not infected. Histopathological analysis was interpreted according to the classification by Krenn et al. [6].", "The study was approved by the ethics committee of the General Medical Council of North-Rhine, Düsseldorf, Germany. All patients gave their consent to participate in this study.", "A minimum set of four tissue samples per patient and method was a prerequisite for participation in the study. The specimens were taken from the neo-synovium, the area around the acetabulum and diverse suspicious sites in the periprosthetic membrane. On the basis of previous experience, samples ≥ 1 cm3 in size, corresponding to a weight of ≥ 1.5 g, were required, provided that the operating processes allowed this. Each sample was taken with a separate, sterile instrument. All samples were collected in the operating room using different transport vials depending on the method. For routine diagnostics, each sample was transferred to an individually packaged, sterile 25 ml tube (Sarstedt, Australia) with a screw top, and for the bead milling method a 15 ml tube filled with 50 ceramic beads of 2.8/5.0 mm provided by the supplier (Bertin Technologies, USA) was used. This was individually packaged and sterilized by steam. Sterilization was controlled and documented with process indicators as well as bioindicators. To prevent tissue samples drying up, each vial was covered (3–5 ml depending on the size) in the operating room with single-use, separate, sterile 0.9% sodium chloride solution. All samples were transferred to the laboratory within four hours.\nIn the laboratory the 25 ml tubes were directly homogenized at a laminar air flow bench within two hours after arrival using the disperser T18 Ultra Turrax with disposable dispersing elements (IKA-Werke, Staufen, Germany). Depending on the tissue structure, the speed range varied from 5,000 to 10,000 rpm for 30 s. For the bead milling method, Precellys Evolution homogenizer was used (Bertin Technologies, Rockville, Washington D.C., USA). The tubes were directly processed using two cycles at 7,200 rpm for 20 s each, interrupted by a pause of 20 s.", "The homogenized periprosthetic tissue samples were applied for cultivation onto sheep-blood agar and chocolate agar (Oxoid, Basingstoke, Hampshire, United Kingdom). For anaerobic cultures, Schaedler agar, Schaedler KV agar (Oxoid, Basingstoke, Hampshire, United Kingdom) and Columbia blood agar (biomerieux, Marcy l’Etoile, France) were inoculated and incubated for five days. All specimens were also incubated for fourteen days using brain-heart infusion broth (BHI, Oxoid, Basingstoke, Hampshire, United Kingdom) and thioglycollate broth medium additionally incorporated with liver digest and finally supplemented with hemin and horse serum (LT, SIFIN, Berlin, Germany). 
For more details about this approach, see the literature [2, 3]. As an adjunct, the results of all investigated prostheses and components had no influence on the study and are therefore not discussed in more detail here.\nThe organisms were identified by Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS; BrukerDaltonics, Bremen, Germany) using the direct transfer method according to the recommendation of the manufacturer.", "The data for homogenization by disperser and bead milling were statistically analysed by the two-proportion z-test using the RStudio (version 1.2. 5042) software. Yate’s continuity correction was applied for all databases. P values of less than 0.05 should be considered statistically significant.", "[SUBTITLE] Patient data [SUBSECTION] Ten cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1.\n\nTable 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. [%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nClinical and microbiological characteristics of all study cases\nExplantation of prosthesis components\n(femoral, acetabular, tibial, inlay)\nAverage number of positive samples per patient/\nAverage total number of samples per patient\n1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nTable 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. 
of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC*\nCutibacterium spp.\n4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\n\nCulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method\n1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\nTen cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1.\n\nTable 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. [%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nClinical and microbiological characteristics of all study cases\nExplantation of prosthesis components\n(femoral, acetabular, tibial, inlay)\nAverage number of positive samples per patient/\nAverage total number of samples per patient\n1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nTable 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. 
of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC*\nCutibacterium spp.\n4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\n\nCulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method\n1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\n[SUBTITLE] Microbiological diagnosis [SUBSECTION] In the PJI group, 34 patients yielded positive culture results. We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods.\nIn the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3).\n\nTable 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure\n\nThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method\nDisperser (no. 
of positive cultures/\ntotal no. of specimens [%])\n1Periprosthetic joint infection; 2Aseptic failure\n\nTable 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. (%)(n = 51)Staphylococcus species19 (37)\nStaphylococcus aureus\n8 (16)\nStaphylococcus epidermidis*\n6 (12)\nStaphylococcus lugdunensis\n3 (6)\nStaphylococcus capitis\n2 (4)Enterococcus species5 (10)\nEnterococcus faecalis\n5 (10)Streptococcus species4 (8)\nStreptococcus dysgalactiae\n1 (2)\nStreptococcus agalactiae\n1 (2)\nStreptococcus constellatus\n1 (2)\nStreptococcus oralis\n1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4)\nCorynebacterium amycolatum\n1 (2)Gram-negative bacilli14 (28)\nEnterobacter cloacae\n4 (8)\nEscherichia coli\n3 (6)\nKlebsiella pneumoniae\n2 (4)\nProteus mirabilis\n2 (4)\nPseudomonas aeruginosa\n2 (4)\nMorganella morganii\n1 (2)*in 2 cases associated with small colony variants (SCV)\n\nMicrobiological findings by positive tissue cultures from 34 patients meeting the definition of PJI\n*in 2 cases associated with small colony variants (SCV)\nNevertheless, both methods yielded negative culture results for six patients in the PJI group. But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B).\nSpecific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B).\nIn the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B.\nA further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences.\nDue to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results.\nFor an overview of the distribution of positive samples, see Fig. 1.\n\nFig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)\n\nOverview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)\nIn the PJI group, 34 patients yielded positive culture results. 
We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods.\nIn the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3).\n\nTable 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure\n\nThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method\nDisperser (no. of positive cultures/\ntotal no. of specimens [%])\n1Periprosthetic joint infection; 2Aseptic failure\n\nTable 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. 
(%)(n = 51)Staphylococcus species19 (37)\nStaphylococcus aureus\n8 (16)\nStaphylococcus epidermidis*\n6 (12)\nStaphylococcus lugdunensis\n3 (6)\nStaphylococcus capitis\n2 (4)Enterococcus species5 (10)\nEnterococcus faecalis\n5 (10)Streptococcus species4 (8)\nStreptococcus dysgalactiae\n1 (2)\nStreptococcus agalactiae\n1 (2)\nStreptococcus constellatus\n1 (2)\nStreptococcus oralis\n1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4)\nCorynebacterium amycolatum\n1 (2)Gram-negative bacilli14 (28)\nEnterobacter cloacae\n4 (8)\nEscherichia coli\n3 (6)\nKlebsiella pneumoniae\n2 (4)\nProteus mirabilis\n2 (4)\nPseudomonas aeruginosa\n2 (4)\nMorganella morganii\n1 (2)*in 2 cases associated with small colony variants (SCV)\n\nMicrobiological findings by positive tissue cultures from 34 patients meeting the definition of PJI\n*in 2 cases associated with small colony variants (SCV)\nNevertheless, both methods yielded negative culture results for six patients in the PJI group. But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B).\nSpecific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B).\nIn the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B.\nA further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences.\nDue to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results.\nFor an overview of the distribution of positive samples, see Fig. 1.\n\nFig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)\n\nOverview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)", "Ten cases were excluded because of an insufficient number of samples. Finally, a cohort of 80 patients, 40 with a hip and 40 with a knee prosthesis, was included in the study. In total, 722 tissue samples were investigated. 
In 35 cases the patients had at least one of the major diagnostic criteria for the presence of PJI (Table 1). Furthermore, five patients with no major criteria had an aggregate score of minor criteria greater than or equal to 6 and therefore were also considered as infected (Table 1). The other 40 cases had neither a major criterion for PJI nor an aggregate score greater than 2 and were classified as AF. For further demographic data, see Table 1.\n\nTable 1Clinical and microbiological characteristics of all study casesPJI1 (n = 40)AF2 (n = 40)Median patient age, yr (range)75 (46–89)72 (50–93)No. (%) of females18 (45)24 (57)Site of arthroplasty (no. [%])Hip20 (50)20 (50)Knee20 (50)20 (50)Type of surgery (no. [%])Explantation of the entire joint prosthesis25 (62)19 (47)Explantation of prosthesis components(femoral, acetabular, tibial, inlay)14 (35)18 (45)Debridement and prosthesis retention1 (3)3 (8)Patients with major diagnostic criteria for PJI* (no./total no. [%])≥ 2 positive cultures32/35 (91)0≥ 2 positive cultures + Presence of sinus tract2/35 (6)0Negative culture + Presence of sinus tract1/35 (3)0Average number of positive samples per patient/Average total number of samples per patient3.5/4.50/4.5Patients with negative major criteria and minor pre- and intraoperative scoring based diagnostic criteria for PJI* (no./total no. [%])Score ≥ 6 (infected)5/5 (100)0Score 4–5 (inconclusive)00Score ≤ 3 (not infected)0401Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nClinical and microbiological characteristics of all study cases\nExplantation of prosthesis components\n(femoral, acetabular, tibial, inlay)\nAverage number of positive samples per patient/\nAverage total number of samples per patient\n1Periprosthetic joint infection; 2Aseptic failure; *According to the 2018 Definition of periprosthetic hip and knee infection;\n\nTable 2ACulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling methodPJI1 (n = 40)Preference of method(no. [%])Additional number of positive samples/(No. of cases)Culture positive (n = 34)Concordance 23/(68)0Disperser 7/(20)1/(6); 2/(1)Bead milling 4/(12)1/(1); 2/(3)CommentsPreviously detected microorganismsCulture negative (n = 6)2 cases previously positive by TC*\nCutibacterium spp.\n4 cases previously positive by TC*Eikenella corrodens, Granulicatella adiacens, Actinomyces turicensis, Streptococcus species., Coagulase-negative staphylococci (CoNS)AF2 (n = 40)Culture negative (n = 40)Concordance 40/(100)01Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture\n\nCulture results achieved by homogenization of tissue samples using Disperser vs. Bead milling method\n1Periprosthetic joint infection; 2Aseptic failure; *TC, tissue culture", "In the PJI group, 34 patients yielded positive culture results. We processed a total of 308 tissue samples from these patients, 153 with the disperser and 155 with the bead mill. In the samples processed with the disperser 120 were positive. Processing with the bead mill yielded 121 positive results. On average, we received 4.5 samples per patient and method, of which 3.5 samples were positive (Tables 1 and 2B). However, in six cases only two samples were positive with both methods. In 23 patients (68%) we obtained identical culture results with both methods, in addition there was also a match in size, location and detected pathogens of the tissue samples. 
However, in 11 cases there were differences, but these were only related to the number of positive tissue samples per patient. In one of these cases, the tissue sample that gave a positive result when processed with the disperser was significantly larger than the sample processed with the bead mill. All pathogens were identified with both methods. The differences were distributed as follows: in six cases one sample and in one case two samples were additionally positive when the disperser was used (Table 2 A). On the other hand, in one case one sample and in three cases two samples were additionally positive when the bead mill was used (Table 2 A). Overall, there was no significant difference in the final evaluation, since in all cases at least two tissue samples were positive with both methods.\nIn the 34 positive cases a total of 51 microorganisms were recovered. The frequency of their occurrence is listed in Table 3. We identified 25 monomicrobial and nine polymicrobial infections. According to clinical records, 11 patients had a chronic course, in two of these cases small colony variants (SCV) were detected (Table 3).\n\nTable 2BThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling methodProportion of investigated samples based on weightPJI1 Culture positive (n = 34)> 1.5 g0.5-1.5 g< 0.5 gAll sizesDisperser (no. of positive cultures/total no. of specimens [%])71/87 (82)41/53 (77)8/13 (62)120/153 (78)Bead milling (no. of positive cultures/ total no. of specimens [%])72/90 (80)37/48 (77)12/17 (71)121/155 (78)PJI1 Culture negative (n = 6)Disperser (no. [%])18 (64)7 (25)3 (11)27Bead milling (no. [%])15 (56)7 (26)5 (18)27AF2 (n = 40)Disperser (no. [%])89 (49)66 (37)25 (14)180Bead milling (no. [%])90 (50)64 (36)26 (14)1801Periprosthetic joint infection; 2Aseptic failure\n\nThe weight of each investigated tissue sample of all subjects and in case of PJI its effect on the rate of positive culture results using Disperser vs. Bead milling method\nDisperser (no. of positive cultures/\ntotal no. of specimens [%])\n1Periprosthetic joint infection; 2Aseptic failure\n\nTable 3Microbiological findings by positive tissue cultures from 34 patients meeting the definition of PJIMicroorganismsNo. (%)(n = 51)Staphylococcus species19 (37)\nStaphylococcus aureus\n8 (16)\nStaphylococcus epidermidis*\n6 (12)\nStaphylococcus lugdunensis\n3 (6)\nStaphylococcus capitis\n2 (4)Enterococcus species5 (10)\nEnterococcus faecalis\n5 (10)Streptococcus species4 (8)\nStreptococcus dysgalactiae\n1 (2)\nStreptococcus agalactiae\n1 (2)\nStreptococcus constellatus\n1 (2)\nStreptococcus oralis\n1 (2)Other Gram-positive cocci2 (4)Parvimonas micra (anaerobic)1 (2)Peptoniphilus harei (anaerobic)1 (2)Gram-positive bacilli7 (14)Cutibacterium acnes (anaerobic)4 (8)Cutibacterium avidum (anaerobic)2 (4)\nCorynebacterium amycolatum\n1 (2)Gram-negative bacilli14 (28)\nEnterobacter cloacae\n4 (8)\nEscherichia coli\n3 (6)\nKlebsiella pneumoniae\n2 (4)\nProteus mirabilis\n2 (4)\nPseudomonas aeruginosa\n2 (4)\nMorganella morganii\n1 (2)*in 2 cases associated with small colony variants (SCV)\n\nMicrobiological findings by positive tissue cultures from 34 patients meeting the definition of PJI\n*in 2 cases associated with small colony variants (SCV)\nNevertheless, both methods yielded negative culture results for six patients in the PJI group. 
But all patients had been treated with antimicrobials because of microorganisms that had been detected previously from tissue specimens or synovial fluid (Table 2 A). In this group 54 tissue samples were processed, 27 with each method (Table 2B).\nSpecific analysis of the percentage of positive tissue samples based on weight revealed no significant difference between the methods. However, regardless of the method, the accuracy of the results decreased correspondingly to decreasing weight of the samples. Tissue samples with > 1.5 g had a positive rate of 82.0%, 71/87 for disperser versus 80.0%, 72/90 for bead milling, P = 0.79. For samples with a weight of 0.5-1.5 g we noted 77%, 41/53 for disperser versus 77%, 37/48 for bead milling, P = 0.97. And for samples with a weight of < 0.5 g we detected 62%, 8/13 for disperser versus 71%, 12/17 for bead milling, P = 0.60 (Table 2B).\nIn the AF group both techniques presented negative culture results in all 40 cases. Here, we processed 180 tissue samples with each method. For information on the weight distribution of the investigated tissue samples, see Table 2B.\nA further aspect of our study was to record the origin of each specimen and its detection rate of microorganisms in the group of culture-positive PJI cases. Regardless of the joint, the highest rate of single samples was taken from the neo-synovium, followed by the acetabulum, if the hip was affected. In the number of samples taken from the proximal and distal periprosthetic membrane of the stem, there were joint-dependent differences.\nDue to the low number of cases, we did not differentiate between surgical procedures. We cannot rule out that this had an influence on the different results.\nFor an overview of the distribution of positive samples, see Fig. 1.\n\nFig. 1Overview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)\n\nOverview of the local distribution of culture-positive tissue samples in patients with periprosthetic joint infection. Left: Hip (n = 17); Right: Knee (n = 17)", "Correct identification of the causative agent from microbiological culture is mandatory for a targeted antimicrobial therapy of PJI. However, there is currently no consensus on several preanalytical and analytical aspects such as the most suitable tissue sample to be cultured, the optimal number of specimens investigated, the most effective method of tissue processing, and finally, the appropriate sensitive culture media that also enable detection of fastidious pathogens. We have already published research results on the latter [2, 3]. In this study we aimed to address the open questions in our patient collective.\nFirstly, we recorded the origin of each tissue sample and its contribution to the detection of an infection (Fig. 1). In 25 of 34 culture-positive cases, samples from the neo-synovium were the best positive single location. The value especially of synovial biopsy in diagnostics of PJI both of the hip and knee was also reported by Fink et al. [7, 8].\nThe specifications in our study for both locations and volume are based on an evaluation of several thousand tissue samples that we have analysed over the past few years (not published).\nOne result of this investigation was that bone biopsies as a whole proved to be less suitable. This experience is confirmed by Larsen et al. 
who investigated, among other questions, the contribution of specimen types in the diagnosis of PJI [9].\nSecondly, in our study, four tissue samples were sufficient to confirm the diagnosis of an infection. These results are in line with reports by Bemer et al. and Gandhi et al., who both demonstrated that four samples are optimal, if at least three different media including blood culture bottles (BCB) are used [10, 11]. We agree with the importance of culture media, however it is not the number but the composition of nutrients that matters. We would like to refer to our publications on this subject, especially with regard to the limited use of BCB [12].\nThirdly, although the patient samples were sent consecutively regardless of the course of symptoms, we were able to identify the pathogens of a series of chronic infections due to which the patients had had to live with prosthesis-associated pain for an average of 11 (5–25) months before surgery was carried out. Our processing thus enabled a targeted effective therapy for these cases. This can be regarded as an indication of the validity of both procedures, as smaller amounts of bacteria are generally expected with these types of infection.\nFurthermore, in the AF group, neither procedure resulted in any contamination, and they therefore delivered an optimal specific result.\nThe literature on processing methods is rare. In 2017, Suren et al. published a prospective analysis of a semi-automated tissue homogenization method using the ULTRA-TURRAX drive workstation with tubes containing ten steel beads [13]. The authors investigated 38 total hip and knee arthroplasties, but their results were inconsistent, and no information was given about their routine procedures. Roux et al. published a retrospective analysis in 2011 [14] which included 92 patients undergoing revision surgery. The tissue samples were collected between 2003 and 2006 and examined using vials with added glass beads. The authors found a substantial number of microorganisms associated with PJI, but here too, they did not compare these data with their routine workflow. Redanz et al. first used an experimental model with a specimen of artificially inoculated pork to investigate the effectivity of Precellys Evolution bead mill homogenizer. The authors then processed clinical samples using 2 ml tubes and analysed 22 tissue samples from periprosthetic membranes and synovia recovered from seven patients. Only five samples were positive. Despite this limited amount of data, the authors gave a clear recommendation for the procedure [15]. It is worthy of note that according to the manufacturer, a loading capacity of up to 0.2 g weight is recommended for 2 ml tubes. In our experience, this volume is too small for a reliable diagnosis, especially if low-grade infections are suspected. In our study, about 90% of the samples had at least 5–10 times this weight. Only recently, Fang et al. demonstrated the superiority of tissue homogenization for diagnosis of PJI, but for comparison they used methods that have already proven to be non-competitive, such as manual techniques or pre-treatment of the tissue with ultrasound [16]. Finally, Yusuf et al. evaluated the diagnostic value of pre-processing tissue specimens with a homogenizer compared with their routine manual procedures. Surprisingly, the authors found no significant difference between the methods. Speculation remains as to whether the selected program was not suitable for processing these special tissue samples. 
The authors also did not provide any information as to the extent to which they carried out preliminary tests and why they selected the program mentioned in the material Sect. [17].\nEven if we have demonstrated the accuracy of two homogenization techniques, our study has some limitations. Firstly, we accepted the bias that surgeons could assign the samples themselves, since transferring to another vial in the laboratory, even under laminar airflow, poses a risk of contamination. This free choice could be the reason why in seven cases processing under routine conditions revealed additional positive samples compared to processing by the bead milling method. But even with this method additional positive samples were found in four cases. However, these differences had no effect on the overall assessment. Secondly, there is no gold standard for processing procedures, making investigations very laborious, because all individual stages have to be carefully validated. In our study, we first had to establish which bead material (steel, glass or ceramics) was most suitable for our purposes. Then we had to identify both the appropriate mix of bead sizes for best homogenization and the right rotation speed without affecting the bacteria. Based on preliminary tests we decided on ceramic beads. The best ratio of homogenizing and recovery of bacteria was obtained at 7,200 rpm using a bead mix of 2.8/5.0 mm. However, at ≥ 8,000 rpm the temperature within the sample rose to 60° C and impaired bacterial growth.\nAnother aspect that has not yet been systematically investigated is the recording of the volume of the examined tissue samples [4]. Our monitoring showed a dependency between the volume and the detection rate of pathogens, but no difference in the methods used. However, if the weight was < 0.5 g, the bead milling method tended to achieve better results, although the proportion of positive tissue samples was the lowest overall (Table 2B). Even if the results were not significant, using this method might have a positive effect on preoperative arthroscopic diagnosis, especially of low-grade infections, as the volume of biopsies is often limited.\nIndependently of this finding, we are working on semi-quantitative PCR analyses to enable better integration of the informative power of molecular examination procedures into diagnostics in cases of unexpected culture-negative results (not finished).", "With this study we have demonstrated that two different semi-automated systems enable reliable processing of tissue samples for diagnosis of PJI. These techniques should replace the still widely used, less sensitive, manual methods which are more susceptible to contamination.\nAccording to the literature, the performance and usability of the devices available in the market differ considerably. Therefore, comparative studies are urgently required. This study has also shown that the rate of positive tissue samples is influenced by the volume and exact origin of the sample. Therefore, these parameters should always be recorded and communicated in the laboratory report as they affect clinical relevance. There can be no doubt that standardized procedures are required to make the microbiological results predictable and comparable and give surgeons the highest level of certainty for their decision-making." ]
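The case-definition scoring described under "Study population/definition of infection" above can be made concrete in a few lines. The following is a minimal sketch assuming only the thresholds and point values quoted in that subsection; the function and field names are illustrative rather than taken from the paper, and the major criteria (two concordant positive cultures or a sinus tract) are assumed to have been checked separately, as in the text.

```python
# Illustrative aggregation of the 2018-style PJI score quoted above.
# Thresholds and point values are those stated in the text; names are invented.

def pji_score(pre, intra=None):
    """Return (score, verdict) from preoperative and optional intraoperative findings."""
    score = 0
    # Preoperative serum markers
    if pre.get("serum_crp_mg_dl", 0) > 1:          score += 2
    if pre.get("d_dimer_ng_ml", 0) > 860:          score += 2
    if pre.get("esr_mm_h", 0) > 30:                score += 1
    # Preoperative synovial fluid markers
    if pre.get("synovial_wbc_per_ul", 0) > 3000:   score += 3
    if pre.get("alpha_defensin_ratio", 0) > 1:     score += 3
    if pre.get("leucocyte_esterase_2plus", False): score += 3
    if pre.get("pmn_percent", 0) > 80:             score += 2
    if pre.get("synovial_crp_mg_l", 0) > 6.9:      score += 1

    if score >= 6:
        return score, "infected"          # preoperative score alone is decisive

    intra = intra or {}                   # otherwise add intraoperative findings
    if intra.get("positive_histology", False):      score += 3
    if intra.get("purulence", False):               score += 3
    if intra.get("single_positive_culture", False): score += 2

    if score >= 6:
        return score, "infected"
    if score >= 4:
        return score, "inconclusive"
    return score, "not infected"

# Example: elevated serum CRP and synovial cell count, histology positive at surgery
print(pji_score({"serum_crp_mg_dl": 2.4, "synovial_wbc_per_ul": 5200},
                {"positive_histology": True}))      # -> (8, 'infected')
```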
[ "introduction", "materials|methods", null, null, null, null, null, "results", null, null, "discussion", "conclusion" ]
[ "Periprosthetic joint infection", "Microbiological diagnostics", "Tissue homogenization techniques" ]
The economic cost of bovine trypanosomosis in pastoral and agro-pastoral communities surrounding Murchison Falls National Park, Buliisa district, Uganda.
36253776
Endemic animal diseases such as tsetse-transmitted trypanosomosis cause continuous expenditure of financial resources by livestock farmers and loss of livestock productivity. Estimating the cost of controlling animal trypanosomosis can provide evidence for priority setting and for targeting cost-effective control strategies.
BACKGROUND
A cross-sectional survey to estimate the economic cost of bovine trypanosomosis was conducted in cattle-keeping communities living around Murchison Falls National Park, in Buliisa district, Uganda. Data were collected on herd structure, the cost of treatment and control, morbidity and mortality rates due to trypanosomosis, and salvage-sale losses in cattle herds over the previous year.
METHODOLOGY
In this study, 55.4% (n = 87) of the households reported that their cattle had been affected by trypanosomosis during the previous year. The economic cost of trypanosomosis was high (USD 653 per household) in cattle-keeping communities in Buliisa district, of which 83% and 9% were due to mortality and milk loss, respectively. The high mortality loss was due to low investment in treatment. The study showed that prophylactic treatment of the whole herd three times a year with Samorin® (isometamidium chloride), at a cost of USD 110, could drastically reduce cattle mortality loss due to trypanosomosis, with a return on investment of USD 540 annually per herd. This could be coupled with strategic restricted insecticide spraying of cattle with deltamethrin products.
RESULTS
The results show a high economic cost of trypanosomosis in cattle-keeping communities in Buliisa district, with cattle mortality contributing the largest proportion of the economic cost. The high mortality loss was due to low investment in treatment of sick cattle.
CONCLUSION
[ "Animals", "Cattle", "Cross-Sectional Studies", "Insecticides", "Parks, Recreational", "Trypanosomiasis", "Trypanosomiasis, Bovine", "Tsetse Flies", "Uganda" ]
9578198
null
null
null
null
null
null
Conclusion
The results show a high economic cost of trypanosomosis (USD 653) in cattle-keeping communities in Buliisa district, with the death of cattle contributing the largest proportion of the economic cost (83%). Prophylactic treatment of cattle using Samorin®, costing USD 110 annually, could significantly reduce cattle mortality due to trypanosomosis, with a net return on investment of USD 465 annually per herd.
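The cost-benefit claim in this conclusion can be checked by recombining the quoted aggregates. The sketch below assumes, as the text does, that herd-wide prophylaxis averts the mortality loss; because the published per-household figures are rounded, the result is close to, but not identical with, the reported USD 540 gross and USD 465 net return.

```python
# Back-of-envelope recombination of the per-household figures quoted above.
total_cost_usd   = 653    # annual economic cost of trypanosomosis per household
mortality_share  = 0.83   # share of that cost attributed to cattle deaths
prophylaxis_usd  = 110    # whole-herd Samorin treatment three times a year

mortality_loss = total_cost_usd * mortality_share    # ~542 USD lost to deaths
gross_return   = mortality_loss                      # loss avoided if deaths are prevented
net_return     = gross_return - prophylaxis_usd      # ~432 USD

print(f"mortality loss ~= USD {mortality_loss:.0f}, net return ~= USD {net_return:.0f}")
```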
[ "Background of study", "Results", "Recommendation" ]
[ "Animal trypanosomosis is one of the major limitations of cattle production causing a huge threat to household food security and livelihoods in sub-Saharan Africa. The disease impedes economic development and causes a huge toll on human health [1, 2]. The disease is majorly controlled using trypanocidal drugs or through control measures targeting the tsetse fly. In addition, the disease can be controlled by reducing the birthrate of disease vector through sterile insect technique and increasing the death rate of the disease vector through insecticide-treated cattle and insecticide impregnated traps and targets [3].\nThe effect of Animal African trypanosomosis (AAT) can be reduced through the use of curative and prophylactic trypanocides and rearing of trypanotolerant cattle [4]. Nevertheless, there are cases of increasing resistance to trypanocides and farmers are reluctant to rear trypanotolerant cattle [5].\nThere are several promising initiatives on vaccine candidates to control animal trypanosomes but currently, no vaccines are yet available for farmer use [6, 7].\nThe most suitable methods for controlling AAT and the magnitude to which they could be implemented depend on several factors, including social, economic, political, and environmental contexts. In addition, knowledge of the epidemiological cycle of AAT and the tsetse fly population and the available resources play a key role in control programs [5].\nAlthough there have been several campaigns supported by international organizations to control AAT, decisions on allocation of resources have always been a challenge due to the large geographical range of the disease, the variation of the ecological and livestock systems and diversity of disease, and presence of different control methods [6, 7].\nThe control of livestock diseases including AAT is a private good where farmers have to pay for the service. For farmers to continuously invest in controlling Animal trypanosomosis the service must be affordable and effective [8, 9].\nAt the moment economic analysis of animal health has not been thoroughly studied [13–16]. Several reasons are contributing to few economic analysis studies on animal health and these include: (i) the complex impact of animal diseases - the direct effects of the diseases are easy to quantify while the indirect effects are difficult to approach; (ii) the complexity of livestock systems compared to crop systems due to inter alia to longer cycles and (iii) livestock systems are an integral part component of mixed farm systems [11, 12].\nThe control strategies targeting tsetse flies that have been deployed in Uganda include ground and aerial spraying of the breeding sites of tsetse, insecticide-treated cattle, and insecticide-impregnated traps and targets [10]. The use of these control measures has led to environmental toxicity and the high costs involved [16]. In Uganda, there are limited studies [13–17] where animal disease control decisions are based on economic cost. As such evaluation of the economic cost of tsetse and trypanosomiasis is necessary for deciding on the best cost-effective intervention strategy [18].\nIt is against this background that this study was designed to determine the economic cost of bovine trypanosomosis in Buliisa district Uganda.", "The average farm size was 29.8 ± 7.2 acres. 
On average the household had 32 ± 3.1 cattle, 10 ± 1.2 goats, 0.7 ± 0.17 pigs, 10 ± 1.5 sheep, and 14.5 ± 1.1 chicken.\nThe percentage of age-specific herd structures were shown in Table 1.\n\nTable 1Percentage of cattle herd structureCattle categoryPercentage (%)Lactating cattle26.3Dry cattle20.5Heifers15.4Steers4.4Weaners9.2Female calves11.3Male calves9.7Bulls3.2\n\nPercentage of cattle herd structure\nIn this study, 55.4% (n = 87) reported their cattle had been infected by trypanosomosis during the previous year. Annual expenditure on treatment using Samorin® (Isometamidium Chloride was Ug Shs. 12,147 (USD 3.47) per household. In addition, 74% of the households treated their cattle themselves without the supervision of veterinarians. The average cost of a sachet of Isometamidium chloride (Samorin ®) treating 8–10 cattle was at Ug. Shs. 30,000 or USD 8.5. Isometamidium Chloride was administered at an interval of 2–3 months a year. The mean prices of cattle per age-specific category were shown in Table 2. The age-specific morbidity and mortality rate were as shown in Table 3.\n\nTable 2Mean (Uganda shillings) per cattle age categoryCattle age categoryMean PriceLactating cattle957,727 ± 59,647Dry cattle901,075 ± 35,090Heifers707,647 ± 16,996Weaners503,158 ± 22,936Steers615,223 ± 66,561Male calves346,571 ± 19,132Female calves416,641 ± 33,046Bull1,300,946 ± 59,831\n\nMean (Uganda shillings) per cattle age category\n\nTable 3Percentage Mortality and morbidity rates of cattle age categories due to trypanosomosisAge categoryMorbidity RateMortality RateLactating cows20.08.3Dry cows90.75.5Heifers15.86.1Weaners28.28.6Steers36.717.7Male calves12.17.8Female calves12.18.6Bulls20.87.1Overall33.47.8\n\nPercentage Mortality and morbidity rates of cattle age categories due to trypanosomosis\nExchange rate 1 USD = 3500 Ug Shs. at the time the study was conductedMilk loss was computed as (Lactation off take (liters per year) 280 * (Number of lactating cattle that died in previous year) 125* (Average price per liter in UG shs.)1000 .Sales loss was computed as the difference between the normal sale value and the salvage value. The percentage price reduction was calculated as a ratio total salvage value to the total normal sale value multiplied by 100.\n74% of the households treated their cattle themselves without the supervision of veterinarians. The average cost of a sachet of Isometamidium Chloride (Samorin ®) was at Ug. Shs. 30,000. One sachet was used for treating 8–10 cattle. Generally no prophylactic of cattle was being done. To prophylactically protect cattle against bovine trypanosomosis cattle need to be treated 2–3 times a year with Samorin®. This would cost USD 110 per herd.\nCattle were not sprayed with insecticides against tsetse flies. Farmers who reported practicing bush clearing and bush burning were 10.2% and 3.2% respectively. The mean bush cleared area was 0.21 acres. The results further showed that 5% of households used tsetse traps as a control method for the tsetse flies.", "Prophylaxis treatment using Samorin® should be done three times a year. This should be coupled with community participation in strategic restricted spraying of cattle with deltamethrin products to control both tsetse flies and ticks." ]
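The per-household economic cost reported in this record is an additive aggregate of treatment, mortality, milk, salvage-sale, insecticide and bush-clearing components, with milk loss estimated as lactation offtake times lactating-cow deaths times price per litre, and sales loss as normal sale value minus salvage value. A minimal Python sketch of that aggregation follows; prices and fixed component costs are taken from the tables in this record where available, while the death counts and sale values in the example are illustrative placeholders, not the study's raw data.

# Minimal sketch of the herd-level economic-cost aggregation described above.
# Death counts and sale values below are illustrative placeholders.

def milk_loss(lactation_offtake_l, lactating_deaths, price_per_litre_ugx):
    """Milk loss = annual offtake per cow x lactating cows lost x price per litre."""
    return lactation_offtake_l * lactating_deaths * price_per_litre_ugx

def mortality_loss(deaths_by_category, price_by_category):
    """Sum of (deaths x prevailing market price) over cattle age categories."""
    return sum(deaths_by_category[c] * price_by_category[c] for c in deaths_by_category)

def sales_loss(normal_sale_value, salvage_value):
    """Loss on salvage sales = normal sale value - salvage value actually received."""
    return normal_sale_value - salvage_value

def total_economic_cost(treatment, mortality, milk, salvage, insecticide, bush_clearing):
    """Annual household cost of bovine trypanosomosis as the sum of its components."""
    return treatment + mortality + milk + salvage + insecticide + bush_clearing

# Example with placeholder death counts and sale values (all amounts in UGX):
deaths = {"lactating": 1, "steer": 1}                       # placeholder deaths per category
prices = {"lactating": 957_727, "steer": 615_223}           # mean prices from Table 2
cost = total_economic_cost(
    treatment=12_147,                                       # Table 4
    mortality=mortality_loss(deaths, prices),
    milk=milk_loss(280, 1, 1_000),                          # offtake and price from the record
    salvage=sales_loss(900_000, 400_000),                   # placeholder sale values
    insecticide=80_210,                                     # Table 4
    bush_clearing=6_739,                                     # Table 4
)
print(f"Illustrative annual economic cost: UGX {cost:,.0f}")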
[ null, null, null ]
[ "Background of study", "Materials and methods", "Results", "Discussion", "Conclusion", "Recommendation" ]
[ "Animal trypanosomosis is one of the major limitations of cattle production causing a huge threat to household food security and livelihoods in sub-Saharan Africa. The disease impedes economic development and causes a huge toll on human health [1, 2]. The disease is majorly controlled using trypanocidal drugs or through control measures targeting the tsetse fly. In addition, the disease can be controlled by reducing the birthrate of disease vector through sterile insect technique and increasing the death rate of the disease vector through insecticide-treated cattle and insecticide impregnated traps and targets [3].\nThe effect of Animal African trypanosomosis (AAT) can be reduced through the use of curative and prophylactic trypanocides and rearing of trypanotolerant cattle [4]. Nevertheless, there are cases of increasing resistance to trypanocides and farmers are reluctant to rear trypanotolerant cattle [5].\nThere are several promising initiatives on vaccine candidates to control animal trypanosomes but currently, no vaccines are yet available for farmer use [6, 7].\nThe most suitable methods for controlling AAT and the magnitude to which they could be implemented depend on several factors, including social, economic, political, and environmental contexts. In addition, knowledge of the epidemiological cycle of AAT and the tsetse fly population and the available resources play a key role in control programs [5].\nAlthough there have been several campaigns supported by international organizations to control AAT, decisions on allocation of resources have always been a challenge due to the large geographical range of the disease, the variation of the ecological and livestock systems and diversity of disease, and presence of different control methods [6, 7].\nThe control of livestock diseases including AAT is a private good where farmers have to pay for the service. For farmers to continuously invest in controlling Animal trypanosomosis the service must be affordable and effective [8, 9].\nAt the moment economic analysis of animal health has not been thoroughly studied [13–16]. Several reasons are contributing to few economic analysis studies on animal health and these include: (i) the complex impact of animal diseases - the direct effects of the diseases are easy to quantify while the indirect effects are difficult to approach; (ii) the complexity of livestock systems compared to crop systems due to inter alia to longer cycles and (iii) livestock systems are an integral part component of mixed farm systems [11, 12].\nThe control strategies targeting tsetse flies that have been deployed in Uganda include ground and aerial spraying of the breeding sites of tsetse, insecticide-treated cattle, and insecticide-impregnated traps and targets [10]. The use of these control measures has led to environmental toxicity and the high costs involved [16]. In Uganda, there are limited studies [13–17] where animal disease control decisions are based on economic cost. As such evaluation of the economic cost of tsetse and trypanosomiasis is necessary for deciding on the best cost-effective intervention strategy [18].\nIt is against this background that this study was designed to determine the economic cost of bovine trypanosomosis in Buliisa district Uganda.", "The study was conducted in Buliisa district located at (02º 11ʹ N 31º 24ʹ E) neighbouring Murchison Falls National Park. Details of the location were shown in Fig. 1. 
The choice of Buliisa district was based on its proximity to a national park and a higher prevalence of bovine trypanosomosis of 29.6%. The district is located in the cattle corridor belt bordering Nebbi district in North West, Nwoya district in North East, Masindi district in the East and Hoima district in the south, and Lake Albert in the West. Bugungu wildlife reserve which is part of Murchison Falls National park is located in Buliisa district. The district is rural-based with pastoralism, agro-pastoralism, fishing, and subsistence agriculture as the major economic activities. Buliisa experiences a bimodal type of climate with 2 rainy seasons (March to May and August to November). The vegetation is classified into forest, savannah, grassland, and swamp. The forest vegetation includes Budongo forest while savannah vegetation comprises perennial grasses, scattered trees, and shrubs. Murchison Falls National Park and Bugungu Game reserve contribute to grassland and woodland cover. Buliisa district is part of the Albertine graben where oil and gas have been discovered and explorations currently going on. The discovery of oil and gas has contributed to increased human activity and several infrastructural developments and employment opportunities for both local and foreign workers. Buliisa district has 6 sub counties and 1 town council. These include Biiso, Buliisa, Kihungya, Butiaba, Kigwera, Ngwedo and Buliisa town council. The sub counties are further sub divided into parishes and several villages.\n\nFig. 1Map of Uganda and location of Buliisa district Source: Author\n\nMap of Uganda and location of Buliisa district Source: Author\nA cross-sectional survey was conducted from January to April 2020 using a pre-tested structured questionnaire. Data was collected from 157 participants that were randomly selected. The selection criteria of study participants were being a cattle farmer and voluntarily consenting to participate in the study. The participants were drawn from the list of cattle keepers provided by local leaders and veterinary extension staff in each sub county. Through the Coordinating Office for the Control of Trypanosomosis in Uganda (COCTU) focal person, Bullisa District Production Office (DPO), and District Veterinary Officer (DVO) were approached and explained the objectives of the study. The DVO contacted the sub-county Animal Husbandry Officers (AHO) who in turn were explained the study objectives and trained as research assistants. Sub counties of Biiso, Buliisa, Butiaba, Kigwera and Ngwedo were visited and study sites selected.\nThe questionnaire was pre-tested and additional information generated and some questions were modified. 
The questionnaire was translated from English into Runyoro by Makerere University Center for languages and communication services (CLCS).\nThe sample size for the study was computed using the following formula:\n$$n = \\frac{N}{1 + N\\varepsilon^{2}}$$\nwhere n = minimum returned sample size, N = population size and ε = adjusted margin of error, given by\n$$\\varepsilon = \\frac{\\rho e}{t}$$\nwhere e = degree of accuracy expressed as a proportion (margin of error of 0.03 for continuous data), ρ = number of standard deviations that would include all possible values in a range for a 5-point scale, which is equal to 4, and t = t value for the selected alpha level = 1.96 at the 95% confidence interval [19].\nThe questionnaire collected information on participants’ socio-demographic characteristics, crop and livestock enterprises, cattle herd structure, prices for each cattle age category, and the number of cattle in each age category affected by trypanosomosis in the last year. Furthermore, additional information was collected on the cost of curative and prophylactic treatment, including the drugs used, and the cost of insecticide used in controlling tsetse flies. The number of abortions in the cattle herd, mortality of animals due to trypanosomosis in the last year, and salvage sales of cattle in the last year were also collected. In addition, data was collected on how communities controlled tsetse flies.\nEconomic data was collected and collated from the questionnaires. Data was then coded and entered into Microsoft Excel® 2020 spreadsheet software, which was used to generate descriptive analyses presented mainly as means and percentages. Herd cattle age-specific mortality rates due to trypanosomosis during the last year were determined. Cattle that presented with common signs of trypanosomosis before their death were included and taken as trypanosomosis-induced mortality. Herd cattle age-specific morbidity rates were calculated from cattle that presented with common signs and symptoms of trypanosomosis during the last year.\nMortality loss was calculated by multiplying the number of cattle in each age category that died from trypanosomosis by the prevailing market price for that age category. Salvage value was calculated from the number of cattle that were infected with trypanosomosis and sold before they died, at a salvage price, during the last year.\nSales loss was computed as the difference between the normal sale value and the salvage value. The percentage price reduction was calculated as the ratio of total salvage value to total normal sale value multiplied by 100.\nThe economic cost due to bovine trypanosomosis was calculated as the sum of costs due to: (i) treatment and chemoprophylaxis of the disease in the herd; (ii) loss due to mortality; (iii) estimated loss of milk production from literature [20] due to lack of records. 
The estimation was based on the following assumptions (Lactation off take (liters per year) 280 * (Number of lactating cattle that died in previous year) 125* (Average price per liter in UG. shs.)1000 ; (iv) live animal salvage sale loss; (v) insecticide spraying costs; (vi) tsetse fly trap costs; (vi) bush clearing costs.", "The average farm size was 29.8 ± 7.2 acres. On average the household had 32 ± 3.1 cattle, 10 ± 1.2 goats, 0.7 ± 0.17 pigs, 10 ± 1.5 sheep, and 14.5 ± 1.1 chicken.\nThe percentage of age-specific herd structures were shown in Table 1.\n\nTable 1Percentage of cattle herd structureCattle categoryPercentage (%)Lactating cattle26.3Dry cattle20.5Heifers15.4Steers4.4Weaners9.2Female calves11.3Male calves9.7Bulls3.2\n\nPercentage of cattle herd structure\nIn this study, 55.4% (n = 87) reported their cattle had been infected by trypanosomosis during the previous year. Annual expenditure on treatment using Samorin® (Isometamidium Chloride was Ug Shs. 12,147 (USD 3.47) per household. In addition, 74% of the households treated their cattle themselves without the supervision of veterinarians. The average cost of a sachet of Isometamidium chloride (Samorin ®) treating 8–10 cattle was at Ug. Shs. 30,000 or USD 8.5. Isometamidium Chloride was administered at an interval of 2–3 months a year. The mean prices of cattle per age-specific category were shown in Table 2. The age-specific morbidity and mortality rate were as shown in Table 3.\n\nTable 2Mean (Uganda shillings) per cattle age categoryCattle age categoryMean PriceLactating cattle957,727 ± 59,647Dry cattle901,075 ± 35,090Heifers707,647 ± 16,996Weaners503,158 ± 22,936Steers615,223 ± 66,561Male calves346,571 ± 19,132Female calves416,641 ± 33,046Bull1,300,946 ± 59,831\n\nMean (Uganda shillings) per cattle age category\n\nTable 3Percentage Mortality and morbidity rates of cattle age categories due to trypanosomosisAge categoryMorbidity RateMortality RateLactating cows20.08.3Dry cows90.75.5Heifers15.86.1Weaners28.28.6Steers36.717.7Male calves12.17.8Female calves12.18.6Bulls20.87.1Overall33.47.8\n\nPercentage Mortality and morbidity rates of cattle age categories due to trypanosomosis\nExchange rate 1 USD = 3500 Ug Shs. at the time the study was conductedMilk loss was computed as (Lactation off take (liters per year) 280 * (Number of lactating cattle that died in previous year) 125* (Average price per liter in UG shs.)1000 .Sales loss was computed as the difference between the normal sale value and the salvage value. The percentage price reduction was calculated as a ratio total salvage value to the total normal sale value multiplied by 100.\n74% of the households treated their cattle themselves without the supervision of veterinarians. The average cost of a sachet of Isometamidium Chloride (Samorin ®) was at Ug. Shs. 30,000. One sachet was used for treating 8–10 cattle. Generally no prophylactic of cattle was being done. To prophylactically protect cattle against bovine trypanosomosis cattle need to be treated 2–3 times a year with Samorin®. This would cost USD 110 per herd.\nCattle were not sprayed with insecticides against tsetse flies. Farmers who reported practicing bush clearing and bush burning were 10.2% and 3.2% respectively. The mean bush cleared area was 0.21 acres. The results further showed that 5% of households used tsetse traps as a control method for the tsetse flies.", "The results from the study show that cattle was a major livestock species reared followed by indigenous chicken, goats, and sheep. 
This finding broadly supports the work of other studies that highlighted the role of cattle and other livestock species in supporting pastoralist livelihoods [20–22]. Cattle in pastoral and agro-pastoral communities play a multifunctional role in providing both market and non-market benefits. The latter include financing and insurance functions which define the competitiveness of cattle rearing in pastoral and agro-pastoral communities [23]. Cattle and other types of livestock in pastoralist and agro-pastoral households support an important role in coping with shocks, accumulating wealth, and acting as a bank in the absence of commercial financial institutions and formal markets. [24] .\nIn terms of cattle herd structure, adult cattle were the majority in household herds. Heifers, female calves, and weaners followed in that order (Table 1). The results show that more female cattle were kept compared to male calves and bulls. The findings might indicate that pastoralists keep more female cattle because of their ability to produce milk and for herd growth. This finding is consistent with another study [25] where female cattle of reproductive age constituted more than 50% of all livestock species. This is contrary in areas where male cattle are used for traction.\nThe overall prevalence and mortality rate of bovine trypanosomosis was 33.4% and 7.8% respectively (Table 3). The findings are not based on blood screening rather on cattle that presented with common signs of bovine trypanosomosis in the last one (prevalence) and before their death (trypanosomosis induced mortality). The findings are suggestive of prevalence and mortality farmers reported based on the clinical signs the animals presented since livestock disease diagnostic services are not available. Due to absence of laboratory diagnostic services in the district, the overall prevalence reported can also be attributed to other diseases presenting similar clinical signs to those of bovine trypanosomosis. These results are higher than those found in Metekel Zone North West Ethiopia which reported a prevalence of 12.1% and a mortality rate of 4.4% [24]. These differences in prevalence and mortality rates could be caused by variations in vegetation types and the seasons when the studies were conducted. The type of vegetation and season are known to determine the tsetse population and consequently the prevalence and mortality rates [25–27]. In addition, another plausible reason for the difference could be attributed to the breed of cattle kept. In areas where crossbred cattle are kept compared to indigenous breeds, it’s likely to find higher prevalence and mortality rates. From this study, the highest mortality rate was reported in the steer category of cattle while the highest morbidity rates were observed among dry cattle. A possible explanation for this might be that larger animals were more attractive to tsetse flies compared to smaller animals. Large cattle produce more odor plumes that attract tsetse than calves. This was further supported in previous studies [2] and [26].\nThe control measures of trypanosomosis mainly involved use of trypanocidal drugs with isometamidium chloride (Samorin®) as the main drug of choice. Although the drug is more expensive compared to other trypanocidal drugs on the market, farmers revealed that it has both curative and protective effects on animals. 
The farmers’ revelations were in support with a previous study [27] where it was reported that Isometamidium chloride mode of action was both therapeutic and prophylactic. The results from our analysis showed that 1 sachet of Samorin® costs Ug shs.30, 000 and farmers usually use it to treat 10 animals. When used in prophylaxis treatment at a three months interval (30,000/10)* 4 times a year, it would cost per herd of 32 animals (Average herd size) Ug shs. 384,000 or USD 110 annually per herd. This would drastically reduce the high mortality rate loss caused by trypanosomosis (Table 4) thereby increasing the profit margins of cattle keeping in the area. This was in agreement with studies.\n\nTable 4Mean annual economic cost in Ug. Shs. of Bovine trypanosomosis per householdEconomic costUg. shs% contribution ECTreatment12,1470.5Mortality loss2,057,07383.0Insecticide cost80,2103.2Milk loss222,9309.0Salvage sale loss46,1971.9Bush clearing6,7390.3\nTotal UGX\n\n2,425,296\n\nUSD\n\n693\n\n\nMean annual economic cost in Ug. Shs. of Bovine trypanosomosis per household\ndone elsewhere [28, 29] where they found higher returns on investment was got when farmers used trypanocide prophylaxis to protect their cattle against trypanosomosis.\nIn addition farmers in this area did not spray their cattle against tsetse flies using insecticides. In other areas infested with tsetse flies [11, 16] farmers have used dual-purpose insecticides like deltamethrin to control both ticks and tsetse with success. Spraying the entire animal’s body uses large amounts of the insecticide wash which is costly and leads to environmental contamination. The Restricted Insecticide Application protocol (RAP) is now being advocated for [30]. RAP involves application of insecticide to tsetse predilection sites of the animal (bellies, fore, and hind legs) and in the ears. These are also the predilection sites of Rhipicephalus Appendiculatus. The anticipated benefits of RAP compared to full body spraying include reduced over-dependence on trypanicidal drugs, lowered risk of drug resistance, and cost of tsetse and tick-borne disease control [16, 30, 31].\nFrom this study (Table 5) it was shown that dry cattle and steers were salvage sold at a price less than market value. Salvage sales were done by farmers to avoid complete loss as a result of death. Animals that were salvaged sold are ones that failed to respond to treatment and continue deteriorating in their health till the farmer decides to dispose of them before dying. As a result, farmers made losses depending on the state of the animals and the salvage price offered. It was found that farmers lost 56.1% of their income due to salvage sales. This was far less compared to the percentage loss of 83% for bulls and 88% for cows caused by foot and mouth disease outbreak in Isingiro [32].\n\nTable 5Total (for all households in the study n-157) and mean household mortality and salvage sale lossAge categoryMortality lossSale lossLactating cows119,634,3750Dry cows56,767,7253,834,675Heifers37,505,291577,647Weaners23,145,2680Steers30,761,1502,491,338Male calves14,902,5530Female calves23,331,896349,141Bulls16,912,2460Total (n = 157)322,960,5047,252,801Average household loss2,057,07346,196\n\nTotal (for all households in the study n-157) and mean household mortality and salvage sale loss\nThe mean annual economic cost per household due to trypanosomosis was found to be USD 693 of which 83% and 9% were due to mortality and milk loss respectively (Table 4). 
The mortality loss was equivalent to USD 588 which was higher than USD 244 reported in Metekel zone Ethiopia [33] and USD 200 in Baro Akobo and Gojeb river basins Ethiopia [33] There are several possible explanations for this result. One possible explanation might be that the mortality loss is contributed by other diseases that can present signs similar to those of trypanosomosis. However, in this area, there was a lack of laboratory services where farmers and field veterinarians can diagnose blood samples to confirm the presence of trypanosomes before treatment. This finding is in agreement with results of an earlier study [34] which reported that the use of veterinary diagnostic laboratories in Uganda was poor. Also, there were no veterinary diagnostic services found in the area. The farmers were treating cattle themselves failing to administer the right curative trypanocides at the right dose. There was therefore a need to provide trypanosomosis diagnostic and veterinary services for sick cattle. Also, there are substandard and fake trypanocidal drugs on the market which may have contributed to treatment failure.\nThe drive by most farmers to improve genetically their herds through crossbreeding may also have contributed to the high mortality in crossbred animals compared to local breeds [2, 35].\nWhen farmers invest in a preventive prophylactic treatment using Samorin ® at an interval of every 3 months per year, the annual cost of treatment per household would be USD 110. The return on investment in treatment would be USD 465. This could be saved annually making cattle-keeping enterprise profitable venture in this area. This, therefore, means that a prophylactic treatment regime should be adopted in this area.\nMilk loss of USD 63.4 annually per household due to trypanosomosis is the second largest contribution to the total economic cost. Milk loss was computed by multiplying number of lactating cattle that died in previous one year by the average price per liter and the estimated lactation offtake (liters per year) [36]. The loss in milk was mainly through death of lactating cows, abortions of dry cows, and decreased milk yield in sick cattle. Milk is an important component of the communities’ diet and milk loss undermines the daily household incomes. Milk that was not directly consumed was locally processed into other value added dairy products that could be sold locally. With increasing population in Buliisa district and the oil discovery within the district, the demand for milk is growing hence becoming a major source of household income.\nSurprisingly, the percentage contribution of treatment and bush clearing is less than 1% (Table 4) yet more than 50% of the households reported their animals were infected with trypanosomosis the previous year. The small contribution of treatment cost to the total economic cost of trypanosomosis may be contributing to the high mortality loss observed in cattle due to trypanosomosis. In addition, most farmers keep local breeds of cattle that are thought to be more trypanotolerant and therefore are reluctant to invest in treatment costs compared to farmers with crossbreed animals which are have shown to be trypanosusceptible.\nIn this study, bush clearing and use of traps were not used by most farmers. A possible explanation for the low practice of bush clearing might be that land is communally owned and communities were not motivated to invest in it despite knowing that bushes were breeding habitats for tsetse. 
Bush of different types provides a good breeding environment for different tsetse species. The Glossina palpalis and G fusca tsetse species thrive well in woody vegetation while the G. moristan species survive best in savannah woodland. Furthermore, indiscriminate bush clearing as an approach to controlling the tsetse population can lead to a negative impact on biodiversity loss and the approach is not ecologically and politically acceptable. However, there has been modification developed [37, 38] which include removal of vegetation at ground level without removing high trees (discriminative partial bush clearing) or cutting only some of the trees or shrubs species (partial selective bush clearing) which are effective in reducing the tsetse populations. Traps were not being deployed as a tsetse control measure in the study area. The probable reason why traps are not popular among the farmers might be the lack of their promotion as an important tool to monitor spatial and temporal changes in the tsetse population and non-functional livestock extension, entomology, and community tsetse control intervention programs [39]. There are several limitations to the wider use of traps which could be non-community involvement in their deployment, supervision, and management, high cost, and high rate of theft and vandalism.\nRelatedly bush or vegetation influences the efficiency of use of insecticide-impregnated traps and targets. The effectiveness of traps and targets in controlling tsetse flies can be hampered by vegetation regrowth and encroachment [40] found a significant decrease in tsetse catches when the traps were obscured by vegetation by 80%.", "The results show a high economic cost of trypanosomosis (USD 653) in cattle-keeping communities in Buliisa district with death of cattle contributing the largest proportion to economic cost (83%). Prophylactic treatment of cattle using Samorin® costing USD 110 annually could significantly reduce cattle mortality due to trypanosomosis with a net return on investment of USD 465 annually per herd.", "Prophylaxis treatment using Samorin® should be done three times a year. This should be coupled with community participation in strategic restricted spraying of cattle with deltamethrin products to control both tsetse flies and ticks." ]
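The methods in this record compute the minimum sample size with the finite-population formula n = N / (1 + Nε²), where ε = ρe / t. A small Python sketch of that calculation follows; the population size N used in the example is a placeholder, since the source does not restate N at this point, and the snippet is illustrative rather than a reproduction of the study's calculation.

# Minimal sketch of the sample-size formula quoted in the methods above:
#   n = N / (1 + N * eps^2),  with  eps = (rho * e) / t
# The population size N below is a placeholder, not a figure from the study.

def adjusted_margin_of_error(e=0.03, rho=4, t=1.96):
    """eps = rho*e/t: e = margin of error, rho = SD span for a 5-point scale, t = z at 95% CI."""
    return (rho * e) / t

def minimum_sample_size(N, eps):
    """Finite-population correction giving the minimum returned sample size."""
    return N / (1 + N * eps ** 2)

eps = adjusted_margin_of_error()            # ~0.0612
n = minimum_sample_size(N=1_000, eps=eps)   # N = 1,000 households is a placeholder value
print(f"eps = {eps:.4f}, minimum returned sample size = {n:.0f}")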
[ null, "materials|methods", null, "discussion", "conclusion", null ]
[ "Economic cost", "Mortality loss", "Milk loss", "Bovine trypanosomosis", "Buliisa" ]
Unmet health care needs: factors predicting satisfaction with health care services among community-dwelling Canadians living with neurological conditions.
36253779
Neurological conditions (NCs) can lead to long-term challenges including functional impairments and limitations to activities of daily living. People with neurological conditions often report unmet health care needs and experience barriers to care. This study aimed to (1) explore the factors predicting patient satisfaction with general health care, hospital, and physician services among Canadians with NCs, (2) examine the association between unmet health care needs and satisfaction with health care services among neurological patients in Canada, and (3) contrast patient satisfaction between physician care and hospital care among Canadians with NCs.
BACKGROUND
We conducted a secondary analysis of a subsample of the 2010 Canadian Community Health Survey - Annual Component data (N = 6335) comprising respondents with neurological conditions who received general health care services, hospital services, and physician services within twelve months. Models were fitted with multivariate logistic regression, and odds ratios with 95% confidence intervals were reported using STATA version 14.
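The abstract above states that models were fitted by multivariate logistic regression and reported as odds ratios with 95% confidence intervals in STATA 14. The Python sketch below shows the same kind of fit using statsmodels on synthetic data; the variable names, data and effect sizes are purely illustrative and do not come from the CCHS file or the study's STATA code.

# Minimal sketch: multivariate logistic regression reported as odds ratios with 95% CIs.
# Synthetic data only; the study itself used STATA 14 on CCHS 2010 microdata.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "unmet_need": rng.integers(0, 2, n),          # 1 = self-perceived unmet health care need
    "excellent_quality": rng.integers(0, 2, n),   # 1 = rated quality of care excellent
    "age_65_plus": rng.integers(0, 2, n),
})
# Simulate satisfaction so that unmet need lowers and excellent quality raises the odds.
logit = 1.0 - 1.2 * df.unmet_need + 2.0 * df.excellent_quality + 0.1 * df.age_65_plus
df["satisfied"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("satisfied ~ unmet_need + excellent_quality + age_65_plus", data=df).fit(disp=False)
odds_ratios = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI 2.5%": np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
})
print(odds_ratios.round(2))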
METHODS
Excellent quality care predicts higher odds of patient satisfaction with general health care services (OR 237.6, 95% CI 70.4-801.5), hospital services (OR 166.9, 95% CI 67.9-410.6), and physician services (OR 176.5, 95% CI 63.89-487.3). In contrast, self-perceived unmet health care needs negatively predict patient satisfaction across all health care services: general health care services (OR 0.59, 95% CI 0.37-0.93), hospital services (OR 0.41, 95% CI 0.21-0.77), and physician services (OR 0.29, 95% CI 0.13-0.69). Other negative predictors of patient satisfaction include some post-secondary education for general health services (OR 0.36, 95% CI 0.18-0.72) and for physician services (OR 0.26, 95% CI 0.09-0.80). Secondary graduation (OR 0.32, 95% CI 0.13-0.76) and post-secondary graduation (OR 0.28, 95% CI 0.11-0.67) negatively predicted patient satisfaction among users of physician services, while being an emergency room patient most recently (OR 0.39, 95% CI 0.20-0.77) was also negatively associated with patient satisfaction among hospital services users.
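The odds ratios and confidence intervals quoted above follow directly from the fitted logit coefficients; for a coefficient β with standard error SE(β), the standard back-transformation (stated here for reference, not taken from the source) is:

$$\mathrm{OR} = e^{\beta}, \qquad 95\%\ \mathrm{CI} = \left(e^{\beta - 1.96\,\mathrm{SE}(\beta)},\; e^{\beta + 1.96\,\mathrm{SE}(\beta)}\right)$$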
RESULTS
This study found self-perceived unmet health care needs to be a significant negative predictor of neurological patients' satisfaction across health care services, and it emphasizes the importance of coordinated efforts to provide appropriate, accessible care of the highest quality for Canadians with neurological conditions.
CONCLUSION
[ "Activities of Daily Living", "Canada", "Health Services Accessibility", "Health Services Needs and Demand", "Humans", "Independent Living", "Personal Satisfaction" ]
9578245
null
null
null
null
Results
[SUBTITLE] Characteristics of the study population – individuals with neurological conditions [SUBSECTION] Analysis for this study was limited to the imputed data of the original subsamples of 2902, 1222, and 2211 individuals with NCs who received general health care services, hospital services, and physician services respectively. Table 1 below demonstrates the demographic characteristics of the study population for all three study samples. The total number of cases varies due to missing values. There is little variation in socio-demographic characteristics across subsamples. Over two-thirds of the subsamples were females (67.8, 68.8, and 70.2% respectively) and under 65 years of age (71.5, 67.2, 71.9% respectively). A little under half of the respondents reported postsecondary graduation (45.2, 45.3, and 47.4% respectively). Less than half of the respondents in all samples were married (40.4, 38.7, and 40.1% respectively), while just under half earned ≤$19,999 annually (43.1, 44.6, and 43.3%) and under 20% in each sample reported unmet health care needs. Table 1Sociodemographic characteristics of study samples by health care services use: general (8,712), hospital (3,492) and physician (6,451) servicesCharacteristicsGeneral Health Care ServicesHospital ServicesPhysician Servicesn(%)*n(%)*n(%)* Age categories, years ≤44 years3,507 (40.2)1,242 (35.6)2,553 (39.6)45 to 642,725 (31.3)1,103 (31.6)2,086 (32.3)65 to 791,636 (18.8)758 (21.7)1,125 (17.4)80 and above844 (9.7)389 (11.1)687 (10.7) Sex Male2,804 (32.2)1,091 (31.2)1,925 (29.8)Female5,908 (67.8)2,401 (68.8)4,526 (70.2) Marital status Single2,583 (29.7)954 (27.4)1,843 (28.6)Married3,515 (40.4)1,349 (38.7)2,583 (40.1)Common-law413 (4.7)193 (5.5)301 (4.7)Widowed/separated/divorced2,193 (25.2)992 (28.4)1,717 (26.6) Educational level Less than secondary2,517 (29.0)1,033 (29.7)1,774 (27.6)Secondary grad1,569 (18.1)543 (15.6)1,116 (17.4)Other post-secondary666 (7.7)326 (9.4)485 (7.6)Post-secondary graduation3,916 (45.2)1,577 (45.3)3,046 (47.4) Income status <=19,9993,541 (43.1)1,468 (44.6)2,634 (43.3)20,000–39,9992,376 (29.0)1,037 (31.5)1,828 (30.1)40,000–69,9991,548 (18.9)536 (16.3)1,074 (17.7)≥ 70,000735 (9.0)252 (7.6)544 (8.9)*Values and percentages included imputed data Sociodemographic characteristics of study samples by health care services use: general (8,712), hospital (3,492) and physician (6,451) services *Values and percentages included imputed data The results in Table 2 below describe the variables associated with health care services received by the respondents. Over two-thirds of the respondents were satisfied with general, hospital and physician services (83.9, 81.1, and 91.2% respectively). Less than half of the respondents felt they received excellent general, hospital, and physician health care (38.3, 45.9, and 54.6% respectively). Less than half of the respondents who received hospital services were outpatients (39%) while the majority received physician services from a family doctor (82.3%) (Table 2). 
Table 2Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) servicesVariablesGeneral Health Care Services N( %)*Hospital Services N(%)*Physician Services N(%)* Unmet health care needs No7,329 (84.2)2,797 (80.2)5,249 (81.4)Yes1,375 (15.8)691 (19.8)1,197 (18.6) General life satisfaction Dissatisfied590 (6.8)279 (8.0)434 (6.7)Very satisfied2,787 (32.2)1,012 (29.2)2,007 (31.3)Satisfied4,396 (50.7)1,822 (52.5)3,342 (52.1)Neither satisfied/dissatisfied895 (10.3)359 (10.3)633 (9.9) Rating of availability of Provincial care Poor1,218 (14.0)537 (15.5)845 (13.2)Fair2,182 (25.2)923 (26.6)1,647 (25.6)Good3,884 (44.8)1,381 (39.7)3,928 (61.2)1Excellent1,392 (16.0)633 (18.2) Quality of Care Received Poor299 (3.4)595 (17.0)694 (10.8)Fair1,072 (12.3)Good3,993 (45.9)1,293 (37.1)22,233 (34.6)2Excellent3,336 (38.3)1,599 (45.9)3,517 (54.6) Patient Satisfaction Dissatisfied1,396 (16.1)660 (18.9)568 (8.8)Satisfied7,299 (83.9)2,827 (81.1)5,875 (91.2) Most recent patient Outpatient-1,363 (39.0)-Admitted Overnight-817 (23.40)-ER Patient-1,312 (37.6)- Physician Type Family Doctor--5,303 (82.3)Specialist--1,144 (17.7)1 Good and excellent categories collapsed to very good2 Fair and good categories collapsed into good* Results included inputted values Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) services 1 Good and excellent categories collapsed to very good 2 Fair and good categories collapsed into good * Results included inputted values Analysis for this study was limited to the imputed data of the original subsamples of 2902, 1222, and 2211 individuals with NCs who received general health care services, hospital services, and physician services respectively. Table 1 below demonstrates the demographic characteristics of the study population for all three study samples. The total number of cases varies due to missing values. There is little variation in socio-demographic characteristics across subsamples. Over two-thirds of the subsamples were females (67.8, 68.8, and 70.2% respectively) and under 65 years of age (71.5, 67.2, 71.9% respectively). A little under half of the respondents reported postsecondary graduation (45.2, 45.3, and 47.4% respectively). Less than half of the respondents in all samples were married (40.4, 38.7, and 40.1% respectively), while just under half earned ≤$19,999 annually (43.1, 44.6, and 43.3%) and under 20% in each sample reported unmet health care needs. 
Table 2Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) servicesVariablesGeneral Health Care Services N( %)*Hospital Services N(%)*Physician Services N(%)* Unmet health care needs No7,329 (84.2)2,797 (80.2)5,249 (81.4)Yes1,375 (15.8)691 (19.8)1,197 (18.6) General life satisfaction Dissatisfied590 (6.8)279 (8.0)434 (6.7)Very satisfied2,787 (32.2)1,012 (29.2)2,007 (31.3)Satisfied4,396 (50.7)1,822 (52.5)3,342 (52.1)Neither satisfied/dissatisfied895 (10.3)359 (10.3)633 (9.9) Rating of availability of Provincial care Poor1,218 (14.0)537 (15.5)845 (13.2)Fair2,182 (25.2)923 (26.6)1,647 (25.6)Good3,884 (44.8)1,381 (39.7)3,928 (61.2)1Excellent1,392 (16.0)633 (18.2) Quality of Care Received Poor299 (3.4)595 (17.0)694 (10.8)Fair1,072 (12.3)Good3,993 (45.9)1,293 (37.1)22,233 (34.6)2Excellent3,336 (38.3)1,599 (45.9)3,517 (54.6) Patient Satisfaction Dissatisfied1,396 (16.1)660 (18.9)568 (8.8)Satisfied7,299 (83.9)2,827 (81.1)5,875 (91.2) Most recent patient Outpatient-1,363 (39.0)-Admitted Overnight-817 (23.40)-ER Patient-1,312 (37.6)- Physician Type Family Doctor--5,303 (82.3)Specialist--1,144 (17.7)1 Good and excellent categories collapsed to very good2 Fair and good categories collapsed into good* Results included inputted values Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) services 1 Good and excellent categories collapsed to very good 2 Fair and good categories collapsed into good * Results included inputted values [SUBTITLE] Characteristics associated with patient satisfaction with general health care, hospital, and physician services (multivariate analysis) [SUBSECTION] Table 3 demonstrates the results of the final multivariate logistic regression models for patient satisfaction with adjusted predictor and/or covariate variables. We found self-perceived unmet health care needs to be a strong negative predictor for patient satisfaction across all health care services. For those with self-perceived unmet needs, the greatest dissatisfaction was associated with physician services (OR = 0.29, p = 0.005), followed by hospital services (OR = 0.41, p = 0.006) and general health care services (OR = 0.59, p = 0.024), when compared to those without unmet health care needs. Conversely, quality and availability of care were significant protective predictors of patient satisfaction across all health care services. When compared to those who received poor quality care, the odds of patient satisfaction (general health care services, 237.60, p < 0.001; hospital services,166.99, p < 0.001; and physician services, 176.4, p < 0.001) were highest across all services among those who received excellent quality care; with those receiving general health services most likely to be satisfied with quality care: fair (OR = 6.15, p = 0.002), good (OR = 36.37, p < 0.001) and excellent (OR = 237.60, p < 0.001) (Table 3). The odds of patient satisfaction across all health services were higher with the increasing availability of care. 
When compared to poor availability of care, the odds of patient satisfaction were highest among those who reported excellent care availability across health care services in general (OR = 4.45, p < 0.001) and hospital services (6.30, p < 0.001), with those receiving hospital services increasingly satisfied with levels of care availability: fair (OR = 2.77, p = 0.011), good (OR = 3.90, p < 0.001) and excellent (OR = 6.30, p < 0.001). Education was a negative predictor of patient satisfaction among those who received general health services with higher levels of education being more dissatisfied with care, [(secondary graduate, OR = 0.62, p = 0.126); (other post-secondary, OR = 0.36, p = 0.004); and post-secondary graduate, OR = 0.54, p = 0.050)] and those who received physician services [(secondary graduate, OR = 0.32, p = 0.010); (other post-secondary, OR = 0.26, p = 0.019); and post-secondary graduate, OR = 0.28, p = 0.005)]. Post-secondary graduation provided reduced odds of being satisfied with hospital services compared to those with the lowest levels of education (Table 3). Physician type seen and most recent type of patient during last health care services were also predictors of patient satisfaction with physician and hospital services respectively. Respondents who received specialist care were 47% less likely (OR = 0.47, p = 0.106) to be satisfied with physician services than those who received care from a family doctor. Patients who were admitted overnight were more likely (OR = 1.20, p = 0.660) to be satisfied with hospital services than outpatients, while ER patients were significantly less likely (OR = 0.39, p = 0.007) to be satisfied with hospital services than both outpatients and overnight patients. Table 3Multivariate analysis of predictors of patient satisfaction with general (8,712), hospital (3,492), and physician (6,451) servicesVariablesGeneral Health Care Services*Hospital Services*Physician Services*OR, 95% CIp-ValueOR, 95% CIp-ValueOR, 95% CIp-Value Age categories, years ≤44 yearsReferenceReferenceReference45 to 641.80 (0.98–3.32)0.0592.02 (0.83–4.91)0.1201.14 (0.51–2.57)0.74765 to 791.24 (0.59–2.590.5760.39 (0.13–1.17)0.0920.75 (0.27–2.08)0.58580 and above2.57 (0.97–6.86)0.0590.48 (0.11–2.13)0.3340.73 (0.23–2.29)0.593 Sex MaleReferenceReferenceReferenceFemale1.32 (0.80–2.17)0.2761.12 (0.53–2.37)0.7640.62 (0.32–1.19)0.152 Marital status SingleReferenceReferenceReferenceMarried1.39 (0.78–2.50)0.2640.55 (0.19–1.61)0.2780.62 (0.31–1.21)0.160Common-law1.31 (0.67–2.58)0.4270.69 (0.23–2.10)0.5141.21 (0.42–3.53)0.727Widowed/separated/divorced1.48 (0.76–2.89)0.2540.68 (2.01–2.27)0.5241.06 (0.41–2.71)0.603 Educational level Less than secondaryReferenceReferenceReferenceSecondary graduate0.62 (0.38–1.14)0.1262.00 (0.73–5.47)0.1770.32 (0.13–0.76)0.010Other post-secondary0.36 (0.18–0.72)0.0042.81 (0.94–8.40)0.0650.26 (0.09–0.80)0.019Post-secondary graduate0.54 (0.29–1.00)0.0500.98 (0.42–2.28)0.9670.28 (0.11–0.67)0.005 Income status ≤ 19,999ReferenceReferenceReference20,000–39,9990.73 (0.41–1.30)0.2811.49 (0.58–3.80)0.4041.57 (0.69–3.54)0.27840,000–69,9991.54 (0.75–3.17)0.2421.04 (0.36–3.00)0.9391.06 (0.33–3.46)0.917≥ 70,0000.90 (0.39–2.12)0.8170.33 (0.11–0.97)0.0451.17 (0.38–3.63)0.783 Unmet health care needs NoReferenceReferenceReferenceYes0.59 (0.37–0.93)0.0240.41 (0.21–0.77)0.0060.29 (0.13–0.69)0.005 General life satisfaction DissatisfiedReferenceReferenceReferenceVery satisfied2.15 (1.03–4.49)0.0411.56 (0.45–5.41)0.4812.53 (0.88–7.26)0.084Satisfied1.80 (0.92–3.53)0.0850.77 
(0.25–2.33)0.6421.24 (0.46–3.37)0.668Neither satisfied nor dissatisfied1.29 (0.60–2.76)0.5100.46 (0.13–1.69)0.2441.60 (0.57–4.52)0.372 Availability of provincial care PoorReferenceReferenceReferenceFair1.72 (1.03–2.87)0.0392.77 (1.27–6.05)0.0111.25 (0.54–2.93)0.592Good3.18 (1.78–5.68)< 0.0013.90 (1.92–7.92)< 0.0011.10 (0.44–2.75)0.833Excellent4.45 (1.76–11.25)< 0.0016.30 (2.35–16.86)< 0.001 Quality of care received PoorReferenceReferenceReferenceFair6.15 (2.00–18.94)0.002Good36.37 (12.09–109.44)< 0.00135.61 (18.71–67.78)< 0.00126.78 (13.36–53.69)< 0.001Excellent237.60 (70.43–801.52)< 0.001166.99 (67.91–410.64)< 0.001176.45 (63.89–487.30)< 0.001 Most recent patient OutpatientReferenceAdmitted Overnight1.20 (0.53–2.72)0.660ER Patient0.39 (0.20–0.77)0.007 Physician type Family DoctorReferenceSpecialist0.47 (0.18–1.18)0.106*Results included imputed valuesSignificant values are marked in bold printHosmer-Lemeshow (χ2) and p-values for General health care services (H-L: 13.74; p-value = 0.0888); Hospital services (H-L: 29.80; p-value = 0.0002); Physician services (H-L: 19.15; p-value = 0.0141) Multivariate analysis of predictors of patient satisfaction with general (8,712), hospital (3,492), and physician (6,451) services *Results included imputed values Significant values are marked in bold print Hosmer-Lemeshow (χ2) and p-values for General health care services (H-L: 13.74; p-value = 0.0888); Hospital services (H-L: 29.80; p-value = 0.0002); Physician services (H-L: 19.15; p-value = 0.0141) In summary, quality of care is strongly and positively associated with patient satisfaction across all health services. Other significant positive predictors of patient satisfaction are the availability of provincial care, quality of care received, and being very satisfied with life in general. The common significant negative predictor of patient satisfaction across all healthcare services is self-perceived unmet health care needs. Post-secondary education (general health services and physician services), and being an ER patient most recently (hospital services) also demonstrated significant negative associations with patient satisfaction. (Fig. 3). Fig. 3Summary of significant associations between health care services and patient satisfaction among individuals with neurological conditions Summary of significant associations between health care services and patient satisfaction among individuals with neurological conditions Table 3 demonstrates the results of the final multivariate logistic regression models for patient satisfaction with adjusted predictor and/or covariate variables. We found self-perceived unmet health care needs to be a strong negative predictor for patient satisfaction across all health care services. For those with self-perceived unmet needs, the greatest dissatisfaction was associated with physician services (OR = 0.29, p = 0.005), followed by hospital services (OR = 0.41, p = 0.006) and general health care services (OR = 0.59, p = 0.024), when compared to those without unmet health care needs. Conversely, quality and availability of care were significant protective predictors of patient satisfaction across all health care services. 
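The Hosmer-Lemeshow statistics quoted with Table 3 assess model calibration by comparing observed and expected outcomes across groups of predicted risk. A compact Python sketch of that test is given below on synthetic predictions; it is an illustration of the decile-of-risk version of the test, not the STATA routine used in the study.

# Minimal sketch of a Hosmer-Lemeshow goodness-of-fit test (decile-of-risk version).
# Synthetic data only; the study computed these statistics in STATA.
import numpy as np
from scipy import stats

def hosmer_lemeshow(y_true, p_pred, groups=10):
    """Chi-square over `groups` bins of predicted probability; df = groups - 2."""
    order = np.argsort(p_pred)
    y, p = np.asarray(y_true)[order], np.asarray(p_pred)[order]
    bins = np.array_split(np.arange(len(p)), groups)
    chi2 = 0.0
    for idx in bins:
        obs = y[idx].sum()          # observed events in the risk group
        exp = p[idx].sum()          # expected events from the model
        n = len(idx)
        chi2 += (obs - exp) ** 2 / (exp * (1 - exp / n))
    df = groups - 2
    return chi2, df, stats.chi2.sf(chi2, df)

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 1_000)   # predicted satisfaction probabilities (synthetic)
y = rng.binomial(1, p)               # outcomes generated to be consistent with the predictions
chi2, df, p_value = hosmer_lemeshow(y, p)
print(f"H-L chi2 = {chi2:.2f} on {df} df, p = {p_value:.3f}")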
Conclusion
Self-perceived unmet health care needs are a common, significant negative predictor of neurological patients’ satisfaction across health care services. Future studies on predictors of neurological patients’ satisfaction with health care services should focus on specific unmet health care needs and on different neurological conditions. Neurological patients are known to report unmet health care needs and to experience barriers to care, limiting their quality of life. Our study emphasizes that the availability and accessibility of care for neurological patients increased satisfaction with health care services in general, as well as with physician and hospital services.
[ "Background", "Methods", "Study participants and data sources", "Derivation of neurological conditions variable", "Assessment of patient satisfaction (outcome)", "Primary predictor (self-perceived unmet health care needs)", "Covariates", "Statistical analysis", "Characteristics of the study population – individuals with neurological conditions", "Characteristics associated with patient satisfaction with general health care, hospital, and physician services (multivariate analysis)", "Strengths and limitations of the study" ]
[ "Neurological conditions (NCs) including Alzheimer’s disease (AD)/ dementia, Parkinson’s disease (PD), amyotrophic lateral sclerosis, sclerosis, and others, were the focus of a Statistics Canada survey in 2010 [1]. NCs, especially those exacerbated by increased age, e.g., PD and AD/dementia, lead to long-term challenges with functional impairments and limitations to activity [2]. Neurological patients, not surprisingly, report unmet health care needs [3, 4] and experience barriers to care including lack of resources (time and money), lack of services, and no local specialists [2, 5, 6].\nSelf-reported unmet health care need is a commonly used measure of health care access or utilization [7]. Health care utilization factors include availability, acceptability, accessibility, and personal choice (unrelated to the health system) [8, 9]. Perceived unmet health care needs may be categorized per availability – waiting time too long, care not available when requested, care not available in the area; acceptability – dislike doctor/afraid, language problems, didn’t know where to go; accessibility –cost and transportation; or personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities [6].\nAnderson’s health behavior model describes health care utilization as a function of three factors: predisposing, enabling, and need. Predisposing factors exist before presentation with a health condition, i.e., socio-demographic or socio-cultural characteristics; enabling factors represent the logistical means for accessing health services; and need factors are the effectual cause of health service use and reflect the perceived health status of the health care user [10, 11]. The outcome measure for this study, patient satisfaction, is widely accepted as an assessment of overall healthcare quality [12, 13]. Patient satisfaction is associated with health-related quality of life (an individual’s or a group’s perceived physical and mental health over time) [14]. Some studies indicate that unmet health care needs result in decreased patient satisfaction with health care services [15–17] and lowered quality of health care and life [18–20].\nNeurological conditions are a major contributor to disability in the Canadian population. Approximately 3.77 million Canadians live with neurological conditions. Of this number, 170,000 are cared for in institutions [21]. People with psychosocial difficulties, common to neurological conditions, have reported higher numbers of unmet health care needs [22–24] that may go unnoticed by health professionals [25]. Therefore, an understanding of unmet health care needs and patient satisfaction among older Canadians with NCs is crucial to the ongoing evaluation and continuous quality improvement of care for this vulnerable population [10]. Such knowledge will contribute to the health system’s preparation and strengthening of services to adequately meet the needs of the increasing aging population. This study examines the association between unmet health care needs and satisfaction with health care services in Canada among neurological patients. We incorporate life satisfaction as a predisposing factor of patients’ satisfaction with the health care system as it presents an overarching view of an individual’s satisfaction and may influence one’s satisfaction with the health system. 
The specific objectives of this study are (1) to explore the factors predicting patient satisfaction with general health care, hospital, and physician services among Canadians with NCs, (2) examine the association between unmet health care needs and satisfaction with health care services among neurological patients in Canada, and (3) contrast patient satisfaction between physician care and hospital care among Canadians with NCs.", "[SUBTITLE] Study participants and data sources [SUBSECTION] Data were extracted from the 2010 Canadian Community Health Survey - Annual Component (CCHS − 2010). This cross-sectional survey collected population-wide information on health status, health care utilization, and health determinants of Canadians aged 12 + living in private households in all provinces and territories [26]. Persons living on Crown lands or Indian Reserves, those dwelling in institutions, or certain remote regions, as well as full-time members of the Canadian Forces, are excluded from this survey [26]. Approximately half the interviews were conducted in person using computer-assisted personal interviewing (CAPI) and the other half were conducted over the phone using computer-assisted telephone interviewing (CATI) [26]. The overall person-level survey response rate was 88.6% and the combined response rate was 71.5% at the national level. Statistics Canada’s research ethics board approved the original survey [26].\nThe CCHS-2010 was used due to its one-year unique common content on health care utilization: unmet health care needs (UCN) and neurological conditions and the optional content on patient satisfaction [26]. Residents of Ontario with NCs who received health care services completed the module on patient satisfaction and provided content on unmet health care needs were assessed. The population of 10,819,146 in Ontario in 2010 represented a little over one-third of the Canadian population in that year. The views of those respondents should provide good insight into the concerns of Canadians with NCs. Therefore, an imputed subsample of 6335 respondents with NCs was used for this study. From that number, 2902 who received general health care services, 1222 who received hospital services, and 2211 who received physician services within twelve months leading up to data collection were selected. Age categories 12–44 years were grouped to protect anonymity, due to the small sample size of the study population, and very few people in the youngest age categories reported NCs and unmet health care needs. This study was carried out in accordance with the relevant national/institutional guidelines and regulations. Figure 1 below demonstrates the restriction criteria used to obtain the subsample from the original sample.\n\nFig. 1Restriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n\nRestriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\nData were extracted from the 2010 Canadian Community Health Survey - Annual Component (CCHS − 2010). This cross-sectional survey collected population-wide information on health status, health care utilization, and health determinants of Canadians aged 12 + living in private households in all provinces and territories [26]. Persons living on Crown lands or Indian Reserves, those dwelling in institutions, or certain remote regions, as well as full-time members of the Canadian Forces, are excluded from this survey [26]. 
Approximately half the interviews were conducted in person using computer-assisted personal interviewing (CAPI) and the other half were conducted over the phone using computer-assisted telephone interviewing (CATI) [26]. The overall person-level survey response rate was 88.6% and the combined response rate was 71.5% at the national level. Statistics Canada’s research ethics board approved the original survey [26].\nThe CCHS-2010 was used due to its one-year unique common content on health care utilization: unmet health care needs (UCN) and neurological conditions and the optional content on patient satisfaction [26]. Residents of Ontario with NCs who received health care services completed the module on patient satisfaction and provided content on unmet health care needs were assessed. The population of 10,819,146 in Ontario in 2010 represented a little over one-third of the Canadian population in that year. The views of those respondents should provide good insight into the concerns of Canadians with NCs. Therefore, an imputed subsample of 6335 respondents with NCs was used for this study. From that number, 2902 who received general health care services, 1222 who received hospital services, and 2211 who received physician services within twelve months leading up to data collection were selected. Age categories 12–44 years were grouped to protect anonymity, due to the small sample size of the study population, and very few people in the youngest age categories reported NCs and unmet health care needs. This study was carried out in accordance with the relevant national/institutional guidelines and regulations. Figure 1 below demonstrates the restriction criteria used to obtain the subsample from the original sample.\n\nFig. 1Restriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n\nRestriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n[SUBTITLE] Derivation of neurological conditions variable [SUBSECTION] Neurological conditions in the CCHS-2010 sample were derived from responding “yes” to having a neurological condition: Alzheimer’s disease or dementia, Parkinson’s disease, multiple sclerosis, epilepsy, cerebral palsy, amyotrophic lateral sclerosis, Huntington’s disease, stroke effects, Tourette’s syndrome, dystonia, muscular dystrophy, spina bifida, brain injuries, spinal cord injury, brain and spinal cord tumors, hydrocephalus, and migraine headaches.\nNeurological conditions in the CCHS-2010 sample were derived from responding “yes” to having a neurological condition: Alzheimer’s disease or dementia, Parkinson’s disease, multiple sclerosis, epilepsy, cerebral palsy, amyotrophic lateral sclerosis, Huntington’s disease, stroke effects, Tourette’s syndrome, dystonia, muscular dystrophy, spina bifida, brain injuries, spinal cord injury, brain and spinal cord tumors, hydrocephalus, and migraine headaches.\n[SUBTITLE] Assessment of patient satisfaction (outcome) [SUBSECTION] Patient satisfaction as our outcome of interest was defined according to satisfaction with health care in general (health care services from any health care provider including ophthalmologists, dentists, and other allied health professionals and home care); hospital (health care services at a hospital, for any diagnostic or day surgery service, overnight stay, or as an emergency room patient); and physician services (health care services from a family doctor (general practitioner), and other physicians (medical specialist). 
Respondents answered the following questions: “Overall, how satisfied were you with the way health care services were provided?” “How satisfied were you with the way hospital services were provided?” “How satisfied were you with the way physician care was provided?” Responses for the levels of satisfaction with the various types of health care services were ordinal and coded by categories: 1 = very satisfied, 2 = somewhat satisfied, 3 = neither satisfied nor dissatisfied, 4 = somewhat dissatisfied, and 5 = very dissatisfied. For each patient satisfaction variable (general health care, hospital, and physician), categories 1 and 2 were collapsed and recoded as “satisfied” = 1, while categories 3–5 were collapsed and recoded as “dissatisfied” = 0.\nPatient satisfaction as our outcome of interest was defined according to satisfaction with health care in general (health care services from any health care provider including ophthalmologists, dentists, and other allied health professionals and home care); hospital (health care services at a hospital, for any diagnostic or day surgery service, overnight stay, or as an emergency room patient); and physician services (health care services from a family doctor (general practitioner), and other physicians (medical specialist). Respondents answered the following questions: “Overall, how satisfied were you with the way health care services were provided?” “How satisfied were you with the way hospital services were provided?” “How satisfied were you with the way physician care was provided?” Responses for the levels of satisfaction with the various types of health care services were ordinal and coded by categories: 1 = very satisfied, 2 = somewhat satisfied, 3 = neither satisfied nor dissatisfied, 4 = somewhat dissatisfied, and 5 = very dissatisfied. For each patient satisfaction variable (general health care, hospital, and physician), categories 1 and 2 were collapsed and recoded as “satisfied” = 1, while categories 3–5 were collapsed and recoded as “dissatisfied” = 0.\n[SUBTITLE] Primary predictor (self-perceived unmet health care needs) [SUBSECTION] We examine the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, “During the past 12 months, was there ever a time when you felt that you needed health care but you didn’t receive it?” Responses were coded, “yes” = 1 and “no” = 0. For this variable, reasons for indicating unmet care needs include (1) unavailability of care – waiting time too long, care not available when requested, care not available in the area, the doctor didn’t think the care was necessary (2) unacceptability of care – dislike doctor/afraid, language problems, didn’t know where to go (3) inaccessibility –cost (4) personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities.\nWe examine the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, “During the past 12 months, was there ever a time when you felt that you needed health care but you didn’t receive it?” Responses were coded, “yes” = 1 and “no” = 0. 
For this variable, reasons for indicating unmet care needs include (1) unavailability of care – waiting time too long, care not available when requested, care not available in the area, the doctor didn’t think the care was necessary (2) unacceptability of care – dislike doctor/afraid, language problems, didn’t know where to go (3) inaccessibility –cost (4) personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities.\n[SUBTITLE] Covariates [SUBSECTION] Other sociodemographic covariates assessed were: age (< 45, 45–64, 65–79, 80 + years), sex (“male” vs “female”), marital status (“married”, “common-law”, “widowed/divorced/separated”, “single/never married”), level of education (“less than secondary”, secondary graduation”, “some post-secondary education”, “post-secondary graduation”), total personal income from all sources (≤ 19,999, 20,000–39,999, 40,000–69,999, 70,000 or more), satisfaction with life in general (“dissatisfied”, “very satisfied”, “satisfied”, “neither satisfied nor dissatisfied”). Ratings of availability of provincial health care were assessed as: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “fair”, “good”, “excellent”); and physician services (“poor”, “fair”, “very good”). Rating of quality of care received: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “good”, “excellent”); and physician services (“poor”, “good”, “excellent”). Type of patient at most recent visit (“admitted overnight”, “outpatient”, “ER patient”). Type of physician seen at most recent visit (“family doctor” vs “specialist”). Categories of “do not know”, “refusal” and “not stated” were treated as missing values. Our study is grounded on Andersen’s health behavior model as shown in (Fig. 2) below.\n\nFig. 2Research model of health care utilization in current study based on Andersen’s health behavior model\n\nResearch model of health care utilization in current study based on Andersen’s health behavior model\nOther sociodemographic covariates assessed were: age (< 45, 45–64, 65–79, 80 + years), sex (“male” vs “female”), marital status (“married”, “common-law”, “widowed/divorced/separated”, “single/never married”), level of education (“less than secondary”, secondary graduation”, “some post-secondary education”, “post-secondary graduation”), total personal income from all sources (≤ 19,999, 20,000–39,999, 40,000–69,999, 70,000 or more), satisfaction with life in general (“dissatisfied”, “very satisfied”, “satisfied”, “neither satisfied nor dissatisfied”). Ratings of availability of provincial health care were assessed as: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “fair”, “good”, “excellent”); and physician services (“poor”, “fair”, “very good”). Rating of quality of care received: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “good”, “excellent”); and physician services (“poor”, “good”, “excellent”). Type of patient at most recent visit (“admitted overnight”, “outpatient”, “ER patient”). Type of physician seen at most recent visit (“family doctor” vs “specialist”). Categories of “do not know”, “refusal” and “not stated” were treated as missing values. Our study is grounded on Andersen’s health behavior model as shown in (Fig. 2) below.\n\nFig. 
2Research model of health care utilization in current study based on Andersen’s health behavior model\n\nResearch model of health care utilization in current study based on Andersen’s health behavior model\n[SUBTITLE] Statistical analysis [SUBSECTION] Statistical analysis was completed using STATA version 14. Sampling weights were applied to account for the survey design. Descriptive statistics were tabulated for the main exposure variable, outcome variable, and covariates as well as socio-demographic factors (age, gender, marital status, education, and personal income) among those with NCs. To account for missing data, and prevent loss of information and selection bias, chained iterations of multiple imputations were conducted [27]. All missing values were retrieved and included in the final model-building process.\nLogistic regression was used to estimate the association between predictor variables and general life satisfaction due to the small sample size and because the assumptions for ordered logistic regression were violated. The outcome variable categories were collapsed and logistic regression was conducted because generalized ordered logistic regression models did not converge in the model-building process. Univariate logistic regression models were utilized to examine the association between self-perceived unmet care needs, other predictors/covariates, and satisfaction with health care services. Unadjusted odds ratios and 95% confidence intervals (CI) and p-values were calculated. Predictors/covariates with unconditional p-values ≤ 0.20 were retained for use in the multivariate model-building phase of analysis [4]. In the multivariate model building process, variables with p-values > 0.05 were individually eliminated in a sequence of descending p-values, using a manual backward elimination strategy. Variables with significant p-values ≤ 0.05 were retained in the final model. All variables of interest which were manually eliminated due to insignificant p-values were checked for confounding and retained when they altered the coefficients for the exposure of interest by > 20%. Any variable with an initial insignificant p-value that was eliminated at the univariate analysis stage was assessed for interaction. A likelihood ratio test assessed the overall significance of our logistic regression model.\nStatistical analysis was completed using STATA version 14. Sampling weights were applied to account for the survey design. Descriptive statistics were tabulated for the main exposure variable, outcome variable, and covariates as well as socio-demographic factors (age, gender, marital status, education, and personal income) among those with NCs. To account for missing data, and prevent loss of information and selection bias, chained iterations of multiple imputations were conducted [27]. All missing values were retrieved and included in the final model-building process.\nLogistic regression was used to estimate the association between predictor variables and general life satisfaction due to the small sample size and because the assumptions for ordered logistic regression were violated. The outcome variable categories were collapsed and logistic regression was conducted because generalized ordered logistic regression models did not converge in the model-building process. Univariate logistic regression models were utilized to examine the association between self-perceived unmet care needs, other predictors/covariates, and satisfaction with health care services. 
Unadjusted odds ratios and 95% confidence intervals (CI) and p-values were calculated. Predictors/covariates with unconditional p-values ≤ 0.20 were retained for use in the multivariate model-building phase of analysis [4]. In the multivariate model building process, variables with p-values > 0.05 were individually eliminated in a sequence of descending p-values, using a manual backward elimination strategy. Variables with significant p-values ≤ 0.05 were retained in the final model. All variables of interest which were manually eliminated due to insignificant p-values were checked for confounding and retained when they altered the coefficients for the exposure of interest by > 20%. Any variable with an initial insignificant p-value that was eliminated at the univariate analysis stage was assessed for interaction. A likelihood ratio test assessed the overall significance of our logistic regression model.", "Data were extracted from the 2010 Canadian Community Health Survey - Annual Component (CCHS − 2010). This cross-sectional survey collected population-wide information on health status, health care utilization, and health determinants of Canadians aged 12 + living in private households in all provinces and territories [26]. Persons living on Crown lands or Indian Reserves, those dwelling in institutions, or certain remote regions, as well as full-time members of the Canadian Forces, are excluded from this survey [26]. Approximately half the interviews were conducted in person using computer-assisted personal interviewing (CAPI) and the other half were conducted over the phone using computer-assisted telephone interviewing (CATI) [26]. The overall person-level survey response rate was 88.6% and the combined response rate was 71.5% at the national level. Statistics Canada’s research ethics board approved the original survey [26].\nThe CCHS-2010 was used due to its one-year unique common content on health care utilization: unmet health care needs (UCN) and neurological conditions and the optional content on patient satisfaction [26]. Residents of Ontario with NCs who received health care services completed the module on patient satisfaction and provided content on unmet health care needs were assessed. The population of 10,819,146 in Ontario in 2010 represented a little over one-third of the Canadian population in that year. The views of those respondents should provide good insight into the concerns of Canadians with NCs. Therefore, an imputed subsample of 6335 respondents with NCs was used for this study. From that number, 2902 who received general health care services, 1222 who received hospital services, and 2211 who received physician services within twelve months leading up to data collection were selected. Age categories 12–44 years were grouped to protect anonymity, due to the small sample size of the study population, and very few people in the youngest age categories reported NCs and unmet health care needs. This study was carried out in accordance with the relevant national/institutional guidelines and regulations. Figure 1 below demonstrates the restriction criteria used to obtain the subsample from the original sample.\n\nFig. 1Restriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n\nRestriction criteria employed to obtain the sub-sample in this study. 
* Excluded from the analysis", "Neurological conditions in the CCHS-2010 sample were derived from responding “yes” to having a neurological condition: Alzheimer’s disease or dementia, Parkinson’s disease, multiple sclerosis, epilepsy, cerebral palsy, amyotrophic lateral sclerosis, Huntington’s disease, stroke effects, Tourette’s syndrome, dystonia, muscular dystrophy, spina bifida, brain injuries, spinal cord injury, brain and spinal cord tumors, hydrocephalus, and migraine headaches.", "Patient satisfaction as our outcome of interest was defined according to satisfaction with health care in general (health care services from any health care provider including ophthalmologists, dentists, and other allied health professionals and home care); hospital (health care services at a hospital, for any diagnostic or day surgery service, overnight stay, or as an emergency room patient); and physician services (health care services from a family doctor (general practitioner), and other physicians (medical specialist). Respondents answered the following questions: “Overall, how satisfied were you with the way health care services were provided?” “How satisfied were you with the way hospital services were provided?” “How satisfied were you with the way physician care was provided?” Responses for the levels of satisfaction with the various types of health care services were ordinal and coded by categories: 1 = very satisfied, 2 = somewhat satisfied, 3 = neither satisfied nor dissatisfied, 4 = somewhat dissatisfied, and 5 = very dissatisfied. For each patient satisfaction variable (general health care, hospital, and physician), categories 1 and 2 were collapsed and recoded as “satisfied” = 1, while categories 3–5 were collapsed and recoded as “dissatisfied” = 0.", "We examine the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, “During the past 12 months, was there ever a time when you felt that you needed health care but you didn’t receive it?” Responses were coded, “yes” = 1 and “no” = 0. For this variable, reasons for indicating unmet care needs include (1) unavailability of care – waiting time too long, care not available when requested, care not available in the area, the doctor didn’t think the care was necessary (2) unacceptability of care – dislike doctor/afraid, language problems, didn’t know where to go (3) inaccessibility –cost (4) personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities.", "Other sociodemographic covariates assessed were: age (< 45, 45–64, 65–79, 80 + years), sex (“male” vs “female”), marital status (“married”, “common-law”, “widowed/divorced/separated”, “single/never married”), level of education (“less than secondary”, secondary graduation”, “some post-secondary education”, “post-secondary graduation”), total personal income from all sources (≤ 19,999, 20,000–39,999, 40,000–69,999, 70,000 or more), satisfaction with life in general (“dissatisfied”, “very satisfied”, “satisfied”, “neither satisfied nor dissatisfied”). Ratings of availability of provincial health care were assessed as: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “fair”, “good”, “excellent”); and physician services (“poor”, “fair”, “very good”). 
Rating of quality of care received: general health care (“poor”, “fair”, “good”, “excellent”); hospital services (“poor”, “good”, “excellent”); and physician services (“poor”, “good”, “excellent”). Type of patient at most recent visit (“admitted overnight”, “outpatient”, “ER patient”). Type of physician seen at most recent visit (“family doctor” vs “specialist”). Categories of “do not know”, “refusal” and “not stated” were treated as missing values. Our study is grounded on Andersen’s health behavior model as shown in (Fig. 2) below.\n\nFig. 2Research model of health care utilization in current study based on Andersen’s health behavior model\n\nResearch model of health care utilization in current study based on Andersen’s health behavior model", "Statistical analysis was completed using STATA version 14. Sampling weights were applied to account for the survey design. Descriptive statistics were tabulated for the main exposure variable, outcome variable, and covariates as well as socio-demographic factors (age, gender, marital status, education, and personal income) among those with NCs. To account for missing data, and prevent loss of information and selection bias, chained iterations of multiple imputations were conducted [27]. All missing values were retrieved and included in the final model-building process.\nLogistic regression was used to estimate the association between predictor variables and general life satisfaction due to the small sample size and because the assumptions for ordered logistic regression were violated. The outcome variable categories were collapsed and logistic regression was conducted because generalized ordered logistic regression models did not converge in the model-building process. Univariate logistic regression models were utilized to examine the association between self-perceived unmet care needs, other predictors/covariates, and satisfaction with health care services. Unadjusted odds ratios and 95% confidence intervals (CI) and p-values were calculated. Predictors/covariates with unconditional p-values ≤ 0.20 were retained for use in the multivariate model-building phase of analysis [4]. In the multivariate model building process, variables with p-values > 0.05 were individually eliminated in a sequence of descending p-values, using a manual backward elimination strategy. Variables with significant p-values ≤ 0.05 were retained in the final model. All variables of interest which were manually eliminated due to insignificant p-values were checked for confounding and retained when they altered the coefficients for the exposure of interest by > 20%. Any variable with an initial insignificant p-value that was eliminated at the univariate analysis stage was assessed for interaction. A likelihood ratio test assessed the overall significance of our logistic regression model.", "Analysis for this study was limited to the imputed data of the original subsamples of 2902, 1222, and 2211 individuals with NCs who received general health care services, hospital services, and physician services respectively. Table 1 below demonstrates the demographic characteristics of the study population for all three study samples. The total number of cases varies due to missing values.\nThere is little variation in socio-demographic characteristics across subsamples. Over two-thirds of the subsamples were females (67.8, 68.8, and 70.2% respectively) and under 65 years of age (71.5, 67.2, 71.9% respectively). 
A little under half of the respondents reported postsecondary graduation (45.2, 45.3, and 47.4% respectively). Less than half of the respondents in all samples were married (40.4, 38.7, and 40.1% respectively), while just under half earned ≤$19,999 annually (43.1, 44.6, and 43.3%) and under 20% in each sample reported unmet health care needs.\n\nTable 1Sociodemographic characteristics of study samples by health care services use: general (8,712), hospital (3,492) and physician (6,451) servicesCharacteristicsGeneral Health Care ServicesHospital ServicesPhysician Servicesn(%)*n(%)*n(%)*\nAge categories, years\n≤44 years3,507 (40.2)1,242 (35.6)2,553 (39.6)45 to 642,725 (31.3)1,103 (31.6)2,086 (32.3)65 to 791,636 (18.8)758 (21.7)1,125 (17.4)80 and above844 (9.7)389 (11.1)687 (10.7)\nSex\nMale2,804 (32.2)1,091 (31.2)1,925 (29.8)Female5,908 (67.8)2,401 (68.8)4,526 (70.2)\nMarital status\nSingle2,583 (29.7)954 (27.4)1,843 (28.6)Married3,515 (40.4)1,349 (38.7)2,583 (40.1)Common-law413 (4.7)193 (5.5)301 (4.7)Widowed/separated/divorced2,193 (25.2)992 (28.4)1,717 (26.6)\nEducational level\nLess than secondary2,517 (29.0)1,033 (29.7)1,774 (27.6)Secondary grad1,569 (18.1)543 (15.6)1,116 (17.4)Other post-secondary666 (7.7)326 (9.4)485 (7.6)Post-secondary graduation3,916 (45.2)1,577 (45.3)3,046 (47.4)\nIncome status\n<=19,9993,541 (43.1)1,468 (44.6)2,634 (43.3)20,000–39,9992,376 (29.0)1,037 (31.5)1,828 (30.1)40,000–69,9991,548 (18.9)536 (16.3)1,074 (17.7)≥ 70,000735 (9.0)252 (7.6)544 (8.9)*Values and percentages included imputed data\n\nSociodemographic characteristics of study samples by health care services use: general (8,712), hospital (3,492) and physician (6,451) services\n*Values and percentages included imputed data\nThe results in Table 2 below describe the variables associated with health care services received by the respondents. Over two-thirds of the respondents were satisfied with general, hospital and physician services (83.9, 81.1, and 91.2% respectively). Less than half of the respondents felt they received excellent general, hospital, and physician health care (38.3, 45.9, and 54.6% respectively). 
Less than half of the respondents who received hospital services were outpatients (39%) while the majority received physician services from a family doctor (82.3%) (Table 2).\n\nTable 2Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) servicesVariablesGeneral Health Care Services N( %)*Hospital Services N(%)*Physician Services N(%)*\nUnmet health care needs\nNo7,329 (84.2)2,797 (80.2)5,249 (81.4)Yes1,375 (15.8)691 (19.8)1,197 (18.6)\nGeneral life satisfaction\nDissatisfied590 (6.8)279 (8.0)434 (6.7)Very satisfied2,787 (32.2)1,012 (29.2)2,007 (31.3)Satisfied4,396 (50.7)1,822 (52.5)3,342 (52.1)Neither satisfied/dissatisfied895 (10.3)359 (10.3)633 (9.9)\nRating of availability of Provincial care\nPoor1,218 (14.0)537 (15.5)845 (13.2)Fair2,182 (25.2)923 (26.6)1,647 (25.6)Good3,884 (44.8)1,381 (39.7)3,928 (61.2)1Excellent1,392 (16.0)633 (18.2)\nQuality of Care Received\nPoor299 (3.4)595 (17.0)694 (10.8)Fair1,072 (12.3)Good3,993 (45.9)1,293 (37.1)22,233 (34.6)2Excellent3,336 (38.3)1,599 (45.9)3,517 (54.6)\nPatient Satisfaction\nDissatisfied1,396 (16.1)660 (18.9)568 (8.8)Satisfied7,299 (83.9)2,827 (81.1)5,875 (91.2)\nMost recent patient\nOutpatient-1,363 (39.0)-Admitted Overnight-817 (23.40)-ER Patient-1,312 (37.6)-\nPhysician Type\nFamily Doctor--5,303 (82.3)Specialist--1,144 (17.7)1 Good and excellent categories collapsed to very good2 Fair and good categories collapsed into good* Results included inputted values\n\nDescription of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) services\n1 Good and excellent categories collapsed to very good\n2 Fair and good categories collapsed into good\n* Results included inputted values", "Table 3 demonstrates the results of the final multivariate logistic regression models for patient satisfaction with adjusted predictor and/or covariate variables. We found self-perceived unmet health care needs to be a strong negative predictor for patient satisfaction across all health care services. For those with self-perceived unmet needs, the greatest dissatisfaction was associated with physician services (OR = 0.29, p = 0.005), followed by hospital services (OR = 0.41, p = 0.006) and general health care services (OR = 0.59, p = 0.024), when compared to those without unmet health care needs. Conversely, quality and availability of care were significant protective predictors of patient satisfaction across all health care services. When compared to those who received poor quality care, the odds of patient satisfaction (general health care services, 237.60, p < 0.001; hospital services,166.99, p < 0.001; and physician services, 176.4, p < 0.001) were highest across all services among those who received excellent quality care; with those receiving general health services most likely to be satisfied with quality care: fair (OR = 6.15, p = 0.002), good (OR = 36.37, p < 0.001) and excellent (OR = 237.60, p < 0.001) (Table 3). The odds of patient satisfaction across all health services were higher with the increasing availability of care. 
When compared to poor availability of care, the odds of patient satisfaction were highest among those who reported excellent care availability across health care services in general (OR = 4.45, p < 0.001) and hospital services (6.30, p < 0.001), with those receiving hospital services increasingly satisfied with levels of care availability: fair (OR = 2.77, p = 0.011), good (OR = 3.90, p < 0.001) and excellent (OR = 6.30, p < 0.001).\nEducation was a negative predictor of patient satisfaction among those who received general health services with higher levels of education being more dissatisfied with care, [(secondary graduate, OR = 0.62, p = 0.126); (other post-secondary, OR = 0.36, p = 0.004); and post-secondary graduate, OR = 0.54, p = 0.050)] and those who received physician services [(secondary graduate, OR = 0.32, p = 0.010); (other post-secondary, OR = 0.26, p = 0.019); and post-secondary graduate, OR = 0.28, p = 0.005)]. Post-secondary graduation provided reduced odds of being satisfied with hospital services compared to those with the lowest levels of education (Table 3).\nPhysician type seen and most recent type of patient during last health care services were also predictors of patient satisfaction with physician and hospital services respectively. Respondents who received specialist care were 47% less likely (OR = 0.47, p = 0.106) to be satisfied with physician services than those who received care from a family doctor. Patients who were admitted overnight were more likely (OR = 1.20, p = 0.660) to be satisfied with hospital services than outpatients, while ER patients were significantly less likely (OR = 0.39, p = 0.007) to be satisfied with hospital services than both outpatients and overnight patients.\n\nTable 3Multivariate analysis of predictors of patient satisfaction with general (8,712), hospital (3,492), and physician (6,451) servicesVariablesGeneral Health Care Services*Hospital Services*Physician Services*OR, 95% CIp-ValueOR, 95% CIp-ValueOR, 95% CIp-Value\nAge categories, years\n≤44 yearsReferenceReferenceReference45 to 641.80 (0.98–3.32)0.0592.02 (0.83–4.91)0.1201.14 (0.51–2.57)0.74765 to 791.24 (0.59–2.590.5760.39 (0.13–1.17)0.0920.75 (0.27–2.08)0.58580 and above2.57 (0.97–6.86)0.0590.48 (0.11–2.13)0.3340.73 (0.23–2.29)0.593\nSex\nMaleReferenceReferenceReferenceFemale1.32 (0.80–2.17)0.2761.12 (0.53–2.37)0.7640.62 (0.32–1.19)0.152\nMarital status\nSingleReferenceReferenceReferenceMarried1.39 (0.78–2.50)0.2640.55 (0.19–1.61)0.2780.62 (0.31–1.21)0.160Common-law1.31 (0.67–2.58)0.4270.69 (0.23–2.10)0.5141.21 (0.42–3.53)0.727Widowed/separated/divorced1.48 (0.76–2.89)0.2540.68 (2.01–2.27)0.5241.06 (0.41–2.71)0.603\nEducational level\nLess than secondaryReferenceReferenceReferenceSecondary graduate0.62 (0.38–1.14)0.1262.00 (0.73–5.47)0.1770.32 (0.13–0.76)0.010Other post-secondary0.36 (0.18–0.72)0.0042.81 (0.94–8.40)0.0650.26 (0.09–0.80)0.019Post-secondary graduate0.54 (0.29–1.00)0.0500.98 (0.42–2.28)0.9670.28 (0.11–0.67)0.005\nIncome status\n≤ 19,999ReferenceReferenceReference20,000–39,9990.73 (0.41–1.30)0.2811.49 (0.58–3.80)0.4041.57 (0.69–3.54)0.27840,000–69,9991.54 (0.75–3.17)0.2421.04 (0.36–3.00)0.9391.06 (0.33–3.46)0.917≥ 70,0000.90 (0.39–2.12)0.8170.33 (0.11–0.97)0.0451.17 (0.38–3.63)0.783\nUnmet health care needs\nNoReferenceReferenceReferenceYes0.59 (0.37–0.93)0.0240.41 (0.21–0.77)0.0060.29 (0.13–0.69)0.005\nGeneral life satisfaction\nDissatisfiedReferenceReferenceReferenceVery satisfied2.15 (1.03–4.49)0.0411.56 (0.45–5.41)0.4812.53 (0.88–7.26)0.084Satisfied1.80 
(0.92–3.53)0.0850.77 (0.25–2.33)0.6421.24 (0.46–3.37)0.668Neither satisfied nor dissatisfied1.29 (0.60–2.76)0.5100.46 (0.13–1.69)0.2441.60 (0.57–4.52)0.372\nAvailability of provincial care\nPoorReferenceReferenceReferenceFair1.72 (1.03–2.87)0.0392.77 (1.27–6.05)0.0111.25 (0.54–2.93)0.592Good3.18 (1.78–5.68)< 0.0013.90 (1.92–7.92)< 0.0011.10 (0.44–2.75)0.833Excellent4.45 (1.76–11.25)< 0.0016.30 (2.35–16.86)< 0.001\nQuality of care received\nPoorReferenceReferenceReferenceFair6.15 (2.00–18.94)0.002Good36.37 (12.09–109.44)< 0.00135.61 (18.71–67.78)< 0.00126.78 (13.36–53.69)< 0.001Excellent237.60 (70.43–801.52)< 0.001166.99 (67.91–410.64)< 0.001176.45 (63.89–487.30)< 0.001\nMost recent patient\nOutpatientReferenceAdmitted Overnight1.20 (0.53–2.72)0.660ER Patient0.39 (0.20–0.77)0.007\nPhysician type\nFamily DoctorReferenceSpecialist0.47 (0.18–1.18)0.106*Results included imputed valuesSignificant values are marked in bold printHosmer-Lemeshow (χ2) and p-values for General health care services (H-L: 13.74; p-value = 0.0888); Hospital services (H-L: 29.80; p-value = 0.0002); Physician services (H-L: 19.15; p-value = 0.0141)\n\nMultivariate analysis of predictors of patient satisfaction with general (8,712), hospital (3,492), and physician (6,451) services\n*Results included imputed values\nSignificant values are marked in bold print\nHosmer-Lemeshow (χ2) and p-values for General health care services (H-L: 13.74; p-value = 0.0888); Hospital services (H-L: 29.80; p-value = 0.0002); Physician services (H-L: 19.15; p-value = 0.0141)\nIn summary, quality of care is strongly and positively associated with patient satisfaction across all health services. Other significant positive predictors of patient satisfaction are the availability of provincial care, quality of care received, and being very satisfied with life in general. The common significant negative predictor of patient satisfaction across all healthcare services is self-perceived unmet health care needs. Post-secondary education (general health services and physician services), and being an ER patient most recently (hospital services) also demonstrated significant negative associations with patient satisfaction. (Fig. 3).\n\nFig. 3Summary of significant associations between health care services and patient satisfaction among individuals with neurological conditions\n\nSummary of significant associations between health care services and patient satisfaction among individuals with neurological conditions", "One strength of this study is that it supports the finding that unmet health care needs are a risk factor for decreased patient satisfaction among neurological patients and that available and quality care are positive predictors of patient satisfaction across health services. Other strengths include the use of a nationally representative survey of the Canadian population with relatively high participation rates allowing for generalization of study findings; and the provision of information on specific health care services, i.e., general health care services, hospital and physician services that may vary in their impact on neurological patients.\nLimitations are noted. Persons living on lands designated as Indian Reserves or by the Crown, those dwelling in institutions, or certain remote regions as well as full-time members of the Canadian Forces are excluded from this survey. The representation of those residing in institutions would have been valuable to this study. 
This exclusion and the possible selection bias of individuals who were functionally capable of responding to the questionnaires are limitations that may impact the generalizability of the study. The use of data from optional modules causes a reduction in the sample size, decreasing the generalizability of the findings to the entire population. The relatively small sample size did not facilitate subgroup analysis by types of neurological conditions. Types of unmet health care needs and neurological conditions were not specified. The severity of disease conditions was not measured, making it difficult to address patient satisfaction or targeted interventions within groups of neurological conditions with specific unmet health care needs. Finally, the study could not perform a stratified analysis by income differences (< $20,000 annual income versus $40,000 + annual income) due to the small sample size of the study population in some categories and the need to meet anonymity, confidentiality, and data release rules of the research data centre. This is important in determining the potential influence of income on life satisfaction among neurological patients." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "Background", "Methods", "Study participants and data sources", "Derivation of neurological conditions variable", "Assessment of patient satisfaction (outcome)", "Primary predictor (self-perceived unmet health care needs)", "Covariates", "Statistical analysis", "Results", "Characteristics of the study population – individuals with neurological conditions", "Characteristics associated with patient satisfaction with general health care, hospital, and physician services (multivariate analysis)", "Discussion", "Strengths and limitations of the study", "Conclusion" ]
[ "Neurological conditions (NCs) including Alzheimer’s disease (AD)/ dementia, Parkinson’s disease (PD), amyotrophic lateral sclerosis, sclerosis, and others, were the focus of a Statistics Canada survey in 2010 [1]. NCs, especially those exacerbated by increased age, e.g., PD and AD/dementia, lead to long-term challenges with functional impairments and limitations to activity [2]. Neurological patients, not surprisingly, report unmet health care needs [3, 4] and experience barriers to care including lack of resources (time and money), lack of services, and no local specialists [2, 5, 6].\nSelf-reported unmet health care need is a commonly used measure of health care access or utilization [7]. Health care utilization factors include availability, acceptability, accessibility, and personal choice (unrelated to the health system) [8, 9]. Perceived unmet health care needs may be categorized per availability – waiting time too long, care not available when requested, care not available in the area; acceptability – dislike doctor/afraid, language problems, didn’t know where to go; accessibility –cost and transportation; or personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities [6].\nAnderson’s health behavior model describes health care utilization as a function of three factors: predisposing, enabling, and need. Predisposing factors exist before presentation with a health condition, i.e., socio-demographic or socio-cultural characteristics; enabling factors represent the logistical means for accessing health services; and need factors are the effectual cause of health service use and reflect the perceived health status of the health care user [10, 11]. The outcome measure for this study, patient satisfaction, is widely accepted as an assessment of overall healthcare quality [12, 13]. Patient satisfaction is associated with health-related quality of life (an individual’s or a group’s perceived physical and mental health over time) [14]. Some studies indicate that unmet health care needs result in decreased patient satisfaction with health care services [15–17] and lowered quality of health care and life [18–20].\nNeurological conditions are a major contributor to disability in the Canadian population. Approximately 3.77 million Canadians live with neurological conditions. Of this number, 170,000 are cared for in institutions [21]. People with psychosocial difficulties, common to neurological conditions, have reported higher numbers of unmet health care needs [22–24] that may go unnoticed by health professionals [25]. Therefore, an understanding of unmet health care needs and patient satisfaction among older Canadians with NCs is crucial to the ongoing evaluation and continuous quality improvement of care for this vulnerable population [10]. Such knowledge will contribute to the health system’s preparation and strengthening of services to adequately meet the needs of the increasing aging population. This study examines the association between unmet health care needs and satisfaction with health care services in Canada among neurological patients. We incorporate life satisfaction as a predisposing factor of patients’ satisfaction with the health care system as it presents an overarching view of an individual’s satisfaction and may influence one’s satisfaction with the health system. 
The specific objectives of this study are (1) to explore the factors predicting patient satisfaction with general health care, hospital, and physician services among Canadians with NCs, (2) examine the association between unmet health care needs and satisfaction with health care services among neurological patients in Canada, and (3) contrast patient satisfaction between physician care and hospital care among Canadians with NCs.", "[SUBTITLE] Study participants and data sources [SUBSECTION] Data were extracted from the 2010 Canadian Community Health Survey - Annual Component (CCHS − 2010). This cross-sectional survey collected population-wide information on health status, health care utilization, and health determinants of Canadians aged 12 + living in private households in all provinces and territories [26]. Persons living on Crown lands or Indian Reserves, those dwelling in institutions, or certain remote regions, as well as full-time members of the Canadian Forces, are excluded from this survey [26]. Approximately half the interviews were conducted in person using computer-assisted personal interviewing (CAPI) and the other half were conducted over the phone using computer-assisted telephone interviewing (CATI) [26]. The overall person-level survey response rate was 88.6% and the combined response rate was 71.5% at the national level. Statistics Canada’s research ethics board approved the original survey [26].\nThe CCHS-2010 was used due to its one-year unique common content on health care utilization: unmet health care needs (UCN) and neurological conditions and the optional content on patient satisfaction [26]. Residents of Ontario with NCs who received health care services completed the module on patient satisfaction and provided content on unmet health care needs were assessed. The population of 10,819,146 in Ontario in 2010 represented a little over one-third of the Canadian population in that year. The views of those respondents should provide good insight into the concerns of Canadians with NCs. Therefore, an imputed subsample of 6335 respondents with NCs was used for this study. From that number, 2902 who received general health care services, 1222 who received hospital services, and 2211 who received physician services within twelve months leading up to data collection were selected. Age categories 12–44 years were grouped to protect anonymity, due to the small sample size of the study population, and very few people in the youngest age categories reported NCs and unmet health care needs. This study was carried out in accordance with the relevant national/institutional guidelines and regulations. Figure 1 below demonstrates the restriction criteria used to obtain the subsample from the original sample.\n\nFig. 1Restriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n\nRestriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\nData were extracted from the 2010 Canadian Community Health Survey - Annual Component (CCHS − 2010). This cross-sectional survey collected population-wide information on health status, health care utilization, and health determinants of Canadians aged 12 + living in private households in all provinces and territories [26]. Persons living on Crown lands or Indian Reserves, those dwelling in institutions, or certain remote regions, as well as full-time members of the Canadian Forces, are excluded from this survey [26]. 
Approximately half the interviews were conducted in person using computer-assisted personal interviewing (CAPI) and the other half were conducted over the phone using computer-assisted telephone interviewing (CATI) [26]. The overall person-level survey response rate was 88.6% and the combined response rate was 71.5% at the national level. Statistics Canada’s research ethics board approved the original survey [26].\nThe CCHS-2010 was used due to its one-year unique common content on health care utilization: unmet health care needs (UCN) and neurological conditions and the optional content on patient satisfaction [26]. Residents of Ontario with NCs who received health care services completed the module on patient satisfaction and provided content on unmet health care needs were assessed. The population of 10,819,146 in Ontario in 2010 represented a little over one-third of the Canadian population in that year. The views of those respondents should provide good insight into the concerns of Canadians with NCs. Therefore, an imputed subsample of 6335 respondents with NCs was used for this study. From that number, 2902 who received general health care services, 1222 who received hospital services, and 2211 who received physician services within twelve months leading up to data collection were selected. Age categories 12–44 years were grouped to protect anonymity, due to the small sample size of the study population, and very few people in the youngest age categories reported NCs and unmet health care needs. This study was carried out in accordance with the relevant national/institutional guidelines and regulations. Figure 1 below demonstrates the restriction criteria used to obtain the subsample from the original sample.\n\nFig. 1Restriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n\nRestriction criteria employed to obtain the sub-sample in this study. * Excluded from the analysis\n[SUBTITLE] Derivation of neurological conditions variable [SUBSECTION] Neurological conditions in the CCHS-2010 sample were derived from responding “yes” to having a neurological condition: Alzheimer’s disease or dementia, Parkinson’s disease, multiple sclerosis, epilepsy, cerebral palsy, amyotrophic lateral sclerosis, Huntington’s disease, stroke effects, Tourette’s syndrome, dystonia, muscular dystrophy, spina bifida, brain injuries, spinal cord injury, brain and spinal cord tumors, hydrocephalus, and migraine headaches.\nNeurological conditions in the CCHS-2010 sample were derived from responding “yes” to having a neurological condition: Alzheimer’s disease or dementia, Parkinson’s disease, multiple sclerosis, epilepsy, cerebral palsy, amyotrophic lateral sclerosis, Huntington’s disease, stroke effects, Tourette’s syndrome, dystonia, muscular dystrophy, spina bifida, brain injuries, spinal cord injury, brain and spinal cord tumors, hydrocephalus, and migraine headaches.\n[SUBTITLE] Assessment of patient satisfaction (outcome) [SUBSECTION] Patient satisfaction as our outcome of interest was defined according to satisfaction with health care in general (health care services from any health care provider including ophthalmologists, dentists, and other allied health professionals and home care); hospital (health care services at a hospital, for any diagnostic or day surgery service, overnight stay, or as an emergency room patient); and physician services (health care services from a family doctor (general practitioner), and other physicians (medical specialist). 
Respondents answered the following questions: “Overall, how satisfied were you with the way health care services were provided?” “How satisfied were you with the way hospital services were provided?” “How satisfied were you with the way physician care was provided?” Responses for the levels of satisfaction with the various types of health care services were ordinal and coded by categories: 1 = very satisfied, 2 = somewhat satisfied, 3 = neither satisfied nor dissatisfied, 4 = somewhat dissatisfied, and 5 = very dissatisfied. For each patient satisfaction variable (general health care, hospital, and physician), categories 1 and 2 were collapsed and recoded as “satisfied” = 1, while categories 3–5 were collapsed and recoded as “dissatisfied” = 0.\nPatient satisfaction as our outcome of interest was defined according to satisfaction with health care in general (health care services from any health care provider including ophthalmologists, dentists, and other allied health professionals and home care); hospital (health care services at a hospital, for any diagnostic or day surgery service, overnight stay, or as an emergency room patient); and physician services (health care services from a family doctor (general practitioner), and other physicians (medical specialist). Respondents answered the following questions: “Overall, how satisfied were you with the way health care services were provided?” “How satisfied were you with the way hospital services were provided?” “How satisfied were you with the way physician care was provided?” Responses for the levels of satisfaction with the various types of health care services were ordinal and coded by categories: 1 = very satisfied, 2 = somewhat satisfied, 3 = neither satisfied nor dissatisfied, 4 = somewhat dissatisfied, and 5 = very dissatisfied. For each patient satisfaction variable (general health care, hospital, and physician), categories 1 and 2 were collapsed and recoded as “satisfied” = 1, while categories 3–5 were collapsed and recoded as “dissatisfied” = 0.\n[SUBTITLE] Primary predictor (self-perceived unmet health care needs) [SUBSECTION] We examine the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, “During the past 12 months, was there ever a time when you felt that you needed health care but you didn’t receive it?” Responses were coded, “yes” = 1 and “no” = 0. For this variable, reasons for indicating unmet care needs include (1) unavailability of care – waiting time too long, care not available when requested, care not available in the area, the doctor didn’t think the care was necessary (2) unacceptability of care – dislike doctor/afraid, language problems, didn’t know where to go (3) inaccessibility –cost (4) personal choice – too busy, didn’t get around to it/didn’t bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities.\nWe examine the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, “During the past 12 months, was there ever a time when you felt that you needed health care but you didn’t receive it?” Responses were coded, “yes” = 1 and “no” = 0. 
Primary predictor (self-perceived unmet health care needs)
We examined the relationship between self-perceived unmet health care needs and patient satisfaction. Self-perceived unmet care need was identified in the CCHS-2010 by the question, "During the past 12 months, was there ever a time when you felt that you needed health care but you didn't receive it?" Responses were coded "yes" = 1 and "no" = 0. For this variable, the reasons given for unmet care needs fall into four groups: (1) unavailability of care (waiting time too long, care not available when requested, care not available in the area, the doctor didn't think the care was necessary); (2) unacceptability of care (dislike doctor/afraid, language problems, didn't know where to go); (3) inaccessibility (cost); and (4) personal choice (too busy, didn't get around to it/didn't bother, felt it would be inadequate, decided not to seek care, and personal/family responsibilities).
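As an illustration, a hypothetical sketch of coding the binary exposure and grouping the stated reasons into the four categories above (the variable names and reason strings are assumptions, not the CCHS codebook):

```python
import pandas as pd

# Hypothetical mapping from a stated reason to the four reason groups.
REASON_GROUPS = {
    "waiting time too long": "unavailability",
    "care not available when requested": "unavailability",
    "care not available in the area": "unavailability",
    "doctor didn't think care was necessary": "unavailability",
    "dislike doctor/afraid": "unacceptability",
    "language problems": "unacceptability",
    "didn't know where to go": "unacceptability",
    "cost": "inaccessibility",
    "too busy": "personal choice",
    "didn't get around to it": "personal choice",
    "felt it would be inadequate": "personal choice",
    "decided not to seek care": "personal choice",
    "personal/family responsibilities": "personal choice",
}

def code_unmet_need(answer: pd.Series) -> pd.Series:
    """Binary exposure: 'yes' -> 1, anything else -> 0."""
    return (answer.str.lower() == "yes").astype(int)

answers = pd.Series(["yes", "no", "yes"])
reasons = pd.Series(["cost", None, "waiting time too long"])
print(code_unmet_need(answers).tolist())    # [1, 0, 1]
print(reasons.map(REASON_GROUPS).tolist())  # ['inaccessibility', nan, 'unavailability']
```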
Covariates
Other sociodemographic covariates assessed were: age (< 45, 45–64, 65–79, 80+ years); sex ("male" vs "female"); marital status ("married", "common-law", "widowed/divorced/separated", "single/never married"); level of education ("less than secondary", "secondary graduation", "some post-secondary education", "post-secondary graduation"); total personal income from all sources (≤ 19,999; 20,000–39,999; 40,000–69,999; 70,000 or more); and satisfaction with life in general ("dissatisfied", "very satisfied", "satisfied", "neither satisfied nor dissatisfied"). Ratings of availability of provincial health care were assessed as: general health care ("poor", "fair", "good", "excellent"); hospital services ("poor", "fair", "good", "excellent"); and physician services ("poor", "fair", "very good"). Rating of quality of care received was assessed as: general health care ("poor", "fair", "good", "excellent"); hospital services ("poor", "good", "excellent"); and physician services ("poor", "good", "excellent"). Type of patient at most recent visit was recorded as "admitted overnight", "outpatient", or "ER patient", and type of physician seen at most recent visit as "family doctor" vs "specialist". Categories of "do not know", "refusal" and "not stated" were treated as missing values. Our study is grounded in Andersen's health behavior model, as shown in Fig. 2 below.

Fig. 2 Research model of health care utilization in the current study based on Andersen's health behavior model

Statistical analysis
Statistical analysis was completed using STATA version 14. Sampling weights were applied to account for the survey design. Descriptive statistics were tabulated for the main exposure variable, the outcome variable, and covariates, as well as socio-demographic factors (age, gender, marital status, education, and personal income), among those with NCs. To account for missing data, and to prevent loss of information and selection bias, chained iterations of multiple imputations were conducted [27]. All missing values were retrieved and included in the final model-building process.

Logistic regression was used to estimate the association between the predictor variables and patient satisfaction because of the small sample size and because the assumptions for ordered logistic regression were violated. The outcome variable categories were collapsed and binary logistic regression was conducted because generalized ordered logistic regression models did not converge in the model-building process. Univariate logistic regression models were used to examine the association between self-perceived unmet care needs, other predictors/covariates, and satisfaction with health care services. Unadjusted odds ratios, 95% confidence intervals (CI) and p-values were calculated. Predictors/covariates with unconditional p-values ≤ 0.20 were retained for the multivariate model-building phase of the analysis [4]. In the multivariate model-building process, variables with p-values > 0.05 were eliminated one at a time in a sequence of descending p-values, using a manual backward elimination strategy. Variables with p-values ≤ 0.05 were retained in the final model. Variables that were manually eliminated because of non-significant p-values were checked for confounding and retained when they altered the coefficients for the exposure of interest by > 20%. Any variable with an initially non-significant p-value that was eliminated at the univariate analysis stage was assessed for interaction. A likelihood ratio test assessed the overall significance of the logistic regression models.
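The analysis was run in STATA; purely as an illustration, the sketch below approximates the same pipeline in Python: a chained-equation imputation, a weighted binary logistic regression, and manual backward elimination on p-values. The variable names, the weight column, and the toy data are assumptions; sklearn's IterativeImputer yields a single chained-equation imputation rather than the multiple imputations used in the study, and survey design effects would normally require dedicated survey procedures.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def impute_chained(df: pd.DataFrame) -> pd.DataFrame:
    """Single chained-equation imputation (a simplified stand-in for MICE)."""
    imputer = IterativeImputer(random_state=0)
    return pd.DataFrame(imputer.fit_transform(df), columns=df.columns)

def weighted_logit(df, outcome, predictors, weight_col):
    """Binary logistic regression as a GLM with frequency weights."""
    X = sm.add_constant(df[predictors])
    model = sm.GLM(df[outcome], X,
                   family=sm.families.Binomial(),
                   freq_weights=df[weight_col].to_numpy())
    return model.fit()

def backward_eliminate(df, outcome, predictors, weight_col, alpha=0.05):
    """Drop the least significant predictor (largest p > alpha) and refit,
    repeating until every remaining predictor has p <= alpha.
    (The study additionally checked dropped variables for confounding.)"""
    kept = list(predictors)
    while kept:
        fit = weighted_logit(df, outcome, kept, weight_col)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] <= alpha:
            return fit, kept
        kept.remove(worst)
    return None, []

# Toy example with hypothetical columns and integer frequency weights.
rng = np.random.default_rng(0)
n = 500
toy = pd.DataFrame({
    "unmet_need": rng.integers(0, 2, n),
    "age_45_64": rng.integers(0, 2, n),
    "weight": rng.integers(1, 4, n),
})
logit_p = 1 / (1 + np.exp(-(1.5 - 1.0 * toy["unmet_need"])))
toy["satisfied"] = rng.binomial(1, logit_p)
fit, kept = backward_eliminate(toy, "satisfied", ["unmet_need", "age_45_64"], "weight")
if fit is not None:
    # Exponentiated coefficients give the odds ratios reported in the tables.
    print(kept, np.exp(fit.params.drop("const")).round(2).to_dict())
```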
Characteristics of the study population – individuals with neurological conditions
Analysis for this study was limited to the imputed data of the original subsamples of 2902, 1222, and 2211 individuals with NCs who received general health care services, hospital services, and physician services, respectively. Table 1 below demonstrates the demographic characteristics of the study population for all three study samples. The total number of cases varies due to missing values.
There is little variation in socio-demographic characteristics across the subsamples. Over two-thirds of each subsample were female (67.8, 68.8, and 70.2%, respectively) and under 65 years of age (71.5, 67.2, and 71.9%, respectively). A little under half of the respondents reported post-secondary graduation (45.2, 45.3, and 47.4%, respectively). Less than half of the respondents in all samples were married (40.4, 38.7, and 40.1%, respectively), just under half earned ≤$19,999 annually (43.1, 44.6, and 43.3%), and under 20% in each sample reported unmet health care needs.

Table 1 Sociodemographic characteristics of study samples by health care services use: general (8,712), hospital (3,492) and physician (6,451) services

Characteristic | General health care services, n (%)* | Hospital services, n (%)* | Physician services, n (%)*
Age categories, years
  ≤44 years | 3,507 (40.2) | 1,242 (35.6) | 2,553 (39.6)
  45 to 64 | 2,725 (31.3) | 1,103 (31.6) | 2,086 (32.3)
  65 to 79 | 1,636 (18.8) | 758 (21.7) | 1,125 (17.4)
  80 and above | 844 (9.7) | 389 (11.1) | 687 (10.7)
Sex
  Male | 2,804 (32.2) | 1,091 (31.2) | 1,925 (29.8)
  Female | 5,908 (67.8) | 2,401 (68.8) | 4,526 (70.2)
Marital status
  Single | 2,583 (29.7) | 954 (27.4) | 1,843 (28.6)
  Married | 3,515 (40.4) | 1,349 (38.7) | 2,583 (40.1)
  Common-law | 413 (4.7) | 193 (5.5) | 301 (4.7)
  Widowed/separated/divorced | 2,193 (25.2) | 992 (28.4) | 1,717 (26.6)
Educational level
  Less than secondary | 2,517 (29.0) | 1,033 (29.7) | 1,774 (27.6)
  Secondary graduation | 1,569 (18.1) | 543 (15.6) | 1,116 (17.4)
  Other post-secondary | 666 (7.7) | 326 (9.4) | 485 (7.6)
  Post-secondary graduation | 3,916 (45.2) | 1,577 (45.3) | 3,046 (47.4)
Income status
  ≤19,999 | 3,541 (43.1) | 1,468 (44.6) | 2,634 (43.3)
  20,000–39,999 | 2,376 (29.0) | 1,037 (31.5) | 1,828 (30.1)
  40,000–69,999 | 1,548 (18.9) | 536 (16.3) | 1,074 (17.7)
  ≥70,000 | 735 (9.0) | 252 (7.6) | 544 (8.9)
*Values and percentages include imputed data
The results in Table 2 below describe the variables associated with the health care services received by the respondents. Over two-thirds of the respondents were satisfied with general, hospital and physician services (83.9, 81.1, and 91.2%, respectively). Less than half of the respondents felt they received excellent general, hospital, and physician health care (38.3, 45.9, and 54.6%, respectively). Less than half of the respondents who received hospital services were outpatients (39%), while the majority received physician services from a family doctor (82.3%) (Table 2).

Table 2 Description of variables associated with utilization of health care services: general (8,712), hospital (3,492) and physician (6,451) services

Variable | General health care services, n (%)* | Hospital services, n (%)* | Physician services, n (%)*
Unmet health care needs
  No | 7,329 (84.2) | 2,797 (80.2) | 5,249 (81.4)
  Yes | 1,375 (15.8) | 691 (19.8) | 1,197 (18.6)
General life satisfaction
  Dissatisfied | 590 (6.8) | 279 (8.0) | 434 (6.7)
  Very satisfied | 2,787 (32.2) | 1,012 (29.2) | 2,007 (31.3)
  Satisfied | 4,396 (50.7) | 1,822 (52.5) | 3,342 (52.1)
  Neither satisfied/dissatisfied | 895 (10.3) | 359 (10.3) | 633 (9.9)
Rating of availability of provincial care
  Poor | 1,218 (14.0) | 537 (15.5) | 845 (13.2)
  Fair | 2,182 (25.2) | 923 (26.6) | 1,647 (25.6)
  Good | 3,884 (44.8) | 1,381 (39.7) | 3,928 (61.2)¹
  Excellent | 1,392 (16.0) | 633 (18.2) | -
Quality of care received
  Poor | 299 (3.4) | 595 (17.0) | 694 (10.8)
  Fair | 1,072 (12.3) | - | -
  Good | 3,993 (45.9) | 1,293 (37.1)² | 2,233 (34.6)²
  Excellent | 3,336 (38.3) | 1,599 (45.9) | 3,517 (54.6)
Patient satisfaction
  Dissatisfied | 1,396 (16.1) | 660 (18.9) | 568 (8.8)
  Satisfied | 7,299 (83.9) | 2,827 (81.1) | 5,875 (91.2)
Most recent patient
  Outpatient | - | 1,363 (39.0) | -
  Admitted overnight | - | 817 (23.4) | -
  ER patient | - | 1,312 (37.6) | -
Physician type
  Family doctor | - | - | 5,303 (82.3)
  Specialist | - | - | 1,144 (17.7)
¹ Good and excellent categories collapsed into very good
² Fair and good categories collapsed into good
* Results include imputed values
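The counts and percentages in Tables 1 and 2 reflect weighted tabulation of the imputed subsamples. A minimal, hypothetical sketch of producing one such weighted frequency table with pandas (column names and weights are assumptions, not the CCHS variables):

```python
import pandas as pd

def weighted_freq(df: pd.DataFrame, var: str, weight_col: str) -> pd.DataFrame:
    """Weighted counts and column percentages for one categorical variable."""
    counts = df.groupby(var)[weight_col].sum()
    return pd.DataFrame({
        "weighted_n": counts.round(0),
        "percent": (100 * counts / counts.sum()).round(1),
    })

# Toy data standing in for one service-specific subsample.
toy = pd.DataFrame({
    "sex": ["Male", "Female", "Female", "Male", "Female"],
    "sampling_weight": [250.0, 400.0, 310.0, 120.0, 500.0],
})
print(weighted_freq(toy, "sex", "sampling_weight"))
```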
Characteristics associated with patient satisfaction with general health care, hospital, and physician services (multivariate analysis)
Table 3 presents the results of the final multivariate logistic regression models for patient satisfaction with adjusted predictor and/or covariate variables. We found self-perceived unmet health care needs to be a strong negative predictor of patient satisfaction across all health care services. For those with self-perceived unmet needs, the greatest dissatisfaction was associated with physician services (OR = 0.29, p = 0.005), followed by hospital services (OR = 0.41, p = 0.006) and general health care services (OR = 0.59, p = 0.024), when compared to those without unmet health care needs. Conversely, quality and availability of care were significant protective predictors of patient satisfaction across all health care services. When compared to those who received poor quality care, the odds of patient satisfaction were highest among those who received excellent quality care (general health care services, OR = 237.60, p < 0.001; hospital services, OR = 166.99, p < 0.001; physician services, OR = 176.45, p < 0.001); those receiving general health care services were the most likely to be satisfied at each level of quality of care: fair (OR = 6.15, p = 0.002), good (OR = 36.37, p < 0.001) and excellent (OR = 237.60, p < 0.001) (Table 3). The odds of patient satisfaction across all health services increased with the rated availability of care. When compared to poor availability of care, the odds of patient satisfaction were highest among those who reported excellent availability for general health care services (OR = 4.45, p < 0.001) and hospital services (OR = 6.30, p < 0.001), with satisfaction with hospital services increasing across levels of availability: fair (OR = 2.77, p = 0.011), good (OR = 3.90, p < 0.001) and excellent (OR = 6.30, p < 0.001).
Education was a negative predictor of patient satisfaction among those who received general health care services, with higher levels of education associated with greater dissatisfaction with care (secondary graduate, OR = 0.62, p = 0.126; other post-secondary, OR = 0.36, p = 0.004; post-secondary graduate, OR = 0.54, p = 0.050), and among those who received physician services (secondary graduate, OR = 0.32, p = 0.010; other post-secondary, OR = 0.26, p = 0.019; post-secondary graduate, OR = 0.28, p = 0.005). Post-secondary graduation was associated with reduced odds of being satisfied with hospital services compared with the lowest level of education (Table 3).

Physician type seen and most recent type of patient were also examined as predictors of patient satisfaction with physician and hospital services, respectively. Respondents who received specialist care had 53% lower odds (OR = 0.47, p = 0.106) of being satisfied with physician services than those who received care from a family doctor. Patients who were admitted overnight were more likely (OR = 1.20, p = 0.660) to be satisfied with hospital services than outpatients, while ER patients were significantly less likely (OR = 0.39, p = 0.007) to be satisfied with hospital services than both outpatients and overnight patients.

Table 3 Multivariate analysis of predictors of patient satisfaction with general (8,712), hospital (3,492), and physician (6,451) services

Variable | General health care services OR (95% CI), p-value* | Hospital services OR (95% CI), p-value* | Physician services OR (95% CI), p-value*
Age categories, years
  ≤44 years | Reference | Reference | Reference
  45 to 64 | 1.80 (0.98–3.32), 0.059 | 2.02 (0.83–4.91), 0.120 | 1.14 (0.51–2.57), 0.747
  65 to 79 | 1.24 (0.59–2.59), 0.576 | 0.39 (0.13–1.17), 0.092 | 0.75 (0.27–2.08), 0.585
  80 and above | 2.57 (0.97–6.86), 0.059 | 0.48 (0.11–2.13), 0.334 | 0.73 (0.23–2.29), 0.593
Sex
  Male | Reference | Reference | Reference
  Female | 1.32 (0.80–2.17), 0.276 | 1.12 (0.53–2.37), 0.764 | 0.62 (0.32–1.19), 0.152
Marital status
  Single | Reference | Reference | Reference
  Married | 1.39 (0.78–2.50), 0.264 | 0.55 (0.19–1.61), 0.278 | 0.62 (0.31–1.21), 0.160
  Common-law | 1.31 (0.67–2.58), 0.427 | 0.69 (0.23–2.10), 0.514 | 1.21 (0.42–3.53), 0.727
  Widowed/separated/divorced | 1.48 (0.76–2.89), 0.254 | 0.68 (2.01–2.27), 0.524 | 1.06 (0.41–2.71), 0.603
Educational level
  Less than secondary | Reference | Reference | Reference
  Secondary graduate | 0.62 (0.38–1.14), 0.126 | 2.00 (0.73–5.47), 0.177 | 0.32 (0.13–0.76), 0.010
  Other post-secondary | 0.36 (0.18–0.72), 0.004 | 2.81 (0.94–8.40), 0.065 | 0.26 (0.09–0.80), 0.019
  Post-secondary graduate | 0.54 (0.29–1.00), 0.050 | 0.98 (0.42–2.28), 0.967 | 0.28 (0.11–0.67), 0.005
Income status
  ≤19,999 | Reference | Reference | Reference
  20,000–39,999 | 0.73 (0.41–1.30), 0.281 | 1.49 (0.58–3.80), 0.404 | 1.57 (0.69–3.54), 0.278
  40,000–69,999 | 1.54 (0.75–3.17), 0.242 | 1.04 (0.36–3.00), 0.939 | 1.06 (0.33–3.46), 0.917
  ≥70,000 | 0.90 (0.39–2.12), 0.817 | 0.33 (0.11–0.97), 0.045 | 1.17 (0.38–3.63), 0.783
Unmet health care needs
  No | Reference | Reference | Reference
  Yes | 0.59 (0.37–0.93), 0.024 | 0.41 (0.21–0.77), 0.006 | 0.29 (0.13–0.69), 0.005
General life satisfaction
  Dissatisfied | Reference | Reference | Reference
  Very satisfied | 2.15 (1.03–4.49), 0.041 | 1.56 (0.45–5.41), 0.481 | 2.53 (0.88–7.26), 0.084
  Satisfied | 1.80 (0.92–3.53), 0.085 | 0.77 (0.25–2.33), 0.642 | 1.24 (0.46–3.37), 0.668
  Neither satisfied nor dissatisfied | 1.29 (0.60–2.76), 0.510 | 0.46 (0.13–1.69), 0.244 | 1.60 (0.57–4.52), 0.372
Availability of provincial care
  Poor | Reference | Reference | Reference
  Fair | 1.72 (1.03–2.87), 0.039 | 2.77 (1.27–6.05), 0.011 | 1.25 (0.54–2.93), 0.592
  Good | 3.18 (1.78–5.68), <0.001 | 3.90 (1.92–7.92), <0.001 | 1.10 (0.44–2.75), 0.833
  Excellent | 4.45 (1.76–11.25), <0.001 | 6.30 (2.35–16.86), <0.001 | -
Quality of care received
  Poor | Reference | Reference | Reference
  Fair | 6.15 (2.00–18.94), 0.002 | - | -
  Good | 36.37 (12.09–109.44), <0.001 | 35.61 (18.71–67.78), <0.001 | 26.78 (13.36–53.69), <0.001
  Excellent | 237.60 (70.43–801.52), <0.001 | 166.99 (67.91–410.64), <0.001 | 176.45 (63.89–487.30), <0.001
Most recent patient
  Outpatient | - | Reference | -
  Admitted overnight | - | 1.20 (0.53–2.72), 0.660 | -
  ER patient | - | 0.39 (0.20–0.77), 0.007 | -
Physician type
  Family doctor | - | - | Reference
  Specialist | - | - | 0.47 (0.18–1.18), 0.106
*Results include imputed values. Significant values are marked in bold print. Hosmer-Lemeshow (χ2) and p-values: general health care services (H-L = 13.74, p = 0.0888); hospital services (H-L = 29.80, p = 0.0002); physician services (H-L = 19.15, p = 0.0141).
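The odds ratios and confidence intervals in Table 3 are the exponentiated logistic regression coefficients. As a purely illustrative check, a coefficient β with standard error SE maps to OR = exp(β) and 95% CI = exp(β ± 1.96·SE); the values below are hypothetical, chosen only so that the output lands near the hospital-services estimate for unmet needs reported in Table 3.

```python
import math

def or_with_ci(beta: float, se: float, z: float = 1.96):
    """Convert a logit coefficient and its standard error into an odds ratio
    with a 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical beta and SE, close to the Table 3 row OR = 0.41, 95% CI 0.21-0.77.
beta, se = math.log(0.41), 0.33
or_, lo, hi = or_with_ci(beta, se)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # roughly OR = 0.41, CI 0.21-0.78
```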
In summary, quality of care was strongly and positively associated with patient satisfaction across all health services. Other significant positive predictors of patient satisfaction were the rated availability of provincial care and being very satisfied with life in general. The common significant negative predictor of patient satisfaction across all health care services was self-perceived unmet health care needs. Post-secondary education (general health care services and physician services) and most recently being an ER patient (hospital services) also demonstrated significant negative associations with patient satisfaction (Fig. 3).

Fig. 3 Summary of significant associations between health care services and patient satisfaction among individuals with neurological conditions
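The Hosmer-Lemeshow statistics reported in the footnote to Table 3 assess model calibration by comparing observed and expected outcomes within groups of predicted risk. The following is a minimal sketch of the usual decile-based computation (not the authors' STATA procedure; the group count and simulated inputs are assumptions):

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2

def hosmer_lemeshow(y_true, y_prob, groups: int = 10):
    """Decile-of-risk Hosmer-Lemeshow statistic and p-value (df = groups - 2)."""
    df = pd.DataFrame({"y": y_true, "p": y_prob})
    df["bin"] = pd.qcut(df["p"], q=groups, duplicates="drop")
    obs = df.groupby("bin", observed=True)["y"].sum()    # observed events per group
    exp = df.groupby("bin", observed=True)["p"].sum()    # expected events per group
    n = df.groupby("bin", observed=True)["y"].count()
    pbar = exp / n
    stat = (((obs - exp) ** 2) / (n * pbar * (1 - pbar))).sum()
    dof = obs.shape[0] - 2
    return stat, chi2.sf(stat, dof)

# Toy example with simulated probabilities and outcomes.
rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 2000)
y = rng.binomial(1, p)
stat, pval = hosmer_lemeshow(y, p, groups=10)
print(f"H-L = {stat:.2f}, p = {pval:.3f}")  # well-calibrated data -> large p-value
```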
The major findings of our study can be summarized with Andersen's health behavior model, which frames the factors predicting health care utilization in our model as predisposing (age, gender, and general life satisfaction), enabling (marital status, income, education, availability of health care, and quality of care), and need factors (neurological patients' use of general health care services, hospital and physician services), together with patient satisfaction. One enabling factor, quality of received care, demonstrated a strong positive association with patient satisfaction with all health care services received in this study, while another, availability of provincial care, was positively associated with patient satisfaction with general health care and hospital services. One predisposing factor, general life satisfaction, was positively associated with patient satisfaction with general health care services. On the other hand, self-perceived unmet health care needs, identified as a disabling factor, consistently reduced the odds of patient satisfaction with the need factors: health care services in general, physician services and hospital services. Education was also deemed a disabling factor, with all levels negatively associated with patient satisfaction with physician care. The need factor, ER services, was negatively associated with patient satisfaction with hospital services.
Of particular interest is the relationship between patient satisfaction and the predisposing factors of health care utilization. General life satisfaction (GLS) represents quality of life in several studies [28–30]. Our finding that GLS was positively associated with patient satisfaction with general health care services is consistent with other studies that reported satisfaction in life domains as positively associated with patient satisfaction [31]. While GLS, as an influence on health-related quality of life, may be positively associated with patient satisfaction with general health care services, significant decreases in health-related quality of life among people living with long-term neurological conditions have been reported in other studies [32–36]. The positive association between higher levels of GLS and greater odds of patient satisfaction among neurological patients may translate into increased health-related quality of life through the enabling factors of availability and quality of care.

Our study found that, although availability and quality of care were positive predictors of patient satisfaction across health services, availability of care was not significantly associated with patient satisfaction with physician services. Availability and quality of care are important predictors of health-related quality of life, as satisfied patients are more likely to comply with treatment, demonstrate positive health behaviors, and register improved health outcomes [37, 38]. Consistent with our study, one other study found that quality of care was associated with high levels of patient satisfaction among neurological patients [39]. Quality of care in that study referred to an early connection between patients and neurologists and to education and advice on living with neurological conditions [39]. A similar study of neurological patients found high patient satisfaction with the coordination of safe, compassionate, and multiple health care services for those with mobility challenges [40], supporting our finding that when health care services are available, the odds of neurological patient satisfaction are increased.

Associations between unmet health care needs, patient satisfaction with health care services, and health-related quality of life have been reported in earlier studies [18, 41]. Patient satisfaction is positively associated with health-related quality of life [42–44]. One study that examined the relationship between unmet health care needs and health-related quality of life among patients with multimorbidity [45] found that the presence of unmet health care needs was associated with lowered health-related quality of life. It may be deduced from this that self-perceived unmet health care needs are also associated with health-related quality of life among neurological patients, though we did not predict the direction of that association.
This is consistent with findings of other studies [46, 47], one of which suggests that health care providers may create a better patient experience through increased communication or more active referral of ER patients to patient representatives [46]. One other study found that the highest level of education strongly predicted favorable satisfaction with communication with doctors [48]. This suggests that the negative association between the highest levels of education and patient satisfaction among individuals with NCs in our study may be due to communication needs not being met.\nThe association between ER care in hospitals and lower patient satisfaction in our study may be explained by a reduction in one or more of the components of patient satisfaction proposed by Mollaoğlu and Çelik [49]: guidance, debriefing, paying attention and being kind, having empathy, providing psychosocial support, speed of service, timing, proficiency, and overall quality. In addition, the severity of a patient’s condition [50] and the stress of a neurological patient being in the ER [49] may negatively influence patients’ level of satisfaction with emergency services. Finally, our study demonstrated an association between availability of care and lower odds of patient satisfaction among ER neurological patients who received hospital services. This may be indicative of decreased availability of care– waiting time too long, healthcare not available when requested, and healthcare not available in the area (elements of unmet health care needs reported in the CCHS-2010) [26].\n[SUBTITLE] Strengths and limitations of the study [SUBSECTION] One strength of this study is that it supports the finding that unmet health care needs are a risk factor for decreased patient satisfaction among neurological patients and that available and quality care are positive predictors of patient satisfaction across health services. Other strengths include the use of a nationally representative survey of the Canadian population with relatively high participation rates allowing for generalization of study findings; and the provision of information on specific health care services, i.e., general health care services, hospital and physician services that may vary in their impact on neurological patients.\nLimitations are noted. Persons living on lands designated as Indian Reserves or by the Crown, those dwelling in institutions, or certain remote regions as well as full-time members of the Canadian Forces are excluded from this survey. The representation of those residing in institutions would have been valuable to this study. This exclusion and the possible selection bias of individuals who were functionally capable of responding to the questionnaires are limitations that may impact the generalizability of the study. The use of data from optional modules causes a reduction in the sample size, decreasing the generalizability of the findings to the entire population. The relatively small sample size did not facilitate subgroup analysis by types of neurological conditions. Types of unmet health care needs and neurological conditions were not specified. The severity of disease conditions was not measured, making it difficult to address patient satisfaction or targeted interventions within groups of neurological conditions with specific unmet health care needs. 
Finally, the study could not perform a stratified analysis by income differences (< $20,000 annual income versus $40,000 + annual income) due to the small sample size of the study population in some categories and the need to meet anonymity, confidentiality, and data release rules of the research data centre. This is important in determining the potential influence of income on life satisfaction among neurological patients.", "One strength of this study is that it supports the finding that unmet health care needs are a risk factor for decreased patient satisfaction among neurological patients and that available and quality care are positive predictors of patient satisfaction across health services. Other strengths include the use of a nationally representative survey of the Canadian population with relatively high participation rates allowing for generalization of study findings; and the provision of information on specific health care services, i.e., general health care services, hospital and physician services that may vary in their impact on neurological patients.\nLimitations are noted. Persons living on lands designated as Indian Reserves or by the Crown, those dwelling in institutions, or certain remote regions as well as full-time members of the Canadian Forces are excluded from this survey. 
The representation of those residing in institutions would have been valuable to this study. This exclusion and the possible selection bias of individuals who were functionally capable of responding to the questionnaires are limitations that may impact the generalizability of the study. The use of data from optional modules causes a reduction in the sample size, decreasing the generalizability of the findings to the entire population. The relatively small sample size did not facilitate subgroup analysis by types of neurological conditions. Types of unmet health care needs and neurological conditions were not specified. The severity of disease conditions was not measured, making it difficult to address patient satisfaction or targeted interventions within groups of neurological conditions with specific unmet health care needs. Finally, the study could not perform a stratified analysis by income differences (< $20,000 annual income versus $40,000 + annual income) due to the small sample size of the study population in some categories and the need to meet anonymity, confidentiality, and data release rules of the research data centre. This is important in determining the potential influence of income on life satisfaction among neurological patients.", "Self-perceived unmet health care needs are a common significant negative predictor of neurological patients’ satisfaction across health care services. Future studies on predictors of neurological patients’ satisfaction with health care services should focus on specific unmet health care needs and different neurological conditions. Neurological patients are known to report unmet health care needs and experience barriers to care, limiting their quality of life. Our study emphasizes that the availability and accessibility of care for neurological patients increased the satisfaction with health care services in general as well as physician and hospital services." ]
[ null, null, null, null, null, null, null, null, "results", null, null, "discussion", null, "conclusion" ]
[ "Unmet needs", "Predictors", "Satisfaction", "Neurological conditions", "Canada" ]
Characteristics and outcome of traumatic cardiac arrest at a level 1 trauma centre over 10 years in Sweden.
36253786
Historically, resuscitation in traumatic cardiac arrest (TCA) has been deemed futile. However, recent literature reports improved but varying survival. Current European guidelines emphasise addressing reversible aetiologies in TCA and propose that a resuscitative thoracotomy may be performed within 15 min of the last sign of life. To improve clinician understanding of which patients benefit from resuscitative efforts, we aimed to describe the characteristics and 30-day survival of traumatic cardiac arrest at a Swedish trauma centre, with a particular focus on resuscitative thoracotomy.
BACKGROUND
Retrospective cohort study of adult patients (≥ 15 years) with TCA managed at Karolinska University Hospital Solna between 2011 and 2020. Trauma demographics, intra-arrest factors, lab values and procedures were compared between survivors and non-survivors.
METHODS
Among the 284 included patients, the median age was 38 years, 82.2% were male and 60.5% were previously healthy. Blunt trauma was the dominant injury in 64.8% and the median Injury Severity Score (ISS) was 38. For patients with a documented arrest rhythm, asystole was recorded in 39.2%, pulseless electric activity in 24.8% and a shockable rhythm in 6.8%. Thirty patients (10.6%) survived to 30 days with a Glasgow Outcome Scale score of 3 (n = 23) or 4 (n = 7). The most common causes of death were haemorrhagic shock (50.0%) and traumatic brain injury (25.5%). Survivors had a lower ISS (P < 0.001) and more often had reactive pupils (P < 0.001) and a shockable rhythm (P = 0.04). In the subset of prehospital TCA, survivors less frequently received adrenaline (epinephrine) (P < 0.001) and in lower amounts (P = 0.02). Of patients who underwent resuscitative thoracotomy (n = 101), survivors (n = 12) had a shorter median time from last sign of life to thoracotomy (P = 0.03); however, in four of these survivors the time exceeded 15 min.
RESULTS
Survival after TCA is possible. Determining futility in TCA is difficult and this study demonstrates survivors outside of recent guidelines.
CONCLUSION
[ "Adult", "Cardiopulmonary Resuscitation", "Epinephrine", "Female", "Heart Arrest", "Humans", "Male", "Resuscitation", "Retrospective Studies", "Sweden", "Thoracotomy", "Trauma Centers" ]
9575295
Introduction
Trauma claims around 5 million lives annually worldwide [1] and is the leading cause of death among young adults in industrialised countries [2]. Traumatic cardiac arrest (TCA) is the extreme state of traumatic shock and can be diagnosed when a patient, after suffering physical trauma, presents with unconsciousness, agonal or absent spontaneous breathing, and loss of a central pulse [3, 4]. Historically, resuscitative attempts in TCA were considered futile due to reported survival rates as low as 0% [5]. However, more recent studies have shown improved, albeit variable, survival of between 2.4 and 18.4% [3, 6–17]. In Sweden specifically, survival after out-of-hospital TCA gradually increased from 1.9 to 8.3% between 1990 and 2015 [3]. To further enhance care in TCA, the European Resuscitation Council (ERC) has developed a treatment algorithm stressing the importance of rapidly addressing reversible causes, and suggests that emergency resuscitative thoracotomy may be performed in selected cases if less than 15 min have elapsed since loss of vital signs [4]. In TCA or peri-arrest, the salvage rate of resuscitative thoracotomy is reported to be 7.8–21.8% depending on injury mechanism and setting [18–20], with only 6% reported to have neurological impairment [18]. For the trauma population in general, it is also believed that care at designated trauma centres can reduce mortality [21, 22], especially in severely injured patients [23]. Given the dismal but not futile prognosis of TCA, it is important for the clinician to understand factors prognostic for survival and to identify who may benefit from a resuscitative thoracotomy. The primary aim of this study was therefore to describe the characteristics and 30-day survival after TCA at a Swedish level 1 trauma centre, with special regard to patients undergoing resuscitative thoracotomy.
null
null
Results
[SUBTITLE] Demography [SUBSECTION] In total, 284 patients with confirmed TCA were included in the study (Fig. 1). The median age was 38 years, 82.0% were male and 60% were previously healthy. The median ISS was 38. Cardiac arrest occurred out of hospital in most patients (90.1%). Blunt trauma was the predominant injury mechanism (64.8%). In patients with penetrating trauma 44.0% had suffered gunshot wounds and 56% stab wounds. For patients with a documented cardiac rhythm, asystole was observed in 39.4% and pulseless electric activity (PEA) in 24.6% whereas a shockable rhythm was recorded in 6.7% (Table 1). Among the 85 patients (29.9%) who survived to intensive care unit admission, the median length of hospital stay was 5 (IQR 3–20) days and the median time on ventilator was 3 (IQR 1–7) days.

Fig. 1 Flow chart of patients included in the study. CPR = cardiopulmonary resuscitation

Table 1. Characteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020 (outcome at 30 days)
Variable | Total cohort (n = 284) | Dead (n = 254) | Alive (n = 30) | P value
Age (years), median [IQR] | 38.0 [24.0–58.0] | 38.0 [24.0–56.0] | 43.5 [28.3–66.0] | 0.24
Age (years), missing | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |
Age group 15–24 years | 74 (26.1%) | 69 (27.2%) | 5 (16.7%) | 0.08
Age group 25–34 years | 48 (16.9%) | 43 (16.9%) | 5 (16.7%) |
Age group 35–49 years | 58 (20.4%) | 52 (20.5%) | 6 (20%) |
Age group 50–64 years | 47 (16.5%) | 43 (16.9%) | 4 (13.3%) |
Age group 65–79 years | 46 (16.2%) | 36 (14.2%) | 10 (33.3%) |
Age group 80–94 years | 11 (3.9%) | 11 (4.3%) | 0 (0.0%) |
Gender: male | 233 (82.0%) | 209 (82.3%) | 24 (80.0%) | 0.96
Gender: female | 51 (18.0%) | 45 (17.7%) | 6 (20.0%) |
Preinjury ASA class 1 | 171 (60.2%) | 157 (61.8%) | 14 (46.7%) | 0.03
Preinjury ASA class 2 | 44 (15.5%) | 35 (13.8%) | 9 (30.0%) |
Preinjury ASA class 3 | 37 (13.0%) | 31 (12.2%) | 6 (20.0%) |
Preinjury ASA class 4 | 2 (0.7%) | 1 (0.4%) | 1 (3.3%) |
Preinjury ASA class: missing | 30 (10.6%) | 30 (11.8%) | 0 (0.0%) |
Dominant injury: blunt | 184 (64.8%) | 163 (64.2%) | 21 (70.0%) | 0.67
Dominant injury: penetrating | 100 (35.2%) | 91 (35.8%) | 9 (30.0%) |
Injury mechanism: motor vehicle accident (non-motorbike) | 32 (11.3%) | 27 (10.6%) | 5 (16.7%) | 0.28
Injury mechanism: motorbike accident | 17 (6.0%) | 15 (5.9%) | 2 (6.7%) |
Injury mechanism: bike accident | 8 (2.8%) | 7 (2.8%) | 1 (3.3%) |
Injury mechanism: injured pedestrian | 17 (6.0%) | 14 (5.5%) | 3 (10.0%) |
Injury mechanism: other vehicle accident | 4 (1.4%) | 4 (1.6%) | 0 (0.0%) |
Injury mechanism: gunshot wound | 44 (15.5%) | 42 (16.5%) | 2 (6.7%) |
Injury mechanism: stab wound | 56 (19.7%) | 50 (19.7%) | 6 (20.0%) |
Injury mechanism: hit by blunt object | 28 (9.9%) | 26 (10.2%) | 2 (6.7%) |
Injury mechanism: same level fall | 7 (2.5%) | 4 (1.6%) | 3 (10.0%) |
Injury mechanism: fall from height | 65 (22.9%) | 60 (23.6%) | 5 (16.7%) |
Injury mechanism: explosion | 1 (0.4%) | 1 (0.4%) | 0 (0.0%) |
Injury mechanism: other | 2 (0.7%) | 2 (0.8%) | 0 (0.0%) |
Injury mechanism: missing | 3 (1.1%) | 2 (0.8%) | 1 (3.3%) |
Arrest rhythm: VT/VF | 19 (6.7%) | 14 (5.5%) | 5 (16.7%) | 0.09
Arrest rhythm: asystole | 112 (39.4%) | 103 (40.6%) | 9 (30.0%) |
Arrest rhythm: PEA | 70 (24.6%) | 65 (25.6%) | 5 (16.7%) |
Arrest rhythm: non-shockable unknown rhythm | 25 (8.8%) | 22 (8.7%) | 3 (10.0%) |
Arrest rhythm: missing | 58 (20.4%) | 50 (19.7%) | 8 (26.7%) |
Shockable rhythm: yes | 19 (6.7%) | 8 (3.1%) | 2 (6.7%) | 0.04
Shockable rhythm: no | 207 (72.9%) | 6 (2.4%) | 3 (10.0%) |
Shockable rhythm: missing | 58 (20.4%) | 50 (19.7%) | 8 (26.7%) |
ISS, median [IQR] | 38.0 [25.5–75.0] | 41.0 [26–75.0] | 26.0 [17.0–38.0] | < 0.001
ISS, missing | 5 (1.8%) | 5 (2.0%) | 0 (0.0%) |
NISS, median [IQR] | 50.0 [34.0–75.0] | 57.0 [35.5–75.0] | 41.0 [24.0–50.0] | < 0.001
NISS, missing | 7 (2.5%) | 6 (2.4%) | 1 (3.3%) |
Place of arrest: out of hospital | 256 (90.1%) | 238 (93.7%) | 18 (60.0%) | < 0.001
Place of arrest: in hospital | 28 (9.9%) | 16 (6.3%) | 12 (40.0%) |

[SUBTITLE] Primary outcome [SUBSECTION] A total of 30 patients (10.6%) survived to 30 days. Seven patients were assessed as GOS 4 and 23 patients as GOS 3 at discharge. No patients were discharged without disability. The survival rate was higher after in-hospital TCA (42.9%) as compared to pre-hospital TCA (7.0%) (P < 0.001). The survival rates varied between years, and we found no temporal trend (Fig. 2).

Fig. 2 Temporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)
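The in-hospital versus out-of-hospital survival difference reported above (12 of 28 vs. 18 of 256 survivors, P < 0.001) corresponds to a standard chi-square test on a 2 x 2 contingency table. The sketch below is illustrative only: it uses Python with SciPy rather than the authors' original R workflow, with the counts taken from Table 1.

```python
from scipy.stats import chi2_contingency

# 30-day outcome by place of arrest, using the counts reported in Table 1:
# rows = place of arrest (in hospital, out of hospital), columns = (alive, dead)
table = [
    [12, 16],   # in-hospital TCA: 12/28 survived (42.9%)
    [18, 238],  # out-of-hospital TCA: 18/256 survived (7.0%)
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")  # p is far below 0.001
```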
Data on cause of death were only available from 2013 to 2020 for 212 TCA patients (Fig. 3). Half of these died of bleeding. When death occurred within 24 h, the proportion of deaths resulting from bleeding was even higher (60.2%). Between 24 h and 30 days, traumatic brain injury (TBI) emerged as a more prevalent mortality cause (43.8%). On review, seven deaths were judged potentially preventable, of which five were considered due to bleeding, one because of TBI and one was unspecified.

Fig. 3 Summary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival

[SUBTITLE] Prehospital care [SUBSECTION] A majority (83.6%) of patients in the subset with out-of-hospital TCA (n = 256) were intubated and in 26.1% a thoracic decompression was performed in the prehospital setting. The median EMS response time was 9 min, median on-scene time was 20 min and median transport time was 13 min; these times did not differ between groups (Table 2). Survivors compared to non-survivors received adrenaline (epinephrine) less frequently (16.7 vs. 60.1%, P < 0.001) and in lower median amounts (0.0 (IQR 0.0–0.0) vs. 2.0 (IQR 0.0–4.0) mg, P = 0.002) (Table 2).

Table 2. Prehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrest (outcome at 30 days)
Variable | Total cohort (n = 256) | Dead (n = 238) | Alive (n = 18) | P value
Bystander CPR: yes | 111 (43.4%) | 102 (42.9%) | 9 (50.0%) | 0.93
Bystander CPR: no | 88 (34.4%) | 81 (34.0%) | 7 (38.9%) |
Bystander CPR: EMS witnessed cardiac arrest | 33 (12.9%) | 31 (13.0%) | 2 (11.1%) |
Bystander CPR: missing | 24 (9.4%) | 24 (10.1%) | 0 (0.0%) |
Highest competence: not attended by EMS | 3 (1.2%) | 3 (1.2%) | 0 (0.0%) | 0.97
Highest competence: advanced life support by nurse | 135 (52.7%) | 125 (52.5%) | 10 (55.6%) |
Highest competence: advanced life support by physician | 117 (45.7%) | 109 (45.8%) | 8 (44.4%) |
Highest competence: missing | 1 (0.4%) | 1 (0.4%) | 1 (0.4%) |
EMS response time (min), median [IQR] | 9.0 [6.0–13.0] | 9.0 [6.0–13.0] | 9.0 [6.0–13.0] | 0.92
EMS response time, missing | 2 (0.8%) | 2 (0.8%) | 0 (0.0%) |
On scene time (min), median [IQR] | 20.0 [14.3–26.0] | 19.5 [15.0–26.0] | 21.5 [12.0–23.8] | 0.69
On scene time, missing | 2 (0.8%) | 2 (0.8%) | 0 (0%) |
Transport time from scene to hospital (min), median [IQR] | 13.0 [9.0–18.0] | 13.0 [9.0–18.0] | 14.5 [11.3–19.0] | 0.31
Transport time, missing | 2 (0.8%) | 2 (0.8%) | 0 (0.0%) |
Time from dispatch to hospital (min), median [IQR] | 44.0 [37.8–54.3] | 44.0 [34.3–54.8] | 46.0 [36.3–53.3] | 0.79
Time from dispatch to hospital, missing | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |
Intubation by EMS: yes | 212 (82.8%) | 199 (83.6%) | 13 (72.2%) | 0.36
Intubation by EMS: no | 44 (17.2%) | 39 (16.4%) | 5 (27.8%) |
Intubation by EMS: missing | 0 (0.0%) | 0 (0.0%) | 0 (0.0%) |
Highest ETCO2 (kPa), median [IQR] | 4.0 [2.2–6.0] | 4.0 [1.9–6.0] | 5.0 [4.1–6.1] | 0.24
Highest ETCO2, missing | 166 (64.8%) | 153 (64.3%) | 13 (72.2%) |
Adrenaline (epinephrine) administered: yes | 146 (57.0%) | 143 (60.1%) | 3 (16.7%) | < 0.001
Adrenaline administered: no | 75 (29.3%) | 63 (26.5%) | 12 (66.7%) |
Adrenaline administered: missing | 35 (13.7%) | 32 (13.4%) | 3 (16.7%) |
Amount of adrenaline (mg), median [IQR] | 2.0 [0.0–4.0] | 2.0 [0.0–4.0] | 0.0 [0.0–0.0] | 0.002
Amount of adrenaline, missing | 41 (16.0%) | 38 (16.0%) | 3 (16.7%) |
Thoracic decompression by EMS: yes | 66 (25.8%) | 62 (26.1%) | 4 (22.2%) | 0.94
Thoracic decompression by EMS: no | 190 (74.2%) | 176 (73.9%) | 14 (77.8%) |
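As a worked example from Table 2, the difference in prehospital adrenaline administration between survivors and non-survivors can be re-checked from the reported counts, excluding patients with missing adrenaline status. The sketch below is illustrative only and uses Python with SciPy rather than the authors' R workflow; Fisher's exact test is shown alongside the chi-square test because one cell contains only three patients.

```python
from scipy.stats import chi2_contingency, fisher_exact

# Adrenaline administered (yes/no) by 30-day outcome, counts from Table 2 with
# missing data excluded; the published percentages (60.1% and 16.7%) use the
# full group sizes (238 and 18) as denominators.
table = [
    [143, 63],  # non-survivors: adrenaline yes / no
    [3, 12],    # survivors: adrenaline yes / no
]

chi2, p_chi2, _, _ = chi2_contingency(table)
_, p_fisher = fisher_exact(table)
print(f"chi-square p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")
```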
[SUBTITLE] Intrahospital assessment and procedures [SUBSECTION] Survivors more commonly presented to the hospital with a pulse compared to non-survivors (70.0% (n = 21) vs. 21.7% (n = 55), P < 0.001). Among survivors of prehospital TCA (n = 18), 77.8% (n = 14) had regained spontaneous circulation when arriving in the trauma unit, 11.1% (n = 2) were in PEA and the remaining 11.1% (n = 2) had no rhythm documented. A shockable rhythm was more frequently reported in survivors compared to non-survivors (16.7 vs. 5.5%, P = 0.04). Patients with asystole had a survival of 8.0%. A pupillary response was more commonly elicited in survivors compared to non-survivors (56.7% vs. 9.8%, P < 0.001). Survivors exhibited higher platelet counts, higher fibrinogen, shorter activated partial thromboplastin time (APTT), lower lactate, lower base deficit, higher pH value and a lower S100b in comparison to non-survivors (Fig. 4). The highest measured lactate in a survivor was 20 mmol/L. Concerning lab values, there were variable amounts of missing data (Supplementary Table 2).

Fig. 4 Lab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. * = P < 0.05, ** = P < 0.01, *** = P < 0.001, NS = non-significant at the 0.05 level

[SUBTITLE] Resuscitative thoracotomy [SUBSECTION] Altogether, 101 TCA patients (36%) underwent resuscitative thoracotomy. Among these, 12 patients (11.9%) survived with a GOS score at hospital discharge of 3 (n = 8) or 4 (n = 4). A tamponade was released in two of the survivors (16.7%) and 12 of the non-survivors (13.5%). In two survivors, one with an extremity bleed and one with an abdominal bleed, a thoracotomy was performed solely to occlude the aorta. Patients arresting after arrival had a higher survival after thoracotomy compared to patients presenting in arrest (42.1 vs. 4.9%, P < 0.001). Median time from last sign of life to thoracotomy was shorter for in-hospital TCA compared to prehospital TCA (9.5 (IQR 5–14.75) vs. 23 (IQR 14–36) minutes, P < 0.001). For prehospital TCA, the median time from hospital arrival to start of thoracotomy was 4 (IQR 3–10) minutes. Blunt and penetrating TCAs had a comparable survival after thoracotomy (12.5 and 11.5%, P = 1); however, median time to resuscitative thoracotomy from last sign of life was shorter in blunt than penetrating trauma (15.5 (IQR 8.3–24.5) vs. 25.0 (IQR 12.0–37.0) minutes, P = 0.01). Median time from latest sign of life to resuscitative thoracotomy was shorter in survivors compared to non-survivors (10 (IQR 5.0–22.0) vs. 21 (IQR 10.8–35.3) minutes, P = 0.03) (Fig. 5).

Fig. 5 Time from last vital sign to start of thoracotomy. Dotted line marks 15 min, corresponding to the upper recommended limit for resuscitative thoracotomy in the European Resuscitation Council's current guidelines [4]. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. * = P < 0.05

In 58 patients, resuscitative thoracotomy was performed later than the suggested 15 min since loss of vital signs (Fig. 5). Of those, four patients survived, all of whom had suffered penetrating injuries. Three survivors had arrested before hospital arrival and one during initial resuscitation. One survivor with PEA and one with unknown rhythm had recordings of visible cardiac contractions upon chest opening. Another survivor who was in asystole had a pericardial tamponade relieved. The last survivor, who had the longest arrest duration of 36 min before thoracotomy, was also in PEA. One survivor was discharged with GOS 4 and the remaining three with GOS 3.
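Continuous variables such as the time from last sign of life to thoracotomy were compared with the Wilcoxon rank sum test. The patient-level times are not published, so the sketch below uses made-up illustrative values purely to show the equivalent test call in Python (the authors analysed their data in R).

```python
from scipy.stats import mannwhitneyu

# Hypothetical minutes from last sign of life to thoracotomy (illustrative only;
# the study reports medians of 10 min in survivors and 21 min in non-survivors).
survivors = [5, 8, 10, 12, 14, 22]
non_survivors = [9, 11, 15, 20, 21, 25, 28, 33, 36, 40]

# The Mann-Whitney U test is the two-sample Wilcoxon rank sum test.
stat, p = mannwhitneyu(survivors, non_survivors, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```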
Conclusion
Our data support that survival after TCA is possible and that aggressive care is justified, particularly directed at managing bleeding. Determining futility in TCA is complex and although current treatment algorithms mostly perform adequately, our study demonstrates exceptions where patients survived outside of guidelines.
[ "Methods", "Setting", "Study design", "Data sources and variables", "Outcome", "Statistics", "Demography", "Primary outcome", "Prehospital care", "Intrahospital assessment and procedures", "Resuscitative thoracotomy", "Strengths and limitations", "" ]
[ "[SUBTITLE] Setting [SUBSECTION] Karolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.\nKarolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.\n[SUBTITLE] Study design [SUBSECTION] A retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31th 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. 
All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.\nA retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31th 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.\n[SUBTITLE] Data sources and variables [SUBSECTION] SweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).\nSweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. 
All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).\n[SUBTITLE] Outcome [SUBSECTION] Our primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. Additionally, factors associated with survival were investigated in a sub-analysis.\nOur primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. Additionally, factors associated with survival were investigated in a sub-analysis.\n[SUBTITLE] Statistics [SUBSECTION] Descriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).\nDescriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).", "Karolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. 
Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.", "A retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31st 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as a comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.", "SweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and are classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).", "Our primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. 
Additionally, factors associated with survival were investigated in a sub-analysis.", "Descriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).", "In total, 284 patients with confirmed TCA were included in the study (Fig. 1). The median age was 38 years, 82.0% were male and 60% were previously healthy. The median ISS was 38. Cardiac arrest occurred out of hospital in most patients (90.1%). Blunt trauma was the predominant injury mechanism (64.8%). In patients with penetrating trauma 44.0% had suffered gunshot wounds and 56% stab wounds. For patients with a documented cardiac rhythm, asystole was observed in 39.4% and pulseless electric activity (PEA) in 24.6% whereas a shockable rhythm was recorded in 6.7% (Table 1). Among the 85 patients (29.9%) who survived to intensive care unit admission, the median length of hospital stay was 5 (IQR 3–20) days and the median time on ventilator was 3 (IQR 1–7) days.\n\nFig. 1Flow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nFlow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nTable 1Characteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020Outcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n\n(n = 284)\n\n(n = 254)\n\n(n = 30)\n\nAge (years)\nMedian [IQR]38.0 [24.0–58.0]38.0 [24.0–56.0]43.5 [28.3–66.0]0.24Missing0 (0.0%)0 (0.0%)0 (0.0%)\nAge groups (years)\n15–2474 (26.1%)69 (27.2%)5 (16.7%)0.0825–3448 (16.9%)43 (16.9%)5 (16.7%)35–4958 (20.4%)52 (20.5%)6 (20%)50–6447 (16.5%)43 (16.9%)4 (13.3%)65–7946 (16.2%)36 (14.2%)10 (33.3%)80–9411 (3.9%)11 (4.3%)0 (0.0%)\nGender\nMale233 (82.0%)209 (82.3%)24 (80.0%)0.96Female51 (18.0%)45 (17.7%)6 (20.0%)\nPreinjury ASA class\n1171 (60.2%)157 (61.8%)14.0 (46.7%)0.03244 (15.5%)35 (13.8%)9 (30.0%)337 (13.0%)31 (12.2%)6 (20.0%)42 (0.7%)1 (0.4%)1 (3.3%)Missing30 (10.6%)30 (11.8%)0 (0.0%)\nDominant injury\nBlunt184 (64.8%)163 (64.2%)21 (70.0%)0.67Penetrating100 (35.2%)91 (35.8%)9 (30.0%)\nInjury mechanism\nMotor vehicle accident - non motorbike32 (11.3%)27 (10.6%)5 (16.7%)0.28Motor bike accident17 (6.0%)15 (5.9%)2 (6.7%)Bike accident8 (2.8%)7 (2.8%)1 (3.3%)Injured pedestrian17 (6.0%)14 (5.5%)3 (10.0%)Other vehicle accident4 (1.4%)4 (1.6%)0 (0.0%)Gunshot wound44 (15.5%)42 (16.5%)2 (6.7%)Stab wound56 (19.7%)50 (19.7%)6 (20.0%)Hit by blunt object28 (9.9%)26 (10.2%)2 (6.7%)Same level fall7 (2.5%)4 (1.6%)3 (10.0%)Fall from height65 (22.9%)60 (23.6%)5 (16.7%)Explosion1 (0.4%)1 (0.4%)0 (0.0%)Other2 (0.7%)2 (0.8%)0 (0.0%)Missing3 (1.1%)2 (0.8%)1 (3.3%)\nArrest rhythm\nVT/VF19.0 (6.7%)14 (5.5%)5 (16.7%)0.09Asystole112 (39.4%)103 (40.6%)9 (30.0%)PEA70 (24.6%)65 (25.6%)5 (16.7%)Non shockable unknown rhythm25 (8.8%)22 (8.7%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nShockable rhythm\nYes19 (6.7%)8 (3.1%)2 (6.7%)0.04No207 (72.9%)6 (2.4%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nISS\nMedian [IQR]38.0 [25.5–75.0]41.0 [26–75.0]26.0 [17.0–38.0]< 0.001Missing5 (1.8%)5 (2.0%)0 (0.0%)\nNISS\nMedian [IQR]50.0 [34.0–75.0]57.0 [35.5–75.0]41.0 
[24.0–50.0]< 0.001Missing7 (2.5%)6 (2.4%)1 (3.3%)\nPlace of arrest\nOut of hospital256 (90.1%)238 (93.7%)18 (60.0%)< 0.001In hospital28 (9.9%)16 (6.3%)12 (40.0%)\n\nCharacteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020", "A total of 30 patients (10.6%) survived to 30 days. Seven patients were assessed as GOS 4 and 23 patients as GOS 3 at discharge. No patients were discharged without disability. The survival rate was higher after in-hospital TCA (42.9%) as compared to pre-hospital TCA (7.0%) (P < 0.001). The survival rates varied between years, and we found no temporal trend (Fig. 2).\n\nFig. 2Temporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\n\nTemporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\nData on cause of death were only available from 2013 to 2020 for 212 TCA patients (Fig. 3.). Half of these died of bleeding. When death occurred within 24 h the proportion of deaths resulting from bleeding was even higher (60.2%). Between 24 h and 30 days traumatic brain injury (TBI) emerged as a more prevalent mortality cause (43.8%). On review, seven deaths were judged potentially preventable, of which five were considered due to bleeding, one because of TBI and the last cause was unspecified.\n\nFig. 3Summary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\n\nSummary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival", "A majority (83.6%) of patients in the subset with out of hospital TCA (n = 256) were intubated and in 26.1% a thoracic decompression was performed in the prehospital setting. The median EMS response time was 9 min, median on scene time was 20 min and median transport time was 13 min and did not differ between groups (Table 2). Survivors compared to non-survivors received adrenaline (epinephrine) less frequently (16.7 vs. 60.1% (P < 0.001)) and in lower median amounts (0.0 (IQR 0.0–0.0) vs. 
2.0 (IQR 0.0–4.0) mg (P = 0.002)) (Table 2).\n\nTable 2Prehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrestOutcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n(n = 256)(n = 238)(n = 18)\nBystander CPR\nYes111 (43.4%)102 (42.9%)9 (50.0%)0.93No88 (34.4%)81 (34.0%)7 (38.9%)EMS witnessed cardiac arrest33 (12.9%)31 (13.0%)2 (11.1%)Missing24 (9.4%)24 (10.1%)0 (0.0%)\nHighest competence\nNot attended by EMS3 (1.2%)3 (1.2%)0 (0.0%)0.97Advanced life support by nurse135 (52.7%)125 (52.5%)10 (55.6%)Advanced life support by physician117 (45.7%)109 (45.8%)8 (44.4%)Missing1 (0.4%)1 (0.4%)1 (0.4%)\nEMS response time (min)\nMedian [IQR]9.0 [6.0–13.0]9.0 [6.0–13.0]9.0 [6.0–13.0]0.92Missing2 (0.8%)2 (0.8%)0 (0.0%)\nOn scene time (min)\nMedian [IQR]20.0 [14.3–26.0]19.5 [15.0–26.0]21.5 [12.0-23.8]0.69Missing2 (0.8%)2 (0.8%)0 (0%)\nTransport time from scene to hospital (min)\nMedian [IQR]13.0 [9.0–18.0]13.0 [9.0–18.0]14.5 [11.3–19.0]0.31Missing2 (0.8%)2 (0.8%)0 (0.0%)\nTime from dispatch to hospital (min)\nMedian [IQR]44.0 [37.8–54.3]44.0 [34.3–54.8]46.0 [36.3–53.3]0.79Missing0 (0.0%)0 (0.0%)0 (0.0%)\nIntubation by EMS\nYes212 (82.8%)199 (83.6%)13 (72.2%)0.36No44 (17.2%)39 (16.4%)5 (27.8%)Missing0 (0.0%)0 (0.0%)0 (0.0%)\nHighest ETCO2 (kPa)\nMedian [IQR]4.0 [2.2-6.0]4.0 [1.9-6.0]5.0 [4.1–6.1]0.24Missing166 (64.8%)153 (64.3%)13 (72.2%)\nAdrenaline (Epinephrine) administered\nYes146 (57.0%)143 (60.1%)3 (16.7%)< 0.001No75 (29.3%)63 (26.5%)12 (66.7%)Missing35 (13.7%)32 (13.4%)3 (16.7%)\nAmount of adrenaline (mg)\nMedian [IQR]2.0 [0.0–4.0]2.0 [0.0–4.0]0.0 [0.0–0.0]0.002Missing41 (16.0%)38 (16.0%)3 (16.7%)\nThoracic decompression by EMS\nYes66 (25.8%)62 (26.1%)4 (22.2%)0.94No190 (74.2%)176 (73.9%)14 (77.8%)\n\nPrehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrest", "Survivors more commonly presented to the hospital with a pulse compared to non-survivors (70.0% (n = 21) vs. 21.7% (n = 55), P < 0.001). Among survivors of prehospital TCA (n = 18), 77.8% (n = 14) had regained spontaneous circulation when arriving in the trauma unit, 11.1% (n = 2) were in PEA and the remaining 11.1% (n = 2) had no rhythm documented.\nA shockable rhythm was more frequently reported in survivors compared to non-survivors (16.7 vs. 5.5%, P = 0.04). Patients with asystole had a survival of 8.0%.\nA pupillary response was more commonly elicited in survivors compared to non-survivors (56.7% vs. 9.8%, P < 0.001).\nSurvivors exhibited higher platelet counts, higher fibrinogen, shorter activated partial thromboplastin time (APTT), lower lactate, lower base deficit, higher pH value and a lower S100b in comparison to non-survivors (Fig. 4). The highest measured lactate in a survivor was 20 mmol L− 1. Concerning lab values there were variable amounts of missing data (Supplementary Table 2).\n\nFig. 4Lab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\n\nLab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level", "Altogether, 101 TCA patients (36%) underwent resuscitative thoracotomy. 
Among these, 12 patients (11.9%) survived, with a GOS score at hospital discharge of 3 (n = 8) or 4 (n = 4).\nA tamponade was released in two of the survivors (16.7%) and 12 of the non-survivors (13.5%). In two survivors, one with an extremity bleed and one with an abdominal bleed, the thoracotomy was performed solely to occlude the aorta.\nPatients arresting after arrival had a higher survival after thoracotomy compared to patients presenting in arrest (42.1 vs. 4.9%, P < 0.001). Median time from last sign of life to thoracotomy was shorter for in-hospital TCA compared to prehospital TCA (9.5 (IQR 5–14.75) vs. 23 (IQR 14–36) minutes, P < 0.001). For prehospital TCA, the median time from hospital arrival to start of thoracotomy was 4 (IQR 3–10) minutes. Blunt and penetrating TCAs had comparable survival after thoracotomy (12.5 and 11.5%, P = 1); however, median time to resuscitative thoracotomy from last sign of life was shorter in blunt than in penetrating trauma (15.5 (IQR 8.3–24.5) vs. 25.0 (IQR 12.0–37.0) minutes, P = 0.01). Median time from latest sign of life to resuscitative thoracotomy was shorter in survivors compared to non-survivors (10 (IQR 5.0–22.0) vs. 21 (IQR 10.8–35.3) minutes, P = 0.03) (Fig. 5).\n\nFig. 5 Time from last vital sign to start of thoracotomy. Dotted line marks 15 min, corresponding to the upper recommended limit for resuscitative thoracotomy in the European Resuscitation Council's current guidelines [4]. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. * = P < 0.05\nIn 58 patients, resuscitative thoracotomy was performed later than the suggested 15 min after loss of vital signs (Fig. 5). Of those, four patients survived, all of whom had suffered penetrating injuries. Three survivors had arrested before hospital arrival and one during initial resuscitation. One survivor with PEA and one with an unknown rhythm had visible cardiac contractions recorded upon chest opening. Another survivor, who was in asystole, had a pericardial tamponade relieved. The last survivor, who had the longest arrest duration of 36 min before thoracotomy, was also in PEA. One survivor was discharged with GOS 4 and the remaining three with GOS 3.", "The key strength of this study is that we report on both trauma-related and arrest-specific variables with complete coverage of outcome parameters. Limitations of this study include those inherent to a retrospective database review. In some areas, particularly laboratory values, the amount of missing data was substantial, and these results need to be interpreted with caution. In addition, we lack information on the proportion of patients where resuscitation was terminated on scene.", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 2" ]
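The Statistics entry reproduced in the section texts above describes the analysis workflow only in prose (medians with IQR, Wilcoxon rank sum and chi-square tests, linear regression for the temporal trend, R 4.0.4). The following is a minimal, hypothetical R sketch of that workflow, not the authors' actual code; the data frame tca and its columns (survived_30d, iss, in_hospital, year) are illustrative stand-ins for the study variables.

# Minimal sketch (assumed data and column names, not the study's code) of the
# analysis described in the Statistics text: medians with IQR, Wilcoxon rank sum
# tests for continuous variables, chi-square tests for dichotomous variables and
# linear regression for the temporal trend in survival.
set.seed(1)
tca <- data.frame(
  survived_30d = rbinom(284, 1, 0.11),               # 30-day survival (1 = alive)
  iss          = sample(1:75, 284, replace = TRUE),  # Injury Severity Score
  in_hospital  = rbinom(284, 1, 0.10),               # arrest location (1 = in hospital)
  year         = sample(2011:2020, 284, replace = TRUE)
)

# Descriptive statistics: median with interquartile range
median(tca$iss)
quantile(tca$iss, c(0.25, 0.75))

# Continuous variable by 30-day outcome: Wilcoxon rank sum test
wilcox.test(iss ~ factor(survived_30d), data = tca)

# Dichotomous variable by 30-day outcome: chi-square test
chisq.test(table(tca$in_hospital, tca$survived_30d))

# Temporal trend: linear regression on yearly survival proportions
yearly <- aggregate(survived_30d ~ year, data = tca, FUN = mean)
summary(lm(survived_30d ~ year, data = yearly))

Rows with missing values are simply left out by these tests, which is in keeping with the stated policy of neither imputing nor estimating missing data.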
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "Introduction", "Methods", "Setting", "Study design", "Data sources and variables", "Outcome", "Statistics", "Results", "Demography", "Primary outcome", "Prehospital care", "Intrahospital assessment and procedures", "Resuscitative thoracotomy", "Discussion", "Strengths and limitations", "Conclusion", "Electronic supplementary material", "" ]
[ "Trauma claims around 5 million lives annually worldwide [1] and is the leading cause of death among young adults in industrialised countries [2]. Traumatic cardiac arrest (TCA) is the extreme state of traumatic shock and can be diagnosed when a patient, after suffering physical trauma, presents with both unconsciousness, agonal or absent spontaneous breathing and loss of a central pulse [3, 4]. Historically, resuscitative attempts in TCA were considered futile due to reported survival as poor as 0% [5]. However, more recent studies have shown improved albeit variable survival between 2.4 and 18.4% [3, 6–17]. In Sweden specifically, survival after out-of-hospital TCA gradually increased from 1.9 to 8.3% between 1990 and 2015 [3]. To further enhance care in TCA, the European Resuscitation Council (ERC) has developed a treatment algorithm stressing the importance of rapidly addressing reversible causes and suggests that emergency resuscitative thoracotomy may be performed in selected cases if less than 15 min have elapsed since loss of vital signs [4]. In TCA or peri-arrest the salvageability of resuscitative thoracotomy is quoted to be 7.8–21.8% depending on injury mechanism and setting [18–20], with only 6% reported to have neurological impairment [18]. For the trauma population in general it is also believed that care at designated trauma centres can reduce mortality [21, 22], especially in severely injured patients [23].\nGiven the dismal but not futile prognosis for TCA, it is important for the clinician to understand factors prognostic for survival and to identify whom may benefit from a resuscitative thoracotomy. The primary aim of this study was therefore to describe the characteristics and 30-day survival after TCA at a Swedish level 1 trauma centre with special regard to patients undergoing resuscitative thoracotomy.", "[SUBTITLE] Setting [SUBSECTION] Karolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.\nKarolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. 
When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.\n[SUBTITLE] Study design [SUBSECTION] A retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31th 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.\nA retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31th 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. 
All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.\n[SUBTITLE] Data sources and variables [SUBSECTION] SweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).\nSweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).\n[SUBTITLE] Outcome [SUBSECTION] Our primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. Additionally, factors associated with survival were investigated in a sub-analysis.\nOur primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. 
Additionally, factors associated with survival were investigated in a sub-analysis.\n[SUBTITLE] Statistics [SUBSECTION] Descriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).\nDescriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).", "Karolinska University Hospital Solna is the Swedish capital Stockholm’s only level 1 trauma centre serving a population of about 2.4 million and receives roughly 1400 adult trauma patients yearly [24]. From the trauma room, CT-scanners and operating theatres are immediately accessible, and a surgeon trained in performing a resuscitative thoracotomy is always part of the trauma team [25]. Local hospital protocols advise that resuscitative thoracotomy should be done within 10 min for blunt and 15 min for penetrating trauma. When applicable, aortic compression and internal cardiac massage is considered routine during thoracotomy. Resuscitative Endovascular Balloon Occlusion of the Aorta (REBOA) is not used for TCA.\nAll Emergency Medical Services (EMS)-crews in Sweden are trained in Advanced Cardiac Life Support (ACLS) and Prehospital Trauma Life Support (PHTLS). A physician staffed rapid response unit with advanced airway competency can be dispatched to assist the EMS in the Stockholm area. Prehospital thoracotomies are never performed in the region.", "A retrospective cohort study was conducted based on the Swedish Trauma Registry (SweTrau) and the Swedish Registry for Cardiopulmonary Resuscitation. Adult patients (≥ 15 years) with TCA managed at the Karolinska University Hospital Solna, Stockholm, Sweden from January 1st 2011 to December 31th 2020 were included. Patients aged < 15 years, invalid registry entries or medical records, cardiac arrest resulting from hanging, drowning, burns or smoke inhalation or who received bystander cardiopulmonary resuscitation (CPR) but with spontaneous circulation upon arrival of EMS were excluded. Patients exclusively attended by EMS and thus not transported to hospital were not studied. The decision to terminate resuscitation in the prehospital setting is made at the discretion of the treating physician or EMS-nurse. Patients with injuries incompatible with life or who did not respond to initial resuscitation with a predicted long transport time would be declared dead on scene.\nFor this survey, TCA was defined as comatose patient with absent or agonal breathing and no central pulse after a traumatic event. Prehospital TCAs were prespecified in SweTrau through the initiation of CPR either by bystanders or EMS crews. Thoracotomy was utilised as a proxy to identify patients who arrested in the initial hospital phase. 
All reports of thoracotomies in SweTrau were queried and patients included if they had a concurrent cardiac arrest. Patients in the Swedish Registry for Cardiopulmonary Resuscitation where EMS reported trauma as the presumed aetiology were also included.", "SweTrau is a nationwide registry of trauma patients. Patients are entered in the registry if a trauma call is triggered at the receiving hospital or if they are admitted with an Injury Severity Score (ISS) > 15. All trauma fatalities within 30 days at the Karolinska University Hospital also consistently undergo a structured review by a multidisciplinary committee and classified as preventable, possibly preventable or unpreventable [26].\nThe Swedish Registry for Cardiopulmonary Resuscitation is a national quality register for out of hospital cardiac arrest in Sweden. Patients are entered in the registry if EMS or bystanders attempted resuscitation.\nBoth registries report data according to standardised Utstein methodology [27].\nVariables were collected from both SweTrau and through review of dispatch reports and digital hospital charts. Electronic records were scrutinized by two of the authors (DO, PM). Data concerning patient demographics, trauma mechanism, cause of death, vital signs, lab tests, performed interventions, EMS times, survival and functional outcome were extracted (Supplementary Table 1).", "Our primary outcome measure was 30-day survival. Secondary outcome was functional status at discharge from the Karolinska University Hospital Solna expressed as Glasgow Outcome Scale (GOS) score. The scale contains five categories: (1) dead, (2) vegetative state, (3) severe disability, (4) mild disability and (5) good recovery [28]. Additionally, factors associated with survival were investigated in a sub-analysis.", "Descriptive data were presented as median with interquartile range (IQR) for continuous variables and as numbers with percentages for categorical variables. Comparison between groups were made using Wilcoxon rank sum test for continuous variables and Chi-square test for dichotomous variables. Trends were evaluated using linear regression. Significance level was set to 0.05. Missing data were kept missing, i.e. not imputed or estimated. All data were analysed using R version 4.0.4 (R-studio core team 2021).", "[SUBTITLE] Demography [SUBSECTION] In total, 284 patients with confirmed TCA were included in the study (Fig. 1). The median age was 38 years, 82.0% were male and 60% were previously healthy. The median ISS was 38. Cardiac arrest occurred out of hospital in most patients (90.1%). Blunt trauma was the predominant injury mechanism (64.8%). In patients with penetrating trauma 44.0% had suffered gunshot wounds and 56% stab wounds. For patients with a documented cardiac rhythm, asystole was observed in 39.4% and pulseless electric activity (PEA) in 24.6% whereas a shockable rhythm was recorded in 6.7% (Table 1). Among the 85 patients (29.9%) who survived to intensive care unit admission, the median length of hospital stay was 5 (IQR 3–20) days and the median time on ventilator was 3 (IQR 1–7) days.\n\nFig. 1Flow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nFlow chart of patients included in the study. 
CPR = cardiopulmonary resuscitation\n\nTable 1Characteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020Outcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n\n(n = 284)\n\n(n = 254)\n\n(n = 30)\n\nAge (years)\nMedian [IQR]38.0 [24.0–58.0]38.0 [24.0–56.0]43.5 [28.3–66.0]0.24Missing0 (0.0%)0 (0.0%)0 (0.0%)\nAge groups (years)\n15–2474 (26.1%)69 (27.2%)5 (16.7%)0.0825–3448 (16.9%)43 (16.9%)5 (16.7%)35–4958 (20.4%)52 (20.5%)6 (20%)50–6447 (16.5%)43 (16.9%)4 (13.3%)65–7946 (16.2%)36 (14.2%)10 (33.3%)80–9411 (3.9%)11 (4.3%)0 (0.0%)\nGender\nMale233 (82.0%)209 (82.3%)24 (80.0%)0.96Female51 (18.0%)45 (17.7%)6 (20.0%)\nPreinjury ASA class\n1171 (60.2%)157 (61.8%)14.0 (46.7%)0.03244 (15.5%)35 (13.8%)9 (30.0%)337 (13.0%)31 (12.2%)6 (20.0%)42 (0.7%)1 (0.4%)1 (3.3%)Missing30 (10.6%)30 (11.8%)0 (0.0%)\nDominant injury\nBlunt184 (64.8%)163 (64.2%)21 (70.0%)0.67Penetrating100 (35.2%)91 (35.8%)9 (30.0%)\nInjury mechanism\nMotor vehicle accident - non motorbike32 (11.3%)27 (10.6%)5 (16.7%)0.28Motor bike accident17 (6.0%)15 (5.9%)2 (6.7%)Bike accident8 (2.8%)7 (2.8%)1 (3.3%)Injured pedestrian17 (6.0%)14 (5.5%)3 (10.0%)Other vehicle accident4 (1.4%)4 (1.6%)0 (0.0%)Gunshot wound44 (15.5%)42 (16.5%)2 (6.7%)Stab wound56 (19.7%)50 (19.7%)6 (20.0%)Hit by blunt object28 (9.9%)26 (10.2%)2 (6.7%)Same level fall7 (2.5%)4 (1.6%)3 (10.0%)Fall from height65 (22.9%)60 (23.6%)5 (16.7%)Explosion1 (0.4%)1 (0.4%)0 (0.0%)Other2 (0.7%)2 (0.8%)0 (0.0%)Missing3 (1.1%)2 (0.8%)1 (3.3%)\nArrest rhythm\nVT/VF19.0 (6.7%)14 (5.5%)5 (16.7%)0.09Asystole112 (39.4%)103 (40.6%)9 (30.0%)PEA70 (24.6%)65 (25.6%)5 (16.7%)Non shockable unknown rhythm25 (8.8%)22 (8.7%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nShockable rhythm\nYes19 (6.7%)8 (3.1%)2 (6.7%)0.04No207 (72.9%)6 (2.4%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nISS\nMedian [IQR]38.0 [25.5–75.0]41.0 [26–75.0]26.0 [17.0–38.0]< 0.001Missing5 (1.8%)5 (2.0%)0 (0.0%)\nNISS\nMedian [IQR]50.0 [34.0–75.0]57.0 [35.5–75.0]41.0 [24.0–50.0]< 0.001Missing7 (2.5%)6 (2.4%)1 (3.3%)\nPlace of arrest\nOut of hospital256 (90.1%)238 (93.7%)18 (60.0%)< 0.001In hospital28 (9.9%)16 (6.3%)12 (40.0%)\n\nCharacteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020\nIn total, 284 patients with confirmed TCA were included in the study (Fig. 1). The median age was 38 years, 82.0% were male and 60% were previously healthy. The median ISS was 38. Cardiac arrest occurred out of hospital in most patients (90.1%). Blunt trauma was the predominant injury mechanism (64.8%). In patients with penetrating trauma 44.0% had suffered gunshot wounds and 56% stab wounds. For patients with a documented cardiac rhythm, asystole was observed in 39.4% and pulseless electric activity (PEA) in 24.6% whereas a shockable rhythm was recorded in 6.7% (Table 1). Among the 85 patients (29.9%) who survived to intensive care unit admission, the median length of hospital stay was 5 (IQR 3–20) days and the median time on ventilator was 3 (IQR 1–7) days.\n\nFig. 1Flow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nFlow chart of patients included in the study. 
CPR = cardiopulmonary resuscitation\n\nTable 1Characteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020Outcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n\n(n = 284)\n\n(n = 254)\n\n(n = 30)\n\nAge (years)\nMedian [IQR]38.0 [24.0–58.0]38.0 [24.0–56.0]43.5 [28.3–66.0]0.24Missing0 (0.0%)0 (0.0%)0 (0.0%)\nAge groups (years)\n15–2474 (26.1%)69 (27.2%)5 (16.7%)0.0825–3448 (16.9%)43 (16.9%)5 (16.7%)35–4958 (20.4%)52 (20.5%)6 (20%)50–6447 (16.5%)43 (16.9%)4 (13.3%)65–7946 (16.2%)36 (14.2%)10 (33.3%)80–9411 (3.9%)11 (4.3%)0 (0.0%)\nGender\nMale233 (82.0%)209 (82.3%)24 (80.0%)0.96Female51 (18.0%)45 (17.7%)6 (20.0%)\nPreinjury ASA class\n1171 (60.2%)157 (61.8%)14.0 (46.7%)0.03244 (15.5%)35 (13.8%)9 (30.0%)337 (13.0%)31 (12.2%)6 (20.0%)42 (0.7%)1 (0.4%)1 (3.3%)Missing30 (10.6%)30 (11.8%)0 (0.0%)\nDominant injury\nBlunt184 (64.8%)163 (64.2%)21 (70.0%)0.67Penetrating100 (35.2%)91 (35.8%)9 (30.0%)\nInjury mechanism\nMotor vehicle accident - non motorbike32 (11.3%)27 (10.6%)5 (16.7%)0.28Motor bike accident17 (6.0%)15 (5.9%)2 (6.7%)Bike accident8 (2.8%)7 (2.8%)1 (3.3%)Injured pedestrian17 (6.0%)14 (5.5%)3 (10.0%)Other vehicle accident4 (1.4%)4 (1.6%)0 (0.0%)Gunshot wound44 (15.5%)42 (16.5%)2 (6.7%)Stab wound56 (19.7%)50 (19.7%)6 (20.0%)Hit by blunt object28 (9.9%)26 (10.2%)2 (6.7%)Same level fall7 (2.5%)4 (1.6%)3 (10.0%)Fall from height65 (22.9%)60 (23.6%)5 (16.7%)Explosion1 (0.4%)1 (0.4%)0 (0.0%)Other2 (0.7%)2 (0.8%)0 (0.0%)Missing3 (1.1%)2 (0.8%)1 (3.3%)\nArrest rhythm\nVT/VF19.0 (6.7%)14 (5.5%)5 (16.7%)0.09Asystole112 (39.4%)103 (40.6%)9 (30.0%)PEA70 (24.6%)65 (25.6%)5 (16.7%)Non shockable unknown rhythm25 (8.8%)22 (8.7%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nShockable rhythm\nYes19 (6.7%)8 (3.1%)2 (6.7%)0.04No207 (72.9%)6 (2.4%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nISS\nMedian [IQR]38.0 [25.5–75.0]41.0 [26–75.0]26.0 [17.0–38.0]< 0.001Missing5 (1.8%)5 (2.0%)0 (0.0%)\nNISS\nMedian [IQR]50.0 [34.0–75.0]57.0 [35.5–75.0]41.0 [24.0–50.0]< 0.001Missing7 (2.5%)6 (2.4%)1 (3.3%)\nPlace of arrest\nOut of hospital256 (90.1%)238 (93.7%)18 (60.0%)< 0.001In hospital28 (9.9%)16 (6.3%)12 (40.0%)\n\nCharacteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020\n[SUBTITLE] Primary outcome [SUBSECTION] A total of 30 patients (10.6%) survived to 30 days. Seven patients were assessed as GOS 4 and 23 patients as GOS 3 at discharge. No patients were discharged without disability. The survival rate was higher after in-hospital TCA (42.9%) as compared to pre-hospital TCA (7.0%) (P < 0.001). The survival rates varied between years, and we found no temporal trend (Fig. 2).\n\nFig. 2Temporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\n\nTemporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\nData on cause of death were only available from 2013 to 2020 for 212 TCA patients (Fig. 3.). Half of these died of bleeding. 
When death occurred within 24 h the proportion of deaths resulting from bleeding was even higher (60.2%). Between 24 h and 30 days traumatic brain injury (TBI) emerged as a more prevalent mortality cause (43.8%). On review, seven deaths were judged potentially preventable, of which five were considered due to bleeding, one because of TBI and the last cause was unspecified.\n\nFig. 3Summary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\n\nSummary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\nA total of 30 patients (10.6%) survived to 30 days. Seven patients were assessed as GOS 4 and 23 patients as GOS 3 at discharge. No patients were discharged without disability. The survival rate was higher after in-hospital TCA (42.9%) as compared to pre-hospital TCA (7.0%) (P < 0.001). The survival rates varied between years, and we found no temporal trend (Fig. 2).\n\nFig. 2Temporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\n\nTemporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\nData on cause of death were only available from 2013 to 2020 for 212 TCA patients (Fig. 3.). Half of these died of bleeding. When death occurred within 24 h the proportion of deaths resulting from bleeding was even higher (60.2%). Between 24 h and 30 days traumatic brain injury (TBI) emerged as a more prevalent mortality cause (43.8%). On review, seven deaths were judged potentially preventable, of which five were considered due to bleeding, one because of TBI and the last cause was unspecified.\n\nFig. 3Summary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\n\nSummary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\n[SUBTITLE] Prehospital care [SUBSECTION] A majority (83.6%) of patients in the subset with out of hospital TCA (n = 256) were intubated and in 26.1% a thoracic decompression was performed in the prehospital setting. The median EMS response time was 9 min, median on scene time was 20 min and median transport time was 13 min and did not differ between groups (Table 2). Survivors compared to non-survivors received adrenaline (epinephrine) less frequently (16.7 vs. 60.1% (P < 0.001)) and in lower median amounts (0.0 (IQR 0.0–0.0) vs. 
2.0 (IQR 0.0–4.0) mg (P = 0.002)) (Table 2).\n\nTable 2Prehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrestOutcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n(n = 256)(n = 238)(n = 18)\nBystander CPR\nYes111 (43.4%)102 (42.9%)9 (50.0%)0.93No88 (34.4%)81 (34.0%)7 (38.9%)EMS witnessed cardiac arrest33 (12.9%)31 (13.0%)2 (11.1%)Missing24 (9.4%)24 (10.1%)0 (0.0%)\nHighest competence\nNot attended by EMS3 (1.2%)3 (1.2%)0 (0.0%)0.97Advanced life support by nurse135 (52.7%)125 (52.5%)10 (55.6%)Advanced life support by physician117 (45.7%)109 (45.8%)8 (44.4%)Missing1 (0.4%)1 (0.4%)1 (0.4%)\nEMS response time (min)\nMedian [IQR]9.0 [6.0–13.0]9.0 [6.0–13.0]9.0 [6.0–13.0]0.92Missing2 (0.8%)2 (0.8%)0 (0.0%)\nOn scene time (min)\nMedian [IQR]20.0 [14.3–26.0]19.5 [15.0–26.0]21.5 [12.0-23.8]0.69Missing2 (0.8%)2 (0.8%)0 (0%)\nTransport time from scene to hospital (min)\nMedian [IQR]13.0 [9.0–18.0]13.0 [9.0–18.0]14.5 [11.3–19.0]0.31Missing2 (0.8%)2 (0.8%)0 (0.0%)\nTime from dispatch to hospital (min)\nMedian [IQR]44.0 [37.8–54.3]44.0 [34.3–54.8]46.0 [36.3–53.3]0.79Missing0 (0.0%)0 (0.0%)0 (0.0%)\nIntubation by EMS\nYes212 (82.8%)199 (83.6%)13 (72.2%)0.36No44 (17.2%)39 (16.4%)5 (27.8%)Missing0 (0.0%)0 (0.0%)0 (0.0%)\nHighest ETCO2 (kPa)\nMedian [IQR]4.0 [2.2-6.0]4.0 [1.9-6.0]5.0 [4.1–6.1]0.24Missing166 (64.8%)153 (64.3%)13 (72.2%)\nAdrenaline (Epinephrine) administered\nYes146 (57.0%)143 (60.1%)3 (16.7%)< 0.001No75 (29.3%)63 (26.5%)12 (66.7%)Missing35 (13.7%)32 (13.4%)3 (16.7%)\nAmount of adrenaline (mg)\nMedian [IQR]2.0 [0.0–4.0]2.0 [0.0–4.0]0.0 [0.0–0.0]0.002Missing41 (16.0%)38 (16.0%)3 (16.7%)\nThoracic decompression by EMS\nYes66 (25.8%)62 (26.1%)4 (22.2%)0.94No190 (74.2%)176 (73.9%)14 (77.8%)\n\nPrehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrest\nA majority (83.6%) of patients in the subset with out of hospital TCA (n = 256) were intubated and in 26.1% a thoracic decompression was performed in the prehospital setting. The median EMS response time was 9 min, median on scene time was 20 min and median transport time was 13 min and did not differ between groups (Table 2). Survivors compared to non-survivors received adrenaline (epinephrine) less frequently (16.7 vs. 60.1% (P < 0.001)) and in lower median amounts (0.0 (IQR 0.0–0.0) vs. 
2.0 (IQR 0.0–4.0) mg (P = 0.002)) (Table 2).\n\nTable 2Prehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrestOutcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n(n = 256)(n = 238)(n = 18)\nBystander CPR\nYes111 (43.4%)102 (42.9%)9 (50.0%)0.93No88 (34.4%)81 (34.0%)7 (38.9%)EMS witnessed cardiac arrest33 (12.9%)31 (13.0%)2 (11.1%)Missing24 (9.4%)24 (10.1%)0 (0.0%)\nHighest competence\nNot attended by EMS3 (1.2%)3 (1.2%)0 (0.0%)0.97Advanced life support by nurse135 (52.7%)125 (52.5%)10 (55.6%)Advanced life support by physician117 (45.7%)109 (45.8%)8 (44.4%)Missing1 (0.4%)1 (0.4%)1 (0.4%)\nEMS response time (min)\nMedian [IQR]9.0 [6.0–13.0]9.0 [6.0–13.0]9.0 [6.0–13.0]0.92Missing2 (0.8%)2 (0.8%)0 (0.0%)\nOn scene time (min)\nMedian [IQR]20.0 [14.3–26.0]19.5 [15.0–26.0]21.5 [12.0-23.8]0.69Missing2 (0.8%)2 (0.8%)0 (0%)\nTransport time from scene to hospital (min)\nMedian [IQR]13.0 [9.0–18.0]13.0 [9.0–18.0]14.5 [11.3–19.0]0.31Missing2 (0.8%)2 (0.8%)0 (0.0%)\nTime from dispatch to hospital (min)\nMedian [IQR]44.0 [37.8–54.3]44.0 [34.3–54.8]46.0 [36.3–53.3]0.79Missing0 (0.0%)0 (0.0%)0 (0.0%)\nIntubation by EMS\nYes212 (82.8%)199 (83.6%)13 (72.2%)0.36No44 (17.2%)39 (16.4%)5 (27.8%)Missing0 (0.0%)0 (0.0%)0 (0.0%)\nHighest ETCO2 (kPa)\nMedian [IQR]4.0 [2.2-6.0]4.0 [1.9-6.0]5.0 [4.1–6.1]0.24Missing166 (64.8%)153 (64.3%)13 (72.2%)\nAdrenaline (Epinephrine) administered\nYes146 (57.0%)143 (60.1%)3 (16.7%)< 0.001No75 (29.3%)63 (26.5%)12 (66.7%)Missing35 (13.7%)32 (13.4%)3 (16.7%)\nAmount of adrenaline (mg)\nMedian [IQR]2.0 [0.0–4.0]2.0 [0.0–4.0]0.0 [0.0–0.0]0.002Missing41 (16.0%)38 (16.0%)3 (16.7%)\nThoracic decompression by EMS\nYes66 (25.8%)62 (26.1%)4 (22.2%)0.94No190 (74.2%)176 (73.9%)14 (77.8%)\n\nPrehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrest\n[SUBTITLE] Intrahospital assessment and procedures [SUBSECTION] Survivors more commonly presented to the hospital with a pulse compared to non-survivors (70.0% (n = 21) vs. 21.7% (n = 55), P < 0.001). Among survivors of prehospital TCA (n = 18), 77.8% (n = 14) had regained spontaneous circulation when arriving in the trauma unit, 11.1% (n = 2) were in PEA and the remaining 11.1% (n = 2) had no rhythm documented.\nA shockable rhythm was more frequently reported in survivors compared to non-survivors (16.7 vs. 5.5%, P = 0.04). Patients with asystole had a survival of 8.0%.\nA pupillary response was more commonly elicited in survivors compared to non-survivors (56.7% vs. 9.8%, P < 0.001).\nSurvivors exhibited higher platelet counts, higher fibrinogen, shorter activated partial thromboplastin time (APTT), lower lactate, lower base deficit, higher pH value and a lower S100b in comparison to non-survivors (Fig. 4). The highest measured lactate in a survivor was 20 mmol L− 1. Concerning lab values there were variable amounts of missing data (Supplementary Table 2).\n\nFig. 4Lab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\n\nLab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. 
*=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\nSurvivors more commonly presented to the hospital with a pulse compared to non-survivors (70.0% (n = 21) vs. 21.7% (n = 55), P < 0.001). Among survivors of prehospital TCA (n = 18), 77.8% (n = 14) had regained spontaneous circulation when arriving in the trauma unit, 11.1% (n = 2) were in PEA and the remaining 11.1% (n = 2) had no rhythm documented.\nA shockable rhythm was more frequently reported in survivors compared to non-survivors (16.7 vs. 5.5%, P = 0.04). Patients with asystole had a survival of 8.0%.\nA pupillary response was more commonly elicited in survivors compared to non-survivors (56.7% vs. 9.8%, P < 0.001).\nSurvivors exhibited higher platelet counts, higher fibrinogen, shorter activated partial thromboplastin time (APTT), lower lactate, lower base deficit, higher pH value and a lower S100b in comparison to non-survivors (Fig. 4). The highest measured lactate in a survivor was 20 mmol L− 1. Concerning lab values there were variable amounts of missing data (Supplementary Table 2).\n\nFig. 4Lab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\n\nLab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\n[SUBTITLE] Resuscitative thoracotomy [SUBSECTION] Altogether, 101 TCA patients (36%) underwent resuscitative thoracotomy. Among these, 12 patients (11.9%) survived with a GOS score at hospital discharge of 3 (n = 8) or 4 (n = 4).\nA tamponade was released in two of the survivors (16.7%) and 12 of the non survivors (13.5%). In two survivors, during one case of extremity bleed and another of abdominal bleed, a thoracotomy was made solely to occlude the aorta.\nPatients arresting after arrival had a higher survival after thoracotomy compared to patients presenting in arrest (42.1 vs. 4.9%, P < 0.001). Median time from last sign of life to thoracotomy was shorter for in-hospital TCA compared to prehospital TCA (9.5 (IQR 5-14.75) vs. 23 (IQR 14–36) minutes, P < 0.001). For prehospital TCA, the median time from hospital arrival to start of thoracotomy was 4 (IQR 3–10) minutes. Blunt and penetrating TCAs had a comparable survival after thoracotomy (12.5 and 11.5%, P = 1), however median time to resuscitative thoracotomy from last sign of life was shorter in blunt than penetrating trauma (15.5 (IQR 8.3–24.5) vs. 25.0 (IQR 12.0–37.0) minutes, P = 0.01). Median time from latest sign of life to resuscitative thoracotomy was shorter in survivors compared to non-survivors (10 (IQR 5.0–22.0) vs. 21 (IQR 10.8–35.3) minutes (P = 0.03) (Fig. 5).\n\nFig. 5Time from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\n\nTime from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. 
Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\nResuscitative thoracotomy was in 58 patients performed later than the suggested 15 min since loss of vital signs (Fig. 5). Of those, four patients survived of whom all had suffered penetrating injuries. Three survivors had arrested before hospital arrival and one during initial resuscitation. One survivor with PEA and one with unknown rhythm had recordings of visible cardiac contractions upon chest opening. Another survivor who was in asystole had a pericardial tamponade relieved. The last survivor who had the longest arrest duration of 36 min before thoracotomy was also in PEA. One survivor was discharged with GOS 4 and the remaining three with GOS 3.\nAltogether, 101 TCA patients (36%) underwent resuscitative thoracotomy. Among these, 12 patients (11.9%) survived with a GOS score at hospital discharge of 3 (n = 8) or 4 (n = 4).\nA tamponade was released in two of the survivors (16.7%) and 12 of the non survivors (13.5%). In two survivors, during one case of extremity bleed and another of abdominal bleed, a thoracotomy was made solely to occlude the aorta.\nPatients arresting after arrival had a higher survival after thoracotomy compared to patients presenting in arrest (42.1 vs. 4.9%, P < 0.001). Median time from last sign of life to thoracotomy was shorter for in-hospital TCA compared to prehospital TCA (9.5 (IQR 5-14.75) vs. 23 (IQR 14–36) minutes, P < 0.001). For prehospital TCA, the median time from hospital arrival to start of thoracotomy was 4 (IQR 3–10) minutes. Blunt and penetrating TCAs had a comparable survival after thoracotomy (12.5 and 11.5%, P = 1), however median time to resuscitative thoracotomy from last sign of life was shorter in blunt than penetrating trauma (15.5 (IQR 8.3–24.5) vs. 25.0 (IQR 12.0–37.0) minutes, P = 0.01). Median time from latest sign of life to resuscitative thoracotomy was shorter in survivors compared to non-survivors (10 (IQR 5.0–22.0) vs. 21 (IQR 10.8–35.3) minutes (P = 0.03) (Fig. 5).\n\nFig. 5Time from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\n\nTime from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\nResuscitative thoracotomy was in 58 patients performed later than the suggested 15 min since loss of vital signs (Fig. 5). Of those, four patients survived of whom all had suffered penetrating injuries. Three survivors had arrested before hospital arrival and one during initial resuscitation. One survivor with PEA and one with unknown rhythm had recordings of visible cardiac contractions upon chest opening. Another survivor who was in asystole had a pericardial tamponade relieved. The last survivor who had the longest arrest duration of 36 min before thoracotomy was also in PEA. 
One survivor was discharged with GOS 4 and the remaining three with GOS 3.", "In total, 284 patients with confirmed TCA were included in the study (Fig. 1). The median age was 38 years, 82.0% were male and 60% were previously healthy. The median ISS was 38. Cardiac arrest occurred out of hospital in most patients (90.1%). Blunt trauma was the predominant injury mechanism (64.8%). In patients with penetrating trauma 44.0% had suffered gunshot wounds and 56% stab wounds. For patients with a documented cardiac rhythm, asystole was observed in 39.4% and pulseless electric activity (PEA) in 24.6% whereas a shockable rhythm was recorded in 6.7% (Table 1). Among the 85 patients (29.9%) who survived to intensive care unit admission, the median length of hospital stay was 5 (IQR 3–20) days and the median time on ventilator was 3 (IQR 1–7) days.\n\nFig. 1Flow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nFlow chart of patients included in the study. CPR = cardiopulmonary resuscitation\n\nTable 1Characteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020Outcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n\n(n = 284)\n\n(n = 254)\n\n(n = 30)\n\nAge (years)\nMedian [IQR]38.0 [24.0–58.0]38.0 [24.0–56.0]43.5 [28.3–66.0]0.24Missing0 (0.0%)0 (0.0%)0 (0.0%)\nAge groups (years)\n15–2474 (26.1%)69 (27.2%)5 (16.7%)0.0825–3448 (16.9%)43 (16.9%)5 (16.7%)35–4958 (20.4%)52 (20.5%)6 (20%)50–6447 (16.5%)43 (16.9%)4 (13.3%)65–7946 (16.2%)36 (14.2%)10 (33.3%)80–9411 (3.9%)11 (4.3%)0 (0.0%)\nGender\nMale233 (82.0%)209 (82.3%)24 (80.0%)0.96Female51 (18.0%)45 (17.7%)6 (20.0%)\nPreinjury ASA class\n1171 (60.2%)157 (61.8%)14.0 (46.7%)0.03244 (15.5%)35 (13.8%)9 (30.0%)337 (13.0%)31 (12.2%)6 (20.0%)42 (0.7%)1 (0.4%)1 (3.3%)Missing30 (10.6%)30 (11.8%)0 (0.0%)\nDominant injury\nBlunt184 (64.8%)163 (64.2%)21 (70.0%)0.67Penetrating100 (35.2%)91 (35.8%)9 (30.0%)\nInjury mechanism\nMotor vehicle accident - non motorbike32 (11.3%)27 (10.6%)5 (16.7%)0.28Motor bike accident17 (6.0%)15 (5.9%)2 (6.7%)Bike accident8 (2.8%)7 (2.8%)1 (3.3%)Injured pedestrian17 (6.0%)14 (5.5%)3 (10.0%)Other vehicle accident4 (1.4%)4 (1.6%)0 (0.0%)Gunshot wound44 (15.5%)42 (16.5%)2 (6.7%)Stab wound56 (19.7%)50 (19.7%)6 (20.0%)Hit by blunt object28 (9.9%)26 (10.2%)2 (6.7%)Same level fall7 (2.5%)4 (1.6%)3 (10.0%)Fall from height65 (22.9%)60 (23.6%)5 (16.7%)Explosion1 (0.4%)1 (0.4%)0 (0.0%)Other2 (0.7%)2 (0.8%)0 (0.0%)Missing3 (1.1%)2 (0.8%)1 (3.3%)\nArrest rhythm\nVT/VF19.0 (6.7%)14 (5.5%)5 (16.7%)0.09Asystole112 (39.4%)103 (40.6%)9 (30.0%)PEA70 (24.6%)65 (25.6%)5 (16.7%)Non shockable unknown rhythm25 (8.8%)22 (8.7%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nShockable rhythm\nYes19 (6.7%)8 (3.1%)2 (6.7%)0.04No207 (72.9%)6 (2.4%)3 (10.0%)Missing58 (20.4%)50 (19.7%)8 (26.7%)\nISS\nMedian [IQR]38.0 [25.5–75.0]41.0 [26–75.0]26.0 [17.0–38.0]< 0.001Missing5 (1.8%)5 (2.0%)0 (0.0%)\nNISS\nMedian [IQR]50.0 [34.0–75.0]57.0 [35.5–75.0]41.0 [24.0–50.0]< 0.001Missing7 (2.5%)6 (2.4%)1 (3.3%)\nPlace of arrest\nOut of hospital256 (90.1%)238 (93.7%)18 (60.0%)< 0.001In hospital28 (9.9%)16 (6.3%)12 (40.0%)\n\nCharacteristics of 284 traumatic cardiac arrest patients treated at a Swedish level 1 trauma centre between years 2011–2020", "A total of 30 patients (10.6%) survived to 30 days. Seven patients were assessed as GOS 4 and 23 patients as GOS 3 at discharge. No patients were discharged without disability. 
The survival rate was higher after in-hospital TCA (42.9%) as compared to pre-hospital TCA (7.0%) (P < 0.001). The survival rates varied between years, and we found no temporal trend (Fig. 2).\n\nFig. 2Temporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\n\nTemporal trends in traumatic cardiac arrest at a Swedish level 1 trauma centre during 2011–2020. (A) Caseload and survivors. (B) Total survivors and patients with favourable functional outcome as defined by Glasgow Outcome Scale score of 4–5. Regression line to evaluate trend in overall survival (P = 0.66)\nData on cause of death were only available from 2013 to 2020 for 212 TCA patients (Fig. 3.). Half of these died of bleeding. When death occurred within 24 h the proportion of deaths resulting from bleeding was even higher (60.2%). Between 24 h and 30 days traumatic brain injury (TBI) emerged as a more prevalent mortality cause (43.8%). On review, seven deaths were judged potentially preventable, of which five were considered due to bleeding, one because of TBI and the last cause was unspecified.\n\nFig. 3Summary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival\n\nSummary of causes of mortality presented for 212 patients from year 2013 to 2020 grouped by time of death. MOF = Multi organ failure, TBI = Traumatic brain injury, DOA = Dead on arrival", "A majority (83.6%) of patients in the subset with out of hospital TCA (n = 256) were intubated and in 26.1% a thoracic decompression was performed in the prehospital setting. The median EMS response time was 9 min, median on scene time was 20 min and median transport time was 13 min and did not differ between groups (Table 2). Survivors compared to non-survivors received adrenaline (epinephrine) less frequently (16.7 vs. 60.1% (P < 0.001)) and in lower median amounts (0.0 (IQR 0.0–0.0) vs. 
2.0 (IQR 0.0–4.0) mg (P = 0.002)) (Table 2).\n\nTable 2Prehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrestOutcome at 30 days\nTotal cohort\n\nDead\n\nAlive\n\nP value\n(n = 256)(n = 238)(n = 18)\nBystander CPR\nYes111 (43.4%)102 (42.9%)9 (50.0%)0.93No88 (34.4%)81 (34.0%)7 (38.9%)EMS witnessed cardiac arrest33 (12.9%)31 (13.0%)2 (11.1%)Missing24 (9.4%)24 (10.1%)0 (0.0%)\nHighest competence\nNot attended by EMS3 (1.2%)3 (1.2%)0 (0.0%)0.97Advanced life support by nurse135 (52.7%)125 (52.5%)10 (55.6%)Advanced life support by physician117 (45.7%)109 (45.8%)8 (44.4%)Missing1 (0.4%)1 (0.4%)1 (0.4%)\nEMS response time (min)\nMedian [IQR]9.0 [6.0–13.0]9.0 [6.0–13.0]9.0 [6.0–13.0]0.92Missing2 (0.8%)2 (0.8%)0 (0.0%)\nOn scene time (min)\nMedian [IQR]20.0 [14.3–26.0]19.5 [15.0–26.0]21.5 [12.0-23.8]0.69Missing2 (0.8%)2 (0.8%)0 (0%)\nTransport time from scene to hospital (min)\nMedian [IQR]13.0 [9.0–18.0]13.0 [9.0–18.0]14.5 [11.3–19.0]0.31Missing2 (0.8%)2 (0.8%)0 (0.0%)\nTime from dispatch to hospital (min)\nMedian [IQR]44.0 [37.8–54.3]44.0 [34.3–54.8]46.0 [36.3–53.3]0.79Missing0 (0.0%)0 (0.0%)0 (0.0%)\nIntubation by EMS\nYes212 (82.8%)199 (83.6%)13 (72.2%)0.36No44 (17.2%)39 (16.4%)5 (27.8%)Missing0 (0.0%)0 (0.0%)0 (0.0%)\nHighest ETCO2 (kPa)\nMedian [IQR]4.0 [2.2-6.0]4.0 [1.9-6.0]5.0 [4.1–6.1]0.24Missing166 (64.8%)153 (64.3%)13 (72.2%)\nAdrenaline (Epinephrine) administered\nYes146 (57.0%)143 (60.1%)3 (16.7%)< 0.001No75 (29.3%)63 (26.5%)12 (66.7%)Missing35 (13.7%)32 (13.4%)3 (16.7%)\nAmount of adrenaline (mg)\nMedian [IQR]2.0 [0.0–4.0]2.0 [0.0–4.0]0.0 [0.0–0.0]0.002Missing41 (16.0%)38 (16.0%)3 (16.7%)\nThoracic decompression by EMS\nYes66 (25.8%)62 (26.1%)4 (22.2%)0.94No190 (74.2%)176 (73.9%)14 (77.8%)\n\nPrehospital interventions and dispatch times among the subset of 256 patients with out-of-hospital traumatic cardiac arrest", "Survivors more commonly presented to the hospital with a pulse compared to non-survivors (70.0% (n = 21) vs. 21.7% (n = 55), P < 0.001). Among survivors of prehospital TCA (n = 18), 77.8% (n = 14) had regained spontaneous circulation when arriving in the trauma unit, 11.1% (n = 2) were in PEA and the remaining 11.1% (n = 2) had no rhythm documented.\nA shockable rhythm was more frequently reported in survivors compared to non-survivors (16.7 vs. 5.5%, P = 0.04). Patients with asystole had a survival of 8.0%.\nA pupillary response was more commonly elicited in survivors compared to non-survivors (56.7% vs. 9.8%, P < 0.001).\nSurvivors exhibited higher platelet counts, higher fibrinogen, shorter activated partial thromboplastin time (APTT), lower lactate, lower base deficit, higher pH value and a lower S100b in comparison to non-survivors (Fig. 4). The highest measured lactate in a survivor was 20 mmol L− 1. Concerning lab values there were variable amounts of missing data (Supplementary Table 2).\n\nFig. 4Lab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level\n\nLab values at hospital admission. Data are presented as median and interquartile range. Difference between groups examined with Wilcoxon rank sum test. *=P < 0.05, **= P < 0.01, ***= P < 0.001, NS = Nonsignificant at 0.05 level", "Altogether, 101 TCA patients (36%) underwent resuscitative thoracotomy. 
Among these, 12 patients (11.9%) survived with a GOS score at hospital discharge of 3 (n = 8) or 4 (n = 4).\nA tamponade was released in two of the survivors (16.7%) and 12 of the non survivors (13.5%). In two survivors, during one case of extremity bleed and another of abdominal bleed, a thoracotomy was made solely to occlude the aorta.\nPatients arresting after arrival had a higher survival after thoracotomy compared to patients presenting in arrest (42.1 vs. 4.9%, P < 0.001). Median time from last sign of life to thoracotomy was shorter for in-hospital TCA compared to prehospital TCA (9.5 (IQR 5-14.75) vs. 23 (IQR 14–36) minutes, P < 0.001). For prehospital TCA, the median time from hospital arrival to start of thoracotomy was 4 (IQR 3–10) minutes. Blunt and penetrating TCAs had a comparable survival after thoracotomy (12.5 and 11.5%, P = 1), however median time to resuscitative thoracotomy from last sign of life was shorter in blunt than penetrating trauma (15.5 (IQR 8.3–24.5) vs. 25.0 (IQR 12.0–37.0) minutes, P = 0.01). Median time from latest sign of life to resuscitative thoracotomy was shorter in survivors compared to non-survivors (10 (IQR 5.0–22.0) vs. 21 (IQR 10.8–35.3) minutes (P = 0.03) (Fig. 5).\n\nFig. 5Time from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\n\nTime from last vital sign to start of thoracotomy. Dotted line marks 15 min corresponding to upper recommended limit for resuscitative thoracotomy in European Resuscitation Council’s current guidelines4. Data presented as median and interquartile range. Scatterplot superimposed to present individual cases. Comparison between groups with Wilcoxon rank sum test. *=P < 0.05\nResuscitative thoracotomy was in 58 patients performed later than the suggested 15 min since loss of vital signs (Fig. 5). Of those, four patients survived of whom all had suffered penetrating injuries. Three survivors had arrested before hospital arrival and one during initial resuscitation. One survivor with PEA and one with unknown rhythm had recordings of visible cardiac contractions upon chest opening. Another survivor who was in asystole had a pericardial tamponade relieved. The last survivor who had the longest arrest duration of 36 min before thoracotomy was also in PEA. One survivor was discharged with GOS 4 and the remaining three with GOS 3.", "This single-centre retrospective cohort study demonstrated a 10.6% 30-day survival for TCA between years 2011–2020. Most deaths were caused by bleeding and non-survivors were found more coagulopathic. We identified nine survivors with asystole as arrest rhythm. Survival after resuscitative thoracotomy was 11.9% and four among these 12 survivors breached the upper time limit of 15 min stipulated in current European guidelines [4].\nThe demonstrated survival in our series is largely in keeping with recent evidence describing survival ranging from 2.4 to 18.4% [3, 6–17]. However, direct comparison with previous outcome studies is difficult due to a wide variation in factors such as inclusion criteria, geographic settings and development of health care systems. 
The only prior survey of TCA set in Sweden quoted an overall survival of 3.7% for out-of-hospital TCA between 1990 and 2015 [3], as compared to 7% in the subset of prehospital TCA in our material. In relation to data from the Swedish Cardiac Arrest Registry, which mainly reports on medical arrests, we found a moderately lower overall survival for out-of-hospital TCA (7 vs. 11%) but a slightly better survival for in-hospital TCA (42.9% vs. 35%) during the same ten years [29].\nA favourable outcome among survivors was recorded in 23% in our study, merely about half of the 56% reported in a recent review [30], and although four patients survived after a late thoracotomy, only one had a favourable outcome. The rate of severe disability in survivors is rather high. However, this finding must be interpreted in light of the fact that the GOS score at discharge in multi-trauma is often impacted by concomitant traumatic injuries affecting whole-body function, and hence does not purely reflect cerebral performance. Later improvement of GOS may have occurred but unfortunately long-term data were not available.\nBleeding was the predominant early cause of death, surpassed by TBI after 24 h of care, in line with previous literature [6, 31]. Most preventable deaths were also judged to be caused by bleeding. We furthermore found non-survivors to have more deranged coagulation tests. Trauma-induced coagulopathy correlates with increased mortality [32], especially in severely injured patients [33], and can occur in TBI even in the absence of major haemorrhage [34]. This emphasises the need to swiftly control bleeding and initiate massive transfusion, as stressed in present guidelines [4].\nIn our survey, survivors were reported with each arrest rhythm, including 9 patients with asystole. Primary rhythm has been proposed as a triage tool where resuscitation is withheld in asystolic TCA [35–37]. However, contrasting studies have shown that survival in asystole is possible [7], even with complete neurological recovery [38]. These findings suggest a poor prognostic power of absent electrical activity as a stand-alone criterion to stop resuscitation. Moreover, we noticed that survivors more often recorded a shockable rhythm, which in a review was shown to be prognostic for survival [30]. However, it is still possible that in cases with a shockable rhythm the cardiac arrest preceded the trauma and was actually medical in origin.\nIn the subgroup of prehospital TCA, non-survivors more often received adrenaline, and in higher doses. In a recent meta-analysis it was concluded that adrenaline use in TCA predicts return of spontaneous circulation (ROSC) while decreasing the odds of survival [30]. Adrenaline is known to impair cerebral microcirculation and increase myocardial oxygen demand [39], which in a hypovolemic low-flow state might worsen outcome. Additionally, it is also plausible that this correlation is confounded by adrenaline acting as a proxy for longer arrest times.\nIn our study, survival after resuscitative thoracotomy was 11.9%, which is slightly higher than the 7.8% in a recent review [18]. This was despite 57% of thoracotomies being carried out later than 15 min from loss of vital signs [4]. Nevertheless, we found that survivors had shorter times from arrest to resuscitative thoracotomy, underscoring the need for a system facilitating fast transport times and instant access to surgical competence [25]. This has previously been established in a military context, where shorter time intervals (6.2 vs. 
17.7 min for survivors and non-survivors respectively) yielded an even higher overall survival of 21.5% [20].\nInterestingly, we found a similar survival after resuscitative thoracotomy between blunt and penetrating injuries, which contradicts previous studies reporting a poorer outcome after blunt trauma [36, 40, 41]. This may be explained by the shorter time to thoracotomy in blunt compared to penetrating victims in our material, conceivably influenced by local protocols recommending resuscitative thoracotomy within 10 min in blunt but up to 15 min for penetrating trauma.\nSurprisingly, four of the 12 survivors after resuscitative thoracotomy breached the 15-minute restriction set in recent European guidelines [4]. When the futility of resuscitative thoracotomy in TCA was previously investigated, no survivors were found with prehospital CPR times exceeding 15 min [42]. It is certainly pertinent to identify potential survivors so that resources are not misdirected. Firstly, all four survivors suffered penetrating trauma, which is known to have a better outcome after thoracotomy [36, 40, 41]. Moreover, focused assessment with sonography in trauma (FAST) has shown promising prognostic value in TCA, with a high sensitivity for survival [30, 43]. The two survivors with cardiac motion, possibly in a low-flow state rather than true arrest, would expectedly have been identified using FAST. Specifically, by using the protocol suggested by Inaba and co-workers, where absence of both contractions and pericardial effusion on FAST carried a 0% survival chance, thoracotomy would have been regarded futile in only 1/4 survivors [44]. An upper boundary of 15 min for resuscitative efforts in TCA has also been challenged in four previous papers. Cumulatively, 16% of survivors in these studies were in violation of the 15-minute limit [4]; 7/68 in London [17], 1/4 in Victoria [45], 3/14 in Seattle [7] and 4/6 in Taiwan [46]. These combined observations mandate caution in applying this time-related criterion.\n[SUBTITLE] Strengths and limitations [SUBSECTION] The key strength of this study is that we report on both trauma-related and arrest specific variables with complete coverage of outcome parameters. Limitations of this study include those inherent to a retrospective database review. In some areas the amount of missing data was not minor, especially regarding lab values, and these results need to be interpreted with caution. In addition, we lack information on the proportion of patients where resuscitation was terminated on scene.", "The key strength of this study is that we report on both trauma-related and arrest specific variables with complete coverage of outcome parameters. Limitations of this study include those inherent to a retrospective database review. In some areas the amount of missing data was not minor, especially regarding lab values, and these results need to be interpreted with caution. 
In addition, we lack information on the proportion of patients where resuscitation was terminated on scene.", "Our data support that survival after TCA is possible and that aggressive care is justified, particularly directed at managing bleeding. Determining futility in TCA is complex and although current treatment algorithms mostly perform adequately, our study demonstrates exceptions where patients survived outside of guidelines.", "[SUBTITLE] [SUBSECTION] Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 2", "Below is the link to the electronic supplementary material.\n\nSupplementary Material 1\n\nSupplementary Material 2" ]
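A note on the statistics reported in the record above: the survivor versus non-survivor contrasts (lab values in Fig. 4, times to thoracotomy in Fig. 5) are summarized as medians with interquartile ranges and compared with the Wilcoxon rank sum test. What follows is a minimal, hypothetical Python/SciPy sketch of that kind of two-group comparison; it is not the authors' analysis code, and every number in it is an invented placeholder rather than study data.

# Minimal sketch of a two-group comparison of a skewed variable, in the spirit of the
# survivor vs. non-survivor contrasts above. All numbers are invented placeholders.
import numpy as np
from scipy import stats

survivors = np.array([2.1, 3.4, 1.8, 4.0, 2.9])            # hypothetical admission lactate values (mmol/L)
non_survivors = np.array([6.2, 8.1, 5.5, 9.3, 7.4, 10.0])  # hypothetical

def median_iqr(x):
    # Matches the "median [IQR]" presentation used in the tables and figure captions above
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    return med, q1, q3

# Two-sided Mann-Whitney U test, the rank-based test commonly reported as the
# Wilcoxon rank sum test for two independent samples
stat, p_value = stats.mannwhitneyu(survivors, non_survivors, alternative="two-sided")

print("survivors median [IQR]:", median_iqr(survivors))
print("non-survivors median [IQR]:", median_iqr(non_survivors))
print("U = %.1f, P = %.4f" % (stat, p_value))

Each of the median [IQR] comparisons in the record above can be read as the output of a test of this form, with the grouping being 30-day survival.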
[ "introduction", null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, "conclusion", "supplementary-material", null ]
[ "Traumatic cardiac arrest", "Resuscitative thoracotomy", "Resuscitation", "Trauma", "Outcome." ]
Perception of pediatric residents from a tertiary hospital in the city of México regarding their training during the COVID-19 pandemic.
36253812
On March 11, 2020, the World Health Organization (WHO) declared the novel coronavirus (COVID-19) outbreak a global pandemic, which changed the residents' teaching and learning process. The purpose of this study was to determine residents' satisfaction and impressions on their training during the pandemic in a tertiary pediatric hospital.
BACKGROUNDS
This was a descriptive cross-sectional study. An online survey was designed to determine residents' demographic and personal characteristics, as well as their perception about the theoretical and practical training, as well as about their emotional situation. The analysis separated medical students from surgical students in order to identify any differences existing between these groups, for which χ2 was calculated.
METHODS
Overall, 148 of 171 residents (86.5%) responded to the questionnaire; 75% belonged to the medical specialty and 25% to the surgical specialty. Statistically significant differences were found in terms of those training aspects they were concerned about during the pandemic (p < 0.001) and about the difficulties associated with online learning (p = 0.001). Differences were also found regarding their satisfaction toward the time needed to complete their thesis (p = 0.059) and activities outside the hospital (p = 0.029). Regarding their degree of satisfaction in general, most medical specialty students felt slightly satisfied (43.2%) and surgical specialty students felt mostly neutral (37.8%). Regarding their feelings about their mental health, statistically significant differences were found between both groups (p = 0.038) although both groups reported the same percentage of overall dissatisfaction (2.7%) in this area.
RESULTS
The COVID-19 pandemic has brought significant challenges to medical education systems. Lack of practice in decision-making and maneuver execution are concerns for residents and may affect their future professional performance.
CONCLUSION
[ "COVID-19", "Child", "Cross-Sectional Studies", "Humans", "Internship and Residency", "Mexico", "Pandemics", "Perception", "Surveys and Questionnaires", "Tertiary Care Centers" ]
9575638
Introduction
On March 11, 2020, the World Health Organization (WHO) declared the novel coronavirus (COVID-19) outbreak, originally described in Wuhan, in the Chinese province of Hubei, a pandemic. Since that date, the general belief was that this new disease would put the society and health systems to the test [1]. In México, the first imported cases were described on February 28 and local transmission was confirmed on March 24, 2020 [2]. It soon became clear that this disease is highly contagious, and that the exponential increase of infections caused hospital saturation and resource scarcity in terms of staff and medical equipment, as well as shortage of intensive care beds and ventilators. In order to tackle these problems, hospital reconversion, that is, the process by which different hospitals prepare for the care of patients during a health crisis (in this case, the one caused by COVID-19), was established in China originally and later on in Italy, the United States of America and in our own country. Hospital reconversion involves postponing outpatient specialty consultation, elective surgery, auxiliary diagnostic tests, physical medicine, and group psychological consultation, among other measures [3]. On March 17, 2020, the Association of American Medical Colleges recommended the suspension of all activities involving direct contact between patients and medical students due to the COVID-19 pandemic [4]. The need to reduce hospital overcrowding led to a decrease in the number of residents at their teaching sites [5], probably affecting residents in their final years of specialty the most [6]. In our country, the health system had to face the pandemic containment by making use of all available resources, both material and human. Our Institute was reconverted into a pediatric COVID-19 hospital, which implied the redistribution of residents to services related with the care of COVID-19 patients, with the consequent modification of their academic activities [7]. Among the academic activities implemented during the pandemic, online learning or e-learning stands out. The definition of online learning may vary according to different organizations, but in essence, it is the use of electronic media for training and education, using the web, computers, and virtual classrooms, and developing digital content [8]. Teaching processes vary according to the type of specialty, but all of them require the acquisition of theoretical and practical skills. For this reason, exploring the current training methods worldwide, which are mainly based on online learning, is now of the utmost importance [9]. In addition to exploring how effective this learning is in comparison with traditional learning, studying its influence on the perception and satisfaction of future health professionals becomes all the more necessary [10]. Learning satisfaction could be defining as the student’s feelings and attitudes towards the education process and the perceived level of fulfillment connected to the desire to learn [11]. Satisfaction has been related to successful educational processes. It involves the expectation that students have about their learning and the quality of the perceived education service received. Exploring satisfaction is a good indicator for institutions on the effectiveness of educational programs. [12–14]. Since early 2020, concerns regarding residents’ education began to be published, as well as strategies that should be adopted in an effort to alleviate these deficiencies [15–18]. 
Subsequently, several reports were published on the results of the strategies adopted during the pandemic. However, most of them referred to the reduction of clinical activities, especially in surgical specialties [19] such as general surgery [20], neurosurgery [21], otorhinolaryngology [22], gynecology-obstetrics [23], urology [24], orthopedics-trauma [25], and interventional cardiology [26], in which emphasis is placed on the reduction or suspension of elective surgeries, visits, or outpatient consultations. Reports on the impact of the pandemic on medical specialties were scarce and, as in surgical specialties, confirmed a reduction in clinical training or described the strategies adopted, for example, in radiology [27, 28], dermatology [29], internal medicine [30] and gastroenterology [31]. Regarding residency in pediatrics or pediatric specialties, there are few publications on training and learning management in the face of the pandemic [32], or on the creation of a file of previous conferences [33]. Although there are reports on the pandemic's impact on residents, most of them refer to surgical specialties [5, 34, 35], and there are few reports on medical specialties [36]. The impact on pediatric residency or its subspecialties has been scarcely described [37, 38]. In this regard, what little evidence exists indicates that undergraduate students are satisfied with online learning, with the preclinical years being the period that benefited most, in contrast to students in clinical stages [39]. There is worldwide concern about the development of the future health professionals trained in this era, whose clinical training time has been reduced and replaced by online training [40, 41]. Therefore, the aim of this study was to determine the level of satisfaction with theoretical and practical learning, as well as the emotional well-being, of residents from a pediatric hospital converted into a COVID-19 hospital, and to analyze the differences between residents of medical specialties (MRs) and those of surgical specialties (SRs). In Mexico, residents already hold a degree as general practitioners; however, being residents of a medical or surgical specialty or subspecialty means that they have decided to continue their training, so in the hospital they are regarded as students. In the specialization process known as residency, doctors are seen as students. They follow a hierarchy given by the year of residency they are in (residencies can range from two to seven years, depending on the specialty chosen) and must follow instructions from their professors, who are doctors that work professionally in the hospital. In order to obtain the degree of specialist, residents must submit a thesis related to their area of specialty. The thesis consists of a clinical research protocol, which can range from the design and implementation of an observational study to an experimental one. The teaching-learning methodology has focused on problem-solving, with the aim that residents form habits and skills to reason critically and reflexively about health problems. In this sense, theoretical-practical training is complemented by visits to other hospitals, attendance at seminars and congresses, and presentation of clinical cases. During the pandemic, online activities were implemented. Classes were delivered online via Zoom and were complemented by seminars and presentations of clinical cases online. 
Although students have different tutors and managers (doctors that work in the hospital), according to the area of specialty they have chosen, there is a coordination team in the Teaching Directorate, which is responsible for monitoring the educational programs of each area, as well as supporting students with any academic or administrative issue related to their training.
null
null
Results
A total of 171 applications were sent and 148 residents (86.5%) responded: 111 (75%) belonged to medical specialties and 37 (25%) to surgical specialties, with 80.2% female participants and 19.8% male participants. In relation to age, the majority of participants (60.8%) were 25 to 30 years old. 88.5% of the residents were single. Regarding the year of specialty, 27.7% were in their first year, 14.2% in their second year, 14.9% in their third year and 43.2% were in a subspecialty (Table 1). Table 1. Sociodemographic characteristics of the residents of Hospital Infantil de México Federico Gómez (HIMFG); values are % (n) for medical specialties (n = 111, 75%) vs. surgical specialties (n = 37, 25%). Sex: Female 80.2 (89) vs. 59.5 (22); Male 19.8 (22) vs. 40.5 (15). Age range: 25 to 30 years 67.6 (75) vs. 40.5 (15); 31 to 35 years 30.6 (34) vs. 51.4 (19); 36 to 40 years 1.8 (2) vs. 8.1 (3). Marital status: Single 89.2 (99) vs. 70.3 (26); Married or free union 10.8 (12) vs. 29.7 (11). Year of specialty: First 33.3 (37) vs. 10.8 (4); Second 16.2 (18) vs. 8.1 (3); Third 18.9 (21) vs. 2.7 (1); Subspecialty 31.5 (35) vs. 78.3 (29). In terms of online learning during the pandemic, residents were asked 6 questions. Initially, they were asked whether they were familiar with this type of learning, finding that most of the students were not (n = 106). Regarding the type of online activities they performed, they reported that live lessons were the main activity (MRs = 85.6% and SRs = 70.3%), and regarding the importance of this method for their training, most of them described it as important (MRs = 94.6% and SRs = 83.8%). It is worth mentioning that no differences were found between MRs and SRs for these factors. However, statistically significant differences were found in two items associated with the residents' concerns regarding their preparedness for the pandemic (p ≤ 0.001), finding that the main concern for MRs was their preparedness for clinical decision-making, while, for SRs, the main reason for concern was their preparedness to treat patients (Table 2). Likewise, MRs reported greater difficulties (64%) with this type of learning than SRs (36%) (p ≤ 0.001), with technical failures being the most common difficulties. 
Table 2. Online learning (N = 148); values are % (n) for MRs (n = 111) vs. SRs (n = 37), with P-value per item. They are familiar with platforms (Zoom, Data Webinar, Google classroom…): Yes 30.6 (34) vs. 21.6 (8); No 69.4 (77) vs. 78.4 (29); P = 0.400. Types of online academic activities: Live online lessons 85.6 (95) vs. 70.3 (26); Webinars 8.1 (9) vs. 13.5 (5); Individual counseling 3.6 (4) vs. 8.1 (3); Others 2.7 (3) vs. 8.1 (3); P = 0.122. They consider e-learning an important part of their training: Yes 94.6 (105) vs. 83.8 (31); No 5.4 (6) vs. 16.2 (6); P = 0.074. Aspects of concern regarding their preparedness for the pandemic: Clinical decisions 61.3 (68) vs. 10.8 (4); Diagnosis 8.1 (9) vs. 8.1 (3); Treatment 18.0 (20) vs. 70.3 (26); Others (lack of practical and surgical skills, lack of experience) 12.6 (14) vs. 10.8 (4); P < 0.001. They experienced difficulties associated with online learning: Yes 64 (71) vs. 32.4 (12); No 36 (40) vs. 67.6 (25); P < 0.001. Main difficulties: Distraction 25.2 (28) vs. 13.5 (5); Boredom 9 (10) vs. 8.1 (3); Bad lecturers 2.7 (3) vs. 0 (0); Technical failures (connection, equipment, etc.) 30.6 (34) vs. 18.9 (7); Poor audiovisual material 2.7 (3) vs. 2.7 (1); No difficulties 29.7 (33) vs. 56.8 (21); P = 0.093. As for the degree of satisfaction with their current training process when compared to the pre-pandemic stage, it was similar (p = 0.511) between MRs and SRs; the majority of residents were slightly satisfied (SRs 37.8%) or not at all satisfied (MRs 43.2%) with their current training. The degree of satisfaction with the interaction they had with their teachers was similar in both specialties (p = 0.639), although the majority (35.1%) of SRs indicated being very satisfied, while the majority of MRs (37.8%) were neutral (Fig. 1). Fig. 1. Traditional learning vs. online learning. Regarding theoretical training, residents were asked how they felt about the academic activities organized by the teaching management, and the results in both types of specialties were mainly neutral (p = 0.697). Regarding satisfaction with the academic activities carried out in their departments, SRs were twice as satisfied as MRs (21.6% vs. 10.8%); however, no statistically significant differences were found (p = 0.327). Regarding satisfaction with activities outside the hospital, it was statistically significantly lower in MRs than in SRs (p = 0.020). In terms of self-study hours, there were no statistically significant differences between the groups; however, with regards to the time they could allocate for their thesis, statistically significant differences were found (p = 0.059). While both groups reported feeling equally dissatisfied, most of the MRs indicated feeling slightly satisfied (25.2%) and neutral (27.9%), while SRs felt more neutral (45.9%) and satisfied (35.1%). When asked about their perception regarding their knowledge to face the labor world, both groups reported feeling mostly neutral (Fig. 2). Fig. 2. Percentage of satisfaction of medical (n = 111) and surgical (n = 37) residents with their training. Regarding their practical training, when asked about their satisfaction with their manual skills, MRs were more neutral, and most SRs were slightly satisfied. It is worth mentioning that no student in either group reported feeling totally satisfied. In terms of their clinical training, MRs were less satisfied compared to SRs. Finally, regarding procedure teaching, most MRs referred to themselves as being slightly satisfied and the SRs as neutral. 
However, 4.5% of MRs were extremely satisfied while no SRs were extremely satisfied. In relation to the execution of procedures, most MRs and SRs were slightly satisfied (38.7% and 35.1% respectively), and although 6.3% of MRs were extremely satisfied versus no SRs, also in this item MRs reported a higher degree of dissatisfaction (p = 0.184). Regarding mentor supervision, SRs reported higher satisfaction than MRs, with no statistically significant differences. Comparing current external consultations versus pre-pandemic consultations, neither group was completely satisfied; 28.8% of MRs were not at all satisfied, representing 10% more than SRs. Regarding current hospital rounds versus pre-pandemic hospital rounds, there was less satisfaction among MRs. No statistically significant differences were found between groups on any of the items (Fig. 2). Finally, Fig. 3 depicts the results of the items aimed at exploring the emotional area of MRs and SRs. When asked about their degree of satisfaction with their well-being, 2.7% of MRs felt not at all satisfied, versus no SRs, and the majority of residents felt neutral. When asked about emotional support from friends, family, and colleagues, the majority of residents felt very satisfied. When exploring how they felt about their mental health, although 2.7% in both groups expressed feeling not at all satisfied in this area, statistically significant differences were found between the groups (p = 0.038), with more SRs satisfied and more MRs dissatisfied. In terms of the emotional support received from their department, MRs reported feeling less satisfied compared to SRs. Fig. 3. Percentage of satisfaction with the emotional area (medical (n = 111) and surgical (n = 37) residents).
Conclusion
This study shows residents’ perception about their training during the COVID-19 pandemic. It also reveals that, as a result of the pandemic, residents’ uncertainty when it comes to making clinical decisions or residents’ lack of clinical practice could affect their performance in the labor world. Likewise, it shows that residents of medical and surgical specialties have different degrees of satisfaction regarding their training and emotional aspects.
[ "Methods", "Limitations" ]
[ "An online survey was developed on the basis of a literature search conducted from January 2020 to March 2021 in MEDLINE, using the following MeSH terms and keywords: education, programs, medical education, medical, students, learning, e-learning, COVID-19, and SARS-CoV-2, and with filters such as human studies and indexed studies. Additionally, we searched Google Scholar and the reference lists of the articles found to identify other relevant studies. Four studies were included to select the survey items [42–45].\nThe survey was designed and administered using Google Forms and covered 4 main areas as follows: residents' demographic and personal characteristics, perception of theoretical training, perception of practical training, and perception of their own emotional situation.\nThe survey included 42 items: 13 were designed to characterize the students, 14 to explore their satisfaction with theoretical training, 11 to explore their satisfaction with practical training and 4 to explore their satisfaction with their emotional state. The survey included items combining dichotomous, 5-point Likert-scale, and multiple-choice responses.\nThe answers to the 5-point Likert scale were categorically expressed with text (Not at all satisfied, slightly satisfied, neutral, very satisfied, extremely satisfied) to explore the residents' perception of their learning process.\nIn addition, the questionnaire included an initial section with an informed consent form which residents had to fill out in order to be eligible to complete the survey. No validation process was performed for this survey. The protocol was submitted to the Research Committee of the hospital under authorization no. HIM-SR-2021-004.\nThe survey was sent through the Teaching Department via e-mail to all pediatric and pediatric specialty residents who had completed any year of their residency during March 2020 to February 2021.\nThe outcomes were analyzed using SPSS and Stata software. Descriptive statistics were expressed as percentages, and a comparison was made between residents of medical and surgical specialties to identify differences between the two groups; p-values were calculated with Fisher's exact test, and χ2 was calculated to compare the Likert-scale variables.", "Although our study adds to the knowledge on pediatric residents' perception about the impact of the COVID-19 pandemic on their education, it has some limitations. Data are based on self-reports, so they are subjective and should be interpreted with caution. This paper did not investigate their academic and professional performance or the relationship with the subject matter of the study, however interesting this could be, as that was not the purpose of this research. One potential variable is the effect of the pandemic per se on residents' perception, bearing in mind that, when this research was carried out, our country was immersed in the second wave of contagion, and vaccination against SARS-CoV-2 had just commenced.\nRegarding their well-being, it is possible that students are not aware of these aspects and therefore reported feeling neutral, contrary to what is reported in the literature; in this sense, there is a methodological limitation in not having used a validated scale to assess the students' emotional state.\nFinally, as in the rest of the world, this pandemic has brought significant challenges to medical education systems, and, as in many other areas, the true long-term impact on physician training still remains unknown. 
Medical education systems should promote the use of technologies in their educational curricula and find further technologies and strategies that allow students to continue their intellectual training and personal development even in times of adversity. The results of this study can be used by health institutions as a basis for generating strategies for organizing and managing both personnel and material resources in order to meet the needs expressed by residents." ]
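The analysis plan described in the Methods above compares categorical survey responses between medical (MRs) and surgical (SRs) residents with Fisher's exact test and uses χ2 for the Likert-scale items. What follows is a rough Python/SciPy sketch of that kind of contingency-table comparison; it is not the authors' code, the Likert counts are invented for illustration, and only the 2 x 2 counts reuse the difficulties-with-online-learning figures reported in Table 2 of the Results.

# Sketch of the two tests named in the Methods: chi-square on a 2 x k table of
# Likert responses, and Fisher's exact test on a 2 x 2 table of a dichotomous item.
import numpy as np
from scipy import stats

# Rows: MRs, SRs; columns: the five Likert categories
# (not at all satisfied ... extremely satisfied). Counts invented for illustration.
likert_table = np.array([
    [20, 30, 35, 20, 6],
    [ 4,  8, 14,  9, 2],
])
chi2, p, dof, expected = stats.chi2_contingency(likert_table)
print("chi-square = %.2f, dof = %d, P = %.3f" % (chi2, dof, p))

# Fisher's exact test on a dichotomous item: difficulties with online learning,
# using the counts reported in Table 2 (MRs: 71 yes / 40 no; SRs: 12 yes / 25 no).
table_2x2 = np.array([[71, 40],
                      [12, 25]])
odds_ratio, p_fisher = stats.fisher_exact(table_2x2)
print("odds ratio = %.2f, Fisher exact P = %.4f" % (odds_ratio, p_fisher))

Because the grouping variable is always MRs vs. SRs, every survey item in the study reduces to a 2 x k table of this form.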
[ null, null ]
[ "Introduction", "Methods", "Results", "Discussion", "Limitations", "Conclusion" ]
[ "On March 11, 2020, the World Health Organization (WHO) declared the novel coronavirus (COVID-19) outbreak, originally described in Wuhan, in the Chinese province of Hubei, a pandemic. Since that date, the general belief was that this new disease would put the society and health systems to the test [1].\nIn México, the first imported cases were described on February 28 and local transmission was confirmed on March 24, 2020 [2].\nIt soon became clear that this disease is highly contagious, and that the exponential increase of infections caused hospital saturation and resource scarcity in terms of staff and medical equipment, as well as shortage of intensive care beds and ventilators. In order to tackle these problems, hospital reconversion, that is, the process by which different hospitals prepare for the care of patients during a health crisis (in this case, the one caused by COVID-19), was established in China originally and later on in Italy, the United States of America and in our own country. Hospital reconversion involves postponing outpatient specialty consultation, elective surgery, auxiliary diagnostic tests, physical medicine, and group psychological consultation, among other measures [3].\nOn March 17, 2020, the Association of American Medical Colleges recommended the suspension of all activities involving direct contact between patients and medical students due to the COVID-19 pandemic [4]. The need to reduce hospital overcrowding led to a decrease in the number of residents at their teaching sites [5], probably affecting residents in their final years of specialty the most [6].\nIn our country, the health system had to face the pandemic containment by making use of all available resources, both material and human. Our Institute was reconverted into a pediatric COVID-19 hospital, which implied the redistribution of residents to services related with the care of COVID-19 patients, with the consequent modification of their academic activities [7]. Among the academic activities implemented during the pandemic, online learning or e-learning stands out.\nThe definition of online learning may vary according to different organizations, but in essence, it is the use of electronic media for training and education, using the web, computers, and virtual classrooms, and developing digital content [8].\nTeaching processes vary according to the type of specialty, but all of them require the acquisition of theoretical and practical skills. For this reason, exploring the current training methods worldwide, which are mainly based on online learning, is now of the utmost importance [9]. In addition to exploring how effective this learning is in comparison with traditional learning, studying its influence on the perception and satisfaction of future health professionals becomes all the more necessary [10].\nLearning satisfaction could be defining as the student’s feelings and attitudes towards the education process and the perceived level of fulfillment connected to the desire to learn [11].\nSatisfaction has been related to successful educational processes. It involves the expectation that students have about their learning and the quality of the perceived education service received. Exploring satisfaction is a good indicator for institutions on the effectiveness of educational programs. [12–14].\nSince early 2020, concerns regarding residents’ education began to be published, as well as strategies that should be adopted in an effort to alleviate these deficiencies [15–18]. 
Subsequently, several reports were published on some results about the strategies adopted during the pandemic. However, most of them referred to the reduction of clinical activities, especially in surgical specialties [19] such as general surgery [20], neurosurgery [21], otorhinolaryngology [22], gynecology-obstetrics [23], urology [24], orthopedics-trauma [25], and interventional cardiology [26], in which emphasis is placed on the reduction or suspension of elective surgeries, visits, or outpatient consultations. Reports on the impact of the pandemic on medical specialties were scarce and, as in surgical specialties, confirmed a reduction in the clinical training or in the strategies adopted, for example, in radiology [27, 28], dermatology [29], internal medicine [30] and gastroenterology [31]. Regarding residency in pediatrics or pediatric specialties, there are few publications on training and learning management in the face of the pandemic [32], or on the creation of a file of previous conferences [33].\nAlthough there are reports on the pandemic’s impact on residents, most of them refer to surgical specialties [5, 34, 35], and there are few reports on medical specialties [36]. The impact on pediatric residency or its subspecialties has been scarcely described [37, 38]. In this regard, this lack of evidence points out to the fact that undergraduate students are satisfied with online learning, being the preclinical years the most benefited period, in contrast to students in clinical stages [39].\nThere is concern in the world about the development of the future health professionals trained in this era, whose clinical training time has been reduced and replaced by online training [40, 41]. Therefore, the aim of this study was to determine the level of satisfaction with theoretical and practical learning, as well as the emotional well-being, of residents from a pediatric hospital converted into a COVID-19 hospital, and to analyze the differences between residents of medical specialties (MRs) and those of surgical specialties (SRs).\nIn Mexico residents have the degree of graduated general practitioners however, being residents of some specialty or medical or surgical subspecialty means that they have decided to continue their training so in the hospital they are seen as students.\nIn the specialization process known as residency, doctors are seen as students. They obey a hierarchy given by the degree of residence they are studying (it can range from two to seven years, depending on the specialty they have chosen) and must follow instructions from their professors, which are doctors that work professionally in the hospital.\nIn order to obtain the degree of specialist, residents must submit a thesis related to their area of specialty. The thesis consists of a clinical research protocol, which can be from the design and implementation of an observational or even experimental study.\nThe teaching-learning methodology has focused on problem-solving with the aim that residents form habits and skills to reason critically and reflexively about health problems. In this sense, theoretical-practical training is complemented by visits to other hospitals, attendance at seminars and congresses, and presentation of clinical cases.\nDuring the pandemia, online activities were implemented. 
The classes were online via zoom and were complemented by seminars and presentations of clinical cases online.\nAlthough students have different tutors and managers (doctors that work in the hospital), according to the area of specialty they have chosen, there is a coordination team in the Teaching Directorate, which is responsible for monitoring the educational programs of each area, as well as supporting students with any academic or administrative issue related to their training.", "An online survey was conducted based on literature search, which was conducted from January 2020 to March 2021 in MEDLINE, using the following MeSH terms and keywords: education, programs, medical education, medical, students, learning, e-learning, COVID-19, and SARS-CoV-2 and with filters such as human studies and indexed studies. Additionally, we searched Google Scholar and the reference lists of the articles found to identify other relevant studies. Four study were included to select the surveys items. [42–45]\nThe survey was elaborated and applied using Google Forms and covered 4 main areas as follows: residents’ demographic and personal characteristics, perception of theoretical training, perception of practical training, and perception of their own emotional situation.\nThe survey included 42 items: 13 were designed to characterize the students, 14 to explore their satisfaction with theoretical training, 11 to explore their satisfaction with practical training and 4 to explore their satisfaction with their emotional state. The survey included items combining dichotomous, 5 points Likert-scale, and multiple-choice responses.\nThe answers to the 5 points Likert scale were categorically expressed with text (Not at all satisfied, slightly satisfied, neutral, very satisfied, extremely satisfied) to explore the resident´s perception of their learning process.\nIn addition, the questionnaire included an initial section with an informed consent form which residents had to fill out in order to be eligible to complete the survey. No validation process was performed for this survey. The protocol was submitted to the Research Committee of the hospital under authorization no. HIM-SR-2021-004.\nThe survey was sent through the Teaching Department via e-mail to all pediatric and pediatric specialty residents who had completed any year of their residency during March 2020 to February 2021.\nThe outcomes were analyzed using SPSS and Stata software. Descriptive statistics were used as percentages and a comparison was made between residents of medical and surgical specialties to identify differences between both groups, for which the p-value was calculated with Fisher’s exact test and to compare the variables of the Likert scale, χ2 was calculated.", "A total of 171 applications were sent and 148 residents (86.5%) responded: 111 (75%) belonged to medical specialties and 37 (25%) to surgical specialties, with 80.2% female participants and 19.8% male paticipants. In relation to age, the majority of participants (60.8%) were 25 to 30 years old. 88.5% of the residents were single. 
Regarding the year of specialty, 27.7% were in their first year, 14.2% in their second year, 14.9% in their third year and 43.2% were in a subspecialty (Table 1).\n\nTable 1Sociodemographic characteristics of the residents of Hospital Infantil de México Federico Gómez (HIMFG)Medical Specialtiesn = 111 (75%)%(n)Surgical Specialtiesn = 37 (25%)%(n)\nSex\nFemale80.2 (89)59.5 (22)Male19.8 (22)40.5 (15)\nAge Range\n25 to 30 years67.6 (75)40.5 (15)31 to 35 years30.6 (34)51.4 (19)36 to 40 years1.8 (2)8.1 (3)\nMarital status\nSingle89.2 (99)70.3 (26)Married or free union10.8 (12)29.7 (11)\nYear of specialty\nFirst33.3 (37)10.8 (4)Second16.2 (18)8.1 (3)Third18.9 (21)2.7 (1)Subspecialty31.5 (35)78.3 (29)\n\nSociodemographic characteristics of the residents of Hospital Infantil de México Federico Gómez (HIMFG)\nIn terms of online learning during the pandemic, residents were asked 6 questions.\nInitially, they were asked whether they were familiar with this type of learning, finding that most of the students were not (n = 106). Regarding the type of online activities they performed, they reported that live lessons were the main activity (MRs = 85.6% and SRs = 70.3%), and regarding the importance of this method for their training, most of them described it as important (MRs = 94.6% and SRs = 83.8%). It is worth mentioning that no differences were found between MRs and SRs for these factors.\nHowever, statistically significant differences were found in two items associated with the residents’ concerns regarding their preparedness for the pandemic (p ≤ 0.001), finding that the main concern for MRs was their preparedness for clinical decision-making, while, for SRs, the main reason for concern was their preparedness to treat patients (Table 2).\nLikewise, MRs reported greater difficulties (64%) with this type of learning than SRs (36%) (p ≤ 0.001), with technical failures being the most common difficulties.\n\nTable 2Online learning (N = 148)\nMRs\n\nn 111\n\n% (n)\n\nSRs\n\nn = 37\n\n% (n)\n\nP-value\n\nThey are familiar with platforms (Zoom, Data Webinar, Google classroom…)\nYes30.6 (34)21.6 (8)No69.4 (77)78.4 (29)0.400\nTypes of online academic activities\nLive online lessons85.6 (95)70.3 (26)Webinars8.1 (9)13.5 (5)Individual counseling3.6 (4)8.1 (3)0.122Others2.7 (3)8.1 (3)\nThey consider e-learning an important part of their training\nYes94.6 (105)83.8 (31)No5.4 (6)16.2 (6)0.074\nAspects of concern regarding their preparedness for the pandemic\nClinical decisions61.3 (68)10.8 (4)Diagnosis8.1 (9)8.1 (3)Treatment18.0 (20)70.3 (26)\n< 0.001\nOthers (Lack of practical and surgical skills, lack of experience)12.6 (14)10.8 (4)\nThey experienced difficulties associated with online learning\nYes64 (71)32.4 (12)\n< 0.001\nNo36 (40)67.6 (25)\nMain difficulties\nDistraction25.2 (28)13.5 (5)Boredom9 (10)8.1 (3)Bad lecturers2.7 (3)0 (0)Technical failures (connection, equipment, etc.)30.6 (34)18.9 (7)0.093Poor audiovisual material2.7 (3)2.7 (1)No difficulties29.7 (33)56.8 (21)\n\nOnline learning (N = 148)\nAs for the degree of satisfaction with their current training process when compared to the pre-pandemic stage, it was similar (p = 0.511) between MRs and SRs; the majority of residents being slightly satisfied (SRs 37.8%) and not at all satisfied (MRs 43.2%) with their current training.\nThe degree of satisfaction with the interaction they had with their teachers was similar in both specialties (p = 0.639), although the majority (35.1%) of SRs indicated being very satisfied, while the majority of MRs (37.8%) 
were neutral (Fig. 1).\n\nFig. 1Traditional learning vs. Online learning\n\nTraditional learning vs. Online learning\nRegarding theoretical training, residents were asked how they felt about the academic activities organized by the teaching management, and the results in both types of specialties were mainly neutral (p = 0.697).\nRegarding satisfaction with the academic activities carried out in their departments, SRs were twice as satisfied as MRs (21.6% vs. 10.8%); however, no statistically significant differences were found (p = 0.327). Regarding satisfaction with activities outside the hospital, it was statistically significant lower in MRs than in SRs (p = 0.020).\nIn terms of self-study hours, there were no statistically significant differences between the groups; however, with regards to the time they could allocate for their thesis, statistically significant differences were found (p = 0.059). While both groups reported feeling equally dissatisfied, most of the MRs indicated feeling slightly satisfied (25.2%) and neutral (27.9%), while SRs felt more neutral (45.9%) and satisfied (35.1%). When asked about their perception regarding their knowledge to face the labor world, both groups reported to feel mostly neutral (Fig. 2).\n\nFig. 2Percentage of satisfaction of medical (n = 111) and surgical (n = 37) residents with their training\n\nPercentage of satisfaction of medical (n = 111) and surgical (n = 37) residents with their training\nRegarding for their practical training, when asked about their satisfaction with their manual skills, MRs were more neutral, and most SRs were slightly satisfied. It is worth mentioning that no student in either group reported feeling totally satisfied. In terms of their clinical training, MRs were less satisfied compared to SRs. Finally, regarding procedure teaching, most MRs referred to themselves as being slightly satisfied and the SRs as neutral. However, 4.5% of MRs were extremely satisfied while no SRs were extremely satisfied.\nIn relation to the execution of procedures, most MRs and SRs were slightly satisfied (38.7% and 35.1% respectively), and although 6.3% of MRs were extremely satisfied versus no SRs, also in this item, MRs reported a higher degree of dissatisfaction (p = 0.184). Regarding mentor supervision, SRs reported higher satisfaction than MRs, with no statistically significant differences.\nComparing current external consultations versus pre-pandemic consultations, neither group was completely satisfied; 28.8% MRs were not at all satisfied, representing a 10% more than SRs. Regarding current hospital rounds versus pre-pandemic hospital rounds, there was less satisfaction among MRs. No statistically significant differences were found between groups on any of the items (Fig. 2).\nFinally, Fig. 3 depicts the results of the items aimed at exploring the emotional area of MRs and SRs. 
When asked about their degree of satisfaction with their well-being, a 2.7% of MRs felt not at all satisfied, with no SRs in this item, and the majority of residents felt neutral.\nWhen asked regarding emotional support from friends, family, and colleagues, the majority of residents felt very satisfied.\nWhen exploring how they felt about their mental health, although 2.7% in both groups expressed feeling not at all satisfied in this area, statistically significant differences were found between the groups (p = 0.038), with a greater number of satisfied SRs against dissatisfied MRs.\nIn terms of the emotional support received from their department, MRs reported feeling less satisfied compared to SRs.\n\nFig. 3Percentage of satisfaction with the emotional area (Medical (n = 111) and Surgical (n = 37) residents)\n\nPercentage of satisfaction with the emotional area (Medical (n = 111) and Surgical (n = 37) residents)", "Due to the COVID-19 pandemic confinement, educational activities were disrupted at all levels. Hospital reconversion has affected care and teaching activities, with the subsequent impact on resident training.\nSlightly more than half of the residents (56.7%) were studying pediatrics and the rest were studying a subspecialty. A total of 75% were studying medical specialties or subspecialties and 25% surgical specialties, which makes our population a mixture of both disciplines and allowed us to compare some results between the two of them.\nHowever, none of these reports refer to residents’ self-perception about their training, both theoretical and practical, during the first year of the pandemic. This was the objective of our study which is, to our knowledge, the first to report residents’ perception of the impact of the pandemic on their training process as pediatricians or pediatric specialists. Our survey was answered by 86.5% of the 171 residents, and therefore, it adequately reflects the perceptions in an exclusively pediatric hospital converted into a COVID hospital. Health care and academic activities changed drastically; outpatient consultations, hospitalizations, and elective surgeries were cancelled; face-to-face patient care was reduced and many of the residents had to care for COVID-19 patients; and external rotations and congresses were cancelled. Almost all academic sessions were delivered online despite the fact that 71.6% of our residents were not familiar with distance learning platforms. However, it is important to start considering online learning a formal didactic resource, since, as indicated by Mian et al., it has proven to be of great help for theoretical training and allows continuing education even in emergency situations [46].\nAs in the reports by Tapper J et al., the residents in this study reported connectivity issues and difficulties accessing the platforms as the main limiting factors to continue their training. This difficulty was reported by 56% of our residents, especially MRs (64% vs. 32% SRs, p = 0.001), with technical failure being the predominant factor in 27%, half of the number reported by Dasgupta et al. with regards to a group of ophthalmology residents [47]. This is probably related to the fact that, in Latin America, only 1 in 2 households has broadband Internet connection [48], and therefore, appropriate resources and conditions for online studying should be implemented [49, 50]. 
Other difficulties were distraction during lessons (22.2%) and boredom (8.7%), which is consistent with what was published for emergency and internal medicine residents, who were more engaged in other activities such as literature searches, answering e-mails, and even exercising during online lectures. Engaging the attention of learners is critical in online education. In this regard, some options include asking group questions via social networks, conducting small group sessions, and even having more game-like activities [51].\nIt is striking that practically half of our residents reported being slightly or not at all satisfied with their current training compared to the pre-pandemic period (Fig. 1), as has been reported in orthopedic residents [52]. In this regard, 89.7% of the residents surveyed by Guo et al. think that their education was negatively affected by the pandemic [22]. On the contrary, with regards to webinars, neurosurgery residents think that they are more useful than face-to-face conferences [53] and half of the ophthalmology residents surveyed think that they should continue in the post-pandemic stage [49]. One of the most important concerns expressed by the MRs in our study was related with making clinical decisions, whereas the SRs expressed greater concern about their treatment-oriented training (70.3%). This is consistent with other reports [22, 54], perhaps the surgical nature of their specialty, make SR residents more focused on such treatments.\nWhen evaluating residents’ perception of their theoretical training, about 50% were satisfied with the academic activities, but MRs were significant more satisfied than SRs with out-of-hospital activities such as conferences or webinars. This is probably related to the fact that out-of-hospital activities of SRs are often associated with the execution of other surgical procedures in the out-of-hospital rotations that were cancelled. Regarding the time available for thesis preparation, MRs were significantly less satisfied than SRs, perhaps because they had more out-of-hospital activities. Regarding time for study, most of our residents reported being very satisfied (35.1% of MRs) or neutral (45.9% of SRs), contrary to what was reported by orthopedic and trauma residents, who reported a 46% restriction in time for self-study [25].\nIn relation to practical training, high levels of dissatisfaction were predominant, generally above 40%, with MRs being less satisfied and with no statistically significant differences with respect to SRs. Many studies reported a decrease in practical activities in surgical specialties [20, 24, 26, 55, 56], but our results prove that residents in predominantly medical specialties, such as pediatrics and its associated medical specialties, were also affected in terms of practical learning. We must explore new methods of academic training and, in this regard, social networking is a tool that may have been underutilized [19], although it is undoubtedly on the rise as a result of the pandemic [57]. Other strategies that abound in practical training include files of radiological studies or clinical cases that can be constantly consulted [33], the use of tele-dermatology [29] or surgical simulators.\nRegarding satisfaction with their emotional well-being and mental health, it is to be noted that 89.5% of the total number of residents reported having cared for patients with COVID-19, of whom 27.7% got infected. 
Facing a new disease with less medical knowledge, having to wear special protective equipment, managing a high patient load, facing life-and-death decisions and desires, very often through video calls, is a source of psychological stress [30]. A commonly reported fact is the fear of residents getting the disease themselves [30, 58–60], which was corroborated in our study, since 50.7% of the residents were constantly afraid of getting the disease and 26.4% felt occasional fear.\nThis undoubtedly has an impact on residents’ mental health. Other studies have reported conditions such as anxiety about their future [23], mental health alterations [21], decreased quality of life [52] and burnout [61]. In this study, MRs were more frequently slightly or not at all satisfied with their mental health compared to SRs (27% vs. 5.4%, p = 0.038). This is something that should be confirmed in other series comparing MRs with SRs and, if confirmed, the cause should be investigated in greater depth. Some possible strategies could be that directors of residency programs in pediatrics, and probably other specialties, would consider reducing extremely long on-call shifts and changing to shorter schedules followed by rest, at least while the pandemic contingency is maintained [62].", "Although our study adds to the knowledge on pediatric residents’ perception about the impact of the COVID-19 pandemic on their education, it has some limitations. Data are based on self-reports, so they are subjective and should be interpreted with caution. This paper did not investigate their academic and professional performance or the relationship with the subject matter of the study, however interesting this could be, as that was not the purpose of this research. One potential variable is the effect of the pandemic per se on residents’ perception, bearing in mind that, when this research was carried out, our country was immersed in the second wave of contagion, and vaccination against SARS-Cov-2 had just commenced.\nRegarding their wellbeing, it´s possible students are not aware of these aspects, so they reported as neutral, contrary reported in the literature, in this sense, there is a methodological limitation by not having used a validated scale to know the emotional aspects of the students.\nFinally, as in the rest of the world, this pandemic has brought significant challenges to medical education systems, and, as in many other areas, the true long-term impact on physician training still remains unknown. Medical education systems should promote the use of technologies in their educational curricula and find further technologies and strategies that allow students to continue their intellectual training and personal development even in times of adversity. The results of this study can be used as a basis for the generation of personal and material resource organization and management strategies, both, of health institutions in order to meet the needs expressed by residents.", "This study shows residents’ perception about their training during the COVID-19 pandemic. It also reveals that, as a result of the pandemic, residents’ uncertainty when it comes to making clinical decisions or residents’ lack of clinical practice could affect their performance in the labor world. Likewise, it shows that residents of medical and surgical specialties have different degrees of satisfaction regarding their training and emotional aspects." ]
[ "introduction", null, "results", "discussion", null, "conclusion" ]
[ "COVID-19", "Medical education", "Online education" ]
Dietitians' perspectives on challenges and prospects for group-based education to adults with type 1 diabetes - a qualitative study.
36253850
Type 1 diabetes (T1DM) is an autoimmune disorder that can have short- and long-term adverse effects on health. Dietitians working in diabetes care offer specialist, evidence-based advice to people with T1DM and provide education in either individual or group settings. The purpose of this study was to explore dietitians' perception of, and role in, group-based education as well as prospects for development.
BACKGROUND
This was a qualitative descriptive study conducted in Sweden using convenience sampling of dietitians working in adult diabetes care. Semi-structured interviews were conducted with the participants, and the data were analysed using a content analysis approach.
METHODS
Ten dietitians with a median experience of 14.5 years in diabetes care were interviewed. The informants were all appreciative of facilitating group-based education and perceived that it was beneficial for people with T1DM to be part of group processes, but they also suggested that there were challenges for their professional role. The main challenges reported were adjusting the depth and complexity of the information provided and the limited ability to individualize the education sessions in a heterogeneous group. None of the dietitians reported performing pre-assessments or follow-up audits on the group-based education.
RESULTS
There was great engagement from the dietitians, but they identified the lack of a framework that addresses the challenges of group-based education. The dietitians described examples of person-centred care while facilitating group-based education, which may benefit people with T1DM. Based on the results, it would be valuable to explore the pedagogic training level of Swedish dietitians and potential barriers to their ability to facilitate group-based education. We suggest that a framework for group-based education should be explored together with patient representatives to optimize the care given, ensure cost-effectiveness, improve clinical outcomes and quality of life, and provide equally accessible care for people with T1DM.
CONCLUSION
[ "Adult", "Humans", "Diabetes Mellitus, Type 1", "Nutritionists", "Qualitative Research", "Self Care", "Patient Education as Topic", "Group Processes", "Attitude of Health Personnel" ]
9576129
null
null
null
null
Results
The ten informants had worked between one and 35 years in a dietetic role, and time spent in the diabetes field ranged from six months to 35 years, with a median of 14.5 years. All the informants were involved in group-based education to adults with T1DM, but with varied frequency from one to 22 sessions per year. Six informants had sessions without any other health care professional attending, while four of the informants described that their group-based education was delivered in a multidisciplinary setting. For two of those, the multidisciplinary setting meant they had a diabetes specialist nurse attending some or all of the education sessions, while two informants had different professional categories attending during the sessions. Those (n = 4) who took part in multidisciplinary education rather than working alone experienced more structured and goal-oriented education plans and routines for follow-up. From the content analyses, four categories were identified with three to six subcategories each (Table 3). Table 3. Categories and sub-categories from content analysis of dietitians’ experiences regarding group-based education. Categories: The dietitian’s role and experience of group-based education; Participants’ engagement in group-based education; Structure, goals, and participation in group-based education; The dietitian’s perspective on group-based education and its prospects. Sub-categories: Facilitating groups; The dietitian’s role; Individual requirements; Group composition; Group processes; The participants’ role; Participants’ experience; Participation and knowledge acquisition; Participants’ prerequisite; Equal care; Structure and organization; Referral process; Goal setting; Person-centred care; Prospects; Strengths with group-based education; Weaknesses with group-based education. The results are presented in the following order: 1) The dietitian’s role and experience of group-based education; 2) Participants’ engagement in group-based education; 3) Structure, goals, and participation in group-based education; and 4) The dietitian’s perspective on group-based education and its prospects. [SUBTITLE] The dietitian’s role and experience of group-based education [SUBSECTION] The informants were all appreciative of facilitating group-based education and described how they perceived it benefited the participants, but did suggest there were challenges regarding facilitating a group. As one informant stated,” It is a super wonderful opportunity. Patients can meet and exchange experiences. You start with the information but then they usually run the group themselves” (Dietitian 2). Seven informants particularly mentioned that their main challenge was to adjust the level of depth and complexity of the information provided. As one informant described, “One of the hardest things is to adjust when they are at different levels” (Dietitian 4).
However, one informant mentioned the complexity of different levels of knowledge among participants but described using it as an opportunity for participants to teach each other, as the informant found it beneficial when they received information from peers. Most of the informants also raised that people with T1DM are a heterogeneous group, and they found it challenging when they are brought together as one. With regards to facilitating a group, the informants described several situations where they were required to draw on their pedagogical skills when having groups with talkative participants or inactive participants. One informant also reported being doubted on the information provided and described,” I was very questioned there, sometimes it was tough, mentally stressful to deliver those education sessions” (Dietitian 5). The methods used to effectively gather the group’s focus were asking questions, practical examples or workshops, or acknowledging participants who were active in group discussions. Five of the informants had some form of continuous professional education in therapeutic techniques such as Motivational Interviewing or Cognitive Behaviour Therapy, or had pedagogical experience from teaching, but all informants described that they perceived their work experience had led them to develop skills to facilitate groups. One informant described a previous fear of public speaking, but with time had developed the necessary skills to cope.
[SUBTITLE] Participants’ engagement in group-based education [SUBSECTION] The informants exemplified how they perceived that education and the group processes that occur enhanced the solidarity and cooperation between participants in the form of sharing experiences, sharing knowledge, peer support and a sense of belonging. As one informant stated,” It is also that the patients can meet each other. It is a big difference when I tell them -” you are not alone, others feel like this too” compared to when someone with diabetes tell them that” (Dietitian 4). All the informants mentioned how they felt there was a need for adults with T1DM to meet peers in the same situation, and the strengthening impact it can have on self-management of the condition. However, they also reported that, in their experience, the group’s composition was decisive for the group dynamic and peer support. The informants shared their observations that the outcome depended on a group’s composition with regard to age, pre-understanding, treatment, stage, or type of diabetes condition. As one dietitian described, “I have felt that it is difficult to get homogeneous groups, so that the groups are beneficial because they [the participants] have been on such different levels it has not been good” (Dietitian 7). Having a randomly composed group was perceived as less beneficial, and in the worst case a risk of a negative experience. As stated by one informant,” If there is someone with lots of long-term complications who gets caught up in that, it can have a discouraging effect instead” (Dietitian 8). [SUBTITLE] Structure, goals, and participation in group-based education [SUBSECTION] Six informants described having group-based education on their own, while two had a diabetes nurse present and two were part of a multi-professional team providing education.
When the education was multidisciplinary, the informants described clearer goal setting with regard to the clinic’s goals, such as learning to count carbohydrates or lowering Hba1c. The six informants who were working independently with their group education did not describe having developed goals to the same extent. However, none of the informants reported having a clear discussion about goals with the participants in relation to the group-based education. One informant described that the lack of discussion on goals was due to the assumption that the participants knew why they attended the group-based education, or as another informant reported, “We haven’t discussed goals with regards to what this should lead to, it’s more a feeling of what they might need so it’s not as structured as it should be, it really isn’t” (Dietitian 8). None of the informants reported that they worked actively with participants’ individual goal setting and action planning either before, during or after their sessions. One informant described,” Goal setting, it is really only for us that the patient has participated in the education session. We don’t follow up with any personal goals. I can feel it is quite deficient and one must ask what it gives” (Dietitian 3). None of the informants reported any follow-up on their goals or audits. The informants described that most participants were referred to the group-based education by a nurse or physician, most likely after a discussion, but one informant described people being told that education is something they must attend. As stated by the informant, “It can be the nurse or doctor who says you will attend this. So, they can be forced to go even though they themselves don’t want to” (Dietitian 3). Another informant also described the referral process as problematic since there was no assessment or criteria for who was referred, which in their view led to non-attendance or low motivation to participate in the group education. As the informant described, “I haven’t met the patients before, so I don’t know about their motivation. They are referred to me and I have then thought they are motivated and want this. And maybe they even said that at previous appointment. But then they don’t really want this or at least they´re not ready to spend time or energy on it” (Dietitian 10). The routine of having a follow-up session for the attending group within six months of the education was reported by two dietitians, while other informants mentioned that participants always have the possibility to request an individual appointment. One informant described that the participants are encouraged to phone the clinic two weeks after the education session for an individual follow-up. Three informants concluded that this was something that would be important to develop. Furthermore, one informant mentioned the difficulty of providing equal care when there are no regional structures or pathways regarding what type and form of care people with T1DM are offered. None of the informants reported conducting any auditing on the outcomes of the education sessions, but several informants did mention that they believed having a structure for the group education is a prerequisite. As one informant stated,” I think you owe it to the patients to have a structure and goal setting, the structure is what gives an equal care” (Dietitian 2).
[SUBTITLE] The dietitian’s perspective on group-based patient education and its prospects [SUBSECTION] Seven informants highlighted time savings as an advantage of holding education in groups, but only if the participants in the group have a high attendance level. One informant described the administrative work around the group education as too time consuming, and therefore a time gain was never achieved. The main benefit as perceived by all informants was the positive impact for the participants, as they get more time with the dietitians, hear the same information several times in different forms, participate in practical examples and listen to each other’s questions and answers. All informants also highlighted the support between peers as a key positive factor in reaching out with self-care messages. The informants described how the ongoing Covid-19 pandemic (2020–2022) had accelerated a digital development in healthcare, and three informants described that education in groups was about to be offered digitally, while one was already running groups solely on digital platforms. One informant emphasized that with digital channels the entire care team can meet the participants without having to be in the same place, or there could be an opportunity to share videos or other digital content. As an informant described,” The development will be digital, it is coming to us in spring [2021]. Digital channels will be a real challenge and great fun. I never think we will go back to people coming in person, especially not type ones or people of working age” (Dietitian 2). While the digital development was recognised as a positive prospect for education, one informant who was already actively involved in digital education did highlight that they perceived it as challenging to achieve the same level of discussions and interactions online. Three of the informants described that they would like to return to a so-called day-care week where education and workshops are provided in groups for several days in a row for one week. As described by one informant,” If you can really dream, I would like to have the day care week back, where the patients come and participate in lectures, cooking sessions, carbohydrate counting and such in a group with other people. It is an invaluable way to learn self-management” (Dietitian 1). Four informants described that they would like to individualize group-based education by developing new education sessions and material for groups on different topics such as pregnancy, exercise, and being newly diagnosed with T1DM. Among other things, it was emphasized that time is what limits the development of new subject areas, partly to create education materials and partly to gather feedback from participants to match the demand for groups. One informant emphasized that dietitians could benefit from sharing education materials with each other,” It would be great if us dietitians could look at joint material, I am sure there are lots of us who have the same type of groups, but everyone creates their own [education material].” (Dietitian 3).
Conclusion
This study addresses the dietitians’ perspective regarding the requirement for a person-centred approach in group-based education. There was great optimism and engagement from the informants, but they identified the lack of a framework that addresses the challenges suggested in this study. Based on the results, we suggest it would be of value to explore the pedagogic training level of Swedish dietitians and potential barriers to their ability to facilitate group-based education. Furthermore, we suggest that a framework for group-based education for adults with T1DM, containing options for pre-assessment, goal setting and structured follow-up packages, should be explored together with patient representatives to optimize the care given, ensure cost-effectiveness, improve clinical outcomes and quality of life, and provide equally accessible care for people with T1DM.
[ "Background", "Method", "The dietitian’s role and experience of group-based education", "Participants’ engagement in group-based education", "Structure, goals, and participation in group-based education", "The dietitian’s perspective on group-based patient education and its prospects", "The dietitian’s perception of, and role in, group-based education", "Prospects for development of group-based education" ]
[ "Diabetes mellitus is described as a metabolic disorder characterized by hyperglycaemia in the absence of treatment [1]. The loss of insulin results in high blood glucose and other metabolic and haematological abnormalities which can have both short- and long-term adverse effects on health [2]. Type 1 Diabetes Mellitus (T1DM) is treated with insulin replacement therapy aiming to recreate the body’s normal fluctuations in secreting insulin. A multidisciplinary approach to diabetes treatment, involving physicians, diabetes nurses, dietitians, and podiatrists is recommended, this can prolong onset of and prevent complications in people with T1DM. In Sweden, there are approximately 50.000 adults with T1DM [3, 4].\nThe recommendations state that the Swedish health service should provide group-based education to people with diabetes due to its effect on improving Hba1c, and that the cost per quality adjusted years is low compared to individual education [4]. Dietitians offer specialist evidence-based dietary advice to people with diabetes which includes considering factors such as nutritional diagnosis and status, medication, diabetes management and lifestyle [5]. According to Swedish national guidelines, an education programme requires a structure, set goals and active participation and the education programme must be based on psychological and/or pedagogical theories on adult learning. However, the present guidelines do not offer any further instruction or specifications, for example there are no standards for structure or goal orientation.\nInternationally, there are available resources for national standards on education such as The American 2017 National Standards for Diabetes Self-Management Education and Support or Quality Institute for Self-Management Education & Training (QISMET), standards from the United Kingdom (UK) to which you can apply to have your education programme certified [6, 7]. In Sweden there is a lack of indicators for services to audit as there are no clear standards for group-based education regarding people with T1DM [4]. In the National Board of Health and Welfare’s evaluation of diabetes care in 2015, 81 of 93 diabetes clinics in Sweden participated [8]. Group-based education programs to people with T1DM were offered by 66% of the diabetes clinics while the clinics who did not offer it stated that they lacked time and resources. The national evaluation does not shed light on the extent to which dietitians are part of delivering the education, or the structure of offered education sessions.\nThe purpose of this study was to explore the dietitians’ perception of, and role in, group-based education to adults with T1DM as well as prospects for changes and development regarding framework of group-based education.", "This was a qualitative descriptive study conducted in Sweden using a convenience sampling of dietitians from a diabetes specialist network in adult diabetes care. Dietitians who worked in different regions and hospitals in Sweden were invited to participate. Semi-structured interviews were chosen to achieve a unique depth of understanding dietitians’ perspectives and how they manage group-based education to people with T1DM. A qualitative study design allowed for exploration into the complexity of facilitating group-based education, and how different factors in the health care system influence outcomes, as well as how this could possibly be addressed. 
Dietitians working in paediatrics were excluded, as the guidelines differ from those for adult diabetes management.\nThe interviewer in this study was a female registered dietitian specialised in diabetes and certified as a diabetes educator through the structured education programme Dose Adjustment for Normal Eating (DAFNE) within the National Health Service in England. At the time of the interviews, she worked as a clinical dietitian in a community hospital in Sweden. The interviewer had previous experience of conducting research interviews from a master’s thesis. The interview guide, which consisted of five areas of interest, was collaboratively developed by the authors (Table 1).\n\nTable 1. Interview Guide: Areas of interest\n1. Experience and perceptions of facilitating group-based education to people with type 1 diabetes.\n2. Structure, goals, and participation during group-based education.\n3. Weaknesses with group-based education.\n4. Strengths with group-based education.\n5. Exploring future opportunities.\n\nIn total, ten dietitians from a diabetes specialist network in the Swedish Association of Clinical Dietitians, as well as a regional network, were approached via email; all accepted to participate. Invited dietitians were from seven different regions in Sweden. The interviewer was also part of both networks but did not have any established relationships with the informants. A personal email invitation, which included an information letter and a meeting invitation, was distributed.\nThe semi-structured interviews (Table 2), which were held in Swedish, were conducted via video calls, with sound recorded using an application on a smartphone. No field notes were taken. Nine informants were at their workplace for the interviews and one informant was working from home.\n\nTable 2. Interview Guide: Dietitians’ perspectives of working with group-based education\nInterview questions:\n1. How long have you worked as a dietitian?\n- How long have you worked with people with type 1 diabetes?\n2. What are your experiences of delivering group-based education?\n- How often is group-based education offered?\n- Who attends the sessions, and what does the referral process look like?\n- What educational topics do you discuss?\n- Do you have any further education in pedagogic methods?\n3. Describe how you experience group-based education.\n4. According to national guidelines, group-based education should contain a set structure and goals; what is your view and experience of that?\n- Do you have any set goals, either on a clinical or individual level?\n- How are goals communicated to the participants?\n- Does the education contain any individual goal-setting discussions?\n- What are the possibilities for follow-up?\n5. A prerequisite for group-based education is active participation; what is your view and experience of that?\n6. Describe your experience regarding weaknesses with group-based education.\n7. Describe your experience regarding strengths with group-based education.\n8. In your own view, how do you think group-based education could be developed?\n
The interviews were between 24 and 38 min long, with an average of 31.5 min. Six of the interviews were conducted over a four-week period in November and December 2020; the process then continued with a further four interviews in August and September 2021. After ten interviews, no new data, themes or codes were detected, and data saturation was therefore deemed complete. Criteria for data saturation were applied in accordance with Fusch et al. [9]. Transcription started after the first interview and was used for a preliminary review of whether the interview guide (Tables 1 and 2) made it possible to deepen the insights around the chosen questions. Using Microsoft Word, a naturalized transcription method was chosen, meaning that the interviews were transcribed word for word, including pauses, laughter, etcetera. The recordings were transcribed into text immediately after each session and read through to discover any direct questions to the informants. After the transcriptions had been checked, all recordings were deleted. The transcripts were not returned to the informants for comment since we decided, based on Hagens et al. [10], that interviewee transcript review has potential disadvantages, such as the risk of losing valuable data, and is also very time consuming for the informant, and we did not want to add to the workload of the participating dietitians. During one interview the informant sat in a shared office space with one dietetic student nearby. After one interview the author reached out to the informant via email to clarify the number of education sessions held and on which topics. The informants had the possibility to contact the interviewer afterwards if there was anything else they wanted to share or change, but no one did. There was no internal dropout during the study.\nThe transcripts were read several times to increase familiarity with the material. The data were analysed using qualitative content analysis in accordance with Graneheim and Lundman [11]. Each transcribed interview was analysed as a single unit, meaning that one interview was analysed at a time. As the text was read through, units of meaning were identified, which were sentences, sections or words that belonged together due to their content or context. The text was condensed with the aim of preserving its core meaning. The units of meaning were then assigned a code, a way of labelling the data. Where commonality in the content was found, text was extracted and brought together in categories. The categories were exhaustive and mutually exclusive, as recommended by Krippendorff [12]. This meant that no data related to the purpose could be excluded due to lack of a suitable category. Furthermore, no data should fall between two categories or fit into more than one category.
Four categories were identified, with three to six subcategories each. The senior researcher (ESS) had in-depth knowledge of content analysis methodology and coding; categories and subcategories were discussed in detail between the two authors during the analyses. To ensure trustworthiness and confirmability of the results [13], both authors performed the content analysis individually for two of the transcripts. The authors used the Consolidated Criteria for Reporting Qualitative Research as a guide to report data in this study [14].\nThe interviews and the analysis were conducted in Swedish, and the quotes presented in the results section were translated into English by the first author, who is Swedish but has worked in the UK and is an English-Swedish translator for nutrition care process terminology. During the interviews, the informants did occasionally share their experience and perspective on group-based education for other categories of patients. These data were used when analysing the dietitian’s role and experience of group-based education. The informants participating in the interviews were numbered 1 to 10 and referred to as “Dietitian 1, Dietitian 2 etc.” when quoted.\nIn this study, the dietitians’ perceptions of group-based education were explored. No health-related or other sensitive personal data were collected. The dietitians were informed in an introductory letter that participation was voluntary and that the interviews would be recorded with an application on a smartphone, and they were informed about the General Data Protection Regulation 2016/679. Written and verbal consent to participate in the study was collected from each informant before conducting the interviews. The informants could at any time choose to withdraw their participation without giving a reason. Throughout the whole process, good research practice was applied in accordance with the Swedish Research Council and the World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects [15].", "The informants were all appreciative of facilitating group-based education and described how they perceived it benefited the participants, but did suggest there were challenges regarding facilitating a group. As one informant stated,” It is a super wonderful opportunity. Patients can meet and exchange experiences. You start with the information but then they usually run the group themselves” (Dietitian 2).\nSeven informants particularly mentioned that their main challenge was to adjust the level of depth and complexity of the information provided. As one informant described, “One of the hardest things is to adjust when they are at different levels” (Dietitian 4). However, one informant mentioned the complexity of different levels of knowledge among participants but described using it as an opportunity for participants to teach each other, as the informant found it beneficial when they received information from peers. Most of the informants also raised that people with T1DM are a heterogeneous group, and they found it challenging when they are brought together as one. With regards to facilitating a group, the informants described several situations where they were required to draw on their pedagogical skills when having groups with talkative participants or inactive participants.
One informant also reported on being doubted on the information provided and described,” I was very questioned there, sometimes it was tough, mentally stressful to deliver those education sessions” (Dietitian 5).\nThe methods used to effectively gather the groups’ focus were asking questions, practical examples, or workshops, or acknowledging participants who were active in group discussions. Five of the informants had some form of continuous professional education in therapeutic techniques such as Motivational Interviewing, Cognitive Behaviour Therapy, or had pedagogical experience from teaching, but all informants described that they perceived their work experience had led them to develop skills to facilitate groups. One informant described a previously feeling of fear during public speaking, but with time had developed necessary skills to cope.", "The informants exemplified how they perceived that education and the group processes that occur enhanced the solidarity and cooperation between participants in the shape of sharing experiences, sharing knowledge, peer support and a sense of belonging. As one informant stated,” It is also that the patients can meet each other. It is a big difference when I tell them -” you are not alone, others feel like this too” compared to when someone with diabetes tell them that” (Dietitian 4).\nAll the informants mentioned how they felt there was a need for adults with T1DM to meet peers in the same situation, and the strengthening impact it can have on self-management of the condition. However, they also reported that they experienced that the group’s composition was decisive for the group dynamic and peer support. The informants shared their observation of the outcome depending on a group’s composition with regards to age, pre-understanding, treatment, stage, or type of diabetes condition. As one dietitian described, “I have felt that it is difficult to get homogeneous groups, so that the groups are beneficial because they [the participants] have been on such different levels it has not been good” (Dietitian 7). Having a randomly composed group was perceived as less beneficial, and in worst case a risk of a negative experience. As stated by one informant,” If there is someone with lots of long-term complications who gets caught up in that, it can have a discouraging effect instead” (Dietitian 8).", "Six informants described having group-based education on their own while two had a diabetes nurse present and two were part of a multi-professional team providing education. When the education was multidisciplinary, the informants described a clearer goal setting with regards to the clinic’s goal such as learning to count carbohydrates or lower Hba1c. The six informants who were working independently with their group education did not describe having developed goals to the same extent. However, none of the informants reported having a clear discussion about goals with the participants in relation to the group-based education. 
One informant described that the lack of discussion on goals was due to the assumption that the participants knew why they attended the group-based education, or as another informant reported, “We haven’t discussed goals with regards to what this should lead to, it’s more a feeling of what they might need so it’s not as structured as it should be, it really isn’t” (Dietitian 8).\nNone of the informants reported that they worked actively with participants’ individual goal settings and action planning either before, during or after their sessions. One informant described,” Goal setting, it is really only for us that the patient has participated in the education session. We don’t follow up with any personal goals. I can feel it is quite deficient and one must ask what it gives” (Dietitian 3). None of the informants reported any follow-up on their goals or audits.\nThe informants described that most participants were referred to the group-based education by a nurse or physician, this was most likely after a discussion, but one informant described people experiencing being told that education is something they must attend. As stated by the informant, “It can be the nurse or doctor who says you will attend this. So, they can be forced to go even though they themselves don’t want to” (Dietitian 3). Another informant also described the referral process as problematic since there were no assessment or criteria for who was referred which in their view led to non-attendance or low motivation to participate in the group education. As the informant described, “I haven’t met the patients before, so I don’t know about their motivation. They are referred to me and I have then thought they are motivated and want this. And maybe they even said that at previous appointment. But then they don’t really want this or at least they´re not ready to spend time or energy on it” (Dietitian 10).\nThe routine of having a follow-up session for the attending group within six months of the education was reported by two dietitians while other informants mentioned that participants always have the possibility to request an individual appointment. One informant described that the participants are encouraged to phone the clinic two weeks after the education session for an individual follow-up. Three informants concluded that this was something that would be important to develop. Furthermore, one informant mentioned the difficulty to provide equal care when there are no regional structures or pathways regarding what type and form of care people with T1DM are offered. None of the informants reported conducting any auditing on the outcomes of the education sessions. But several informants did mention that they believed that having a structure to the group education is a prerequisite. As one informant stated,” I think you owe it to the patients to have a structure and goal setting, the structure is what gives an equal care” (Dietitian 2).", "Seven informants highlighted time savings as an advantage of holding education in groups, but only if the participants in the group have a high attendance level. One informant described the administrative work around the group education as too time consuming and therefore time gain never was achieved.\nThe main benefit as perceived by all informants was the positive impact for the participants as they get more time with the dietitians, to hear the same information several times in different forms and participation in practical examples and listen to each other’s questions and answers. 
All informants also highlighted the support between peers as a positive key factor to reaching out with self-care messages. The informants described how the ongoing Covid-19 pandemic (2020–2022) had accelerated a digital development in healthcare, and three informants described that education in groups was about to be offered digitally while one was already running groups solely on digital platforms. One informant emphasized that with digital channels the entire care team can meet the participants without having to be in the same place, or there could be an opportunity to share videos or other digital content. As an informant described,” The development will be digital, it is coming to us in spring [2021]. Digital channels will be a real challenge and great fun. I never think we will go back to people coming in person, especially not type ones or people of working age” (Dietitian 2).\nWhile the digital development was recognised as a positive prospect for education, one informant who was already actively involved in digital education did highlight that they perceived it challenging to achieve the same level of discussions and interactions online. Three of the informants described that they would like to return to a so-called day-care week where education and workshops are provided in groups for several days in a row for one week. As described by one informant,” If you can really dream, I would like to have the day care week back, where the patients come and participate in lectures, cooking sessions, carbohydrate counting and such in a group with other people. It is an invaluable way to learn self-management” (Dietitian 1).\nFour informants described that they would like to individualize group-based education by developing new education sessions and material for groups in different topics such as pregnancy, exercise, and people that have been newly diagnosed with T1DM. Among other things, it was emphasized that time is what limits the development of new subject areas, partly to create educations and partly to gather feedback from participants to match the demand for groups. One informant emphasized that dietitians could benefit from sharing education materials with each other,” It would be great if us dietitians could look at joint material, I am sure there are lots of us who have the same type of groups, but everyone creates their own [education material].” (Dietitian 3).", "Most of the informants reported a positive experience of group-based education, both for themselves and for the participants, but they did suggest there were challenges concerning facilitating groups. Half of the informants had undertaken some form of postgraduate training in therapeutic techniques or had pedagogic experience, but all of them relayed that they perceived their experience had led them to develop the required pedagogical skills and approaches. At the same time, the informants did share challenges relating to the group’s composition and dynamic, and none of them worked actively with individual goal setting, both areas that are underpinned by pedagogic and empowerment skills. It is widely accepted and recommended that pedagogic training of some form is required for an educator [4]. Yet, in Sweden there are no available certifications as a diabetes educator or recommended standards for dietitians, which means the level of pedagogic approach will depend on individual experience and education. As described by Loveman et al. 
in a systematic review, the educators’ pedagogical approach is essential to the clinical outcomes [16]. Loveman et al. also suggest that the changes a pedagogically underpinned education had led to, even if sometimes small, were relatively long-lasting. Furthermore, educational interventions that emphasize knowledge, emotional and behavioural support, coping strategies and self-management training have been associated with improved glycaemic control at all ages, which makes the educator and their approach essential [17]. In a study by Fredrix et al., using semi-structured interviews with diabetes educators, the results confirmed that educators find this part challenging and would therefore value additional training opportunities [18]. The informants in our study perceived their barriers to facilitating groups on a spectrum from overcoming fear of public speaking to feeling completely confident with their ability. It would be of value to explore the pedagogic training level of Swedish dietitians and potential barriers to their ability to facilitate group-based education, not just in diabetes but also for other conditions within the chronic care model, to evaluate the requirement for provision of adequate training opportunities.\nAnother main challenge, as described by the informants, was the limited possibility to individualize and the difficulty of knowing and adjusting what level of literacy and numeracy to pitch the information at. Qualitative data from interviews and observations within the diabetes education programme DAFNE show that people attend diabetes education with different skills, experiences, and approaches to diabetes management [19]. This is potentially explained by different clinical practices in diabetes services and different lengths of time since diagnosis. It was also suggested that people who were diagnosed in childhood or adolescence may not have retained the information given to them at the time, and it had not since been repeated. As suggested by international standards [6], an initial assessment covering medical history, age, cultural influences, diabetes knowledge, disease burden, literacy, readiness to learn, limitations, etcetera, could support a more successful approach and an opportunity to individualise. This is also confirmed by the DAFNE study, which suggested that assessing numeracy, which is critical for counting carbohydrates and insulin dose adjustment, would help determine the additional support required before or during the group-based education [19]. This approach would support the dietitian facilitating the education to adjust and individualize and improve the outcome for the participant. Having lower literacy and numeracy skills is associated with poorer portion size estimation [20] and understanding of food labels [21]. The added support could be numeracy-focused practical exercises, using lay language and pictures, or hands-on learning. According to our results, the pathway to education, as described in the interviews, was mostly referral by nurses or physicians at the clinic. While initiating a referral probably involves some form of assessment, it is essential that the educator can make their own assessment and adjustments to the education if required. Including pre-assessment as part of a framework for diabetes education would be essential to support educators in delivering person-centred care. A pre-assessment encounter can also offer an opportunity to relay information, such as the aim of the session, and to promote attendance.
Our study therefore highlights the need for initial assessments as part of structured education.\nAll the informants highlighted their perception of the positive impact that a group setting can have on participants. This is in line with the findings from the British psychosocial study on structured diabetes education, which conducted interviews with participants and educators [22]. They suggested that delivering education in a group setting promoted the participants’ sense of enjoyment and therefore their ability to concentrate on the subject. It also enhanced the ability to learn from each other’s experiences and use these to illustrate self-management. Furthermore, the group setting provided an opportunity for apprehensive or anxious participants to observe other course attendees. However, the informants in this study expressed that a randomly composed group was perceived as less beneficial, which highlights that achieving positive group processes may not be straightforward.\nNone of the informants reported using personalized goal setting, and in fact there was an overall lack of discussion about goals. Incorporating empowerment-based support has been shown to facilitate development of personal responsibility and control of daily decisions in a study focussing on people with type 2 diabetes, and may be relevant for people with lower health literacy [23, 24]. Further, empowerment-based interventions such as the guided self-determination intervention have been shown to successfully improve and sustain a lower Hba1c in young women in a randomised controlled trial by Zoffman et al. [25].", "In our study, three informants were considering developing a follow-up procedure as a prospect for their group-based education. The current pathways often involved the option for participants to contact their dietitian if required. While offering people the opportunity to request an appointment with the dietitian is a great step towards a follow-up procedure, a research study analysing barriers to, and facilitators of, successful diabetes self-management suggests that people are reluctant to over-burden health professionals and therefore avoid taking the initiative to request individual appointments [19]. The long-lasting effect of education has not yet been thoroughly explored; however, a literature review studying the effectiveness of managing T1DM with structured education suggests that the improvements in glycaemia induced by the education programme tend to diminish as time elapses, and literature on the long-term effectiveness is limited [26]. Similarly, a Swedish study on group-based nutrition education for people with T1DM using carbohydrate counting versus a food-based approach showed an improved Hba1c at three months, but this significant difference did not remain after twelve months [27]. This highlights the importance of integrating follow-up support into the structured education package. In addition to structured follow-up sessions, to improve lifelong learning, participants should also be provided with opportunities for ad hoc contact with appropriately trained health professionals to troubleshoot problems as they arise and when life circumstances change [22]. Although the education sessions are effective in a group setting, when it comes to follow-up, participants in the DAFNE education programme have indicated a clear preference for one-to-one, individually tailored follow-up support.\nDigitalisation is described as the single largest change in our time and will impact the entire society.
One of the informants, who exclusively held digital education sessions, found it challenging to encourage participants to engage in discussion compared with previous experience of physical meetings. While this area of research is in its early days, there are some initiatives aiming to set new standards for digital dietetic education or counselling. During 2020–2021, 39 DAFNE centres took part in a pilot study reviewing participant and educator experience of remote group-based education. The preliminary data, presented at the Diabetes UK Education and Self-management Award, suggested that the learning experience was highly rated and that 95% of the participants would recommend it to a friend [28]. This again highlights the importance of adequate training for dietitians to develop the skills to facilitate digital groups.\nIn the present study, dietitians described how a possible development would be to offer group-based education on more topics, perhaps designed for a specific diagnosis or stage of diagnosis. With little direction from national guidelines, and little interaction within dietetic networks, all education materials will be created in accordance with the knowledge and views of the producer. There is no national framework available for group-based education regarding T1DM in Sweden, leading to each dietitian creating their own topics and pedagogical approach, which is time consuming as structure, goal setting and the evidence base need to be reviewed. Loveman et al. state that, for the best clinical outcome, educators need to have the time and resources to fulfil the requirements of structured education [16]. Dietitians are a profession with great engagement in their work, with more than a thousand members in the professional Swedish organisation [29]. This could provide an ideal platform for cooperating on and developing shared material, ensuring continuously updated evidence and a step towards equal diabetes care. A framework could also open the opportunity for a certification process like the QISMET standardization in the UK, where services can apply to have their structured education accredited [7].\nIn Sweden, diabetes care causes high societal financial costs, mainly due to the treatment of complications, but the condition also impacts quality of life, causes periods of sick leave and increases demand for hospital and community care [4]. Audit data from ongoing international education programmes demonstrate improvements in quality of life, diabetes distress, glycaemic control, severe hypoglycaemia, and diabetic ketoacidosis [30, 31]. More than 18% of adults with T1DM in Sweden have an Hba1c > 70 mmol/mol and 55% are overweight or obese, both of which indicate that dietetic interventions are essential [3]. According to a national survey, 66% of diabetes care providers offer group-based education, but the outcomes of these sessions are not nationally audited [8]. However, research shows that group-based education does not always work well due to potential differences in style, method, and focus [32]. Making new habits part of everyday life and maintaining them long-term is where group-based education most often fails [19]. Results from our study imply that dietitians found it particularly challenging to adjust the level of information with regards to numeracy, literacy and participants’ pre-understanding, and to individualize the information depending on different needs. 
The National Board of Health and Welfare describes group-based education as a prioritized area for improvement in the latest guidelines [4]. While this is highly welcome, our study suggests that a framework for group-based education for adults with T1DM, and for the skills required to facilitate it, is needed to ensure that the efforts are used in the most efficient way with regards to clinical outcomes, lasting effects, and cost-effectiveness. This study has focussed on the dietitians’ perspective, but before establishing a framework for group education the patient perspective should be explored, preferably in cooperation with patient organisations or representatives. The principles of self-management, which are the basis of education, are applied within the chronic care model for several chronic conditions such as chronic obstructive pulmonary disease and cardiovascular disease. This could imply transferability to group-based education for people with other chronic conditions, and for different healthcare professionals, as the guidance for educators of varied disciplines is similar.\nQualitative studies are verified through their credibility, transferability, and reliability [11]. The credibility and strength of the present study are enhanced by the fact that the informants were dietitians in the diabetes field who were professionally involved in delivering group-based education to adults with T1DM. Moreover, for a broader perspective regarding group-based education, the ten informants in our study worked in seven different regions geographically spread across Sweden. Informants varied in work experience, which can enhance the opportunity to gather and illuminate different perspectives. The quotes used in the results were translated into English. Even though the translation was made by an English-Swedish translator, the process is complex, and care must be taken to reconstruct the quotes while maintaining their meaning. Still, some small details regarding the specific choice of words might have been lost in translation, which may be a limitation of the study. This study addresses dietitians’ perspectives regarding group-based education and does not include experiences from people with T1DM, which is a limitation. The first author is a diabetes specialist dietitian with an international qualification as a diabetes educator. While this entails a high level of pre-understanding, it was also instrumental in generating deep insights into the subject. All the informants were aware of the author’s professional role through dietetic networks, which could have affected how they shared information. To prevent and limit the impact of preconceptions, the aim of the study was shared with the informants at the start of each interview, and it was clarified that the aim was to gather their perspective regarding group-based education to adults with T1DM." ]
[ "Background", "Method", "Results", "The dietitian’s role and experience of group-based education", "Participants’ engagement in group-based education", "Structure, goals, and participation in group-based education", "The dietitian’s perspective on group-based patient education and its prospects", "Discussion", "The dietitian’s perception of, and role in, group-based education", "Prospects for development of group-based education", "Conclusion" ]
[ "Diabetes mellitus is described as a metabolic disorder characterized by hyperglycaemia in the absence of treatment [1]. The loss of insulin results in high blood glucose and other metabolic and haematological abnormalities which can have both short- and long-term adverse effects on health [2]. Type 1 Diabetes Mellitus (T1DM) is treated with insulin replacement therapy aiming to recreate the body’s normal fluctuations in secreting insulin. A multidisciplinary approach to diabetes treatment, involving physicians, diabetes nurses, dietitians, and podiatrists is recommended, this can prolong onset of and prevent complications in people with T1DM. In Sweden, there are approximately 50.000 adults with T1DM [3, 4].\nThe recommendations state that the Swedish health service should provide group-based education to people with diabetes due to its effect on improving Hba1c, and that the cost per quality adjusted years is low compared to individual education [4]. Dietitians offer specialist evidence-based dietary advice to people with diabetes which includes considering factors such as nutritional diagnosis and status, medication, diabetes management and lifestyle [5]. According to Swedish national guidelines, an education programme requires a structure, set goals and active participation and the education programme must be based on psychological and/or pedagogical theories on adult learning. However, the present guidelines do not offer any further instruction or specifications, for example there are no standards for structure or goal orientation.\nInternationally, there are available resources for national standards on education such as The American 2017 National Standards for Diabetes Self-Management Education and Support or Quality Institute for Self-Management Education & Training (QISMET), standards from the United Kingdom (UK) to which you can apply to have your education programme certified [6, 7]. In Sweden there is a lack of indicators for services to audit as there are no clear standards for group-based education regarding people with T1DM [4]. In the National Board of Health and Welfare’s evaluation of diabetes care in 2015, 81 of 93 diabetes clinics in Sweden participated [8]. Group-based education programs to people with T1DM were offered by 66% of the diabetes clinics while the clinics who did not offer it stated that they lacked time and resources. The national evaluation does not shed light on the extent to which dietitians are part of delivering the education, or the structure of offered education sessions.\nThe purpose of this study was to explore the dietitians’ perception of, and role in, group-based education to adults with T1DM as well as prospects for changes and development regarding framework of group-based education.", "This was a qualitative descriptive study conducted in Sweden using a convenience sampling of dietitians from a diabetes specialist network in adult diabetes care. Dietitians who worked in different regions and hospitals in Sweden were invited to participate. Semi-structured interviews were chosen to achieve a unique depth of understanding dietitians’ perspectives and how they manage group-based education to people with T1DM. A qualitative study design allowed for exploration into the complexity of facilitating group-based education, and how different factors in the health care system influence outcomes, as well as how this could possibly be addressed. 
Dietitians working in paediatrics’ were excluded as the guidelines are different compared to diabetes management for adults.\nThe interviewer in this study was a female registered dietitian specialised in diabetes and certified as a diabetes educator through the structured education programme Dose Adjustment for Normal Eating (DAFNE) within the National Health Service England. Occupation at the time of the interviews was as a clinical dietitian in a community hospital in Sweden. The interviewer had previous experience of conducting research interviews from a master thesis. The interview guide which consisted of five areas of interest was collaboratively developed between the authors (Table 1).\n\nTable 1Interview Guide: Areas of interest1. Experience and perceptions of facilitating group-based education to people with type 1 diabetes.2. Structure, goals, and participation during group-based education.3. Weaknesses with group-based education.4. Strengths with group-based education.5. Exploring future opportunities.\n\nInterview Guide: Areas of interest\nIn total, ten dietitians from a diabetes specialist network in the Swedish Association of Clinical dietitians as well as a regional network were approached via email, all accepted to participate. Invited dietitians were from seven different regions in Sweden. The interviewer was also part of both networks but did not have any established relationships with the informants. A personal email invite was distributed which included an information letter and meeting invitation.\nThe semi-structured interviews (Table 2) which were held in Swedish were conducted on video calls, with sound recording using an application on a smart phone. No field notes were taken. Nine informants were in their workplace for the interviews and one informant was working from home.\n\nTable 2Interview Guide: Dietitians’ perspectives of working with group-based educationInterview Questions1. How long have you worked as a dietitian?- How long have you worked with people with type 1 diabetes?2. What are your experiences of delivering group-based education?- How often is group-based education offered?- Who attends the sessions, what does the referral process look like?- What educational topics do you discuss?- Do you have any further education in pedagogic methods?3. Describe how you experience group-based education.4. According to national guidelines, a group-based education should contain a set structure and goals, what is your view and experience on that?- Do you have any set goals, either on a clinical or individual level?- How are goals communicated to the participants?- Does the education contain any individual goal setting discussions?- What are the possibilities for follow-up?5. A prerequisite for group-based education is an active participation, what is your view and experience of that?6. Describe your experience regarding weaknesses with group-based education.7. Describe your experience regarding strengths with group-based education.8. In your own view, how do you think group-based education could be developed?\n\nInterview Guide: Dietitians’ perspectives of working with group-based education\n1. How long have you worked as a dietitian?\n- How long have you worked with people with type 1 diabetes?\n2. 
What are your experiences of delivering group-based education?\n- How often is group-based education offered?\n- Who attends the sessions, what does the referral process look like?\n- What educational topics do you discuss?\n- Do you have any further education in pedagogic methods?\n4. According to national guidelines, a group-based education should contain a set structure and goals, what is your view and experience on that?\n- Do you have any set goals, either on a clinical or individual level?\n- How are goals communicated to the participants?\n- Does the education contain any individual goal setting discussions?\n- What are the possibilities for follow-up?\nThe interviews were between 24 and 38 min long with an average of 31.5 min. Six of the interviews were conducted over a four-week period in November and December 2020, the process then continued with a further four interviews in August and September 2021. After conducting ten interviews, no new data, or themes or new coding were detected, data saturation was therefore deemed complete. Criteria for data saturation was done in accordance with Fusch et al. [9]. The transcribing of the interviews started after the first interview and was conducted to preliminary review that the interview guide (Tables 1 and 2) made it possible to deepen the insights around the chosen questions. By using Microsoft word, a naturalized transcription method was chosen, meaning that word for word was written, including pauses, laughter etcetera. The recordings were transcribed into text immediately after each session and read through to discover any direct questions to the informants. After control of the transcriptions all recordings were deleted. The transcripts were not returned to the informants for comments since we decided, based on Hagens et al. [10], that the procedure with interviewee transcript review might have potential disadvantages such as the risk of loss of valuable data and the procedure is also very time consuming for the informant, and we did not want to burden the workload for the participating dietitians. During one interview the informant sat in a shared office space with one dietetic student nearby. After one interview the author reached out to the informant via email to clarify the amount of education sessions held and within which topic. The informants had the possibility to contact the interviewer afterwards if there were anything else they wanted to share or change, but no one did that. There was no internal drop out during the study.\nThe transcripts were read several times to improve acquaintance with the material. The data was analysed using qualitative content analysis in accordance with Graneheim and Lundman [11]. Each transcribed interview was analysed as a single unit, meaning that one interview was analysed at a time. As the text was read through, units of meaning were identified, which were sentences, sections or words that belonged together due to its content or context. The text was shortened but with the aim to preserve the core meaning. The units of meaning were then assigned a code, a way of labelling the data. Where commonality in the content was found, text was extracted and brought together in categories. The categories were exhaustive and mutually exclusive as recommended by Krippendorff [12]. This meant that no data related to the purpose could be excluded due to lack of a suitable category. Furthermore, no data should fall between two categories or fit into more than one category. 
Four categories were identified, with three to six subcategories each. The senior researcher (ESS) had in-depth knowledge in content analysis methodology and coding, categories and subcategories were discussed in detail between the two authors during the analyses. To ensure trustworthiness and conformability of the results [13] both authors performed the content analysis individually for two of the transcripts. The authors used the Consolidated Criteria for Reporting Qualitative Research as a guide to report data in this study [14].\nThe interviews and the analysis were conducted in Swedish and the quotes representing the result section were translated to English by the first author who is Swedish but has worked in the UK and is an English-Swedish translator for nutrition care process terminology. During the interviews, the informants did occasionally share their experience and perspective on group-based education to other categories of patients. This data was used when analysing the dietitian’s role and experience of group-based education. The informants participating in the interviews were numbered 1 to 10 and referred to as “Dietitian 1, Dietitian 2 etc.” when quoted.\nIn this study, the dietitians’ perceptions of group-based education were explored. No data on health or other sensitive personal data were collected. The dietitians were informed in an introductory letter that participation was voluntary, that the interviews would be recorded by an application on a smart phone and informed about General Data Protection Regulation 2016/679. Written and verbal consent to participate in the study was collected from each informant before conducting the interviews. The informants could at any time choose to withdraw participation without giving a reason for withdrawal. Throughout the whole process good research practice was applied in accordance with the Swedish Research Council and the World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects [15].", "The ten informants had worked between one and 35 years in a dietetic role and time spent in the diabetes field ranged from six months to 35 years with a median of 14.5 years. All the informants were involved in group-based education to adults with T1DM, but with varied frequency from one to 22 sessions per year. Six informants had sessions without any other health care professional attending while four of the informants described that their group-based education was delivered in a multidisciplinary setting. For two of those, the multidisciplinary setting meant they had a diabetes specialist nurse attending for some or in all the education sessions, while two informants had different professional categories attending during the sessions. 
Those (n = 4) who took part in multidisciplinary education rather than working alone experienced more structured and goal-oriented education plans and routines for follow-up.\nFrom the content analyses, four categories were identified with three to six subcategories each (Table 3).\n\nTable 3. Categories and sub-categories from content analysis of dietitians’ experiences regarding group-based education\nCategories: The dietitian’s role and experience of group-based education; Participants’ engagement in group-based education; Structure, goals, and participation in group-based education; The dietitian’s perspective on group-based education and its prospects.\nSub-categories: Facilitating groups; The dietitian’s role; Individual requirements; Group composition; Group processes; The participants’ role; Participants’ experience; Participation and knowledge acquisition; Participants’ prerequisite; Equal care; Structure and organization; Referral process; Goal setting; Person-centred care; Prospects; Strengths with group-based education; Weaknesses with group-based education.\n\nThe results are presented in the following order: 1) The dietitian’s role and experience of group-based education; 2) Participants’ engagement in group-based education; 3) Structure, goals, and participation in group-based education; and 4) The dietitian’s perspective on group-based education and its prospects.\n[SUBTITLE] The dietitian’s role and experience of group-based education [SUBSECTION] The informants were all appreciative of facilitating group-based education and described how they perceived it benefited the participants but did suggest there were challenges regarding facilitating a group. As one informant stated,” It is a super wonderful opportunity. Patients can meet and exchange experiences. You start with the information but then they usually run the group themselves” (Dietitian 2).\nSeven informants particularly mentioned that their main challenge was to adjust the level of depth and complexity to the information provided. As one informant described, “One of the hardest things is to adjust when they are at different levels” (Dietitian 4). However, one informant mentioned the complexity with different level of knowledge between participants but described using it as an opportunity for participants to teach each other as the informant found it beneficial when they received information from peers. Most of the informants also raised that people with T1DM are a heterogenic group, and they found it challenging when they are brought together as one. With regards to facilitating a group, the informants described several situations where they were required to draw on their pedagogical skills when having groups with talkative participants or inactive participants. 
One informant also reported on being doubted on the information provided and described,” I was very questioned there, sometimes it was tough, mentally stressful to deliver those education sessions” (Dietitian 5).\nThe methods used to effectively gather the groups’ focus were asking questions, practical examples, or workshops, or acknowledging participants who were active in group discussions. Five of the informants had some form of continuous professional education in therapeutic techniques such as Motivational Interviewing, Cognitive Behaviour Therapy, or had pedagogical experience from teaching, but all informants described that they perceived their work experience had led them to develop skills to facilitate groups. One informant described a previously feeling of fear during public speaking, but with time had developed necessary skills to cope.\n[SUBTITLE] Participants’ engagement in group-based education [SUBSECTION] The informants exemplified how they perceived that education and the group processes that occur enhanced the solidarity and cooperation between participants in the shape of sharing experiences, sharing knowledge, peer support and a sense of belonging. As one informant stated,” It is also that the patients can meet each other. 
It is a big difference when I tell them -” you are not alone, others feel like this too” compared to when someone with diabetes tell them that” (Dietitian 4).\nAll the informants mentioned how they felt there was a need for adults with T1DM to meet peers in the same situation, and the strengthening impact it can have on self-management of the condition. However, they also reported that they experienced that the group’s composition was decisive for the group dynamic and peer support. The informants shared their observation of the outcome depending on a group’s composition with regards to age, pre-understanding, treatment, stage, or type of diabetes condition. As one dietitian described, “I have felt that it is difficult to get homogeneous groups, so that the groups are beneficial because they [the participants] have been on such different levels it has not been good” (Dietitian 7). Having a randomly composed group was perceived as less beneficial, and in worst case a risk of a negative experience. As stated by one informant,” If there is someone with lots of long-term complications who gets caught up in that, it can have a discouraging effect instead” (Dietitian 8).\n[SUBTITLE] Structure, goals, and participation in group-based education [SUBSECTION] Six informants described having group-based education on their own while two had a diabetes nurse present and two were part of a multi-professional team providing education. When the education was multidisciplinary, the informants described a clearer goal setting with regards to the clinic’s goal such as learning to count carbohydrates or lower Hba1c. The six informants who were working independently with their group education did not describe having developed goals to the same extent. However, none of the informants reported having a clear discussion about goals with the participants in relation to the group-based education. 
One informant described that the lack of discussion on goals was due to the assumption that the participants knew why they attended the group-based education, or as another informant reported, “We haven’t discussed goals with regards to what this should lead to, it’s more a feeling of what they might need so it’s not as structured as it should be, it really isn’t” (Dietitian 8).\nNone of the informants reported that they worked actively with participants’ individual goal settings and action planning either before, during or after their sessions. One informant described,” Goal setting, it is really only for us that the patient has participated in the education session. We don’t follow up with any personal goals. I can feel it is quite deficient and one must ask what it gives” (Dietitian 3). None of the informants reported any follow-up on their goals or audits.\nThe informants described that most participants were referred to the group-based education by a nurse or physician, this was most likely after a discussion, but one informant described people experiencing being told that education is something they must attend. As stated by the informant, “It can be the nurse or doctor who says you will attend this. So, they can be forced to go even though they themselves don’t want to” (Dietitian 3). Another informant also described the referral process as problematic since there were no assessment or criteria for who was referred which in their view led to non-attendance or low motivation to participate in the group education. As the informant described, “I haven’t met the patients before, so I don’t know about their motivation. They are referred to me and I have then thought they are motivated and want this. And maybe they even said that at previous appointment. But then they don’t really want this or at least they´re not ready to spend time or energy on it” (Dietitian 10).\nThe routine of having a follow-up session for the attending group within six months of the education was reported by two dietitians while other informants mentioned that participants always have the possibility to request an individual appointment. One informant described that the participants are encouraged to phone the clinic two weeks after the education session for an individual follow-up. Three informants concluded that this was something that would be important to develop. Furthermore, one informant mentioned the difficulty to provide equal care when there are no regional structures or pathways regarding what type and form of care people with T1DM are offered. None of the informants reported conducting any auditing on the outcomes of the education sessions. But several informants did mention that they believed that having a structure to the group education is a prerequisite. As one informant stated,” I think you owe it to the patients to have a structure and goal setting, the structure is what gives an equal care” (Dietitian 2).\n
[SUBTITLE] The dietitian’s perspective on group-based patient education and its prospects [SUBSECTION] Seven informants highlighted time savings as an advantage of holding education in groups, but only if the participants in the group have a high attendance level. 
One informant described the administrative work around the group education as too time consuming and therefore time gain never was achieved.\nThe main benefit as perceived by all informants was the positive impact for the participants as they get more time with the dietitians, to hear the same information several times in different forms and participation in practical examples and listen to each other’s questions and answers. All informants also highlighted the support between peers as a positive key factor to reaching out with self-care messages. The informants described how the ongoing Covid-19 pandemic (2020–2022) had accelerated a digital development in healthcare, and three informants described that education in groups was about to be offered digitally while one was already running groups solely on digital platforms. One informant emphasized that with digital channels the entire care team can meet the participants without having to be in the same place, or there could be an opportunity to share videos or other digital content. As an informant described,” The development will be digital, it is coming to us in spring [2021]. Digital channels will be a real challenge and great fun. I never think we will go back to people coming in person, especially not type ones or people of working age” (Dietitian 2).\nWhile the digital development was recognised as a positive prospect for education, one informant who was already actively involved in digital education did highlight that they perceived it challenging to achieve the same level of discussions and interactions online. Three of the informants described that they would like to return to a so-called day-care week where education and workshops are provided in groups for several days in a row for one week. As described by one informant,” If you can really dream, I would like to have the day care week back, where the patients come and participate in lectures, cooking sessions, carbohydrate counting and such in a group with other people. It is an invaluable way to learn self-management” (Dietitian 1).\nFour informants described that they would like to individualize group-based education by developing new education sessions and material for groups in different topics such as pregnancy, exercise, and people that have been newly diagnosed with T1DM. Among other things, it was emphasized that time is what limits the development of new subject areas, partly to create educations and partly to gather feedback from participants to match the demand for groups. One informant emphasized that dietitians could benefit from sharing education materials with each other,” It would be great if us dietitians could look at joint material, I am sure there are lots of us who have the same type of groups, but everyone creates their own [education material].” (Dietitian 3).
", "The informants were all appreciative of facilitating group-based education and described how they perceived it benefited the participants but did suggest there were challenges regarding facilitating a group. As one informant stated,” It is a super wonderful opportunity. Patients can meet and exchange experiences. You start with the information but then they usually run the group themselves” (Dietitian 2).\nSeven informants particularly mentioned that their main challenge was to adjust the level of depth and complexity to the information provided. As one informant described, “One of the hardest things is to adjust when they are at different levels” (Dietitian 4). However, one informant mentioned the complexity with different level of knowledge between participants but described using it as an opportunity for participants to teach each other as the informant found it beneficial when they received information from peers. Most of the informants also raised that people with T1DM are a heterogenic group, and they found it challenging when they are brought together as one. 
With regards to facilitating a group, the informants described several situations where they were required to draw on their pedagogical skills when having groups with talkative participants or inactive participants. One informant also reported on being doubted on the information provided and described,” I was very questioned there, sometimes it was tough, mentally stressful to deliver those education sessions” (Dietitian 5).\nThe methods used to effectively gather the groups’ focus were asking questions, practical examples, or workshops, or acknowledging participants who were active in group discussions. Five of the informants had some form of continuous professional education in therapeutic techniques such as Motivational Interviewing, Cognitive Behaviour Therapy, or had pedagogical experience from teaching, but all informants described that they perceived their work experience had led them to develop skills to facilitate groups. One informant described a previously feeling of fear during public speaking, but with time had developed necessary skills to cope.", "The informants exemplified how they perceived that education and the group processes that occur enhanced the solidarity and cooperation between participants in the shape of sharing experiences, sharing knowledge, peer support and a sense of belonging. As one informant stated,” It is also that the patients can meet each other. It is a big difference when I tell them -” you are not alone, others feel like this too” compared to when someone with diabetes tell them that” (Dietitian 4).\nAll the informants mentioned how they felt there was a need for adults with T1DM to meet peers in the same situation, and the strengthening impact it can have on self-management of the condition. However, they also reported that they experienced that the group’s composition was decisive for the group dynamic and peer support. The informants shared their observation of the outcome depending on a group’s composition with regards to age, pre-understanding, treatment, stage, or type of diabetes condition. As one dietitian described, “I have felt that it is difficult to get homogeneous groups, so that the groups are beneficial because they [the participants] have been on such different levels it has not been good” (Dietitian 7). Having a randomly composed group was perceived as less beneficial, and in worst case a risk of a negative experience. As stated by one informant,” If there is someone with lots of long-term complications who gets caught up in that, it can have a discouraging effect instead” (Dietitian 8).", "Six informants described having group-based education on their own while two had a diabetes nurse present and two were part of a multi-professional team providing education. When the education was multidisciplinary, the informants described a clearer goal setting with regards to the clinic’s goal such as learning to count carbohydrates or lower Hba1c. The six informants who were working independently with their group education did not describe having developed goals to the same extent. However, none of the informants reported having a clear discussion about goals with the participants in relation to the group-based education. 
One informant described that the lack of discussion on goals was due to the assumption that the participants knew why they attended the group-based education, or as another informant reported, “We haven’t discussed goals with regards to what this should lead to, it’s more a feeling of what they might need so it’s not as structured as it should be, it really isn’t” (Dietitian 8).\nNone of the informants reported that they worked actively with participants’ individual goal settings and action planning either before, during or after their sessions. One informant described,” Goal setting, it is really only for us that the patient has participated in the education session. We don’t follow up with any personal goals. I can feel it is quite deficient and one must ask what it gives” (Dietitian 3). None of the informants reported any follow-up on their goals or audits.\nThe informants described that most participants were referred to the group-based education by a nurse or physician, this was most likely after a discussion, but one informant described people experiencing being told that education is something they must attend. As stated by the informant, “It can be the nurse or doctor who says you will attend this. So, they can be forced to go even though they themselves don’t want to” (Dietitian 3). Another informant also described the referral process as problematic since there were no assessment or criteria for who was referred which in their view led to non-attendance or low motivation to participate in the group education. As the informant described, “I haven’t met the patients before, so I don’t know about their motivation. They are referred to me and I have then thought they are motivated and want this. And maybe they even said that at previous appointment. But then they don’t really want this or at least they´re not ready to spend time or energy on it” (Dietitian 10).\nThe routine of having a follow-up session for the attending group within six months of the education was reported by two dietitians while other informants mentioned that participants always have the possibility to request an individual appointment. One informant described that the participants are encouraged to phone the clinic two weeks after the education session for an individual follow-up. Three informants concluded that this was something that would be important to develop. Furthermore, one informant mentioned the difficulty to provide equal care when there are no regional structures or pathways regarding what type and form of care people with T1DM are offered. None of the informants reported conducting any auditing on the outcomes of the education sessions. But several informants did mention that they believed that having a structure to the group education is a prerequisite. As one informant stated,” I think you owe it to the patients to have a structure and goal setting, the structure is what gives an equal care” (Dietitian 2).", "Seven informants highlighted time savings as an advantage of holding education in groups, but only if the participants in the group have a high attendance level. One informant described the administrative work around the group education as too time consuming and therefore time gain never was achieved.\nThe main benefit as perceived by all informants was the positive impact for the participants as they get more time with the dietitians, to hear the same information several times in different forms and participation in practical examples and listen to each other’s questions and answers. 
All informants also highlighted the support between peers as a positive key factor to reaching out with self-care messages. The informants described how the ongoing Covid-19 pandemic (2020–2022) had accelerated a digital development in healthcare, and three informants described that education in groups was about to be offered digitally while one was already running groups solely on digital platforms. One informant emphasized that with digital channels the entire care team can meet the participants without having to be in the same place, or there could be an opportunity to share videos or other digital content. As an informant described,” The development will be digital, it is coming to us in spring [2021]. Digital channels will be a real challenge and great fun. I never think we will go back to people coming in person, especially not type ones or people of working age” (Dietitian 2).\nWhile the digital development was recognised as a positive prospect for education, one informant who was already actively involved in digital education did highlight that they perceived it challenging to achieve the same level of discussions and interactions online. Three of the informants described that they would like to return to a so-called day-care week where education and workshops are provided in groups for several days in a row for one week. As described by one informant,” If you can really dream, I would like to have the day care week back, where the patients come and participate in lectures, cooking sessions, carbohydrate counting and such in a group with other people. It is an invaluable way to learn self-management” (Dietitian 1).\nFour informants described that they would like to individualize group-based education by developing new education sessions and material for groups in different topics such as pregnancy, exercise, and people that have been newly diagnosed with T1DM. Among other things, it was emphasized that time is what limits the development of new subject areas, partly to create educations and partly to gather feedback from participants to match the demand for groups. One informant emphasized that dietitians could benefit from sharing education materials with each other,” It would be great if us dietitians could look at joint material, I am sure there are lots of us who have the same type of groups, but everyone creates their own [education material].” (Dietitian 3).", "This study aimed to explore dietitians’ perception of, and role in, group-based education to adults with T1DM. The informants were overall appreciative of facilitating group-based education and expressed that they thought group-based education might be beneficial for people with T1DM. However, the informants also experienced challenges concerning engaging a group and adjust the level of knowledge and individualize the information to the participants.\n[SUBTITLE] The dietitian’s perception of, and role in, group-based education [SUBSECTION] Most of the informants reported a positive experience of group-based education, both for themselves and for the participants, but they did suggest there were challenges concerning facilitating groups. Half of the informants had undertaken some form of post graduate training in therapeutic techniques or pedagogic experience but all of them relayed that they perceived their experience had led them to develop the required pedagogical skills and approaches. 
At the same time, the informants did share challenges relating to the group’s composition and dynamic, and none of them worked actively with individual goal setting, both areas that are underpinned by pedagogic and empowerment skills. It is widely accepted and recommended that pedagogic training of some form is required for an educator [4]. Yet, in Sweden there is no available certifications as a diabetes educator or standards recommended for dietitians which means the level of pedagogic approach will be depending on individual experience and education. As described by Loveman et al. in a systematic review, the educators’ pedagogical approach is essential to the clinical outcomes [16]. Loveman et al. also suggest that the changes a pedagogically underpinned education had led to, even if sometimes small, were relatively long-lasting. Further on, educational interventions that emphasize knowledge, emotional and behaviour support, coping strategies and self-management training has been associated with improved glycaemic control at all ages which makes the educator and its approach essential [17]. In a study by Fredrix et al., using semi-structured interviews with diabetes educators the results confirmed that educators are finding this part challenging and therefore would value additional training opportunities [18]. The informants in our study perceived their barriers towards facilitating groups on a spectrum from overcoming fear of public speaking to feeling completely confident with their ability. It would be of value to explore the pedagogic training level of Swedish dietitians and potential barriers in their ability to facilitate group-based education, not just in diabetes but for other conditions within the chronic care model as well to evaluate the requirement for provision of adequate training opportunities.\nAnother main challenge, as described by the informants, was the lack of possibility to individualize and difficulty to know and adjust what level on literacy and numeracy to settle the information on. Qualitative data from interviews and observations within the diabetes education programme DAFNE show that people attend diabetes education with different skills, experiences, and approaches to diabetes management [19]. This is explained by, potentially, different clinical practices in diabetes services, and different lengths of time since diagnosis. It was also suggested that people who were diagnosed in childhood or adolescent may not have retained the information given to them at the time, and it had since not been repeated. As suggested by international standards [6] an initial assessment with medical history, age, cultural influences, diabetes knowledge, disease burden, literacy, readiness to learn, limitations etcetera could conclude a more successful approach and opportunity to individualise. This is also confirmed by the DAFNE study which suggested that assessing numeracy, which is critical for counting carbohydrates and insulin dose adjustment, would help determine the additional support required before or during the group-based education [19]. This approach would support the dietitian facilitating the education to adjust and individualize and improve the outcome for the participant. Having a lower literacy and numeracy skills are associated with poorer portion size estimation [20] and understanding of food labels [21]. The added support could be numeracy-focused practical exercises, using lay language and pictures, or hands-on learning. 
According to our results, the pathway to education described in the interviews was mostly referral by nurses or physicians at the clinic. While initiating a referral probably involves some form of assessment, it is essential that the educator can make their own assessment and adjust the education if required. Including pre-assessment as part of a framework for diabetes education would be essential to support educators in delivering person-centred care. A pre-assessment encounter can also offer an opportunity to relay information such as the aim of the session and to promote attendance. Our study therefore highlights the need for initial assessments as part of structured education.\nAll the informants highlighted their perception of the positive impact that a group setting can have on participants. This is in line with the findings from the British psychosocial study on structured diabetes education, which conducted interviews with participants and educators [22]. They suggested that delivering education in a group setting promoted the participants’ sense of enjoyment and therefore the ability to concentrate on the subject. It also enhanced the ability to learn from each other’s experiences and use these to illustrate self-management. Furthermore, the group setting provided an opportunity for apprehensive or anxious participants to observe other course attendees. However, the informants in this study expressed that a randomly composed group was perceived as less beneficial, which highlights that achieving positive group processes may not be straightforward.\nNone of the informants reported using personalized goal setting, and in fact there was an overall lack of discussion about goals. Incorporating empowerment-based support has been shown to facilitate the development of personal responsibility and control of daily decisions in a study focussing on people with type 2 diabetes, and may be relevant for people with a lower health literacy [23, 24].\nFurther, empowerment-based interventions such as the guided self-determination intervention have been shown to successfully improve and sustain a lower HbA1c in young women in a randomised controlled trial by Zoffman et al. [25].
[SUBTITLE] Prospects for development of group-based education [SUBSECTION] In our study, three informants were considering developing a follow-up procedure as a prospect for their group-based education. The current pathways often involved the option for participants to contact their dietitian if required. While offering people the opportunity to request an appointment with the dietitian is a good step towards a follow-up procedure, a research study analysing barriers to, and facilitators of, successful diabetes self-management suggests that people are reluctant to over-burden health professionals and therefore avoid taking the initiative to request individual appointments [19]. The long-lasting effect of education has not yet been thoroughly explored; however, a literature review studying the effectiveness of managing T1DM with structured education suggests that the improvements in glycaemia induced by an education programme tend to diminish as time elapses, and the literature on long-term effectiveness is limited [26]. Similarly, a Swedish study on group-based nutrition education for people with T1DM using carbohydrate counting versus a food-based approach showed an improved HbA1c at three months, but this significant difference did not remain after twelve months [27]. This highlights the importance of integrating follow-up support into the structured education package. In addition to structured follow-up sessions, to improve lifelong learning, participants should also be provided with opportunities for ad hoc contact with appropriately trained health professionals to troubleshoot problems as they arise and when life circumstances change [22]. Although the education sessions are effective in a group setting, when it comes to follow-up, participants in the DAFNE education programme have indicated a clear preference for one-to-one, individually tailored follow-up support.\nDigitalisation has been described as the single largest change of our time and will impact the whole of society.
One of the informants, who exclusively held digital education sessions, found it challenging to encourage participants to engage in discussion compared with their previous experience of in-person meetings. While this is an area in the early days of research, there are some initiatives aiming to set new standards for digital dietetic education or counselling. During 2020–2021, 39 DAFNE centres took part in a pilot study reviewing participant and educator experience of remote group-based education. The preliminary data, presented at the Diabetes UK Education and Self-management Award, suggested that the learning experience was highly rated and that 95% of the participants would recommend it to a friend [28]. This could again highlight the importance of adequate training for dietitians to develop the skills to facilitate digital groups.\nIn the present study, dietitians described how a possible development would be to offer group-based education on more topics, perhaps designed for a specific diagnosis or stage of diagnosis. With little direction from national guidelines or interactivity in dietetic networks, all education materials will be created according to the knowledge and views of the producer. Yet, there is no available national framework for group-based education regarding T1DM in Sweden, leading to each dietitian creating their own topics and pedagogical approach, which is time-consuming as structure, goal setting and evidence-based research need to be reviewed. Loveman et al. state that, for the best clinical outcome, educators need to have the time and resources to fulfil the needs of structured education [16]. Dietitians are a profession with great engagement in their work, with more than a thousand members in the Swedish professional organisation [29]. This could provide an ideal platform for cooperating and developing shared material to ensure continuously updated evidence and a step towards equal diabetes care. A framework could also open the opportunity for a certification process like the QISMET standardization in the UK, where services can apply to have their structured education accredited [7].\nIn Sweden, diabetes care incurs high societal financial costs, mainly due to the treatment of complications, but the condition also impacts quality of life, periods of sick leave and an increased demand for hospital and community care [4]. Audit data from ongoing international education programmes demonstrate improvements in quality of life, diabetes distress, glycaemic control, severe hypoglycaemia, and diabetic ketoacidosis [30, 31]. More than 18% of adults with T1DM in Sweden have an HbA1c > 70 mmol/mol and 55% are overweight or obese, which both indicate that dietetic interventions are essential [3]. According to a national survey, 66% of diabetes care providers offer group-based education, but the outcomes of these are not nationally audited [8]. However, research shows that group-based education does not always work well due to potential differences in style, method, and focus [32]. Making new habits part of everyday life and maintaining them long-term is where group-based education most often fails [19]. Results from our study imply that dietitians found it particularly challenging to adjust the level of information with regard to numeracy, literacy and participants’ pre-understanding, and to individualize the information depending on different needs.
The National Board of Health and Welfare describes group-based education as a prioritized area for improvement in the latest guidelines [4]. While this is highly welcomed, our study suggests that a framework for group-based education to adults with T1DM, and the skills required to facilitate it, is needed to ensure that efforts are used in the most efficient way with regard to clinical outcomes, lasting effects, and cost-effectiveness. This study has focussed on the dietitians’ perspective, but before establishing a framework for group education the patient perspective should be explored, preferably in cooperation with patient organisations or representatives. The principles of self-management, which are the basis of education, are applied within the chronic care model for several chronic conditions such as chronic obstructive pulmonary disease and cardiovascular disease. This could imply transferability to group-based education for people with other chronic conditions, and for different healthcare professionals, as the guidance for educators of varied disciplines is similar.\nQualitative studies are verified through their credibility, transferability, and reliability [11]. The credibility and strength of the present study are enhanced by the fact that the informants were dietitians in the diabetes field and were professionally involved in delivering group-based education to adults with T1DM. Moreover, for a broader perspective regarding group-based education, the ten informants in our study worked in seven different regions geographically spread across Sweden. Informants varied in work experience, which can enhance the opportunity to gather and illuminate different perspectives. The quotes used in the results were translated into English. Even though the translation was made by an English-Swedish translator, the process is complex, and care must be taken to reconstruct the quotes while maintaining their meaning. Still, some small details regarding the specific choice of words might have been omitted in the translation, which may be a limitation of the study. This study addresses dietitians’ perspectives regarding group-based education and does not include experiences from people with T1DM, which is a limitation. The first author is a diabetes specialist dietitian with an international qualification as a diabetes educator. While this gives a high level of pre-understanding, it is also a major part of initiating deep insights on the subject. All the informants were aware of the author’s professional role through dietetic networks, which could have affected how they shared information. To prevent and limit the impact of preconceptions, the aim of the study was shared with the informants at the start of each interview, and it was clarified that the aim was to gather their perspective regarding group-based education to adults with T1DM.",
"This study addresses the dietitians’ perspective regarding the requirement of a person-centred approach in group-based education. There was great optimism and engagement from the informants, but they identified the lack of a framework that addressed the challenges suggested in this study. Based on the results, we suggest it would be of value to explore the pedagogic training level of Swedish dietitians and potential barriers in their ability to facilitate group-based education. Further, we suggest that a framework for group-based education to adults with T1DM, containing options for pre-assessment, goal setting and structured follow-up packages, should be explored together with patient representatives to optimize the care given, ensure cost-effectiveness, improve clinical outcomes and quality of life, and provide equally accessible care for people with T1DM." ]
[ null, null, "results", null, null, null, null, "discussion", null, null, "conclusion" ]
[ "Dietitians", "Group-based education", "National diabetes guidelines", "Structured education", "Type 1 Diabetes Mellitus", "Person-centred care" ]
Effect of transversus abdominis plane block on the quality of recovery in laparoscopic nephrectomy: A prospective double-blinded randomized controlled clinical trial.
36253971
Poorly controlled acute postoperative pain after laparoscopic nephrectomy may adversely affect surgical outcomes and increase morbidity rates. In addition, excessive use of opioids during surgery may slow postoperative endocrine and metabolic responses and cause opioid-related side effects and opioid-induced hyperalgesia. The purpose of this study was to evaluate the effect of ultrasound-guided transversus abdominis plane (TAP) block on the postoperative quality of recovery and intraoperative remifentanil requirement in laparoscopic nephrectomy.
BACKGROUND
Sixty patients who underwent laparoscopic nephrectomy were randomly divided into 2 groups: TAP and Control groups. After induction of anesthesia and before awakening from anesthesia, the TAP group was administered 40 mL of 0.375% ropivacaine and the Control group was administered 40 mL of normal saline to deliver ultrasound-guided TAP block using 20 mL of each of the above drugs. The main objectives of this study were to evaluate the effect of the TAP block on quality of recovery using the Quality of Recovery 40 (QoR-40) questionnaire and assessments of intraoperative remifentanil requirement. In addition, to evaluate the postoperative analgesic effect of the TAP block, the total usage time for patient-controlled analgesia (PCA) and the number of PCA bolus buttons used in both groups were analyzed.
METHODS
The QoR-40 score, measured when visiting the ward on the third day after surgery, was significantly higher in the TAP group (171.9 ± 23.1) than in the Control group (151.9 ± 28.1) (P = .006). The intraoperative remifentanil requirement was not significantly different between the groups (P = .439). In the TAP group, the cumulative number of PCA bolus doses at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was significantly lower, and the total usage time for PCA was significantly longer.
RESULTS
In conclusion, we determined that ultrasound-guided TAP block during laparoscopic nephrectomy improves the quality of postoperative recovery and is effective for postoperative pain control but does not affect the amount of remifentanil required for adequate anesthesia during surgery.
CONCLUSION
[ "Abdominal Muscles", "Analgesics, Opioid", "Double-Blind Method", "Humans", "Laparoscopy", "Nephrectomy", "Pain, Postoperative", "Prospective Studies", "Remifentanil", "Ropivacaine", "Saline Solution" ]
9575771
1. Introduction
Severe acute postoperative pain adversely affects the quality of recovery in surgical patients as well as patients’ surgical outcomes.[1] As the average age of patients undergoing surgery continues to increase due to the rapid aging of the global population, the number of patients with underlying medical diseases (including cardiopulmonary, respiratory, and endocrine system diseases) is increasing significantly and at a much higher rate than in the past.[2] Therefore, achieving proper control of acute postoperative pain is necessary. We note that opioids used to maintain proper anesthesia during surgery have a meaningful effect on the patient’s immunity, thereby strongly affecting the surgical outcome.[3] Recently, nephrectomy has been performed laparoscopically to minimize surgical scars and postoperative pain at the surgical site due to performing surgery through the smallest possible incision. Nevertheless, these techniques still require the use of powerful analgesics after surgery. Moreover, in most cases, laparoscopic nephrectomy results in complaints of severe pain requiring the use of patient-controlled analgesia (PCA) after surgery. PCA is widely used on a global scale, as it has proven effective in acute postoperative pain control.[4]However, as the main drugs used for PCA are intravenous opioids and nonsteroidal anti-inflammatory drugs, it is not uncommon for PCA to be removed in the middle of the treatment course due to the common occurrence of adverse side effects, such as dizziness, nausea, vomiting, urticaria, and respiratory depression.[5] Therefore, in many cases, a pain control method to replace PCA is urgently needed, and a multimodal approach has recently been recommended to this effect.[6,7] As part of a multimodal approach (and given recent developments in ultrasound devices), regional analgesia under ultrasound guidance is currently widely performed in order to minimize the use of opioids during surgery and to reduce postoperative pain, even in surgeries performed under general anesthesia.[8] Since Rafi et al[9] first introduced the landmark-guided transversus abdominis plane (TAP) block in 2001, it has become one of the most commonly performed truncal blocks (after undergoing several modifications). It is effective for acute postoperative pain control after various types of abdominal surgery using laparotomy or laparoscopy.[10] In particular, with recent developments in ultrasound devices, the increased use of portable devices has made it easier to implement ultrasound devices in a clinical setting, thereby enabling the performance of safer and more accurate procedures (even in an already state-of-the art operating room).[11,12] The quality of recovery after anesthesia and surgery is critically important in evaluating the success of the operation and is an important measure in judging a patient’s initial health status after surgery.[13] With increasing interest in the quality of recovery, several methods for assessing this metric have been under active development within the field of anesthesia. The Quality of Recovery-40 (QoR-40) questionnaire developed by Myles et al[13] is one of the most commonly used tools in this regard. 
Despite the distinctly different cultural backgrounds of targeted patients, the Korean version of the QoR-40 (KvQoR-40) has been shown to be acceptable and as reliable as the original English-language QoR-40 in terms of assessing the quality of recovery after anesthesia and surgery in Korean patients.[14] The purpose of this study was to determine the effect of ultrasound-guided TAP block, administered after induction of general anesthesia and before awakening from general anesthesia, on the quality of recovery after surgery using the QoR-40 as well as to assess remifentanil requirements during surgery in patients undergoing laparoscopic nephrectomy. We hypothesized that the use of ultrasound-guided TAP block would enhance patients’ quality of recovery and reduce the use of remifentanil.
2. Materials and Methods
[SUBTITLE] 2.1. Study design [SUBSECTION] This study was conducted at the Kyungpook National University Chilgok Hospital (Daegu, South Korea) between January 2016 and February 2017. The study protocol was approved by the Research Ethics Committee of the Kyungpook National University Chilgok Hospital, Daegu, South Korea. This study received institutional approval (KNUCH 2015-12-004) and was conducted in accordance with the principles of the Declaration of Helsinki. All the participants provided their informed consent prior to participation. [SUBTITLE] 2.2. Patient selection [SUBSECTION] Sixty of the 87 patients who underwent laparoscopic nephrectomy during the study period participated in this study. Only patients aged between 20 and 80 years, with American Society of Anesthesiologists physical status class I to III, who had undergone laparoscopic nephrectomy under general anesthesia in the Department of Urology at Kyungpook National University Chilgok Hospital were included in the current study. Patients were excluded from this study if they refused to participate or if they exceeded American Society of Anesthesiologists physical status class III, were younger than 19 years of age or older than 81 years of age, had difficulty communicating due to an intellectual disability, underwent partial nephrectomy or nephroureterectomy, had previously undergone abdominal or pelvic surgery, were undergoing multiple (combined) surgeries on other parts of the body/other organs, were excessively obese (body mass index >35 kg/m2), had a history of long-term opioid usage, had a hemorrhagic predisposition or a hemorrhagic disorder, or presented with contraindications to regional anesthesia. [SUBTITLE] 2.3. Randomization and blinding [SUBSECTION] Patients were randomly assigned to either the TAP group (receiving 40 mL of 0.375% ropivacaine; n = 30) or the Control group (receiving only 40 mL of normal saline; n = 30). Randomization was computer-generated (https://www.randomizer.org), and a sealed opaque envelope method was used to hide patient randomization numbers until the start of anesthesia induction. The sealed opaque envelope was opened by the research investigator immediately before performing the TAP block. A registered nurse, who did not enter the operating room and was fully familiar with the methods and procedures of this study, was in charge of consultation with patients and data collection and was blinded to the group assignments, which were not disclosed until the final statistical analysis was completed.
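The allocation procedure described above lends itself to a short illustration. The Python sketch below shows one way a 1:1 computer-generated allocation for 60 patients could be produced and assigned to numbered sealed envelopes. It is a hypothetical reconstruction, not the study's actual code (the authors used www.randomizer.org); the function name, variable names, and the seed value are our own assumptions.

```python
import random

def generate_allocation(n_per_group=30, seed=None):
    """Illustrative 1:1 allocation list for a two-arm trial (TAP vs. Control).

    Shuffles a list containing exactly n_per_group 'TAP' and n_per_group
    'Control' labels, so the final group sizes are fixed at 30 each,
    mirroring the design described above.
    """
    rng = random.Random(seed)  # hypothetical seed, only for reproducibility of the example
    allocation = ["TAP"] * n_per_group + ["Control"] * n_per_group
    rng.shuffle(allocation)
    return allocation

if __name__ == "__main__":
    # Each entry corresponds to one sealed opaque envelope,
    # opened by the investigator immediately before the block.
    for envelope_number, group in enumerate(generate_allocation(seed=2016), start=1):
        print(f"Envelope {envelope_number:02d}: {group}")
```

Fixing the number of labels per arm before shuffling keeps the two groups exactly balanced, which is consistent with the reported n = 30 per group.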
[SUBTITLE] 2.4. General anesthesia and monitoring [SUBSECTION] The patients included in this study were not administered any prior medication. During the operation, the patient was monitored using pulse oximetry, electrocardiography, noninvasive arterial pressure measurements, capnography, bispectral index (BIS) measurements, and a nasopharyngeal temperature probe. However, some elderly patients and/or those with cardiovascular disease were monitored using invasive radial arterial blood pressure measurements. To induce general anesthesia, total intravenous anesthesia was initiated using target-controlled infusion of propofol and remifentanil (Orchestra; Fresenius Vial, Auvergne Rhone Alpes, France). The initial effect-site concentration of propofol was 6.0 µg/mL; this was gradually increased until BIS values reached 40 to 60. Following this, 3.0 ng/mL of remifentanil was started with a targeted injection of remifentanil at the effect site. When the remifentanil concentration reached the target value, 0.8 mg/kg of rocuronium was administered to facilitate endotracheal intubation. The lungs were ventilated with a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain the end-tidal partial pressure of carbon dioxide at 30 to 40 mm Hg. Target-controlled infusion of propofol and remifentanil was continued throughout the surgery. Concentrations of propofol and remifentanil were continuously adjusted to maintain a BIS value of 40 to 60 and a mean arterial pressure within ±20% of the reference value. To maintain the patient's vital signs, the remifentanil concentration was maintained below 2 ng/mL and, in addition, phenylephrine was injected when the patient's mean arterial blood pressure remained below 80% of the baseline. Similarly, nicardipine was administered when the remifentanil concentration was maintained at ≥8 ng/mL or when the patient's mean arterial blood pressure remained at or above 120% of the baseline value. Atropine and esmolol were administered, respectively, when the patient's heart rate dropped to less than 46 beats per minute for more than 30 seconds or increased to more than 90 beats per minute for more than 30 seconds. Repeated or continuous infusion was performed when deemed necessary. Rocuronium was injected at 1 µg/kg/min to maintain muscle relaxation during surgery and was stopped before closing the abdomen. During surgery, lactated Ringer's solution was continuously injected at 8 mL/kg/h, and the amount of bleeding was supplemented with 3 times the volume of lactated Ringer's solution throughout the surgery. A heated blanket and warm intravenous and surgical irrigation fluids were applied to maintain normal body temperature. In all patients, PCA was connected to the patient immediately before skin closure was completed, and propofol and remifentanil infusions were discontinued. After sufficient oral aspiration, the inhaled oxygen fraction and fresh gas flow rate were increased to 100% and 8 L, respectively. The neuromuscular blockade was reversed with 0.4 mg glycopyrrolate and pyridostigmine (15 mg) and confirmed by train-of-four monitoring. The endotracheal tube was removed when the patient regained spontaneous breathing and consciousness. The patient was then transferred to the postoperative recovery room. Lactated Ringer's solution was injected in the postoperative recovery room at a rate of 2 mL/kg/h.
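For readers who prefer a compact view, the maintenance rules above can be summarised as simple threshold logic. The Python sketch below is purely illustrative, our own schematic of the protocol as described rather than the authors' software or a clinical decision tool; the function name and returned action strings are hypothetical.

```python
def suggest_actions(bis, map_ratio, remi_ce_ng_ml, hr_bpm, hr_out_of_range_s):
    """Schematic of the intraoperative titration thresholds described above.

    map_ratio: current mean arterial pressure divided by the baseline value.
    remi_ce_ng_ml: remifentanil effect-site concentration (ng/mL).
    hr_out_of_range_s: seconds the heart rate has been outside 46-90 bpm.
    """
    actions = []
    if not 40 <= bis <= 60:
        actions.append("adjust propofol/remifentanil targets (keep BIS 40-60)")
    if map_ratio < 0.8 and remi_ce_ng_ml < 2.0:
        actions.append("give phenylephrine")
    if map_ratio >= 1.2 or remi_ce_ng_ml >= 8.0:
        actions.append("give nicardipine")
    if hr_bpm < 46 and hr_out_of_range_s > 30:
        actions.append("give atropine")
    if hr_bpm > 90 and hr_out_of_range_s > 30:
        actions.append("give esmolol")
    return actions

# Example: hypotension while the remifentanil target is low suggests phenylephrine.
print(suggest_actions(bis=50, map_ratio=0.75, remi_ce_ng_ml=1.5, hr_bpm=70, hr_out_of_range_s=0))
```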
2.5. TAP block
In all surgeries, the TAP block was performed twice (after induction of general anesthesia and before awakening from general anesthesia). All surgeries and TAP blocks were performed in the lateral decubitus position with a slight table break at the waist. All study procedures were performed by one of the study authors (with more than 10 years of experience in ultrasound-guided procedures), including injection of 20 mL of 0.375% ropivacaine or normal saline into the TAP space under ultrasound guidance (ProSound Alpha7 Premier; Hitachi Aloka Medical, Tokyo, Japan) using a broadband (4–13 MHz) linear array ultrasound probe. Both the subcostal and lateral approaches were used to cover the entire surgical site of the laparoscopic nephrectomy.[10] The subcostal approach targets the TAP compartment of the anterior abdominal wall (between the xiphoid process and the anterior superior iliac spine), whereas the lateral approach targets the TAP compartment of the lateral abdominal wall (between the mid-axillary and anterior axillary lines). The drug was injected at 3 to 5 sites, and an ultrasound scan was performed to confirm that it was evenly distributed along the skin incision sites.
2.6. The QoR-40 questionnaire
One of the study authors fully explained the QoR-40 to the study patients during the ward visit on the day before surgery. This study was conducted before the Korean version of the QoR-40 (KvQoR-40) was published; therefore, a translation of the QoR-40 from English into Korean was used. After the KvQoR-40 became available, it was confirmed that there was no difference in content or meaning between our translation and the KvQoR-40. The questionnaire consists of 40 items in Korean, each scored on a 5-point scale, for a maximum total of 200 points; the higher the total score, the better the quality of recovery. On the third day after surgery, a researcher blinded to the patient’s group assignment administered the QoR-40 questionnaire.

2.7. PCA data extraction and analysis
Patients were treated using an electronic intravenous (IV) PCA device (Accumate 1100; Woo Young Medical, Seoul, South Korea); the drug solution consisted only of fentanyl and ramosetron, with a total volume of 60 mL. The basal infusion rate was 0.5 mL/h, the bolus dose was set at 0.5 mL, and the lockout interval was 15 minutes. The IV PCA was connected to the venous fluid line before the skin sutures were completed. On the day before surgery, patients were instructed to press the PCA bolus button when they felt pain (visual analog scale score of 3 or higher) and to repeat the procedure if pain of the same intensity persisted for more than 15 minutes after pressing the button. If the pain still persisted, a rescue analgesic was given according to the existing protocol of the Department of Urology at our hospital. After the IV PCA had been used completely, the device was collected and the total usage time of the PCA was recorded. In addition, the information stored in the PCA device was exported to an Excel datasheet (Excel 2018; Microsoft, Redmond, WA), and the cumulative number of IV PCA bolus doses delivered to the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was analyzed.
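The cumulative bolus counts used in this analysis can be derived directly from the exported PCA event log. The following is a minimal sketch in Python, assuming a hypothetical CSV export with one row per delivered bolus and a column giving the elapsed time in minutes since the end of surgery; the file name and column names are illustrative and are not part of the Accumate export format.

```python
import pandas as pd

# Hypothetical export: one row per delivered bolus dose, with the elapsed time
# (in minutes) from the end of surgery. Column names are illustrative only.
events = pd.read_csv("pca_bolus_log.csv")  # columns: patient_id, minutes_after_surgery

# Cut points used in the study: 1, 2, 3, 6, 12, 24, 48, and 72 hours.
cutoffs_h = [1, 2, 3, 6, 12, 24, 48, 72]

# Cumulative number of bolus doses per patient at each cut point.
summary = pd.DataFrame({
    f"{h} h": events.groupby("patient_id")["minutes_after_surgery"]
                    .apply(lambda m: int((m <= h * 60).sum()))
    for h in cutoffs_h
})
print(summary)
```

Group means and standard deviations at each cut point can then be obtained by merging this per-patient summary with the group assignments and averaging within groups.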
2.8. Study outcomes
The main purpose of this study was to evaluate the effect of the TAP block, administered twice (after induction of general anesthesia and before awakening from general anesthesia), on the postoperative quality of recovery assessed with the QoR-40 and on the intraoperative remifentanil requirement (µg/kg/h), comparing the TAP and Control groups. In addition, to evaluate the postoperative analgesic effect of the TAP block, the total duration of PCA use after surgery and the cumulative number of IV PCA bolus doses delivered at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery were analyzed in both groups.

2.9. Sample size
The postoperative quality of recovery measured with the QoR-40 and the intraoperative remifentanil requirement were used to judge the effect of the TAP block. In a preliminary study of 16 patients, the means ± standard deviations of the QoR-40 score in the Control and TAP groups were 160.3 ± 25.1 and 188.3 ± 6.3, respectively. Aiming for 95% power at a 5% significance level, we calculated a target sample size of 14 patients per group. In the same preliminary study, the means ± standard deviations of the intraoperative remifentanil requirement (µg/kg/h) in the Control and TAP groups were 0.153 ± 0.053 and 0.104 ± 0.035, respectively. Again aiming for 95% power at a 5% significance level, we calculated a target sample size of 23 patients per group.
We based the final sample size on the larger of these estimates, derived from the intraoperative remifentanil requirement. Assuming a 30% dropout rate, we determined that a minimum of 30 patients per group would be required to achieve meaningful results in this study.

2.10. Statistical analysis
Data were entered into a study database and analyzed using IBM SPSS Statistics (version 27.0; IBM Corp, Armonk, NY). The chi-square test was used to compare the sex distribution between groups, and independent t tests were used to compare the remaining demographic data. The statistical significance of differences between the TAP and Control groups in QoR-40 scores, intraoperative remifentanil requirements, and IV PCA use (exported data) was also assessed with independent t tests. All values are expressed as means ± standard deviations. All reported P values were 2-sided, and P values <.05 were considered statistically significant.
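The per-group sample sizes reported in the sample size subsection above (14 per group for the QoR-40 score and 23 per group for the remifentanil requirement) can be approximately reproduced from the pilot means and standard deviations with a standard two-sample t-test power calculation. The sketch below, in Python with statsmodels, assumes a pooled-standard-deviation effect size; the exact numbers depend on the software and rounding originally used, so small differences from the reported values are possible.

```python
from math import ceil, sqrt

from statsmodels.stats.power import TTestIndPower


def n_per_group(mean1, sd1, mean2, sd2, power=0.95, alpha=0.05):
    """Per-group n for a two-sided, two-sample t test using a pooled-SD effect size."""
    pooled_sd = sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    effect_size = abs(mean1 - mean2) / pooled_sd
    n = TTestIndPower().solve_power(effect_size=effect_size, power=power,
                                    alpha=alpha, alternative="two-sided")
    return ceil(n)


# Pilot data from the sample size subsection (Control vs TAP)
print(n_per_group(160.3, 25.1, 188.3, 6.3))      # QoR-40 score
print(n_per_group(0.153, 0.053, 0.104, 0.035))   # remifentanil requirement (ug/kg/h)
```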
3. Results
3.1. Patient characteristics
Of the 87 patients who underwent laparoscopic nephrectomy during the study period, 5 refused to participate in this study, 14 underwent laparoscopic partial nephrectomy, 4 underwent laparoscopic nephroureterectomy, 2 underwent combined surgeries, and 2 had a body mass index of >35 kg/m2. Thus, a total of 60 patients were randomly assigned to the 2 groups. Two patients from the Control group and 1 patient from the TAP group were excluded from the study because of side effects arising from IV PCA, including nausea, vomiting, and urticaria, and a further 2 patients from the Control group and 1 patient from the TAP group were excluded because of conversion to open surgery. A total of 54 patients (Control group, n = 26; TAP group, n = 28) were included in the final study population (Fig. 1). Table 1 presents the demographic and perioperative characteristics of the 2 groups; there were no significant differences between the groups.
Table 1. Patient characteristics and intraoperative data. BMI = body mass index (kg/m2). Results are presented as means ± standard deviation or numbers of patients. P < .05 indicates a significant difference between groups.
Figure 1. Patient flowchart showing the patients included in the enrollment, group allocation, follow-up, and analysis phases of the study.

3.2. The QoR-40 questionnaire
The QoR-40 score, measured during the ward visit on the third day after surgery, was significantly higher in the TAP group (171.9 ± 23.1) than in the Control group (151.9 ± 28.1) (P = .006) (Fig. 2).
Figure 2. Quality of recovery 40 questionnaire findings. *P < .05 indicates a significant difference between groups.
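Because only group means, standard deviations, and sample sizes are needed, the between-group comparison of QoR-40 scores can be checked from the summary statistics alone. The following minimal sketch assumes the pooled-variance (Student) independent t test described in the statistical analysis subsection and roughly reproduces the reported P value of .006.

```python
from scipy.stats import ttest_ind_from_stats

# QoR-40 summary statistics (mean, SD, n) for each group.
t, p = ttest_ind_from_stats(171.9, 23.1, 28,    # TAP group
                            151.9, 28.1, 26,    # Control group
                            equal_var=True)
print(f"t = {t:.2f}, two-sided P = {p:.3f}")
```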
3.3. Intraoperative remifentanil requirement
The intraoperative remifentanil requirement (µg/kg/h) did not differ significantly between groups (0.100 ± 0.050 in the TAP group vs 0.111 ± 0.060 in the Control group; P = .439) (Fig. 3).
Figure 3. Intraoperative remifentanil requirement.

3.4. PCA data
The cumulative number of bolus doses at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was 1.15 ± 0.97, 2.85 ± 1.08, 3.81 ± 1.44, 6.38 ± 3.60, 10.12 ± 5.96, 14.96 ± 10.53, 23.30 ± 17.44, and 24.40 ± 17.76, respectively, in the Control group and 0.64 ± 0.83, 1.86 ± 1.18, 2.68 ± 1.44, 4.32 ± 3.43, 6.21 ± 5.50, 9.43 ± 9.22, 10.70 ± 9.64, and 12.44 ± 11.32, respectively, in the TAP group; the difference was significant at every time point (P = .041, .002, .006, .036, .015, .047, .008, and .034, respectively) (Fig. 4). The total PCA usage time also differed significantly, being longer in the TAP group (70.26 ± 34.0 hours) than in the Control group (50.05 ± 32.6 hours; P = .030) (Fig. 5).
Figure 4. Frequency of PCA bolus injection. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.
Figure 5. Total time for use of PCA. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.
[ "2.1. Study design", "2.2. Patient selection", "2.3. Randomization and blinding", "2.4. General anesthesia and monitoring", "2.5. TAP block", "2.6. The QoR-40 questionnaire", "2.7. PCA data extraction and analysis", "2.8. Study outcomes", "2.9. Sample size", "2.10. Statistical analysis", "3.2. The QoR-40 questionnaire", "3.3. Intraoperative remifentanil requirement", "3.4. PCA data", "Author contributions" ]
[ "This study was conducted at the Kyungpook National University Chilgok Hospital (Daegu, South Korea) between January 2016 and February 2017. The study protocol was approved by the Research Ethics Committee of the Kyungpook National University Chilgok Hospital, Daegu, South Korea. This study received institutional approval (KNUCH 2015-12-004) and was conducted in accordance with the principles of the Declaration of Helsinki. All the participants provided their informed consent prior to participation.", "Sixty of the 87 patients who underwent laparoscopic nephrectomy during the study period participated in this study. Only patients, aged between 20 and 80 years, with an American Society of Anesthesiologists physical status class I to III who had undergone laparoscopic nephrectomy under general anesthesia in the Department of Urology at Kyungpook National University Chilgok Hospital were included in the current study.\nPatients were excluded from this study if they refused to participate or if they exceeded American Society of Anesthesiologists physical status class III, were younger than 19 years of age or older than 81 years of age, had difficulty communicating due to an intellectual disability, underwent partial nephrectomy or nephroureterectomy, had previously undergone abdominal or pelvic surgery, were undergoing multiple (combined) surgeries on other parts of the body/other organs, were excessively obese (body mass index >35 kg/m2), had a history of long-term opioid usage, had a hemorrhagic predisposition or a hemorrhagic disorder, or presented with contraindications to regional anesthesia.", "Patients were randomly assigned to either the TAP group (receiving 40 mL of 0.375% ropivacaine; n = 30) or the Control group (receiving only 40 mL of normal saline; n = 30). Randomization was computer-generated (https://www.randomizer.org), and a sealed opaque envelope method was used to hide patient randomization numbers until the start of anesthesia induction. The sealed opaque envelope was opened by the research investigator immediately before performing the TAP block. A registered nurse, who did not enter the operating room and was fully familiar with the methods and procedures of this study, was in charge of consultation with patients and data collection and was blinded to the group assignments, which were not disclosed until the final statistical analysis was completed.", "The patients included in this study were not administered any prior medication. During the operation, the patient was monitored using pulse oximetry, electrocardiography, noninvasive arterial pressure measurements, capnography, bispectral index (BIS) measurements, and a nasopharyngeal temperature probe. However, some elderly patients and/or those with cardiovascular disease were monitored using invasive radial arterial blood pressure measurements. To induce general anesthesia, total intravenous anesthesia was initiated using target-controlled infusion of propofol and remifentanil (Orchestra; Fresenius Vial, Auvergne Rhone Alpes, France). The initial effect-site concentration of propofol was 6.0 µg/mL; this was gradually increased until BIS values reached 40 to 60. Following this, 3.0 ng/mL of remifentanil was started with a targeted injection of remifentanil at the effect site. When the remifentanil concentration reached the target value, 0.8 mg/kg of rocuronium was administered to facilitate endotracheal intubation. 
The lungs were ventilated with a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain the end-tidal partial pressure of carbon dioxide at 30 to 40 mm Hg. Target-controlled infusion of propofol and remifentanil was continued throughout the surgery. Concentrations of propofol and remifentanil were continuously adjusted to maintain a BIS value of 40 to 60 and a mean arterial pressure of ±20% of the reference value. To maintain the patient’s vital signs, the remifentanil concentration was maintained below 2 ng/mL; in addition, phenylephrine was injected when the patient’s mean arterial blood pressure was maintained below 80% of the baseline. To maintain vital signs, nicardipine was administered when the remifentanil concentration was maintained at ≥8 ng/mL or when the patient’s mean arterial blood pressure was maintained at or above 120% of the baseline value. Atropine and esmolol were administered separately when the patient’s heart rate dropped to less than 46 beats per minute or increased to more than 30 seconds or increased to more than 90 beats per minute for more than 30 seconds. Repeated or continuous infusion was performed when deemed necessary. Rocuronium was injected at 1 µg/kg/min to maintain muscle relaxation during surgery and was stopped before closing the abdomen. During surgery, lactated Ringer’s solution was continuously injected at 8 mL/kg/h, and the amount of bleeding was supplemented with 3 times the volume of lactated Ringer’s solution throughout the surgery. A heated blanket and warm intravenous and surgical irrigation fluids were applied to maintain normal body temperature. In all patients, PCA was connected to the patient immediately before skin closure was completed, and propofol and remifentanil infusions were discontinued. After sufficient oral aspiration, the inhaled oxygen fraction and fresh gas flow rate were increased to 100% and 8 L, respectively. The neuromuscular blockade was reversed with 0.4 mg glycopyrrolate and pyridostigmine (15 mg) and confirmed by train-of-four monitoring. The endotracheal tube was removed when the patient regained spontaneous breathing and consciousness. The patient was then transferred to the postoperative recovery room. Lactated Ringer’s solution was injected in the postoperative recovery room at a rate of 2 mL/kg/h.", "In all surgeries, the TAP block was performed twice (after induction of general anesthesia and before awakening from general anesthesia). All surgeries and TAP blocks were performed in the lateral decubitus position with a slight table break at the waist.\nAll study procedures were performed by one of the study authors (with more than 10 years of experience in ultrasound procedures), including injecting 20 mL of 0.375% ropivacaine or normal saline into the TAP space under ultrasound guidance (ProSound Alpha7 Premier; Hitachi Aloka Medical, Tokyo, Japan) using a broadband (4–13 MHz) linear array ultrasound probe. Both the subcostal and lateral approaches were used to sufficiently cover the entire surgical site of the laparoscopic nephrectomy.[10] The subcostal approach targets the TAP compartment of the anterior abdominal wall (between the xyphoid process and the anterosuperior iliac spine), whereas the lateral approach targets the TAP compartment of the lateral abdominal wall (between the mid-axillary and anterior axillary lines). 
Drug injection was performed at 3 to 5 sites, and an ultrasound scan was performed to confirm that the drug was evenly distributed throughout the skin incision.", "One of the study authors fully explained the QoR-40 to the study patients when visiting the ward the day before the surgery. This study was conducted before the research results of KvQoR-40 were introduced; therefore, a translation of QoR-40 from English into Korean was used. After the research results of KvQoR-40 were introduced, it was confirmed that there was no difference between the translated content and the content or meaning of the KvQoR-40 questionnaires. Namely, it consists of 40 questions in Korean; 5 points are awarded for each question for a maximum total of 200 points. The higher the total score, the higher the quality of recovery. On the third day of surgery, a researcher blinded to the patient’s group assignment completed the QoR-40 questionnaire.", "Patients were treated using an electronic intravenous (IV) PCA device (Accumate 1100, Woo Young Medical, Seoul, South Korea); the drug consisted only of fentanyl and ramosetron, with a total volume of 60 mL. The basal infusion rate was 0.5 mL/h and the bolus dose was set at 0.5 mL. The lockout interval was set at 15 minutes. An IV PCA was connected to the venous fluid line before the end of the skin sutures. On the day before the surgery, patients were instructed to press the PCA bolus button when they felt pain (visual analog scale score of 3 or higher). After pressing the bolus button, if pain (visual analog scale score of 3 or higher) persisted for more than 15 minutes, the same procedure was repeated. If the pain still persisted, a rescue analgesic was used according to the existing manual set forth by the Department of Urology at our hospital. After the IV PCA was completely used, it was collected and the total usage time of the PCA was checked. In addition, the information stored in the PCA machine was extracted into an Excel datasheet (Excel 2018; Microsoft, Redmond, WA), and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was analyzed.", "The main purpose of this study was to evaluate the effect of TAP block, which was administered twice (after induction of general anesthesia and before awakening from general anesthesia), on the postoperative quality of recovery using the QoR-40 as well as patients’ intraoperative remifentanil requirements (µg/kg/h) in comparative evaluations between the TAP and Control groups. In addition, to evaluate the postoperative analgesic effect of the TAP block, the total duration of PCA used after surgery and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery in both groups were analyzed.", "The postoperative quality of recovery using the QoR-40 and intraoperative remifentanil requirements were evaluated to judge the effect of the TAP block. In a preliminary study of 16 patients, the means ± standard deviations of the QoR-40 score, used to confirm the postoperative quality of recovery in the Control and TAP groups, were 160.3 ± 25.1 and 188.3 ± 6.3, respectively. As we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 14 patients per group. In a preliminary study of 16 patients, the means ± standard deviations for the intraoperative remifentanil requirement (µg/kg/h) in the Control and TAP groups were 0.153 ± 0.053 and 0.104 ± 0.035, respectively. 
Author contributions
Conceptualization: Jun-Mo Park.
Data curation: Joonhee Lee.
Formal analysis: Joonhee Lee.
Investigation: Jun-Mo Park.
Supervision: Jun-Mo Park.
Validation: Jun-Mo Park.
Visualization: Joonhee Lee.
Writing – original draft: Jun-Mo Park.
Writing – review & editing: Jun-Mo Park.
[ null, "subjects", null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study design", "2.2. Patient selection", "2.3. Randomization and blinding", "2.4. General anesthesia and monitoring", "2.5. TAP block", "2.6. The QoR-40 questionnaire", "2.7. PCA data extraction and analysis", "2.8. Study outcomes", "2.9. Sample size", "2.10. Statistical analysis", "3. Results", "3.1. Patient characteristics", "3.2. The QoR-40 questionnaire", "3.3. Intraoperative remifentanil requirement", "3.4. PCA data", "4. Discussion", "Author contributions" ]
[ "Severe acute postoperative pain adversely affects the quality of recovery in surgical patients as well as patients’ surgical outcomes.[1] As the average age of patients undergoing surgery continues to increase due to the rapid aging of the global population, the number of patients with underlying medical diseases (including cardiopulmonary, respiratory, and endocrine system diseases) is increasing significantly and at a much higher rate than in the past.[2] Therefore, achieving proper control of acute postoperative pain is necessary. We note that opioids used to maintain proper anesthesia during surgery have a meaningful effect on the patient’s immunity, thereby strongly affecting the surgical outcome.[3]\nRecently, nephrectomy has been performed laparoscopically to minimize surgical scars and postoperative pain at the surgical site due to performing surgery through the smallest possible incision. Nevertheless, these techniques still require the use of powerful analgesics after surgery. Moreover, in most cases, laparoscopic nephrectomy results in complaints of severe pain requiring the use of patient-controlled analgesia (PCA) after surgery. PCA is widely used on a global scale, as it has proven effective in acute postoperative pain control.[4]However, as the main drugs used for PCA are intravenous opioids and nonsteroidal anti-inflammatory drugs, it is not uncommon for PCA to be removed in the middle of the treatment course due to the common occurrence of adverse side effects, such as dizziness, nausea, vomiting, urticaria, and respiratory depression.[5] Therefore, in many cases, a pain control method to replace PCA is urgently needed, and a multimodal approach has recently been recommended to this effect.[6,7] As part of a multimodal approach (and given recent developments in ultrasound devices), regional analgesia under ultrasound guidance is currently widely performed in order to minimize the use of opioids during surgery and to reduce postoperative pain, even in surgeries performed under general anesthesia.[8]\nSince Rafi et al[9] first introduced the landmark-guided transversus abdominis plane (TAP) block in 2001, it has become one of the most commonly performed truncal blocks (after undergoing several modifications). It is effective for acute postoperative pain control after various types of abdominal surgery using laparotomy or laparoscopy.[10] In particular, with recent developments in ultrasound devices, the increased use of portable devices has made it easier to implement ultrasound devices in a clinical setting, thereby enabling the performance of safer and more accurate procedures (even in an already state-of-the art operating room).[11,12]\nThe quality of recovery after anesthesia and surgery is critically important in evaluating the success of the operation and is an important measure in judging a patient’s initial health status after surgery.[13] With increasing interest in the quality of recovery, several methods for assessing this metric have been under active development within the field of anesthesia. The Quality of Recovery-40 (QoR-40) questionnaire developed by Myles et al[13] is one of the most commonly used tools in this regard. 
Despite the distinctly different cultural backgrounds of targeted patients, the Korean version of the QoR-40 (KvQoR-40) has been shown to be acceptable and as reliable as the original English-language QoR-40 in terms of assessing the quality of recovery after anesthesia and surgery in Korean patients.[14]\nThe purpose of this study was to determine the effect of ultrasound-guided TAP block, administered after induction of general anesthesia and before awakening from general anesthesia, on the quality of recovery after surgery using the QoR-40 as well as to assess remifentanil requirements during surgery in patients undergoing laparoscopic nephrectomy. We hypothesized that the use of ultrasound-guided TAP block would enhance patients’ quality of recovery and reduce the use of remifentanil.", "[SUBTITLE] 2.1. Study design [SUBSECTION] This study was conducted at the Kyungpook National University Chilgok Hospital (Daegu, South Korea) between January 2016 and February 2017. The study protocol was approved by the Research Ethics Committee of the Kyungpook National University Chilgok Hospital, Daegu, South Korea. This study received institutional approval (KNUCH 2015-12-004) and was conducted in accordance with the principles of the Declaration of Helsinki. All the participants provided their informed consent prior to participation.\nThis study was conducted at the Kyungpook National University Chilgok Hospital (Daegu, South Korea) between January 2016 and February 2017. The study protocol was approved by the Research Ethics Committee of the Kyungpook National University Chilgok Hospital, Daegu, South Korea. This study received institutional approval (KNUCH 2015-12-004) and was conducted in accordance with the principles of the Declaration of Helsinki. All the participants provided their informed consent prior to participation.\n[SUBTITLE] 2.2. Patient selection [SUBSECTION] Sixty of the 87 patients who underwent laparoscopic nephrectomy during the study period participated in this study. Only patients, aged between 20 and 80 years, with an American Society of Anesthesiologists physical status class I to III who had undergone laparoscopic nephrectomy under general anesthesia in the Department of Urology at Kyungpook National University Chilgok Hospital were included in the current study.\nPatients were excluded from this study if they refused to participate or if they exceeded American Society of Anesthesiologists physical status class III, were younger than 19 years of age or older than 81 years of age, had difficulty communicating due to an intellectual disability, underwent partial nephrectomy or nephroureterectomy, had previously undergone abdominal or pelvic surgery, were undergoing multiple (combined) surgeries on other parts of the body/other organs, were excessively obese (body mass index >35 kg/m2), had a history of long-term opioid usage, had a hemorrhagic predisposition or a hemorrhagic disorder, or presented with contraindications to regional anesthesia.\nSixty of the 87 patients who underwent laparoscopic nephrectomy during the study period participated in this study. 
Only patients, aged between 20 and 80 years, with an American Society of Anesthesiologists physical status class I to III who had undergone laparoscopic nephrectomy under general anesthesia in the Department of Urology at Kyungpook National University Chilgok Hospital were included in the current study.\nPatients were excluded from this study if they refused to participate or if they exceeded American Society of Anesthesiologists physical status class III, were younger than 19 years of age or older than 81 years of age, had difficulty communicating due to an intellectual disability, underwent partial nephrectomy or nephroureterectomy, had previously undergone abdominal or pelvic surgery, were undergoing multiple (combined) surgeries on other parts of the body/other organs, were excessively obese (body mass index >35 kg/m2), had a history of long-term opioid usage, had a hemorrhagic predisposition or a hemorrhagic disorder, or presented with contraindications to regional anesthesia.\n[SUBTITLE] 2.3. Randomization and blinding [SUBSECTION] Patients were randomly assigned to either the TAP group (receiving 40 mL of 0.375% ropivacaine; n = 30) or the Control group (receiving only 40 mL of normal saline; n = 30). Randomization was computer-generated (https://www.randomizer.org), and a sealed opaque envelope method was used to hide patient randomization numbers until the start of anesthesia induction. The sealed opaque envelope was opened by the research investigator immediately before performing the TAP block. A registered nurse, who did not enter the operating room and was fully familiar with the methods and procedures of this study, was in charge of consultation with patients and data collection and was blinded to the group assignments, which were not disclosed until the final statistical analysis was completed.\nPatients were randomly assigned to either the TAP group (receiving 40 mL of 0.375% ropivacaine; n = 30) or the Control group (receiving only 40 mL of normal saline; n = 30). Randomization was computer-generated (https://www.randomizer.org), and a sealed opaque envelope method was used to hide patient randomization numbers until the start of anesthesia induction. The sealed opaque envelope was opened by the research investigator immediately before performing the TAP block. A registered nurse, who did not enter the operating room and was fully familiar with the methods and procedures of this study, was in charge of consultation with patients and data collection and was blinded to the group assignments, which were not disclosed until the final statistical analysis was completed.\n[SUBTITLE] 2.4. General anesthesia and monitoring [SUBSECTION] The patients included in this study were not administered any prior medication. During the operation, the patient was monitored using pulse oximetry, electrocardiography, noninvasive arterial pressure measurements, capnography, bispectral index (BIS) measurements, and a nasopharyngeal temperature probe. However, some elderly patients and/or those with cardiovascular disease were monitored using invasive radial arterial blood pressure measurements. To induce general anesthesia, total intravenous anesthesia was initiated using target-controlled infusion of propofol and remifentanil (Orchestra; Fresenius Vial, Auvergne Rhone Alpes, France). The initial effect-site concentration of propofol was 6.0 µg/mL; this was gradually increased until BIS values reached 40 to 60. 
Following this, 3.0 ng/mL of remifentanil was started with a targeted injection of remifentanil at the effect site. When the remifentanil concentration reached the target value, 0.8 mg/kg of rocuronium was administered to facilitate endotracheal intubation. The lungs were ventilated with a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain the end-tidal partial pressure of carbon dioxide at 30 to 40 mm Hg. Target-controlled infusion of propofol and remifentanil was continued throughout the surgery. Concentrations of propofol and remifentanil were continuously adjusted to maintain a BIS value of 40 to 60 and a mean arterial pressure of ±20% of the reference value. To maintain the patient’s vital signs, the remifentanil concentration was maintained below 2 ng/mL; in addition, phenylephrine was injected when the patient’s mean arterial blood pressure was maintained below 80% of the baseline. To maintain vital signs, nicardipine was administered when the remifentanil concentration was maintained at ≥8 ng/mL or when the patient’s mean arterial blood pressure was maintained at or above 120% of the baseline value. Atropine and esmolol were administered separately when the patient’s heart rate dropped to less than 46 beats per minute or increased to more than 30 seconds or increased to more than 90 beats per minute for more than 30 seconds. Repeated or continuous infusion was performed when deemed necessary. Rocuronium was injected at 1 µg/kg/min to maintain muscle relaxation during surgery and was stopped before closing the abdomen. During surgery, lactated Ringer’s solution was continuously injected at 8 mL/kg/h, and the amount of bleeding was supplemented with 3 times the volume of lactated Ringer’s solution throughout the surgery. A heated blanket and warm intravenous and surgical irrigation fluids were applied to maintain normal body temperature. In all patients, PCA was connected to the patient immediately before skin closure was completed, and propofol and remifentanil infusions were discontinued. After sufficient oral aspiration, the inhaled oxygen fraction and fresh gas flow rate were increased to 100% and 8 L, respectively. The neuromuscular blockade was reversed with 0.4 mg glycopyrrolate and pyridostigmine (15 mg) and confirmed by train-of-four monitoring. The endotracheal tube was removed when the patient regained spontaneous breathing and consciousness. The patient was then transferred to the postoperative recovery room. Lactated Ringer’s solution was injected in the postoperative recovery room at a rate of 2 mL/kg/h.\nThe patients included in this study were not administered any prior medication. During the operation, the patient was monitored using pulse oximetry, electrocardiography, noninvasive arterial pressure measurements, capnography, bispectral index (BIS) measurements, and a nasopharyngeal temperature probe. However, some elderly patients and/or those with cardiovascular disease were monitored using invasive radial arterial blood pressure measurements. To induce general anesthesia, total intravenous anesthesia was initiated using target-controlled infusion of propofol and remifentanil (Orchestra; Fresenius Vial, Auvergne Rhone Alpes, France). The initial effect-site concentration of propofol was 6.0 µg/mL; this was gradually increased until BIS values reached 40 to 60. Following this, 3.0 ng/mL of remifentanil was started with a targeted injection of remifentanil at the effect site. 
When the remifentanil concentration reached the target value, 0.8 mg/kg of rocuronium was administered to facilitate endotracheal intubation. The lungs were ventilated with a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain the end-tidal partial pressure of carbon dioxide at 30 to 40 mm Hg. Target-controlled infusion of propofol and remifentanil was continued throughout the surgery. Concentrations of propofol and remifentanil were continuously adjusted to maintain a BIS value of 40 to 60 and a mean arterial pressure of ±20% of the reference value. To maintain the patient’s vital signs, the remifentanil concentration was maintained below 2 ng/mL; in addition, phenylephrine was injected when the patient’s mean arterial blood pressure was maintained below 80% of the baseline. To maintain vital signs, nicardipine was administered when the remifentanil concentration was maintained at ≥8 ng/mL or when the patient’s mean arterial blood pressure was maintained at or above 120% of the baseline value. Atropine and esmolol were administered separately when the patient’s heart rate dropped to less than 46 beats per minute or increased to more than 30 seconds or increased to more than 90 beats per minute for more than 30 seconds. Repeated or continuous infusion was performed when deemed necessary. Rocuronium was injected at 1 µg/kg/min to maintain muscle relaxation during surgery and was stopped before closing the abdomen. During surgery, lactated Ringer’s solution was continuously injected at 8 mL/kg/h, and the amount of bleeding was supplemented with 3 times the volume of lactated Ringer’s solution throughout the surgery. A heated blanket and warm intravenous and surgical irrigation fluids were applied to maintain normal body temperature. In all patients, PCA was connected to the patient immediately before skin closure was completed, and propofol and remifentanil infusions were discontinued. After sufficient oral aspiration, the inhaled oxygen fraction and fresh gas flow rate were increased to 100% and 8 L, respectively. The neuromuscular blockade was reversed with 0.4 mg glycopyrrolate and pyridostigmine (15 mg) and confirmed by train-of-four monitoring. The endotracheal tube was removed when the patient regained spontaneous breathing and consciousness. The patient was then transferred to the postoperative recovery room. Lactated Ringer’s solution was injected in the postoperative recovery room at a rate of 2 mL/kg/h.\n[SUBTITLE] 2.5. TAP block [SUBSECTION] In all surgeries, the TAP block was performed twice (after induction of general anesthesia and before awakening from general anesthesia). All surgeries and TAP blocks were performed in the lateral decubitus position with a slight table break at the waist.\nAll study procedures were performed by one of the study authors (with more than 10 years of experience in ultrasound procedures), including injecting 20 mL of 0.375% ropivacaine or normal saline into the TAP space under ultrasound guidance (ProSound Alpha7 Premier; Hitachi Aloka Medical, Tokyo, Japan) using a broadband (4–13 MHz) linear array ultrasound probe. 
Both the subcostal and lateral approaches were used to sufficiently cover the entire surgical site of the laparoscopic nephrectomy.[10] The subcostal approach targets the TAP compartment of the anterior abdominal wall (between the xyphoid process and the anterosuperior iliac spine), whereas the lateral approach targets the TAP compartment of the lateral abdominal wall (between the mid-axillary and anterior axillary lines). Drug injection was performed at 3 to 5 sites, and an ultrasound scan was performed to confirm that the drug was evenly distributed throughout the skin incision.\nIn all surgeries, the TAP block was performed twice (after induction of general anesthesia and before awakening from general anesthesia). All surgeries and TAP blocks were performed in the lateral decubitus position with a slight table break at the waist.\nAll study procedures were performed by one of the study authors (with more than 10 years of experience in ultrasound procedures), including injecting 20 mL of 0.375% ropivacaine or normal saline into the TAP space under ultrasound guidance (ProSound Alpha7 Premier; Hitachi Aloka Medical, Tokyo, Japan) using a broadband (4–13 MHz) linear array ultrasound probe. Both the subcostal and lateral approaches were used to sufficiently cover the entire surgical site of the laparoscopic nephrectomy.[10] The subcostal approach targets the TAP compartment of the anterior abdominal wall (between the xyphoid process and the anterosuperior iliac spine), whereas the lateral approach targets the TAP compartment of the lateral abdominal wall (between the mid-axillary and anterior axillary lines). Drug injection was performed at 3 to 5 sites, and an ultrasound scan was performed to confirm that the drug was evenly distributed throughout the skin incision.\n[SUBTITLE] 2.6. The QoR-40 questionnaire [SUBSECTION] One of the study authors fully explained the QoR-40 to the study patients when visiting the ward the day before the surgery. This study was conducted before the research results of KvQoR-40 were introduced; therefore, a translation of QoR-40 from English into Korean was used. After the research results of KvQoR-40 were introduced, it was confirmed that there was no difference between the translated content and the content or meaning of the KvQoR-40 questionnaires. Namely, it consists of 40 questions in Korean; 5 points are awarded for each question for a maximum total of 200 points. The higher the total score, the higher the quality of recovery. On the third day of surgery, a researcher blinded to the patient’s group assignment completed the QoR-40 questionnaire.\nOne of the study authors fully explained the QoR-40 to the study patients when visiting the ward the day before the surgery. This study was conducted before the research results of KvQoR-40 were introduced; therefore, a translation of QoR-40 from English into Korean was used. After the research results of KvQoR-40 were introduced, it was confirmed that there was no difference between the translated content and the content or meaning of the KvQoR-40 questionnaires. Namely, it consists of 40 questions in Korean; 5 points are awarded for each question for a maximum total of 200 points. The higher the total score, the higher the quality of recovery. On the third day of surgery, a researcher blinded to the patient’s group assignment completed the QoR-40 questionnaire.\n[SUBTITLE] 2.7. 
PCA data extraction and analysis [SUBSECTION] Patients were treated using an electronic intravenous (IV) PCA device (Accumate 1100, Woo Young Medical, Seoul, South Korea); the drug consisted only of fentanyl and ramosetron, with a total volume of 60 mL. The basal infusion rate was 0.5 mL/h and the bolus dose was set at 0.5 mL. The lockout interval was set at 15 minutes. An IV PCA was connected to the venous fluid line before the end of the skin sutures. On the day before the surgery, patients were instructed to press the PCA bolus button when they felt pain (visual analog scale score of 3 or higher). After pressing the bolus button, if pain (visual analog scale score of 3 or higher) persisted for more than 15 minutes, the same procedure was repeated. If the pain still persisted, a rescue analgesic was used according to the existing manual set forth by the Department of Urology at our hospital. After the IV PCA was completely used, it was collected and the total usage time of the PCA was checked. In addition, the information stored in the PCA machine was extracted into an Excel datasheet (Excel 2018; Microsoft, Redmond, WA), and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was analyzed.\nPatients were treated using an electronic intravenous (IV) PCA device (Accumate 1100, Woo Young Medical, Seoul, South Korea); the drug consisted only of fentanyl and ramosetron, with a total volume of 60 mL. The basal infusion rate was 0.5 mL/h and the bolus dose was set at 0.5 mL. The lockout interval was set at 15 minutes. An IV PCA was connected to the venous fluid line before the end of the skin sutures. On the day before the surgery, patients were instructed to press the PCA bolus button when they felt pain (visual analog scale score of 3 or higher). After pressing the bolus button, if pain (visual analog scale score of 3 or higher) persisted for more than 15 minutes, the same procedure was repeated. If the pain still persisted, a rescue analgesic was used according to the existing manual set forth by the Department of Urology at our hospital. After the IV PCA was completely used, it was collected and the total usage time of the PCA was checked. In addition, the information stored in the PCA machine was extracted into an Excel datasheet (Excel 2018; Microsoft, Redmond, WA), and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was analyzed.\n[SUBTITLE] 2.8. Study outcomes [SUBSECTION] The main purpose of this study was to evaluate the effect of TAP block, which was administered twice (after induction of general anesthesia and before awakening from general anesthesia), on the postoperative quality of recovery using the QoR-40 as well as patients’ intraoperative remifentanil requirements (µg/kg/h) in comparative evaluations between the TAP and Control groups. 
In addition, to evaluate the postoperative analgesic effect of the TAP block, the total duration of PCA used after surgery and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery in both groups were analyzed.\nThe main purpose of this study was to evaluate the effect of TAP block, which was administered twice (after induction of general anesthesia and before awakening from general anesthesia), on the postoperative quality of recovery using the QoR-40 as well as patients’ intraoperative remifentanil requirements (µg/kg/h) in comparative evaluations between the TAP and Control groups. In addition, to evaluate the postoperative analgesic effect of the TAP block, the total duration of PCA used after surgery and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery in both groups were analyzed.\n[SUBTITLE] 2.9. Sample size [SUBSECTION] The postoperative quality of recovery using the QoR-40 and intraoperative remifentanil requirements were evaluated to judge the effect of the TAP block. In a preliminary study of 16 patients, the means ± standard deviations of the QoR-40 score, used to confirm the postoperative quality of recovery in the Control and TAP groups, were 160.3 ± 25.1 and 188.3 ± 6.3, respectively. As we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 14 patients per group. In a preliminary study of 16 patients, the means ± standard deviations for the intraoperative remifentanil requirement (µg/kg/h) in the Control and TAP groups were 0.153 ± 0.053 and 0.104 ± 0.035, respectively. Again, as we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 23 patients per group. We followed the results of the intraoperative remifentanil requirement for study accuracy. We determined that, assuming a 30% dropout rate, a minimum of 30 patients per group would be required to achieve meaningful results in this study.\nThe postoperative quality of recovery using the QoR-40 and intraoperative remifentanil requirements were evaluated to judge the effect of the TAP block. In a preliminary study of 16 patients, the means ± standard deviations of the QoR-40 score, used to confirm the postoperative quality of recovery in the Control and TAP groups, were 160.3 ± 25.1 and 188.3 ± 6.3, respectively. As we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 14 patients per group. In a preliminary study of 16 patients, the means ± standard deviations for the intraoperative remifentanil requirement (µg/kg/h) in the Control and TAP groups were 0.153 ± 0.053 and 0.104 ± 0.035, respectively. Again, as we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 23 patients per group. We followed the results of the intraoperative remifentanil requirement for study accuracy. We determined that, assuming a 30% dropout rate, a minimum of 30 patients per group would be required to achieve meaningful results in this study.\n[SUBTITLE] 2.10. Statistical analysis [SUBSECTION] Data were entered into a study database and analyzed using IBM SPSS Statistics (v. 27.0; IBM Corp, Armonk, NY). Statistical analysis was performed using the x2 test for comparative evaluations by sex (among the demographic data), and independent t tests were used to evaluate the distributions of the other demographic data across groups. 
In addition, the statistical significance between the TAP and Control groups with regard to QoR-40 scores, intraoperative remifentanil requirements, and IV PCA administrations (Excel data) were verified using independent t-tests. All values were expressed as means ± standard deviations. All reported P values were 2-sided, and P values <.05 were considered statistically significant.\nData were entered into a study database and analyzed using IBM SPSS Statistics (v. 27.0; IBM Corp, Armonk, NY). Statistical analysis was performed using the x2 test for comparative evaluations by sex (among the demographic data), and independent t tests were used to evaluate the distributions of the other demographic data across groups. In addition, the statistical significance between the TAP and Control groups with regard to QoR-40 scores, intraoperative remifentanil requirements, and IV PCA administrations (Excel data) were verified using independent t-tests. All values were expressed as means ± standard deviations. All reported P values were 2-sided, and P values <.05 were considered statistically significant.", "This study was conducted at the Kyungpook National University Chilgok Hospital (Daegu, South Korea) between January 2016 and February 2017. The study protocol was approved by the Research Ethics Committee of the Kyungpook National University Chilgok Hospital, Daegu, South Korea. This study received institutional approval (KNUCH 2015-12-004) and was conducted in accordance with the principles of the Declaration of Helsinki. All the participants provided their informed consent prior to participation.", "Sixty of the 87 patients who underwent laparoscopic nephrectomy during the study period participated in this study. Only patients, aged between 20 and 80 years, with an American Society of Anesthesiologists physical status class I to III who had undergone laparoscopic nephrectomy under general anesthesia in the Department of Urology at Kyungpook National University Chilgok Hospital were included in the current study.\nPatients were excluded from this study if they refused to participate or if they exceeded American Society of Anesthesiologists physical status class III, were younger than 19 years of age or older than 81 years of age, had difficulty communicating due to an intellectual disability, underwent partial nephrectomy or nephroureterectomy, had previously undergone abdominal or pelvic surgery, were undergoing multiple (combined) surgeries on other parts of the body/other organs, were excessively obese (body mass index >35 kg/m2), had a history of long-term opioid usage, had a hemorrhagic predisposition or a hemorrhagic disorder, or presented with contraindications to regional anesthesia.", "Patients were randomly assigned to either the TAP group (receiving 40 mL of 0.375% ropivacaine; n = 30) or the Control group (receiving only 40 mL of normal saline; n = 30). Randomization was computer-generated (https://www.randomizer.org), and a sealed opaque envelope method was used to hide patient randomization numbers until the start of anesthesia induction. The sealed opaque envelope was opened by the research investigator immediately before performing the TAP block. 
A registered nurse, who did not enter the operating room and was fully familiar with the methods and procedures of this study, was in charge of consultation with patients and data collection and was blinded to the group assignments, which were not disclosed until the final statistical analysis was completed.
The patients included in this study were not administered any prior medication. During the operation, the patient was monitored using pulse oximetry, electrocardiography, noninvasive arterial pressure measurements, capnography, bispectral index (BIS) measurements, and a nasopharyngeal temperature probe. However, some elderly patients and/or those with cardiovascular disease were monitored using invasive radial arterial blood pressure measurements. To induce general anesthesia, total intravenous anesthesia was initiated using target-controlled infusion of propofol and remifentanil (Orchestra; Fresenius Vial, Auvergne Rhone Alpes, France). The initial effect-site concentration of propofol was 6.0 µg/mL; this was gradually increased until BIS values reached 40 to 60. Following this, remifentanil was started at a target effect-site concentration of 3.0 ng/mL. When the remifentanil concentration reached the target value, 0.8 mg/kg of rocuronium was administered to facilitate endotracheal intubation. The lungs were ventilated with a tidal volume of 7 mL/kg, and the respiratory rate was adjusted to maintain the end-tidal partial pressure of carbon dioxide at 30 to 40 mm Hg. Target-controlled infusion of propofol and remifentanil was continued throughout the surgery. Concentrations of propofol and remifentanil were continuously adjusted to maintain a BIS value of 40 to 60 and a mean arterial pressure within ±20% of the reference value. To maintain the patient’s vital signs, the remifentanil concentration was kept below 2 ng/mL and phenylephrine was injected when the patient’s mean arterial blood pressure remained below 80% of the baseline value. Conversely, nicardipine was administered when the remifentanil concentration had been maintained at ≥8 ng/mL or when the patient’s mean arterial blood pressure remained at or above 120% of the baseline value. Atropine and esmolol were administered, respectively, when the patient’s heart rate dropped to less than 46 beats per minute for more than 30 seconds or increased to more than 90 beats per minute for more than 30 seconds. Repeated or continuous infusion was performed when deemed necessary. Rocuronium was injected at 1 µg/kg/min to maintain muscle relaxation during surgery and was stopped before closing the abdomen. During surgery, lactated Ringer’s solution was continuously infused at 8 mL/kg/h, and blood loss was replaced with 3 times its volume of lactated Ringer’s solution throughout the surgery. A heated blanket and warm intravenous and surgical irrigation fluids were applied to maintain normal body temperature. In all patients, the PCA device was connected immediately before skin closure was completed, and the propofol and remifentanil infusions were discontinued. After sufficient oral aspiration, the inhaled oxygen fraction and fresh gas flow rate were increased to 100% and 8 L/min, respectively. The neuromuscular blockade was reversed with glycopyrrolate (0.4 mg) and pyridostigmine (15 mg) and confirmed by train-of-four monitoring. The endotracheal tube was removed when the patient regained spontaneous breathing and consciousness. 
The patient was then transferred to the postoperative recovery room. Lactated Ringer’s solution was injected in the postoperative recovery room at a rate of 2 mL/kg/h.", "In all surgeries, the TAP block was performed twice (after induction of general anesthesia and before awakening from general anesthesia). All surgeries and TAP blocks were performed in the lateral decubitus position with a slight table break at the waist.\nAll study procedures were performed by one of the study authors (with more than 10 years of experience in ultrasound procedures), including injecting 20 mL of 0.375% ropivacaine or normal saline into the TAP space under ultrasound guidance (ProSound Alpha7 Premier; Hitachi Aloka Medical, Tokyo, Japan) using a broadband (4–13 MHz) linear array ultrasound probe. Both the subcostal and lateral approaches were used to sufficiently cover the entire surgical site of the laparoscopic nephrectomy.[10] The subcostal approach targets the TAP compartment of the anterior abdominal wall (between the xyphoid process and the anterosuperior iliac spine), whereas the lateral approach targets the TAP compartment of the lateral abdominal wall (between the mid-axillary and anterior axillary lines). Drug injection was performed at 3 to 5 sites, and an ultrasound scan was performed to confirm that the drug was evenly distributed throughout the skin incision.", "One of the study authors fully explained the QoR-40 to the study patients when visiting the ward the day before the surgery. This study was conducted before the research results of KvQoR-40 were introduced; therefore, a translation of QoR-40 from English into Korean was used. After the research results of KvQoR-40 were introduced, it was confirmed that there was no difference between the translated content and the content or meaning of the KvQoR-40 questionnaires. Namely, it consists of 40 questions in Korean; 5 points are awarded for each question for a maximum total of 200 points. The higher the total score, the higher the quality of recovery. On the third day of surgery, a researcher blinded to the patient’s group assignment completed the QoR-40 questionnaire.", "Patients were treated using an electronic intravenous (IV) PCA device (Accumate 1100, Woo Young Medical, Seoul, South Korea); the drug consisted only of fentanyl and ramosetron, with a total volume of 60 mL. The basal infusion rate was 0.5 mL/h and the bolus dose was set at 0.5 mL. The lockout interval was set at 15 minutes. An IV PCA was connected to the venous fluid line before the end of the skin sutures. On the day before the surgery, patients were instructed to press the PCA bolus button when they felt pain (visual analog scale score of 3 or higher). After pressing the bolus button, if pain (visual analog scale score of 3 or higher) persisted for more than 15 minutes, the same procedure was repeated. If the pain still persisted, a rescue analgesic was used according to the existing manual set forth by the Department of Urology at our hospital. After the IV PCA was completely used, it was collected and the total usage time of the PCA was checked. 
In addition, the information stored in the PCA machine was extracted into an Excel datasheet (Excel 2018; Microsoft, Redmond, WA), and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was analyzed.", "The main purpose of this study was to evaluate the effect of TAP block, which was administered twice (after induction of general anesthesia and before awakening from general anesthesia), on the postoperative quality of recovery using the QoR-40 as well as patients’ intraoperative remifentanil requirements (µg/kg/h) in comparative evaluations between the TAP and Control groups. In addition, to evaluate the postoperative analgesic effect of the TAP block, the total duration of PCA used after surgery and the frequency of bolus doses of IV PCA injected into the patient at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery in both groups were analyzed.", "The postoperative quality of recovery using the QoR-40 and intraoperative remifentanil requirements were evaluated to judge the effect of the TAP block. In a preliminary study of 16 patients, the means ± standard deviations of the QoR-40 score, used to confirm the postoperative quality of recovery in the Control and TAP groups, were 160.3 ± 25.1 and 188.3 ± 6.3, respectively. As we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 14 patients per group. In a preliminary study of 16 patients, the means ± standard deviations for the intraoperative remifentanil requirement (µg/kg/h) in the Control and TAP groups were 0.153 ± 0.053 and 0.104 ± 0.035, respectively. Again, as we aimed to maintain 95% power and a 5% significance level, we calculated a target sample size of 23 patients per group. We followed the results of the intraoperative remifentanil requirement for study accuracy. We determined that, assuming a 30% dropout rate, a minimum of 30 patients per group would be required to achieve meaningful results in this study.", "Data were entered into a study database and analyzed using IBM SPSS Statistics (v. 27.0; IBM Corp, Armonk, NY). Statistical analysis was performed using the x2 test for comparative evaluations by sex (among the demographic data), and independent t tests were used to evaluate the distributions of the other demographic data across groups. In addition, the statistical significance between the TAP and Control groups with regard to QoR-40 scores, intraoperative remifentanil requirements, and IV PCA administrations (Excel data) were verified using independent t-tests. All values were expressed as means ± standard deviations. All reported P values were 2-sided, and P values <.05 were considered statistically significant.", "[SUBTITLE] 3.1. Patient characteristics [SUBSECTION] Of the 87 patients who underwent laparoscopic nephrectomy during the study period, 5 refused to participate in this study, 14 underwent laparoscopic partial nephrectomy, 4 underwent laparoscopic nephroureterectomy, 2 underwent combined surgeries, and 2 had a body mass index of >35 kg/m2. Thus, a total of 60 patients were randomly assigned to 2 groups. Two patients from the Control group and 1 patient from the TAP group were excluded from the study due to side effects arising from IV PCA, including nausea, vomiting, and urticaria. Two patients from the Control group and 1 patient from the TAP group were excluded from the study because of a switch to open surgery. 
A total of 54 patients (Control group, n = 26; TAP group, n = 28) were included in the final study population (Fig. 1). Table 1 presents the comparative demographic and perioperative characteristics for the 2 groups. There were no significant differences in these medical and demographic characteristics between the 2 groups.\nPatient characteristics and intraoperative data.\nBMI = body mass index (kg/m2).\nResults are presented as means ± standard deviation, or numbers of patients.\nP < .05 indicates a significant difference between groups.\nPatient flowchart showing the patients included in the enrollment, group allocation, follow-up, and analysis phases of the study.\nOf the 87 patients who underwent laparoscopic nephrectomy during the study period, 5 refused to participate in this study, 14 underwent laparoscopic partial nephrectomy, 4 underwent laparoscopic nephroureterectomy, 2 underwent combined surgeries, and 2 had a body mass index of >35 kg/m2. Thus, a total of 60 patients were randomly assigned to 2 groups. Two patients from the Control group and 1 patient from the TAP group were excluded from the study due to side effects arising from IV PCA, including nausea, vomiting, and urticaria. Two patients from the Control group and 1 patient from the TAP group were excluded from the study because of a switch to open surgery. A total of 54 patients (Control group, n = 26; TAP group, n = 28) were included in the final study population (Fig. 1). Table 1 presents the comparative demographic and perioperative characteristics for the 2 groups. There were no significant differences in these medical and demographic characteristics between the 2 groups.\nPatient characteristics and intraoperative data.\nBMI = body mass index (kg/m2).\nResults are presented as means ± standard deviation, or numbers of patients.\nP < .05 indicates a significant difference between groups.\nPatient flowchart showing the patients included in the enrollment, group allocation, follow-up, and analysis phases of the study.\n[SUBTITLE] 3.2. The QoR-40 questionnaire [SUBSECTION] The QoR-40 score, measured when visiting the ward on the third day after surgery, was significantly higher in the TAP group (171.9 ± 23.1) than in the Control group (151.9 ± 28.1) (P = .006) (Fig. 2).\nQuality of recovery 40 questionnaire findings. *P < .05 indicates a significant difference between groups.\nThe QoR-40 score, measured when visiting the ward on the third day after surgery, was significantly higher in the TAP group (171.9 ± 23.1) than in the Control group (151.9 ± 28.1) (P = .006) (Fig. 2).\nQuality of recovery 40 questionnaire findings. *P < .05 indicates a significant difference between groups.\n[SUBTITLE] 3.3. Intraoperative remifentanil requirement [SUBSECTION] The intraoperative remifentanil requirement (µg/kg/h) did not show a significant difference between groups (0.100 ± .050 in the TAP group vs 0.111 ± .060 in the Control group) (P = .439) (Fig. 3).\nIntraoperative remifentanil requirement.\nThe intraoperative remifentanil requirement (µg/kg/h) did not show a significant difference between groups (0.100 ± .050 in the TAP group vs 0.111 ± .060 in the Control group) (P = .439) (Fig. 3).\nIntraoperative remifentanil requirement.\n[SUBTITLE] 3.4. 
PCA data [SUBSECTION] The frequency of use of the bolus dose accumulated at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was 1.15 ± 0.97, 2.85 ± 1.08, 3.81 ± 1.44, 6.38 ± 3.60, 10.12 ± 5.96, 14.96 ± 10.53, 23.30 ± 17.44, 24.40 ± 17.76, respectively, in the Control group and 0.64 ± 0.83, 1.86 ± 1.18, 2.68 ± 1.44, 4.32 ± 3.43, 6.21 ± 5.50, 9.43 ± 9.22, 10.70 ± 9.64, 12.44 ± 11.32, respectively, in the TAP group. All results for each time period showed a significant difference (P = .041, P = .002, P = .006, P = .036, P = .015, P = .047, P = .008, P = .034) (Fig. 4). The total usage time for PCA showed a significant difference, as the usage time in the TAP group (70.26 ± 34.0 hours) was much greater than that in the Control group (50.05 ± 32.6 hours; P = .030) (Fig. 5).\nFrequency of PCA bolus injection. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.\nTotal time for use of PCA. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.\nThe frequency of use of the bolus dose accumulated at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was 1.15 ± 0.97, 2.85 ± 1.08, 3.81 ± 1.44, 6.38 ± 3.60, 10.12 ± 5.96, 14.96 ± 10.53, 23.30 ± 17.44, 24.40 ± 17.76, respectively, in the Control group and 0.64 ± 0.83, 1.86 ± 1.18, 2.68 ± 1.44, 4.32 ± 3.43, 6.21 ± 5.50, 9.43 ± 9.22, 10.70 ± 9.64, 12.44 ± 11.32, respectively, in the TAP group. All results for each time period showed a significant difference (P = .041, P = .002, P = .006, P = .036, P = .015, P = .047, P = .008, P = .034) (Fig. 4). The total usage time for PCA showed a significant difference, as the usage time in the TAP group (70.26 ± 34.0 hours) was much greater than that in the Control group (50.05 ± 32.6 hours; P = .030) (Fig. 5).\nFrequency of PCA bolus injection. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.\nTotal time for use of PCA. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.", "Of the 87 patients who underwent laparoscopic nephrectomy during the study period, 5 refused to participate in this study, 14 underwent laparoscopic partial nephrectomy, 4 underwent laparoscopic nephroureterectomy, 2 underwent combined surgeries, and 2 had a body mass index of >35 kg/m2. Thus, a total of 60 patients were randomly assigned to 2 groups. Two patients from the Control group and 1 patient from the TAP group were excluded from the study due to side effects arising from IV PCA, including nausea, vomiting, and urticaria. Two patients from the Control group and 1 patient from the TAP group were excluded from the study because of a switch to open surgery. A total of 54 patients (Control group, n = 26; TAP group, n = 28) were included in the final study population (Fig. 1). Table 1 presents the comparative demographic and perioperative characteristics for the 2 groups. 
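As a cross-check on the comparisons reported above, the group contrasts can be recomputed from the published summary statistics alone. The sketch below is illustrative only: it assumes the final group sizes (Control n = 26, TAP n = 28) and Welch's unequal-variance form of the independent t test, which the source does not specify, so the resulting P values may differ slightly from those reported.

```python
# Illustrative re-check of the reported group comparisons using only the
# published means, standard deviations, and group sizes (Control n = 26,
# TAP n = 28). Welch's t test is assumed; the original analysis may have
# used the pooled-variance form, so P values can differ slightly.
from scipy import stats

N_CONTROL, N_TAP = 26, 28

comparisons = {
    # label: (control mean, control SD, TAP mean, TAP SD)
    "QoR-40 score (POD 3)":           (151.9, 28.1, 171.9, 23.1),
    "Remifentanil (ug/kg/h)":         (0.111, 0.060, 0.100, 0.050),
    "PCA boluses, 0-24 h cumulative": (14.96, 10.53, 9.43, 9.22),
    "Total PCA usage time (h)":       (50.05, 32.6, 70.26, 34.0),
}

for label, (m_c, sd_c, m_t, sd_t) in comparisons.items():
    t, p = stats.ttest_ind_from_stats(m_c, sd_c, N_CONTROL,
                                      m_t, sd_t, N_TAP,
                                      equal_var=False)  # Welch's t test
    print(f"{label:32s} t = {t:6.2f}, two-sided P = {p:.3f}")
```

With these inputs the QoR-40 comparison gives P ≈ .006 and the total PCA usage time P ≈ .03, in line with the values quoted in Sections 3.2 and 3.4.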
There were no significant differences in these medical and demographic characteristics between the 2 groups.\nPatient characteristics and intraoperative data.\nBMI = body mass index (kg/m2).\nResults are presented as means ± standard deviation, or numbers of patients.\nP < .05 indicates a significant difference between groups.\nPatient flowchart showing the patients included in the enrollment, group allocation, follow-up, and analysis phases of the study.", "The QoR-40 score, measured when visiting the ward on the third day after surgery, was significantly higher in the TAP group (171.9 ± 23.1) than in the Control group (151.9 ± 28.1) (P = .006) (Fig. 2).\nQuality of recovery 40 questionnaire findings. *P < .05 indicates a significant difference between groups.", "The intraoperative remifentanil requirement (µg/kg/h) did not show a significant difference between groups (0.100 ± .050 in the TAP group vs 0.111 ± .060 in the Control group) (P = .439) (Fig. 3).\nIntraoperative remifentanil requirement.", "The frequency of use of the bolus dose accumulated at 1, 2, 3, 6, 12, 24, 48, and 72 hours after surgery was 1.15 ± 0.97, 2.85 ± 1.08, 3.81 ± 1.44, 6.38 ± 3.60, 10.12 ± 5.96, 14.96 ± 10.53, 23.30 ± 17.44, 24.40 ± 17.76, respectively, in the Control group and 0.64 ± 0.83, 1.86 ± 1.18, 2.68 ± 1.44, 4.32 ± 3.43, 6.21 ± 5.50, 9.43 ± 9.22, 10.70 ± 9.64, 12.44 ± 11.32, respectively, in the TAP group. All results for each time period showed a significant difference (P = .041, P = .002, P = .006, P = .036, P = .015, P = .047, P = .008, P = .034) (Fig. 4). The total usage time for PCA showed a significant difference, as the usage time in the TAP group (70.26 ± 34.0 hours) was much greater than that in the Control group (50.05 ± 32.6 hours; P = .030) (Fig. 5).\nFrequency of PCA bolus injection. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.\nTotal time for use of PCA. PCA = patient-controlled analgesia. *P < .05 indicates a significant difference between groups.", "The results of this study demonstrated that the implementation of a unilateral ultrasound-guided TAP block during laparoscopic nephrectomy significantly affected patients’ quality of recovery after surgery. The remarkably improved quality of recovery achieved using the TAP block procedure was also confirmed by the increase in total PCA usage time and significantly fewer injections of the PCA bolus dose in the TAP group than in the Control group. However, the TAP block had no effect on the amount of remifentanil required for adequate anesthesia during surgery.\nWith the recent development of ultrasound technology and increase in the use of ultrasound devices, new regional anesthetic and analgesic techniques are being actively developed and implemented.[15] This trend has made it possible to safely inject an appropriate amount of local anesthetics around the desired nerve, avoiding major structures such as blood vessels, instead of the conventional method of injecting a large amount of local anesthetic around a large peripheral nerve.[11,12] Recently, fascial plane block has received considerable attention.[15] However, adequate studies that can draw clear conclusions about its effects are still lacking. 
The TAP block is one of the most commonly used methods and is one of the first used methods among fascial plane blocks.\nIn some studies, ultrasound-guided TAP block in laparoscopic nephrectomy reduced acute postoperative pain scores and early opioid consumption.[16,17] A recent study showed that ultrasound-guided TAP block performed in robotic partial nephrectomy reduced morphine consumption and somatic pain for 24 hours postoperatively and reduced the incidence of chronic pain.[18] However, the results of a recent study contradict those of previous studies.[19] Moreover, a recent meta-analysis of the analgesic effect of ultrasound-guided TAP block in laparoscopic abdominal surgery showed that TAP block had marginal postoperative analgesic effect.[20] According to the most recently published systematic review and meta-analysis of TAP blocks in urological procedures, although TAP block appears to provide improved analgesia in urological surgery, there is great heterogeneity between the findings of the included studies, and due to significant risk of bias, careful review is recommended.[21] Given these limited results, it is still inappropriate to conclude that TAP blocks contribute to early postoperative analgesia after laparoscopic nephrectomy.\nConflicting results have been reported on the effects of TAP block in abdominal surgeries other than urological surgery (e.g., nephrectomy). Studies showed that TAP blocks in robot-assisted distal pancreatectomy and total abdominal hysterectomy reduced postoperative pain and opioid consumption.[22,23] However, a study reported no effect of TAP block in total abdominal hysterectomy.[24] Although it is not possible to explain exactly why the results of TAP blocks in different types of surgery are conflicting, it is clear that TAP blocks show conflicting results across different types of surgery. However, recent studies have shown promising results possibly due to the increase in the skill and experience of the TAP block operator along with the development of ultrasound equipment.\nAlthough the common goal is to block afferent nociceptive propagation, fascial plane blocks such as TAP blocks do not target specific nerves, unlike conventional methods that target specific nerves. 
Fascial plane block is a method of injecting a local anesthetic into a compartment (plane) between 2 anatomically separated fascial layers.[25] Fascial plane blocks have recently attracted attention because they are relatively easy and safe and can provide meaningful analgesia in various clinical settings.[26] However, there is controversy about how their clinical effects are achieved, because fascial plane blocks exert their effects through several different pathways, as opposed to conventional methods.[25] In fascial plane block, dense nerve block is rarely seen, the results obtained in individual patients may differ, and the degree of skin sensory block may not sufficiently reflect the analgesic effect.
The causes of pain associated with laparoscopic nephrectomy vary, including port pain, lower abdominal incision (to retrieve the kidney) pain, pelvic organ pain, diaphragm stimulation (discomfort at the tip of the shoulder due to residual pneumoperitoneum), and urethral catheter discomfort.[27] There is no difference in acute postoperative pain scores after laparoscopic surgery and laparotomy, and if acute postoperative pain is not reduced appropriately, the likelihood of chronic postsurgical pain is the same.[28] Chronic postsurgical pain is most affected by postoperative pain severity and psychological vulnerability.[27] The more severe the postoperative dynamic pain (movement-induced pain), the higher the risk of chronic postsurgical pain. Although opioids are potent analgesics, they are mostly unsuitable for treating dynamic pain, whereas regional analgesia using local anesthetics, nonsteroidal anti-inflammatory drugs, α2-agonists, and N-methyl-D-aspartate receptor antagonists may be effective for controlling dynamic types of pain and preventing central sensitization.[27] Preemptive TAP block in laparoscopic nephrectomy can potentially reduce the intraoperative metabolic response and avoid central sensitization, thereby reducing the incidence of chronic postsurgical pain.
With the current risk of the opioid epidemic being highlighted, the enhanced recovery after surgery pathway has emerged as one of the best strategies to improve the value and quality of surgical treatment.[6] Multimodal analgesia is one of the most essential components of the enhanced recovery after surgery pathway. Multimodal analgesia can have additive, if not synergistic, effects using various regional analgesic techniques or non-opioid analgesics and can reduce opioid usage and opioid-related side effects. Regional analgesic techniques that can be used for laparoscopic nephrectomy are generally divided into neuraxial (e.g., epidural analgesia and intrathecal morphine) and peripheral (e.g., TAP, quadratus lumborum block, retrolaminar block, erector spinae plane block, and wound infiltration) blocks or catheters. 
Epidural analgesia remains the standard method of application for major abdominal surgeries, but has the disadvantage of higher risks associated with the procedure.[29] Fascial plane blocks such as TAP blocks are very important factors in multimodal analgesia and have the advantage of being safer than epidural analgesia in patients with obesity, but they are not effective in controlling visceral pain and do not match the duration of analgesia.[11] Recently, in addition to TAP block, quadratus lumborum block, retrolaminar block, and erector spinae plane block have been found to be useful in patients undergoing laparoscopic nephrectomy.[30–33]\nThere might be several possible reasons underlying the results that the intraoperative remifentanil requirement did not differ, but the postoperative pain control and opioid consumption differed between the 2 groups. A commercially available ampule of 0.75% ropivacaine was used for convenience in conducting this study and to avoid local anesthetic systemic toxicity. For both procedures (i.e., after anesthesia induction and before awakening from anesthesia), 20 mL of 0.75% ropivacaine was mixed with 20 mL of normal saline to constitute 40 mL of 0.375% ropivacaine. Only a limited effect was considered to be shown in reducing the intraoperative remifentanil requirement because a proper regional anesthetic was not administered (i.e., because half the concentration typically used when performing regional anesthesia was used in the current study). However, TAP block, which was administered at the same subanesthetic concentration before awakening from anesthesia, had a significant effect on postoperative pain control; therefore, TAP block is thought to be effective for pain control up to 72 hours after surgery. This can be confirmed by the results reported by Cederholm et al,[34] showing that a lower concentration of ropivacaine decreased the number of blood vessels in the skin, whereas a higher concentration of ropivacaine induced an increase in blood flow. It is thought that a lower concentration of ropivacaine reduces the absorption of local anesthetic into the systemic circulation by decreasing the number of blood vessels, allowing the local anesthetic to stay around the target nerve for a longer period of time, and exhibiting analgesic effects for a longer period of time. Based on the results of the present study, we conclude that additional research according to the type and concentration of local anesthetics used in the TAP block is needed and that this gap in the literature should be addressed in future research endeavors. Considering the characteristics of the procedure (which is performed under ultrasound guidance), it is thought that the operator’s skill level or experience as well as the quality of the ultrasound device can greatly influence the results of the procedure. However, considering that all study procedures were performed by one study authors, who had more than 10 years of experience using ultrasound, the possibility of adverse effects occurring due to a lack of operating skill level or experience is low. 
At the same time, it should be considered that the fascial plane in which the TAP block is applied is poorly vascularized and that the TAP block may have a prolonged analgesic effect due to slow drug clearance.[35,36]\nTherefore, we conclude that ultrasound-guided TAP block during laparoscopic nephrectomy improves the quality of postoperative recovery and is effective for postoperative pain control, but does not affect the amount of remifentanil required for adequate anesthesia during surgery.", "Conceptualization: Jun-Mo Park.\nData curation: Joonhee Lee.\nFormal analysis: Joonhee Lee.\nInvestigation: Jun-Mo Park.\nSupervision: Jun-Mo Park.\nValidation: Jun-Mo Park.\nVisualization: Joonhee Lee.\nWriting – original draft: Jun-Mo Park.\nWriting – review & editing: Jun-Mo Park." ]
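As a rough numerical check of the sample-size estimates quoted in Section 2.9 (14 and 23 patients per group), the per-group sizes can be approximated from the preliminary-study means and standard deviations with the normal-approximation formula for a two-sample t test, n = 2 × [(z(1−α/2) + z(1−β)) × σ / Δ]². The sketch below is an approximation under that assumption: it returns about 12 and 22 per group, whereas exact t-based power calculations, which were presumably used originally, land near the reported 14 and 23.

```python
# Rough reproduction of the per-group sample sizes in Section 2.9 using the
# normal-approximation formula n = 2 * ((z_{1-a/2} + z_{1-b}) * sd / delta)^2.
# Exact t-based power software gives slightly larger numbers (roughly +1 to +2).
import math
from scipy.stats import norm

def n_per_group(mean1, sd1, mean2, sd2, alpha=0.05, power=0.95):
    delta = abs(mean1 - mean2)
    sd_pooled = math.sqrt((sd1**2 + sd2**2) / 2.0)   # assumes equal group sizes
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd_pooled / delta) ** 2)

# QoR-40 pilot data: Control 160.3 +/- 25.1, TAP 188.3 +/- 6.3
print("QoR-40 endpoint:      ", n_per_group(160.3, 25.1, 188.3, 6.3), "per group")

# Remifentanil pilot data: Control 0.153 +/- 0.053, TAP 0.104 +/- 0.035 ug/kg/h
print("Remifentanil endpoint:", n_per_group(0.153, 0.053, 0.104, 0.035), "per group")
```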
[ "intro", "methods", null, "subjects", null, null, null, null, null, null, null, null, "results", "subjects", null, null, null, "discussion", null ]
[]
Safflor yellow treating angina pectoris: A pharmacoeconomic evaluation and network meta-analysis.
36253977
Coronary heart disease (CHD) is a cardiovascular disease caused by myocardial ischemia. In China, safflor yellow and artemisinin-based combination therapies have been extensively used to treat angina pectoris.
BACKGROUND
Efficacy estimates were obtained from a network meta-analysis conducted in accordance with the PRISMA 2020 checklist. The cost-effectiveness analysis was performed from the patient perspective. Two-way and probabilistic sensitivity analyses were conducted to assess the robustness of the results.
METHODS
Conventional treatment combined with safflower is a better choice against angina pectoris. Sensitivity analysis showed that the model was sensitive to the treatment efficacy rather than the drug cost.
RESULTS
Conventional treatment combined with safflower injection is suggested for treating angina pectoris. Low molecular weight heparin or compound Danshen dripping pills can be added to conventional treatment combined with safflower injection to further increase the recovery rate of angina pectoris.
CONCLUSION
[ "Angina Pectoris", "Artemisinins", "Drugs, Chinese Herbal", "Economics, Pharmaceutical", "Heparin, Low-Molecular-Weight", "Humans", "Network Meta-Analysis" ]
9575819
1. Introduction
Coronary heart disease (CHD) is the most common heart disease and represents a continuum of diseases. CHD begins with coronary atherosclerosis in the early stages and progresses to established coronary artery disease (CAD), caused by plaque buildup in the walls of the arteries that supply blood to the heart and other parts of the body. Of all the diseases in China, CAD is currently the leading cause of death. As of 2013, the CAD prevalence among people aged 15 and above was 1.23%, 0.81%, and 1.02% for urban residents, rural residents, and the two combined, respectively, while the prevalence reached 2.78% in the older population over 60.[1] A recent study on the global burden of disease showed that China accounted for about 38.2% of CHD (ischemic heart disease) deaths worldwide from 1990 to 2017.[2] Meanwhile, the share of CHD among all cardiovascular diseases rose from 29% to 37%.[3] Treating angina pectoris is critical in CHD because it helps prevent acute myocardial infarction. In China, the annual incidence of angina pectoris is higher in men than in women aged >40 years.[4] Similarly, elsewhere in the world, the annual incidence of angina pectoris in 50-year-old men and women is 0.2% and 0.08%, respectively. Patients with CHD and angina pectoris frequently manifest anxiety and fear of untimely death. In addition,[5] patients often perceive themselves as a physical and financial burden to their families and others. Beyond their physical pain, such a psychological state can result in negative emotions such as anxiety, guilt, and remorse,[6] which make acute myocardial infarction or sudden death more likely.[7] Additionally, irrational drug use has become increasingly severe as the number of patients with CHD and angina pectoris grows, saddling the healthcare system with a greater social and economic burden. More specifically, CHD accounted for 9.4% of the disability-adjusted life-year loss of the top 10 diseases, ranking first in developed and developing countries.[8] A survey reported 753,142 PCI cases in 2017, a 13% increase over 2016, and the costs of hospitalization and medical devices are increasing annually. Commonly used drugs for treating CHD and angina pectoris include nitrates, β-blockers, calcium channel blockers, and antiplatelet agents. However, these drugs often produce side effects. Here, we selected a natural product, safflor yellow, a pigment extracted from the petals of safflower,[9–11] as a treatment drug to assess its efficacy, safety, and cost-effectiveness. Safflor yellow combats cardiovascular disease through various pharmacological effects, such as dilating blood vessels, improving myocardial blood supply, inhibiting platelet aggregation and thrombosis, and antioxidant activity. This study aimed to identify an optimal treatment plan for safflor yellow injection to guide rational drug use, targeting better allocation of resources and cost savings. To compare the efficacy and safety of safflor yellow injection with the existing angina-pectoris treatments, we conducted a stratified, evidence-based analysis using a network meta-analysis followed by a pharmacoeconomic evaluation.
4.2. Methods
[SUBTITLE] 4.2.1. Decision tree model. [SUBSECTION] This study used a decision tree model to analyze the cost and effect of the 13 treatment options for angina included in the network meta-analysis. Efficacy and safety indicators were obtained from the meta-analysis to comprehensively evaluate the economics of the 13 treatment regimens. The structure of the decision-tree model is shown in Figure 5. The model primarily assessed short-term economics, and the time horizon for this analysis was 1 treatment course (14 days).
Decision tree model. The decision tree model was used to analyze the cost-effectiveness of 13 treatment options for angina included in the network meta-analysis. The specific treatment protocols and their costs are shown in this figure. The time horizon for this analysis is 1 treatment course (14 days). AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
[SUBTITLE] 4.2.2. Statistical analysis. [SUBSECTION] In the pharmacoeconomic evaluation, the cost-effectiveness analysis calculated the incremental cost-effectiveness ratio. Tornado plots were drawn from one-way sensitivity analyses, probabilistic sensitivity analysis was carried out by Monte Carlo simulation, and cost-effectiveness acceptability curves were drawn.[62] TreeAge 2011 was used to construct the decision tree model for the cost-effectiveness and sensitivity analyses.
[SUBTITLE] 4.2.3. Effectiveness. [SUBSECTION] The studies included in the economic evaluation were similar to those in the network meta-analysis. We obtained the effective rates of the 13 treatment regimens by weighting the treatment efficiency of angina patients according to the proportion (weight) of each study shown in the forest plots of the meta-analysis. 
The results showed that the efficiency ranking and the SUCRA score ranking in the NMA were similar, indicating that the efficiency obtained from the weighted calculation was reasonable and could be included in the economic-evaluation calculation (Table 4).
Clinical efficacy and ranking of 13 angina-pectoris treatments. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
[SUBTITLE] 4.2.4. Cost. [SUBSECTION] Cost estimation was based on the patient perspective. We assumed that the direct non-medical costs and indirect costs of the 13 interventions were the same, that differences in total costs were driven by direct medical costs, and that the cost of the conventional treatment was identical for each treatment regimen. In addition, the active components of safflower yellow injection and safflower injection in this study are the same; however, because they are produced by different manufacturers, they were treated as separate interventions in the network meta-analysis and in the cost calculation. We adopted a discount rate of 5% for the cost data and discounted uniformly to early 2020.
Drug cost. We used the most common drug retail prices, and the lowest to highest manufacturers' retail prices for the sensitivity analysis. When calculating the total drug cost of the 13 treatment schemes, the weighted drug amount was calculated by multiplying the dose of each drug or injection in the included literature by the weight obtained from the meta-analysis, so that the drug cost of each treatment scheme was unified. The costs of the 10 drugs are shown in Table 5. The weighted dosages of the 13 treatment regimens are listed in Table 6.
Cost price and maximum/minimum value of 10 drugs. (A) All data come from the 315 medicine price inquiry net https://www.315jiage.cn. (B) Maximum, minimum and cost of the exact drug specifications.
Weighted dose and cost of 13 treatment options. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection. 
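The "weighted" effective rates (Table 4) and weighted doses (Table 6) referred to above are sample-size-weighted averages over the trials pooled for each regimen. The individual trial values are not reproduced here, so the sketch below uses hypothetical placeholder trials purely to illustrate the weighting step.

```python
# Illustration of the sample-size weighting used for the pooled effective rate
# and the pooled (weighted) daily dose of a regimen. The trial values below are
# hypothetical placeholders, not the actual studies included in the meta-analysis.
trials = [
    # (patients in arm, effective rate, daily dose of the injection in mL)
    (60, 0.93, 20.0),
    (45, 0.90, 15.0),
    (80, 0.95, 20.0),
]

total_n = sum(n for n, _, _ in trials)
weighted_rate = sum(n * rate for n, rate, _ in trials) / total_n
weighted_dose = sum(n * dose for n, _, dose in trials) / total_n

print(f"Pooled effective rate: {weighted_rate:.3f}")
print(f"Weighted daily dose:   {weighted_dose:.1f} mL")
# The weighted dose is then multiplied by the unit price and by the 14-day
# course length to obtain the drug cost entering the decision tree.
```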
The costs of 1 course of treatment with Aspirin enteric-coated tablets, Propranolol tablets, Nitroglycerin tablets, and Nifedipine sustained-release tablets for conventional treatment were ¥13.09, ¥13.72, ¥1.68, and ¥17.08, respectively, so the total cost of 1 course of conventional treatment was ¥45.57.[63] The discounted cost of conventional treatment was ¥50.24.
Other costs. The cost of injection mainly comprises the cost of materials, such as the disposable infusion tubes and syringes used for intravenous administration, and the fee for intravenous injection. The latest medical service fees published by the Beijing Medical Insurance Bureau include ¥5.5 for intravenous injection and ¥7 for intravenous infusion.[64] The total intravenous infusion material fee was ¥8.00, and the total intravenous injection material fee was ¥2.40.[65] After discounting, the average value is ¥14.3/day. In addition, the cost of examination items during the entire course of treatment for patients with angina includes the costs of routine blood, urine, and stool tests, liver and kidney function tests, and an electrocardiogram before treatment. The costs of laboratory tests and electrocardiograms were obtained from Jianwei Xuan et al.[66] The average price of medical services was obtained from the website of the local Health Commission. After discounting, the examination cost was ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and found that the average hospitalization cost for angina patients was ¥26745.12. Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and found that the average hospitalization cost of patients with unstable angina was ¥26482.41. The average cost of hospitalization calculated after discounting is ¥34811.63.
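As a worked example of the 5% discounting described above: compounding the ¥45.57 cost of one course of conventional treatment at 5% per year over 2 years reproduces the quoted ¥50.24. The 2-year horizon is inferred from the published figures rather than stated in the source.

```python
# Worked example of adjusting a historical cost to early 2020 with the 5% rate
# used in this evaluation: cost_2020 = cost * (1 + 0.05) ** years.
# The 2-year horizon below is inferred from the published numbers, not stated.
RATE = 0.05

def to_2020(cost, years):
    return cost * (1 + RATE) ** years

conventional_course = 13.09 + 13.72 + 1.68 + 17.08   # = 45.57 yuan per course
print(f"Undiscounted course cost: {conventional_course:.2f} yuan")
print(f"Adjusted to 2020 (2 yr):  {to_2020(conventional_course, 2):.2f} yuan")  # ~50.24
```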
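The base-case comparison reported in the next section rests on incremental cost-effectiveness ratios (ICER = incremental cost / incremental effect) calculated after dominated strategies are removed. The sketch below shows that calculation in generic form; the strategy costs and effective rates are illustrative placeholders, not the values of Tables 4, 6, and 7.

```python
# Generic ICER calculation with simple dominance screening. Strategy names,
# costs, and effective rates here are illustrative placeholders only.
# (Extended dominance is not handled in this minimal sketch.)
strategies = [
    # (name, total cost in yuan, effective rate)
    ("CT",          470.0, 0.83),
    ("CT + SI",     700.0, 0.95),
    ("CT + SI + H", 900.0, 0.96),
]

strategies.sort(key=lambda s: s[1])           # order by ascending cost
frontier = []
for name, cost, eff in strategies:
    # strong dominance: more costly and no more effective than a kept strategy
    if frontier and eff <= frontier[-1][2]:
        print(f"{name}: dominated (costlier, not more effective)")
        continue
    frontier.append((name, cost, eff))

print(f"{frontier[0][0]}: reference strategy")
for prev, cur in zip(frontier, frontier[1:]):
    icer = (cur[1] - prev[1]) / (cur[2] - prev[2])
    print(f"{cur[0]} vs {prev[0]}: ICER = {icer:,.0f} yuan per additional "
          f"patient treated effectively")
```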
4.3. Results
[SUBTITLE] 4.3.1. Base-case results. [SUBSECTION] We selected studies with effective rates of more than 90% (including conventional treatment) for economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.
Base-case analysis results. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, ICER = incremental cost-effectiveness ratio, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
[SUBTITLE] 4.3.2. Two-way sensitivity analysis. [SUBSECTION] It was assumed that the efficacy rate of the 9 treatment regimens fluctuated by 5%. The cost was analyzed sensitively according to the highest and lowest manufacturer retail price, assuming that WTP was GDP per capita in 2018. As shown in Figure 6, the parameter with the most significant impact on the results was the treatment efficiency of the CT + SI group.
Sensitivity analysis on cost and effective rate. eCT_SI = effective rate of combined therapy of conventional treatment and Safflor injection, eCT_SI_H = effective rate of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SI_DS = effective rate of combined therapy of conventional treatment and Safflor injection and compound Danshen drip pill, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection, eCT_SYI_CD = effective rate of combined therapy of conventional treatment and safflor yellow injection and carvedilol, cCT_SI = cost of combined therapy of conventional treatment and Safflor injection, eCT_SYI = effective rate of combined therapy of conventional treatment and safflor yellow injection, eCT_SYI_AC = effective rate of combined therapy of conventional treatment and safflor yellow injection and atorvastatin calcium tablets, cCT_SI_H = cost of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SYI_SMI = effective rate of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT = effective rate of conventional treatment, cCT_SYI_SMI = cost of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection.
[SUBTITLE] 4.3.3. Probabilistic sensitivity analysis (PSA). [SUBSECTION] The results of the PSA based on 1000 Monte Carlo simulations are presented in the cost-effectiveness scatter plot below (Fig. 7). The efficiency and the cost were presumed to be a beta distribution and a triangular distribution, respectively. The patient’s WTP changed from 0 to ¥198018. The acceptable cost effect curve is shown in Figure 7. The probability of cost-effectiveness of CT + SI gradually increased with the WTP threshold and exceeded CT when the WTP reached ¥19801.8. 
When the WTP is higher than ¥39603.6, the CT + SI probability representing a more economical scheme was reduced; however, it was still greater than 50%. The results of the PSA were consistent with the base-case results (Fig. 8).
Cost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
Cost-effectiveness of screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
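The probabilistic sensitivity analysis described above draws each regimen's effective rate from a beta distribution and its cost from a triangular distribution, then, across willingness-to-pay (WTP) values from 0 to ¥198,018, records how often each strategy yields the highest net monetary benefit. The sketch below illustrates the procedure for a two-strategy comparison; the distribution parameters are assumptions for illustration, as the source does not report them.

```python
# Minimal probabilistic sensitivity analysis sketch (1000 Monte Carlo draws):
# effectiveness ~ Beta, cost ~ Triangular, acceptability judged by net monetary
# benefit NMB = WTP * effect - cost. All distribution parameters are assumed
# for illustration; the source does not report them.
import numpy as np

rng = np.random.default_rng(2020)
N_SIM = 1000

def draw(eff_alpha, eff_beta, cost_low, cost_mode, cost_high):
    eff = rng.beta(eff_alpha, eff_beta, N_SIM)
    cost = rng.triangular(cost_low, cost_mode, cost_high, N_SIM)
    return eff, cost

# Illustrative parameters for conventional treatment (CT) and CT + safflor injection
eff_ct,  cost_ct  = draw(83, 17, 420.0, 470.0, 520.0)   # ~83% effective
eff_csi, cost_csi = draw(95, 5,  600.0, 700.0, 800.0)   # ~95% effective

for wtp in (0, 19_801.8, 39_603.6, 198_018):
    nmb_ct  = wtp * eff_ct  - cost_ct
    nmb_csi = wtp * eff_csi - cost_csi
    p_csi = np.mean(nmb_csi > nmb_ct)
    print(f"WTP = {wtp:>9,.1f} yuan: P(CT + SI cost-effective) = {p_csi:.2f}")
# Plotting p_csi over a grid of WTP values gives the cost-effectiveness
# acceptability curve (Fig. 7).
```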
null
null
[ "2. Methods", "2.1. Search strategy", "2.2. Inclusion criteria", "2.2.1. Study type.", "2.2.2. Participants.", "2.2.3. Interventions.", "2.2.4. Outcomes.", "2.3. Exclusion criteria", "2.4. Literature screening and data extraction", "2.5. Quality assessment", "2.6. Statistical methods", "3. Result", "3.1. Search results", "3.2. Study characteristics", "3.3. Risk of bias assessment", "3.4. Network meta-analysis results", "3.4.1. Evidence network diagram.", "3.4.2. Test for heterogeneity.", "3.4.3. Publication bias.", "3.4.4. Network meta-analysis of drug efficacy for angina pectoris treatment.", "3.4.5. SUCRA curves of treatment efficacy.", "3.5. Adverse reactions", "4. Pharmacoeconomic evaluation", "4.1. Research perspective", "4.2.1. Decision tree model.", "4.2.2. Statistical analysis.", "4.2.3. Effectiveness.", "4.2.4. Cost.", "4.3.1. Base-case results.", "4.3.2. Two-way sensitivity analysis.", "4.3.3. Probabilistic sensitivity analysis (PSA).", "6. Conclusion", "Acknowledgments", "Author contributions" ]
[ "We conducted this meta-analysis by the PRISMA 2020 checklist.\n[SUBTITLE] 2.1. Search strategy [SUBSECTION] We did a comprehensive search using predefined search terms in PubMed, Cochrane Library databases, Clinical Trials.gov, Chinese National Knowledge Infrastructure, WanFang, VIP databases, and China Biology Medicine Disc (Si-noMed) from January 2005 to December 2019. Keywords included “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor yellow injection,” and “safflor injection.” An advanced search combined with keywords was used to search the Chinese literature. The main search terms were: “stenocardia,” “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor injection.” All prospective studies were included with no linguistic restrictions and were independently screened by 2 reviewers (Lu and Li).\nWe did a comprehensive search using predefined search terms in PubMed, Cochrane Library databases, Clinical Trials.gov, Chinese National Knowledge Infrastructure, WanFang, VIP databases, and China Biology Medicine Disc (Si-noMed) from January 2005 to December 2019. Keywords included “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor yellow injection,” and “safflor injection.” An advanced search combined with keywords was used to search the Chinese literature. The main search terms were: “stenocardia,” “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor injection.” All prospective studies were included with no linguistic restrictions and were independently screened by 2 reviewers (Lu and Li).\n[SUBTITLE] 2.2. Inclusion criteria [SUBSECTION] [SUBTITLE] 2.2.1. Study type. [SUBSECTION] Randomized Controlled Trials (RCTs) and retrospective trials.\nRandomized Controlled Trials (RCTs) and retrospective trials.\n[SUBTITLE] 2.2.2. Participants. [SUBSECTION] All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\nAll patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\n[SUBTITLE] 2.2.3. Interventions. [SUBSECTION] The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\nThe treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). 
Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\n[SUBTITLE] 2.2.4. Outcomes. [SUBSECTION] The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\nThe total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to 
normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n[SUBTITLE] 2.2.1. Study type. [SUBSECTION] Randomized Controlled Trials (RCTs) and retrospective trials.\nRandomized Controlled Trials (RCTs) and retrospective trials.\n[SUBTITLE] 2.2.2. Participants. [SUBSECTION] All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\nAll patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\n[SUBTITLE] 2.2.3. Interventions. [SUBSECTION] The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\nThe treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\n[SUBTITLE] 2.2.4. Outcomes. 
[SUBSECTION] The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\nThe total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment 
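The total effective rate defined above is the binary outcome that feeds the network meta-analysis. As a minimal illustration (with hypothetical counts, not data from any included trial), it is the per-arm proportion of patients rated significantly effective or effective:

```python
def total_effective_rate(significantly_effective: int, effective: int, n: int) -> float:
    """Total effective rate = (significantly effective + effective) / patients in the arm."""
    return (significantly_effective + effective) / n

# Hypothetical arm of 40 patients: 25 significantly effective, 10 effective, 5 ineffective.
print(total_effective_rate(25, 10, 40))  # 0.875
```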
[SUBTITLE] 2.3. Exclusion criteria [SUBSECTION] We excluded studies without full-text access, studies with incomplete or severely flawed data, duplicate publications or duplicated data, retrospective studies, studies with incomplete or unclear reporting of the experimental design and results, and animal experiments.
[SUBTITLE] 2.4. Literature screening and data extraction [SUBSECTION] NoteExpress 3.4.0 software was used for reference management. Two researchers selected the documents independently following the inclusion and exclusion criteria and then extracted the data. The extracted data mainly comprised: general information of the study (author, publication time, sample size, age, type of study, etc); treatment (dosage and treatment duration); and outcome indicators (angina pectoris efficacy criteria, ECG, hemorheology indexes, blood lipid improvement, etc).
[SUBTITLE] 2.5. Quality assessment [SUBSECTION] The Cochrane Handbook version 5.0.1 RCT bias risk assessment tool[14] was applied to weigh the methodological quality of the RCTs. Seven domains were integrated into the evaluation: random sequence generation, allocation concealment, blinding of subjects and researchers, blinding of the outcome evaluator, incomplete outcome reporting, selective outcome reporting, and other biases. Each item was classified as “low-risk bias,” “unclear,” or “high-risk bias.” Two reviewers conducted data extraction and methodological evaluation; any inconsistencies were resolved through discussion.
[SUBTITLE] 2.6. Statistical methods [SUBSECTION] A network meta-analysis can be conducted with either frequentist statistics or a Bayesian approach. The frequentist approach draws inferences from statistical samples through hypothesis testing; the Bayesian approach is flexible and powerful but requires a high degree of statistical knowledge, whereas the frequentist method is simple and easily understood (Tian et al, 2014).[15] Bayesian analysis rests on Bayesian principles and posterior/prior probability, and studies indicate equivalent reliability between the results of frequentist and Bayesian network meta-analyses (Carlin et al, 2013).[16] This study implemented the network meta-analysis within a frequentist multivariate framework. Stata software (version 14.0) was used for statistical analysis and graphics plotting, applying the mvmeta and network packages (Tian et al, 2014).[15] The outcome indicators in this study were binary variables, and odds ratios (ORs) were calculated with 95% confidence intervals (95% CIs). A network diagram was prepared from the 2-arm data structure to demonstrate the comparative relationships among the different interventions (Zhang et al, 2013).[17] Subsequently, a random-effects network meta-analysis model was constructed to evaluate consistency, and the ifplot command was used to estimate the inconsistency factor and conduct the Z test; P > .05 indicated consistency between direct and indirect comparisons (Zhang et al, 2014).[18] Publication bias and small-study effects were assessed by drawing a comparison-adjusted funnel plot. The surface under the cumulative ranking curve (SUCRA) of each intervention was calculated to rank the efficacy of the interventions; the closer the SUCRA value is to 100, the better the intervention (Zeng et al, 2013).[19]
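Since the outcome is binary, the analysis rests on two quantities: pairwise odds ratios with 95% CIs and SUCRA-based ranking. The Python sketch below illustrates both calculations; it is not the Stata mvmeta/network workflow the authors used, and the counts and rank probabilities are hypothetical examples only.

```python
import numpy as np

def odds_ratio_ci(events_t, n_t, events_c, n_c, z=1.96):
    """Pairwise odds ratio and 95% CI for a binary 'effective' outcome."""
    a, b = events_t, n_t - events_t          # treatment arm: effective / not effective
    c, d = events_c, n_c - events_c          # control arm
    log_or = np.log((a * d) / (b * c))
    se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return np.exp(log_or), np.exp(log_or - z * se), np.exp(log_or + z * se)

def sucra(rank_prob):
    """SUCRA for each treatment from a (treatments x ranks) rank-probability matrix."""
    rank_prob = np.asarray(rank_prob)
    k = rank_prob.shape[1]
    cum = np.cumsum(rank_prob[:, :-1], axis=1)   # cumulative P(rank <= j), j = 1..k-1
    return cum.sum(axis=1) / (k - 1)             # closer to 1 (i.e., 100%) = better

# Hypothetical 2-arm trial: 38/40 effective with the combination vs 30/40 with CT alone.
print(odds_ratio_ci(38, 40, 30, 40))

# Hypothetical rank probabilities for 3 treatments over ranks 1..3 (rows sum to 1).
print(sucra([[0.7, 0.2, 0.1],
             [0.2, 0.5, 0.3],
             [0.1, 0.3, 0.6]]))
```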
[SUBTITLE] 3. Result [SUBSECTION]
[SUBTITLE] 3.1. Search results [SUBSECTION] Of the 79829 related records identified, 810 retrieved records were screened after removing duplicates and initially excluding invalid literature. Full-text assessment yielded 42 eligible articles after 768 articles were excluded according to this review’s inclusion and exclusion criteria; these comprised 41 Chinese studies and 1 English study. The study selection process followed the PRISMA guidelines (Fig. 1).
Document screening process.
[SUBTITLE] 3.2. Study characteristics [SUBSECTION] The main characteristics of the included studies are summarized in Table 1. The studies were published between 2006 and 2019. Overall, 42 trials[12,20–60] with 4290 angina-pectoris patients were included in the network meta-analysis, 2273 in the treatment group and 2017 in the control group. The sample sizes ranged from 46 to 432 participants. The mean age of the patients across trials ranged from 39.8 to 72.7 years, and treatment duration ranged from 7 to 14 days.
Characteristics of included studies.
(A) Because safflor injection and safflor yellow injection differ in purity, they were distinguished; (B) safflor yellow sodium chloride injection is prepared by adding sodium chloride to safflor yellow injection for dilution, so only safflor yellow injection was included in the network meta-analysis.
AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
[SUBTITLE] 3.3. Risk of bias assessment [SUBSECTION] The Cochrane risk of bias tool was used to assess the 9 included RCTs. Among the 42 included studies, 11[12,23–25,27,30,34,36,52,56,59] specifically reported the method of random sequence generation. Allocation concealment was adequately described in only a few of the included studies. All outcomes of the included studies were complete, and no other sources of bias were identified. Overall, the 42 studies showed moderate methodological quality. The details of the bias-risk evaluation for each study are presented in Figure 2.
Bias risk of included studies.
[SUBTITLE] 3.4. Network meta-analysis results [SUBSECTION]
[SUBTITLE] 3.4.1. Evidence network diagram. [SUBSECTION] This network meta-analysis (NMA) included 7 safflor yellow-related interventions, covering monotherapy and combinations with 5 other traditional Chinese medicine injections or with conventional treatment for angina pectoris. As shown in Figure 3, 2 closed loops were formed, centred on conventional treatment. Treatment efficacy for angina pectoris was estimated from 42 RCTs[12,20–60] according to the efficacy evaluation criteria; ECG effects and hemorheological indicators were reported in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.
Evidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
[SUBTITLE] 3.4.2. Test for heterogeneity. [SUBSECTION] Two triangular closed loops were present among the interventions. For each loop, the inconsistency plot was constructed, the inconsistency factor was calculated, and a Z test was conducted. The tests gave P = .446 for the loop CT + SYI–CT + SI–CT + DSI and P = .584 for the loop CT–CT + SYI–CT + SI, indicating no inconsistency between direct and indirect comparisons.
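The loop-specific test reported above follows the usual logic of comparing direct and indirect estimates on the log-odds-ratio scale. A minimal sketch of that calculation is given below, assuming a standard normal two-sided p-value; the effect sizes and standard errors are hypothetical, not the study's values.

```python
import math

def loop_inconsistency(log_or_direct, se_direct, log_or_indirect, se_indirect):
    """Inconsistency factor (IF) and two-sided Z-test p-value for one closed loop.
    P > .05 suggests the direct and indirect evidence agree."""
    inconsistency_factor = abs(log_or_direct - log_or_indirect)
    se = math.sqrt(se_direct ** 2 + se_indirect ** 2)
    z = inconsistency_factor / se
    p_value = math.erfc(z / math.sqrt(2))  # equals 2 * (1 - Phi(z))
    return inconsistency_factor, p_value

# Hypothetical direct vs indirect log-ORs for one CT / CT + SI / CT + SYI loop.
print(loop_inconsistency(0.85, 0.30, 0.60, 0.35))
```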
3.4. Network meta-analysis results

3.4.1. Evidence network diagram.

The network meta-analysis (NMA) included 7 safflor yellow-related studies, covering safflor yellow injection used alone and in combination with 5 other traditional Chinese medicine injections or with conventional treatment for angina pectoris. As shown in Figure 3, 2 closed loops were formed, both centred on conventional treatment. Treatment efficacy, judged against the efficacy evaluation criteria, was reported in all 42 RCTs,[12,20–60] while ECG outcomes and hemorheological indicators were reported in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.

Figure 3. Evidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.

3.4.2. Test for heterogeneity.

Two triangular closed loops were formed by the interventions. A loop-specific approach (LOOP) was used to construct the inconsistency plot, calculate the inconsistency factor, and perform a Z test. The Z test gave P = .446 for Loop (CT + SYI-CT + SI-CT + DSI) and P = .584 for Loop (CT-CT + SYI-CT + SI), indicating no significant inconsistency.

3.4.3. Publication bias.

Eleven studies were included in the funnel plot analysis of publication bias. The funnel plot showed an asymmetric distribution of points, indicating possible publication bias and small-study effects.

3.4.4. Network meta-analysis of drug efficacy for angina pectoris treatment.

All 42 RCTs contributed to the analysis of clinical treatment efficacy against angina pectoris; the comparisons are presented in Table 2. Compared with the conventional treatment group, the CT + SI + H group showed the highest treatment efficacy (OR = 9.62, 95% CI [3.84, 24.05]), whereas the DSI group showed the most modest effect (OR = 0.85, 95% CI [0.16, 4.64]).

Table 2. Network meta-analysis results comparing clinical effectiveness. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
3.4.5. SUCRA curves of treatment efficacy.

The SUCRA values from the probability ranking are listed in Table 3; CT + SI + H had the highest probability of being the most effective treatment. The SUCRA-based ranking, plotted in Figure 4, agrees with the ranking obtained from the NMA efficacy results (Table 3).

Table 3. Probability ranking of clinical effectiveness for the 13 angina-pectoris treatments. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.

Figure 4. SUCRA curves of the 13 treatment interventions. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
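For readers less familiar with the SUCRA metric, the short Python sketch below shows how a SUCRA value is obtained from a treatment's rank-probability vector, the kind of output summarized in Table 3. The rank probabilities used here are invented for illustration and are not values from this analysis.

```python
# Illustrative SUCRA calculation from rank probabilities.
# The probabilities below are invented for demonstration only.

def sucra(rank_probs):
    """SUCRA = sum of cumulative rank probabilities over the first a-1 ranks,
    divided by (a-1), where a is the number of competing treatments."""
    a = len(rank_probs)
    cumulative = 0.0
    total = 0.0
    for p in rank_probs[:-1]:          # ranks 1 .. a-1
        cumulative += p                # P(rank <= j)
        total += cumulative
    return total / (a - 1)

# Hypothetical rank probabilities (rank 1 = best) for one regimen among 4 treatments.
example = [0.55, 0.25, 0.15, 0.05]     # must sum to 1
print(round(sucra(example), 3))        # 0.767 -> values closer to 1 mean a higher ranking
```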
3.5. Adverse reactions

Nine studies[20,21,23,39,40,45,51,53,54] including 1045 patients reported the occurrence of adverse events. Mild venous inflammation was observed in 2 patients,[45] which resolved after needle removal and did not significantly affect treatment; in addition, 2 patients in the control group developed an allergic reaction. One study[40] reported bleeding, slightly prolonged coagulation time, and slightly reduced platelet counts after treatment in both the treatment and control groups. One study[39] reported 3 cases of acute myocardial infarction in the control group, none of which were fatal, among the adverse reactions affecting the circulatory system. Five studies[20,21,39,53,54] reported other adverse reactions, including insomnia, dizziness, nausea, pruritus, rash, hypotension, head swelling, and muscle aches. All adverse reactions resolved whether observation was continued or discontinued. These results suggest that safflor yellow injection is effective and safe for treating angina pectoris, with few adverse reactions.

4. Pharmacoeconomic evaluation

4.1. Research perspective

This analysis was conducted from the perspective of patients with angina.[61] Implicit costs were not included because of the complexity of estimating them, and because this is a retrospective analysis in which indirect and hidden costs differ too widely between treatment schemes to be estimated reliably. Therefore, only the direct costs of the different treatment schemes were included.
4.2. Methods

4.2.1. Decision tree model.

A decision tree model was used to analyze the costs and effects of the 13 angina treatment options included in the network meta-analysis. Efficacy and safety estimates obtained from the meta-analysis were used to evaluate the economics of the 13 regimens comprehensively. The structure of the decision tree is shown in Figure 5. The model primarily assessed short-term economic outcomes, and the time horizon of the analysis was 1 treatment course (14 days).

Figure 5. Decision tree model used to analyze the cost-effectiveness of the 13 angina treatment options included in the network meta-analysis; the specific treatment protocols and their costs are shown in the figure. The time horizon is 1 treatment course (14 days). AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
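As a rough illustration of how a decision tree of this kind is rolled back to an expected cost and effect per strategy, the sketch below evaluates a branch in which a treated patient either responds (incurring only the course cost) or fails to respond (incurring, in this sketch, the discounted hospitalization cost reported later in this section). The response probabilities and per-course costs are placeholders, and the assumption that non-response leads to hospitalization is ours, not necessarily the structure of the study's model.

```python
# Minimal roll-back of one decision-tree strategy:
# expected cost and expected effectiveness over a 14-day course.
# Numbers are placeholders chosen only to show the mechanics.

def expected_outcomes(p_effective, course_cost, failure_cost):
    """Expected cost and effect of a strategy whose course either
    succeeds (probability p_effective) or fails (adding failure_cost)."""
    exp_cost = course_cost + (1.0 - p_effective) * failure_cost
    exp_effect = p_effective            # effectiveness = probability of response
    return exp_cost, exp_effect

strategies = {
    "CT":      expected_outcomes(0.80, 450.0, 34811.63),   # course costs are placeholders
    "CT + SI": expected_outcomes(0.93, 700.0, 34811.63),   # 34811.63 = discounted hospitalization cost
}
for name, (cost, effect) in strategies.items():
    print(f"{name}: expected cost ¥{cost:.2f}, expected effectiveness {effect:.2f}")
```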
4.2.2. Statistical analysis.

For the pharmacoeconomic evaluation, a cost-effectiveness analysis was performed and incremental cost-effectiveness ratios (ICERs) were calculated. Tornado diagrams were produced by one-way sensitivity analysis, probabilistic sensitivity analysis was carried out by Monte Carlo simulation, and cost-effectiveness acceptability curves were drawn.[62] TreeAge 2011 was used to construct the decision tree model and to run the cost-effectiveness and sensitivity analyses.
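The incremental cost-effectiveness ratio referred to here is simply the difference in cost divided by the difference in effectiveness between two strategies. A minimal sketch with invented figures:

```python
# Incremental cost-effectiveness ratio (ICER) between two strategies.
# Cost and effectiveness values are invented for illustration.

def icer(cost_new, effect_new, cost_ref, effect_ref):
    """ICER = (C_new - C_ref) / (E_new - E_ref); undefined if effects are equal."""
    d_effect = effect_new - effect_ref
    if d_effect == 0:
        raise ValueError("Strategies have identical effectiveness; ICER undefined.")
    return (cost_new - cost_ref) / d_effect

# Hypothetical example: a regimen costing ¥300 more that raises the effective rate by 10 points.
print(icer(cost_new=750.0, effect_new=0.93, cost_ref=450.0, effect_ref=0.83))
# -> 3000.0, i.e. ¥3000 per additional treatment success
```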
4.2.3. Effectiveness.

The studies included in the economic evaluation were drawn from the network meta-analysis. The effective rate of each of the 13 treatment regimens was obtained by weighting the treatment efficiency reported for angina patients in each study according to the study's weight in the meta-analysis forest plot. The resulting ranking of effective rates was similar to the SUCRA ranking in the NMA, indicating that the weighted effective rates were reasonable and could be carried into the economic evaluation (Table 4).

Table 4. Clinical efficacy and ranking of the 13 angina-pectoris treatments. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
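The weighting step described above, turning study-level effective rates into one pooled rate per regimen using the meta-analysis weights, amounts to a weighted average. A small sketch with made-up study rates and weights:

```python
# Weighted effective rate for one regimen, combining study-level rates
# with their (normalized) meta-analysis weights.  All numbers are invented.

def weighted_effective_rate(rates, weights):
    """Weighted average of study effective rates; weights need not sum to 1."""
    total_weight = sum(weights)
    return sum(r * w for r, w in zip(rates, weights)) / total_weight

study_rates   = [0.95, 0.90, 0.92]   # effective rate reported by each study
study_weights = [40.0, 35.0, 25.0]   # % weight of each study in the forest plot
print(round(weighted_effective_rate(study_rates, study_weights), 4))  # 0.925
```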
4.2.4. Cost.

Costs were estimated from the patient perspective. We assumed that indirect costs were the same across the 13 interventions, so that differences in total costs were driven by direct medical costs, and that the cost of the conventional-treatment component was identical in every regimen. The active components of safflor yellow injection and safflor injection are the same, but because the two products are made by different manufacturers they were kept separate in the network meta-analysis and in the cost calculation. A discount rate of 5% was applied to the cost data, with all costs adjusted uniformly to early 2020.
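Uniform adjustment of historical costs to the early-2020 reference year at 5% amounts to multiplying each cost by (1 + 0.05) raised to the number of years between the year the price was observed and 2020. The sketch below shows the arithmetic; the assumption that the conventional-treatment price of ¥45.57 is a 2018 price is ours, used only to show that two years of adjustment reproduce the ¥50.24 quoted in the drug-cost paragraph below.

```python
# Uniform 5% adjustment of historical costs forward to the 2020 reference year.

DISCOUNT_RATE = 0.05
REFERENCE_YEAR = 2020

def discounted_cost(cost, year_observed, reference_year=REFERENCE_YEAR, rate=DISCOUNT_RATE):
    """Express a cost observed in `year_observed` in reference-year terms at the given rate."""
    years = reference_year - year_observed
    return cost * (1.0 + rate) ** years

print(round(discounted_cost(100.0, 2016), 2))   # 121.55 -- placeholder ¥100 cost from 2016
print(round(discounted_cost(45.57, 2018), 2))   # 50.24 -- consistency check, assuming a 2018 price
```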
Drug cost. The most common retail prices of the drugs were used in the base case, and the lowest and highest manufacturers' retail prices were used in the sensitivity analysis. To calculate the total drug cost of each of the 13 treatment schemes, the amount of each drug or injection reported in the included studies was multiplied by the study weights obtained from the meta-analysis, giving a single weighted dose and drug cost per regimen. The costs of the 10 drugs are shown in Table 5, and the weighted dosages of the 13 treatment regimens are listed in Table 6.

Table 5. Cost price and maximum/minimum value of the 10 drugs. (A) All price data come from the 315 medicine price inquiry website (https://www.315jiage.cn). (B) Maximum, minimum, and base prices refer to the exact drug specifications used.

Table 6. Weighted dose and cost of the 13 treatment options. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.

The cost of 1 course of conventional treatment comprised Aspirin enteric-coated tablets (¥13.09), Propranolol tablets (¥13.72), Nitroglycerin tablets (¥1.68), and Nifedipine sustained-release tablets (¥17.08), for a total of ¥45.57;[63] after discounting, the cost of conventional treatment was ¥50.24.
Other costs. Injection costs comprise the cost of materials, such as disposable infusion tubes and syringes, and the fee for intravenous administration. The latest medical service fees published by the Beijing Medical Insurance Bureau are ¥5.5 for an intravenous injection and ¥7 for an intravenous infusion.[64] The material fee was ¥8.00 for an intravenous infusion and ¥2.40 for an intravenous injection;[65] after discounting, the average administration cost is ¥14.3 per day. In addition, the examination costs over the whole course of treatment for angina patients include blood, urine, and stool routine tests, liver and kidney function tests, and an electrocardiogram before treatment. The costs of laboratory tests and electrocardiograms were taken from Jianwei Xuan et al,[66] and the average prices of medical services were obtained from the website of the local Health Commission; the discounted examination cost was ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and found that the average hospitalization cost for angina patients was ¥26745.12, while Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and found an average hospitalization cost of ¥26482.41 for patients with unstable angina. The average hospitalization cost after discounting is ¥34811.63.
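Putting the components of this subsection together, the direct cost of one 14-day course of a conventional-treatment-plus-injection regimen can be assembled from the weighted drug cost, the daily administration cost (¥14.3/day), the examination cost (¥373.6), and the conventional-treatment cost (¥50.24). The sketch below does this arithmetic with a placeholder drug cost; whether the study's model adds the components in exactly this way is an assumption.

```python
# Assembling the direct cost of one 14-day course for a CT + injection regimen,
# using the per-day administration, examination, and conventional-treatment costs
# quoted in this section.  The weighted drug cost itself is a placeholder value.

COURSE_DAYS = 14
ADMIN_COST_PER_DAY = 14.3      # discounted injection/infusion administration fee, ¥/day
EXAMINATION_COST = 373.6       # discounted examination cost over the course, ¥
CONVENTIONAL_COST = 50.24      # discounted cost of one course of conventional treatment, ¥

def course_cost(weighted_drug_cost):
    """Total direct cost of one treatment course for a CT + injection regimen."""
    return (weighted_drug_cost
            + ADMIN_COST_PER_DAY * COURSE_DAYS
            + EXAMINATION_COST
            + CONVENTIONAL_COST)

print(round(course_cost(weighted_drug_cost=300.0), 2))   # 924.04 with a ¥300 placeholder drug cost
```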
4.3. Results

4.3.1. Base-case results.

Treatment regimens with an effective rate above 90% (together with conventional treatment) were selected for the economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.

Table 7. Base-case analysis results. CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, ICER = incremental cost-effectiveness ratio, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
4.3.2. Two-way sensitivity analysis.

The effective rates of the 9 treatment regimens in the economic evaluation were assumed to fluctuate by 5%, and costs were varied between the highest and lowest manufacturer retail prices; willingness to pay (WTP) was set at the 2018 GDP per capita. As shown in Figure 6, the parameter with the greatest impact on the results was the effective rate of the CT + SI group.

Figure 6. Sensitivity analysis of cost and effective rate. Parameters are labelled e (effective rate) or c (cost) followed by the regimen, where CT = conventional treatment, SI = safflor injection, SYI = safflor yellow injection, H = low molecular heparin, DS = compound Danshen drip pill, LC = L-Carnitine injection, CD = carvedilol, AC = atorvastatin calcium tablets, and SMI = Shenmai injection; for example, eCT_SI = effective rate of conventional treatment plus safflor injection, and cCT_SI_H = cost of conventional treatment plus safflor injection plus low molecular heparin.
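A one-way sensitivity analysis of the kind summarized in Figure 6 varies one parameter at a time between a low and a high value and records how far the outcome swings; the parameters producing the widest swings sit at the top of the tornado diagram. The sketch below illustrates the mechanics using the incremental net monetary benefit of CT + SI over CT as the outcome; the base-case values, the ranges, and the use of net monetary benefit are illustrative assumptions, not the study's model.

```python
# One-way (tornado-style) sensitivity analysis on the incremental net monetary
# benefit of CT + SI versus CT.  Base-case values and ranges are invented.

WTP = 66000.0                      # willingness to pay per treatment success (¥), illustrative

base = {"eCT": 0.80, "cCT": 620.0, "eCT_SI": 0.93, "cCT_SI": 920.0}

def incremental_nmb(p):
    """Incremental net monetary benefit of CT + SI over CT: WTP * dE - dC."""
    return WTP * (p["eCT_SI"] - p["eCT"]) - (p["cCT_SI"] - p["cCT"])

ranges = {                          # low / high value tested for each parameter
    "eCT_SI": (0.88, 0.98),
    "cCT_SI": (800.0, 1100.0),
    "eCT":    (0.75, 0.85),
}

swings = []
for name, (low, high) in ranges.items():
    results = []
    for value in (low, high):
        p = dict(base)
        p[name] = value
        results.append(incremental_nmb(p))
    swings.append((name, min(results), max(results)))

# Sort by swing width, largest first, as in a tornado diagram.
for name, lo, hi in sorted(swings, key=lambda s: s[2] - s[1], reverse=True):
    print(f"{name}: incremental NMB from ¥{lo:.0f} to ¥{hi:.0f}")
```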
4.3.3. Probabilistic sensitivity analysis (PSA).

The results of the PSA, based on 1000 Monte Carlo simulations, are presented in the cost-effectiveness scatter plot (Fig. 8). Effective rates were assumed to follow a beta distribution and costs a triangular distribution, and the patient's WTP was varied from 0 to ¥198018. The cost-effectiveness acceptability curve is shown in Figure 7: the probability that CT + SI is cost-effective increased with the WTP threshold and exceeded that of CT once WTP reached ¥19801.8. When WTP exceeded ¥39603.6, the probability that CT + SI was the more economical scheme declined, but it remained greater than 50%. The results of the PSA were consistent with the base-case results.

Figure 7. Cost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.

Figure 8. Cost-effectiveness scatter plot of the screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.
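The probabilistic sensitivity analysis described above draws effective rates from beta distributions and costs from triangular distributions and then, at each willingness-to-pay threshold, counts the share of simulations in which each strategy has the highest net monetary benefit, which is what an acceptability curve such as Figure 7 plots. The sketch below mirrors that procedure for two strategies with invented distribution parameters; it reproduces the logic, not the study's inputs.

```python
# Minimal probabilistic sensitivity analysis for two strategies (CT and CT + SI).
# Effective rates ~ Beta, costs ~ Triangular; distribution parameters are invented.
import random

random.seed(0)
N_SIM = 1000

def draw(strategy):
    """One Monte Carlo draw of (cost, effectiveness) for a strategy."""
    a, b, (low, mode, high) = strategy
    effect = random.betavariate(a, b)
    cost = random.triangular(low, high, mode)
    return cost, effect

# (beta alpha, beta beta, (cost low, mode, high)) -- illustrative values only.
strategies = {
    "CT":      (80, 20, (500.0, 620.0, 750.0)),
    "CT + SI": (93,  7, (780.0, 920.0, 1100.0)),
}

draws = {name: [draw(s) for _ in range(N_SIM)] for name, s in strategies.items()}

for wtp in (0.0, 19801.8, 39603.6):
    wins = {name: 0 for name in strategies}
    for i in range(N_SIM):
        # Net monetary benefit of each strategy in simulation i.
        nmb = {name: wtp * draws[name][i][1] - draws[name][i][0] for name in strategies}
        wins[max(nmb, key=nmb.get)] += 1
    probs = ", ".join(f"{name}: {wins[name] / N_SIM:.2f}" for name in strategies)
    print(f"WTP ¥{wtp:g} -> probability cost-effective: {probs}")
```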
The probability that CT + SI was cost-effective increased gradually with the WTP threshold and exceeded that of CT when the WTP reached ¥19801.8. When the WTP exceeded ¥39603.6, the probability that CT + SI was the more economical scheme decreased, but it remained above 50%. The results of the PSA were consistent with the base-case results (Fig. 8).\nCost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nCost-effectiveness of screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "This study used several analytical methods to conduct a multilevel analysis of 13 safflower-related treatment regimens for angina pectoris, combining evidence-based medicine with economic evaluation. From the evidence-based perspective, the CT + SI + H group had the best treatment efficacy, while the CT + SI group was the most cost-effective once the cost data were considered. Yet CT + SI + DS was recommended as the best treatment choice because of its combined advantages in efficacy and cost-effectiveness. Sensitivity analysis showed that the model was sensitive to treatment effectiveness rather than drug cost. We therefore recommend the combination of conventional treatment and safflower injection for treating angina pectoris; adding low molecular weight heparin or compound Danshen drip pills to this combination may further improve efficacy and cost-effectiveness. Given the limited data, more clinical trials are needed to support these conclusions.", "The authors thank Dr He and Dr Du for linguistic assistance.", "Conceptualization: Yongfa Chen, Liang Lu.\nData curation: Qiuchen Jin.\nFormal analysis: Liang Lu.\nMethodology: Liang Lu, Yang Li.\nSoftware: Yang Li.\nSupervision: Yongfa Chen.\nWriting – original draft: Liang Lu.\nWriting – review & editing: Liang Lu." ]
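The cost section above weights each regimen's drug dose by the meta-analysis weights, discounts all costs at 5% to early 2020, and the base-case analysis compares regimens via the incremental cost-effectiveness ratio (ICER). The following is a minimal Python sketch of those three calculations under stated assumptions; the drug names, unit prices, doses, weights, and effective rates are illustrative placeholders and are not taken from the study's data. Note that the reported conventional-treatment cost of ¥45.57 becoming ¥50.24 after discounting is consistent with two years of compounding at 5%, which is the convention assumed below.

```python
# Illustrative sketch of the weighted-dose costing, 5% forward discounting,
# and ICER calculation described in the cost and base-case sections.
# All numbers and regimen names are placeholders, not the study's actual data.

def discount(cost, annual_rate=0.05, years=2):
    """Compound a cost forward to the evaluation year (early 2020) at the given rate."""
    return cost * (1 + annual_rate) ** years

def weighted_dose(doses, weights):
    """Weight per-study doses by their meta-analysis weights (weights sum to 1)."""
    return sum(d * w for d, w in zip(doses, weights))

def regimen_drug_cost(unit_price, dose_per_day, days):
    """Drug cost of one treatment course for a regimen."""
    return unit_price * dose_per_day * days

def icer(cost_a, eff_a, cost_b, eff_b):
    """Incremental cost-effectiveness ratio of regimen A versus comparator B."""
    return (cost_a - cost_b) / (eff_a - eff_b)

if __name__ == "__main__":
    # Hypothetical safflor injection doses (mL/day) from three trials with
    # meta-analysis weights 0.5, 0.3 and 0.2.
    si_dose = weighted_dose([20.0, 30.0, 40.0], [0.5, 0.3, 0.2])

    # Conventional treatment course of ¥45.57 discounted at 5% over 2 years (~¥50.24),
    # plus a hypothetical ¥2.0/mL injection over a 14-day course.
    ct_cost = discount(45.57)
    ct_si_cost = ct_cost + discount(regimen_drug_cost(2.0, si_dose, 14))

    # Hypothetical effective rates of 0.93 (CT + SI) and 0.80 (CT).
    delta = icer(ct_si_cost, 0.93, ct_cost, 0.80)
    print(f"weighted dose = {si_dose:.1f} mL/day, ICER = ¥{delta:.2f} per extra unit of effect")
```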
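The probabilistic sensitivity analysis described above draws effectiveness from a beta distribution and cost from a triangular distribution over 1000 Monte Carlo iterations, then sweeps the willingness-to-pay threshold to construct the cost-effectiveness acceptability curve. Below is a minimal sketch of that procedure; the distribution parameters and the two regimens compared are assumptions for illustration only, not the study's fitted inputs.

```python
# Minimal sketch of the PSA and cost-effectiveness acceptability curve (CEAC)
# described above. Distribution parameters and regimen values are illustrative.
import random

random.seed(0)
N_SIM = 1000  # number of Monte Carlo iterations, as in the study

# Hypothetical inputs per regimen: beta(a, b) for effectiveness,
# triangular(low, mode, high) for cost (in ¥).
regimens = {
    "CT":      {"eff": (80, 20), "cost": (45.0, 50.0, 55.0)},
    "CT + SI": {"eff": (93, 7),  "cost": (140.0, 160.0, 180.0)},
}

def draw(params):
    a, b = params["eff"]
    lo, mode, hi = params["cost"]
    eff = random.betavariate(a, b)          # effectiveness ~ Beta(a, b)
    cost = random.triangular(lo, hi, mode)  # cost ~ Triangular(lo, mode, hi)
    return eff, cost

def ceac(wtp):
    """Probability that CT + SI has the higher net monetary benefit at a given WTP."""
    wins = 0
    for _ in range(N_SIM):
        eff_ct, cost_ct = draw(regimens["CT"])
        eff_si, cost_si = draw(regimens["CT + SI"])
        wins += (wtp * eff_si - cost_si) > (wtp * eff_ct - cost_ct)
    return wins / N_SIM

# Sweep WTP from 0 to ¥198018 (the range used in the study) in broad steps.
for wtp in range(0, 198019, 39603):
    print(f"WTP = ¥{wtp:>6}: P(CT + SI cost-effective) = {ceac(wtp):.2f}")
```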
[ "methods", null, null, null, null, null, null, null, null, null, "methods", "results", "results", null, null, "results", null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Search strategy", "2.2. Inclusion criteria", "2.2.1. Study type.", "2.2.2. Participants.", "2.2.3. Interventions.", "2.2.4. Outcomes.", "2.3. Exclusion criteria", "2.4. Literature screening and data extraction", "2.5. Quality assessment", "2.6. Statistical methods", "3. Result", "3.1. Search results", "3.2. Study characteristics", "3.3. Risk of bias assessment", "3.4. Network meta-analysis results", "3.4.1. Evidence network diagram.", "3.4.2. Test for heterogeneity.", "3.4.3. Publication bias.", "3.4.4. Network meta-analysis of drug efficacy for angina pectoris treatment.", "3.4.5. SUCRA curves of treatment efficacy.", "3.5. Adverse reactions", "4. Pharmacoeconomic evaluation", "4.1. Research perspective", "4.2. Methods", "4.2.1. Decision tree model.", "4.2.2. Statistical analysis.", "4.2.3. Effectiveness.", "4.2.4. Cost.", "4.3. Results", "4.3.1. Base-case results.", "4.3.2. Two-way sensitivity analysis.", "4.3.3. Probabilistic sensitivity analysis (PSA).", "5. Discussion", "6. Conclusion", "Acknowledgments", "Author contributions" ]
[ "Coronary heart disease (CHD) is the most common heart disease and represents a continuum of diseases. CHD begins with coronary atherosclerosis in the early stages and progresses to established coronary artery disease (CAD), caused by plaque buildup in the walls of the arteries that supply blood to the heart and other parts of the body. Of all the diseases in China, CAD is currently the leading cause of death. As of 2013, the CAD prevalence among people aged 15 and above was 1.23%, 0.81%, and 1.02% for the urban and rural residents and combination, respectively, while the prevalence reached 2.78% in the older population over 60.[1] A recent study on the global burden of disease displayed that China accounted for about 38.2% of the deaths of CHD (ischemic heart disease) worldwide from 1990 to 2017.[2] Meanwhile, the CHD for all cardiovascular diseases elevated from 29% to 37%.[3] Treating angina pectoris is critical to avoiding CHD by preventing acute myocardial infarction. In China, the annual angina pectoris is higher in men than in women aged >40 years.[4] Similarly, in another world, annual angina pectoris in 50-year-old men and women is 0.2% and 0.08%, respectively.\nPatients with CHD and angina pectoris frequently manifest anxiety and fear of untimely death. Besides,[5] in patients’ self-consciousness, they saw themselves as a burden to their family and others, both physically and financially. In addition to their physical pain, the such psychological condition could result in negative emotions such as anxiety, guilt, and remorse in patients,[6] which would be more likely to lead to acute myocardial infarction or sudden death.[7] Additionally, the irrational drug became increasingly severe due to the increasing number of patients with CHD and angina pectoris saddled the healthcare system with a more social and economic burden. More specifically, CHD accounted for 9.4% of the disability-adjusted life-year loss of the top 10 diseases, ranking first in developed and developing countries.[8] The survey reported that the PCI cases in 2017 were 753142, a 13% increase over 2016, and the cost of hospitalization and medical devices is increasing annually.\nCommonly used drugs for treating CHD and angina pectoris include nitrates β-blockers, calcium channel blockers, and antiplatelet agents. However, these drugs always produce side effects. Here, we selected a natural product, safflor yellow, a pigment extracted from the petals of safflor,[9–11] as a treatment drug to assess its efficacy, safety, and cost-effectiveness. Safflor yellow combats cardiovascular disease through various pharmacological effects, such as dilating blood vessels, improving myocardial blood supply, inhibiting platelet aggregation and thrombosis, and anti-oxidation, against cardiovascular disease. This study aimed to identify an optimal treatment plan for safflor yellow injection to guide rational drug use, targeting better allocation of resources and cost savings. To compare the efficacy and safety of safflor yellow injection with the existing angina-pectoris treatments, we conducted stratified research on top of evidence-based medicine using a network meta-analysis followed by pharmacoeconomic evaluation.", "We conducted this meta-analysis by the PRISMA 2020 checklist.\n[SUBTITLE] 2.1. 
Search strategy [SUBSECTION] We did a comprehensive search using predefined search terms in PubMed, Cochrane Library databases, Clinical Trials.gov, Chinese National Knowledge Infrastructure, WanFang, VIP databases, and China Biology Medicine Disc (Si-noMed) from January 2005 to December 2019. Keywords included “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor yellow injection,” and “safflor injection.” An advanced search combined with keywords was used to search the Chinese literature. The main search terms were: “stenocardia,” “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor injection.” All prospective studies were included with no linguistic restrictions and were independently screened by 2 reviewers (Lu and Li).\nWe did a comprehensive search using predefined search terms in PubMed, Cochrane Library databases, Clinical Trials.gov, Chinese National Knowledge Infrastructure, WanFang, VIP databases, and China Biology Medicine Disc (Si-noMed) from January 2005 to December 2019. Keywords included “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor yellow injection,” and “safflor injection.” An advanced search combined with keywords was used to search the Chinese literature. The main search terms were: “stenocardia,” “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor injection.” All prospective studies were included with no linguistic restrictions and were independently screened by 2 reviewers (Lu and Li).\n[SUBTITLE] 2.2. Inclusion criteria [SUBSECTION] [SUBTITLE] 2.2.1. Study type. [SUBSECTION] Randomized Controlled Trials (RCTs) and retrospective trials.\nRandomized Controlled Trials (RCTs) and retrospective trials.\n[SUBTITLE] 2.2.2. Participants. [SUBSECTION] All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\nAll patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\n[SUBTITLE] 2.2.3. Interventions. [SUBSECTION] The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\nThe treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. 
As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\n[SUBTITLE] 2.2.4. Outcomes. [SUBSECTION] The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\nThe total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was 
corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n[SUBTITLE] 2.2.1. Study type. [SUBSECTION] Randomized Controlled Trials (RCTs) and retrospective trials.\nRandomized Controlled Trials (RCTs) and retrospective trials.\n[SUBTITLE] 2.2.2. Participants. [SUBSECTION] All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\nAll patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\n[SUBTITLE] 2.2.3. Interventions. [SUBSECTION] The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\nThe treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\n[SUBTITLE] 2.2.4. Outcomes. 
[SUBSECTION] The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\nThe total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment 
was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n[SUBTITLE] 2.3. Exclusion criteria [SUBSECTION] Studies without full-text access, studies with incomplete or severely faulted data, studies with repetitive publications or data, retrospective studies, studies with incomplete or unclear reports on experimental design and results reporting, and animal experiments.\nStudies without full-text access, studies with incomplete or severely faulted data, studies with repetitive publications or data, retrospective studies, studies with incomplete or unclear reports on experimental design and results reporting, and animal experiments.\n[SUBTITLE] 2.4. Literature screening and data extraction [SUBSECTION] The NoteExpress 3.4.0 software was used for reference management. Two researchers selected the documents independently following the inclusion and exclusion criteria and then extracted the data. The literature extraction data predominantly contained the following information: general information of the study: author, publication time, sample size, age, type of study, etc; treatment: dosage and treatment duration; and outcome indicators: angina pectoris efficacy criteria, ECG, hemorheology indexes, blood lipid improvement, etc.\nThe NoteExpress 3.4.0 software was used for reference management. Two researchers selected the documents independently following the inclusion and exclusion criteria and then extracted the data. The literature extraction data predominantly contained the following information: general information of the study: author, publication time, sample size, age, type of study, etc; treatment: dosage and treatment duration; and outcome indicators: angina pectoris efficacy criteria, ECG, hemorheology indexes, blood lipid improvement, etc.\n[SUBTITLE] 2.5. Quality assessment [SUBSECTION] The Cochrane Handbook versions 5.0.1 RCT bias risk assessment tool[14] was applied to weigh the methodological quality of RCTs. Seven domains were integrated into the evaluation: random sequence generation, allocation concealment, blinding method of subjects and researchers, blinding method of the outcome evaluator, incomplete outcome report, selective outcome report, and other biases. Each item was classified as a “low-risk bias,” “unclear,” or “high-risk bias.” Two reviewers conducted data extraction and methodological evaluation. Any inconsistencies were resolved through discussion.\nThe Cochrane Handbook versions 5.0.1 RCT bias risk assessment tool[14] was applied to weigh the methodological quality of RCTs. Seven domains were integrated into the evaluation: random sequence generation, allocation concealment, blinding method of subjects and researchers, blinding method of the outcome evaluator, incomplete outcome report, selective outcome report, and other biases. Each item was classified as a “low-risk bias,” “unclear,” or “high-risk bias.” Two reviewers conducted data extraction and methodological evaluation. Any inconsistencies were resolved through discussion.\n[SUBTITLE] 2.6. Statistical methods [SUBSECTION] A network meta-analysis was utilized for frequency statistics and a Bayesian approach. The frequency statistics approach used statistical samples under hypothesis testing and inference conclusions. The Bayesian approach is flexible and powerful and requires a high degree of statistical knowledge. 
The frequency statistics method is simple and easily understood (Tian et al, 2014).[15] The Bayesian analysis was performed under Bayesian principles and posterior/prior probability. Studies indicated equivalent reliability between the results of a network meta-analysis of frequency statistics and the Bayesian approach (Carlin et al, 2013).[16] This study implemented a network meta-analysis for research and analysis according to the frequency statistics, a multivariate framework and frequency theory. Stata software (version 14.0) was used for statistical analysis and graphics plotting, applying the mvmeta network and its packages (Tian et al, 2014).[15] The outcome indicators in this study were binary classification variables. Odds ratios (ORs) were calculated with 95% confidence intervals (95% CIs). A network diagram was prepared under the 2-arm data structure to demonstrate the comparative relationships among the different interventions (Zhang et al, 2013).[17] Subsequently, a networked meta-random effect model was constructed to evaluate the model consistency, and then “if plot” command was utilized to assess the inconsistency factor value and conduct the Z test. P > .05 indicated consistency, demonstrating better consistency in direct and indirect comparisons (Zhang et al, 2014).[18] The intervention was evaluated for publication bias or small-sample effects by drawing a comparison-correction funnel plot. The surface under the cumulative ranking curve of each intervention (SUCRA) was calculated to predict the ranking of the intervention drug efficacy. The closer the SUCRA value is to 100, the better the intervention is Zeng et al, 2013.[19]\nA network meta-analysis was utilized for frequency statistics and a Bayesian approach. The frequency statistics approach used statistical samples under hypothesis testing and inference conclusions. The Bayesian approach is flexible and powerful and requires a high degree of statistical knowledge. The frequency statistics method is simple and easily understood (Tian et al, 2014).[15] The Bayesian analysis was performed under Bayesian principles and posterior/prior probability. Studies indicated equivalent reliability between the results of a network meta-analysis of frequency statistics and the Bayesian approach (Carlin et al, 2013).[16] This study implemented a network meta-analysis for research and analysis according to the frequency statistics, a multivariate framework and frequency theory. Stata software (version 14.0) was used for statistical analysis and graphics plotting, applying the mvmeta network and its packages (Tian et al, 2014).[15] The outcome indicators in this study were binary classification variables. Odds ratios (ORs) were calculated with 95% confidence intervals (95% CIs). A network diagram was prepared under the 2-arm data structure to demonstrate the comparative relationships among the different interventions (Zhang et al, 2013).[17] Subsequently, a networked meta-random effect model was constructed to evaluate the model consistency, and then “if plot” command was utilized to assess the inconsistency factor value and conduct the Z test. P > .05 indicated consistency, demonstrating better consistency in direct and indirect comparisons (Zhang et al, 2014).[18] The intervention was evaluated for publication bias or small-sample effects by drawing a comparison-correction funnel plot. The surface under the cumulative ranking curve of each intervention (SUCRA) was calculated to predict the ranking of the intervention drug efficacy. 
The closer the SUCRA value is to 100, the better the intervention is Zeng et al, 2013.[19]", "We did a comprehensive search using predefined search terms in PubMed, Cochrane Library databases, Clinical Trials.gov, Chinese National Knowledge Infrastructure, WanFang, VIP databases, and China Biology Medicine Disc (Si-noMed) from January 2005 to December 2019. Keywords included “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor yellow injection,” and “safflor injection.” An advanced search combined with keywords was used to search the Chinese literature. The main search terms were: “stenocardia,” “angina pectoris,” “coronary heart disease,” “safflor flavin,” “safflor injection.” All prospective studies were included with no linguistic restrictions and were independently screened by 2 reviewers (Lu and Li).", "[SUBTITLE] 2.2.1. Study type. [SUBSECTION] Randomized Controlled Trials (RCTs) and retrospective trials.\nRandomized Controlled Trials (RCTs) and retrospective trials.\n[SUBTITLE] 2.2.2. Participants. [SUBSECTION] All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\nAll patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]\n[SUBTITLE] 2.2.3. Interventions. [SUBSECTION] The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\nThe treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.\n[SUBTITLE] 2.2.4. Outcomes. 
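The statistical-methods passage above estimates pairwise odds ratios with 95% confidence intervals and ranks the interventions by the surface under the cumulative ranking curve (SUCRA), where values closer to 100 indicate better interventions. The short sketch below illustrates both calculations; the 2 × 2 counts and the rank-probability vector are invented for illustration and are not taken from the included trials.

```python
# Illustrative sketch of two quantities described in the statistical methods:
# (1) an odds ratio with its 95% CI from a single two-arm comparison, and
# (2) a SUCRA value computed from rank probabilities.
# All counts and probabilities below are invented examples, not study data.
import math

def odds_ratio_ci(events_t, total_t, events_c, total_c, z=1.96):
    """OR and 95% CI for a binary outcome in one treatment-vs-control comparison."""
    a, b = events_t, total_t - events_t      # treatment: responders / non-responders
    c, d = events_c, total_c - events_c      # control:   responders / non-responders
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or), math.exp(log_or - z * se), math.exp(log_or + z * se)

def sucra(rank_probs):
    """SUCRA (in %) from rank probabilities, best rank first.

    SUCRA = sum over the first k-1 ranks of the cumulative probability of
    being at that rank or better, divided by (k - 1), for k treatments.
    """
    k = len(rank_probs)
    cum = 0.0
    total = 0.0
    for p in rank_probs[:-1]:
        cum += p          # cumulative probability of rank <= r
        total += cum
    return 100 * total / (k - 1)

if __name__ == "__main__":
    # Hypothetical trial: 45/50 responders on the combined regimen vs 36/50 on CT alone.
    or_, lo, hi = odds_ratio_ci(45, 50, 36, 50)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")

    # Hypothetical rank probabilities over 4 treatments (ranks 1..4).
    print(f"SUCRA = {sucra([0.6, 0.25, 0.10, 0.05]):.1f}%")
```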
[SUBSECTION] The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\nThe total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment 
was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.", "Randomized Controlled Trials (RCTs) and retrospective trials.", "All patients were clinically diagnosed with angina pectoris, including stable and unstable angina pectoris caused by aging, abnormal lipid metabolism, hypertension, smoking, diabetes, and other factors.[12]", "The treatment group was dosed with safflor yellow injection alone, safflor yellow freeze-dried injection product or safflor injection, or safflor yellow combined with conventional treatment or other drugs (low molecular heparin 5000U, carvedilol, levocarnitine injection, atorvastatin calcium tablets, Danshen injection, etc). Conventional treatments with nitrate drugs, β-blockers, angiotensin-converting enzyme inhibitors, and calcium channel blockers were used when angina pectoris occurred. As a result, these 4 drugs were incorporated into the cost calculation in the following pharmacoeconomic studies.\nThe control group was given conventional treatment or drugs against angina pectoris, such as compounded Danshen dripping pills, compound Danshen injections, Xiangdan injections, and safflor injections.", "The total effective rate was defined based on “Common Guidelines for the Diagnosis and Treatment of Cardiovascular and Cerebrovascular Diseases in China.”[13]\njudgment criteria for angina pectoris:•significantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;•effective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;•ineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\n\nsignificantly effective: angina pectoris disappears or disappears or the frequency or nitroglycerin consumption is reduced by more than 80% treated for 1 course;\neffective: angina pectoris is largely relieved after 1 course, Nitroglycerin consumption was reduced by over 50%;\nineffective: times of angina pectoris or nitroglycerin usage was reduced by more than 80%, 50% to 80%, and <50%, respectively.\nCriteria for the electrocardiogram (ECG) efficacy:•significantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;•effective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;•ineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.\n\nsignificantly effective: the symptoms disappear, the ST segment and T wave of the ECG return to normal, and the exercise test changes from positive to negative;\neffective: symptoms were relieved, the ST segment was low on the ECG, and the T-wave inversion was corrected;\nineffective: the symptoms were not alleviated, and the ST segment was low on the ECG, or the T-wave inversion was not improved.", "Studies without full-text access, studies with incomplete or severely faulted data, studies with repetitive publications or data, retrospective studies, studies with incomplete or unclear reports on experimental design and results reporting, and animal experiments.", "The NoteExpress 3.4.0 software was used for reference management. 
Two researchers selected the documents independently following the inclusion and exclusion criteria and then extracted the data. The literature extraction data predominantly contained the following information: general information of the study: author, publication time, sample size, age, type of study, etc; treatment: dosage and treatment duration; and outcome indicators: angina pectoris efficacy criteria, ECG, hemorheology indexes, blood lipid improvement, etc.", "The Cochrane Handbook versions 5.0.1 RCT bias risk assessment tool[14] was applied to weigh the methodological quality of RCTs. Seven domains were integrated into the evaluation: random sequence generation, allocation concealment, blinding method of subjects and researchers, blinding method of the outcome evaluator, incomplete outcome report, selective outcome report, and other biases. Each item was classified as a “low-risk bias,” “unclear,” or “high-risk bias.” Two reviewers conducted data extraction and methodological evaluation. Any inconsistencies were resolved through discussion.", "A network meta-analysis was utilized for frequency statistics and a Bayesian approach. The frequency statistics approach used statistical samples under hypothesis testing and inference conclusions. The Bayesian approach is flexible and powerful and requires a high degree of statistical knowledge. The frequency statistics method is simple and easily understood (Tian et al, 2014).[15] The Bayesian analysis was performed under Bayesian principles and posterior/prior probability. Studies indicated equivalent reliability between the results of a network meta-analysis of frequency statistics and the Bayesian approach (Carlin et al, 2013).[16] This study implemented a network meta-analysis for research and analysis according to the frequency statistics, a multivariate framework and frequency theory. Stata software (version 14.0) was used for statistical analysis and graphics plotting, applying the mvmeta network and its packages (Tian et al, 2014).[15] The outcome indicators in this study were binary classification variables. Odds ratios (ORs) were calculated with 95% confidence intervals (95% CIs). A network diagram was prepared under the 2-arm data structure to demonstrate the comparative relationships among the different interventions (Zhang et al, 2013).[17] Subsequently, a networked meta-random effect model was constructed to evaluate the model consistency, and then “if plot” command was utilized to assess the inconsistency factor value and conduct the Z test. P > .05 indicated consistency, demonstrating better consistency in direct and indirect comparisons (Zhang et al, 2014).[18] The intervention was evaluated for publication bias or small-sample effects by drawing a comparison-correction funnel plot. The surface under the cumulative ranking curve of each intervention (SUCRA) was calculated to predict the ranking of the intervention drug efficacy. The closer the SUCRA value is to 100, the better the intervention is Zeng et al, 2013.[19]", "[SUBTITLE] 3.1. Search results [SUBSECTION] Of the 79829 related studies identified, 810 retrieved records were screened after removing duplicates and the initial exclusion of invalid literature. Full-text assessment resulted in 42 eligible articles after excluding 768 articles according to this review’s inclusion and exclusion criteria, including 41 Chinese studies and 1 English study. The study selection process was performed according to PRISMA guidelines (Fig. 
1).\nDocument screening process.\nOf the 79829 related studies identified, 810 retrieved records were screened after removing duplicates and the initial exclusion of invalid literature. Full-text assessment resulted in 42 eligible articles after excluding 768 articles according to this review’s inclusion and exclusion criteria, including 41 Chinese studies and 1 English study. The study selection process was performed according to PRISMA guidelines (Fig. 1).\nDocument screening process.\n[SUBTITLE] 3.2. Study characteristics [SUBSECTION] The main characteristics of the included studies are summarized in Table 1. The studies were published between 2006 and 2019. Overall, 42 trials[12,20–60] with 4290 angina-pectoris patients were involved in the network meta-analysis, 2273 in the treatment group and 2017 in the control group. The sample sizes of the study participants ranged from 46 to 432. The mean age of the patients across trials fluctuated from 39.8 to 72.7 years, along with a 7 to 14 day treatment duration.\nCharacteristics of included studies.\n(A) Due to the different purity of safflor injection and safflor yellow injection, they should be distinguished; (B) The safflor yellow sodium chloride injection is prepared by adding sodium chloride to the safflor yellow injection for dilution. Therefore, only safflor yellow injection was included in the network meta-analysis.\nAC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe main characteristics of the included studies are summarized in Table 1. The studies were published between 2006 and 2019. Overall, 42 trials[12,20–60] with 4290 angina-pectoris patients were involved in the network meta-analysis, 2273 in the treatment group and 2017 in the control group. The sample sizes of the study participants ranged from 46 to 432. The mean age of the patients across trials fluctuated from 39.8 to 72.7 years, along with a 7 to 14 day treatment duration.\nCharacteristics of included studies.\n(A) Due to the different purity of safflor injection and safflor yellow injection, they should be distinguished; (B) The safflor yellow sodium chloride injection is prepared by adding sodium chloride to the safflor yellow injection for dilution. Therefore, only safflor yellow injection was included in the network meta-analysis.\nAC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.3. Risk of bias assessment [SUBSECTION] The Cochrane risk of bias tool was used to assess the 9 included RCTs. Among the 42 included studies, 11[12,23–25,27,30,34,36,52,56,59] specifically reported the method of random sequence generation. Allocation concealment was adequately described in only a few included studies. All outcomes of the included studies were completed without determining other sources of bias. Overall, these 42 studies showed moderate methodological quality. 
The details of the bias-risk evaluation for each study are presented in Figure 2.\nBias risk of included studies.\nThe Cochrane risk of bias tool was used to assess the 9 included RCTs. Among the 42 included studies, 11[12,23–25,27,30,34,36,52,56,59] specifically reported the method of random sequence generation. Allocation concealment was adequately described in only a few included studies. All outcomes of the included studies were completed without determining other sources of bias. Overall, these 42 studies showed moderate methodological quality. The details of the bias-risk evaluation for each study are presented in Figure 2.\nBias risk of included studies.\n[SUBTITLE] 3.4. Network meta-analysis results [SUBSECTION] [SUBTITLE] 3.4.1. Evidence network diagram. [SUBSECTION] This network meta-analysis (NMA) included 7 safflor yellow-related studies, including its monotherapy and combination with 5 other traditional Chinese medicine injections or conventional treatments for angina pectoris. As is shown in Figure 3, 2 closed loops were formed, focusing on the conventional treatment. 42 RCTs[12,20–60] for angina-pectoris treatment efficiency were estimated according to the efficacy evaluation criteria. ECG effects and hemorheological indicators were included in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.\nEvidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThis network meta-analysis (NMA) included 7 safflor yellow-related studies, including its monotherapy and combination with 5 other traditional Chinese medicine injections or conventional treatments for angina pectoris. As is shown in Figure 3, 2 closed loops were formed, focusing on the conventional treatment. 42 RCTs[12,20–60] for angina-pectoris treatment efficiency were estimated according to the efficacy evaluation criteria. ECG effects and hemorheological indicators were included in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.\nEvidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.4.2. Test for heterogeneity. [SUBSECTION] Two triangular closed loops appeared during the intervention. LOOP was used to construct the inconsistency test chart, calculate the inconsistency factor, and conduct the Z test. The Z value finalized that Loop (CT + SYI-CT + SI-CT + DSI) P = .446 and Loop (CT-CT + SYI-CT + SI) P = .584, demonstrating no inconsistency results.\nTwo triangular closed loops appeared during the intervention. LOOP was used to construct the inconsistency test chart, calculate the inconsistency factor, and conduct the Z test. The Z value finalized that Loop (CT + SYI-CT + SI-CT + DSI) P = .446 and Loop (CT-CT + SYI-CT + SI) P = .584, demonstrating no inconsistency results.\n[SUBTITLE] 3.4.3. Publication bias. [SUBSECTION] Eleven studies were included in the funnel plot for publication bias analysis. 
The funnel plot showed an asymmetric distribution of points and indicated the possibility of publication bias and minor study effects.\nEleven studies were included in the funnel plot for publication bias analysis. The funnel plot showed an asymmetric distribution of points and indicated the possibility of publication bias and minor study effects.\n[SUBTITLE] 3.4.4. Network meta-analysis of drug efficacy for angina pectoris treatment. [SUBSECTION] 42 RCTs demonstrated the clinical treatment efficacy against angina pectoris. A comparison of these results is presented in Table 2. Compared with the conventional treatment group, the CT + SI + H group showed the highest treatment efficacy (OR = 9.62, 95% CI [3.84, 24.05]), and the DSI group displayed the most modest treatment effect (OR = 0.85, 95% CI [0.16, 4.64]).\nNetwork meta-analysis results comparing the clinical effectiveness.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n42 RCTs demonstrated the clinical treatment efficacy against angina pectoris. A comparison of these results is presented in Table 2. Compared with the conventional treatment group, the CT + SI + H group showed the highest treatment efficacy (OR = 9.62, 95% CI [3.84, 24.05]), and the DSI group displayed the most modest treatment effect (OR = 0.85, 95% CI [0.16, 4.64]).\nNetwork meta-analysis results comparing the clinical effectiveness.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.4.5. SUCRA curves of treatment efficacy. [SUBSECTION] The SUCRA values from probability ranking are listed in Table 3. CT + SI + H had the highest rank probability of treatment success rate. The rank probability of the treatments based on SUCRAs is shown in Figure 4, demonstrating similarity to the ranking of the effective NMA results (Table 3).\nProbability ranking of clinical effectiveness evaluation in 13 angina-pectoris treatments.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nSUCRA curves of 13 treatment interventions. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe SUCRA values from probability ranking are listed in Table 3. CT + SI + H had the highest rank probability of treatment success rate. 
The rank probability of the treatments based on SUCRAs is shown in Figure 4, demonstrating similarity to the ranking of the effective NMA results (Table 3).\nProbability ranking of clinical effectiveness evaluation in 13 angina-pectoris treatments.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nSUCRA curves of 13 treatment interventions. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.4.1. Evidence network diagram. [SUBSECTION] This network meta-analysis (NMA) included 7 safflor yellow-related studies, including its monotherapy and combination with 5 other traditional Chinese medicine injections or conventional treatments for angina pectoris. As is shown in Figure 3, 2 closed loops were formed, focusing on the conventional treatment. 42 RCTs[12,20–60] for angina-pectoris treatment efficiency were estimated according to the efficacy evaluation criteria. ECG effects and hemorheological indicators were included in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.\nEvidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThis network meta-analysis (NMA) included 7 safflor yellow-related studies, including its monotherapy and combination with 5 other traditional Chinese medicine injections or conventional treatments for angina pectoris. As is shown in Figure 3, 2 closed loops were formed, focusing on the conventional treatment. 42 RCTs[12,20–60] for angina-pectoris treatment efficiency were estimated according to the efficacy evaluation criteria. ECG effects and hemorheological indicators were included in 40 RCTs[12,20–24,26–55,57–60] and 19 RCTs,[20,21,25–30,33–35,37,38,41,48,49,52,59,60] respectively.\nEvidence network plot. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.4.2. Test for heterogeneity. [SUBSECTION] Two triangular closed loops appeared during the intervention. LOOP was used to construct the inconsistency test chart, calculate the inconsistency factor, and conduct the Z test. The Z value finalized that Loop (CT + SYI-CT + SI-CT + DSI) P = .446 and Loop (CT-CT + SYI-CT + SI) P = .584, demonstrating no inconsistency results.\nTwo triangular closed loops appeared during the intervention. LOOP was used to construct the inconsistency test chart, calculate the inconsistency factor, and conduct the Z test. The Z value finalized that Loop (CT + SYI-CT + SI-CT + DSI) P = .446 and Loop (CT-CT + SYI-CT + SI) P = .584, demonstrating no inconsistency results.\n[SUBTITLE] 3.4.3. 
Publication bias. [SUBSECTION] Eleven studies were included in the funnel plot for publication bias analysis. The funnel plot showed an asymmetric distribution of points and indicated the possibility of publication bias and minor study effects.\nEleven studies were included in the funnel plot for publication bias analysis. The funnel plot showed an asymmetric distribution of points and indicated the possibility of publication bias and minor study effects.\n[SUBTITLE] 3.4.4. Network meta-analysis of drug efficacy for angina pectoris treatment. [SUBSECTION] 42 RCTs demonstrated the clinical treatment efficacy against angina pectoris. A comparison of these results is presented in Table 2. Compared with the conventional treatment group, the CT + SI + H group showed the highest treatment efficacy (OR = 9.62, 95% CI [3.84, 24.05]), and the DSI group displayed the most modest treatment effect (OR = 0.85, 95% CI [0.16, 4.64]).\nNetwork meta-analysis results comparing the clinical effectiveness.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n42 RCTs demonstrated the clinical treatment efficacy against angina pectoris. A comparison of these results is presented in Table 2. Compared with the conventional treatment group, the CT + SI + H group showed the highest treatment efficacy (OR = 9.62, 95% CI [3.84, 24.05]), and the DSI group displayed the most modest treatment effect (OR = 0.85, 95% CI [0.16, 4.64]).\nNetwork meta-analysis results comparing the clinical effectiveness.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 3.4.5. SUCRA curves of treatment efficacy. [SUBSECTION] The SUCRA values from probability ranking are listed in Table 3. CT + SI + H had the highest rank probability of treatment success rate. The rank probability of the treatments based on SUCRAs is shown in Figure 4, demonstrating similarity to the ranking of the effective NMA results (Table 3).\nProbability ranking of clinical effectiveness evaluation in 13 angina-pectoris treatments.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nSUCRA curves of 13 treatment interventions. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe SUCRA values from probability ranking are listed in Table 3. CT + SI + H had the highest rank probability of treatment success rate. 
3.5. Adverse reactions
Nine studies[20,21,23,39,40,45,51,53,54] including 1045 patients reported the occurrence of adverse events. Mild venous inflammation was observed in 2 patients;[45] it disappeared after needle removal and did not significantly affect treatment. In addition, 2 patients in the control group developed an allergic reaction. One study[40] reported bleeding, slightly prolonged coagulation time, and a slightly reduced platelet count after treatment in both the treatment and control groups. One study[39] reported 3 cases of acute myocardial infarction in the control group, none of which were fatal, among the circulatory-system adverse reactions. Five studies[20,21,39,53,54] reported other adverse reactions, including insomnia, nausea, dizziness, pruritus, rash, hypotension, head distension, and muscle aches. All adverse reactions resolved during continued treatment or after discontinuation. These results suggest that safflor yellow injection is effective and safe for treating angina pectoris, with few adverse reactions.

Of the 79829 records identified, 810 were screened after removal of duplicates and initial exclusion of invalid literature. Full-text assessment left 42 eligible articles (41 in Chinese and 1 in English) after 768 articles were excluded according to this review's inclusion and exclusion criteria. The study selection process followed the PRISMA guidelines (Fig. 1).
Figure 1. Document screening process.
The main characteristics of the included studies are summarized in Table 1. The studies were published between 2006 and 2019. Overall, 42 trials[12,20–60] involving 4290 patients with angina pectoris were included in the network meta-analysis, 2273 in the treatment groups and 2017 in the control groups. Sample sizes ranged from 46 to 432 participants, the mean age of patients across trials ranged from 39.8 to 72.7 years, and treatment duration ranged from 7 to 14 days.
Table 1. Characteristics of the included studies. (A) Because safflor injection and safflor yellow injection differ in purity, they are listed separately. (B) Safflor yellow sodium chloride injection is prepared by diluting safflor yellow injection with sodium chloride; therefore, only safflor yellow injection was included in the network meta-analysis. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low-molecular-weight heparin, L-C = L-carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.

The Cochrane risk of bias tool was used to assess the included RCTs. Among the 42 included studies, 11[12,23–25,27,30,34,36,52,56,59] explicitly reported the method of random sequence generation, and allocation concealment was adequately described in only a few. All prespecified outcomes were reported, and no other sources of bias were identified. Overall, the 42 studies were of moderate methodological quality. Details of the risk-of-bias assessment for each study are presented in Figure 2.
Figure 2. Bias risk of the included studies.
4.1. Research perspective
The analysis was conducted from the perspective of patients with angina pectoris.[61] Implicit (hidden) costs were not included because they are difficult to estimate and, in a retrospective analysis, indirect and hidden costs cannot be quantified reliably across regimens. Therefore, only the direct costs of the different treatment schemes were considered.

4.2. Methods
4.2.1. Decision tree model.
A decision tree model was used to analyze the costs and effects of the 13 treatment options for angina included in the network meta-analysis. Efficacy and safety inputs were obtained from the meta-analysis so that the economics of the 13 regimens could be evaluated comprehensively. The structure of the decision tree is shown in Figure 5. The model mainly assessed short-term economics, with a time horizon of 1 treatment course (14 days).
Figure 5. Decision tree model used to analyze the cost-effectiveness of the 13 treatment options for angina included in the network meta-analysis. The specific treatment protocols and their costs are shown in the figure; the time horizon is 1 treatment course (14 days). AC = atorvastatin calcium tablets; other abbreviations as in Table 3.

4.2.2. Statistical analysis.
The pharmacoeconomic evaluation used cost-effectiveness analysis to calculate incremental cost-effectiveness ratios (ICERs). Tornado diagrams were drawn for the one-way sensitivity analysis, probabilistic sensitivity analysis was carried out by Monte Carlo simulation, and cost-effectiveness acceptability curves were drawn.[62] TreeAge 2011 was used to construct the decision tree model and to perform the cost-effectiveness and sensitivity analyses.
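The ICER mentioned above is simply the extra cost of one regimen over a comparator divided by the extra effectiveness gained. A minimal sketch with made-up per-course costs and effective rates (not the study's inputs):

# Incremental cost-effectiveness ratio (ICER) between two regimens,
# with effectiveness expressed as an effective rate per treatment course.
def icer(cost_new, effect_new, cost_ref, effect_ref):
    delta_cost = cost_new - cost_ref
    delta_effect = effect_new - effect_ref
    if delta_effect == 0:
        raise ValueError("no incremental effect; the ICER is undefined")
    return delta_cost / delta_effect

# Placeholder figures: a combined regimen vs conventional treatment alone
print(icer(cost_new=900.0, effect_new=0.95, cost_ref=424.0, effect_ref=0.83))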
4.2.3. Effectiveness.
The economic evaluation used the same studies as the network meta-analysis. The effective rate of each of the 13 treatment regimens was obtained by weighting the treatment response of angina patients according to the proportion each study contributed in the forest plot of the meta-analysis. The ranking of the resulting effective rates was similar to the SUCRA score ranking in the NMA, indicating that the weighted effective rates are reasonable and can be used in the economic evaluation (Table 4).
Table 4. Clinical efficacy and ranking of the 13 angina-pectoris treatments. Abbreviations as in Table 3.
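To make the weighting step concrete, here is a minimal sketch of a weighted effective rate for one regimen; the responder counts and weights are placeholders, and in the review the weights come from the forest plot of the meta-analysis:

# Weighted effective rate of one regimen across its contributing studies.
def weighted_effective_rate(studies):
    # studies: list of (responders, sample_size, weight) tuples
    total_weight = sum(weight for _, _, weight in studies)
    return sum((responders / n) * weight
               for responders, n, weight in studies) / total_weight

# Placeholder data for three hypothetical studies of one combined regimen
print(weighted_effective_rate([(55, 60, 0.35), (88, 100, 0.45), (40, 46, 0.20)]))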
4.2.4. Cost.
Costs were estimated from the patient perspective. We assumed that indirect costs were the same for the 13 interventions, that differences in total costs were driven by direct medical costs, and that the cost of the conventional-treatment component was identical in every regimen. Although safflor yellow injection and safflor injection contain the same active components, they are produced by different manufacturers and were therefore kept separate in both the network meta-analysis and the cost calculation. A discount rate of 5% was applied to the cost data, which were uniformly adjusted to early 2020.

Drug cost
The most common drug retail prices were used in the base case, and the lowest and highest manufacturers' retail prices were used in the sensitivity analysis. To obtain the total drug cost of each of the 13 treatment schemes, the weighted drug amount was calculated by multiplying the doses of the drugs or injections reported in the included studies by the weights obtained from the meta-analysis, and the drug costs of each scheme were then combined. The costs of the 10 drugs are shown in Table 5, and the weighted dosages of the 13 treatment regimens are listed in Table 6.
Table 5. Cost price and maximum/minimum value of the 10 drugs. (A) All data come from the 315 medicine price inquiry network (https://www.315jiage.cn). (B) Maximum, minimum and base-case costs refer to the exact drug specifications.
Table 6. Weighted dose and cost of the 13 treatment options. Abbreviations as in Table 3.
The cost of 1 course of conventional treatment comprised aspirin enteric-coated tablets (¥13.09), propranolol tablets (¥13.72), nitroglycerin tablets (¥1.68) and nifedipine sustained-release tablets (¥17.08), giving a total of ¥45.57;[63] after discounting, the cost of 1 course of conventional treatment was ¥50.24.

Other costs
Injection costs mainly comprise the cost of materials, such as the disposable infusion tubes and syringes used for intravenous administration, and the fee for the injection itself. The latest medical service fees published by the Beijing Medical Insurance Bureau are ¥5.5 for an intravenous injection and ¥7 for an intravenous infusion.[64] The material fee for an intravenous infusion was ¥8.00 and that for an intravenous injection was ¥2.40.[65] After discounting, the average value is ¥14.3 per day. Examination costs over the whole course of treatment include routine blood, urine and stool tests, liver and kidney function tests, and an electrocardiogram before treatment. The costs of laboratory tests and electrocardiograms were taken from Jianwei Xuan et al,[66] and the average prices of medical services were obtained from the website of the local Health Commission, giving a discounted examination cost of ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and reported an average hospitalization cost of ¥26745.12 for angina patients, and Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and reported an average hospitalization cost of ¥26482.41 for patients with unstable angina; after discounting, the average hospitalization cost is ¥34811.63.
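As a rough illustration of how a per-course drug cost is assembled from a weighted daily dose, a unit price and the 5% adjustment to the 2020 base year, a minimal sketch with placeholder numbers (not the values in Tables 5 and 6):

# Per-course drug cost from a weighted daily dose and unit price, adjusted
# to the 2020 base year at the 5% annual rate used in the cost analysis.
def to_base_year(cost, years_before_2020, rate=0.05):
    return cost * (1 + rate) ** years_before_2020

def course_drug_cost(daily_dose_mg, price_per_vial, mg_per_vial, days=14):
    vials_per_day = daily_dose_mg / mg_per_vial
    return vials_per_day * price_per_vial * days

# Placeholder figures for one injection: 100 mg/day at ¥28.5 per 50 mg vial
raw_cost = course_drug_cost(daily_dose_mg=100, price_per_vial=28.5, mg_per_vial=50)
print(round(to_base_year(raw_cost, years_before_2020=2), 2))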
4.3. Results
4.3.1. Base-case results.
Treatment regimens with effective rates above 90% (including conventional treatment) were included in the economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.
Table 7. Base-case analysis results. ICER = incremental cost-effectiveness ratio; other abbreviations as in Table 3.
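To illustrate how a base-case table of this kind is assembled, a minimal sketch that orders regimens by cost, drops strongly dominated options and reports ICERs between the remaining ones; the costs and effective rates are placeholders, not the figures in Table 7:

# Base-case incremental analysis: sort regimens by cost, drop options that
# are strongly dominated (more costly and no more effective), then report
# ICERs between successive non-dominated regimens.
def incremental_analysis(options):
    ranked = sorted(options.items(), key=lambda item: item[1][0])   # by cost
    frontier = []
    for name, (cost, effect) in ranked:
        if frontier and effect <= frontier[-1][2]:
            continue                                                # dominated
        frontier.append((name, cost, effect))
    return [(cur[0], "vs " + prev[0],
             round((cur[1] - prev[1]) / (cur[2] - prev[2]), 1))
            for prev, cur in zip(frontier, frontier[1:])]

# Placeholder per-course costs (¥) and effective rates
print(incremental_analysis({"CT": (424.0, 0.83),
                            "CT+SI": (610.0, 0.95),
                            "CT+SI+H": (1290.0, 0.96)}))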
4.3.2. Two-way sensitivity analysis.
The effective rates of the 9 retained treatment regimens were assumed to vary by 5%, costs were varied between the lowest and the highest manufacturers' retail prices, and the WTP threshold was set to the per-capita GDP in 2018. As shown in Figure 6, the parameter with the greatest impact on the results was the effective rate of the CT + SI group.
Figure 6. Sensitivity analysis of costs and effective rates. eCT = effective rate of conventional treatment; eCT_SI = effective rate of conventional treatment plus safflor injection; eCT_SI_H = effective rate of conventional treatment plus safflor injection plus low-molecular-weight heparin; eCT_SI_DS = effective rate of conventional treatment plus safflor injection plus compound Danshen drip pill; eCT_SYI = effective rate of conventional treatment plus safflor yellow injection; eCT_SYI_LC = effective rate of conventional treatment plus safflor yellow injection plus L-carnitine injection; eCT_SYI_CD = effective rate of conventional treatment plus safflor yellow injection plus carvedilol; eCT_SYI_AC = effective rate of conventional treatment plus safflor yellow injection plus atorvastatin calcium tablets; eCT_SYI_SMI = effective rate of conventional treatment plus safflor yellow injection plus Shenmai injection; cCT_SI = cost of conventional treatment plus safflor injection; cCT_SI_H = cost of conventional treatment plus safflor injection plus low-molecular-weight heparin; cCT_SYI_SMI = cost of conventional treatment plus safflor yellow injection plus Shenmai injection.
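A simplified one-parameter version of this sensitivity analysis can be sketched by varying the effective rate of CT + SI by 5% in each direction and checking, via net monetary benefit, whether it remains preferred over CT; all figures, including the WTP threshold, are placeholders rather than the study's values:

# One-way sensitivity sketch using net monetary benefit (NMB = WTP x effect - cost).
def nmb(cost, effect, wtp):
    return wtp * effect - cost

WTP = 60000.0              # placeholder WTP threshold (the study uses 2018 per-capita GDP)
CT = (424.0, 0.83)         # placeholder (cost, effective rate) for conventional treatment
CT_SI_COST = 610.0         # placeholder per-course cost of CT + SI

for eff in (0.95 * 0.95, 0.95, 0.95 * 1.05):
    preferred = "CT+SI" if nmb(CT_SI_COST, eff, WTP) > nmb(*CT, WTP) else "CT"
    print(f"CT+SI effective rate {eff:.3f} -> preferred regimen: {preferred}")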
4.3.3. Probabilistic sensitivity analysis (PSA).
The results of the PSA, based on 1000 Monte Carlo simulations, are shown in the cost-effectiveness acceptability curve and scatter plot (Figs. 7 and 8). Effectiveness was assumed to follow a beta distribution and cost a triangular distribution, and the patient WTP was varied from 0 to ¥198018. The probability that CT + SI was cost-effective increased with the WTP threshold and exceeded that of CT once the WTP reached ¥19801.8. When the WTP exceeded ¥39603.6, the probability that CT + SI was the more economical option decreased, but it remained above 50%. The results of the PSA were consistent with the base-case results.
Figure 7. Cost-effectiveness acceptability curve. Abbreviations as in Table 3.
Figure 8. Cost-effectiveness of the screening options. Abbreviations as in Table 3.
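The PSA draws effectiveness from beta distributions and costs from triangular distributions and then counts how often each regimen has the highest net monetary benefit at a given WTP. A minimal two-arm sketch with placeholder parameters (not the study's inputs):

# Tiny probabilistic sensitivity analysis: 1000 Monte Carlo draws with
# effectiveness ~ Beta and cost ~ Triangular, reporting the probability
# that CT + SI is cost-effective versus CT at one WTP threshold.
import random

random.seed(0)
WTP = 60000.0                                        # placeholder WTP threshold

def one_draw():
    eff_ct = random.betavariate(83, 17)              # effective rate ~ 0.83
    eff_ct_si = random.betavariate(95, 5)            # effective rate ~ 0.95
    cost_ct = random.triangular(380, 480, 424)       # (low, high, mode), placeholder ¥
    cost_ct_si = random.triangular(520, 760, 610)
    return WTP * eff_ct - cost_ct, WTP * eff_ct_si - cost_ct_si

wins = sum(nmb_si > nmb_ct for nmb_ct, nmb_si in (one_draw() for _ in range(1000)))
print(f"P(CT+SI cost-effective at WTP ¥{WTP:.0f}) = {wins / 1000:.2f}")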
When calculating the total drug cost of the 13 treatment schemes, the weighted drug amount was calculated by multiplying the cost of various drugs or injections in the included literature by the weight obtained from the meta-analysis and the drug cost of the treatment scheme was unified. The costs of the 10 drugs are shown in Table 5. The weighted dosages of the 13 treatment regimens are listed in Table 6.\nCost price and maximum/minimum value of 10 drugs.\n(A) All data come from 315 medicine price inquiry net https://www.315jiage.cn. (B) Maximum, minimum and cost of the exact drug specifications.\nWeighted dose and cost of 13 treatment options.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe cost of 1 course of treatment, including Aspirin enteric-coated tablets, Propranolol tablets, Nitroglycerin tablets and Nifedipine sustained-release tablets for conventional treatment, was ¥13.09, ¥13.72, ¥1.68, and ¥17.08, respectively, and the total cost of 1 course of conventional treatment was ¥45.57.[63] The discounted cost of conventional treatment was ¥50.24.\nOther costs\nThe cost of injection mainly includes the cost of materials, such as disposable infusion tubes and syringes used for intravenous injection, and the cost of intravenous injection. The latest medical service fees published by the Beijing Medical Insurance Bureau include ¥5.5 for intravenous injection and ¥7 for intravenous infusion.[64] The total intravenous infusion material fee was ¥8.00 and the total amount of intravenous infusion material fee was ¥2.40.[65] After discounting, the average value is ¥14.3/day. In addition, the cost of examination items during the entire course of treatment for patients with angina includes the cost of blood, urine, stool routine, liver and kidney function, and electrocardiogram before treatment. The cost of laboratory tests and electrocardiograms was obtained from Jianwei Xuan et al[66] The average price of medical services was obtained from the website of the local Health Commission. Discount calculation results for inspection cost of ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and found that the average hospitalization cost for angina patients was ¥26745.12. Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and found that the average hospitalization cost of patients with unstable angina was ¥26482.41. The average cost of hospitalization calculated after discount is ¥34811.63.\nCost estimation was based on the patient perspective. We assumed that the direct and indirect costs of the 13 interventions were the same, that direct medical costs caused the differences in total costs, and that the cost of the conventional treatment was identical for each treatment regimen. In addition, this study’s effective components of safflower yellow injection and safflower injection are consistent. However, they were produced by different manufacturers, were differentiated in the network meta-analysis and discriminated in the cost calculation. We adopted a discount rate of 5% for the cost data and discounted uniformly until early 2020.\nDrug cost\nWe utilized the most common drug retail prices and the lowest to the highest manufacturers’ retail prices for the sensitivity analysis. 
When calculating the total drug cost of the 13 treatment schemes, the weighted drug amount was calculated by multiplying the cost of various drugs or injections in the included literature by the weight obtained from the meta-analysis and the drug cost of the treatment scheme was unified. The costs of the 10 drugs are shown in Table 5. The weighted dosages of the 13 treatment regimens are listed in Table 6.\nCost price and maximum/minimum value of 10 drugs.\n(A) All data come from 315 medicine price inquiry net https://www.315jiage.cn. (B) Maximum, minimum and cost of the exact drug specifications.\nWeighted dose and cost of 13 treatment options.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe cost of 1 course of treatment, including Aspirin enteric-coated tablets, Propranolol tablets, Nitroglycerin tablets and Nifedipine sustained-release tablets for conventional treatment, was ¥13.09, ¥13.72, ¥1.68, and ¥17.08, respectively, and the total cost of 1 course of conventional treatment was ¥45.57.[63] The discounted cost of conventional treatment was ¥50.24.\nOther costs\nThe cost of injection mainly includes the cost of materials, such as disposable infusion tubes and syringes used for intravenous injection, and the cost of intravenous injection. The latest medical service fees published by the Beijing Medical Insurance Bureau include ¥5.5 for intravenous injection and ¥7 for intravenous infusion.[64] The total intravenous infusion material fee was ¥8.00 and the total amount of intravenous infusion material fee was ¥2.40.[65] After discounting, the average value is ¥14.3/day. In addition, the cost of examination items during the entire course of treatment for patients with angina includes the cost of blood, urine, stool routine, liver and kidney function, and electrocardiogram before treatment. The cost of laboratory tests and electrocardiograms was obtained from Jianwei Xuan et al[66] The average price of medical services was obtained from the website of the local Health Commission. Discount calculation results for inspection cost of ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and found that the average hospitalization cost for angina patients was ¥26745.12. Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and found that the average hospitalization cost of patients with unstable angina was ¥26482.41. The average cost of hospitalization calculated after discount is ¥34811.63.", "This study used a decision tree model to analyze the cost and effect of 13 treatment options for angina included in the network meta-analysis. Efficacy and safety indicators were obtained using meta-analysis to comprehensively evaluate the economics of 13 treatment regimens. The structure of the decision-tree model is shown in Figure 5. The model primarily assessed the short-term economy, and the time horizon for this analysis was 1 treatment course (14 days).\nDecision tree model. The decision tree model was used to analyze the cost-effectiveness of 13 treatment options for angina included in the network meta-analysis. The specific treatment protocols and their cost are shown in this figure. The time horizon for this analysis is 1 treatment course (14 days). 
AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "In pharmacoeconomic evaluation, cost-effectiveness analysis calculated the incremental cost-effectiveness ratio. Cyclone plots were drawn by single factor sensitivity analysis, probability sensitivity analysis was carried out by Monte Carlo simulation, and acceptable cost effect curves were drawn.[62] TreeAge 2011 was used to construct a decision tree model for cost-effectiveness and sensitivity analyses.", "The studies included in the economic evaluation were similar to the network meta-analysis. We obtained the effective rates of 13 treatment regimens according to the proportion of each study shown in the forest map in the meta-analysis and weighting the treatment efficiency of angina patients. The results showed that the efficiency ranking and the score ranking of SUCRA in the NMA were similar, indicating that the efficiency from the weighted calculation was reasonable and could be included in the economic-evaluation calculation (Table 4).\nClinical efficacy and ranking of 13 angina-pectoris treatments.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "Cost estimation was based on the patient perspective. We assumed that the direct and indirect costs of the 13 interventions were the same, that direct medical costs caused the differences in total costs, and that the cost of the conventional treatment was identical for each treatment regimen. In addition, this study’s effective components of safflower yellow injection and safflower injection are consistent. However, they were produced by different manufacturers, were differentiated in the network meta-analysis and discriminated in the cost calculation. We adopted a discount rate of 5% for the cost data and discounted uniformly until early 2020.\nDrug cost\nWe utilized the most common drug retail prices and the lowest to the highest manufacturers’ retail prices for the sensitivity analysis. When calculating the total drug cost of the 13 treatment schemes, the weighted drug amount was calculated by multiplying the cost of various drugs or injections in the included literature by the weight obtained from the meta-analysis and the drug cost of the treatment scheme was unified. The costs of the 10 drugs are shown in Table 5. The weighted dosages of the 13 treatment regimens are listed in Table 6.\nCost price and maximum/minimum value of 10 drugs.\n(A) All data come from 315 medicine price inquiry net https://www.315jiage.cn. 
(B) Maximum, minimum and cost of the exact drug specifications.\nWeighted dose and cost of 13 treatment options.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe cost of 1 course of treatment, including Aspirin enteric-coated tablets, Propranolol tablets, Nitroglycerin tablets and Nifedipine sustained-release tablets for conventional treatment, was ¥13.09, ¥13.72, ¥1.68, and ¥17.08, respectively, and the total cost of 1 course of conventional treatment was ¥45.57.[63] The discounted cost of conventional treatment was ¥50.24.\nOther costs\nThe cost of injection mainly includes the cost of materials, such as disposable infusion tubes and syringes used for intravenous injection, and the cost of intravenous injection. The latest medical service fees published by the Beijing Medical Insurance Bureau include ¥5.5 for intravenous injection and ¥7 for intravenous infusion.[64] The total intravenous infusion material fee was ¥8.00 and the total amount of intravenous infusion material fee was ¥2.40.[65] After discounting, the average value is ¥14.3/day. In addition, the cost of examination items during the entire course of treatment for patients with angina includes the cost of blood, urine, stool routine, liver and kidney function, and electrocardiogram before treatment. The cost of laboratory tests and electrocardiograms was obtained from Jianwei Xuan et al[66] The average price of medical services was obtained from the website of the local Health Commission. Discount calculation results for inspection cost of ¥373.6. Zhang et al[67] summarized the costs of diagnosing and treating CHD in 26 sample hospitals from 2014 to 2016 and found that the average hospitalization cost for angina patients was ¥26745.12. Zhao et al[12] studied CHD in 237 tertiary hospitals in Beijing in 2014 and found that the average hospitalization cost of patients with unstable angina was ¥26482.41. The average cost of hospitalization calculated after discount is ¥34811.63.", "[SUBTITLE] 4.3.1. Base-case results. [SUBSECTION] We selected studies with effective rates of more than 90% (including conventional treatment) for economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.\nBase-case analysis results.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, ICER = incremental cost-effectiveness ratio, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nWe selected studies with effective rates of more than 90% (including conventional treatment) for economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.\nBase-case analysis results.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, ICER = incremental cost-effectiveness ratio, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\n[SUBTITLE] 4.3.2. Two-way sensitivity analysis. [SUBSECTION] It was assumed that the efficacy rate of the 9 treatment regimens fluctuated by 5%. 
The cost was analyzed sensitively according to the highest and lowest manufacturer retail price, assuming that WTP was GDP per capita in 2018. As shown in Figure 6, the parameter with the most significant impact on the results was the treatment efficiency of the CT + SI group.\nSensitivity analysis on cost and effective rate. eCT_SI = effective rate of combined therapy of conventional treatment and Safflor injection, eCT_SI_H = effective rate of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SI_DS = effective rate of combined therapy of conventional treatment and Safflor injection and compound Danshen drip pill, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection, eCT_SYI_CD = effective rate of combined therapy of conventional treatment and safflor yellow injection and carvedilol, cCT_SI = cost of combined therapy of conventional treatment and Safflor injection, eCT_SYI = effective rate of combined therapy of conventional treatment and safflor yellow injection, eCT_SYI_AC = effective rate of combined therapy of conventional treatment and safflor yellow injection and atorvastatin calcium tablets, cCT_SI_H = cost of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SYI_SMI = effective rate of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT = effective rate of conventional treatment, cCT_SYI_SMI = cost of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection.\nIt was assumed that the efficacy rate of the 9 treatment regimens fluctuated by 5%. The cost was analyzed sensitively according to the highest and lowest manufacturer retail price, assuming that WTP was GDP per capita in 2018. As shown in Figure 6, the parameter with the most significant impact on the results was the treatment efficiency of the CT + SI group.\nSensitivity analysis on cost and effective rate. 
eCT_SI = effective rate of combined therapy of conventional treatment and Safflor injection, eCT_SI_H = effective rate of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SI_DS = effective rate of combined therapy of conventional treatment and Safflor injection and compound Danshen drip pill, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection, eCT_SYI_CD = effective rate of combined therapy of conventional treatment and safflor yellow injection and carvedilol, cCT_SI = cost of combined therapy of conventional treatment and Safflor injection, eCT_SYI = effective rate of combined therapy of conventional treatment and safflor yellow injection, eCT_SYI_AC = effective rate of combined therapy of conventional treatment and safflor yellow injection and atorvastatin calcium tablets, cCT_SI_H = cost of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SYI_SMI = effective rate of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT = effective rate of conventional treatment, cCT_SYI_SMI = cost of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection.\n[SUBTITLE] 4.3.3. Probabilistic sensitivity analysis (PSA). [SUBSECTION] The results of the PSA based on 1000 Monte Carlo simulations are presented in the cost-effectiveness scatter plot below (Fig. 7). The efficiency and the cost were presumed to be a beta distribution and a triangular distribution, respectively. The patient’s WTP changed from 0 to ¥198018. The acceptable cost effect curve is shown in Figure 7. The probability of cost-effectiveness of CT + SI gradually increased with the WTP threshold and exceeded CT when the WTP reached ¥19801.8. When the WTP is higher than ¥39603.6, the CT + SI probability representing a more economical scheme was reduced; however, it was still greater than 50%. The results of the PSA were consistent with the base-case results (Fig. 8).\nCost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nCost-effectiveness of screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nThe results of the PSA based on 1000 Monte Carlo simulations are presented in the cost-effectiveness scatter plot below (Fig. 7). The efficiency and the cost were presumed to be a beta distribution and a triangular distribution, respectively. The patient’s WTP changed from 0 to ¥198018. The acceptable cost effect curve is shown in Figure 7. The probability of cost-effectiveness of CT + SI gradually increased with the WTP threshold and exceeded CT when the WTP reached ¥19801.8. 
When the WTP is higher than ¥39603.6, the CT + SI probability representing a more economical scheme was reduced; however, it was still greater than 50%. The results of the PSA were consistent with the base-case results (Fig. 8).\nCost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nCost-effectiveness of screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "We selected studies with effective rates of more than 90% (including conventional treatment) for economic evaluation. As shown in Table 7, CT + SI was the most cost-effective treatment.\nBase-case analysis results.\nCD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, ICER = incremental cost-effectiveness ratio, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "It was assumed that the efficacy rate of the 9 treatment regimens fluctuated by 5%. The cost was analyzed sensitively according to the highest and lowest manufacturer retail price, assuming that WTP was GDP per capita in 2018. As shown in Figure 6, the parameter with the most significant impact on the results was the treatment efficiency of the CT + SI group.\nSensitivity analysis on cost and effective rate. eCT_SI = effective rate of combined therapy of conventional treatment and Safflor injection, eCT_SI_H = effective rate of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SI_DS = effective rate of combined therapy of conventional treatment and Safflor injection and compound Danshen drip pill, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection, eCT_SYI_CD = effective rate of combined therapy of conventional treatment and safflor yellow injection and carvedilol, cCT_SI = cost of combined therapy of conventional treatment and Safflor injection, eCT_SYI = effective rate of combined therapy of conventional treatment and safflor yellow injection, eCT_SYI_AC = effective rate of combined therapy of conventional treatment and safflor yellow injection and atorvastatin calcium tablets, cCT_SI_H = cost of combined therapy of conventional treatment and Safflor injection and low molecular heparin, eCT_SYI_SMI = effective rate of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT = effective rate of conventional treatment, cCT_SYI_SMI = cost of combined therapy of conventional treatment and safflor yellow injection and Shenmai injection, eCT_SYI_LC = effective rate of combined therapy of conventional treatment and safflor yellow injection and L-Carnitine injection.", "The results of the PSA based on 1000 Monte Carlo simulations are presented in the cost-effectiveness scatter plot below (Fig. 7). 
The efficiency and the cost were presumed to be a beta distribution and a triangular distribution, respectively. The patient’s WTP changed from 0 to ¥198018. The acceptable cost effect curve is shown in Figure 7. The probability of cost-effectiveness of CT + SI gradually increased with the WTP threshold and exceeded CT when the WTP reached ¥19801.8. When the WTP is higher than ¥39603.6, the CT + SI probability representing a more economical scheme was reduced; however, it was still greater than 50%. The results of the PSA were consistent with the base-case results (Fig. 8).\nCost-effectiveness acceptability curve. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.\nCost-effectiveness of screening options. AC = atorvastatin calcium tablets, CD = carvedilol, CT = conventional treatment, DS = compound Danshen drip pill, DSI = (compound) Danshen injection, H = low molecular heparin, L-C = L-Carnitine injection, SI = safflor injection, SMI = Shenmai injection, SYI = safflor yellow injection, XDI = Xiangdan injection.", "The clinical outcome of angina is influenced by many factors, such as patient age, surgical operation, complications, and drug type. Specifically, angina pectoris is more likely to be cured with medications than other diseases, such as myocardial infarction, with higher mortality in CHD. Currently, the leading therapeutic method used in China is pharmacotherapy. Meanwhile, the primary indication of safflower is angina, consistently demonstrating excellent treatment efficacy. Therefore, it is significant to study the efficacy and cost-effectiveness of safflower-related treatment regimens for clinical guidance.\nNetwork meta-analysis was used to indirectly evaluate the efficiency of 13 treatments for angina patients. The bayesian method was utilized to assess the cost-effectiveness of 9 treatments indirectly. Compared with conventional treatment regimens, the treatment combined with safflower indicated improved effects, and the combination with Danshen-dropping pills demonstrated the most effective treatment potency. Moreover, the addition of other drugs, such as low molecular heparin, carvedilol, and l-carnitine injection, to the combination allowed higher efficacy and cost-effectiveness due to improving curative effect and reducing dosage and drug cost compared with the conventional treatment and routine treatment combined with safflower flavin.\nThe study limitations are as follows: The recovery cost from angina not mentioned in the included studies was not reflected. This might affect the evaluation when calculating the cost of a 1-course -treatment (14 days). The final Cochrane score of the included studies was low, resulting in insufficient information to judge the study quality, such as randomization, allocation, concealment, and blinding. Frequency-based meta-analysis was used for indirect comparisons; therefore, the efficiency ranking may be biased. However, certain studies indicated that the frequency-based and Bayesian network meta-analyses were comparable. The included studies were all published but lacked gray documents, such as special reports and unpublished materials. The studies lacked long-term follow-up monitoring in terms of safety, poorly assessing the long-term risk of safflower. 
More high-quality clinical data are required to confirm our findings.", "This study used various analytical methods to conduct a multilevel analysis of 13 treatment regimens related to safflower against angina from evidence-based medicine and economic evaluation. From the perspective of evidence-based medicine, the CT + SI + H group had the best treatment efficacy. The CT + SI group was the most cost-effective, combined with the cost data. Yet, CT + SI + DS was recommended as the best treatment choice due to the advantages of efficiency and cost-effectiveness. Sensitivity analysis showed that the model was sensitive to the treatment effectiveness instead of the drug cost. Therefore, we recommend a combination of conventional treatment and safflower injection to treat angina pectoris. Of note, adding low molecular weight heparin or compound Danshen-dropping pills to the combination could improve efficacy and cost-effectiveness. Indeed, more clinical trials are needed to support our conclusions due to the limited data.", "The authors thank Dr He and Dr Du for linguistic assistance.", "Conceptualization: Yongfa Chen, Liang Lu.\nData curation: Qiuchen Jin.\nFormal analysis: Liang Lu.\nMethodology: Liang Lu, Yang Li.\nSoftware: Yang Li.\nSupervision: Yongfa Chen.\nWriting – original draft: Liang Lu.\nWriting – review & editing: Liang Lu." ]
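The base-case ICER comparison and the probabilistic sensitivity analysis described above (1,000 Monte Carlo iterations, beta-distributed effective rates, triangular-distributed costs, and a cost-effectiveness acceptability curve over willingness-to-pay thresholds) were built in TreeAge in the original study. The Python sketch below only illustrates those mechanics: the cost and effectiveness point estimates, the ±20% cost spread, and the beta pseudo-sample size are illustrative assumptions rather than the study's actual inputs, and only two arms (CT and CT + SI) are shown for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SIM = 1000  # 1000 Monte Carlo iterations, as described in the PSA

# Illustrative point estimates only -- NOT the study's actual inputs.
# cost = total cost of one 14-day course (CNY); eff = effective rate.
arms = {"CT": {"cost": 500.0, "eff": 0.70},
        "CT+SI": {"cost": 800.0, "eff": 0.90}}

# Base-case ICER of CT + SI versus CT: extra cost per additional effective case.
d_cost = arms["CT+SI"]["cost"] - arms["CT"]["cost"]
d_eff = arms["CT+SI"]["eff"] - arms["CT"]["eff"]
print(f"Base-case ICER: {d_cost / d_eff:.1f} CNY per additional effective case")

# PSA draws: triangular distribution for cost (±20% spread assumed here),
# beta distribution for the effective rate (pseudo-sample size of 100 assumed).
def draw(arm):
    cost = rng.triangular(0.8 * arm["cost"], arm["cost"], 1.2 * arm["cost"], N_SIM)
    eff = rng.beta(arm["eff"] * 100, (1 - arm["eff"]) * 100, N_SIM)
    return cost, eff

samples = {name: draw(arm) for name, arm in arms.items()}

# Acceptability: P(arm has the highest net monetary benefit) at the two
# willingness-to-pay thresholds discussed in the text.
for wtp in (19801.8, 39603.6):
    nmb = np.vstack([wtp * e - c for c, e in samples.values()])
    best = np.argmax(nmb, axis=0)  # row index of the best arm per simulation
    p_ct_si = np.mean(best == list(arms).index("CT+SI"))
    print(f"WTP = {wtp:>8.1f} CNY: P(CT + SI most cost-effective) = {p_ct_si:.2f}")
```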
[ "intro", "methods", null, null, null, null, null, null, null, null, null, "methods", "results", "results", null, null, "results", null, null, null, null, null, null, null, null, "methods", null, null, null, null, "results", null, null, null, "discussion", null, null, null ]
[ "angina pectoris", "cost-effectiveness analysis", "network meta-analysis", "safflor yellow" ]
Treatment for overactive bladder: A meta-analysis of tibial versus parasacral neuromodulation.
36253991
The study aimed to assess the efficacy and safety of parasacral neuromodulation (PNS) versus tibial nerve stimulation (TNS) for patients with overactive bladder (OAB).
BACKGROUND
Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022. The improvements in a 3-day voiding diary were set as the primary outcomes. Then, the scores of overactive bladder-validated 8-question awareness tool (OAB-V8), King's health questionnaire (KHQ), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) were also evaluated.
METHODS
Five articles (4 randomized controlled trials [RCTs] and 1 prospective study) including 255 OAB patients were enrolled. The 2 kinds of neuromodulation had similar performance in micturition episodes (mean difference [MD] = 0.26, 95% confidence interval [CI]: -0.51 to 1.04, P = .50), urgency episodes (MD = -0.16, 95% CI: -0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: -0.41 to 0.59, P = .72), and nocturia episodes (MD = 0.04, 95% CI: -0.45 to 0.52, P = .89). Furthermore, there was no difference regarding ICIQ-OAB scores (P = .83), KHQ scores (P = .91), or OAB-V8 scores (P = .83). Importantly, the included studies reported no adverse events in either group.
RESULTS
TNS and PNS had similar effectiveness for the treatment of OAB, with no identified adverse events in either group. However, well-designed RCTs are still needed to verify our results.
CONCLUSION
[ "Humans", "Naphthalenesulfonates", "Randomized Controlled Trials as Topic", "Tibial Nerve", "Treatment Outcome", "Urinary Bladder, Overactive", "Urinary Incontinence", "Urination" ]
9575790
1. Introduction
Patients with overactive bladder syndrome (OAB) usually present with urinary urgency and increased daytime micturition and/or nocturia, with or without urinary incontinence.[1] Based on a real-world study, the prevalence of OAB is about 17% and increases gradually with age.[2] OAB is not life-threatening, but it alters lifestyle and may have a significant impact on the quality of life, sexual function, and social interaction of patients.[3–6] After failed conservative and behavioral therapies, antimuscarinics such as solifenacin are the mainstream treatments for OAB, but high costs and side effects reduce patient adherence to these drugs and ultimately limit their benefits to a wider range of patients with OAB.[2,7,8] Under these circumstances, patients tend to choose a more cost-effective and less invasive approach.\nBy modulating the sacral plexus and inhibiting detrusor muscle hyperactivity, physiotherapeutic treatment through electrostimulation of the tibial[9–11] and parasacral nerves[11–13] has proven an effective alternative for treating OAB symptoms in adult and pediatric patients. Both neuromodulation techniques target the S2 to S4 nerves, which correspond to the main parasympathetic innervation of the bladder, but the electrode locations differ. For tibial nerve stimulation (TNS), electrodes are usually placed near the medial malleolus, over the path of the tibial nerve, whose fibres arise from S2 to S4.[14] For parasacral nerve stimulation (PNS), electrodes are positioned over the parasacral region at the level of S3.[11] Although TNS and PNS have both been proven effective and well accepted,[11,13,14] it is unclear which offers the better effectiveness and safety profile for patients with OAB. Therefore, we conducted a meta-analysis to compare the safety and effectiveness of TNS versus PNS in treating patients with OAB and to provide a clinical reference for urologists when making treatment decisions.
2. Materials and Methods
[SUBTITLE] 2.1. Study selection [SUBSECTION] The ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). Last, reference list of related studies was carefully checked for eligible articles. The ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). Last, reference list of related studies was carefully checked for eligible articles. [SUBTITLE] 2.2. Inclusion criteria and exclusion criteria [SUBSECTION] Study selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded. Study selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded. [SUBTITLE] 2.3. Study selection and data extraction [SUBSECTION] First, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. If there was a disagreement, a discussion was performed in our team to draw a conclusion. First, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. 
If there was a disagreement, a discussion was performed within our team to reach a conclusion.\n[SUBTITLE] 2.4. Outcomes [SUBSECTION] The number of micturition episodes in 24 hours, the numbers of urgency episodes and incontinence episodes in 24 hours, and the average number of nocturia episodes were the primary outcomes used to assess the effectiveness of PNS versus TNS. Then, overactive bladder-validated 8-question awareness tool (OAB-V8), King’s health questionnaire (KHQ) (incontinence impact domain), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) scores were compared to evaluate patients’ quality of life and OAB symptom improvements. Last, complications such as bleeding, discomfort, and pain over the treated area were recorded.\n[SUBTITLE] 2.5. Quality assessment [SUBSECTION] The quality of RCTs and prospective studies was evaluated in line with the Cochrane handbook guideline.[16] The domains of risk of bias were judged as low, high, or unclear risk according to the scores.\n[SUBTITLE] 2.6. Data analysis [SUBSECTION] All statistical analyses were performed using Review Manager 5.3 (Cochrane Collaboration, Oxford, UK). All included data were continuous parameters, and their mean differences (MDs) with 95% confidence intervals (CIs) were calculated. The I² test was performed to identify heterogeneity among studies. If I² was <50%, the fixed-effect model was used; otherwise, the random-effects model was used. A funnel plot was constructed to assess the publication bias of the primary outcomes.
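The pooling rule just described (inverse-variance mean differences, with the I² statistic deciding between the fixed-effect and random-effects models) was applied in Review Manager 5.3. As a rough, self-contained illustration of that rule, the sketch below uses made-up per-study summary statistics rather than the actual trial data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study summaries (mean, SD, n) for TNS and PNS -- illustrative only.
studies = [
    # (mean_tns, sd_tns, n_tns, mean_pns, sd_pns, n_pns)
    (7.9, 2.1, 25, 7.6, 2.3, 25),
    (8.4, 2.5, 30, 8.3, 2.4, 28),
    (7.1, 1.9, 20, 7.2, 2.0, 22),
]

md, var = [], []
for m1, s1, n1, m2, s2, n2 in studies:
    md.append(m1 - m2)                      # mean difference per study
    var.append(s1**2 / n1 + s2**2 / n2)     # variance of that mean difference
md, var = np.array(md), np.array(var)

# Fixed-effect (inverse-variance) pooling.
w = 1 / var
md_fixed = np.sum(w * md) / np.sum(w)
se_fixed = np.sqrt(1 / np.sum(w))

# Cochran's Q and the I² heterogeneity statistic.
Q = np.sum(w * (md - md_fixed) ** 2)
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

if I2 < 50:
    pooled, se = md_fixed, se_fixed         # fixed-effect model, as in the paper
else:
    # DerSimonian-Laird between-study variance for the random-effects model.
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * md) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))

ci = (pooled - 1.96 * se, pooled + 1.96 * se)
p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
print(f"I² = {I2:.0f}%  MD = {pooled:.2f}  95% CI {ci[0]:.2f} to {ci[1]:.2f}  P = {p:.2f}")
```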
3. Results
[SUBTITLE] 3.1. Study characteristics [SUBSECTION] Following the study selection process in Figure 1, a total of 374 articles were initially identified, and 350 records were excluded according to the established inclusion and exclusion criteria. Subsequently, 8 articles were selected for qualitative synthesis after removal of noncomparative articles.[17–24] However, 2 study protocols[22,23] and 1 comment[24] with inadequate data were excluded. Eventually, a total of 5 trials (4 RCTs and 1 prospective study)[17–21] with 255 patients were enrolled. The characteristics of each included study are presented in Table 1. Notably, the study by Barroso et al[17] compared the efficacy and safety of the 2 kinds of neuromodulation in children, whereas the other 4 studies focused on adults. Table 2 shows the treatment protocol in each study; it should be noted that the treatment protocols were not consistent across studies. Importantly, the quality assessment showed a low risk of bias in all studies (Fig. S1, Supplemental Digital Content http://links.lww.com/MD/H628).\nThe baseline information of included studies.\nP = prospective, RCT = randomized controlled trial.\nThe treatment protocol in each included study.\nPRISMA flowchart of study selection.\n[SUBTITLE] 3.2. Evaluation of effectiveness and safety profile [SUBSECTION] The number of micturition episodes in 24 hours was provided in 4 studies,[18–21] and the number of urgency episodes was presented in 3 studies.[18,20,21] Only 2 studies had data regarding the number of incontinence episodes in 24 hours and nocturia episodes.[17,18] No statistical differences were noted between the 2 groups in micturition episodes (MD = 0.26, 95% CI: –0.51 to 1.04, P = .50), urgency episodes (MD = –0.16, 95% CI: –0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: –0.41 to 0.59, P = .72), or nocturia episodes (MD = 0.04, 95% CI: –0.45 to 0.52, P = .89) (Fig. 2). Moreover, no publication bias regarding the primary outcomes was identified (Fig. S2, Supplemental Digital Content http://links.lww.com/MD/H629).\nForest plot for the changes of primary outcomes. PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nWith regard to patients’ quality of life, 4 articles[17–20] contained data on ICIQ-OAB scores, 2 studies[19,20] presented data on KHQ scores, and 3 articles[19–21] assessed OAB-V8 scores. Comparable pooled results were observed for ICIQ-OAB scores (MD = –0.09, 95% CI: –0.90 to 0.73, P = .83), KHQ scores (MD = 0.98, 95% CI: –16.28 to 18.25, P = .91), and OAB-V8 scores (MD = –0.55, 95% CI: –5.56 to 4.46, P = .83) in the TNS and PNS groups (Fig. 3).\nForest plot for life quality scores. ICIQ-OAB = International consultation on incontinence questionnaire overactive bladder, KHQ = King’s health questionnaire, OAB-V8 = overactive bladder-validated 8-question awareness tool, PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nRegarding the safety profile, no complications were identified or reported in the included studies.
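As a quick arithmetic check of the pooled estimates reported just above: under the normal approximation that inverse-variance pooling uses, the standard error implied by a 95% CI is (upper − lower)/(2 × 1.96), and the two-sided P value follows from the z statistic. The snippet below reproduces the reported P values from the published MDs and CIs; no data beyond those reported figures are assumed.

```python
from scipy.stats import norm

def p_from_ci(md, lo, hi):
    """Two-sided P value implied by a mean difference and its 95% CI (normal approx.)."""
    se = (hi - lo) / (2 * 1.96)
    z = md / se
    return 2 * (1 - norm.cdf(abs(z)))

# Reported pooled estimates from the results above.
print(round(p_from_ci(0.26, -0.51, 1.04), 2))   # micturition  -> ~0.51 (reported P = .50)
print(round(p_from_ci(-0.16, -0.64, 0.31), 2))  # urgency      -> ~0.51 (reported P = .50)
print(round(p_from_ci(0.09, -0.41, 0.59), 2))   # incontinence -> ~0.72 (reported P = .72)
```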
null
null
[ "2.1. Study selection", "2.2. Inclusion criteria and exclusion criteria", "2.3. Study selection and data extraction", "2.4. Outcomes", "2.5. Quality assessment", "2.6. Data analysis", "3.1. Study characteristics", "3.2. Evaluation of effectiveness and safety profile", "5. Conclusion", "Author contributions" ]
[ "The ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). Last, reference list of related studies was carefully checked for eligible articles.", "Study selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded.", "First, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. If there was a disagreement, a discussion was performed in our team to draw a conclusion.", "The amount of micturition in 24 hour, the number of urgency episodes and incontinence episodes in 24 hour, and the average nocturia episodes were primary outcomes to assess the effectiveness of PNS versus TNS. Then, overactive bladder-validated 8-question awareness tool (OAB-V8), King’s health questionnaire (KHQ) (incontinence impact domain), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) scores were compared to evaluate patients’ life quality and OAB symptom improvements. Last, complications such as bleeding, discomfort, and pain over the treated area were recorded.", "The quality of RCTs and prospective studies was evaluated in line with the Cochrane handbook guideline.[16] The domains of risk of bias were judged as low, high, or unclear risk according to the scores.", "All statistical analyses were performed using the Review Manager 5.3 (Cochrane Collaboration, Oxford, UK). All included data were continuous parameters, and their mean differences (MDs) with 95% confidence intervals (CIs) were calculated. I2 test was performed to identify heterogeneity among studies. IF I2 values <50%, the fixed-effect model was used, otherwise the random-effect method was used. Funnel plot was constructed to assess the publication bias of primary outcomes.", "Following the study selection process in Figure 1, a total of 374 articles were initially identified, but 350 records were excluded according to the established inclusion and exclusion criteria. Subsequently, 8 articles were selected for qualitative synthesis after removal of noncomparative articles.[17–24] However, 2 study protocols[22,23] and 1 comment[24] with inadequate data were excluded. Eventually, a total of 5 trials (4 RCTs and 1 prospective study)[17–21] with 255 patients were enrolled. 
The characteristics of each study included were presented in Table 1. Notably, the study by Barroso et al[17] compared the efficacy and safety of 2 kinds of neuromodulations in children, but others 4 studies focused on adults. Table 2 showed the treatment protocol in each study. It should be noted that the treatment protocols in each study were not consistent. Importantly, the quality assessment showed a low risk in all studies (Fig. S1, Supplemental Digital Content http://links.lww.com/MD/H628).\nThe baseline information of included studies.\nP = prospective, RCT = randomized controlled trail.\nThe treatment protocol in each included study.\nPRISMA flowchart of study selection.", "The amount of micturition in 24 hour was provided in 4 studies,[18–21] and the number of urgency episodes was presented in 3 studies.[18,20,21] But only 2 studies had data regarding the number of incontinence episodes in 24 hour and nocturia episodes.[17,18] No statistical differences were noted in the micturition (MD = 0.26, 95% CI: –0.51 to 1.04, P = .50), urgency episodes (MD = –0.16, 95% CI: –0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: –0.41 to 0.59, P = .72), as well as in the nocturia episodes (MD = 0.04, 95% CI: –0.45 to 0.52, P = .89) in the 2 groups (Fig. 2). Moreover, none of publication bias regarding primary outcomes was identified (Fig. S2, Supplemental Digital Content http://links.lww.com/MD/H629).\nForest plot for the changes of primary outcomes. PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nWith regard to patients’ life quality, 4 articles[17–20] contained the data on ICIQ-OAB scores, 2 studies[19,20] presented the data on KHQ scores, and 3 articles[19–21] assessed OAB-V8 scores. Comparable pooled results were observed in terms of ICIQ-OAB scores (MD = –0.09, 95% CI: –0.90 to 0.73, P = .83), KHQ scores (MD = 0.98, 95% CI: –16.28 to 18.25, P = .91), and OAB-V8 scores (MD = –0.55, 95% CI: –5.56 to 4.46, P = .83) in the TNS and PNS groups (Fig. 3).\nForest plot for life quality scores. Tibial nerve stimulation, ICIQ-OAB = International consultation on incontinence questionnaire overactive bladder, KHQ = King’s health questionnaire, OAB-V8 = overactive bladder-validated 8-question awareness tool, PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nRegarding the safety profile, no complications were identified or reported in the included studies.", "The current pooled results supported that TNS is as effective as PNS for the treatment of OAB, moreover, without any reported adverse events in both groups. However, prospective studies with a larger sample size should be conducted to verify our findings.", "Conceptualization: Zhi-Hong Wang, Zhi-Hong Liu.\nData curation: Zhi-Hong Wang, Zhi-Hong Liu.\nFormal analysis: Zhi-Hong Wang, Zhi-Hong Liu.\nInvestigation: Zhi-Hong Wang, Zhi-Hong Liu.\nMethodology: Zhi-Hong Wang, Zhi-Hong Liu.\nProject administration: Zhi-Hong Wang, Zhi-Hong Liu.\nResources: Zhi-Hong Wang.\nSoftware: Zhi-Hong Wang, Zhi-Hong Liu.\nSupervision: Zhi-Hong Wang, Zhi-Hong Liu.\nValidation: Zhi-Hong Wang.\nVisualization: Zhi-Hong Wang.\nWriting – original draft: Zhi-Hong Wang, Zhi-Hong Liu.\nWriting – review & editing: Zhi-Hong Wang, Zhi-Hong Liu." ]
[ null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Study selection", "2.2. Inclusion criteria and exclusion criteria", "2.3. Study selection and data extraction", "2.4. Outcomes", "2.5. Quality assessment", "2.6. Data analysis", "3. Results", "3.1. Study characteristics", "3.2. Evaluation of effectiveness and safety profile", "4. Discussion", "5. Conclusion", "Author contributions", "Supplementary Material" ]
[ "Patients with overactive bladder syndrome (OAB) usually present urinary urgency and increased daytime micturition and/or nocturia, with or without urinary incontinence.[1] Based on a real-word study, the prevalence of OAB is about 17% and which increases gradually with age.[2] OAB is not life-threatening, but it alters life styles and may have a significant impact on the life quality, sexual function, and social interaction of patients.[3–6] After failed conservative and behavioral therapies, antimuscarinics such as solifenacin are the mainstream treatments for OAB, but high costs and side effects reduce patient adherence to these drugs and ultimately limit their benefits to a wider range of patients with OAB.[2,7,8] Under this circumstance, patients tend to choose a more cost-effective and less invasive approach.\nBy modulating the sacral plexus and inhibiting detrusor muscle hyperactivity, physiotherapeutic treatment through electrostimulation of the tibial[9–11] and parasacral nerves[11–13] has proven an effective alternative to treat OAB symptoms in adult and infant patients. Both 2 neuromodulations target S2 to S4 nerves which correspond to the main bladder parasympathetic innervation, but the locations of electrodes in the 2 groups are different. With regard to tibial nerve stimulation (TNS), whose electrodes are usually located near the medial malleolus, covering the path of the tibial nerve to innervate S2 to S4 nerves.[14] Regarding parasacral nerve stimulation (PNS), electrodes are positioned over the parasacral region on the level of S3.[11] Although TNS and PNS have been proven effective and well-accepted,[11,13,14] it is unclear who has better effectiveness and safety profile for patients with OAB. Therefore, we conducted a meat-analysis to compare the safety and effectiveness of TNS versus PNS in treating patients with OAB and provide a better clinical reference for urologists when making a decision.", "[SUBTITLE] 2.1. Study selection [SUBSECTION] The ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). Last, reference list of related studies was carefully checked for eligible articles.\nThe ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). 
Last, reference list of related studies was carefully checked for eligible articles.\n[SUBTITLE] 2.2. Inclusion criteria and exclusion criteria [SUBSECTION] Study selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded.\nStudy selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded.\n[SUBTITLE] 2.3. Study selection and data extraction [SUBSECTION] First, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. If there was a disagreement, a discussion was performed in our team to draw a conclusion.\nFirst, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. If there was a disagreement, a discussion was performed in our team to draw a conclusion.\n[SUBTITLE] 2.4. Outcomes [SUBSECTION] The amount of micturition in 24 hour, the number of urgency episodes and incontinence episodes in 24 hour, and the average nocturia episodes were primary outcomes to assess the effectiveness of PNS versus TNS. Then, overactive bladder-validated 8-question awareness tool (OAB-V8), King’s health questionnaire (KHQ) (incontinence impact domain), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) scores were compared to evaluate patients’ life quality and OAB symptom improvements. Last, complications such as bleeding, discomfort, and pain over the treated area were recorded.\nThe amount of micturition in 24 hour, the number of urgency episodes and incontinence episodes in 24 hour, and the average nocturia episodes were primary outcomes to assess the effectiveness of PNS versus TNS. Then, overactive bladder-validated 8-question awareness tool (OAB-V8), King’s health questionnaire (KHQ) (incontinence impact domain), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) scores were compared to evaluate patients’ life quality and OAB symptom improvements. Last, complications such as bleeding, discomfort, and pain over the treated area were recorded.\n[SUBTITLE] 2.5. 
Quality assessment [SUBSECTION] The quality of RCTs and prospective studies was evaluated in line with the Cochrane handbook guideline.[16] The domains of risk of bias were judged as low, high, or unclear risk according to the scores.\nThe quality of RCTs and prospective studies was evaluated in line with the Cochrane handbook guideline.[16] The domains of risk of bias were judged as low, high, or unclear risk according to the scores.\n[SUBTITLE] 2.6. Data analysis [SUBSECTION] All statistical analyses were performed using the Review Manager 5.3 (Cochrane Collaboration, Oxford, UK). All included data were continuous parameters, and their mean differences (MDs) with 95% confidence intervals (CIs) were calculated. I2 test was performed to identify heterogeneity among studies. IF I2 values <50%, the fixed-effect model was used, otherwise the random-effect method was used. Funnel plot was constructed to assess the publication bias of primary outcomes.\nAll statistical analyses were performed using the Review Manager 5.3 (Cochrane Collaboration, Oxford, UK). All included data were continuous parameters, and their mean differences (MDs) with 95% confidence intervals (CIs) were calculated. I2 test was performed to identify heterogeneity among studies. IF I2 values <50%, the fixed-effect model was used, otherwise the random-effect method was used. Funnel plot was constructed to assess the publication bias of primary outcomes.", "The ethic approval of this study was not necessary because the included studies were open-sourced. The study was performed in line with the PRISMA guidelines.[15] Databases including PubMed, Embase, clinicalTrial.gov, and Cochrane Library Central Register of Controlled Trials were systematically searched from January 1, 1999 to September 9, 2022 to find the studies comparing the PNS and TNS for OAB. Each database was searched using the following formula: (Overactive bladder OR Overactive urinary bladder OR Overactive detrusor) AND (Tibial neuromodulation OR Tibial nerve stimulation OR Posterior tibial nerve stimulation) AND (Parasacral neuromodulation OR Parasacral nerve stimulation). Last, reference list of related studies was carefully checked for eligible articles.", "Study selection was conducted in line with the PICOS principle. Comparable studies (randomized controlled trial [RCT], retrospective study, and prospective study) evaluated the efficacy and safety of PNS versus TNS for patients with OAB were selected. There were no limitations regarding status and language. But a study should be excluded if other third-line treatment such as type A botulinum toxin injection was involved. Additionally, a study design of case report, protocol, comment, and an article without adequate data were also excluded.", "First, we identified the eligible studies according to the established inclusion and exclusion criteria. Subsequently, the baseline information, treatment protocol, OAB symptom scores at last visit, and complications were collected to perform downstream analyses. Last, all authors double-checked included data. If there was a disagreement, a discussion was performed in our team to draw a conclusion.", "The amount of micturition in 24 hour, the number of urgency episodes and incontinence episodes in 24 hour, and the average nocturia episodes were primary outcomes to assess the effectiveness of PNS versus TNS. 
Then, overactive bladder-validated 8-question awareness tool (OAB-V8), King’s health questionnaire (KHQ) (incontinence impact domain), and international consultation on incontinence questionnaire overactive bladder (ICIQ-OAB) scores were compared to evaluate patients’ life quality and OAB symptom improvements. Last, complications such as bleeding, discomfort, and pain over the treated area were recorded.", "The quality of RCTs and prospective studies was evaluated in line with the Cochrane handbook guideline.[16] The domains of risk of bias were judged as low, high, or unclear risk according to the scores.", "All statistical analyses were performed using the Review Manager 5.3 (Cochrane Collaboration, Oxford, UK). All included data were continuous parameters, and their mean differences (MDs) with 95% confidence intervals (CIs) were calculated. I2 test was performed to identify heterogeneity among studies. IF I2 values <50%, the fixed-effect model was used, otherwise the random-effect method was used. Funnel plot was constructed to assess the publication bias of primary outcomes.", "[SUBTITLE] 3.1. Study characteristics [SUBSECTION] Following the study selection process in Figure 1, a total of 374 articles were initially identified, but 350 records were excluded according to the established inclusion and exclusion criteria. Subsequently, 8 articles were selected for qualitative synthesis after removal of noncomparative articles.[17–24] However, 2 study protocols[22,23] and 1 comment[24] with inadequate data were excluded. Eventually, a total of 5 trials (4 RCTs and 1 prospective study)[17–21] with 255 patients were enrolled. The characteristics of each study included were presented in Table 1. Notably, the study by Barroso et al[17] compared the efficacy and safety of 2 kinds of neuromodulations in children, but others 4 studies focused on adults. Table 2 showed the treatment protocol in each study. It should be noted that the treatment protocols in each study were not consistent. Importantly, the quality assessment showed a low risk in all studies (Fig. S1, Supplemental Digital Content http://links.lww.com/MD/H628).\nThe baseline information of included studies.\nP = prospective, RCT = randomized controlled trail.\nThe treatment protocol in each included study.\nPRISMA flowchart of study selection.\nFollowing the study selection process in Figure 1, a total of 374 articles were initially identified, but 350 records were excluded according to the established inclusion and exclusion criteria. Subsequently, 8 articles were selected for qualitative synthesis after removal of noncomparative articles.[17–24] However, 2 study protocols[22,23] and 1 comment[24] with inadequate data were excluded. Eventually, a total of 5 trials (4 RCTs and 1 prospective study)[17–21] with 255 patients were enrolled. The characteristics of each study included were presented in Table 1. Notably, the study by Barroso et al[17] compared the efficacy and safety of 2 kinds of neuromodulations in children, but others 4 studies focused on adults. Table 2 showed the treatment protocol in each study. It should be noted that the treatment protocols in each study were not consistent. Importantly, the quality assessment showed a low risk in all studies (Fig. S1, Supplemental Digital Content http://links.lww.com/MD/H628).\nThe baseline information of included studies.\nP = prospective, RCT = randomized controlled trail.\nThe treatment protocol in each included study.\nPRISMA flowchart of study selection.\n[SUBTITLE] 3.2. 
Evaluation of effectiveness and safety profile [SUBSECTION] The amount of micturition in 24 hour was provided in 4 studies,[18–21] and the number of urgency episodes was presented in 3 studies.[18,20,21] But only 2 studies had data regarding the number of incontinence episodes in 24 hour and nocturia episodes.[17,18] No statistical differences were noted in the micturition (MD = 0.26, 95% CI: –0.51 to 1.04, P = .50), urgency episodes (MD = –0.16, 95% CI: –0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: –0.41 to 0.59, P = .72), as well as in the nocturia episodes (MD = 0.04, 95% CI: –0.45 to 0.52, P = .89) in the 2 groups (Fig. 2). Moreover, none of publication bias regarding primary outcomes was identified (Fig. S2, Supplemental Digital Content http://links.lww.com/MD/H629).\nForest plot for the changes of primary outcomes. PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nWith regard to patients’ life quality, 4 articles[17–20] contained the data on ICIQ-OAB scores, 2 studies[19,20] presented the data on KHQ scores, and 3 articles[19–21] assessed OAB-V8 scores. Comparable pooled results were observed in terms of ICIQ-OAB scores (MD = –0.09, 95% CI: –0.90 to 0.73, P = .83), KHQ scores (MD = 0.98, 95% CI: –16.28 to 18.25, P = .91), and OAB-V8 scores (MD = –0.55, 95% CI: –5.56 to 4.46, P = .83) in the TNS and PNS groups (Fig. 3).\nForest plot for life quality scores. Tibial nerve stimulation, ICIQ-OAB = International consultation on incontinence questionnaire overactive bladder, KHQ = King’s health questionnaire, OAB-V8 = overactive bladder-validated 8-question awareness tool, PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nRegarding the safety profile, no complications were identified or reported in the included studies.\nThe amount of micturition in 24 hour was provided in 4 studies,[18–21] and the number of urgency episodes was presented in 3 studies.[18,20,21] But only 2 studies had data regarding the number of incontinence episodes in 24 hour and nocturia episodes.[17,18] No statistical differences were noted in the micturition (MD = 0.26, 95% CI: –0.51 to 1.04, P = .50), urgency episodes (MD = –0.16, 95% CI: –0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: –0.41 to 0.59, P = .72), as well as in the nocturia episodes (MD = 0.04, 95% CI: –0.45 to 0.52, P = .89) in the 2 groups (Fig. 2). Moreover, none of publication bias regarding primary outcomes was identified (Fig. S2, Supplemental Digital Content http://links.lww.com/MD/H629).\nForest plot for the changes of primary outcomes. PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nWith regard to patients’ life quality, 4 articles[17–20] contained the data on ICIQ-OAB scores, 2 studies[19,20] presented the data on KHQ scores, and 3 articles[19–21] assessed OAB-V8 scores. Comparable pooled results were observed in terms of ICIQ-OAB scores (MD = –0.09, 95% CI: –0.90 to 0.73, P = .83), KHQ scores (MD = 0.98, 95% CI: –16.28 to 18.25, P = .91), and OAB-V8 scores (MD = –0.55, 95% CI: –5.56 to 4.46, P = .83) in the TNS and PNS groups (Fig. 3).\nForest plot for life quality scores. 
ICIQ-OAB = International consultation on incontinence questionnaire overactive bladder, KHQ = King’s health questionnaire, OAB-V8 = overactive bladder-validated 8-question awareness tool, PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nRegarding the safety profile, no complications were identified or reported in the included studies.", "Following the study selection process in Figure 1, a total of 374 articles were initially identified, but 350 records were excluded according to the established inclusion and exclusion criteria. Subsequently, 8 articles were selected for qualitative synthesis after removal of noncomparative articles.[17–24] However, 2 study protocols[22,23] and 1 comment[24] with inadequate data were excluded. Eventually, a total of 5 trials (4 RCTs and 1 prospective study)[17–21] with 255 patients were enrolled. The characteristics of each included study are presented in Table 1. Notably, the study by Barroso et al[17] compared the efficacy and safety of the 2 kinds of neuromodulation in children, but the other 4 studies focused on adults. Table 2 shows the treatment protocol in each study. It should be noted that the treatment protocols of the studies were not consistent. Importantly, the quality assessment showed a low risk in all studies (Fig. S1, Supplemental Digital Content http://links.lww.com/MD/H628).\nThe baseline information of included studies.\nP = prospective, RCT = randomized controlled trial.\nThe treatment protocol in each included study.\nPRISMA flowchart of study selection.", "The amount of micturition in 24 hours was provided in 4 studies,[18–21] and the number of urgency episodes was presented in 3 studies.[18,20,21] However, only 2 studies had data regarding the number of incontinence episodes in 24 hours and nocturia episodes.[17,18] No statistical differences were noted between the 2 groups in micturition (MD = 0.26, 95% CI: –0.51 to 1.04, P = .50), urgency episodes (MD = –0.16, 95% CI: –0.64 to 0.31, P = .50), incontinence episodes (MD = 0.09, 95% CI: –0.41 to 0.59, P = .72), or nocturia episodes (MD = 0.04, 95% CI: –0.45 to 0.52, P = .89) (Fig. 2). Moreover, no publication bias regarding the primary outcomes was identified (Fig. S2, Supplemental Digital Content http://links.lww.com/MD/H629).\nForest plot for the changes of primary outcomes. PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nWith regard to patients’ quality of life, 4 articles[17–20] contained data on ICIQ-OAB scores, 2 studies[19,20] presented data on KHQ scores, and 3 articles[19–21] assessed OAB-V8 scores. Comparable pooled results were observed in terms of ICIQ-OAB scores (MD = –0.09, 95% CI: –0.90 to 0.73, P = .83), KHQ scores (MD = 0.98, 95% CI: –16.28 to 18.25, P = .91), and OAB-V8 scores (MD = –0.55, 95% CI: –5.56 to 4.46, P = .83) in the TNS and PNS groups (Fig. 3).\nForest plot for life quality scores. ICIQ-OAB = International consultation on incontinence questionnaire overactive bladder, KHQ = King’s health questionnaire, OAB-V8 = overactive bladder-validated 8-question awareness tool, PNS = parasacral nerve stimulation, TNS = tibial nerve stimulation.\nRegarding the safety profile, no complications were identified or reported in the included studies.", "Although PNS and TNS are both well accepted, the priority of the 2 kinds of neuromodulation remains unclear. Therefore, the first meta-analysis was performed to shed some light on the topic. 
Based on the current data, TNS and PNS presented comparable efficacy regarding micturition (P = .50), urgency episodes (P = .50), incontinence episodes (P = .72), and nocturia episodes (P = .89). Furthermore, both kinds of neuromodulation significantly improved patients’ quality of life. Importantly, both types of neuromodulation showed a good safety profile.\nIn the field of urology, neuromodulation is applied to target and modulate the innervation that controls the lower urinary tract. The tibial nerve (TN) is a distal branch of the sciatic nerve that originates from the pelvis (L5–S3 spinal roots) and descends towards the lower limbs.[25] Modulation of the TN provides retrograde stimulation to the sacral plexus, which drives bladder relaxation and contraction and improves bladder rhythm.[25] Previous comparative studies have proved TNS an effective, acceptable, and noninvasive approach for OAB in both adults and children.[26–28] These studies found that patients in the TNS group presented a higher response rate and had better improvement regarding frequency, nocturnal voiding, urgency, and urge incontinence. Additionally, Iyer et al reported that TNS provided continuous therapeutic effects over a 9-year follow-up.[29] Furthermore, a review showed that TNS was also helpful for sexual function.[30] It should be mentioned, however, that the major complications of TNS (pain and infection at the puncture site) should be monitored and managed carefully despite their low incidence. PNS, owing to its noninvasive nature, is more popular in children with OAB. The superficial electrodes of PNS are generally positioned in the parasacral region to modulate the S2 to S3 nerves.[23] Veiga et al reported that patients who underwent PNS sessions twice weekly or three times weekly had the same improvements in OAB symptoms.[31] Many studies have proven PNS an effective approach for OAB,[11,31–33] which was in line with our results, but it should be kept in mind that PNS had limited effects on enuresis and constipation.[33] More studies are needed to further investigate this topic. Based on the results of the current study, electrical stimulation, including PNS and TNS, could be considered for home treatment owing to its simplicity, practicability of operation, and low risk and incidence of adverse effects. Several studies have proposed that the benefits of neuromodulation diminish after the treatment period, necessitating continuous application for better clinical outcomes.[19,20] Therefore, home-based neuromodulation may be preferable to a large number of medical appointments.\nUnfortunately, limited data are available on adult sexual function after PNS. Moreover, a pooled analysis of cost-effectiveness could not be performed because of insufficient data in the 2 groups. At present, there is no evidence that longer stimulation durations produce better results, and the optimal intervention protocol or schedule for either type of neuromodulation has not been determined.[34] More studies are needed to investigate these topics.\nThe study had several limitations. First, only 5 articles could eventually be selected, and the final number of patients was 255. Second, only 1 study focused on children, and it was difficult to perform subgroup analyses regarding gender and age. 
Therefore, prospective studies with a larger sample size should be considered in the future to verify the current findings.", "The current pooled results support that TNS is as effective as PNS for the treatment of OAB, with no adverse events reported in either group. However, prospective studies with a larger sample size should be conducted to verify our findings.", "Conceptualization: Zhi-Hong Wang, Zhi-Hong Liu.\nData curation: Zhi-Hong Wang, Zhi-Hong Liu.\nFormal analysis: Zhi-Hong Wang, Zhi-Hong Liu.\nInvestigation: Zhi-Hong Wang, Zhi-Hong Liu.\nMethodology: Zhi-Hong Wang, Zhi-Hong Liu.\nProject administration: Zhi-Hong Wang, Zhi-Hong Liu.\nResources: Zhi-Hong Wang.\nSoftware: Zhi-Hong Wang, Zhi-Hong Liu.\nSupervision: Zhi-Hong Wang, Zhi-Hong Liu.\nValidation: Zhi-Hong Wang.\nVisualization: Zhi-Hong Wang.\nWriting – original draft: Zhi-Hong Wang, Zhi-Hong Liu.\nWriting – review & editing: Zhi-Hong Wang, Zhi-Hong Liu.", "" ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, "discussion", null, null, "supplementary-material" ]
[ "neuromodulation", "overactive bladder", "parasacral nerve stimulation", "tibial nerve stimulation" ]
Increase in body temperature in pediatric patients after costal cartilage harvest in microtia reconstruction: A retrospective observational study.
36253997
Previous evidence has clearly shown that maintaining normothermia in children undergoing surgery is difficult and that failure to do so is associated with adverse outcomes. Therefore, this study aimed to retrospectively analyze the changes in body temperature over time in 2 different types of microtia reconstruction surgery, namely, embedding and elevation surgeries.
BACKGROUND
We performed a retrospective chart review of patients who underwent microtia reconstruction (embedding and elevation) between July 2012 and February 2015 (n = 38). The changes in body temperature between the 2 types of surgeries were compared.
METHODS
During microtia reconstruction, the body temperature in the embedding surgery group was significantly higher than that in the elevation surgery group from 1 hour after the start of surgery to 1 day after the surgery (P < .001). Time, group, and the time-group interaction were associated with the increase in body temperature (P < .001), whereas the warming method was not.
RESULTS
We found an increase in body temperature in patients with microtia who underwent embedding surgery (autologous costal cartilage harvest surgery), and this was related to the type of surgery and not to the warming method. Therefore, further research is warranted to determine the cause of the increase in body temperature during this surgery.
CONCLUSION
[ "Body Temperature", "Child", "Congenital Microtia", "Costal Cartilage", "Humans", "Plastic Surgery Procedures", "Retrospective Studies" ]
9575776
1. Introduction
Maintaining normothermia in children undergoing noncardiac surgery is difficult. Unlike adults, children may have difficulty maintaining normothermia even during minor surgeries, and previous evidence clearly shows that this is associated with adverse outcomes.[1,2] Anesthesia-induced inhibition of thermoregulatory control together with exposure to cool environments results in major thermal abnormalities.[3–7] However, in some cases, the body temperature may increase. The increase in core body temperature occurs through a multiphasic response of the central body temperature to a central thermoregulatory mechanism localized in the preoptic area of the hypothalamus.[8] This phenomenon is regulated by two types of endogenous cytokines, some of which function as pyrogens and others as antipyretics. Microtia is a congenital anomaly of the auricle that ranges in severity from mild structural abnormalities to complete absence of the ear.[9,10] Patients with microtia undergo embedding and elevation surgeries for reconstruction. Initially, autologous cartilage is harvested and implanted at the desired location of the ear (embedding surgery). Thereafter, the framework is elevated and inserted (elevation surgery). Secondary surgery is performed at least 6 months later.[11] We incidentally found that patients with microtia experienced elevated body temperatures during reconstruction surgery. Therefore, we hypothesized that cartilage harvesting might be associated with an increase in body temperature. Thus, the present study aimed to retrospectively analyze the changes in body temperature over time in two different types of microtia reconstruction surgeries, namely, embedding and elevation surgeries.
2.3. Anesthetic procedure
Anesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.
3. Results
[SUBTITLE] 3.1. Patient demographics [SUBSECTION] The study included 50 patients. Among them, 7 patients with a fever greater than 37.5 °C before surgery and 5 patients with missing data were excluded. The remaining 38 patients were confirmed to be eligible. Among them, 28 were males (73.68%), and 10 were females (26.32%), with a mean age of 15.89 ± 1.98 years and ages ranging between 11 to 21 years at the time of surgery (Table 1). Based on the type of reconstruction surgery, the data from the patients were divided into Group EM and Group EL (Fig. 2). Demographics of the patients that were included in the retrospective study. All data are expressed as mean ± standard deviation or the number of patients (%). BMI = body mass index. Body temperature changes in the two types of surgeries. Group EL = elevation operation, Group EM = embedding operation; RR = recovery room, POD = postoperative day. *: P < .05, Group EM versus Group EL, †: P < .05 versus baseline (0 h). The study included 50 patients. Among them, 7 patients with a fever greater than 37.5 °C before surgery and 5 patients with missing data were excluded. The remaining 38 patients were confirmed to be eligible. Among them, 28 were males (73.68%), and 10 were females (26.32%), with a mean age of 15.89 ± 1.98 years and ages ranging between 11 to 21 years at the time of surgery (Table 1). Based on the type of reconstruction surgery, the data from the patients were divided into Group EM and Group EL (Fig. 2). Demographics of the patients that were included in the retrospective study. All data are expressed as mean ± standard deviation or the number of patients (%). BMI = body mass index. Body temperature changes in the two types of surgeries. Group EL = elevation operation, Group EM = embedding operation; RR = recovery room, POD = postoperative day. *: P < .05, Group EM versus Group EL, †: P < .05 versus baseline (0 h). [SUBTITLE] 3.2. Temperature changes between the two types of surgeries [SUBSECTION] The mean operation time was 6.62 ± 1.33 hour in Group EM and 4.27 ± 1.33 hour in Group EL. Of the total patients, 26 (68.42%) had a fever during the embedding surgery (Group EM, n = 38). However, only 2 (5.26%) patients developed a fever during the elevation surgery (Group EL, n = 38). In Group EM (n = 38), 12 (31.58%) patients developed a fever, and 14 (36.84%) of them had a fever in the recovery room or on POD 0 after the surgery. In Group EL, the 2 (5.26%) patients developed a fever in the recovery room or on POD 0 after the surgery. Compared to Group EL, Group EM had a higher body temperature from 1 hour after the start of surgery to the end of surgery (P < .001; Fig. 2), and it was higher up to POD 1 (P < .001). In Group EM, the body temperature was significantly higher from 3 hour after the start of the surgery to the end of the surgery than that at the baseline (0 h) (P < .05). In contrast, Group EL showed a statistically significant decrease in body temperature from 1 hour after the start of the surgery to the end of the surgery than that at the baseline (0 hour) (P < .05). Nevertheless, all patients with a fever regained a normal body temperature within POD 3. The mean operation time was 6.62 ± 1.33 hour in Group EM and 4.27 ± 1.33 hour in Group EL. Of the total patients, 26 (68.42%) had a fever during the embedding surgery (Group EM, n = 38). However, only 2 (5.26%) patients developed a fever during the elevation surgery (Group EL, n = 38). 
In Group EM (n = 38), 12 (31.58%) patients developed a fever, and 14 (36.84%) of them had a fever in the recovery room or on POD 0 after the surgery. In Group EL, the 2 (5.26%) patients developed a fever in the recovery room or on POD 0 after the surgery. Compared to Group EL, Group EM had a higher body temperature from 1 hour after the start of surgery to the end of surgery (P < .001; Fig. 2), and it was higher up to POD 1 (P < .001). In Group EM, the body temperature was significantly higher from 3 hour after the start of the surgery to the end of the surgery than that at the baseline (0 h) (P < .05). In contrast, Group EL showed a statistically significant decrease in body temperature from 1 hour after the start of the surgery to the end of the surgery than that at the baseline (0 hour) (P < .05). Nevertheless, all patients with a fever regained a normal body temperature within POD 3. [SUBTITLE] 3.3. Factors affecting the changes in body temperature [SUBSECTION] Among the covariates, induction agents, muscle relaxants, inhalation anesthetics, premedication drugs, antibiotics, infusion drugs, and warming methods, such as a humidifier circuit, air blanket, or warm pad, were not statistically significant (Table 2). Characteristics of the study population. All data are expressed as the number of patients (%). Group EM: the embedding surgery group; Group EL: the elevation surgery group. EL = elevation operation, EM = embedding operation. Considering the fixed effect of group, age, sex, antibiotics, time, warming methods, and the time-group interaction in a linear mixed model, the results of age, sex, antibiotics, and warming methods were not statistically significant (Table 3). Time, group, and time-group interactions were statistically significant in the linear mixed model (P < .001). Effect of risk factors on body temperature in a linear mixed model. A P value <.001 is considered statistically significant. Among the covariates, induction agents, muscle relaxants, inhalation anesthetics, premedication drugs, antibiotics, infusion drugs, and warming methods, such as a humidifier circuit, air blanket, or warm pad, were not statistically significant (Table 2). Characteristics of the study population. All data are expressed as the number of patients (%). Group EM: the embedding surgery group; Group EL: the elevation surgery group. EL = elevation operation, EM = embedding operation. Considering the fixed effect of group, age, sex, antibiotics, time, warming methods, and the time-group interaction in a linear mixed model, the results of age, sex, antibiotics, and warming methods were not statistically significant (Table 3). Time, group, and time-group interactions were statistically significant in the linear mixed model (P < .001). Effect of risk factors on body temperature in a linear mixed model. A P value <.001 is considered statistically significant.
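The linear mixed model summarized in the results above (fixed effects for group, time, their interaction and covariates such as age and sex, with repeated temperature measurements per patient) can be sketched as follows. This is an illustrative example only: the variable names and the simulated values are assumptions, not the study dataset, and the formula is one plausible specification of the analysis reported in Table 3.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(40):
    group = "EM" if pid < 20 else "EL"
    age = int(rng.integers(11, 22))
    sex = rng.choice(["M", "F"])
    for hour in range(7):                                        # repeated measurements per patient
        drift = 0.12 * hour if group == "EM" else -0.08 * hour   # simulated group-specific trend
        temp = 36.5 + drift + rng.normal(0, 0.2)
        rows.append((pid, group, age, sex, hour, temp))
df = pd.DataFrame(rows, columns=["patient", "group", "age", "sex", "hour", "temp"])

# Random intercept per patient; fixed effects include the time-by-group interaction.
model = smf.mixedlm("temp ~ hour * C(group) + age + C(sex)", data=df, groups=df["patient"])
result = model.fit()
print(result.summary())

A significant hour-by-group coefficient in such a model corresponds to the reported time-group interaction, that is, the two surgery types showing different temperature trajectories over time.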
null
null
[ "2. Materials and Methods", "2.1. Inclusion and exclusion criteria", "2.2. Study design", "2.4. Outcome measurement", "2.5. Statistical analyses", "3.2. Temperature changes between the two types of surgeries", "3.3. Factors affecting the changes in body temperature", "Author contributions" ]
[ "The protocol approved by the Medical Device Institutional Review Board of KUAH (2019AN0205). This clinical trial was registered with the Clinical Research Information Service, the Korean registration system for clinical trials (KCT0004109).\n[SUBTITLE] 2.1. Inclusion and exclusion criteria [SUBSECTION] Data for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.\nData for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.\n[SUBTITLE] 2.2. Study design [SUBSECTION] An overview of the retrospective study design is shown in Figure 1. The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.\nAn overview of the retrospective study design is shown in Figure 1. The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.\n[SUBTITLE] 2.3. 
Anesthetic procedure [SUBSECTION] Anesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.\nAnesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.\n[SUBTITLE] 2.4. Outcome measurement [SUBSECTION] The primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.\nThe primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.\n[SUBTITLE] 2.5. Statistical analyses [SUBSECTION] All data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).\nAll data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. 
For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).", "Data for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.", "An overview of the retrospective study design is shown in Figure 1. The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.", "The primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.", "All data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).", "The mean operation time was 6.62 ± 1.33 hour in Group EM and 4.27 ± 1.33 hour in Group EL. Of the total patients, 26 (68.42%) had a fever during the embedding surgery (Group EM, n = 38). However, only 2 (5.26%) patients developed a fever during the elevation surgery (Group EL, n = 38). In Group EM (n = 38), 12 (31.58%) patients developed a fever, and 14 (36.84%) of them had a fever in the recovery room or on POD 0 after the surgery. 
In Group EL, the 2 (5.26%) patients developed a fever in the recovery room or on POD 0 after the surgery.\nCompared to Group EL, Group EM had a higher body temperature from 1 hour after the start of surgery to the end of surgery (P < .001; Fig. 2), and it was higher up to POD 1 (P < .001). In Group EM, the body temperature was significantly higher from 3 hour after the start of the surgery to the end of the surgery than that at the baseline (0 h) (P < .05). In contrast, Group EL showed a statistically significant decrease in body temperature from 1 hour after the start of the surgery to the end of the surgery than that at the baseline (0 hour) (P < .05).\nNevertheless, all patients with a fever regained a normal body temperature within POD 3.", "Among the covariates, induction agents, muscle relaxants, inhalation anesthetics, premedication drugs, antibiotics, infusion drugs, and warming methods, such as a humidifier circuit, air blanket, or warm pad, were not statistically significant (Table 2).\nCharacteristics of the study population.\nAll data are expressed as the number of patients (%). Group EM: the embedding surgery group; Group EL: the elevation surgery group.\nEL = elevation operation, EM = embedding operation.\nConsidering the fixed effect of group, age, sex, antibiotics, time, warming methods, and the time-group interaction in a linear mixed model, the results of age, sex, antibiotics, and warming methods were not statistically significant (Table 3). Time, group, and time-group interactions were statistically significant in the linear mixed model (P < .001).\nEffect of risk factors on body temperature in a linear mixed model.\nA P value <.001 is considered statistically significant.", "Conceptualization: Piao Longhao, Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu.\nData curation: Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu, Dahyeon Kim.\nFormal analysis: Seung Zhoo Yoon, Yoon Ji Choi, Dahyeon Kim.\nFunding acquisition: Seung Zhoo Yoon.\nInvestigation: Seung Zhoo Yoon.\nMethodology: Piao Longhao, Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu, Choon-Hak Lim.\nResources: Yoon Ji Choi.\nSupervision: Seung Zhoo Yoon, Yoon Ji Choi, Choon-Hak Lim.\nValidation: Yoon Ji Choi.\nVisualization: Yoon Ji Choi.\nWriting – original draft: Piao Longhao, Yoon Ji Choi, Guo-Shan Xu.\nWriting – review & editing: Piao Longhao, Yoon Ji Choi, Guo-Shan Xu." ]
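Section 2.5 above describes the test-selection logic (Shapiro-Wilk normality check, then an independent t test or Mann-Whitney U test for continuous data, and a chi-square or Fisher exact test for frequencies). A minimal sketch of that logic is given below; the sample values are invented for illustration, this is not the authors' analysis code, and a reasonably recent SciPy is assumed for the .pvalue attributes.

import numpy as np
from scipy import stats

def compare_continuous(a, b, alpha=0.05):
    """Shapiro-Wilk on both samples decides between the independent t test and Mann-Whitney U."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        return "independent t test", stats.ttest_ind(a, b).pvalue
    return "Mann-Whitney U test", stats.mannwhitneyu(a, b).pvalue

def compare_frequencies(table):
    """Chi-square test of independence; fall back to Fisher exact when expected counts are small."""
    chi2, p, dof, expected = stats.chi2_contingency(table)
    if (expected < 5).any():
        return "Fisher exact test", stats.fisher_exact(table).pvalue
    return "chi-square test", p

# Invented example values: operation times (hours) per group and fever counts per group.
op_time_em = np.array([6.1, 7.0, 6.5, 8.2, 5.9, 6.8, 7.4, 6.0])
op_time_el = np.array([4.0, 4.5, 3.8, 5.1, 4.2, 4.9, 3.9, 4.4])
print(compare_continuous(op_time_em, op_time_el))
print(compare_frequencies(np.array([[26, 12], [2, 36]])))   # fever yes/no in Group EM vs Group EL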
[ "methods", null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Inclusion and exclusion criteria", "2.2. Study design", "2.3. Anesthetic procedure", "2.4. Outcome measurement", "2.5. Statistical analyses", "3. Results", "3.1. Patient demographics", "3.2. Temperature changes between the two types of surgeries", "3.3. Factors affecting the changes in body temperature", "4. Discussion", "Author contributions" ]
[ "Maintaining normothermia in children undergoing noncardiac surgery is difficult. Unlike adults, children may have difficulty maintaining normothermia even during minor surgeries, and previous evidence clearly shows that this is associated with adverse outcomes.[1,2]\nAnesthesia-induced inhibition of thermoregulatory control together with exposure to cool environments results in major thermal abnormalities.[3–7] However, in some cases, the body temperature may increase. The increase in core body temperature occurs through a multiphasic response of the central body temperature to a central thermoregulatory mechanism localized in the preoptic area of the hypothalamus.[8] This phenomenon is regulated by two types of endogenous cytokines, some of which function as pyrogens and others as antipyretics.\nMicrotia is a congenital anomaly of the auricle that ranges in severity from mild structural abnormalities to complete absence of the ear.[9,10] Patients with microtia undergo embedding and elevation surgeries for reconstruction. Initially, autologous cartilage is harvested and implanted at the desired location of the ear (embedding surgery). Thereafter, the framework is elevated and inserted (elevation surgery). Secondary surgery is performed at least 6 months later.[11]\nWe incidentally found that patients with microtia experienced elevated body temperatures during reconstruction surgery. Therefore, we hypothesized that cartilage harvesting might be associated with an increase in body temperature. Thus, the present study aimed to retrospectively analyze the changes in body temperature over time in two different types of microtia reconstruction surgeries, namely, embedding and elevation surgeries.", "The protocol approved by the Medical Device Institutional Review Board of KUAH (2019AN0205). This clinical trial was registered with the Clinical Research Information Service, the Korean registration system for clinical trials (KCT0004109).\n[SUBTITLE] 2.1. Inclusion and exclusion criteria [SUBSECTION] Data for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.\nData for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.\n[SUBTITLE] 2.2. Study design [SUBSECTION] An overview of the retrospective study design is shown in Figure 1. 
The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.\nAn overview of the retrospective study design is shown in Figure 1. The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.\n[SUBTITLE] 2.3. Anesthetic procedure [SUBSECTION] Anesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.\nAnesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. 
Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.\n[SUBTITLE] 2.4. Outcome measurement [SUBSECTION] The primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.\nThe primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.\n[SUBTITLE] 2.5. Statistical analyses [SUBSECTION] All data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).\nAll data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).", "Data for this study were obtained retrospectively from the electronic database of the Korea University Anam Hospital. We conducted a chart review of patients who underwent microtia reconstruction surgery (embedding and elevation) between July 2012 and February 2015 at a single university hospital. The exclusion criteria were as follows: patients with fever >37.5 °C before surgery, as well as patients with missing vital sign records during anesthesia, in the recovery room and ward, and on postoperative days (POD) 0, 1, 2, and 3.", "An overview of the retrospective study design is shown in Figure 1. 
The following patient information was collected:\nFlow chart showing the selection of the study participants.\nBasic patient information (height, weight, age, and underlying disease).\nData on anesthesia method, drugs used, and surgery time.\nPatient vital signs during surgery (blood pressure, heart rate, oxygen saturation, end-tidal CO2, and body temperature).\nVital signs of patients in the recovery room (blood pressure, heart rate, oxygen saturation, and body temperature).\nPatient vital signs (blood pressure, heart rate, oxygen saturation, and body temperature), medications used, and posterior-anterior chest radiography findings on PODs 0, 1, 2, and 3.\nBased on these data, the patients were divided into the following two groups: the embedding operation group (Group EM, n = 38) and the elevation operation group (Group EL, n = 38). Thereafter, the patients’ vital signs were compared between the 2 groups. A body temperature above 37.5 °C was defined as fever.", "Anesthesia was induced with propofol (1.5–2 mg/kg) or pentothal (4–5 mg/kg) and maintained with an end-tidal concentration of 2 to 3 vol% of sevoflurane or 5 to 6 vol% of desflurane. Tidal volume was controlled to maintain 30 to 35 mm Hg of end-tidal CO2. The operation room temperature was kept at 20 to 22 °C using the air conditioning system. Standard monitoring (including noninvasive blood pressure monitoring, body temperature measurement using an esophageal stethoscope, heart rate measurement, O2 saturation measurement, and electrocardiography) was applied. Body temperature was recorded every 30 min, and the use of warming tools was discontinued when the body temperature was above 36.0 °C. The peak body temperatures in the recovery room and on PODs 0, 1, 2, and 3 were recorded. The urine volume was maintained at >0.5 mL/kg/h.", "The primary outcome measure for the difference in body temperature was the core body temperature during surgery, in the recovery room, and on PODs 0, 1, 2, and 3. The secondary outcome measures for factors that could affect body temperature changes included induction agents, premedication drugs, antibiotics, inhalation anesthetics, warming methods, and infusion drugs administered during surgery.", "All data are expressed as mean ± standard deviation or the number of patients (%). Normal distributions were examined using Q-q plots and Shapiro–Wilk tests. For comparisons between groups, normally distributed data were compared using the independent t test, and non-normally distributed data were compared using the Mann–Whitney U test. For frequency comparisons between the 2 groups, the χ2 test or Fisher exact test was used. A P value <.001 was considered statistically significant. Statistical analysis was performed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY).", "[SUBTITLE] 3.1. Patient demographics [SUBSECTION] The study included 50 patients. Among them, 7 patients with a fever greater than 37.5 °C before surgery and 5 patients with missing data were excluded. The remaining 38 patients were confirmed to be eligible. Among them, 28 were males (73.68%), and 10 were females (26.32%), with a mean age of 15.89 ± 1.98 years and ages ranging between 11 to 21 years at the time of surgery (Table 1). Based on the type of reconstruction surgery, the data from the patients were divided into Group EM and Group EL (Fig. 
2).\nDemographics of the patients that were included in the retrospective study.\nAll data are expressed as mean ± standard deviation or the number of patients (%).\nBMI = body mass index.\nBody temperature changes in the two types of surgeries. Group EL = elevation operation, Group EM = embedding operation; RR = recovery room, POD = postoperative day. *: P < .05, Group EM versus Group EL, †: P < .05 versus baseline (0 h).\n[SUBTITLE] 3.2. Temperature changes between the two types of surgeries [SUBSECTION] The mean operation time was 6.62 ± 1.33 hours in Group EM and 4.27 ± 1.33 hours in Group EL. Of the total patients, 26 (68.42%) had a fever during the embedding surgery (Group EM, n = 38). However, only 2 (5.26%) patients developed a fever during the elevation surgery (Group EL, n = 38). In Group EM (n = 38), 12 (31.58%) patients developed an intraoperative fever, and 14 (36.84%) had a fever in the recovery room or on POD 0 after the surgery. In Group EL, the 2 (5.26%) patients developed a fever in the recovery room or on POD 0 after the surgery.\nCompared to Group EL, Group EM had a higher body temperature from 1 hour after the start of surgery to the end of surgery (P < .001; Fig. 2), and it remained higher up to POD 1 (P < .001). In Group EM, the body temperature was significantly higher from 3 hours after the start of the surgery to the end of the surgery compared with the baseline (0 h) (P < .05). 
In contrast, Group EL showed a statistically significant decrease in body temperature from 1 hour after the start of the surgery to the end of the surgery compared with the baseline (0 hour) (P < .05).\nNevertheless, all patients with a fever regained a normal body temperature within POD 3.\n[SUBTITLE] 3.3. Factors affecting the changes in body temperature [SUBSECTION] Among the covariates, induction agents, muscle relaxants, inhalation anesthetics, premedication drugs, antibiotics, infusion drugs, and warming methods, such as a humidifier circuit, air blanket, or warm pad, were not statistically significant (Table 2).\nCharacteristics of the study population.\nAll data are expressed as the number of patients (%). Group EM: the embedding surgery group; Group EL: the elevation surgery group.\nEL = elevation operation, EM = embedding operation.\nConsidering the fixed effects of group, age, sex, antibiotics, time, warming methods, and the time-group interaction in a linear mixed model, the results for age, sex, antibiotics, and warming methods were not statistically significant (Table 3). Time, group, and the time-group interaction were statistically significant in the linear mixed model (P < .001).\nEffect of risk factors on body temperature in a linear mixed model.\nA P value <.001 is considered statistically significant.", "The study included 50 patients. Among them, 7 patients with a fever greater than 37.5 °C before surgery and 5 patients with missing data were excluded. The remaining 38 patients were confirmed to be eligible. Among them, 28 were males (73.68%), and 10 were females (26.32%), with a mean age of 15.89 ± 1.98 years and ages ranging between 11 to 21 years at the time of surgery (Table 1). Based on the type of reconstruction surgery, the data from the patients were divided into Group EM and Group EL (Fig. 2).\nDemographics of the patients that were included in the retrospective study.\nAll data are expressed as mean ± standard deviation or the number of patients (%).\nBMI = body mass index.\nBody temperature changes in the two types of surgeries. Group EL = elevation operation, Group EM = embedding operation; RR = recovery room, POD = postoperative day. *: P < .05, Group EM versus Group EL, †: P < .05 versus baseline (0 h).", "The mean operation time was 6.62 ± 1.33 hours in Group EM and 4.27 ± 1.33 hours in Group EL. Of the total patients, 26 (68.42%) had a fever during the embedding surgery (Group EM, n = 38). However, only 2 (5.26%) patients developed a fever during the elevation surgery (Group EL, n = 38). 
In Group EM (n = 38), 12 (31.58%) patients developed an intraoperative fever, and 14 (36.84%) had a fever in the recovery room or on POD 0 after the surgery. In Group EL, the 2 (5.26%) patients developed a fever in the recovery room or on POD 0 after the surgery.\nCompared to Group EL, Group EM had a higher body temperature from 1 hour after the start of surgery to the end of surgery (P < .001; Fig. 2), and it remained higher up to POD 1 (P < .001). In Group EM, the body temperature was significantly higher from 3 hours after the start of the surgery to the end of the surgery compared with the baseline (0 h) (P < .05). In contrast, Group EL showed a statistically significant decrease in body temperature from 1 hour after the start of the surgery to the end of the surgery compared with the baseline (0 hour) (P < .05).\nNevertheless, all patients with a fever regained a normal body temperature within POD 3.", "Among the covariates, induction agents, muscle relaxants, inhalation anesthetics, premedication drugs, antibiotics, infusion drugs, and warming methods, such as a humidifier circuit, air blanket, or warm pad, were not statistically significant (Table 2).\nCharacteristics of the study population.\nAll data are expressed as the number of patients (%). Group EM: the embedding surgery group; Group EL: the elevation surgery group.\nEL = elevation operation, EM = embedding operation.\nConsidering the fixed effects of group, age, sex, antibiotics, time, warming methods, and the time-group interaction in a linear mixed model, the results for age, sex, antibiotics, and warming methods were not statistically significant (Table 3). Time, group, and the time-group interaction were statistically significant in the linear mixed model (P < .001).\nEffect of risk factors on body temperature in a linear mixed model.\nA P value <.001 is considered statistically significant.", "This study revealed elevated body temperatures in patients undergoing autologous costal cartilage harvest surgery during microtia reconstruction. In addition, our retrospective analysis of risk factors for an increase in the body temperature revealed that only the type and duration of surgery were related to an increase in the body temperature.\nDuring surgery under general anesthesia, thermal imbalances are common.[1,7,12] In general, the core body temperature is known to drop by 1 to 2 °C after anesthesia induction.[1] In particular, children show more rapid changes in core body temperature than adults because of their lower body-mass-to-surface ratio. This difficulty in maintaining body temperature increases the risk of side effects even when the body temperature changes for only a short period of time. In both minor and major surgery, thermal imbalances are caused by conductive, convective, evaporative, and radiative heat exchange through the body surface or airways. Inadvertent body temperature changes expose patients to the risk of postoperative complications, including postoperative myocardial ischemia, coagulation disorders, surgical bleeding, wound infection, and delayed recovery from anesthesia.[13–20]\nThe prevalence of intraoperative fever is lower than that of postoperative fever.[21,22] However, our study showed that 28 (73.68%) patients developed a fever. Among them, 12 (31.58%) developed an intraoperative fever, and 16 (42.10%) developed a postoperative fever, but all regained normal body temperatures within POD 3. 
Similarly, a prospective study of 81 patients with postoperative fever found that 80% of patients with a fever had no infection on POD 1.[21] Another study also showed that early fevers emerging between POD 1 and POD 4 rarely represented an infection.[23] In most patients, the fever resolves during the postoperative period in the absence of continuous surgical trauma, and a benign course can be expected.\nSpecifically, the body temperature of patients undergoing embedding surgery was significantly higher than that of patients undergoing elevation surgery from 1 hour after the start of the surgery to 1 day after the surgery. Moreover, the body temperature increased between 1 and 2 hours after the start of the surgery, that is, at the time point when the costal cartilage was harvested. The incidence of early postoperative fever varies depending on the type and duration of surgery, the patient’s age, preexisting inflammation, and the surgical site.[24–27] In our study, body temperature during surgery was related to the type and duration of surgery and was not related to the warming method.\nIn this study, pediatric patients undergoing elevation surgery experienced a decrease in their body temperature during the surgery. The type of surgery and the temperature of the operating room are the main factors affecting the decrease in the core body temperature in pediatric patients.[14] Moreover, core body temperatures may be less stable in pediatric patients; hence, they can decrease more significantly at 1 hour after anesthesia induction than at the baseline.[28]\nThis study has limitations. Because this is a retrospective study, we were unable to obtain laboratory data to verify our hypothesis. In the future, prospective studies are needed, including confirmation of the levels of proinflammatory cytokines (e.g., TNF-α and IL-6).[29,30]\nIn addition, the EM group had a longer operation time (EM group: 6.62 ± 1.33 hours; EL group: 4.27 ± 1.33 hours), which may have affected the increase in the patients’ body temperature. However, the EM group showed a higher body temperature than the EL group from 1 hour to 9 hours after the start of surgery (Fig. 2). Therefore, as hypothesized in this study, the type of surgery itself is thought to be the cause of the increase in body temperature.\nThis study reports the findings of pediatric patients who developed unintentional hypothermia and hyperthermia while undergoing microtia reconstruction surgery, including both embedding and elevation. Since thermal disturbances are associated with serious consequences, the body temperature should be monitored, and efforts should be made to maintain a normal body temperature during general anesthesia. 
Anesthesiologists should proactively monitor the children’s body temperature during surgery, especially during embedding surgery, and should use various methods to avoid gross disturbances in the children’s body temperature.", "Conceptualization: Piao Longhao, Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu.\nData curation: Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu, Dahyeon Kim.\nFormal analysis: Seung Zhoo Yoon, Yoon Ji Choi, Dahyeon Kim.\nFunding acquisition: Seung Zhoo Yoon.\nInvestigation: Seung Zhoo Yoon.\nMethodology: Piao Longhao, Seung Zhoo Yoon, Yoon Ji Choi, Guo-Shan Xu, Choon-Hak Lim.\nResources: Yoon Ji Choi.\nSupervision: Seung Zhoo Yoon, Yoon Ji Choi, Choon-Hak Lim.\nValidation: Yoon Ji Choi.\nVisualization: Yoon Ji Choi.\nWriting – original draft: Piao Longhao, Yoon Ji Choi, Guo-Shan Xu.\nWriting – review & editing: Piao Longhao, Yoon Ji Choi, Guo-Shan Xu." ]
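As an editorial illustration of the analysis plan in Section 2.5 above, the sketch below shows how such between-group comparisons could be run. It is not the authors' SPSS workflow: the temperature values, the fever counts, and the SciPy-based implementation are assumptions made purely for demonstration.

```python
# Minimal sketch of the group comparisons described in Section 2.5
# (illustrative data only; not the study dataset or the authors' SPSS analysis).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
temp_em = rng.normal(37.6, 0.5, 38)   # hypothetical peak temperatures, Group EM (degrees C)
temp_el = rng.normal(36.2, 0.4, 38)   # hypothetical peak temperatures, Group EL (degrees C)

# Shapiro-Wilk normality check decides between the parametric and non-parametric test.
normal = all(stats.shapiro(x)[1] > 0.05 for x in (temp_em, temp_el))
if normal:
    stat, p = stats.ttest_ind(temp_em, temp_el)       # independent t test
else:
    stat, p = stats.mannwhitneyu(temp_em, temp_el)    # Mann-Whitney U test
print(f"continuous outcome: statistic = {stat:.2f}, p = {p:.4f}")

# Frequency comparison (fever vs no fever per group): chi-square test,
# or Fisher's exact test when any expected cell count falls below 5.
fever_table = np.array([[26, 12],    # Group EM: fever / no fever (made-up counts)
                        [2, 36]])    # Group EL: fever / no fever
chi2, p_freq, dof, expected = stats.chi2_contingency(fever_table)
if (expected < 5).any():
    _, p_freq = stats.fisher_exact(fever_table)
print(f"fever frequency: p = {p_freq:.4f}")
```

When interpreting `p` here, the stricter P <.001 threshold stated in Section 2.5 would simply replace the conventional 0.05 cut-off.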
[ "intro", "methods", null, null, "methods", null, null, "results", "subjects", null, null, "discussion", null ]
[ "body temperature", "costal cartilage harvest", "microtia" ]
Serum leptin differs in children with obstructive sleep apnea: A meta-analysis and PRISMA compliant article.
36254000
Obstructive sleep apnea (OSA) has been proposed as an independent cardiovascular risk factor, but the mechanisms underlying cardiovascular disease are far from being completely elucidated. Leptin, an inflammatory cytokine produced by adipocytes, contributes to the modulation of metabolism, respiratory control, and inflammation, which are factors associated with cardiovascular disease. Serum levels of leptin in children with OSA have shown conflicting results in previous studies.
BACKGROUND
We performed a meta-analysis, following the PRISMA guidelines, to clarify serum leptin levels in children with OSA. PubMed, Embase, and Web of Science were systematically searched for relevant studies, which were then independently screened by two researchers, and the data were analyzed with STATA version 12.0.
METHODS
In a total of 5 articles including 469 participants, the data analysis showed that serum leptin levels were elevated in children with OSA (MD, 6.36; 95% CI, 0.24-12.49, P < .001), compared to the control group. Subgroup analysis were performed based on body mass index. The results of subgroup analysis demonstrated that the serum leptin concentration was correlated with body mass index in children with OSA (MD, 9.70; 95% CI, 0.22-11.18, P < .001).
RESULTS
The serum leptin levels were elevated in children with OSA, compared to the control group. It could add to our developing understanding of the pathogenesis and potential treatments for children with OSA, and help us to recognize the relevance of OSA in determining cardiovascular issues among children.
CONCLUSIONS
[ "Body Mass Index", "Cardiovascular Diseases", "Child", "Cytokines", "Humans", "Leptin", "Sleep Apnea, Obstructive" ]
9575813
1. Introduction
Obstructive sleep apnea (OSA) is a disease that affects patients from infancy to adulthood. It affects 1% to 6% of all children and up to 59% of obese children.[1] Untreated OSA is associated with neurobehavioral problems, decreased attention, disturbed emotional regulation, decreased academic performance, nighttime enuresis, impaired growth, and so on.[2,3] Finally, OSA has a relevant impact on the cardiovascular system in children.[4] Hence, early diagnosis of OSA may reduce the occurrence of systemic complications. At the same time, it is also important to explore the mechanisms causing cardiovascular disease in children with OSA. Leptin is an adipocyte-derived hormone that regulates energy expenditure and food intake and can be found in the circulation.[5] Leptin, particularly in the context of hyperleptinemia, exerts detrimental effects on cardiovascular function and promotes adverse outcomes in cardiovascular disorders.[6] Several studies have reported an association between OSA and leptin.[7–9] Dalesio et al showed that leptin was significantly higher in the obese/OSA group than in the control or OSA-only group.[10] The association between OSA and serum leptin levels is intricate and multidirectional, since leptin levels can also be affected by obesity alone.[10] It has been reported that leptin may play a role in the early diagnosis of OSA.[11] Alterations in leptin levels are associated with the risk of cardiovascular diseases.[12] However, contrary results were reported by Li et al, who found that leptin levels were not different between OSA and non-OSA groups.[13] The evidence on serum leptin levels in children with OSA therefore remains inconclusive. In the present study, the databases of PubMed, Embase, and Web of Science were searched for relevant publications and a meta-analysis was performed. We tried to clarify whether serum leptin levels are elevated in children with OSA.
2. Methods
[SUBTITLE] 2.1. Search strategy [SUBSECTION] We searched for English articles included in PubMed, Embase, and Web of Science. Search terms included the following key words: (obstructive sleep apnea hypopnea syndrome or sleep apnea or obstructive sleep apnea or obstructive sleep hypopnea or sleep-disordered breathing) and (adipokines or leptin) and (child or preschool children or teenager or adolescent). Potentially relevant articles were evaluated for inclusion against pre-specified eligibility and exclusion criteria. [SUBTITLE] 2.2. Selection criteria [SUBSECTION] Eligible studies for the meta-analysis were required to meet the following inclusion criteria: participants at an age of 18 years or younger; serum leptin concentration was measured from morning fasting venous blood following overnight polysomnography; subjects received monitoring by polysomnography, and OSA was diagnosed if they had an obstructive apnea index ≥1. The exclusion criteria were the following: studies without sufficient data for meta-analysis; abstracts, reviews, letters, and case reports. [SUBTITLE] 2.3. Statistical analysis [SUBSECTION] Statistical analyses were performed using STATA version 12.0. The weighted mean difference (WMD) and 95% confidence interval (CI) were used to present the statistical results for continuous outcomes, and an inverse variance method was used for continuous variables.[14] The level of statistical significance was set at P < .05. Statistical heterogeneity was assessed on the basis of the I square (I2) value; P < .10 was considered to indicate statistically significant heterogeneity. An I2 value above 75% indicated high heterogeneity, an I2 value between 50% and 75% moderate heterogeneity, and an I2 value between 25% and 50% low heterogeneity. A result was considered homogeneous when the I2 value was <25%.[15] If I2 <50%, the studies were considered homogeneous or to have low heterogeneity, and the fixed effects model was used to pool the results. If I2 >50%, the studies were considered moderately or highly heterogeneous, and the random effects model was used to pool the data.[16,17] Subgroup analysis was done to assess the impact of body mass index (BMI ≥25 vs <25) and apnea-hypopnea index (AHI ≥10 vs <10). Sensitivity analysis was used to evaluate the stability of the meta-analysis results. Potential publication bias was evaluated using the funnel plot,[18] the Begg test,[19] and the Egger test.[18]
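To make the pooling procedure described above more concrete, here is a rough numerical sketch of inverse-variance weighting, Cochran's Q, I², and a DerSimonian–Laird random-effects model. The study-level mean differences and standard errors are placeholders, not the values from Table 1, and this is not the authors' STATA code.

```python
# Illustrative random-effects pooling of mean differences (placeholder inputs).
import numpy as np
from scipy import stats

md = np.array([4.2, 9.1, 1.8, 12.5, 6.0])   # hypothetical study mean differences (ng/mL)
se = np.array([1.9, 2.4, 1.1, 3.0, 2.2])    # hypothetical standard errors

w = 1.0 / se**2                              # inverse-variance (fixed-effect) weights
fixed = np.sum(w * md) / np.sum(w)

q = np.sum(w * (md - fixed) ** 2)            # Cochran's Q
df_q = len(md) - 1
i2 = max(0.0, (q - df_q) / q) * 100          # I-squared as a percentage

c = np.sum(w) - np.sum(w**2) / np.sum(w)     # DerSimonian-Laird between-study variance
tau2 = max(0.0, (q - df_q) / c)
w_re = 1.0 / (se**2 + tau2)                  # random-effects weights
pooled = np.sum(w_re * md) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
ci_low, ci_high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
p = 2 * stats.norm.sf(abs(pooled / se_pooled))

print(f"I2 = {i2:.0f}%, pooled MD = {pooled:.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f}), p = {p:.3f}")
```

If I² fell below 50%, the fixed-effect estimate (`fixed` above) would be reported instead, per the rule stated in Section 2.3.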
3.1. Search results
[SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall, 40 relevant articles were extracted from the databases, and 19 articles were identified as potentially relevant after screening of titles and abstracts against the inclusion and exclusion criteria. We retrieved the full texts of these articles and excluded several of them for the following reasons: 5 had no control group or used a control group defined as AHI ≥5 events/hour in children; 2 had no basic information; 2 included patients older than 18 years; and 5 were unrelated to the topic. The steps of the literature search are detailed in Figure 1. Flow diagram indicating the literature selection process and results included in the meta-analyses. Finally, five studies, covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, and age, and the information on BMI, AHI, and leptin in each study is given in Table 1. All values are expressed as mean (± standard deviation). Characteristics of included studies and participants’ characteristics of included studies. AHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation. [SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The value of I2 was 76%, indicating that the studies were highly heterogeneous. Therefore, the random effects model was used to combine effect sizes. The meta-analysis showed that serum leptin levels in the OSA group were 6.36 ng/mL higher than those in the control group (95%CI, 0.24–12.49, P = .04; Figure 2). Forest plot and 95%CI for serum leptin levels in the OSA group compared with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea. [SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported the BMI values for each patient. Therefore, we performed a subgroup analysis of the articles according to obesity or overweight status. The total WMD in the studies with an average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more markedly in the obese OSA patients. We also performed a separate meta-analysis regarding the AHI of the patients with OSA. This analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19). Subgroup analysis based on body mass index (BMI) >25. [SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effect of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we re-reviewed the included studies and ultimately retained all of them; the pooled effect size of the meta-analysis was stable and reliable. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect. [SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used the Begg test (P = .707) and the Egger test (P = .383) to assess publication bias in our study. The results were symmetric, indicating no evidence of publication bias among the included studies. [SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data were based on previously published articles, and no original clinical data were collected or utilized.
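For completeness, the funnel-plot asymmetry checks cited above (Section 3.1.5) can be illustrated with a small sketch of Egger's regression test, which regresses the standard normal deviate of each study on its precision and tests whether the intercept differs from zero. The inputs below are placeholders, and the authors' STATA implementation may differ in detail.

```python
# Illustrative Egger regression test for small-study effects (placeholder inputs).
import numpy as np
import statsmodels.api as sm

md = np.array([4.2, 9.1, 1.8, 12.5, 6.0])   # hypothetical study mean differences
se = np.array([1.9, 2.4, 1.1, 3.0, 2.2])    # hypothetical standard errors

snd = md / se                                # standard normal deviate per study
precision = 1.0 / se

fit = sm.OLS(snd, sm.add_constant(precision)).fit()
intercept, p_intercept = fit.params[0], fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, p = {p_intercept:.3f}")  # p > .05 suggests no asymmetry signal
```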
null
null
[ "2.1. Search strategy", "2.2. Selection criteria", "2.3. Statistical analysis", "3. Results", "3.1.1. Characteristics of the eligible studies", "3.1.2. Pooled analysis", "3.1.3. Subgroup analysis - BMI", "3.1.4. Sensitivity analysis", "3.1.5. Publication bias", "3.1.6. Ethics", "Author contributions", "Correction" ]
[ "We searched for English articles included in PubMed, Embase, and Web of Science. Search terms included the following key words: (obstructive sleep apnea hypopnea syndrome or sleep apnea or obstructive sleep apnea or obstructive sleep hypopnea or sleep-disordered breathing) and (adipokines or leptin) and (child or preschool children or teenager or adolescent). Potentially relevant articles were evaluated for inclusion against pre-specified eligibility and exclusion criteria.", "Eligible studies for the meta-analysis were required to meet the following inclusion criteria: participants at an age of 18 years or younger; serum leptin concentration was measured from morning fasting venous blood following overnight polysomnography; subjects received monitoring by polysomnography, and OSA was diagnosed if they had an obstructive apnea index ≥1. The exclusion criteria were the following: studies without sufficient data for meta-analysis; abstracts, reviews, letters, and case reports.", "Statistical analyses were performed by using STATA version 12.0. Weighted mean difference (WMD) and 95% confidence interval (CI) were used to present the statistical results for continuous outcomes. And an inverse variance method was used for continuous variables.[14] The level of statistical significance was set at P < .05.\nStatistical heterogeneity was assessed on the basis of I square (I2) value. P < .10 was considered statistical heterogeneity to be statistically significant. An I2 value above 75% indicated high heterogeneity, an I2 value between 50% and 75% moderate heterogeneity, and an I2 value between 25% and 50% low heterogeneity. A result was believed to be homogeneous when an I2 value was <25%.[15] If I2 <50%, the study was believed to be homogeneous or to have low heterogeneity, and the fixed effects model was used to pool the results. If I2 >50%, the study was believed to be moderately or highly heterogeneous, and the random effect model was used to pool the data.[16,17]\nSubgroup analysis was done to assess the impact of body mass index (BMI ≥25 vs <25), and apnea-hypopnea index (AHI ≥10 vs <10). Sensitivity analysis was used to evaluate the stability of the meta-analysis results. Potential publication bias was evaluated by using funnel plot,[18] the Begg test,[19] and the test of Egger.[18]", "[SUBTITLE] 3.1. Search results [SUBSECTION] [SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. And the information of BMI, AHI and leptin of each study are given in Table 1. 
All value expressed as mean (± standard deviation).\nCharacteristics of included studies and participants’ characteristics of included studies.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\nOverall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. And the information of BMI, AHI and leptin of each study are given in Table 1. All value expressed as mean (± standard deviation).\nCharacteristics of included studies and participants’ characteristics of included studies.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\n[SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The value of I2 was 76%, indicating that the studies were high heterogeneous. Therefore, the random effects model was used to combine effect size. Meta-analysis exhibited that serum leptin levels in OSA group were 6.36 ng/mL higher than that in control group (95%CI, 0.24–12.49, P = .04) Figure 2.\nForest plot and 95%CI for serum leptin levels in the OSA group in control with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\nThe value of I2 was 76%, indicating that the studies were high heterogeneous. Therefore, the random effects model was used to combine effect size. Meta-analysis exhibited that serum leptin levels in OSA group were 6.36 ng/mL higher than that in control group (95%CI, 0.24–12.49, P = .04) Figure 2.\nForest plot and 95%CI for serum leptin levels in the OSA group in control with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\n[SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported the BMI values for each patient. Therefore, we performed a subgroup analysis of the articles according to obese or overweight. The total WMD in the studies with average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more significantly in the obese OSA patients. We also performed a special meta-analysis regarding the AHI (P > .05) of the patients with OSA. Our analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19)\nSubgroup analysis based on body mass index (BMI) >25.\nAll included studies reported the BMI values for each patient. Therefore, we performed a subgroup analysis of the articles according to obese or overweight. 
The total WMD in the studies with average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more significantly in the obese OSA patients. We also performed a special meta-analysis regarding the AHI (P > .05) of the patients with OSA. Our analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19)\nSubgroup analysis based on body mass index (BMI) >25.\n[SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we again reviewed the inclusion literature, and ultimately, we remained these studies, the pooled effect size of the meta-analysis results was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\nWe conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we again reviewed the inclusion literature, and ultimately, we remained these studies, the pooled effect size of the meta-analysis results was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\n[SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used Begg tests (P = .707) and Egger tests (P = .383) to assess publication bias in our study. The results were quite symmetric, indicating that the analysis did not include publication bias among the studies.\nWe used Begg tests (P = .707) and Egger tests (P = .383) to assess publication bias in our study. The results were quite symmetric, indicating that the analysis did not include publication bias among the studies.\n[SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data was based on previous published articles, and no original clinical data was collected or utilized.\nIn this study, ethical approval was not necessary because the included data was based on previous published articles, and no original clinical data was collected or utilized.\n[SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. 
And the information of BMI, AHI and leptin of each study are given in Table 1. All value expressed as mean (± standard deviation).\nCharacteristics of included studies and participants’ characteristics of included studies.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\nOverall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. And the information of BMI, AHI and leptin of each study are given in Table 1. All value expressed as mean (± standard deviation).\nCharacteristics of included studies and participants’ characteristics of included studies.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\n[SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The value of I2 was 76%, indicating that the studies were high heterogeneous. Therefore, the random effects model was used to combine effect size. Meta-analysis exhibited that serum leptin levels in OSA group were 6.36 ng/mL higher than that in control group (95%CI, 0.24–12.49, P = .04) Figure 2.\nForest plot and 95%CI for serum leptin levels in the OSA group in control with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\nThe value of I2 was 76%, indicating that the studies were high heterogeneous. Therefore, the random effects model was used to combine effect size. Meta-analysis exhibited that serum leptin levels in OSA group were 6.36 ng/mL higher than that in control group (95%CI, 0.24–12.49, P = .04) Figure 2.\nForest plot and 95%CI for serum leptin levels in the OSA group in control with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\n[SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported the BMI values for each patient. Therefore, we performed a subgroup analysis of the articles according to obese or overweight. The total WMD in the studies with average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more significantly in the obese OSA patients. We also performed a special meta-analysis regarding the AHI (P > .05) of the patients with OSA. Our analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19)\nSubgroup analysis based on body mass index (BMI) >25.\nAll included studies reported the BMI values for each patient. 
Therefore, we performed a subgroup analysis of the articles according to obese or overweight. The total WMD in the studies with average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more significantly in the obese OSA patients. We also performed a special meta-analysis regarding the AHI (P > .05) of the patients with OSA. Our analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19)\nSubgroup analysis based on body mass index (BMI) >25.\n[SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we again reviewed the inclusion literature, and ultimately, we remained these studies, the pooled effect size of the meta-analysis results was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\nWe conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we again reviewed the inclusion literature, and ultimately, we remained these studies, the pooled effect size of the meta-analysis results was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\n[SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used Begg tests (P = .707) and Egger tests (P = .383) to assess publication bias in our study. The results were quite symmetric, indicating that the analysis did not include publication bias among the studies.\nWe used Begg tests (P = .707) and Egger tests (P = .383) to assess publication bias in our study. The results were quite symmetric, indicating that the analysis did not include publication bias among the studies.\n[SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data was based on previous published articles, and no original clinical data was collected or utilized.\nIn this study, ethical approval was not necessary because the included data was based on previous published articles, and no original clinical data was collected or utilized.", "Overall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. 
And the information of BMI, AHI and leptin of each study are given in Table 1. All value expressed as mean (± standard deviation).\nCharacteristics of included studies and participants’ characteristics of included studies.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = Sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.", "The value of I2 was 76%, indicating that the studies were high heterogeneous. Therefore, the random effects model was used to combine effect size. Meta-analysis exhibited that serum leptin levels in OSA group were 6.36 ng/mL higher than that in control group (95%CI, 0.24–12.49, P = .04) Figure 2.\nForest plot and 95%CI for serum leptin levels in the OSA group in control with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.", "All included studies reported the BMI values for each patient. Therefore, we performed a subgroup analysis of the articles according to obese or overweight. The total WMD in the studies with average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3). The results revealed that the leptin level was elevated more significantly in the obese OSA patients. We also performed a special meta-analysis regarding the AHI (P > .05) of the patients with OSA. Our analysis showed that the leptin level was not significantly correlated with the AHI of the patients (MD, 6.15; 95%CI, −2.97–15.28, P = .19)\nSubgroup analysis based on body mass index (BMI) >25.", "We conducted a series of sensitivity analyses to explore the stability of the pooled data. The sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect (Fig. 4). Based on the results of the sensitivity analysis, we again reviewed the inclusion literature, and ultimately, we remained these studies, the pooled effect size of the meta-analysis results was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.", "We used Begg tests (P = .707) and Egger tests (P = .383) to assess publication bias in our study. The results were quite symmetric, indicating that the analysis did not include publication bias among the studies.", "In this study, ethical approval was not necessary because the included data was based on previous published articles, and no original clinical data was collected or utilized.", "Qing Cheng, designed study, collected data, analyzed data, wrote article, revised article; Xun Niu, designed study, collected data, wrote article, analyzed data, revised article; Yao He, designed study, wrote article, analyzed data, revised article; Liu-Qing Zhou, designed study, analyzed data, revised article; Yao Hu, designed study, revised article.\nData curation: Yao Hu, Xun Niu.\nFormal analysis: Xun Niu.\nSoftware: Yao Hu, Qing Cheng.\nWriting – original draft: Yao He.\nWriting – review & editing: Liu-Qing Zhou, Qing Cheng.", "Qing Cheng’s name has been corrected from Qing Chen." ]
[ null, null, null, "results", null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Search strategy", "2.2. Selection criteria", "2.3. Statistical analysis", "3. Results", "3.1. Search results", "3.1.1. Characteristics of the eligible studies", "3.1.2. Pooled analysis", "3.1.3. Subgroup analysis - BMI", "3.1.4. Sensitivity analysis", "3.1.5. Publication bias", "3.1.6. Ethics", "4. Discussion", "Author contributions", "Correction" ]
[ "Obstructive sleep apnea (OSA) is a disease that affect patients from infancy to adulthood. It affects 1% to 6% of all children and almost up to 59% of obese children.[1] Untreated OSA is associated with neurobehavioral problems, decreased attention, disturbed emotional regulation, decreased academic performance, nighttime enuresis, impaired growth and so on.[2,3] At last, OSA has a relevant impact on the cardiovascular system in children.[4] Hence, early diagnosis of OSA may reduce the occurrence of systemic complications. At the same time, it is also important to explore the mechanisms of causing cardiovascular disease in children with OSA.\nLeptin is an adipocyte-derived hormone regulating energy expenditure and food intake and can be found in the circulation.[5] leptin, particularly in the context of hyperleptinemia, exerts detrimental effects in cardiovascular function and promotes adverse outcomes in cardiovascular disorders.[6] Several studies reported the association between OSA and leptin.[7–9] Dalesio et al showed that leptin was significantly higher in the obese/OSA group than in the control or OSA-only group.10 The association between OSA and serum leptin levels is intricate and multidirectional since leptin levels can also be affected by obesity alone.[10] It is reported that leptin may play roles in early diagnosis of OSA.[11] Alteration in the levels of leptin is associated with the risk of cardiovascular diseases.[12] However, contrary results was showed by Li et al that leptin levels were not different between OSA and non-OSA groups.[13] The variability of serum leptin levels in children with OSA remains clearly inconclusive. In the present study, the databases of PubMed, Embase and Web of Science were searched for relevant publications and a meta-analysis was performed. We tried to clarify whether serum leptin level is elevated in children with OSA.", "[SUBTITLE] 2.1. Search strategy [SUBSECTION] We searched for English articles included in PubMed, Embase, and Web of Science. Search terms included the following key words: (obstructive sleep apnea hypopnea syndrome or sleep apnea or obstructive sleep apnea or obstructive sleep hypopnea or sleep-disordered breathing) and (adipokines or leptin) and (child or preschool children or teenager or adolescent). Potentially relevant articles were evaluated for inclusion against pre-specified eligibility and exclusion criteria.\nWe searched for English articles included in PubMed, Embase, and Web of Science. Search terms included the following key words: (obstructive sleep apnea hypopnea syndrome or sleep apnea or obstructive sleep apnea or obstructive sleep hypopnea or sleep-disordered breathing) and (adipokines or leptin) and (child or preschool children or teenager or adolescent). Potentially relevant articles were evaluated for inclusion against pre-specified eligibility and exclusion criteria.\n[SUBTITLE] 2.2. Selection criteria [SUBSECTION] Eligible studies for the meta-analysis were required to meet the following inclusion criteria: participants at an age of 18 years or younger; serum leptin concentration was measured from morning fasting venous blood following overnight polysomnography; subjects received monitoring by polysomnography, and OSA was diagnosed if they had an obstructive apnea index ≥1. 
The exclusion criteria were the following: studies without sufficient data for meta-analysis; abstracts, reviews, letters, and case reports.\nEligible studies for the meta-analysis were required to meet the following inclusion criteria: participants at an age of 18 years or younger; serum leptin concentration was measured from morning fasting venous blood following overnight polysomnography; subjects received monitoring by polysomnography, and OSA was diagnosed if they had an obstructive apnea index ≥1. The exclusion criteria were the following: studies without sufficient data for meta-analysis; abstracts, reviews, letters, and case reports.\n[SUBTITLE] 2.3. Statistical analysis [SUBSECTION] Statistical analyses were performed by using STATA version 12.0. Weighted mean difference (WMD) and 95% confidence interval (CI) were used to present the statistical results for continuous outcomes. And an inverse variance method was used for continuous variables.[14] The level of statistical significance was set at P < .05.\nStatistical heterogeneity was assessed on the basis of I square (I2) value. P < .10 was considered statistical heterogeneity to be statistically significant. An I2 value above 75% indicated high heterogeneity, an I2 value between 50% and 75% moderate heterogeneity, and an I2 value between 25% and 50% low heterogeneity. A result was believed to be homogeneous when an I2 value was <25%.[15] If I2 <50%, the study was believed to be homogeneous or to have low heterogeneity, and the fixed effects model was used to pool the results. If I2 >50%, the study was believed to be moderately or highly heterogeneous, and the random effect model was used to pool the data.[16,17]\nSubgroup analysis was done to assess the impact of body mass index (BMI ≥25 vs <25), and apnea-hypopnea index (AHI ≥10 vs <10). Sensitivity analysis was used to evaluate the stability of the meta-analysis results. Potential publication bias was evaluated by using funnel plot,[18] the Begg test,[19] and the test of Egger.[18]\nStatistical analyses were performed by using STATA version 12.0. Weighted mean difference (WMD) and 95% confidence interval (CI) were used to present the statistical results for continuous outcomes. And an inverse variance method was used for continuous variables.[14] The level of statistical significance was set at P < .05.\nStatistical heterogeneity was assessed on the basis of I square (I2) value. P < .10 was considered statistical heterogeneity to be statistically significant. An I2 value above 75% indicated high heterogeneity, an I2 value between 50% and 75% moderate heterogeneity, and an I2 value between 25% and 50% low heterogeneity. A result was believed to be homogeneous when an I2 value was <25%.[15] If I2 <50%, the study was believed to be homogeneous or to have low heterogeneity, and the fixed effects model was used to pool the results. If I2 >50%, the study was believed to be moderately or highly heterogeneous, and the random effect model was used to pool the data.[16,17]\nSubgroup analysis was done to assess the impact of body mass index (BMI ≥25 vs <25), and apnea-hypopnea index (AHI ≥10 vs <10). Sensitivity analysis was used to evaluate the stability of the meta-analysis results. Potential publication bias was evaluated by using funnel plot,[18] the Begg test,[19] and the test of Egger.[18]", "We searched for English articles included in PubMed, Embase, and Web of Science. 
Search terms included the following key words: (obstructive sleep apnea hypopnea syndrome or sleep apnea or obstructive sleep apnea or obstructive sleep hypopnea or sleep-disordered breathing) and (adipokines or leptin) and (child or preschool children or teenager or adolescent). Potentially relevant articles were evaluated for inclusion against pre-specified eligibility and exclusion criteria.", "Eligible studies for the meta-analysis were required to meet the following inclusion criteria: participants at an age of 18 years or younger; serum leptin concentration was measured from morning fasting venous blood following overnight polysomnography; subjects received monitoring by polysomnography, and OSA was diagnosed if they had an obstructive apnea index ≥1. The exclusion criteria were the following: studies without sufficient data for meta-analysis; abstracts, reviews, letters, and case reports.", "Statistical analyses were performed by using STATA version 12.0. Weighted mean difference (WMD) and 95% confidence interval (CI) were used to present the statistical results for continuous outcomes. And an inverse variance method was used for continuous variables.[14] The level of statistical significance was set at P < .05.\nStatistical heterogeneity was assessed on the basis of I square (I2) value. P < .10 was considered statistical heterogeneity to be statistically significant. An I2 value above 75% indicated high heterogeneity, an I2 value between 50% and 75% moderate heterogeneity, and an I2 value between 25% and 50% low heterogeneity. A result was believed to be homogeneous when an I2 value was <25%.[15] If I2 <50%, the study was believed to be homogeneous or to have low heterogeneity, and the fixed effects model was used to pool the results. If I2 >50%, the study was believed to be moderately or highly heterogeneous, and the random effect model was used to pool the data.[16,17]\nSubgroup analysis was done to assess the impact of body mass index (BMI ≥25 vs <25), and apnea-hypopnea index (AHI ≥10 vs <10). Sensitivity analysis was used to evaluate the stability of the meta-analysis results. Potential publication bias was evaluated by using funnel plot,[18] the Begg test,[19] and the test of Egger.[18]", "[SUBTITLE] 3.1. Search results [SUBSECTION] [SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall 40 relevant articles were extracted from the databases. And 19 articles were identified to be relevant by roughly screened in terms of abstract and title against inclusion and exclusion criteria. We retrieved the full texts of the articles and excluded several full texts for following reasons: 5 had no control or control group was selected as AHI ≥5 events/hour in children; two had no basic information; 2 records included patients someone older than 18 years; 5 studies unrelated to topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants, were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies were summarized, such as author, year, country, age, and so on. And the information of BMI, AHI and leptin of each study are given in Table 1. 
"[SUBTITLE] 3.1. Search results [SUBSECTION] [SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall, 40 relevant articles were extracted from the databases, and 19 were identified as potentially relevant after screening titles and abstracts against the inclusion and exclusion criteria. We retrieved the full texts of these articles and excluded several for the following reasons: 5 had no control group or used a control group defined by an AHI ≥5 events/hour in children; 2 had no basic information; 2 included patients older than 18 years; and 5 were unrelated to the topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies (author, year, country, age, and so on) were summarized, and the BMI, AHI and leptin values of each study are given in Table 1. All values are expressed as mean (± standard deviation).\nCharacteristics of the included studies and of their participants.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\n[SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The I2 value was 76%, indicating that the studies were highly heterogeneous; the random-effects model was therefore used to combine effect sizes. The meta-analysis showed that serum leptin levels in the OSA group were 6.36 ng/mL higher than in the control group (95%CI, 0.24–12.49, P = .04) (Figure 2).\nForest plot and 95%CI for serum leptin levels in the OSA group compared with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\n[SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported BMI values, so we performed a subgroup analysis according to obesity or overweight. The total WMD in the studies with an average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3), indicating that leptin levels were elevated more markedly in obese OSA patients. We also performed a separate analysis with respect to the AHI of the patients with OSA (P > .05); the leptin level was not significantly associated with AHI (MD, 6.15; 95%CI, −2.97–15.28, P = .19).\nSubgroup analysis based on body mass index (BMI) >25.\n[SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data by evaluating the effect of each single study on the overall estimate (Fig. 4). Based on the results of the sensitivity analysis, we re-reviewed the included literature and ultimately retained these studies; the pooled effect size of the meta-analysis was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\n[SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used the Begg test (P = .707) and the Egger test (P = .383) to assess publication bias. The results were symmetric, indicating no publication bias among the included studies.\n[SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data were based on previously published articles, and no original clinical data were collected or utilized.",
"[SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall, 40 relevant articles were extracted from the databases, and 19 were identified as potentially relevant after screening titles and abstracts against the inclusion and exclusion criteria. We retrieved the full texts of these articles and excluded several for the following reasons: 5 had no control group or used a control group defined by an AHI ≥5 events/hour in children; 2 had no basic information; 2 included patients older than 18 years; and 5 were unrelated to the topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies (author, year, country, age, and so on) were summarized, and the BMI, AHI and leptin values of each study are given in Table 1. All values are expressed as mean (± standard deviation).\nCharacteristics of the included studies and of their participants.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\n[SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The I2 value was 76%, indicating that the studies were highly heterogeneous; the random-effects model was therefore used to combine effect sizes. The meta-analysis showed that serum leptin levels in the OSA group were 6.36 ng/mL higher than in the control group (95%CI, 0.24–12.49, P = .04) (Figure 2).\nForest plot and 95%CI for serum leptin levels in the OSA group compared with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\n[SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported BMI values, so we performed a subgroup analysis according to obesity or overweight. The total WMD in the studies with an average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3), indicating that leptin levels were elevated more markedly in obese OSA patients. We also performed a separate analysis with respect to the AHI of the patients with OSA (P > .05); the leptin level was not significantly associated with AHI (MD, 6.15; 95%CI, −2.97–15.28, P = .19).\nSubgroup analysis based on body mass index (BMI) >25.\n[SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data by evaluating the effect of each single study on the overall estimate (Fig. 4). Based on the results of the sensitivity analysis, we re-reviewed the included literature and ultimately retained these studies; the pooled effect size of the meta-analysis was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\n[SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used the Begg test (P = .707) and the Egger test (P = .383) to assess publication bias. The results were symmetric, indicating no publication bias among the included studies.\n[SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data were based on previously published articles, and no original clinical data were collected or utilized.",
"[SUBTITLE] 3.1.1. Characteristics of the eligible studies [SUBSECTION] Overall, 40 relevant articles were extracted from the databases, and 19 were identified as potentially relevant after screening titles and abstracts against the inclusion and exclusion criteria. We retrieved the full texts of these articles and excluded several for the following reasons: 5 had no control group or used a control group defined by an AHI ≥5 events/hour in children; 2 had no basic information; 2 included patients older than 18 years; and 5 were unrelated to the topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies (author, year, country, age, and so on) were summarized, and the BMI, AHI and leptin values of each study are given in Table 1. All values are expressed as mean (± standard deviation).\nCharacteristics of the included studies and of their participants.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.\n[SUBTITLE] 3.1.2. Pooled analysis [SUBSECTION] The I2 value was 76%, indicating that the studies were highly heterogeneous; the random-effects model was therefore used to combine effect sizes. The meta-analysis showed that serum leptin levels in the OSA group were 6.36 ng/mL higher than in the control group (95%CI, 0.24–12.49, P = .04) (Figure 2).\nForest plot and 95%CI for serum leptin levels in the OSA group compared with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.\n[SUBTITLE] 3.1.3. Subgroup analysis - BMI [SUBSECTION] All included studies reported BMI values, so we performed a subgroup analysis according to obesity or overweight. The total WMD in the studies with an average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3), indicating that leptin levels were elevated more markedly in obese OSA patients. We also performed a separate analysis with respect to the AHI of the patients with OSA (P > .05); the leptin level was not significantly associated with AHI (MD, 6.15; 95%CI, −2.97–15.28, P = .19).\nSubgroup analysis based on body mass index (BMI) >25.\n[SUBTITLE] 3.1.4. Sensitivity analysis [SUBSECTION] We conducted a series of sensitivity analyses to explore the stability of the pooled data by evaluating the effect of each single study on the overall estimate (Fig. 4). Based on the results of the sensitivity analysis, we re-reviewed the included literature and ultimately retained these studies; the pooled effect size of the meta-analysis was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.\n[SUBTITLE] 3.1.5. Publication bias [SUBSECTION] We used the Begg test (P = .707) and the Egger test (P = .383) to assess publication bias. The results were symmetric, indicating no publication bias among the included studies.\n[SUBTITLE] 3.1.6. Ethics [SUBSECTION] In this study, ethical approval was not necessary because the included data were based on previously published articles, and no original clinical data were collected or utilized.",
"Overall, 40 relevant articles were extracted from the databases, and 19 were identified as potentially relevant after screening titles and abstracts against the inclusion and exclusion criteria. We retrieved the full texts of these articles and excluded several for the following reasons: 5 had no control group or used a control group defined by an AHI ≥5 events/hour in children; 2 had no basic information; 2 included patients older than 18 years; and 5 were unrelated to the topic. The steps of the literature search are detailed in Figure 1.\nFlow diagram indicating the literature selection process and results included in the meta-analyses.\nFinally, five studies covering data from a total of 469 participants were included in this meta-analysis.[10,13,20–22] The characteristics of the included studies (author, year, country, age, and so on) were summarized, and the BMI, AHI and leptin values of each study are given in Table 1. All values are expressed as mean (± standard deviation).\nCharacteristics of the included studies and of their participants.\nAHI = apnea hypopnea index, BMI = body mass index, CG = control group, n = sample size, OG = obstructive sleep apnea (OSA) group, SD = standard deviation.",
"The I2 value was 76%, indicating that the studies were highly heterogeneous; the random-effects model was therefore used to combine effect sizes. The meta-analysis showed that serum leptin levels in the OSA group were 6.36 ng/mL higher than in the control group (95%CI, 0.24–12.49, P = .04) (Figure 2).\nForest plot and 95%CI for serum leptin levels in the OSA group compared with the control group in the meta-analysis. CI = confidence interval. OSA = obstructive sleep apnea.",
"All included studies reported BMI values, so we performed a subgroup analysis according to obesity or overweight. The total WMD in the studies with an average BMI >25 was significant, with a corresponding value of 7.27 (95%CI, 0.32–13.22, P = .04; Fig. 3), indicating that leptin levels were elevated more markedly in obese OSA patients. We also performed a separate analysis with respect to the AHI of the patients with OSA (P > .05); the leptin level was not significantly associated with AHI (MD, 6.15; 95%CI, −2.97–15.28, P = .19).\nSubgroup analysis based on body mass index (BMI) >25.",
"We conducted a series of sensitivity analyses to explore the stability of the pooled data by evaluating the effect of each single study on the overall estimate (Fig. 4). Based on the results of the sensitivity analysis, we re-reviewed the included literature and ultimately retained these studies; the pooled effect size of the meta-analysis was stable and reliable.\nThe sensitivity analyses were conducted to evaluate the effects of each single study on the overall effect.",
"We used the Begg test (P = .707) and the Egger test (P = .383) to assess publication bias (see the illustrative Egger regression sketch below). The results were symmetric, indicating no publication bias among the included studies.",
"In this study, ethical approval was not necessary because the included data were based on previously published articles, and no original clinical data were collected or utilized.",
"The present meta-analysis of 5 studies evaluated serum leptin levels and compared these values between individuals with OSA and controls. The results showed that serum leptin levels were higher in children with OSA than in controls, and subgroup analysis revealed that the leptin level was significantly correlated with BMI (P < .001), being elevated more markedly in obese OSA patients. The overall heterogeneity was high in this meta-analysis (I2 = 76%); nevertheless, the sensitivity analysis indicated that no single study significantly influenced the pooled WMD, and the Begg test (P = .707) and Egger test (P = .383) showed no publication bias. Therefore, the result of our meta-analysis has relatively strong power. This is consistent with previous findings that leptin levels appear to be determined by the degree of obesity as well as by the severity of OSA, particularly hypoxemia.[8] Leptin is involved in energy expenditure, blood glucose metabolism, inflammatory processes, and regulation of immune function. Animal studies have demonstrated that leptin plays key roles in respiratory control by acting centrally to alter ventilation.[21] Moreover, leptin stimulates oxidative stress and, at the cardiovascular level, promotes vascular inflammation and vascular smooth muscle hypertrophy, all factors that contribute to atherosclerosis, hypertension, coronary heart disease and thrombosis.[12] Numerous clinical studies have also shown that leptin levels are elevated in adults with OSA and are associated with disease severity, age, and BMI.[23–25] However, the consistency of results regarding leptin levels in children with OSA remains to be established. Many studies have reported that patients with OSA have higher leptin levels than control groups.[9–11,13,20,21,26,27] Both sleep deprivation and hypoxemia are thought to be critical causative factors in the OSA-induced impact on leptin levels.[28] Although OSA and obesity are bidirectionally linked, one study demonstrated that serum leptin levels were significantly increased in children with OSA independently of obesity.[9] Moreover, treatment with continuous positive airway pressure (CPAP) and/or nasal corticosteroids in children with OSA led to a significant decrease in leptin, while increases in leptin emerged in those with residual OSA.[11,29] An attractive hypothesis from these studies is that changes in the frequency of intermittent hypoxia might have changed serum leptin.[30] This supports our conclusion from another dimension.\nHowever, there are dissenting findings as well, such as reports that leptin levels did not differ between subjects with and without OSA,[13] and that there was no association between OSA severity and blood leptin levels in a clinical sample of overweight and obese children and adolescents.[31] These results differ from those of this meta-analysis, perhaps because of differences in the included populations and small sample sizes.\nFinally, the present meta-analysis has several limitations. First, because of differences in data reporting and study design, adequate data could not be extracted from all, or even most, of the relevant studies, so the number of included articles was limited. Second, obese children accounted for the majority of patients in the included studies, which reflects the higher incidence of OSA in obese children. Third, the results may be heterogeneous owing to the use of different methods and statistical analyses, sample sizes, and diagnostic work-ups, although, according to Begg's funnel plots and the Egger test, publication bias was not significant. Accordingly, additional studies with larger sample sizes are needed to obtain more representative and precise findings.\nIn conclusion, the meta-analysis demonstrated that serum leptin levels were elevated in children with OSA compared to the control group. This finding could add to our developing understanding of the pathogenesis and potential treatments of OSA in children, and help us to recognize the relevance of OSA in determining cardiovascular issues among children.",
"Qing Cheng, designed study, collected data, analyzed data, wrote article, revised article; Xun Niu, designed study, collected data, wrote article, analyzed data, revised article; Yao He, designed study, wrote article, analyzed data, revised article; Liu-Qing Zhou, designed study, analyzed data, revised article; Yao Hu, designed study, revised article.\nData curation: Yao Hu, Xun Niu.\nFormal analysis: Xun Niu.\nSoftware: Yao Hu, Qing Cheng.\nWriting – original draft: Yao He.\nWriting – review & editing: Liu-Qing Zhou, Qing Cheng.",
"Qing Cheng’s name has been corrected from Qing Chen." ]
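The pooling procedure described in section 2.3 above (inverse-variance weighting of weighted mean differences, Cochran's Q, I2, and a switch to a random-effects model when I2 exceeds 50%) can be illustrated with a short sketch. The code below is not the authors' STATA analysis; it is a minimal Python illustration under those assumptions, and the per-study means, SDs and sample sizes in `example` are hypothetical values used only to make it runnable.

```python
# Minimal sketch (not the authors' code) of inverse-variance pooling of
# weighted mean differences with an I2-based choice of fixed vs random effects.
import math

def pool_wmd(studies):
    """studies: list of (mean_osa, sd_osa, n_osa, mean_ctrl, sd_ctrl, n_ctrl)."""
    effects, variances = [], []
    for m1, s1, n1, m0, s0, n0 in studies:
        effects.append(m1 - m0)                      # mean difference (e.g., ng/mL)
        variances.append(s1**2 / n1 + s0**2 / n0)    # variance of the mean difference
    w = [1.0 / v for v in variances]                 # fixed-effect (inverse-variance) weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    if i2 <= 50:                                     # low heterogeneity -> fixed-effect weights
        weights = w
    else:                                            # moderate/high heterogeneity -> DerSimonian-Laird
        c = sum(w) - sum(wi**2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - df) / c)
        weights = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study summaries (mean, SD, n) for the OSA and control groups:
example = [(20.1, 12.0, 45, 10.3, 8.0, 50), (15.4, 9.5, 60, 13.8, 7.9, 58), (25.0, 14.2, 30, 12.1, 9.0, 35)]
print(pool_wmd(example))
```

The same function returns the I2 value, so the fixed/random decision rule quoted in section 2.3 can be checked directly against the pooled output.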
[ "intro", "methods", null, null, null, "results", "results", null, null, null, null, null, null, "discussion", null, null ]
[ "cardiovascular disease", "children", "leptin", "meta-analysis", "obstructive sleep apnea" ]
Dietary β-carotene and vitamin A and risk of Parkinson disease: A protocol for systematic review and meta-analysis.
36253999
Beneficial effects of dietary β-carotene and vitamin A on Parkinson disease (PD) have been reported, but some studies have yielded conflicting results. Therefore, this meta-analysis investigated the effect of dietary β-carotene and vitamin A on the risk of PD.
BACKGROUND
The following databases were searched for relevant papers published from 1990 to March 28, 2022: PubMed, Embase, Medline, Scopus, Cochrane Library, CNKI, Wanfang Med Online, and Weipu. Studies were included if: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); risk estimates were reported as OR, RR, or HR; β-carotene and vitamin A intake was categorized into three or more quantitative categories; and PD was diagnosed by a neurologist or from hospital records.
METHODS
This meta-analysis included 11 studies (four cohort studies, six case-control studies, and one cross-sectional study). High β-carotene intake was associated with a significantly lower risk of developing PD than low β-carotene intake (pooled OR = 0.83, 95%CI = 0.74-0.94), whereas the risk of PD did not differ significantly between the highest and lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91-1.29).
RESULTS
Dietary β-carotene intake may have a protective effect against PD, whereas dietary vitamin A does not appear to have the same effect. More relevant studies are needed for inclusion in future meta-analyses, as recall bias and selection bias in retrospective and cross-sectional studies can cause misclassification in the assessment of nutrient intake.
CONCLUSIONS
[ "Ascorbic Acid", "Cross-Sectional Studies", "Humans", "Meta-Analysis as Topic", "Parkinson Disease", "Retrospective Studies", "Risk Factors", "Systematic Reviews as Topic", "Vitamin A", "Vitamin E", "beta Carotene" ]
9575799
1. Introduction
Parkinson disease (PD) is the second most common neurodegenerative disorder after Alzheimer disease. The prevalence of PD is estimated to be 0.3% in the general population of industrialized countries, with incidence rates ranging between 8 and 18 per 100,000 person-years.[1] The Global Burden of Disease study indicated that neurological disorders are the leading cause of disability worldwide, and PD is the fastest growing of these disorders. This population is expected to quadruple to almost 12 million by 2040, owing primarily to aging; additional factors, such as longer life expectancy, lower smoking rates, and increased urbanization, might push the burden to over 17 million people. A study of the global, regional and national burden of PD (2016) showed that from 1990 to 2016, the global burden of PD rapidly increased from 2.5 to 6.1 million, and this doubling of the number of people with PD is expected to happen again in the next generation as the population ages and life expectancy rises.[2,3] PD manifests itself in both motor and non-motor symptoms. Tremor, stiffness, slowness, and imbalance are examples of motor symptoms that affect movement and physical tasks. Nonmotor symptoms can affect a variety of organ systems, including the gastrointestinal and genitourinary systems, and usually develop gradually over years before movement symptoms appear. Examples of prodromal nonmotor symptoms include rapid eye movement (REM) sleep behavior disorder, hyposmia, constipation, urinary dysfunction, orthostatic hypotension, excessive daytime drowsiness, anxiety, and depression.[4] The movement disorder in PD is brought on by the death of dopaminergic neurons in the substantia nigra pars compacta (SNc); accumulation of reactive oxygen species (ROS) caused by mitochondrial malfunction or inflammation remains a prominent contributor to dopaminergic neuron degeneration, and there is evidence that oxidative stress is a crucial factor in the complicated degenerative cascade underlying dopaminergic neurodegeneration.[5] Besides the pharmacological treatments that exist to control PD, Parkinson patients frequently need comprehensive additional care to improve their daily quality of life and well-being. Nutrition, in addition to influencing daily illness care, is a possible disease-modifying component: adequate nutrition may help to slow the progression of neurodegeneration, whereas deficient nutrition can aggravate it. Previous epidemiologic research has linked dietary habits to the risk of PD. Consumption of coffee and tea was found to be inversely related to the risk of PD, and smoking, exercise, and physical activity are also protective factors. On the other hand, age, sex, genetic factors, chemical exposures such as pesticides, and high intake of dairy are related to a higher risk of PD. Given the lack of specific treatments to reduce the severity of the disease or halt its progression, the search for natural substances with neuroprotective and anti-inflammatory activities is a priority.[6–9] Dietary β-carotene is a plant pigment often found in orange and green vegetables.[10] β-Carotene is an antioxidant, which plays a part in coping with the oxidative stress that accompanies PD. In photosynthetic organisms, carotenoids protect chlorophyll and mitochondria against oxidative damage. Carotenoids can be converted to vitamin A with the help of the enzyme carotene dioxygenase.
Because of the β-ionone ring in its structure, dietary β-carotene is the most important precursor of vitamin A. Over the previous decade, the ability of carotenoids to protect the nervous system has been demonstrated.[11] Vitamin A is a lipophilic compound that can only be obtained from food. Preformed vitamin A (mostly retinol and retinyl esters) is commonly found in animal-derived diets, while provitamin A (primarily β-carotene and other carotenoids) is absorbed from plant-based diets.[10] There are substantial links between the pathology of PD and the proteins participating in vitamin A metabolism, and altered vitamin A metabolism and bioavailability tend to result in oxidative stress, neuroinflammation, dopaminergic cell death, and disturbances of biological rhythms and endocrine homeostasis. Hence, vitamin A is a nutritional factor that may lie at the crossroads of different environmental and hereditary elements of PD.[12] Preclinical evidence has demonstrated that vitamin A and related pathways regulate a variety of processes implicated in the etiology of PD.[13–15] A meta-analysis in 2014 concluded that the available epidemiological data were insufficient to reach firm conclusions on the link between vitamin A and β-carotene levels in blood, or their dietary intakes, and the risk of PD.[16] Since the previous meta-analysis in 2013, new data have become available from a large prospective cohort and from case–control studies in which the antioxidant impact of β-carotene and vitamin A was explored, and some of these studies are inconsistent with the earlier results.[17–19] We therefore included these more recent studies in our meta-analysis of the impact of dietary intakes of β-carotene and vitamin A on PD.
2. Methods
This meta-analysis has been reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement[20] and has been registered in PROSPERO (registration ID: CRD42022320314). Because the analysis was based on previously published articles, no ethical approval or patient consent was required. [SUBTITLE] 2.1. Search strategy [SUBSECTION] We searched the literature in the following databases for research published before February 26, 2022: PubMed, Embase, Medline, SCOPUS, Cochrane Library, CNKI, Wanfang, and Weipu. Our search terms were: [(“Carotenoids” OR “Carotenoid” OR “Tetraterpenes” OR “Tetraterpene Derivatives” OR “Derivatives, Tetraterpene” OR “Carotenes” OR “Carotene”) OR (“beta Carotene” OR “Carotene, beta” OR “Betacarotene” OR “beta-Carotene” OR “Carotaben” OR “Max-Caro” OR “Max Caro” OR “MaxCaro” OR “Solatene” OR “Vetoron” OR “BellaCarotin” OR “Provatene” OR “β-carotene”) OR (“Vitamin A” OR “Aquasol A” OR “Retinol” OR “3,7-dimethyl-9-(2,6,6-tri-methyl-1-cyclohexen-1-yl)-2,4,6,8-nonatetraen-1-ol,(all-E)-Isomer” OR “All-Trans-Retinol” OR “All Trans Retinol” OR “Vitamin A1” OR “11-cis-Retinol”)] AND (“Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease, Lewy Body” OR “Primary Parkinsonism” OR “Parkinsonism, Primary” OR “Paralysis Agitans”). There were no restrictions on the language of the publications. The references cited in the publications found to be relevant were also screened for additional eligible publications. [SUBTITLE] 2.2. Study selection [SUBSECTION] The title and abstract of each study were independently examined by two reviewers. The studies were chosen according to the following inclusion criteria: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); risk estimates were reported as OR, RR, or HR; β-carotene and vitamin A intake was converted to ordered categorical variables with three or more quantitative categories according to quartile points in the distribution of the controls; and PD was diagnosed by a neurologist or from hospital records. The exclusion criteria were: reviews or case reports; duplicate articles; repeated publications from the same cohort; and studies that did not report OR, RR, or HR information. [SUBTITLE] 2.3. Data extraction [SUBSECTION] Two reviewers separately extracted all relevant papers and identified the eligible studies. Based on a thorough examination of the title and abstract, the studies were evaluated for eligibility, and conflicts were addressed through consensus. The following information was extracted from each included study: first author name, date of publication, type of study, number of patients, mean age of the participants, gender, duration of follow-up, adjusted variables, and outcome data (the odds ratio (OR) and 95% confidence intervals (CIs) for the development of PD). When the enrolled participants were sorted into five quintiles (Q1‐Q5) based on dietary β-carotene and vitamin A intake, we selected the outcome data of the participants in Q1 (lowest intake group) and Q5 (highest intake group). If the participants were divided into four quartiles (Q1‐Q4) or three tertiles (Q1‐Q3), participants in Q1 were deemed the reference group, while those in Q4 or Q3 were considered the highest intake group. [SUBTITLE] 2.4. Quality assessment [SUBSECTION] The methodological quality of the included studies was assessed using the nine-point Newcastle Ottawa scale (NOS), which covers three criteria: subject selection, group comparability, and measurement of outcomes or exposures.[21] Each study's quality was rated as low (0‐3), moderate (4‐6), or high (7‐9). Any conflicts were resolved by consensus. [SUBTITLE] 2.5. Statistical analyses [SUBSECTION] The statistical analysis was completed with RevMan 5.4 software. The I2 statistic was used to perform the heterogeneity test and determine the degree of discrepancy between the outcomes. If I2 < 50%, statistical heterogeneity between the studies was considered absent, and the fixed-effects model was employed to calculate the combined effect (OR and 95% CI); I2 ≥ 50% was taken to imply significant heterogeneity, and the random-effects model was utilized. ORs and 95% CIs were used to examine the difference in the rate at which PD develops between the groups with high and low β-carotene and vitamin A intake. A P-value of less than .05 was used to determine statistical significance. To assess the possibility of publication bias, funnel plots were utilized. (Minimal sketches of this log-OR pooling and of the funnel-plot assessment are given after the Methods and Results sections below.)
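Section 2.5 describes pooling study-level odds ratios with a fixed-effects inverse-variance model when I2 < 50%. A minimal sketch of that calculation on the log-OR scale is given below; it is not the RevMan 5.4 implementation, and the example ORs and confidence intervals are hypothetical values, not data from the included studies.

```python
# Minimal sketch (assumption, not the RevMan implementation) of fixed-effect
# inverse-variance pooling of odds ratios reported as OR with a 95% CI.
import math

def pooled_or(or_ci_list):
    """or_ci_list: list of (OR, lower95, upper95) for the highest-vs-lowest intake contrast."""
    log_ors, weights = [], []
    for or_, lo, hi in or_ci_list:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log OR recovered from the CI
        log_ors.append(math.log(or_))
        weights.append(1.0 / se ** 2)                     # inverse-variance weight
    pooled_log = sum(w * x for w, x in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(pooled_log),
            math.exp(pooled_log - 1.96 * se_pooled),
            math.exp(pooled_log + 1.96 * se_pooled))

# Hypothetical study-level ORs (highest vs lowest beta-carotene intake):
print(pooled_or([(0.80, 0.62, 1.03), (0.91, 0.70, 1.18), (0.75, 0.55, 1.02)]))
```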
3.4. Meta-analysis results
[SUBTITLE] 3.4.1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. High β-carotene intake was associated with a significantly lower risk of developing PD than low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effects model was used because I2 = 36%, indicating moderate heterogeneity (Figure 2). Results of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups. [SUBTITLE] 3.4.2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. The risk of developing PD did not differ significantly between the highest and lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effects model was used because I2 = 0% (Figure 3). Results of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.
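Publication bias in this meta-analysis was assessed with funnel plots (section 2.5). The sketch below shows one conventional way such a plot can be drawn, assuming matplotlib is available; the study-level log ORs, standard errors and pooled value in the example are hypothetical and are not taken from the included studies.

```python
# Minimal sketch (not the authors' code) of a funnel plot: study effect (log OR)
# on the x-axis against its standard error on an inverted y-axis, with pseudo
# 95% confidence limits drawn around the pooled estimate.
import matplotlib.pyplot as plt

def funnel_plot(log_ors, ses, pooled_log_or):
    fig, ax = plt.subplots()
    ax.scatter(log_ors, ses)
    se_grid = [i / 100 for i in range(1, int(max(ses) * 100) + 5)]
    ax.plot([pooled_log_or - 1.96 * s for s in se_grid], se_grid, "--", color="grey")
    ax.plot([pooled_log_or + 1.96 * s for s in se_grid], se_grid, "--", color="grey")
    ax.axvline(pooled_log_or, color="black", linewidth=0.8)
    ax.invert_yaxis()                      # smaller SE (larger studies) at the top
    ax.set_xlabel("log odds ratio")
    ax.set_ylabel("standard error")
    fig.savefig("funnel.png", dpi=150)

# Hypothetical per-study log ORs and standard errors around a pooled log OR of -0.15:
funnel_plot([-0.22, -0.09, -0.29, 0.05, -0.18], [0.13, 0.20, 0.25, 0.30, 0.16], -0.15)
```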
null
null
[ "2.1. Search strategy", "2.2. Study selection", "2.3. Data extraction", "2.4. Quality assessment", "2.5. Statistical analyses", "3. Results", "3.1. Literature search", "3.2. Study characteristics", "3.3. Risk of bias", "3.4..1. Dietary β-carotene and PD", "3.4..2. Dietary vitamin A and PD", "3.5. Publication bias", "4. Discussions", "5. Conclusion", "Declaration statement", "Author contributions" ]
[ "Our article searched the literature in the following databases: PubMed, Embase, Medline, SCOPUS, Cochrane Library, CNKI, Wanfang, and Weipu databases for research published before February 26, 2022. Our search terms are: [(“Carotenoids” OR “Carotenoid” OR “Tetraterpenes” OR “Tetraterpene Derivatives” OR “Derivatives, Tetraterpene” OR “Carotenes” OR “Carotene”) OR (“beta Carotene” OR “Carotene, beta” OR “Betacarotene” OR “beta-Carotene” OR “Carotaben” OR “Max-Caro” OR “Max Caro” OR “MaxCaro” OR “Solatene” OR “Vetoron” OR “BellaCarotin” OR “Provatene” OR “β-carotene”) OR (“Vitamin A” OR “Aquasol A” OR “Retinol” OR “3,7-dimethyl-9-(2,6,6-tri-methyl-1-cyclohexen-1-yl)-2,4,6,8-nonatetraen-1-ol,(all-E)-Isomer” OR “All-Trans-Retinol” OR “All Trans Retinol” OR “Vitamin A1” OR “11-cis-Retinol”)] AND (“Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease, Lewy Body” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Primary Parkinsonism” OR “Parkinsonism, Primary” OR “Paralysis Agitans”). There were no restrictions on the language used in the publications. The references cited in the publications that were found to be relevant were also looked over to see if there were any new publications.", "The title and abstract of each study were independently examined by two reviewers. The studies were chosen according to the following inclusion criteria: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); evaluation of odds ratios using OR, RR, or HR; β-carotene and vitamin A intake were converted to ordered categorical variables for three or more quantitative categories according to quartile points in the distribution of control; and PD diagnosed by a neurologist or hospital records. The following were the criteria for exclusion: reexaminations views or case reports; duplicate articles by identical articles; and no publications by the same cohort. Information from the OR, RR, or HR.", "Two reviewers separately extracted all relevant papers and identified studies that were eligible. Based on a thorough examination of the title and abstract, the studies were evaluated for eligibility, and conflicts were addressed through consensus. The following information from each included study, includes the initial author name, the date of publication, kind of study, patients’ number, mean age of the participants, gender, duration of follow-up, adjusted variables, and outcome data: the odds ratio (OR) and 95% confidence intervals (CIs), for the development of PD was extracted. After the enrolled participants were sorted into five quintiles (Q1‐Q5) based on dietary β-carotene and vitamin A intake, we selected the outcome data of the participants in Q1 (lowest intake group) and Q5 (highest intake group) in this study. If the participants were divided into four quartiles (Q1‐Q4) or three tertiles (Q1‐Q3), participants in Q1 were deemed the reference group, while those in Q4 or Q3 were considered the highest intake group.", "The research studies’ methodological quality was assessed using the nine-point scale Newcastle Ottawa scale (NOS), which was according to three criteria: subject selection, group comparability, and measurement of outcomes or exposures.[21] Each study’s quality was rated as low (0‐3), moderate (4‐6), or high (7‐9). 
The consensus was used to resolve any conflicts.", "The statistical analysis was completed with Revman5.4 software. The I2 statistics were used to perform the heterogeneity test to determine the degree of discrepancy between the outcomes. If I2 < 50%, showed the statistical heterogeneity non-existent between these researches, and the fixed effects model was employed in the calculation of the combined effect OR and 95% CI; I2 ≥ 50% was thought to imply significant heterogeneity, and the random effects model was utilized. OR and 95% CIs was used to analyze and investigate the rate of change in examining the disparity in the rate at which PD develops between the two groups with high and low β-carotene and vitamin A intake. A P-value of less than .05 was used to determine statistical significance. To assess the possibility of publication bias, funnel plots were utilized.", "[SUBTITLE] 3.1. Literature search [SUBSECTION] There were a total number of 4128 studies found, with 991 duplicates deleted. After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.\nThere were a total number of 4128 studies found, with 991 duplicates deleted. After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. 
Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.\n[SUBTITLE] 3.2. Study characteristics [SUBSECTION] The sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.\nThe sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.\n[SUBTITLE] 3.3. Risk of bias [SUBSECTION] The considers included in our study the NOS was used to conduct the survey in Table 3. 
All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.\nThe considers included in our study the NOS was used to conduct the survey in Table 3. All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.\n[SUBTITLE] 3.4. Meta-analysis results [SUBSECTION] [SUBTITLE] 3.4..1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\nNine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\n[SUBTITLE] 3.4..2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\nFour studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\n[SUBTITLE] 3.4..1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). 
The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\nNine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\n[SUBTITLE] 3.4..2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\nFour studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\n[SUBTITLE] 3.5. Publication bias [SUBSECTION] The funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.\nThe funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.", "There were a total number of 4128 studies found, with 991 duplicates deleted. After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. 
Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.", "The sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.", "The considers included in our study the NOS was used to conduct the survey in Table 3. All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.", "Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.", "Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). 
The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.", "The funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.", "Our meta-analysis summarized data about the association of dietary β-carotene and vitamin A intake with the risk of PD. The findings from the study suggested that higher dietary β-carotene intake was both significantly and inversely related to the likelihood of PD, whereas higher dietary intake of vitamin A did not show such protective effects.\nThe dietary carotenoids are discovered in red, orange, yellow, and leafy green vegetables as well as red and orange fruit. Research showed carotenoids helps reduces these oxidative biochemical indicators and combat oxidative stress and as an antioxidant in the food that may help to slow or stop the course of PD.[40] β-Carotene has also been found to have anti-inflammatory properties which are important in the prevention of many degenerative diseases caused by oxidative stress, including neurological diseases such as PD.[11] Although this article only discussion the connection between dietary β-carotene intake and the possibility of PD, some studies also studied the relationship between serum β-carotene levels and the risk of PD. A study explained serum β-carotene levels are all significant reduction in PD patients (P < .001).[26] And three studies suggest that serum levels of β-carotene did not differ significantly between PD patients and control groups.[27,28,31] Two former meta-analyses done by Takeda et al[16] and Etminan et al[41] researched about intake of β-carotene and the risk of PD. The two former meta-analyses did not suggest any defensive impacts related to β-carotene. However, our study finds intake of β-carotene reduced the risk of PD. Among the 9 articles about β-carotene included in our meta-analysis, 3 studies showed the consumption of β-carotene has been linked to the development of PD. Two studies (1 cohort study and 1 case–control study) showed higher consumption of dietary β-carotene was significantly linked to a lower incidence of PD[18,36] and 1 cross-sectional study showed Intake of β-carotene was oppositely related to PD but this association was not significant.[33] Three new prospective cohort studies were included in our study.[17–19] One study showed that β-carotene consumption was linked to a decreased risk of PD.[18] Also, a study published recently was excluded from our meta-analysis for different outcome measures were recorded, which showed dietary β-carotene intake was inversely associated with the progression of PD.[22] But 3 studies containing dietary β-carotene intake were excluded for short of the outcome data,[23,25,29] those studies did not provide support for high dietary β-carotene intake can decrease the risk of PD.\nVitamin A is entirely provided by food like liver, meat, eggs, fish, dairy fat, and margarine. Vitamin A can be obtained from food in a variety of ways: directly as retinol-esters, or indirectly as β-carotene, which is partially converted to retinol. 
Same as β-carotene, dietary vitamin A is also an antioxidant, which exhibits neuroprotective properties against neurodegeneration. Vitamin A plays a role in coping with the oxidative stress that results in PD.[12] Retinoic, the main metabolite of vitamin A controls brain advancement by controlling neuronal separation, engine axonal development, and neural patterning.[13] Retinoic acid encourages GABAergic neurons to express dopamine receptors differentiate, also alterations in PD that inhibition of retinoic acid-mediated neuronal differentiation.[40] There is substantial contact between the PD pathogenesis and vitamin A and retinoid metabolism-related proteins. A variety of manipulations of vitamin A and its pathways have been used to determine vitamin A in the pathogenesis of PD: diet supplementation, diet inadequacy, knockout rats for retinoid receptors, and therapies using vitamin A derivatives in vivo or in vitro.[13–15] Three articles elucidated the relationship between serum vitamin A levels and the pathogenesis of PD. One study explained serum vitamin A was decreased in PD patients (P < .05).[30] And 2 studies suggested that serum vitamin A levels do not play a role in the pathogenesis of PD.[24,31] In 2014, Takeda et al conducted the first meta-analysis focus on vitamin A and carotenoids and the risk of PD.[16] For vitamin A, Takeda study included 8 papers: 7 case–control studies and 1 cross-sectional study. Takeda study showed current data have been insufficient to draw firm conclusion about the association between vitamin A levels in the blood or dietary intakes and the risk of PD. This is the same with the outcomes of our meta-analysis, though vitamin A intake was measured only using a FFQ or a diet history. Four studies included in our study about vitamin A with the risk of PD (3 case–control studies and 1 cohort study): all showed there were no preventive benefits of dietary vitamin A to PD.[19,32,35,37]\nOur study’s strength is that it included not just current large-scale observational studies published after the previous meta-analysis, but also two Asian studies, resulting in a more ethnically diverse sample.[19,36] Furthermore, the research included in this analysis had a high overall quality. However, we must also acknowledge the study’s potential limitations. Different studies we included evaluated dietary intake using the different FFQs, also the semi-quantitative instrument is possible to lead to underestimate true intake levels of dietary β-carotene and vitamin A.", "Our meta-analysis indicated that dietary β-carotene intake might have a protective impact against PD. In addition, we found that dietary vitamin A appears not to have protective effects on PD. More relevant studies are needed to include into meta-analysis in the further, as the recall bias and selection bias in retrospective and cross-sectional studies cause misclassifications in the assessment of nutrient intake.", "All authors declare that they have no competing interest.", "Conceptualization: Ling-Yu Wu, Qing-Han Gao.\nData curation: Ling-Yu Wu, Gui-Sheng Chen.\nFormal analysis: Ling-Yu Wu, Gui-Sheng Chen.\nWriting – original draft: Ling-Yu Wu.\nWriting – review &amp; editing: Ling-Yu Wu, Jing-Xin Chen.\nInvestigation: Ling-Yu Wu, Hua Gao.\nMethodology: Ling-Yu Wu.\nProject administration: Ling-Yu Wu.\nResources: Jing-Hong Huo, Yu-Fei Pang.\nSoftware: Ling-Yu Wu, Jing-Hong Huo, Yu-Fei Pang.\nSupervision: Ling-Yu Wu, Qing-Han Gao.\nArticle revision: Jing-Xin Chen, Ling-Yu Wu." ]
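The pooling procedure described in the statistical-analysis text above (inverse-variance weighting of log odds ratios, with the I2 statistic deciding between fixed- and random-effects models) can be illustrated with a short stand-alone script. The sketch below is only a schematic Python re-implementation: the actual analysis was run in RevMan 5.4, and the study values in the example are made-up placeholders, not the extracted data.

import math

def pool_fixed_effect(or_ci):
    """or_ci: list of (odds_ratio, lower_95, upper_95) tuples, one row per study."""
    z_crit = 1.959964                                          # two-sided 95% normal quantile
    y = [math.log(o) for o, lo, hi in or_ci]                   # log odds ratios
    se = [(math.log(hi) - math.log(lo)) / (2 * z_crit) for o, lo, hi in or_ci]
    w = [1.0 / s ** 2 for s in se]                             # inverse-variance weights

    pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)     # fixed-effect pooled log OR
    se_pooled = math.sqrt(1.0 / sum(w))

    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0      # I2 as a percentage

    z = pooled / se_pooled
    p = math.erfc(abs(z) / math.sqrt(2.0))                     # two-sided P-value
    ci = (math.exp(pooled - z_crit * se_pooled), math.exp(pooled + z_crit * se_pooled))
    return math.exp(pooled), ci, i2, z, p

# Illustrative input only: (OR, lower 95% CI, upper 95% CI) for each study.
studies = [(0.80, 0.62, 1.03), (0.91, 0.74, 1.12), (0.85, 0.66, 1.10), (0.78, 0.55, 1.10)]
or_pooled, ci, i2, z, p = pool_fixed_effect(studies)
print(f"pooled OR {or_pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 {i2:.0f}%, Z {abs(z):.2f}, P {p:.3f}")
# Per the methods text, a random-effects model would replace this pooling when I2 >= 50%.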
[ null, null, null, null, null, "results", null, null, null, null, null, null, null, null, null, null ]
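The highest-versus-lowest contrast described in the data-extraction text above can be written as a small selection rule. The sketch below assumes, purely for illustration, that each study supplies one adjusted OR with its 95% CI per intake category relative to the lowest category Q1; the study names and numbers are hypothetical placeholders, not values taken from the included papers.

def highest_vs_lowest(study):
    """Return the OR contrast for the top intake category against the Q1 reference."""
    categories = sorted(study["or_by_category"])       # e.g. ["Q2", "Q3", "Q4", "Q5"]
    top = categories[-1]                                # Q3, Q4 or Q5, depending on the study
    odds_ratio, lower, upper = study["or_by_category"][top]
    return study["name"], top, odds_ratio, (lower, upper)

# Hypothetical studies: Q1 is always the reference (OR = 1.0) and is therefore omitted.
studies = [
    {"name": "cohort A (quintiles)",
     "or_by_category": {"Q2": (0.95, 0.70, 1.30), "Q3": (0.90, 0.65, 1.25),
                        "Q4": (0.88, 0.63, 1.22), "Q5": (0.80, 0.58, 1.10)}},
    {"name": "case-control B (tertiles)",
     "or_by_category": {"Q2": (0.97, 0.70, 1.35), "Q3": (0.85, 0.60, 1.20)}},
]

for s in studies:
    name, top, odds_ratio, ci = highest_vs_lowest(s)
    print(f"{name}: {top} vs Q1, OR {odds_ratio:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")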
[ "1. Introduction", "2. Methods", "2.1. Search strategy", "2.2. Study selection", "2.3. Data extraction", "2.4. Quality assessment", "2.5. Statistical analyses", "3. Results", "3.1. Literature search", "3.2. Study characteristics", "3.3. Risk of bias", "3.4. Meta-analysis results", "3.4..1. Dietary β-carotene and PD", "3.4..2. Dietary vitamin A and PD", "3.5. Publication bias", "4. Discussions", "5. Conclusion", "Declaration statement", "Author contributions" ]
[ "Parkinson disease (PD) is the second most common neurodegenerative disorder after Alzheimer disease. The prevalence of PD is approximated to be 0.3% in the general population in industrialized countries, with incidence rates ranging between 8 and 18 per 100,000 person-years.[1] The Global Burden of Disease research indicated that the major reason for disabilities worldwide are neurological disorders, and PD is the fastest growing of these disorders. This population is expected to quadruple to almost 12 million by 2040, owing primarily to aging. Additional factors, such as longer life expectancy, lower smoking rates, and increased urbanization, might push the load to over 17 million people. A study of the global regional and national burden of PD (2016) showed that from 1990 to 2016, the global burden of PD rapidly increased from 2.5 to 6.1 million. The doubling of the number of people with PD between 1990 and 2016 is expected to happen again in the next generation as the population ages and life expectancy rises.[2,3]\nPD manifests itself in both motor and non-motor symptoms. Tremor, stiffness, slowness, and imbalance are examples of motor symptoms that affect movement and physical tasks. Nonmotor symptoms can impact a variety of organ systems, including the gastrointestinal and genitourinary systems. Before movement symptoms appear in people who have PD, nonmotor symptoms usually develop gradually over years. Examples of prodromal nonmotor symptoms include rapid eye movement (REM) sleep behavior disorder, hyposmia, constipation, urine dysfunction, orthostatic hypotension, excessive daytime drowsiness, anxiety, and depression.[4]\nMovement disorder in PD is brought on by the death of dopaminergic neurons in the substantia nigra pars compacta (SNc), reactive oxygen species (ROS) accumulation caused by mitochondrial malfunction or inflammation is still a prominent contributor to dopaminergic neuron degeneration. And there is evidence to suggest that a crucial factor of the complicated degenerative cascade underlying dopaminergic neurodegeneration is oxidative stress.[5]\nBesides pharmacological treatments that exist to control PD, Parkinson patients frequently need extra care that is comprehensive to advance their daily life quality and well-being. Nutrition, in addition to influencing daily illness care, which is a possible disease-modifying component. Nutrition may help to slow the progression of neurodegeneration when it is fully utilized, and it can also aggravate it when nutrition is deficient. Previous epidemiologic research has revealed dietetic habits and risk for PD. Consumption of coffee and tea was found to be inversely related to the risk of PD. Also, smoking, exercise, and activity are protective factors. On the other hand, age, sex, genetic factors, and chemical exposure such as pesticides and high intake of dairy is related to the high risk of PD. Due to the inaccessibility of specific treatments to reduce the intensity or stop disease movement, the search for natural substances with neuroprotective and anti-inflammatory activities is a priority[6–9]\nThe dietary β-carotene is a plant pigment and often found in orange and green vegetables.[10] β-Carotene is an antioxidant, which plays a part in coping with the oxidative stress that results from PD. Photosynthetic mechanisms of carotenoids can protect chlorophyl and mitochondria against oxidative damage. Carotenoids can be converted to vitamin A with the help of the enzyme carotene dioxygenase. 
Because of the existence of the β-ionone ring in its structure, dietary β-carotene is the most important precursor of vitamin A. During the previous decade, the ability of carotenoids to protect the nervous system has been illustrated.[11]\nVitamin A is a lipophilic chemical that can only be obtained from food. Preformed Vitamin A (mostly retinol and retinyl esters) is commonly found in animal-derived diets, while provitamin A (primarily β-carotene and carotenoids) is absorbed from plant-based diets.[10] As a nutrient, there are substantial linkages in the pathology of PD, and proteins participant in vitamin A metabolism. Also, altered vitamin A metabolism and bioavailability tend to result in an oxidative stress, neuroinflammation, dopaminergic cell passing, influence on biological rhythms, and endocrine homeostasis. Hence, vitamin A, as a nutritional factor perhaps at the crossroad of different environmental and hereditary element of PD.[12] Evidence from the preclinical stage demonstrated a variety of control by vitamin A and related pathways that have been implicated in the etiology of PD.[13–15]\nA meta-analysis in 2014 showed available data are deficient for reaching firm conclusions on the epidemiological data on the link between vitamin A and β-carotene levels in the blood or dietary intakes and the risk of PD.[16] Considering the previous meta-analysis in 2013, new data from a large prospective cohort and case–control studies where the antioxidant impact of β-carotene and vitamin A was explored, and some studies inconsistent with the previous meta-analysis results..[17–19] After the meta-analysis in 2013, we included more studies conducted in our meta-analysis on the impact of dietary intakes of β-carotene and vitamin A in PD.", "This meta-analysis has been reported in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.[20] This meta-analysis has already been registered in PROSPERO. Registration ID: CRD42022320314. The analysis was according to previous published articles, no ethical approval and patient consent are required.\n[SUBTITLE] 2.1. Search strategy [SUBSECTION] Our article searched the literature in the following databases: PubMed, Embase, Medline, SCOPUS, Cochrane Library, CNKI, Wanfang, and Weipu databases for research published before February 26, 2022. Our search terms are: [(“Carotenoids” OR “Carotenoid” OR “Tetraterpenes” OR “Tetraterpene Derivatives” OR “Derivatives, Tetraterpene” OR “Carotenes” OR “Carotene”) OR (“beta Carotene” OR “Carotene, beta” OR “Betacarotene” OR “beta-Carotene” OR “Carotaben” OR “Max-Caro” OR “Max Caro” OR “MaxCaro” OR “Solatene” OR “Vetoron” OR “BellaCarotin” OR “Provatene” OR “β-carotene”) OR (“Vitamin A” OR “Aquasol A” OR “Retinol” OR “3,7-dimethyl-9-(2,6,6-tri-methyl-1-cyclohexen-1-yl)-2,4,6,8-nonatetraen-1-ol,(all-E)-Isomer” OR “All-Trans-Retinol” OR “All Trans Retinol” OR “Vitamin A1” OR “11-cis-Retinol”)] AND (“Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease, Lewy Body” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Primary Parkinsonism” OR “Parkinsonism, Primary” OR “Paralysis Agitans”). There were no restrictions on the language used in the publications. 
The references cited in the publications that were found to be relevant were also looked over to see if there were any new publications.\nOur article searched the literature in the following databases: PubMed, Embase, Medline, SCOPUS, Cochrane Library, CNKI, Wanfang, and Weipu databases for research published before February 26, 2022. Our search terms are: [(“Carotenoids” OR “Carotenoid” OR “Tetraterpenes” OR “Tetraterpene Derivatives” OR “Derivatives, Tetraterpene” OR “Carotenes” OR “Carotene”) OR (“beta Carotene” OR “Carotene, beta” OR “Betacarotene” OR “beta-Carotene” OR “Carotaben” OR “Max-Caro” OR “Max Caro” OR “MaxCaro” OR “Solatene” OR “Vetoron” OR “BellaCarotin” OR “Provatene” OR “β-carotene”) OR (“Vitamin A” OR “Aquasol A” OR “Retinol” OR “3,7-dimethyl-9-(2,6,6-tri-methyl-1-cyclohexen-1-yl)-2,4,6,8-nonatetraen-1-ol,(all-E)-Isomer” OR “All-Trans-Retinol” OR “All Trans Retinol” OR “Vitamin A1” OR “11-cis-Retinol”)] AND (“Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease, Lewy Body” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Primary Parkinsonism” OR “Parkinsonism, Primary” OR “Paralysis Agitans”). There were no restrictions on the language used in the publications. The references cited in the publications that were found to be relevant were also looked over to see if there were any new publications.\n[SUBTITLE] 2.2. Study selection [SUBSECTION] The title and abstract of each study were independently examined by two reviewers. The studies were chosen according to the following inclusion criteria: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); evaluation of odds ratios using OR, RR, or HR; β-carotene and vitamin A intake were converted to ordered categorical variables for three or more quantitative categories according to quartile points in the distribution of control; and PD diagnosed by a neurologist or hospital records. The following were the criteria for exclusion: reexaminations views or case reports; duplicate articles by identical articles; and no publications by the same cohort. Information from the OR, RR, or HR.\nThe title and abstract of each study were independently examined by two reviewers. The studies were chosen according to the following inclusion criteria: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); evaluation of odds ratios using OR, RR, or HR; β-carotene and vitamin A intake were converted to ordered categorical variables for three or more quantitative categories according to quartile points in the distribution of control; and PD diagnosed by a neurologist or hospital records. The following were the criteria for exclusion: reexaminations views or case reports; duplicate articles by identical articles; and no publications by the same cohort. Information from the OR, RR, or HR.\n[SUBTITLE] 2.3. Data extraction [SUBSECTION] Two reviewers separately extracted all relevant papers and identified studies that were eligible. Based on a thorough examination of the title and abstract, the studies were evaluated for eligibility, and conflicts were addressed through consensus. 
The following information from each included study, includes the initial author name, the date of publication, kind of study, patients’ number, mean age of the participants, gender, duration of follow-up, adjusted variables, and outcome data: the odds ratio (OR) and 95% confidence intervals (CIs), for the development of PD was extracted. After the enrolled participants were sorted into five quintiles (Q1‐Q5) based on dietary β-carotene and vitamin A intake, we selected the outcome data of the participants in Q1 (lowest intake group) and Q5 (highest intake group) in this study. If the participants were divided into four quartiles (Q1‐Q4) or three tertiles (Q1‐Q3), participants in Q1 were deemed the reference group, while those in Q4 or Q3 were considered the highest intake group.\nTwo reviewers separately extracted all relevant papers and identified studies that were eligible. Based on a thorough examination of the title and abstract, the studies were evaluated for eligibility, and conflicts were addressed through consensus. The following information from each included study, includes the initial author name, the date of publication, kind of study, patients’ number, mean age of the participants, gender, duration of follow-up, adjusted variables, and outcome data: the odds ratio (OR) and 95% confidence intervals (CIs), for the development of PD was extracted. After the enrolled participants were sorted into five quintiles (Q1‐Q5) based on dietary β-carotene and vitamin A intake, we selected the outcome data of the participants in Q1 (lowest intake group) and Q5 (highest intake group) in this study. If the participants were divided into four quartiles (Q1‐Q4) or three tertiles (Q1‐Q3), participants in Q1 were deemed the reference group, while those in Q4 or Q3 were considered the highest intake group.\n[SUBTITLE] 2.4. Quality assessment [SUBSECTION] The research studies’ methodological quality was assessed using the nine-point scale Newcastle Ottawa scale (NOS), which was according to three criteria: subject selection, group comparability, and measurement of outcomes or exposures.[21] Each study’s quality was rated as low (0‐3), moderate (4‐6), or high (7‐9). The consensus was used to resolve any conflicts.\nThe research studies’ methodological quality was assessed using the nine-point scale Newcastle Ottawa scale (NOS), which was according to three criteria: subject selection, group comparability, and measurement of outcomes or exposures.[21] Each study’s quality was rated as low (0‐3), moderate (4‐6), or high (7‐9). The consensus was used to resolve any conflicts.\n[SUBTITLE] 2.5. Statistical analyses [SUBSECTION] The statistical analysis was completed with Revman5.4 software. The I2 statistics were used to perform the heterogeneity test to determine the degree of discrepancy between the outcomes. If I2 < 50%, showed the statistical heterogeneity non-existent between these researches, and the fixed effects model was employed in the calculation of the combined effect OR and 95% CI; I2 ≥ 50% was thought to imply significant heterogeneity, and the random effects model was utilized. OR and 95% CIs was used to analyze and investigate the rate of change in examining the disparity in the rate at which PD develops between the two groups with high and low β-carotene and vitamin A intake. A P-value of less than .05 was used to determine statistical significance. To assess the possibility of publication bias, funnel plots were utilized.\nThe statistical analysis was completed with Revman5.4 software. 
The I2 statistics were used to perform the heterogeneity test to determine the degree of discrepancy between the outcomes. If I2 < 50%, showed the statistical heterogeneity non-existent between these researches, and the fixed effects model was employed in the calculation of the combined effect OR and 95% CI; I2 ≥ 50% was thought to imply significant heterogeneity, and the random effects model was utilized. OR and 95% CIs was used to analyze and investigate the rate of change in examining the disparity in the rate at which PD develops between the two groups with high and low β-carotene and vitamin A intake. A P-value of less than .05 was used to determine statistical significance. To assess the possibility of publication bias, funnel plots were utilized.", "Our article searched the literature in the following databases: PubMed, Embase, Medline, SCOPUS, Cochrane Library, CNKI, Wanfang, and Weipu databases for research published before February 26, 2022. Our search terms are: [(“Carotenoids” OR “Carotenoid” OR “Tetraterpenes” OR “Tetraterpene Derivatives” OR “Derivatives, Tetraterpene” OR “Carotenes” OR “Carotene”) OR (“beta Carotene” OR “Carotene, beta” OR “Betacarotene” OR “beta-Carotene” OR “Carotaben” OR “Max-Caro” OR “Max Caro” OR “MaxCaro” OR “Solatene” OR “Vetoron” OR “BellaCarotin” OR “Provatene” OR “β-carotene”) OR (“Vitamin A” OR “Aquasol A” OR “Retinol” OR “3,7-dimethyl-9-(2,6,6-tri-methyl-1-cyclohexen-1-yl)-2,4,6,8-nonatetraen-1-ol,(all-E)-Isomer” OR “All-Trans-Retinol” OR “All Trans Retinol” OR “Vitamin A1” OR “11-cis-Retinol”)] AND (“Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease, Lewy Body” OR “Parkinson Disease, Idiopathic” OR “Parkinson Disease” OR “Idiopathic Parkinson Disease” OR “Lewy Body Parkinson Disease” OR “Primary Parkinsonism” OR “Parkinsonism, Primary” OR “Paralysis Agitans”). There were no restrictions on the language used in the publications. The references cited in the publications that were found to be relevant were also looked over to see if there were any new publications.", "The title and abstract of each study were independently examined by two reviewers. The studies were chosen according to the following inclusion criteria: β-carotene and vitamin A intake was measured using scientifically recognized approaches, such as a food frequency questionnaire (FFQ); evaluation of odds ratios using OR, RR, or HR; β-carotene and vitamin A intake were converted to ordered categorical variables for three or more quantitative categories according to quartile points in the distribution of control; and PD diagnosed by a neurologist or hospital records. The following were the criteria for exclusion: reexaminations views or case reports; duplicate articles by identical articles; and no publications by the same cohort. Information from the OR, RR, or HR.", "Two reviewers separately extracted all relevant papers and identified studies that were eligible. Based on a thorough examination of the title and abstract, the studies were evaluated for eligibility, and conflicts were addressed through consensus. The following information from each included study, includes the initial author name, the date of publication, kind of study, patients’ number, mean age of the participants, gender, duration of follow-up, adjusted variables, and outcome data: the odds ratio (OR) and 95% confidence intervals (CIs), for the development of PD was extracted. 
After the enrolled participants were sorted into five quintiles (Q1‐Q5) based on dietary β-carotene and vitamin A intake, we selected the outcome data of the participants in Q1 (lowest intake group) and Q5 (highest intake group) in this study. If the participants were divided into four quartiles (Q1‐Q4) or three tertiles (Q1‐Q3), participants in Q1 were deemed the reference group, while those in Q4 or Q3 were considered the highest intake group.", "The research studies’ methodological quality was assessed using the nine-point scale Newcastle Ottawa scale (NOS), which was according to three criteria: subject selection, group comparability, and measurement of outcomes or exposures.[21] Each study’s quality was rated as low (0‐3), moderate (4‐6), or high (7‐9). The consensus was used to resolve any conflicts.", "The statistical analysis was completed with Revman5.4 software. The I2 statistics were used to perform the heterogeneity test to determine the degree of discrepancy between the outcomes. If I2 < 50%, showed the statistical heterogeneity non-existent between these researches, and the fixed effects model was employed in the calculation of the combined effect OR and 95% CI; I2 ≥ 50% was thought to imply significant heterogeneity, and the random effects model was utilized. OR and 95% CIs was used to analyze and investigate the rate of change in examining the disparity in the rate at which PD develops between the two groups with high and low β-carotene and vitamin A intake. A P-value of less than .05 was used to determine statistical significance. To assess the possibility of publication bias, funnel plots were utilized.", "[SUBTITLE] 3.1. Literature search [SUBSECTION] There were a total number of 4128 studies found, with 991 duplicates deleted. After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.\nThere were a total number of 4128 studies found, with 991 duplicates deleted. 
After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.\n[SUBTITLE] 3.2. Study characteristics [SUBSECTION] The sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.\nThe sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.\n[SUBTITLE] 3.3. Risk of bias [SUBSECTION] The considers included in our study the NOS was used to conduct the survey in Table 3. 
All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.\nThe considers included in our study the NOS was used to conduct the survey in Table 3. All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.\n[SUBTITLE] 3.4. Meta-analysis results [SUBSECTION] [SUBTITLE] 3.4..1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\nNine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\n[SUBTITLE] 3.4..2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\nFour studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\n[SUBTITLE] 3.4..1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). 
The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\nNine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\n[SUBTITLE] 3.4..2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\nFour studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\n[SUBTITLE] 3.5. Publication bias [SUBSECTION] The funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.\nThe funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.", "There were a total number of 4128 studies found, with 991 duplicates deleted. After deleting duplicates, there were a total of 3137 articles found in Figure 1, a total of 21 papers were eligible after passing the title and abstract screening, studies characteristics in Table 1. 
Following a thorough examination, 10 studies were excluded.[22–31] Three studies were excluded for inadequate outcomes were recorded, one article had no specific dietary β-carotene and vitamin A intake,[23] and 2 reported insufficient outcome data (OR and 95% CIs).[22,25] Six studies were excluded for β-carotene and vitamin A intake was measured by serum levels.[24,26–28,30,31] One study was excluded for β-carotene and vitamin A intake not for three or more quantitative categories.[29] Resulting in 11 articles were excluded and 10 articles were included in our meta-analysis.[17–19,32–39] Nine studies analyzed the effect of dietary β-carotene and the risk of PD.[17–19,33–36,38,39] Four researches examined the impact of dietary vitamins A in relation to the possibility of PD.[19,32,35,37] Besides, research showed data for both sexes independently.[18] As a result, we considered the individual outcomes in our meta-analysis. Moreover, 7 studies categorized the research data into four quintiles according to the taking in levels of β-carotene or vitamins A.[18,19,32,34–36,38] One study categorized the research data into five quartiles;[40] and three studies categorized the exposure variables into tertiles.[17,33,37]\nCharacteristics of full text reviewed studies.\nCIs = 95% confidence intervals, DHQ = diet history questionnaire, FFQ = food frequency questionnaire, PD = Parkinson’s disease.\nFlow chart showing the search results of the meta-analysis.", "The sum amount of 240,166 participants in the 11 studies and 4205 instances of PD were identified. Of the 11 included studies, 5 were done in the United States,[32,35,37–39] 2 were conducted in Sweden,[17,18] also 1 research each was carried out in Germany,[34] the Netherlands,[33] Singapore, and Japan,[36] respectively. In addition, nine studies (4 case–control, 4 cohort, and 1 cross-sectional) offered information on dietary β-carotene intake and 4 studies (3 case–control, and 1 cohort) offered information on dietary vitamin A intake. All researches within the investigation of β-carotene and vitamin A are considered to be dietary intake. The features of each research are displayed in Table 2.\nCharacteristics of studies in meta-analysis.\nBMI = body mass index, DHQ = diet history questionnaire, FFQ = food frequency questionnaire.", "The considers included in our study the NOS was used to conduct the survey in Table 3. All of the literature were high quality and appraised 8 to 9 points.\nQuality assessment of included studies using the Newcastle Ottawa Quality Assessment Scale.\n*one point.", "[SUBTITLE] 3.4..1. Dietary β-carotene and PD [SUBSECTION] Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\nNine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). 
The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.\n[SUBTITLE] 3.4..2. Dietary vitamin A and PD [SUBSECTION] Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.\nFour studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.", "Nine studies with a total of 237,192 participants and 3707 PD cases were included in our analysis of dietary β-carotene and PD risk. The high β-carotene intake appeared a significantly lower chance of development of PD than the low β-carotene intake (pooled OR = 0.86, 95%CI = 0.77‐0.96, Z-value = 2.70, P = .007 < .05). The fixed-effect mode was utilized according to the results of I2 = 36%, with moderate evidence of heterogeneity in the data in Figure 2.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high β-carotene intake groups.", "Four studies involving a total of 63,781 participants with 962 cases were included in our analysis of dietary vitamin A and the risk of PD. Whereas the risk of advancement of PD was not significantly distinctive among the highest and the lowest vitamin A intake (pooled OR = 1.08, 95%CI = 0.91‐1.29, Z-value = 0.93, P = .35). The fixed-effect mode was utilized according to the results of I2 = 0% in Figure 3.\nResults of the meta-analysis of the rate of the development of Parkinson’s disease in the low and high vitamin A intake groups.", "The funnel plot showed no publication bias in Figure 4, and Figure 5.\nGraphic funnel plots of the included studies for evaluating the effects of β-carotene on the risk of Parkinson’s disease.\nGraphic funnel plots of the included studies for evaluating the effects of vitamin A on the risk of Parkinson’s disease.", "Our meta-analysis summarized data about the association of dietary β-carotene and vitamin A intake with the risk of PD. The findings from the study suggested that higher dietary β-carotene intake was both significantly and inversely related to the likelihood of PD, whereas higher dietary intake of vitamin A did not show such protective effects.\nThe dietary carotenoids are discovered in red, orange, yellow, and leafy green vegetables as well as red and orange fruit. 
Research showed carotenoids helps reduces these oxidative biochemical indicators and combat oxidative stress and as an antioxidant in the food that may help to slow or stop the course of PD.[40] β-Carotene has also been found to have anti-inflammatory properties which are important in the prevention of many degenerative diseases caused by oxidative stress, including neurological diseases such as PD.[11] Although this article only discussion the connection between dietary β-carotene intake and the possibility of PD, some studies also studied the relationship between serum β-carotene levels and the risk of PD. A study explained serum β-carotene levels are all significant reduction in PD patients (P < .001).[26] And three studies suggest that serum levels of β-carotene did not differ significantly between PD patients and control groups.[27,28,31] Two former meta-analyses done by Takeda et al[16] and Etminan et al[41] researched about intake of β-carotene and the risk of PD. The two former meta-analyses did not suggest any defensive impacts related to β-carotene. However, our study finds intake of β-carotene reduced the risk of PD. Among the 9 articles about β-carotene included in our meta-analysis, 3 studies showed the consumption of β-carotene has been linked to the development of PD. Two studies (1 cohort study and 1 case–control study) showed higher consumption of dietary β-carotene was significantly linked to a lower incidence of PD[18,36] and 1 cross-sectional study showed Intake of β-carotene was oppositely related to PD but this association was not significant.[33] Three new prospective cohort studies were included in our study.[17–19] One study showed that β-carotene consumption was linked to a decreased risk of PD.[18] Also, a study published recently was excluded from our meta-analysis for different outcome measures were recorded, which showed dietary β-carotene intake was inversely associated with the progression of PD.[22] But 3 studies containing dietary β-carotene intake were excluded for short of the outcome data,[23,25,29] those studies did not provide support for high dietary β-carotene intake can decrease the risk of PD.\nVitamin A is entirely provided by food like liver, meat, eggs, fish, dairy fat, and margarine. Vitamin A can be obtained from food in a variety of ways: directly as retinol-esters, or indirectly as β-carotene, which is partially converted to retinol. Same as β-carotene, dietary vitamin A is also an antioxidant, which exhibits neuroprotective properties against neurodegeneration. Vitamin A plays a role in coping with the oxidative stress that results in PD.[12] Retinoic, the main metabolite of vitamin A controls brain advancement by controlling neuronal separation, engine axonal development, and neural patterning.[13] Retinoic acid encourages GABAergic neurons to express dopamine receptors differentiate, also alterations in PD that inhibition of retinoic acid-mediated neuronal differentiation.[40] There is substantial contact between the PD pathogenesis and vitamin A and retinoid metabolism-related proteins. A variety of manipulations of vitamin A and its pathways have been used to determine vitamin A in the pathogenesis of PD: diet supplementation, diet inadequacy, knockout rats for retinoid receptors, and therapies using vitamin A derivatives in vivo or in vitro.[13–15] Three articles elucidated the relationship between serum vitamin A levels and the pathogenesis of PD. 
One study explained serum vitamin A was decreased in PD patients (P < .05).[30] And 2 studies suggested that serum vitamin A levels do not play a role in the pathogenesis of PD.[24,31] In 2014, Takeda et al conducted the first meta-analysis focus on vitamin A and carotenoids and the risk of PD.[16] For vitamin A, Takeda study included 8 papers: 7 case–control studies and 1 cross-sectional study. Takeda study showed current data have been insufficient to draw firm conclusion about the association between vitamin A levels in the blood or dietary intakes and the risk of PD. This is the same with the outcomes of our meta-analysis, though vitamin A intake was measured only using a FFQ or a diet history. Four studies included in our study about vitamin A with the risk of PD (3 case–control studies and 1 cohort study): all showed there were no preventive benefits of dietary vitamin A to PD.[19,32,35,37]\nOur study’s strength is that it included not just current large-scale observational studies published after the previous meta-analysis, but also two Asian studies, resulting in a more ethnically diverse sample.[19,36] Furthermore, the research included in this analysis had a high overall quality. However, we must also acknowledge the study’s potential limitations. Different studies we included evaluated dietary intake using the different FFQs, also the semi-quantitative instrument is possible to lead to underestimate true intake levels of dietary β-carotene and vitamin A.", "Our meta-analysis indicated that dietary β-carotene intake might have a protective impact against PD. In addition, we found that dietary vitamin A appears not to have protective effects on PD. More relevant studies are needed to include into meta-analysis in the further, as the recall bias and selection bias in retrospective and cross-sectional studies cause misclassifications in the assessment of nutrient intake.", "All authors declare that they have no competing interest.", "Conceptualization: Ling-Yu Wu, Qing-Han Gao.\nData curation: Ling-Yu Wu, Gui-Sheng Chen.\nFormal analysis: Ling-Yu Wu, Gui-Sheng Chen.\nWriting – original draft: Ling-Yu Wu.\nWriting – review &amp; editing: Ling-Yu Wu, Jing-Xin Chen.\nInvestigation: Ling-Yu Wu, Hua Gao.\nMethodology: Ling-Yu Wu.\nProject administration: Ling-Yu Wu.\nResources: Jing-Hong Huo, Yu-Fei Pang.\nSoftware: Ling-Yu Wu, Jing-Hong Huo, Yu-Fei Pang.\nSupervision: Ling-Yu Wu, Qing-Han Gao.\nArticle revision: Jing-Xin Chen, Ling-Yu Wu." ]
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, "results", null, null, null, null, null, null, null ]
[ "meta-analysis", "Parkinson disease", "systematic review", "vitamin A", "β-carotene" ]
Identification of potential hub genes of gastric cancer.
36254003
Gastric cancer (GC) is a malignant tumor originating from the gastric mucosal epithelium. It is the third leading cause of cancer mortality in China. Early symptoms are not obvious, and by the time the disease is discovered it has often progressed to an advanced stage, so the prognosis is poor. To screen for potential genes involved in GC development, this study obtained GSE118916 and GSE109476 from the gene expression omnibus (GEO) database for bioinformatics analysis.
BACKGROUND
First, GEO2R was used to identify differentially expressed genes (DEGs), and functional annotation of the DEGs was performed by gene ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis. The Search Tool for the Retrieval of Interacting Genes (STRING) was then used to construct a protein-protein interaction (PPI) network, from which the most significant modules and hub genes were mined. A real-time quantitative polymerase chain reaction assay was performed to verify the expression levels of the hub genes.
METHODS
A total of 139 DEGs were identified. The functional changes of the DEGs were mainly concentrated in the cytoskeleton, extracellular matrix and collagen synthesis. Eleven genes were identified as core genes. Bioinformatics analysis showed that these core genes were mainly enriched in processes related to cell adhesion and collagen.
RESULTS
In summary, the DEGs and hub genes found in this study may be potential diagnostic and therapeutic targets.
CONCLUSION
[ "Biomarkers, Tumor", "Computational Biology", "Gene Expression Profiling", "Gene Expression Regulation, Neoplastic", "Humans", "Prognosis", "Stomach Neoplasms" ]
9575828
1. Introduction
Gastric cancer (GC) is a malignant tumor originated from the gastric mucosal epithelium, mainly gastric adenocarcinoma. GC accounts for more than 95% of malignant tumors in the stomach and is one of the malignant tumors that seriously endanger human health. According to the results of the National Cancer Center of China in 2015, GC accounts for the third place in the mortality rate of malignant tumors in China.[1] The occurrence of GC is closely related to the adverse environment, lifestyle, dietary structure changes and Helicobacter pylori infection. Early GC symptoms are not obvious, some patients may have dyspepsia symptoms, and advanced GC may have upper abdominal pain, postprandial aggravation, poor appetite, anorexia, fatigue and weight loss. The common examination methods are gastroscopy and computed tomography, which are invasive and expensive.[2] When the patient has obvious symptoms, he is admitted to the hospital. The disease has developed to the advanced stage of GC, and the best surgical treatment time is lost. Except for Japan and South Korea, the 5-year survival rate of advanced GC in other countries and regions in the world is even less than 10%.[3] However, if GC can be diagnosed early, its 5-year survival rate will rise to 95%,[4] which means that the fundamental method for providing GC prognosis is early diagnosis and timely treatment. Currently, some serum biomarkers are used for screening early GC, such as CA19-9 and CEA, but these tumor markers are less sensitive and specific.[5] Therefore, to find a new effective biomarker for early GC, to further explore the pathogenesis of GC, to find potential diagnostic and therapeutic targets, to achieve early detection, early diagnosis and targeted therapy, with significant clinical value and market Application prospect. Bioinformatics is an emerging interdisciplinary subject that combines life sciences with computer science. It focuses on the collection, storage, processing, dissemination, analysis, and interpretation of biological information. The ability to process large amounts of complex biological data can be processed through the use of biological and informatics techniques. Microarray data information analysis technology has been widely used in the study of diseases such as tumors to explore the genetic correlation.[6,7] Microarray analysis technology can simultaneously acquire the expression information of tens of thousands of genes, and then explore the genomic changes related to the development of diseases. A large number of research and scholars[8,9] have used bioinformatics techniques to analyze differentially expressed genes (DEG) in tumor progression, and to study their roles in biological processes (BP), molecular functions (MF), and signaling pathways, and to elucidate the pathogenesis of diseases, so as to provide theoretical basis for early diagnosis and treatment. In this study, bioinformatics technology was used to find the gene sequencing data of GC patients and normal people from gene expression omnibus (GEO). Two high-quality genetic data sets were extracted and analyzed for further analysis. Gene ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were performed by gene set enrichment analysis, and then important modules of the protein–protein interaction (PPI) network were screened. Using the genetic data of tumor patients and normal people in the sample, 73 gene sets and 11 significantly DEG molecules were found to be differentially expressed. 
These findings will enhance our understanding of the underlying mechanisms of GC and provide the basis for finding new diagnostic markers and targeted therapies.
2. Materials and Methods
[SUBTITLE] 2.1. Access to public data [SUBSECTION] GEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips. On November 20, 2019, the key words “(gastric cancer) AND gene expression” were set to detect the datasets, using a filter of “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded). Two expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 stomach normal tissues. The annotation platform for GSE109476 is GPL24530platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 date set is composed of 5 GC tissues and 5 stomach normal tissues. All probe numbers are converted to gene symbols based on the annotation information in the platform. GEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips. On November 20, 2019, the key words “(gastric cancer) AND gene expression” were set to detect the datasets, using a filter of “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded). Two expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 stomach normal tissues. The annotation platform for GSE109476 is GPL24530platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 date set is composed of 5 GC tissues and 5 stomach normal tissues. All probe numbers are converted to gene symbols based on the annotation information in the platform. [SUBTITLE] 2.2. Screening of DEGs via GEO2R [SUBSECTION] GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r)[11] is a system for online analysis of data in GEO. This tool system runs in the R language. It is accurate to say that it uses 2 R packages: GEOquery and limma. The former is used for data reading and the latter is used for calculation. The best thing about GEO2R is that is an online tool, easy and efficient to operate. GEO2R can perform a command to compare gene expression profiles between groups in order to identify DEGs between GC and stomach normal groups. In general, when the probe set has a corresponding gene symbol, the probe is considered valuable and will be retained. Statistically significant measure is P value <.01 and fold change >1. GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r)[11] is a system for online analysis of data in GEO. 
This tool system runs in the R language. It is accurate to say that it uses 2 R packages: GEOquery and limma. The former is used for data reading and the latter is used for calculation. The best thing about GEO2R is that is an online tool, easy and efficient to operate. GEO2R can perform a command to compare gene expression profiles between groups in order to identify DEGs between GC and stomach normal groups. In general, when the probe set has a corresponding gene symbol, the probe is considered valuable and will be retained. Statistically significant measure is P value <.01 and fold change >1. [SUBTITLE] 2.3. Functional annotation of DEGs via GO and KEGG analysis [SUBSECTION] Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp) (version 6.8) is a bioinformatics database that integrates biological data and analytical tools.[12] KEGG (https://www.kegg.jp/) could help researcher to understand advanced functions and biological systems.[13] GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular components, MF and biological process.[14] In order to analyze the GO and pathway enrichment information of DEGs, the DAVID online tool was executed. Statistically significant measure is P < .05. Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp) (version 6.8) is a bioinformatics database that integrates biological data and analytical tools.[12] KEGG (https://www.kegg.jp/) could help researcher to understand advanced functions and biological systems.[13] GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular components, MF and biological process.[14] In order to analyze the GO and pathway enrichment information of DEGs, the DAVID online tool was executed. Statistically significant measure is P < .05. [SUBTITLE] 2.4. Construction and analysis of PPI network [SUBSECTION] Search Tool for the Retrieval of Interacting Genes (STRING; http://string-db.org) (version 10.5)[15] is a network that can be used to predict and track PPIs. Introducing DEGs into the tool makes intermolecular network analysis. The analysis of the interactions between different proteins can provide insights into the mechanisms of generation or development of GC. In this study, STRING database was used to construct PPI network with DEGs. The minimum required interaction score is that medium confidence > 0.4. Cytoscape (version 3.6.1) is an open visualization software that can be used to visualize PPI network.[16] Based on topological principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in for Cytoscape, can mine tightly coupled regions from PPI. First, Cytoscape software plots the PPI network. Secondly, MCODE identifies the most important modules in the PPI network graph. The criteria of MCODE analysis is that node score cutoff = 0.2, degree cutoff = 2, Max depth = 100, MCODE scores > 5, and k-score = 2. Search Tool for the Retrieval of Interacting Genes (STRING; http://string-db.org) (version 10.5)[15] is a network that can be used to predict and track PPIs. Introducing DEGs into the tool makes intermolecular network analysis. The analysis of the interactions between different proteins can provide insights into the mechanisms of generation or development of GC. In this study, STRING database was used to construct PPI network with DEGs. The minimum required interaction score is that medium confidence > 0.4. 
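GEO2R wraps the R packages GEOquery and limma, so its exact moderated-statistics pipeline is not reproduced here; the Python sketch below only illustrates the screening rule described above (P < .01 and fold change > 1, read here as |log2 fold change| > 1) on a made-up log2 expression matrix, with a plain two-sample t test standing in for limma's moderated test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log2 expression matrix: rows = genes, columns = samples.
genes  = [f"GENE{i}" for i in range(1000)]
tumor  = rng.normal(8.0, 1.0, size=(1000, 15))   # 15 GC samples
normal = rng.normal(8.0, 1.0, size=(1000, 15))   # 15 normal samples

deg = []
for i, gene in enumerate(genes):
    # Two-sample test per gene (limma uses moderated t-statistics instead).
    _, p = stats.ttest_ind(tumor[i], normal[i])
    log_fc = tumor[i].mean() - normal[i].mean()   # log2 fold change
    # Screening thresholds as described in the methods: P < .01, |log2 FC| > 1.
    if p < 0.01 and abs(log_fc) > 1:
        deg.append((gene, log_fc, p))

up   = [g for g, fc, _ in deg if fc > 0]
down = [g for g, fc, _ in deg if fc < 0]
print(f"{len(deg)} DEGs ({len(up)} up-regulated, {len(down)} down-regulated)")

Running the same screen on each of the two datasets and intersecting the resulting gene sets is what yields the overlap reported in the Venn diagram.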
Cytoscape (version 3.6.1) is an open visualization software that can be used to visualize PPI network.[16] Based on topological principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in for Cytoscape, can mine tightly coupled regions from PPI. First, Cytoscape software plots the PPI network. Secondly, MCODE identifies the most important modules in the PPI network graph. The criteria of MCODE analysis is that node score cutoff = 0.2, degree cutoff = 2, Max depth = 100, MCODE scores > 5, and k-score = 2. [SUBTITLE] 2.5. Mining and screening of core gene [SUBSECTION] The hub genes were selected with degrees ≥ 10. A network of the genes and their co-expression genes was analyzed using cBioPortal (http://www.cbioportal.org)[17,18] online platform. Hierarchical clustering of hub genes was constructed using UCSC Cancer Genomics Browser (http://genome-cancer.ucsc.edu).[19] The overall survival and disease-free survival analyses of hub genes were performed using Kaplan–Meier curve in cBioPortal. The hub genes were selected with degrees ≥ 10. A network of the genes and their co-expression genes was analyzed using cBioPortal (http://www.cbioportal.org)[17,18] online platform. Hierarchical clustering of hub genes was constructed using UCSC Cancer Genomics Browser (http://genome-cancer.ucsc.edu).[19] The overall survival and disease-free survival analyses of hub genes were performed using Kaplan–Meier curve in cBioPortal. [SUBTITLE] 2.6. RR-qPCR assay [SUBSECTION] A total of 10 GC participates were recruited. After surgery, 10 GC tumor samples from GC patients and 10 adjacent normal stomach tissues samples were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of Third Medical Center of PLA General Hospital. The written informed consents were obtained from all participates. Total RNA was extracted from 10 GC tumor samples and 10 adjacent normal stomach tissues samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, America), and reverse transcribed to cDNA. Real time quantitative polymerase chain reaction (RT-qPCR) was performed using a Light Cycler® 4800 System with specific primers for genes. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated, and are presented as fold change in gene expression relative to the control group. GAPDH was used as an endogenous control. Primers and their sequences for PCR analysis PCR = polymerase chain reaction. The verification of hub genes expression and role on the overall survival of GC patients using the cancer genome atlas (TCGA) data The gene expression dataset of GC in the TCGA was downloaded. There were a total of 580samples including 478 GC samples and 102 normal gastric samples. The IlluminaHiSeq UNC was selected as gene expression RNAseq in the research. In addition, the gene expression levels of hub genes between GC and normal gastric samples were compared using the one-way Anova. Furthermore, effect of gene expression of hub genes on overall survival was analyzed by using the TCGA data. A total of 10 GC participates were recruited. After surgery, 10 GC tumor samples from GC patients and 10 adjacent normal stomach tissues samples were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of Third Medical Center of PLA General Hospital. 
The written informed consents were obtained from all participates. Total RNA was extracted from 10 GC tumor samples and 10 adjacent normal stomach tissues samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, America), and reverse transcribed to cDNA. Real time quantitative polymerase chain reaction (RT-qPCR) was performed using a Light Cycler® 4800 System with specific primers for genes. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated, and are presented as fold change in gene expression relative to the control group. GAPDH was used as an endogenous control. Primers and their sequences for PCR analysis PCR = polymerase chain reaction. The verification of hub genes expression and role on the overall survival of GC patients using the cancer genome atlas (TCGA) data The gene expression dataset of GC in the TCGA was downloaded. There were a total of 580samples including 478 GC samples and 102 normal gastric samples. The IlluminaHiSeq UNC was selected as gene expression RNAseq in the research. In addition, the gene expression levels of hub genes between GC and normal gastric samples were compared using the one-way Anova. Furthermore, effect of gene expression of hub genes on overall survival was analyzed by using the TCGA data. [SUBTITLE] 2.7. Statistical analysis [SUBSECTION] Student’s t test was used to determine the statistical significance when comparing the 2 groups. Statistical analysis was carried out using SPSS software version 21.0 (IBM Corp. Armonk, NY). Value of P < .05 were considered statistically significant. Student’s t test was used to determine the statistical significance when comparing the 2 groups. Statistical analysis was carried out using SPSS software version 21.0 (IBM Corp. Armonk, NY). Value of P < .05 were considered statistically significant.
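A small worked example of the 2^-ΔΔCt relative-quantification calculation described in the RT-qPCR methods above, with GAPDH as the endogenous control; the Ct values are invented for illustration and the target gene is only a stand-in.

import numpy as np

# Hypothetical Ct values for one target gene and for GAPDH (endogenous control).
ct_target_tumor  = np.array([22.1, 21.8, 22.4])
ct_gapdh_tumor   = np.array([18.0, 17.9, 18.2])
ct_target_normal = np.array([24.6, 24.9, 24.3])
ct_gapdh_normal  = np.array([18.1, 18.0, 18.2])

# Step 1: normalize each sample to the endogenous control (delta Ct).
d_ct_tumor  = ct_target_tumor - ct_gapdh_tumor
d_ct_normal = ct_target_normal - ct_gapdh_normal

# Step 2: calibrate against the control (normal tissue) group (delta-delta Ct).
dd_ct = d_ct_tumor.mean() - d_ct_normal.mean()

# Step 3: relative quantity (fold change) of the target gene in tumor vs normal.
rq = 2 ** (-dd_ct)
print(f"ddCt = {dd_ct:.2f}, fold change = {rq:.1f}x")

Values computed this way are the fold changes reported relative to the control group.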
3.5. Results of RT-qPCR analysis
According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples than in the normal stomach tissue group. These results indicate that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC. Relative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.
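The overall-survival comparisons for the hub genes were performed with Kaplan–Meier curves in cBioPortal and on TCGA data; purely as an illustration of that type of analysis, the sketch below uses the third-party lifelines package on invented follow-up data to fit Kaplan–Meier curves for high- versus low-expression groups and compare them with a log-rank test. None of the numbers correspond to the actual patient cohorts.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Hypothetical follow-up times (months) and event indicators (1 = death)
# for patients split by high vs low expression of one hub gene.
t_high = rng.exponential(30, 60); e_high = rng.integers(0, 2, 60)
t_low  = rng.exponential(45, 60); e_low  = rng.integers(0, 2, 60)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high expression")
ax = kmf.plot_survival_function()
kmf.fit(t_low, event_observed=e_low, label="low expression")
kmf.plot_survival_function(ax=ax)

# Log-rank test for the difference in overall survival between the two groups.
res = logrank_test(t_high, t_low, event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank P = {res.p_value:.3f}")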
null
null
[ "2.1. Access to public data", "2.2. Screening of DEGs via GEO2R", "2.3. Functional annotation of DEGs via GO and KEGG analysis", "2.4. Construction and analysis of PPI network", "2.5. Mining and screening of core gene", "2.6. RR-qPCR assay", "2.7. Statistical analysis", "3. Results", "3.1. Identification of DEGs in GC", "3.2. KEGG and GO enrichment analyses of DEGs", "3.3. PPI network construction and module analysis", "3.4. Hub gene selection and analysis", "3.6. The verification by TCGA", "5. Conclusion", "Author contributions" ]
[ "GEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips.\nOn November 20, 2019, the key words “(gastric cancer) AND gene expression” were set to detect the datasets, using a filter of “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded).\nTwo expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 stomach normal tissues. The annotation platform for GSE109476 is GPL24530platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 date set is composed of 5 GC tissues and 5 stomach normal tissues. All probe numbers are converted to gene symbols based on the annotation information in the platform.", "GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r)[11] is a system for online analysis of data in GEO. This tool system runs in the R language. It is accurate to say that it uses 2 R packages: GEOquery and limma. The former is used for data reading and the latter is used for calculation. The best thing about GEO2R is that is an online tool, easy and efficient to operate. GEO2R can perform a command to compare gene expression profiles between groups in order to identify DEGs between GC and stomach normal groups. In general, when the probe set has a corresponding gene symbol, the probe is considered valuable and will be retained. Statistically significant measure is P value <.01 and fold change >1.", "Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp) (version 6.8) is a bioinformatics database that integrates biological data and analytical tools.[12] KEGG (https://www.kegg.jp/) could help researcher to understand advanced functions and biological systems.[13] GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular components, MF and biological process.[14] In order to analyze the GO and pathway enrichment information of DEGs, the DAVID online tool was executed. Statistically significant measure is P < .05.", "Search Tool for the Retrieval of Interacting Genes (STRING; http://string-db.org) (version 10.5)[15] is a network that can be used to predict and track PPIs. Introducing DEGs into the tool makes intermolecular network analysis. The analysis of the interactions between different proteins can provide insights into the mechanisms of generation or development of GC. In this study, STRING database was used to construct PPI network with DEGs. The minimum required interaction score is that medium confidence > 0.4. Cytoscape (version 3.6.1) is an open visualization software that can be used to visualize PPI network.[16] Based on topological principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in for Cytoscape, can mine tightly coupled regions from PPI. First, Cytoscape software plots the PPI network. 
Secondly, MCODE identifies the most important modules in the PPI network graph. The criteria of MCODE analysis is that node score cutoff = 0.2, degree cutoff = 2, Max depth = 100, MCODE scores > 5, and k-score = 2.", "The hub genes were selected with degrees ≥ 10. A network of the genes and their co-expression genes was analyzed using cBioPortal (http://www.cbioportal.org)[17,18] online platform. Hierarchical clustering of hub genes was constructed using UCSC Cancer Genomics Browser (http://genome-cancer.ucsc.edu).[19] The overall survival and disease-free survival analyses of hub genes were performed using Kaplan–Meier curve in cBioPortal.", "A total of 10 GC participates were recruited. After surgery, 10 GC tumor samples from GC patients and 10 adjacent normal stomach tissues samples were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of Third Medical Center of PLA General Hospital. The written informed consents were obtained from all participates.\nTotal RNA was extracted from 10 GC tumor samples and 10 adjacent normal stomach tissues samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, America), and reverse transcribed to cDNA. Real time quantitative polymerase chain reaction (RT-qPCR) was performed using a Light Cycler® 4800 System with specific primers for genes. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated, and are presented as fold change in gene expression relative to the control group. GAPDH was used as an endogenous control.\nPrimers and their sequences for PCR analysis\nPCR = polymerase chain reaction.\nThe verification of hub genes expression and role on the overall survival of GC patients using the cancer genome atlas (TCGA) data\nThe gene expression dataset of GC in the TCGA was downloaded. There were a total of 580samples including 478 GC samples and 102 normal gastric samples. The IlluminaHiSeq UNC was selected as gene expression RNAseq in the research. In addition, the gene expression levels of hub genes between GC and normal gastric samples were compared using the one-way Anova.\nFurthermore, effect of gene expression of hub genes on overall survival was analyzed by using the TCGA data.", "Student’s t test was used to determine the statistical significance when comparing the 2 groups. Statistical analysis was carried out using SPSS software version 21.0 (IBM Corp. Armonk, NY). Value of P < .05 were considered statistically significant.", "[SUBTITLE] 3.1. Identification of DEGs in GC [SUBSECTION] One volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. 
DEG = differentially expressed genes, GC = gastric cancer.\nOne volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. DEG = differentially expressed genes, GC = gastric cancer.\n[SUBTITLE] 3.2. KEGG and GO enrichment analyses of DEGs [SUBSECTION] To analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\nTo analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\n[SUBTITLE] 3.3. PPI network construction and module analysis [SUBSECTION] The PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). 
The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.\nThe PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.\n[SUBTITLE] 3.4. Hub gene selection and analysis [SUBSECTION] A total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.\nA total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. 
Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.\n[SUBTITLE] 3.5. Results of RT-qPCR analysis [SUBSECTION] According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples, compared with the normal stomach tissues groups. The result demonstrated that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC.\nRelative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.\nAccording to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples, compared with the normal stomach tissues groups. The result demonstrated that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC.\nRelative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.\n[SUBTITLE] 3.6. The verification by TCGA [SUBSECTION] According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.\nAccording to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. 
After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.", "One volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. DEG = differentially expressed genes, GC = gastric cancer.", "To analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.", "The PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.", "A total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. 
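A brief sketch of the degree-based hub-gene selection described above, using networkx in place of Cytoscape; the edge list is a tiny invented placeholder (far too small for any node to reach degree ≥ 10), whereas the full STRING network over the 139 DEGs is what produced the 11 hub genes reported here.

import networkx as nx

# Hypothetical PPI edges exported from STRING (pairs with combined score > 0.4).
edges = [
    ("COL1A2", "COL3A1"), ("COL1A2", "SPARC"), ("COL3A1", "COL5A2"),
    ("SPARC", "COL5A2"), ("COL1A2", "COL5A2"), ("LAMA4", "COL6A3"),
    # ... the remaining DEG-DEG interactions would be listed here ...
]

g = nx.Graph()
g.add_edges_from(edges)

# Hub genes: nodes whose degree meets the threshold used in the paper (>= 10).
DEGREE_CUTOFF = 10
hubs = [n for n, d in g.degree() if d >= DEGREE_CUTOFF]

# With this toy edge list the result is empty; on the full DEG network the
# same filter returns the 11 hub genes.
print(sorted(hubs, key=lambda n: g.degree(n), reverse=True))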
A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.", "According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.", "In conclusion, the present research aimed to identify DEGs which might be contained in the occurrence or development of GC. Finally, 139 DEGs and 11 hub genes were confirmed between GC tissues and normal tissues, which could be used as diagnostic and therapeutic biomarkers for GC. However, the biological functions of the all hub genes in GC require further researches.", "Conceptualization: Xu-Dong Zhou.\nData curation: Ya-Wei Qu.\nFormal analysis: Xu-Dong Zhou.\nInvestigation: Ya-Wei Qu, Fu-Hua Jia.\nMethodology: Fu-Hua Jia.\nResources: Xu-Dong Zhou.\nSupervision: Peng Chen.\nValidation: Peng Chen, Yin-Pu Wang.\nWriting – original draft: Yin-Pu Wang.\nWriting – review & editing: Hai-Feng Liu." ]
[ null, null, null, null, null, null, null, "results", null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Access to public data", "2.2. Screening of DEGs via GEO2R", "2.3. Functional annotation of DEGs via GO and KEGG analysis", "2.4. Construction and analysis of PPI network", "2.5. Mining and screening of core gene", "2.6. RR-qPCR assay", "2.7. Statistical analysis", "3. Results", "3.1. Identification of DEGs in GC", "3.2. KEGG and GO enrichment analyses of DEGs", "3.3. PPI network construction and module analysis", "3.4. Hub gene selection and analysis", "3.5. Results of RT-qPCR analysis", "3.6. The verification by TCGA", "4. Discussion", "5. Conclusion", "Author contributions" ]
[ "Gastric cancer (GC) is a malignant tumor originated from the gastric mucosal epithelium, mainly gastric adenocarcinoma. GC accounts for more than 95% of malignant tumors in the stomach and is one of the malignant tumors that seriously endanger human health. According to the results of the National Cancer Center of China in 2015, GC accounts for the third place in the mortality rate of malignant tumors in China.[1] The occurrence of GC is closely related to the adverse environment, lifestyle, dietary structure changes and Helicobacter pylori infection. Early GC symptoms are not obvious, some patients may have dyspepsia symptoms, and advanced GC may have upper abdominal pain, postprandial aggravation, poor appetite, anorexia, fatigue and weight loss. The common examination methods are gastroscopy and computed tomography, which are invasive and expensive.[2] When the patient has obvious symptoms, he is admitted to the hospital. The disease has developed to the advanced stage of GC, and the best surgical treatment time is lost. Except for Japan and South Korea, the 5-year survival rate of advanced GC in other countries and regions in the world is even less than 10%.[3] However, if GC can be diagnosed early, its 5-year survival rate will rise to 95%,[4] which means that the fundamental method for providing GC prognosis is early diagnosis and timely treatment. Currently, some serum biomarkers are used for screening early GC, such as CA19-9 and CEA, but these tumor markers are less sensitive and specific.[5] Therefore, to find a new effective biomarker for early GC, to further explore the pathogenesis of GC, to find potential diagnostic and therapeutic targets, to achieve early detection, early diagnosis and targeted therapy, with significant clinical value and market Application prospect.\nBioinformatics is an emerging interdisciplinary subject that combines life sciences with computer science. It focuses on the collection, storage, processing, dissemination, analysis, and interpretation of biological information. The ability to process large amounts of complex biological data can be processed through the use of biological and informatics techniques. Microarray data information analysis technology has been widely used in the study of diseases such as tumors to explore the genetic correlation.[6,7] Microarray analysis technology can simultaneously acquire the expression information of tens of thousands of genes, and then explore the genomic changes related to the development of diseases. A large number of research and scholars[8,9] have used bioinformatics techniques to analyze differentially expressed genes (DEG) in tumor progression, and to study their roles in biological processes (BP), molecular functions (MF), and signaling pathways, and to elucidate the pathogenesis of diseases, so as to provide theoretical basis for early diagnosis and treatment.\nIn this study, bioinformatics technology was used to find the gene sequencing data of GC patients and normal people from gene expression omnibus (GEO). Two high-quality genetic data sets were extracted and analyzed for further analysis. Gene ontology (GO) analysis and Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis were performed by gene set enrichment analysis, and then important modules of the protein–protein interaction (PPI) network were screened. Using the genetic data of tumor patients and normal people in the sample, 73 gene sets and 11 significantly DEG molecules were found to be differentially expressed. 
These findings will enhance our understanding of the underlying mechanisms of GC and provide the basis for finding new diagnostic markers and targeted therapies.", "[SUBTITLE] 2.1. Access to public data [SUBSECTION] GEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips.\nOn November 20, 2019, the key words “(gastric cancer) AND gene expression” were set to detect the datasets, using a filter of “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded).\nTwo expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 stomach normal tissues. The annotation platform for GSE109476 is GPL24530platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 date set is composed of 5 GC tissues and 5 stomach normal tissues. All probe numbers are converted to gene symbols based on the annotation information in the platform.\nGEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips.\nOn November 20, 2019, the key words “(gastric cancer) AND gene expression” were set to detect the datasets, using a filter of “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded).\nTwo expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 stomach normal tissues. The annotation platform for GSE109476 is GPL24530platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 date set is composed of 5 GC tissues and 5 stomach normal tissues. All probe numbers are converted to gene symbols based on the annotation information in the platform.\n[SUBTITLE] 2.2. Screening of DEGs via GEO2R [SUBSECTION] GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r)[11] is a system for online analysis of data in GEO. This tool system runs in the R language. It is accurate to say that it uses 2 R packages: GEOquery and limma. The former is used for data reading and the latter is used for calculation. The best thing about GEO2R is that is an online tool, easy and efficient to operate. GEO2R can perform a command to compare gene expression profiles between groups in order to identify DEGs between GC and stomach normal groups. In general, when the probe set has a corresponding gene symbol, the probe is considered valuable and will be retained. 
[SUBTITLE] 2.3. Functional annotation of DEGs via GO and KEGG analysis [SUBSECTION] The Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp) (version 6.8) is a bioinformatics database that integrates biological data and analytical tools.[12] KEGG (https://www.kegg.jp/) helps researchers to understand high-level functions and biological systems.[13] GO is an ontology widely used in bioinformatics that covers 3 aspects of biology: cellular components, MF and BP.[14] The DAVID online tool was used to analyze the GO and pathway enrichment of the DEGs. The threshold for statistical significance was P < .05.\n[SUBTITLE] 2.4. Construction and analysis of PPI network [SUBSECTION] The Search Tool for the Retrieval of Interacting Genes (STRING; http://string-db.org) (version 10.5)[15] is a database that can be used to predict and track PPIs. Entering the DEGs into this tool generates an intermolecular interaction network, and analysis of the interactions between different proteins can provide insights into the mechanisms underlying the development and progression of GC. In this study, the STRING database was used to construct a PPI network of the DEGs, with a minimum required interaction score of medium confidence (>0.4). Cytoscape (version 3.6.1) is an open visualization software package that can be used to visualize the PPI network.[16] Based on topological principles, the Molecular Complex Detection (MCODE) plug-in for Cytoscape (version 1.5.1) can mine tightly connected regions from the PPI network. First, Cytoscape was used to plot the PPI network; second, MCODE was used to identify the most important modules in the network. The criteria for the MCODE analysis were: node score cutoff = 0.2, degree cutoff = 2, max depth = 100, MCODE score > 5, and k-score = 2.\n[SUBTITLE] 2.5. Mining and screening of core genes [SUBSECTION] Hub genes were selected as those with degrees ≥ 10. A network of these genes and their co-expressed genes was analyzed using the cBioPortal (http://www.cbioportal.org)[17,18] online platform. Hierarchical clustering of the hub genes was constructed using the UCSC Cancer Genomics Browser (http://genome-cancer.ucsc.edu).[19] Overall survival and disease-free survival analyses of the hub genes were performed using Kaplan–Meier curves in cBioPortal.
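The degree-based hub-gene selection just described can also be sketched outside Cytoscape; the fragment below, added purely for illustration, uses the igraph package on an interaction table assumed to have been exported from STRING. The file name and its column names (node1, node2) are hypothetical placeholders.

library(igraph)

edges <- read.delim("string_interactions.tsv")                       # hypothetical STRING export
g     <- graph_from_data_frame(edges[, c("node1", "node2")], directed = FALSE)

deg       <- degree(g)                                               # connectivity of each protein in the PPI network
hub_genes <- names(deg[deg >= 10])                                   # hub criterion used in the study: degree >= 10
sort(deg[hub_genes], decreasing = TRUE)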
[SUBTITLE] 2.6. RT-qPCR assay [SUBSECTION] A total of 10 GC participants were recruited. After surgery, 10 GC tumor samples and 10 adjacent normal stomach tissue samples were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of the Third Medical Center of PLA General Hospital. Written informed consent was obtained from all participants.\nTotal RNA was extracted from the 10 GC tumor samples and 10 adjacent normal stomach tissue samples with the RNAiso Plus (Trizol) kit (Thermo Fisher, Massachusetts, USA) and reverse transcribed to cDNA. Real-time quantitative polymerase chain reaction (RT-qPCR) was performed using a Light Cycler® 4800 System with specific primers for each gene. Table 1 presents the primer sequences used in the experiments. The RQ value (2−ΔΔCt, where Ct is the threshold cycle) of each sample was calculated and is presented as the fold change in gene expression relative to the control group. GAPDH was used as the endogenous control.\nPrimers and their sequences for PCR analysis\nPCR = polymerase chain reaction.\nThe verification of hub gene expression and its role in the overall survival of GC patients using The Cancer Genome Atlas (TCGA) data\nThe gene expression dataset of GC in the TCGA was downloaded. There were a total of 580 samples, including 478 GC samples and 102 normal gastric samples. The IlluminaHiSeq (UNC) gene expression RNAseq dataset was selected for this analysis. The gene expression levels of the hub genes were compared between GC and normal gastric samples using one-way ANOVA.\nFurthermore, the effect of hub gene expression on overall survival was analyzed using the TCGA data.
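The relative quantification described above follows the standard 2^(−ΔΔCt) calculation with GAPDH as the endogenous control. The short R sketch below, added for illustration, shows the arithmetic for one target gene; the Ct values are invented placeholders, not data from the study.

# Hypothetical Ct values for one target gene and GAPDH in tumor and adjacent normal samples
ct_target_tumor  <- c(22.1, 21.8, 22.5)
ct_gapdh_tumor   <- c(17.0, 16.8, 17.2)
ct_target_normal <- c(24.9, 25.2, 24.7)
ct_gapdh_normal  <- c(17.1, 16.9, 17.0)

delta_ct_tumor  <- ct_target_tumor  - ct_gapdh_tumor     # normalize each sample to GAPDH
delta_ct_normal <- ct_target_normal - ct_gapdh_normal

ddct        <- delta_ct_tumor - mean(delta_ct_normal)    # calibrate against the mean of the control group
fold_change <- 2^(-ddct)                                  # RQ value: fold change relative to controls
mean(fold_change)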
[SUBTITLE] 2.7. Statistical analysis [SUBSECTION] Student’s t test was used to determine statistical significance when comparing the 2 groups. Statistical analysis was carried out using SPSS software version 21.0 (IBM Corp., Armonk, NY). Values of P < .05 were considered statistically significant.", "GEO (http://www.ncbi.nlm.nih.gov/geo)[10] is an open high-throughput genomic database that includes microarrays, gene expression data and chips.\nOn November 20, 2019, the key words “(gastric cancer) AND gene expression” were used to identify the datasets, applying the filters “expression profiling by array” and “recent two years.” There were 5 inclusion criteria: a sample number of more than 10 per dataset (samples of less than 10 were excluded), data from Homo sapiens (data from other species were excluded), a series entry type, expression profiling by array (data using methylation profiling by array were excluded), and a diagnosis of GC (data from other cancer diagnoses were excluded).\nTwo expression profile data sets (GSE118916 and GSE109476) were downloaded from the GEO database. The annotation platform for GSE118916 is the GPL15207 platform, [PrimeView] Affymetrix Human Gene Expression Array. The GSE118916 data set is composed of 15 GC tissues and 15 normal stomach tissues. The annotation platform for GSE109476 is the GPL24530 platform, Arraystar Human LncRNA microarray V2.0 (Agilent-033010; custom-annotation; probe name version). The GSE109476 data set is composed of 5 GC tissues and 5 normal stomach tissues. 
All probe numbers are converted to gene symbols based on the annotation information in the platform.", "GEO2R (http://www.ncbi.nlm.nih.gov/geo/geo2r)[11] is a system for online analysis of data in GEO. This tool system runs in the R language. It is accurate to say that it uses 2 R packages: GEOquery and limma. The former is used for data reading and the latter is used for calculation. The best thing about GEO2R is that is an online tool, easy and efficient to operate. GEO2R can perform a command to compare gene expression profiles between groups in order to identify DEGs between GC and stomach normal groups. In general, when the probe set has a corresponding gene symbol, the probe is considered valuable and will be retained. Statistically significant measure is P value <.01 and fold change >1.", "Database for Annotation, Visualization and Integrated Discovery (DAVID) (https://david.ncifcrf.gov/home.jsp) (version 6.8) is a bioinformatics database that integrates biological data and analytical tools.[12] KEGG (https://www.kegg.jp/) could help researcher to understand advanced functions and biological systems.[13] GO is an ontology widely used in bioinformatics, which covers 3 aspects of biology, including cellular components, MF and biological process.[14] In order to analyze the GO and pathway enrichment information of DEGs, the DAVID online tool was executed. Statistically significant measure is P < .05.", "Search Tool for the Retrieval of Interacting Genes (STRING; http://string-db.org) (version 10.5)[15] is a network that can be used to predict and track PPIs. Introducing DEGs into the tool makes intermolecular network analysis. The analysis of the interactions between different proteins can provide insights into the mechanisms of generation or development of GC. In this study, STRING database was used to construct PPI network with DEGs. The minimum required interaction score is that medium confidence > 0.4. Cytoscape (version 3.6.1) is an open visualization software that can be used to visualize PPI network.[16] Based on topological principles, the Molecular Complex Detection (MCODE) (version 1.5.1), a plug-in for Cytoscape, can mine tightly coupled regions from PPI. First, Cytoscape software plots the PPI network. Secondly, MCODE identifies the most important modules in the PPI network graph. The criteria of MCODE analysis is that node score cutoff = 0.2, degree cutoff = 2, Max depth = 100, MCODE scores > 5, and k-score = 2.", "The hub genes were selected with degrees ≥ 10. A network of the genes and their co-expression genes was analyzed using cBioPortal (http://www.cbioportal.org)[17,18] online platform. Hierarchical clustering of hub genes was constructed using UCSC Cancer Genomics Browser (http://genome-cancer.ucsc.edu).[19] The overall survival and disease-free survival analyses of hub genes were performed using Kaplan–Meier curve in cBioPortal.", "A total of 10 GC participates were recruited. After surgery, 10 GC tumor samples from GC patients and 10 adjacent normal stomach tissues samples were obtained. The research conformed to the Declaration of Helsinki and was authorized by the Human Ethics and Research Ethics Committees of Third Medical Center of PLA General Hospital. The written informed consents were obtained from all participates.\nTotal RNA was extracted from 10 GC tumor samples and 10 adjacent normal stomach tissues samples by the RNAiso Plus (Trizol) kit (Thermofisher, Massachusetts, America), and reverse transcribed to cDNA. 
Real time quantitative polymerase chain reaction (RT-qPCR) was performed using a Light Cycler® 4800 System with specific primers for genes. Table 1 presents the primer sequences used in the experiments. The RQ values (2−ΔΔCt, where Ct is the threshold cycle) of each sample were calculated, and are presented as fold change in gene expression relative to the control group. GAPDH was used as an endogenous control.\nPrimers and their sequences for PCR analysis\nPCR = polymerase chain reaction.\nThe verification of hub genes expression and role on the overall survival of GC patients using the cancer genome atlas (TCGA) data\nThe gene expression dataset of GC in the TCGA was downloaded. There were a total of 580samples including 478 GC samples and 102 normal gastric samples. The IlluminaHiSeq UNC was selected as gene expression RNAseq in the research. In addition, the gene expression levels of hub genes between GC and normal gastric samples were compared using the one-way Anova.\nFurthermore, effect of gene expression of hub genes on overall survival was analyzed by using the TCGA data.", "Student’s t test was used to determine the statistical significance when comparing the 2 groups. Statistical analysis was carried out using SPSS software version 21.0 (IBM Corp. Armonk, NY). Value of P < .05 were considered statistically significant.", "[SUBTITLE] 3.1. Identification of DEGs in GC [SUBSECTION] One volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. DEG = differentially expressed genes, GC = gastric cancer.\nOne volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. DEG = differentially expressed genes, GC = gastric cancer.\n[SUBTITLE] 3.2. KEGG and GO enrichment analyses of DEGs [SUBSECTION] To analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. 
GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\nTo analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.\n[SUBTITLE] 3.3. PPI network construction and module analysis [SUBSECTION] The PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.\nThe PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.\n[SUBTITLE] 3.4. Hub gene selection and analysis [SUBSECTION] A total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). 
Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.\nA total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.\n[SUBTITLE] 3.5. Results of RT-qPCR analysis [SUBSECTION] According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples, compared with the normal stomach tissues groups. 
The result demonstrated that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC.\nRelative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.\nAccording to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples, compared with the normal stomach tissues groups. The result demonstrated that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC.\nRelative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.\n[SUBTITLE] 3.6. The verification by TCGA [SUBSECTION] According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.\nAccording to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.", "One volcano plot presents the DEGs in the GSE118916 (Fig. 1A), and another volcano plot presents the DEGs in the GSE109476 (Fig. 1B). 
After standardization of the microarray results, DEGs (1768 in GSE118916, and 564 in GSE109476) were identified. The overlap among the 2 datasets contained 139 genes as shown in the Venn diagram (Fig. 1C), consisting of 189 downregulated genes and 84 upregulated genes between GC tissues and non-cancerous tissues.\nDEGs in GC. (A) One volcano plot presents the DEGs in the GSE118916. (B) another volcano plot presents the DEGs in the GSE109476. (C) Venn diagram, PPI network and the most significant module of DEGs. (A) DEGs were selected with a fold change > 1 and P value < .01 among the mRNA expression profiling sets GSE118916 and GSE109476. The 2 datasets showed an overlap of 139 genes. DEG = differentially expressed genes, GC = gastric cancer.", "To analyze the biological classification of DEGs, functional and pathway enrichment analyses were performed using DAVID. GO analysis results showed that changes in BP of DEGs were significantly enriched in collagen catabolic process, collagen fibril organization, extracellular matrix organization, integrin-mediated signaling pathway, cell adhesion and so on. Changes in MF were mainly enriched in collagen binding, growth factor binding, heparin binding, extracellular matrix structural constituent and so on (Table 1). Changes in cell component of DEGs were mainly enriched in extracellular matrix, proteinaceous extracellular matrix, collagen trimer, extracellular region and so on. The KEGG pathway analysis showed that all DEGs are mainly concentrated in ECM-receptor interaction, PI3K-Akt signaling pathway, Metabolism of xenobiotics by cytochrome P450, platelet activation, Gap junction, Protein digestion and absorption and Phagosome (Table 2).\nGO and KEGG pathway enrichment analysis of DEGs in gastric cancer samples.\nDEGs = differentially expressed genes, GO = gene ontology, KEGG = Kyoto Encyclopedia of Genes and Genomes.", "The PPI network of DEGs was constructed (Fig. 2) and the most significant module was obtained using Cytoscape (Fig. 3). The functional analyses of genes involved in this module were analyzed using DAVID.\nThe PPI network of DEGs was constructed using Cytoscape. DEG = differentially expressed genes, PPI = protein–protein interaction.\nThe most significant module was obtained from PPI network with 11 nodes. PPI = protein–protein interaction.", "A total of 11 genes were identified as hub genes with degrees ≥10. The names, abbreviations and functions for these hub genes are shown in Table 3. A network of the hub genes and their co-expression genes was analyzed using cBioPortal online platform (Fig. 4A). Hierarchical clustering showed that the hub genes could basically differentiate the GC samples from the non-cancerous samples (Fig. 4B). Subsequently, the overall survival analysis of the hub genes was performed using Kaplan–Meier curve. GC patients with COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 alteration showed worse overall survival (Figs. 5 and 6).\nSummaries for the function of 11 hub genes.\nInteraction network and biological process analysis of the hub genes. (A) Hub genes and their co-expression genes were analyzed using cBioPortal. Nodes with bold black outline represent hub genes. Nodes with thin black outline represent the co-expression genes. (B) Hierarchical clustering of hub genes was constructed using UCSC. The samples under the pink bar are non-cancerous samples and the samples under the blue bar are GC samples. 
Upregulation of genes is marked in red; downregulation of genes is marked in blue. GC = gastric cancer.\nOverall survival analyses of hub genes (COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, and SERPINH1). P < .05 was considered statistically significant.\nOverall survival analyses of hub genes (COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2). P < .05 was considered statistically significant.", "According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were markedly up-regulated in GC tumor samples. As presented in Figure 7, the relative expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly higher in GC samples, compared with the normal stomach tissues groups. The result demonstrated that COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 might be considered as biomarkers for GC.\nRelative expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 by RT-qPCR analysis. *P < .05, compared with normal stomach tissues. RT-qPCR = real time quantitative polymerase chain reaction.", "According to the above expression analysis, COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 were significantly up-regulated in GC tumor samples compared with the normal gastric samples. After confirmation using TCGA data, these genes expression levels in GC samples were also significantly higher than the normal gastric samples (Fig. 8).\nThe confirmation of gene expression level using The Cancer Genome Atlas (TCGA) data. The genes expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC samples were significantly higher than the normal gastric samples. GC = gastric cancer.\nOverall survival analysis showed that GC patients with high expression levels of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 had poorer overall survival times than those with low expression levels (P < .05, Fig. 9).\nThe effect of gene expression on overall survival by using the TCGA data. TCGA = The Cancer Genome Atlas.", "In 2018, there were more than 1 million new cases of GC in the world, and 783,000 deaths.[20] The most common sites of GC were gastric antrum (58%), cardia (20%), corpus (15%), whole stomach or most stomach (7%). GC can spread through direct spread, lymph node metastasis, hematogenous dissemination, and plant metastasis. At present, the treatment of GC is often treated by multiple means. The treatment may include partial gastrectomy or total gastrectomy, lymph node dissection and perioperative chemotherapy or postoperative radiotherapy and chemotherapy.[21–23] Patients may experience malnutrition, reduced immunity, and decreased quality of life during treatment. And it will bring a series of adverse reactions to patients, so that patients with GC not only suffer from physiologically great pain, but also psychologically bear tremendous pressure. 
After gastrectomy, patients’ physiological function is seriously disturbed, and the body may also suffer from malnutrition, reflux esophagitis, absorption disorders and other adverse consequences.[24,25] On the other hand, because chemotherapy kills normal cells as well as cancer cells, it causes toxic effects and a series of adverse reactions that seriously damage the patient both physically and psychologically. Patient prognosis is often associated with timely diagnosis and treatment, but there is substantial clinical heterogeneity across individuals and tumor types. Therefore, it is of great clinical significance to further explore the pathogenesis of GC, to find early diagnostic markers and targeted therapeutic genes and molecules, and to achieve early diagnosis and individualized treatment according to different individuals and pathological types.\nBioinformatics technology has been widely used to find genetic molecules related to tumorigenesis and tumor development, and to identify genes and molecules that can serve as therapeutic targets. Cao et al identified the PLEKHG1 molecule as related to GC through this technology and further confirmed the correlation between this gene and GC, suggesting that the molecule is a biomarker for diagnosis and prediction of outcome.[26] Wang et al used bioinformatics to identify a molecule related to colorectal cancer proliferation and metastasis, suggesting that it may serve as a potential therapeutic target.[27] In this study, DEGs between GC tissues and non-cancerous tissues were obtained by analyzing 2 mRNA microarray data sets, and a total of 139 DEGs were identified across the 2 data sets. Bioinformatics analysis revealed high expression of COL1A2, COL3A1, SPARC, PCOLCE, COL8A1, SERPINH1, COL8A2, COL6A3, LAMA4, LOXL1, and COL5A2 in GC patients. At the same time, multiple gene sets that were significantly up-regulated or down-regulated were found by GO and KEGG analyses.\nCOL1A2 (Collagen Type I Alpha 2 Chain) is a member of the fibrillar collagen family and encodes the pro-alpha 2 chain of type I collagen.[28] It supports the matrix structure, forms the interstitial component of most solid tumors, and regulates cell movement through interaction with the cytoskeleton. Studies have found that the COL1A2 gene mainly affects cell proliferation, differentiation, adhesion and metastasis through the ECM-receptor interaction and focal adhesion pathways, and is mainly related to tumor invasion and metastasis.[29] Li et al found that the expression of COL1A2 in GC tissues was higher than that in adjacent normal tissues,[30] consistent with the bioinformatics analysis in this study. Ponticos et al suggested that low expression of COL1A2 can inhibit the expression of TGF-β in cancer cells.[31] Since TGF-β contributes to activation of the PI3K signaling pathway, it is hypothesized that low expression of COL1A2 may inhibit activation of the PI3K signaling pathway by down-regulating TGF-β in cancer cells and thereby promote apoptosis of GC cells.[28] High expression of COL1A2 can promote the proliferation, invasion and migration of GC, whereas low expression of COL1A2 can inhibit the proliferation of GC cells, delay cell migration, and promote apoptosis of GC cells. Therefore, COL1A2 is a potential biomarker and therapeutic target.\nSPARC (secreted protein acidic and rich in cysteine) is located at 5q33.1. 
It is a nonstructural secreted extracellular matrix glycoprotein with a relative molecular mass of 32,000, consisting of a single polypeptide chain (285 amino acids), and was first isolated and purified from fetal bovine bone by Termine and colleagues in the United States in 1981.[32] It mediates interactions between cells and their microenvironment and has a wide range of biological effects on tumorigenesis, invasion, metastasis, angiogenesis and inflammation.[33] Studies have found that in some highly metastatic tumors, such as glioblastoma, melanoma, breast cancer and prostate cancer, SPARC can promote bone metastasis and epithelial-mesenchymal transition and thus promote tumor development, whereas in tumors with low metastatic potential, such as pancreatic cancer, colorectal cancer and gastric cancer, it acts as an antitumor factor that is anti-angiogenic and pro-apoptotic and that inhibits cell proliferation and the cell cycle.[34,35] Its role in GC cells is highly controversial. Tsutomu et al found that the expression of SPARC mRNA in GC tissues was higher than that in normal controls, and that the prognosis of patients with high SPARC expression was poorer than that of patients with low SPARC expression.[36] Chen et al also showed, in 140 ovarian cancer patients, that high SPARC expression was associated with a worse prognosis than low SPARC expression.[37] In contrast, Chew et al and Liang et al reported that low SPARC expression was associated with poor long-term survival in 120 and 114 patients with colorectal cancer, respectively.[38,39] SPARC may therefore play different roles in different cancers and at different stages of development of the same cancer. This study found that SPARC expression in GC tissues was higher than that in adjacent tissues and was associated with a poor prognosis.\nSERPINH1 (Serpin Family H Member 1) is a member of the serine protease inhibitor H subfamily, also known as HSP47, heat shock protein 47; the coding gene is located in the 11q13.5 region of human chromosome 11. It is involved in biological processes such as collagen synthesis and endopeptidase activity and acts as a chaperone in the collagen biosynthesis pathway.[40] SERPINH1 is closely associated with collagen-related diseases, including osteogenesis imperfecta, keloids, and fibrosis.[41,42] Qi et al found that SERPINH1 is highly expressed in renal clear cell carcinoma and is associated with poor prognosis.[43] Other studies have reported that SERPINH1 is associated with the occurrence and development of glioma and cervical cancer and is a possible therapeutic target.[44,45] Zhang et al found that SERPINH1 is up-regulated in GC[46] and may promote tumor growth and invasion by regulating the extracellular matrix (ECM) network. The present study found high expression of SERPINH1 in GC tissues together with an association with poor prognosis, so it may serve as a potential biomarker.\nOur study identified 139 DEGs and 11 hub genes that may be associated with the occurrence and development of GC. Previous reports indicate that COL1A2,[47,48] COL3A1,[49] SPARC,[50] SERPINH1,[51] and COL6A3[26] are highly expressed in GC tissues, and the expression of LOXL1[52] has also been related to distant metastasis of GC. However, the roles of the PCOLCE, COL8A2, COL8A1, and LAMA4 genes in GC have not yet been documented, and we therefore subsequently recruited patients for validation. 
RT-qPCR verification of these hub genes in these patients provides more direct evidence of their role in the development of GC than bioinformatics analysis alone.\nAlthough this study applied a rigorous bioinformatics analysis, comprehensive verification in large numbers of clinical samples and in animal experiments is still needed to better understand the pathogenesis of GC.\nIn summary, through bioinformatics analysis of genetic samples from patients with GC and normal subjects, we identified multiple significantly enriched gene sets and 11 hub genes among 139 DEGs. Hub genes among the DEGs may provide new ideas and evidence for the diagnosis and targeted therapy of GC.", "In conclusion, the present research aimed to identify DEGs that might be involved in the occurrence or development of GC. Overall, 139 DEGs and 11 hub genes were confirmed between GC tissues and normal tissues, and these could be used as diagnostic and therapeutic biomarkers for GC. However, the biological functions of all the hub genes in GC require further research.", "Conceptualization: Xu-Dong Zhou.\nData curation: Ya-Wei Qu.\nFormal analysis: Xu-Dong Zhou.\nInvestigation: Ya-Wei Qu, Fu-Hua Jia.\nMethodology: Fu-Hua Jia.\nResources: Xu-Dong Zhou.\nSupervision: Peng Chen.\nValidation: Peng Chen, Yin-Pu Wang.\nWriting – original draft: Yin-Pu Wang.\nWriting – review & editing: Hai-Feng Liu." ]
[ "intro", "methods", null, null, null, null, null, null, null, "results", null, null, null, null, "results", null, "discussion", null, null ]
[ "bioinformatic analysis", "differentially expressed genes", "gastric cancer", "protein-protein interaction" ]
Additive effects of ezetimibe, evolocumab, and alirocumab on plaque burden and lipid content as assessed by intravascular ultrasound: A PRISMA-compliant meta-analysis.
36254013
The additive effects of ezetimibe, evolocumab or alirocumab on lipid level, plaque volume, and plaque composition using intravascular ultrasound (IVUS) remain unclear.
BACKGROUND
According to the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement, we performed a systematic review and meta-analysis of trials assessing the effects of ezetimibe, evolocumab, and alirocumab on coronary atherosclerosis using IVUS. The primary outcome was change in total atheroma volume (TAV), and the secondary outcomes were changes and differences in plaque composition and lipid content.
METHODS
Data were collected from 9 trials, involving 917 patients who received ezetimibe, evolocumab or alirocumab in addition to a statin and 919 patients who received statins alone. The pooled estimate demonstrated a significant reduction in TAV with the addition of ezetimibe and favorable effects of evolocumab and alirocumab on TAV. Subgroup analysis also supported favorable effects of evolocumab and alirocumab on TAV according to baseline TAV, gender, type 2 diabetes mellitus, and prior statin use. Addition of a proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitor to statin therapy resulted in significant reductions in low-density lipoprotein cholesterol (LDL-C), total cholesterol (TC), and triglycerides (TG), but not in high-density lipoprotein cholesterol (HDL-C). The pooled estimate also showed significant favorable effects of ezetimibe on LDL-C, TC, and TG, but an insignificant effect on HDL-C. Patients who received ezetimibe showed similar changes in the necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification compared with patients not treated with ezetimibe.
RESULTS
The addition of ezetimibe to statin therapy may further reduce plaque and lipid burdens but may not modify plaque composition. Although current evidence supports a similar impact from the addition of PCSK9 inhibitors to statin therapy, more evidence is needed to confirm such an effect.
CONCLUSIONS
[ "Antibodies, Monoclonal", "Antibodies, Monoclonal, Humanized", "Anticholesteremic Agents", "Cholesterol, HDL", "Cholesterol, LDL", "Diabetes Mellitus, Type 2", "Ezetimibe", "Humans", "Hydroxymethylglutaryl-CoA Reductase Inhibitors", "PCSK9 Inhibitors", "Plaque, Atherosclerotic", "Proprotein Convertase 9", "Subtilisins", "Triglycerides", "Ultrasonography, Interventional" ]
9575789
1. Introduction
Cardiovascular diseases, especially ischemic heart disease, remain the major cause of disease burden in the world.[1] Since 1990, the total number of disability-adjusted life years due to ischemic heart disease has steadily increased, reaching 182 million disability-adjusted life years with 9.14 million deaths in 2019.[2] The 2018 American College of Cardiology and American Heart Association cholesterol guidelines highlighted the addition of non-statins to statin therapy for patients at very high risk of atherosclerotic cardiovascular disease when the low-density lipoprotein cholesterol (LDL-C) level remains ≥70 mg/dL (≥1.8 mmol/L).[3] In addition to statins, the cornerstone lipid-lowering drugs, the use of ezetimibe, which targets the Niemann-Pick C1-like 1 intestinal cholesterol transporter protein, as well as evolocumab and alirocumab, which target proprotein convertase subtilisin/kexin type 9 (PCSK9), leads to incremental lowering of LDL-C levels and a reduction in cardiovascular events.[4–6] A thin-cap fibroatheroma, referred to as an unstable or vulnerable plaque because it is the type most prone to rupture, is characterized by an overlying thin fibrous cap and a large necrotic core.[7,8] A study combining near-infrared spectroscopy and intravascular ultrasound showed that non-obstructive mild lesions with a heavy lipid content and high plaque burden are most likely to lead to a future major adverse cardiac event in patients after percutaneous coronary intervention of culprit and hemodynamically significant lesions.[9–11] Although that study did not endorse intensified pharmacotherapy for non-ischemic vulnerable plaques, the additive effect of non-statin therapy on lipid levels and plaque burden needs to be determined.[12] Intravascular ultrasound can be used to image atherosclerotic plaques and measure atheroma burden and plaque dimensions.[13] Two previous meta-analyses have assessed the effects of non-statin lipid-lowering therapy on atheroma volume using intravascular ultrasound,[14,15] but these studies did not evaluate the effect of such additive therapy on plaque composition or assess the influence of patients’ baseline characteristics. Therefore, we performed the present meta-analysis to determine the additive effects of ezetimibe, evolocumab and alirocumab in combination with statins on lipid content and plaque volume and composition.
2. Methods
[SUBTITLE] 2.1. Systematic literature search [SUBSECTION] This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. We also directly contacted authors for additional information when necessary. This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. We also directly contacted authors for additional information when necessary. [SUBTITLE] 2.2. Study selection [SUBSECTION] Studies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies. Studies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies. [SUBTITLE] 2.3. 
Data extraction [SUBSECTION] Two blinded reviewers independently assessed the eligibility of studies using a prespecified standardized form. Disagreements were adjudicated by consensus. Data extraction was completed by the same reviewers. The following information was extracted: study, year, sample size, age, gender, smoking, previous disease, drugs used, lipid levels, and IVUS outcomes. [SUBTITLE] 2.4. Definition of primary and secondary outcomes [SUBSECTION] The primary outcome was the change in total atheroma volume (TAV) between baseline and follow-up. Secondary outcomes were: lipid content, including high-density lipoprotein (HDL), LDL, total cholesterol (TC), and triglycerides (TG); plaque composition, including necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification; and change in TAV with treatment according to baseline TAV, gender, type 2 diabetes mellitus, and statin use. [SUBTITLE] 2.5. Risk of bias assessment [SUBSECTION] The methodological quality of the included studies was evaluated using the Cochrane Collaboration Risk of Bias tool.[17] The risk of bias was evaluated according to incomplete outcome data, selective reporting, blinding of participants and personnel, blinding of outcome assessment, random sequence generation, allocation concealment, and other biases, and each domain was rated as “low risk”, “unclear risk”, or “high risk”. [SUBTITLE] 2.6. Statistical analysis [SUBSECTION] Traditional meta-analyses were conducted for studies that assessed the additive effects of ezetimibe, evolocumab and alirocumab in comparison with a statin alone in terms of plaque burden and lipid levels. Standardized mean differences (SMD) with corresponding 95% confidence intervals (CI) were used for continuous outcomes. Heterogeneity was assessed by Cochran’s Q-statistic, with a P value < .01 considered significant. In addition, heterogeneity was quantified using the I2 statistic (range, 0%–100%). An I2 > 50% or a P < .01 on the Q test indicated the existence of heterogeneity among the included studies. A random effects model was used to synthesize the data in case of heterogeneity. Publication bias was to be evaluated by funnel plots if there were ≥10 included studies. 
The meta-analysis was performed using Review Manager (RevMan), version 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 14/MP (StataCorp, College Station, TX).
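For illustration only, the pooling described above can also be carried out in R with the metafor package rather than RevMan or Stata; the per-trial summary values below are invented placeholders, not data from the included studies, and Hedges g with a DerSimonian-Laird random-effects model is one common choice consistent with the SMD approach described.

library(metafor)

# Hypothetical per-trial summaries: mean change in TAV, SD and n for the add-on arm (1) and statin-only arm (2)
dat <- data.frame(
  study = c("Trial A", "Trial B", "Trial C"),
  m1i = c(-4.2, -3.1, -5.0), sd1i = c(6.0, 5.5, 7.1), n1i = c(100, 120, 90),
  m2i = c(-1.0, -0.8, -1.5), sd2i = c(6.2, 5.8, 7.0), n2i = c(98, 118, 92)
)

dat <- escalc(measure = "SMD", m1i = m1i, sd1i = sd1i, n1i = n1i,
              m2i = m2i, sd2i = sd2i, n2i = n2i, data = dat)       # Hedges g per trial
res <- rma(yi, vi, data = dat, method = "DL")                      # random-effects pooled SMD
summary(res)                                                       # pooled estimate, 95% CI, Q test and I2
forest(res)                                                        # forest plot of the pooled estimate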
3. Results
[SUBTITLE] 3.1. Characteristics of the included studies [SUBSECTION] The initial search identified a total of 406 relevant articles. After 317 studies were excluded due to duplication, 125 studies remained after review of the title and abstract. Two studies were excluded due to the absence of the primary outcome.[18,19] Finally, nine studies involving 917 patients were included in the present meta-analysis, including 7 studies evaluating the additive effect of ezetimibe,[20–26] 1 study evaluating the additive effect of evolocumab[27] and 1 study evaluating the effect of alirocumab[28] on plaque burden and lipid levels (Supplemental File 1, http://links.lww.com/MD/H657, Table 1). Most patients included in the meta-analysis were men (72%–91%) and elderly (aged 55–71 years). Cardiovascular risk factors were common, including smoking, hypertension, and diabetes mellitus. The mean ranges for TC, LDL, HDL, and TG levels were 162 to 220, 92 to 159, 36 to 53, and 66 to 145 mg/dL, respectively (Table 2). The available information indicated that more than half of the patients were taking an angiotensin II receptor blocker or angiotensin-converting enzyme inhibitor (Table 2). Eight of the nine studies were of moderate to high quality, while one study was of low quality due to its open-label, non-randomized design[25] (Supplemental Files 2–3, http://links.lww.com/MD/H658). Characteristics of included studies. ACS = acute coronary syndrome, F/U = follow-up, RCT = randomized controlled trial, SAP = stable angina pectoris. Baseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone. Data reported as Ezetimibe, Evolocumab, or Alirocumab + Statin/Statin alone. ACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL = low-density lipoprotein, TC = total cholesterol, TG = triglycerides, yr = years, NR = not reported. 
Baseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone. Data reported as Ezetimibe, Evolucumab, or Alirocumab + Statin/Statin alone. ACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL, low-density lipoprotein, TC, total cholesterol, TG = triglycerides, yr = years, NR = not reported. [SUBTITLE] 3.2. Meta-analysis of changes in TAV and lipid levels [SUBSECTION] The pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 1). Effects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides. In contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2). Effect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides. The pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 1). Effects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides. In contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2). Effect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides. [SUBTITLE] 3.3. 
Treatment difference between the effect of evolocumab and alirocumab on TAV [SUBSECTION] Subgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%). Effects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use. Subgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%). Effects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use. [SUBTITLE] 3.4. Meta-analysis of changes in plaque composition [SUBSECTION] One study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29] Effect of ezetimibe on plaque composition. Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification. One study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. 
Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29] Effect of ezetimibe on plaque composition. Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification.
[ "2.1. Systematic literature search", "2.2. Study selection", "2.3. Data extraction", "2.4. Definition of primary and secondary outcomes", "2.5. Risk of bias assessment", "2.6. Statistical analysis", "3.1. Characteristics of the included studies", "3.2. Meta-analysis of changes in TAV and lipid levels", "3.3. Treatment difference between the effect of evolocumab and alirocumab on TAV", "3.4. Meta-analysis of changes in plaque composition", "Author contributions" ]
[ "This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. We also directly contacted authors for additional information when necessary.", "Studies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies.", "Two blinded reviewers independently assessed the eligibility of studies using a prespecified standardized form. Disagreements were adjudicated by consensus. Data extraction was completed by the same reviewer. The following information was extracted: study, year, sample size, age, gender, smoking, previous disease, drugs used, lipid level, IVUS outcomes.", "The primary outcome was the change in total atheroma volume (TAV) between baseline and follow-up. Secondary outcomes were: lipid content, including, high-density lipoprotein (HDL), LDL, total cholesterol (TC), and triglycerides (TG); plaque composition, including necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification; and change in TAV with treatment according to the baseline TAV, gender, type 2 diabetes mellitus, and statin use.", "The methodological quality of the included studies was evaluated using the Cochrane Collaboration Risk of Bias tool.[17] The risk of bias was evaluated according to incomplete outcome data, selective reporting, blinding of participants and personnel, blinding of outcome assessment, random sequence generation, allocation concealment, and other biases, and each domain was rated as “low risk”, “unclear risk”, and “high risk”.", "Traditional meta-analyses were conducted for studies that assessed the additive effects of ezetimibe, evolocumab and alirocumab in comparison with a statin only in terms of plaque burden and lipid level. Standardized mean difference (SMD) with corresponding 95% confidence interval (CI) values were used for continuous outcomes. Heterogeneity was assessed by the Cochran’s Q-statistic, and a P value < 0.01 was considered significant. In addition, heterogeneity was quantified using the I2 test (range, 0%–100%).\nAn I2 > 50% or a P < .01 on Q test indicated the existence heterogeneity among the included studies. 
A random effects model was used to synthesize data in case of heterogeneity. Publication bias was evaluated by funnel plots if there were ≥10 included studies. The meta-analysis was performed using Review Manager (RevMan), version 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 14/MP (StataCorp, College Station, TX).", "The initial search identified a total of 406 relevant articles. After 317 studies were excluded due to duplication, 125 studies remained after review of the title and abstract. Two studies were excluded due to the absence of the primary outcome.[18,19] Finally, nine studies involving 917 patients were included in the present meta-analysis, including 7 studies evaluating the additive effect of ezetimibe,[20–26] 1 study evaluating the additive effect of evolocumab[27] and 1 study evaluating the effect of alirocumab[28] on plaque burden and lipid levels (Supplemental File 1, http://links.lww.com/MD/H657, Table 1). Most patients included in the meta-analysis were men (72%–91%) and elderly (aged 55–71 years). Cardiovascular risk factors were common, including smoking, hypertension, and diabetes mellitus. The mean range for TC, HDL, LDL, and TG levels were 162 to 220, 92 to 159, 36 to 53, and 66 to 145 mg/mL, respectively (Table 2). The available information indicated that more than half patients were taking an angiotensin II receptor blocker or angiotensin-converting enzyme inhibitor (Table 2). Eight of the nine studies were of moderate to high quality, while one study was of low quality due to its open label, non-randomized study design[25] (Supplemental Files 2–3, http://links.lww.com/MD/H658).\nCharacteristics of included studies.\nACS = acute coronary syndrome, F/U = follow-up, RCT = randomized control trial, SAP = stable angina pectoris.\nBaseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone.\nData reported as Ezetimibe, Evolucumab, or Alirocumab + Statin/Statin alone.\nACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL, low-density lipoprotein, TC, total cholesterol, TG = triglycerides, yr = years, NR = not reported.", "The pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 1).\nEffects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\nIn contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). 
The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2).\nEffect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.", "Subgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%).\nEffects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use.", "One study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29]\nEffect of ezetimibe on plaque composition. Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification.", "DL and MZ conceived and designed the research; all authors collected data, conducted the research, and analyzed and interpreted data; DL wrote the initial paper; CL, YT, ZL and MZ revised the paper; MZ had primary responsibility for the final content. All authors read and approved the final manuscript.\nConceptualization: Di Liang, Ming Zhang.\nData curation: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nFormal analysis: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nMethodology: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nSoftware: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li.\nSupervision: Ming Zhang.\nWriting – original draft: Di Liang.\nWriting – review & editing: Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang." ]
[ null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Systematic literature search", "2.2. Study selection", "2.3. Data extraction", "2.4. Definition of primary and secondary outcomes", "2.5. Risk of bias assessment", "2.6. Statistical analysis", "3. Results", "3.1. Characteristics of the included studies", "3.2. Meta-analysis of changes in TAV and lipid levels", "3.3. Treatment difference between the effect of evolocumab and alirocumab on TAV", "3.4. Meta-analysis of changes in plaque composition", "4. Discussion", "Author contributions", "Supplementary Material" ]
[ "Cardiovascular diseases, especially ischemic heart disease, remain the major cause of disease burden in the world.[1] Since 1990, the total number of disability-adjusted life years due to ischemic heart disease has steadily increased, reaching 182 million disability-adjusted life years with 9.14 million deaths in 2019.[2] The 2018 American College of Cardiology and American Heart Association cholesterol guidelines highlighted the addition of non-statins to statin therapy for patients at very high-risk risk of atherosclerotic cardiovascular disease when the low-density lipoprotein cholesterol (LDL-C) level remains ≥70 mg/dL (≥1.8 mmol/L).[3] In addition to the cornerstone lipid-lowering drugs statins, the use of ezetimibe, which targets the Niemann-Pick C1-like 1 intestinal cholesterol transporter protein as well as evolocumab and alirocumab, which target proprotein convertase subtilisin/kexin type 9 (PCSK9), leads to incremental lowering of LDL-C levels and a reduction in cardiovascular events.[4–6]\nA thin cap fibroatheroma, referred to as an unstable or vulnerable plaque that is most frequently prone to rupture, is characterized by an overlying thin fibrous cap and a large necrotic core.[7,8] A study based combining near-infrared spectroscopy and intravascular ultrasound showed that non-obstructive mild lesions with a heavy lipid content and high plaque burden are most likely to lead to a future major adverse cardiac event in patients after percutaneous coronary intervention for culprit and hemodynamic lesions.[9–11] Although the study did not endorse intensified pharmacotherapy for non-ischemic vulnerable plaques, the additive effect of non-statin therapy on lipid level and plaque burden needs to be determined.[12] Intravascular ultrasound can be used to image atherosclerotic plaques and measure atheroma burden and plaque dimensions.[13]\nTwo previous meta-analyses have assessed the effects of non-statin lipid-lowering therapy on atheroma volume using intravascular ultrasound,[14,15] but these studies did not evaluate the effect of such additive therapy on plaque composition or assess the influence of patients’ baseline characteristics. Therefore, we performed the present meta-analysis to determine the additive effects of ezetimibe, evolocumab and alirocumab in combination with statins on lipid content and plaque volume and composition.", "[SUBTITLE] 2.1. Systematic literature search [SUBSECTION] This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. 
We also directly contacted authors for additional information when necessary.\nThis study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. We also directly contacted authors for additional information when necessary.\n[SUBTITLE] 2.2. Study selection [SUBSECTION] Studies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies.\nStudies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies.\n[SUBTITLE] 2.3. Data extraction [SUBSECTION] Two blinded reviewers independently assessed the eligibility of studies using a prespecified standardized form. Disagreements were adjudicated by consensus. Data extraction was completed by the same reviewer. The following information was extracted: study, year, sample size, age, gender, smoking, previous disease, drugs used, lipid level, IVUS outcomes.\nTwo blinded reviewers independently assessed the eligibility of studies using a prespecified standardized form. Disagreements were adjudicated by consensus. Data extraction was completed by the same reviewer. The following information was extracted: study, year, sample size, age, gender, smoking, previous disease, drugs used, lipid level, IVUS outcomes.\n[SUBTITLE] 2.4. Definition of primary and secondary outcomes [SUBSECTION] The primary outcome was the change in total atheroma volume (TAV) between baseline and follow-up. 
Secondary outcomes were: lipid content, including, high-density lipoprotein (HDL), LDL, total cholesterol (TC), and triglycerides (TG); plaque composition, including necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification; and change in TAV with treatment according to the baseline TAV, gender, type 2 diabetes mellitus, and statin use.\nThe primary outcome was the change in total atheroma volume (TAV) between baseline and follow-up. Secondary outcomes were: lipid content, including, high-density lipoprotein (HDL), LDL, total cholesterol (TC), and triglycerides (TG); plaque composition, including necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification; and change in TAV with treatment according to the baseline TAV, gender, type 2 diabetes mellitus, and statin use.\n[SUBTITLE] 2.5. Risk of bias assessment [SUBSECTION] The methodological quality of the included studies was evaluated using the Cochrane Collaboration Risk of Bias tool.[17] The risk of bias was evaluated according to incomplete outcome data, selective reporting, blinding of participants and personnel, blinding of outcome assessment, random sequence generation, allocation concealment, and other biases, and each domain was rated as “low risk”, “unclear risk”, and “high risk”.\nThe methodological quality of the included studies was evaluated using the Cochrane Collaboration Risk of Bias tool.[17] The risk of bias was evaluated according to incomplete outcome data, selective reporting, blinding of participants and personnel, blinding of outcome assessment, random sequence generation, allocation concealment, and other biases, and each domain was rated as “low risk”, “unclear risk”, and “high risk”.\n[SUBTITLE] 2.6. Statistical analysis [SUBSECTION] Traditional meta-analyses were conducted for studies that assessed the additive effects of ezetimibe, evolocumab and alirocumab in comparison with a statin only in terms of plaque burden and lipid level. Standardized mean difference (SMD) with corresponding 95% confidence interval (CI) values were used for continuous outcomes. Heterogeneity was assessed by the Cochran’s Q-statistic, and a P value < 0.01 was considered significant. In addition, heterogeneity was quantified using the I2 test (range, 0%–100%).\nAn I2 > 50% or a P < .01 on Q test indicated the existence heterogeneity among the included studies. A random effects model was used to synthesize data in case of heterogeneity. Publication bias was evaluated by funnel plots if there were ≥10 included studies. The meta-analysis was performed using Review Manager (RevMan), version 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 14/MP (StataCorp, College Station, TX).\nTraditional meta-analyses were conducted for studies that assessed the additive effects of ezetimibe, evolocumab and alirocumab in comparison with a statin only in terms of plaque burden and lipid level. Standardized mean difference (SMD) with corresponding 95% confidence interval (CI) values were used for continuous outcomes. Heterogeneity was assessed by the Cochran’s Q-statistic, and a P value < 0.01 was considered significant. In addition, heterogeneity was quantified using the I2 test (range, 0%–100%).\nAn I2 > 50% or a P < .01 on Q test indicated the existence heterogeneity among the included studies. A random effects model was used to synthesize data in case of heterogeneity. Publication bias was evaluated by funnel plots if there were ≥10 included studies. 
The meta-analysis was performed using Review Manager (RevMan), version 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 14/MP (StataCorp, College Station, TX).", "This study followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses statement.[16] As a Preferred Reporting Items for Systematic reviews and Meta-Analyses-compliant meta-analysis of published literature, no ethical approval was required for this study. Two independent and blinded reviewers searched for articles in MEDLINE via PubMed, Web of Science and Embase from 1966 through January 2021 using the terms “IVUS, intravascular ultrasound, virtual histology, ezetimibe, evolocumab, alirocumab, proprotein convertase subtilisin/kexin type 9, PCSK9.” We also searched the bibliographies of retrieved articles, meta-analyses and systematic reviews. Additional data sources included conference proceedings from major meetings of the American College of Cardiology, European Society of Cardiology, American Heart Association, World Congress of Cardiology and Transcatheter Cardiovascular Therapeutics. We also directly contacted authors for additional information when necessary.", "Studies were included if they met the following prespecified criteria: clinical trials reported in peer-reviewed journals with fully available text; studies assessing the additive effects of ezetimibe, evolocumab and alirocumab in comparison with statin therapy alone; primary outcome of change in total atheroma volume between baseline and follow-up; used IVUS to measure atheroma volume; and a minimal 3-month follow-up. Studies were excluded if they met any of the following exclusion criteria: did not have a group that received statin therapy only; did not provide primary outcome and the reported data were incomplete; or serial case and observational studies.", "Two blinded reviewers independently assessed the eligibility of studies using a prespecified standardized form. Disagreements were adjudicated by consensus. Data extraction was completed by the same reviewer. The following information was extracted: study, year, sample size, age, gender, smoking, previous disease, drugs used, lipid level, IVUS outcomes.", "The primary outcome was the change in total atheroma volume (TAV) between baseline and follow-up. Secondary outcomes were: lipid content, including, high-density lipoprotein (HDL), LDL, total cholesterol (TC), and triglycerides (TG); plaque composition, including necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification; and change in TAV with treatment according to the baseline TAV, gender, type 2 diabetes mellitus, and statin use.", "The methodological quality of the included studies was evaluated using the Cochrane Collaboration Risk of Bias tool.[17] The risk of bias was evaluated according to incomplete outcome data, selective reporting, blinding of participants and personnel, blinding of outcome assessment, random sequence generation, allocation concealment, and other biases, and each domain was rated as “low risk”, “unclear risk”, and “high risk”.", "Traditional meta-analyses were conducted for studies that assessed the additive effects of ezetimibe, evolocumab and alirocumab in comparison with a statin only in terms of plaque burden and lipid level. Standardized mean difference (SMD) with corresponding 95% confidence interval (CI) values were used for continuous outcomes. Heterogeneity was assessed by the Cochran’s Q-statistic, and a P value < 0.01 was considered significant. 
In addition, heterogeneity was quantified using the I2 test (range, 0%–100%).\nAn I2 > 50% or a P < .01 on Q test indicated the existence heterogeneity among the included studies. A random effects model was used to synthesize data in case of heterogeneity. Publication bias was evaluated by funnel plots if there were ≥10 included studies. The meta-analysis was performed using Review Manager (RevMan), version 5.3 (Cochrane Collaboration, Oxford, UK) and Stata 14/MP (StataCorp, College Station, TX).", "[SUBTITLE] 3.1. Characteristics of the included studies [SUBSECTION] The initial search identified a total of 406 relevant articles. After 317 studies were excluded due to duplication, 125 studies remained after review of the title and abstract. Two studies were excluded due to the absence of the primary outcome.[18,19] Finally, nine studies involving 917 patients were included in the present meta-analysis, including 7 studies evaluating the additive effect of ezetimibe,[20–26] 1 study evaluating the additive effect of evolocumab[27] and 1 study evaluating the effect of alirocumab[28] on plaque burden and lipid levels (Supplemental File 1, http://links.lww.com/MD/H657, Table 1). Most patients included in the meta-analysis were men (72%–91%) and elderly (aged 55–71 years). Cardiovascular risk factors were common, including smoking, hypertension, and diabetes mellitus. The mean range for TC, HDL, LDL, and TG levels were 162 to 220, 92 to 159, 36 to 53, and 66 to 145 mg/mL, respectively (Table 2). The available information indicated that more than half patients were taking an angiotensin II receptor blocker or angiotensin-converting enzyme inhibitor (Table 2). Eight of the nine studies were of moderate to high quality, while one study was of low quality due to its open label, non-randomized study design[25] (Supplemental Files 2–3, http://links.lww.com/MD/H658).\nCharacteristics of included studies.\nACS = acute coronary syndrome, F/U = follow-up, RCT = randomized control trial, SAP = stable angina pectoris.\nBaseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone.\nData reported as Ezetimibe, Evolucumab, or Alirocumab + Statin/Statin alone.\nACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL, low-density lipoprotein, TC, total cholesterol, TG = triglycerides, yr = years, NR = not reported.\nThe initial search identified a total of 406 relevant articles. After 317 studies were excluded due to duplication, 125 studies remained after review of the title and abstract. Two studies were excluded due to the absence of the primary outcome.[18,19] Finally, nine studies involving 917 patients were included in the present meta-analysis, including 7 studies evaluating the additive effect of ezetimibe,[20–26] 1 study evaluating the additive effect of evolocumab[27] and 1 study evaluating the effect of alirocumab[28] on plaque burden and lipid levels (Supplemental File 1, http://links.lww.com/MD/H657, Table 1). Most patients included in the meta-analysis were men (72%–91%) and elderly (aged 55–71 years). Cardiovascular risk factors were common, including smoking, hypertension, and diabetes mellitus. The mean range for TC, HDL, LDL, and TG levels were 162 to 220, 92 to 159, 36 to 53, and 66 to 145 mg/mL, respectively (Table 2). 
The available information indicated that more than half patients were taking an angiotensin II receptor blocker or angiotensin-converting enzyme inhibitor (Table 2). Eight of the nine studies were of moderate to high quality, while one study was of low quality due to its open label, non-randomized study design[25] (Supplemental Files 2–3, http://links.lww.com/MD/H658).\nCharacteristics of included studies.\nACS = acute coronary syndrome, F/U = follow-up, RCT = randomized control trial, SAP = stable angina pectoris.\nBaseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone.\nData reported as Ezetimibe, Evolucumab, or Alirocumab + Statin/Statin alone.\nACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL, low-density lipoprotein, TC, total cholesterol, TG = triglycerides, yr = years, NR = not reported.\n[SUBTITLE] 3.2. Meta-analysis of changes in TAV and lipid levels [SUBSECTION] The pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 1).\nEffects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\nIn contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2).\nEffect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\nThe pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 1).\nEffects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\nIn contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). 
The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2).\nEffect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\n[SUBTITLE] 3.3. Treatment difference between the effect of evolocumab and alirocumab on TAV [SUBSECTION] Subgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%).\nEffects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use.\nSubgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%).\nEffects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use.\n[SUBTITLE] 3.4. Meta-analysis of changes in plaque composition [SUBSECTION] One study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29]\nEffect of ezetimibe on plaque composition. 
Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification.\nOne study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29]\nEffect of ezetimibe on plaque composition. Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification.", "The initial search identified a total of 406 relevant articles. After 317 studies were excluded due to duplication, 125 studies remained after review of the title and abstract. Two studies were excluded due to the absence of the primary outcome.[18,19] Finally, nine studies involving 917 patients were included in the present meta-analysis, including 7 studies evaluating the additive effect of ezetimibe,[20–26] 1 study evaluating the additive effect of evolocumab[27] and 1 study evaluating the effect of alirocumab[28] on plaque burden and lipid levels (Supplemental File 1, http://links.lww.com/MD/H657, Table 1). Most patients included in the meta-analysis were men (72%–91%) and elderly (aged 55–71 years). Cardiovascular risk factors were common, including smoking, hypertension, and diabetes mellitus. The mean range for TC, HDL, LDL, and TG levels were 162 to 220, 92 to 159, 36 to 53, and 66 to 145 mg/mL, respectively (Table 2). The available information indicated that more than half patients were taking an angiotensin II receptor blocker or angiotensin-converting enzyme inhibitor (Table 2). Eight of the nine studies were of moderate to high quality, while one study was of low quality due to its open label, non-randomized study design[25] (Supplemental Files 2–3, http://links.lww.com/MD/H658).\nCharacteristics of included studies.\nACS = acute coronary syndrome, F/U = follow-up, RCT = randomized control trial, SAP = stable angina pectoris.\nBaseline demographics and characteristics in patients who received intensive lipid-lowering with a statin or statin alone.\nData reported as Ezetimibe, Evolucumab, or Alirocumab + Statin/Statin alone.\nACEi = angiotensin-converting enzyme inhibitor, ARB = angiotensin II receptor blocker, BMI = body mass index, DM = diabetes mellitus, HDL = high-density lipoprotein, HTN = hypertension, LDL, low-density lipoprotein, TC, total cholesterol, TG = triglycerides, yr = years, NR = not reported.", "The pooled estimate for evolocumab and alirocumab demonstrated a significant favorable effect of PCSK9 inhibitors on TAV as measured by IVUS (SMD: −3.63, 95%CI: −4.44, −2.83) with significant heterogeneity (I2 = 90.5%). The addition of a PCSK9 inhibitor to a statin resulted in a significant reduction in the absolute change between baseline and follow-up for LDL-C (SMD: −30.87, 95%CI: −39.29, −22.45), TC (SMD: −26.04, 95%CI: −36.49, −15.58), and TG (SMD: −3.19, 95%CI: −5.56, −0.82), but not HDL-C (SMD: −1.14, 95%CI: −10.76, 8.49) (Fig. 
1).\nEffects of evolocumab and alirocumab on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.\nIn contrast, the meta-analysis of 7 studies demonstrated that addition of ezetimibe led to a significant reduction in TAV (SMD: −0.24, 95%CI: −0.40, −0.09) without heterogeneity (I2 = 2.9%). The pooled estimate also showed a significant favorable effect of ezetimibe in terms of the difference at follow-up in lipid levels for LDL-C (SMD: −0.85, 95%CI: −1.07, −0.63), TC (SMD: −0.60, 95%CI: −0.78, −0.42), and TG (SMD: −1.23, 95%CI: −2.08, −0.39), but an insignificant effect on HDL-C (SMD: 0.06, 95%CI:–0.09, 0.21) (Fig. 2).\nEffect of ezetimibe on total atheroma volume and lipid levels. Lipids include low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, total cholesterol, and triglycerides.", "Subgroup analysis showed favorable effects of evolocumab and alirocumab on total atheroma volume according to baseline TAV (<median: −1.07, 95%CI: −2.13, −0.01; ≥median: −1.14, 95%CI: −1.65, −0.62), gender (female: −1.48, 95%CI: −2.17, −0.79; male: −0.87, 95%CI:–1.29, −0.44), type 2 diabetes mellitus (with: −1.26, 95%CI: −2.03, −0.49; without: −1.03, 95%CI: −1.83, −0.23). The addition of a PCSK9 inhibitor led to regression of plaque (SMD: −1.01, 95%CI: −1.40, −0.63) in patients with prior stain use, but not in statin naïve patients (SMD: −0.94, 95%CI: −2.10, 0.23) (Fig. 3). The heterogeneity was significantly reduced in the subgroup for baseline TAV (I2 = 11.2% vs 0%), gender (I2 = 0% vs 0%), type 2 diabetes mellitus (I2 = 0% vs 8.5%), and prior stain use (I2 = 0% vs 0%).\nEffects of evolocumab and alirocumab on total atheroma volume according to baseline characteristics. Baseline characteristics include baseline total atheroma volume, gender, type 2 diabetes mellitus, and prior statin use.", "One study reported the additive effects of evolocumab on plaque composition, and two studies reported the additive effects of ezetimibe on plaque composition. Patients treated with ezetimibe showed similar changes in necrotic core (SMD: 0.04, 95%CI: −0.28, 0.35), fibro-fatty plaque (SMD: −.33, 95%CI–0.74, 0.08), fibrous plaque (−0.22 95%CI: −0.53, 0.10), and dense calcification (SMD: −0.12, 95%CI:–0.46, 0.22) compared with patients not treated with ezetimibe (Fig. 4). Evolocumab had no significant additional effect on the changes in fibrofatty plaque (−3.0 ± 1.0 vs–5.0 ± 1.0 mm3; P = .49), fibrous plaque (−2.4 ± 0.6 mm3 vs −3.0 ± 0.6 mm3; P = .49), necrotic core (0.1 ± 0.5 mm3 vs 0.6 ± 0.5 mm3; P = .49), or dense calcification (0.6 ± 0.3 mm3 vs 1.0 ± 0.3 mm3; P = .49).[29]\nEffect of ezetimibe on plaque composition. Plaque compositions include necrotic core, fibro-fatty plaque, fibrous plaque, and dense calcification.", "The present meta-analysis found significant reductions in plaque and lipid burdens in patients who received intensive lipid-lowering treatment with ezetimibe, evolocumab or alirocumab in addition to statin therapy. Subgroup analysis according to baseline total atheroma, gender, and type 2 diabetes mellitus also supported the favorable effect of the PCSK9 inhibitors on TAV. 
The GLAGOV study revealed that the addition of evolocumab in patients receiving statin therapy had a favorable effect on the progression of atherosclerotic plaques, while the ODYSSEY J-IVUS study found that addition of alirocumab resulted in a numerically greater but not statistically significant reduction in the TAV.[27,28] The lack of a statistically significant difference in the ODYSSEY J-IVUS study may have been due to its limited sample size, the short duration of the treatment period, and the increase in ezetimibe therapy that occurred in the standard care group.[28]\nAddition of a PCSK9 inhibitor and ezetimibe to statin therapy further reduced LDL-C, TC, and TG level, but not HDL-C. Incremental lowering of LDL-C with ezetimibe, alirocumab and evolocumab was shown to improve cardiovascular outcomes in the IMPROVE-IT,[4] ODYSSEY LONG TERM[6] and FOURIER[5] studies. The clinical benefit of LDL-C lowering treatment have also been proven in patients aged 75 years and older.[30] Moreover, the Lipid Rich Plaque (LRP) study indicated that lipid-heavy plaques of non-culprit lesions are associated with subsequent major adverse coronary events in patients with known coronary artery disease.[11,31,32] Therefore, it is necessary to consider tight control of plaque and LDL-C levels by adding PCSK9 inhibitors and ezetimibe for the initiation of rapid and effective plaque modification.\nOur findings showed that the addition of ezetimibe did not influence plaque composition, which is consistent with previous results for the addition of evolocumab.[29] In contrast, two meta-analyses showed that long-term and high-intensity statin treatment decreases fibrous tissue and increases dense calcification but does not induce significant changes in necrotic core and fibro-fatty plaque.[33,34] Evolocumab and ezetimibe promote favorable effects on lipid content and plaque atheroma and improve cardiovascular outcomes but fail to improve plaque composition as assessed by IVUS.[4,27,29] There may be two alternative explanations for the contradictory findings. On one hand, the spatial configuration rather than the amount of each type of plaque is the key factor in plaque vulnerability.[33] On the other hand, IVUS is unable to quantify the potential additional benefits of intensive lipid-lowering therapies and to reflect the plaque composition it is purported to measure.[29]\nA previous meta-analysis showed that statin treatment induced higher regression of plaque volume in patients with acute coronary syndrome (ACS) than in patients with stable angina pectoris (SAP).[35] Future studies are needed to further investigate the difference in coronary plaque regression between patients with ACS and SAP who received ezetimibe, evolocumab or alirocumab in addition to statin therapy. Although ultrasound has been widely used in the quantification of plaque composition, atherosclerotic plaque composition assessed by computed tomography and magnetic resonance imaging might offer advanced plaque characterization.[36] Large-scale prospective studies are required to demonstrate the incremental value of advanced imaging approaches to plaque burden measurements. 
The degree of plaque change was associated with the percentage reduction in LDL-C, and no threshold level at which the LDL-C lowering benefit ceases has been established.[37] Although the IMPROVE-IT and FOURIER trials support the additional benefits of ezetimibe and evolocumab regardless of LDL-C levels,[4,38] future studies are required to investigate the effect ezetimibe and PCSK9 inhibitors on coronary plaque in addition to LDL-C levels.\nSeveral limitations of the present study should be noted. First, we must be cautious in extrapolating findings from patients with clinical coronary disease to asymptomatic patients with subclinical atherosclerosis. Second, most composition analyses of plaque were pre-post comparisons of intensive lipid-lowering therapies, which make it difficult to dissect the natural progression of plaque composition.[33] Thus, the longitudinal change in plaque composition may be useful for assessing the effect of an intensive lipid-lowering strategy. Third, the lack of remarkable differences in plaque composition with addition of evolocumab and ezetimibe may reflect the potential challenges with measurement of plaque volume due to the generation of acoustic shadows and variable catheter position. Fourth, Hougaard et al provided median and range values for fibro-fatty and fibrous plaque as well as dense calcification, which was excluded from the meta-analysis of plaque composition.[21] Fifth, only one study assessed the effect of evolocumab on plaque composition, and no study evaluated the influence of alirocumab on plaque composition. Sixth, only nine papers were included in this meta-analysis and only one study reported data for treatment with evolocumab and alirocumab. Finally, the included studies included patients with different demographics, comorbidities, and baseline drug use in addition to having different study designs and follow-up periods, contributing heterogeneity and possibly weakening the strength of the conclusions.\nIn conclusion, the addition of ezetimibe to statin therapy may further reduce plaque and lipid burdens but may not modify plaque composition. Although current evidence supports a similar impact from the addition of PCSK9 inhibitors to statin therapy, more evidence is needed to confirm such an effect.", "DL and MZ conceived and designed the research; all authors collected data, conducted the research, and analyzed and interpreted data; DL wrote the initial paper; CL, YT, ZL and MZ revised the paper; MZ had primary responsibility for the final content. All authors read and approved the final manuscript.\nConceptualization: Di Liang, Ming Zhang.\nData curation: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nFormal analysis: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nMethodology: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.\nSoftware: Di Liang, Di Liang, Chang Li, Yanming Tu, Zhiyong Li.\nSupervision: Ming Zhang.\nWriting – original draft: Di Liang.\nWriting – review & editing: Di Liang, Chang Li, Yanming Tu, Zhiyong Li, Ming Zhang.", "" ]
[ "intro", "methods", null, null, null, null, null, null, "results", null, null, null, null, "discussion", null, "supplementary-material" ]
[ "alirocumab", "evolocumab", "ezetimibe", "intravascular ultrasound", "plaque" ]
Underwater endoscopic mucosal resection of upper gastrointestinal subepithelial tumors: A case series pilot study (with video).
36254017
Underwater endoscopic mucosal resection (UW-EMR) has been recently introduced as an effective technique for rectal third layer subepithelial tumors. Therefore, we aimed to assess the safety, efficacy, and procedure time of UW-EMR for upper gastrointestinal subepithelial tumors (SETs) originating from the deep mucosal and/or submucosal layers.
INTRODUCTION
Between August 2018 and July 2022, a total of 17 SETs (7 duodenal SETs, 6 gastric SETs, and 4 esophageal SETs) were included in this study. On endoscopic ultrasound examination, the tumors were found to be embedded in the submucosa without muscularis propria invasion. All SETs were resected successfully using UW-EMR. The characteristics of the tumors and their R0 resection rate, adverse event rate, and recurrence rate were evaluated retrospectively.
METHODS
The mean tumor size was 0.9 cm (range, 0.3-1.5 cm). The en bloc resection and complete resection rates were both 100%. The patients showed no complications such as perforation or bleeding. Histologic assessments of the resected tumors revealed 9 neuroendocrine tumors (7 in the duodenum, 2 in the stomach), 2 gastric cystica profunda, 1 gastric follicular lymphoma, 1 gastric fibromyxoma, 3 esophageal granular cell tumors, and 1 esophageal adenoid cystic carcinoma. The mean procedural time was 3.2 minutes (range, 1.3-8.7 minutes). No recurrence was observed during the follow-up period.
RESULTS
UW-EMR is a safe and effective treatment for upper gastrointestinal SETs embedded in the submucosal layer. Further studies are needed to compare other endoscopic resection techniques.
CONCLUSION
[ "Endoscopic Mucosal Resection", "Esophageal Neoplasms", "Gastric Mucosa", "Humans", "Lymphoma, Non-Hodgkin", "Pilot Projects", "Retrospective Studies", "Stomach Neoplasms", "Treatment Outcome" ]
9575717
1. Introduction
The incidence of asymptomatic incidental upper gastrointestinal subepithelial tumors (SETs) is increasing with the popularization of endoscopic screening for upper gastrointestinal tract tumors and advancements in high-resolution endoscopy.[1] In general, most small SETs covered with normal mucosa can be observed with periodic follow-up. However, some tumors such as neuroendocrine tumors (NETs), lymphomas, and granular cell tumors (GCTs) have malignant potential. Thus, differential diagnosis between potentially malignant and benign upper gastrointestinal SETs is important. Endoscopic ultrasonography (EUS) is a useful tool to approximate the size, layer of origin, and echogenicity of tumors.[2] NETs and GCTs originating from the submucosal layer of the upper gastrointestinal wall have malignant potential. Unfortunately, the accuracy of EUS in predicting the correct histologic diagnosis ranges from 45% to 66% and is also dependent on the operator’s experience.[3–5] Therefore, definite tissue diagnosis of hypoechoic SETs embedded in the submucosa should be considered whenever possible. Conventional endoscopic mucosal resection (EMR) is a simple procedure for the diagnosis and treatment of small SETs confined to the muscularis mucosa and/or submucosa. Submucosal injection before snaring is helpful to avoid perforation by increasing the distance between the muscle and the tumor.[6] However, submucosal injection makes lesion capture by snaring more difficult.[7] Furthermore, complete tumor resection is not always easy because SETs involve the submucosa.[8] Endoscopic submucosal dissection (ESD) can increase the rate of en bloc R0 resection for these lesions. However, the disadvantages of ESD include the technical burden, prolonged procedure time, and related adverse events.[9] Underwater EMR (UW-EMR) was recently introduced as a useful method to remove rectal NETs with an R0 resection rate similar to that of ESD.[10] The floating force provided by filling the gastrointestinal lumen with water instead of submucosal injection lifts mucosal and submucosal tumors away from the muscularis propria and facilitates snaring of the tumor by the creation of a pseudopedicle.[11] Moreover, UW-EMR for rectal NETs can reduce the procedure time dramatically in comparison with ESD. Therefore, we aimed to evaluate the efficacy, safety, and procedure time of UW-EMR for deep mucosal and/or submucosal layer SETs located in the esophagus, stomach, and duodenum.
2. Methods
[SUBTITLE] 2.1. Study design [SUBSECTION] From August 2018 to July 2022, a total of 17 upper gastrointestinal SETs were removed using UW-EMR at Pusan National University Yangsan Hospital. All patients were examined by endoscopy and EUS (UMP, 20 MHz; Olympus, Tokyo, Japan) before the endoscopic resection. EUS confirmed that the SETs had hypoechoic echogenicity involving the submucosa without muscularis propria invasion. The location, size, pathology, complete resection rate, complication rate, recurrence rate, and follow-up duration were evaluated retrospectively. The protocol for this study was reviewed and approved by the Institutional Review Board of the School of Medicine, Pusan National University (05-2020-186). Written informed consent was obtained from all the patients. This study was conducted in accordance with the human and ethical principles of research specified by the Declaration of Helsinki. Endoscopic procedure (see Video, Supplemental Video, http://links.lww.com/MD/H584, which illustrates UW-EMR for esophageal, duodenal and gastric SETs).
The endoscope used for UW-EMR was GIF-HQ290 or GIF-2TQ260M (Olympus, Tokyo, Japan) in this study (Fig. 1A). All endoscopic procedures were performed under conscious sedation (intravenous administration of midazolam and meperidine). The patient maintained the left lateral decubitus position during the procedure. EUS was performed to evaluate the origin of the SET before the endoscopic resection (Fig. 1B). When the SET was located in the esophagus, the table was tilted about 15 degrees to prevent aspiration. Warm distilled water was infused into the gastrointestinal lumen using a water pump. The infusion of water continued until the tumor was completely immersed. The polypectomy snare size (Endoflex, Germany; range, 15-25 mm) was chosen according to the size of the SET. The snare was inserted through the endoscopic accessory channel. SET capture was performed in the underwater immersion state after confirming that the target tumor was slightly bulging from the mucosal surface (Fig. 1C). SET resection was performed using an Endocut Q current (effect, 3; cut duration, 2; cut interval, 3) generated by a VIO300D (ERBE, Tuebingen, Germany) electrosurgical unit (Fig. 1D and E). Endoscopic clipping was used to prevent complications such as delayed bleeding or perforation after UW-EMR for duodenal tumors. In other locations, such as the esophagus and stomach, prophylactic endoscopic clipping was not used. Resected specimens were evaluated by a pathologist (Fig. 1F). All procedures were conducted by 3 endoscopists (SJ Kim, CW Choi and DG Ryu), each with more than 5 years of experience in therapeutic endoscopy. Patients with gastrointestinal NETs, esophageal GCTs, esophageal adenoid cystic carcinoma, or gastric follicular lymphoma underwent a follow-up endoscopic examination to evaluate local recurrence at the resection site. The first follow-up endoscopic examination with biopsy was performed approximately 3 months after the UW-EMR.
Endoscopic image shows a 13-mm yellowish, hard subepithelial tumor with an erosion on the top in the stomach body (A). Endoscopic ultrasonography shows a 13-mm, homogeneous, hypoechoic lesion in the third layer (B). Water filling in the lumen causes the lesion to float, allowing the endoscopist to snare the tumor easily (C). En bloc resection was achieved (D and E). A pathological examination shows a G1 neuroendocrine tumor (mitotic rate: 0/10 high-power fields; Ki-67 proliferation index: 1%) with a free lateral resection margin (F). [SUBTITLE] 2.2. Definition [SUBSECTION] Histopathological evaluation of the specimens was performed with 2-mm slices, with the microscopic evaluation including depth of invasion, lateral and vertical resection margins, and pathologic diagnosis. En bloc resection was defined as endoscopic resection of the tumor in a single piece. Complete resection was defined by the absence of tumor cells in microscopic evaluations at the resection margin. The procedure time was counted from the infusion of warm water to the resection of the SET. We defined significant bleeding as a reduction of more than 2 g/dL in the hemoglobin level. The patient underwent second-look endoscopy when significant bleeding occurred after endoscopic resection. Perforation was diagnosed by the presence of subdiaphragmatic air or subcutaneous emphysema on chest radiographs after UW-EMR.
3. Results
The average age of the patients included in the study was 60.1 years (range, 36-79 years). The mean tumor size was 0.9 cm (range, 0.3-1.5 cm). Seven of the 17 lesions were located in the duodenum (41.2%), 6 in the stomach (35.3%), and 4 in the esophagus (23.5%) (Table 1). Endoscopic biopsies before UW-EMR diagnosed 9 of the 15 SETs (6 duodenal NETs, 2 gastric NETs, and 1 esophageal GCT). Patient and tumor characteristics. The overall en bloc and complete resection rates were both 100%. The mean procedural time was 3.2 minutes (range, 1.3-8.7 minutes). Histologic assessments of the removed SETs revealed 9 NETs (7 in the duodenum and 2 in the stomach), 2 gastric cystica profunda, 1 gastric follicular lymphoma, 1 gastric fibromyxoma, 3 esophageal granular cell tumors, and 1 esophageal adenoid cystic carcinoma. All NETs were grade G1 and showed no lymphovascular invasion. Follow-up endoscopic examinations were performed 3 months after UW-EMR. The biopsies at the UW-EMR site showed no residual tumor. No serious adverse events such as perforation or significant bleeding occurred during the procedures (Table 2). Clinical outcomes. G = grade, GCP = gastritis cystica profunda, GCT = granular cell tumor, NET = neuroendocrine tumor.
null
null
[ "2.1. Study design", "2.2. Definition", "Authors’ contributions" ]
[ "From August 2018 to July 2022, a total of 17 upper gastrointestinal SETs were removed using UW-EMR at Pusan National University Yangsan Hospital. All patients were examined by endoscopy and EUS (UMP, 20 MHz; Olympus, Tokyo, Japan) before the endoscopic resection. EUS confirmed that the SETs had hypoechoic echogenicity involving the submucosa without muscularis propria invasion. The location, size, pathology, complete resection rate, complication rate, recurrence rate, and follow-up duration were evaluated retrospectively. The protocol for this study was reviewed and approved by the Institutional Review Board of School of Medicine Pusan National University (05-2020-186). Written informed consent was obtained from all the patients. This study was conducted in accordance with the human and ethical principles of research specified by the Declaration of Helsinki.\nEndoscopic procedure (see Video, Supplemental Video, http://links.lww.com/MD/H584, which illustrates UW-EMR for esophageal, duodenal and gastric SETs)\nThe endoscope used for UW-EMR was GIF-HQ290 or GIF-2TQ260M (Olympus, Tokyo, Japan) in this study (Fig. 1A). All endoscopic procedures were performed under conscious sedation (intravenous administration of midazolam and meperidine). The patient maintained the left lateral decubitus position during the procedure. EUS was performed to evaluate the origin of SET before the endoscopic resection (Fig. 1B). When the SET was located on the esophagus, the table was tilted about 15 degrees to prevent aspiration. Warm distilled water was infused into the gastrointestinal lumen using a water pump. The infusion of water continued until the tumor was completely immersed underwater. The polypectomy snare (Endoflex, Germany) size (range, 15~25 mm) was determined based on the size of the SETs. A polypectomy snare was inserted through an endoscopic accessary channel. SET capture was performed in the underwater immersion state after confirming that the target tumor was slightly bulging from the mucosal surface (Fig. 1C). SET resection was performed using an Endocut Q current (effect, 3; cut duration, 2; cut interval, 3), which was generated using a VIO300D (ERBE, Tuebingen, Germany) electorosurgical unit (Fig. 1D and E). Endoscopic clipping was used to prevent complications such as delayed bleeding or perforation after UW-EMR for duodenal tumor. In other location, such as esophagus and stomach, prophylactic endoscopic clipping was not used. Resected specimens were evaluated by pathologist (Fig. 1F). All procedures were conducted by 3 endoscopists (SJ Kim, CW Choi and DG Ryu) with > 5 years of experience in therapeutic endoscopy. Patients with gastrointestinal NETs, esophageal GCTs, esophageal adenoid cystic carcinoma, gastric follicular lymphoma, underwent a follow-up endoscopic examination to evaluate local recurrence at the resection site. The first follow-up endoscopic examination with biopsy was performed approximately 3 months after the UW-EMR.\nEndoscopic image shows a 13 mm sized yellowish, hard subepithelial tumor with an erosion on the top in the stomach body (A). A endoscopic ultrasonography shows a 13 mm, homogeneous, hypoechoic lesion in the third layer (B). Water filling in the lumen causes the lesion to float, allowing the endoscopist to snare the tumor easily (C). En bloc resection was achieved (D and E). 
A pathological examination shows that a G1 neuroendocrine tumor (mitotic rate: 0/10 high-power field, Ki67 proliferation index: 1%) with free lateral resection margin (F).", "Histopathological evaluation of the specimens was performed with 2-mm slices, with the microscopic evaluation including depth of invasion, lateral and vertical resection margins, and pathologic diagnosis. En bloc resection was defined as endoscopic resection of the tumor in a single piece. Complete resection was defined by the absence of tumor cells in microscopic evaluations at the resection margin. The procedure time was counted from the infusion of warm water to the resection of SET. We defined significant bleeding as a reduction of more than 2 g/dL in the hemoglobin level. The patient underwent second-look endoscopy when significant bleeding occurred after endoscopic resection. Perforation was diagnosed by the presence of subdiaphragmatic air or subcutaneous emphysema on chest radiographs after UW-EMR.", "All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Cheol Woong Choi, Hyung Wook Kim, Su Bum Park, Dae Gon Ryu. Tae Un Kim wrote the first draft of the manuscript. Su Jin Kim reviewed edited this manuscript. All authors read and approved the final manuscript.\nConceptualization: Su Jin Kim, Tae Un Kim, Cheol Woong Choi, Hyung Wook Kim, Su Bum Park, Dae Gon Ryu.\nWriting – original draft: Tae Un Kim." ]
[ null, null, null ]
[ "1. Introduction", "2. Methods", "2.1. Study design", "2.2. Definition", "3. Results", "4. Discussion", "Authors’ contributions", "Supplementary Material" ]
[ "The incidence of asymptomatic incidental upper gastrointestinal subepithelial tumors (SETs) is increasing with the popularization of endoscopic screening for upper gastrointestinal tract tumors and advancements in high-resolution endoscopy.[1] In general, most small SETs covered with normal mucosa can be observed with periodic follow-up. However, some tumors such as neuroendocrine tumors (NETs), lymphoma, and granular cell tumor (GCTs) have malignant potential. Thus, differential diagnosis between potentially malignant and benign upper gastrointestinal SETs is important.\nEndoscopic ultrasonography (EUS) is a useful tool to approximate the size, layer of origin, and echogenicity of tumors.[2] NETs and GCTs originating from the submucosal layer of the upper gastrointestinal wall have malignant potential. Unfortunately, the accuracy of EUS to predict the correct histologic diagnosis ranges from 45% to 66%, and is also dependent on the operator’s experience.[3–5] Therefore, definite tissue diagnosis of hypoechoic SETs embedded in the submucosa should be considered whenever possible.\nConventional endoscopic mucosal resection (EMR) is a simple procedure for the diagnosis and treatment of small SETs confined to the muscularis mucosa and/or submucosa. Submucosal injection before snaring is helpful to avoid perforation by increasing the distance between the muscle and the tumor.[6] However, submucosal injection makes lesion capture by snaring more difficult.[7] Furthermore, complete tumor resection is not always easy because SETs involve the submucosa.[8] Endoscopic submucosal dissection (ESD) can increase the rate of en bloc R0 resection for these lesions. However, the disadvantages of ESD include the technical burden, prolonged procedure time, and related adverse events.[9]\nUnderwater EMR (UW-EMR) was recently introduced as a useful method to remove rectal NETs with similar R0 resection rate as ESD.[10] The floating force provided by filling the gastrointestinal lumen with water instead of submucosal injection allows lifting of mucosal and submucosal tumors from the muscularis propria and facilitates snaring of the tumor by the creation of a pseudopedicle.[11] Moreover, UW-EMR for rectal NETs can reduce the procedure time dramatically in comparison with ESD. Therefore, we aimed to evaluate the efficacy, safety, and procedure time of UW-EMR for deep mucosal and/or submucosal layer SETs located on the esophagus, stomach, and duodenum.", "[SUBTITLE] 2.1. Study design [SUBSECTION] From August 2018 to July 2022, a total of 17 upper gastrointestinal SETs were removed using UW-EMR at Pusan National University Yangsan Hospital. All patients were examined by endoscopy and EUS (UMP, 20 MHz; Olympus, Tokyo, Japan) before the endoscopic resection. EUS confirmed that the SETs had hypoechoic echogenicity involving the submucosa without muscularis propria invasion. The location, size, pathology, complete resection rate, complication rate, recurrence rate, and follow-up duration were evaluated retrospectively. The protocol for this study was reviewed and approved by the Institutional Review Board of School of Medicine Pusan National University (05-2020-186). Written informed consent was obtained from all the patients. 
This study was conducted in accordance with the human and ethical principles of research specified by the Declaration of Helsinki.\nEndoscopic procedure (see Video, Supplemental Video, http://links.lww.com/MD/H584, which illustrates UW-EMR for esophageal, duodenal and gastric SETs)\nThe endoscope used for UW-EMR was GIF-HQ290 or GIF-2TQ260M (Olympus, Tokyo, Japan) in this study (Fig. 1A). All endoscopic procedures were performed under conscious sedation (intravenous administration of midazolam and meperidine). The patient maintained the left lateral decubitus position during the procedure. EUS was performed to evaluate the origin of SET before the endoscopic resection (Fig. 1B). When the SET was located on the esophagus, the table was tilted about 15 degrees to prevent aspiration. Warm distilled water was infused into the gastrointestinal lumen using a water pump. The infusion of water continued until the tumor was completely immersed underwater. The polypectomy snare (Endoflex, Germany) size (range, 15~25 mm) was determined based on the size of the SETs. A polypectomy snare was inserted through an endoscopic accessary channel. SET capture was performed in the underwater immersion state after confirming that the target tumor was slightly bulging from the mucosal surface (Fig. 1C). SET resection was performed using an Endocut Q current (effect, 3; cut duration, 2; cut interval, 3), which was generated using a VIO300D (ERBE, Tuebingen, Germany) electorosurgical unit (Fig. 1D and E). Endoscopic clipping was used to prevent complications such as delayed bleeding or perforation after UW-EMR for duodenal tumor. In other location, such as esophagus and stomach, prophylactic endoscopic clipping was not used. Resected specimens were evaluated by pathologist (Fig. 1F). All procedures were conducted by 3 endoscopists (SJ Kim, CW Choi and DG Ryu) with > 5 years of experience in therapeutic endoscopy. Patients with gastrointestinal NETs, esophageal GCTs, esophageal adenoid cystic carcinoma, gastric follicular lymphoma, underwent a follow-up endoscopic examination to evaluate local recurrence at the resection site. The first follow-up endoscopic examination with biopsy was performed approximately 3 months after the UW-EMR.\nEndoscopic image shows a 13 mm sized yellowish, hard subepithelial tumor with an erosion on the top in the stomach body (A). A endoscopic ultrasonography shows a 13 mm, homogeneous, hypoechoic lesion in the third layer (B). Water filling in the lumen causes the lesion to float, allowing the endoscopist to snare the tumor easily (C). En bloc resection was achieved (D and E). A pathological examination shows that a G1 neuroendocrine tumor (mitotic rate: 0/10 high-power field, Ki67 proliferation index: 1%) with free lateral resection margin (F).\nFrom August 2018 to July 2022, a total of 17 upper gastrointestinal SETs were removed using UW-EMR at Pusan National University Yangsan Hospital. All patients were examined by endoscopy and EUS (UMP, 20 MHz; Olympus, Tokyo, Japan) before the endoscopic resection. EUS confirmed that the SETs had hypoechoic echogenicity involving the submucosa without muscularis propria invasion. The location, size, pathology, complete resection rate, complication rate, recurrence rate, and follow-up duration were evaluated retrospectively. The protocol for this study was reviewed and approved by the Institutional Review Board of School of Medicine Pusan National University (05-2020-186). Written informed consent was obtained from all the patients. 
This study was conducted in accordance with the human and ethical principles of research specified by the Declaration of Helsinki.\nEndoscopic procedure (see Video, Supplemental Video, http://links.lww.com/MD/H584, which illustrates UW-EMR for esophageal, duodenal and gastric SETs)\nThe endoscope used for UW-EMR was GIF-HQ290 or GIF-2TQ260M (Olympus, Tokyo, Japan) in this study (Fig. 1A). All endoscopic procedures were performed under conscious sedation (intravenous administration of midazolam and meperidine). The patient maintained the left lateral decubitus position during the procedure. EUS was performed to evaluate the origin of SET before the endoscopic resection (Fig. 1B). When the SET was located on the esophagus, the table was tilted about 15 degrees to prevent aspiration. Warm distilled water was infused into the gastrointestinal lumen using a water pump. The infusion of water continued until the tumor was completely immersed underwater. The polypectomy snare (Endoflex, Germany) size (range, 15~25 mm) was determined based on the size of the SETs. A polypectomy snare was inserted through an endoscopic accessary channel. SET capture was performed in the underwater immersion state after confirming that the target tumor was slightly bulging from the mucosal surface (Fig. 1C). SET resection was performed using an Endocut Q current (effect, 3; cut duration, 2; cut interval, 3), which was generated using a VIO300D (ERBE, Tuebingen, Germany) electorosurgical unit (Fig. 1D and E). Endoscopic clipping was used to prevent complications such as delayed bleeding or perforation after UW-EMR for duodenal tumor. In other location, such as esophagus and stomach, prophylactic endoscopic clipping was not used. Resected specimens were evaluated by pathologist (Fig. 1F). All procedures were conducted by 3 endoscopists (SJ Kim, CW Choi and DG Ryu) with > 5 years of experience in therapeutic endoscopy. Patients with gastrointestinal NETs, esophageal GCTs, esophageal adenoid cystic carcinoma, gastric follicular lymphoma, underwent a follow-up endoscopic examination to evaluate local recurrence at the resection site. The first follow-up endoscopic examination with biopsy was performed approximately 3 months after the UW-EMR.\nEndoscopic image shows a 13 mm sized yellowish, hard subepithelial tumor with an erosion on the top in the stomach body (A). A endoscopic ultrasonography shows a 13 mm, homogeneous, hypoechoic lesion in the third layer (B). Water filling in the lumen causes the lesion to float, allowing the endoscopist to snare the tumor easily (C). En bloc resection was achieved (D and E). A pathological examination shows that a G1 neuroendocrine tumor (mitotic rate: 0/10 high-power field, Ki67 proliferation index: 1%) with free lateral resection margin (F).\n[SUBTITLE] 2.2. Definition [SUBSECTION] Histopathological evaluation of the specimens was performed with 2-mm slices, with the microscopic evaluation including depth of invasion, lateral and vertical resection margins, and pathologic diagnosis. En bloc resection was defined as endoscopic resection of the tumor in a single piece. Complete resection was defined by the absence of tumor cells in microscopic evaluations at the resection margin. The procedure time was counted from the infusion of warm water to the resection of SET. We defined significant bleeding as a reduction of more than 2 g/dL in the hemoglobin level. The patient underwent second-look endoscopy when significant bleeding occurred after endoscopic resection. 
Perforation was diagnosed by the presence of subdiaphragmatic air or subcutaneous emphysema on chest radiographs after UW-EMR.\nHistopathological evaluation of the specimens was performed with 2-mm slices, with the microscopic evaluation including depth of invasion, lateral and vertical resection margins, and pathologic diagnosis. En bloc resection was defined as endoscopic resection of the tumor in a single piece. Complete resection was defined by the absence of tumor cells in microscopic evaluations at the resection margin. The procedure time was counted from the infusion of warm water to the resection of SET. We defined significant bleeding as a reduction of more than 2 g/dL in the hemoglobin level. The patient underwent second-look endoscopy when significant bleeding occurred after endoscopic resection. Perforation was diagnosed by the presence of subdiaphragmatic air or subcutaneous emphysema on chest radiographs after UW-EMR.", "From August 2018 to July 2022, a total of 17 upper gastrointestinal SETs were removed using UW-EMR at Pusan National University Yangsan Hospital. All patients were examined by endoscopy and EUS (UMP, 20 MHz; Olympus, Tokyo, Japan) before the endoscopic resection. EUS confirmed that the SETs had hypoechoic echogenicity involving the submucosa without muscularis propria invasion. The location, size, pathology, complete resection rate, complication rate, recurrence rate, and follow-up duration were evaluated retrospectively. The protocol for this study was reviewed and approved by the Institutional Review Board of School of Medicine Pusan National University (05-2020-186). Written informed consent was obtained from all the patients. This study was conducted in accordance with the human and ethical principles of research specified by the Declaration of Helsinki.\nEndoscopic procedure (see Video, Supplemental Video, http://links.lww.com/MD/H584, which illustrates UW-EMR for esophageal, duodenal and gastric SETs)\nThe endoscope used for UW-EMR was GIF-HQ290 or GIF-2TQ260M (Olympus, Tokyo, Japan) in this study (Fig. 1A). All endoscopic procedures were performed under conscious sedation (intravenous administration of midazolam and meperidine). The patient maintained the left lateral decubitus position during the procedure. EUS was performed to evaluate the origin of SET before the endoscopic resection (Fig. 1B). When the SET was located on the esophagus, the table was tilted about 15 degrees to prevent aspiration. Warm distilled water was infused into the gastrointestinal lumen using a water pump. The infusion of water continued until the tumor was completely immersed underwater. The polypectomy snare (Endoflex, Germany) size (range, 15~25 mm) was determined based on the size of the SETs. A polypectomy snare was inserted through an endoscopic accessary channel. SET capture was performed in the underwater immersion state after confirming that the target tumor was slightly bulging from the mucosal surface (Fig. 1C). SET resection was performed using an Endocut Q current (effect, 3; cut duration, 2; cut interval, 3), which was generated using a VIO300D (ERBE, Tuebingen, Germany) electorosurgical unit (Fig. 1D and E). Endoscopic clipping was used to prevent complications such as delayed bleeding or perforation after UW-EMR for duodenal tumor. In other location, such as esophagus and stomach, prophylactic endoscopic clipping was not used. Resected specimens were evaluated by pathologist (Fig. 1F). 
All procedures were conducted by 3 endoscopists (SJ Kim, CW Choi and DG Ryu) with > 5 years of experience in therapeutic endoscopy. Patients with gastrointestinal NETs, esophageal GCTs, esophageal adenoid cystic carcinoma, gastric follicular lymphoma, underwent a follow-up endoscopic examination to evaluate local recurrence at the resection site. The first follow-up endoscopic examination with biopsy was performed approximately 3 months after the UW-EMR.\nEndoscopic image shows a 13 mm sized yellowish, hard subepithelial tumor with an erosion on the top in the stomach body (A). A endoscopic ultrasonography shows a 13 mm, homogeneous, hypoechoic lesion in the third layer (B). Water filling in the lumen causes the lesion to float, allowing the endoscopist to snare the tumor easily (C). En bloc resection was achieved (D and E). A pathological examination shows that a G1 neuroendocrine tumor (mitotic rate: 0/10 high-power field, Ki67 proliferation index: 1%) with free lateral resection margin (F).", "Histopathological evaluation of the specimens was performed with 2-mm slices, with the microscopic evaluation including depth of invasion, lateral and vertical resection margins, and pathologic diagnosis. En bloc resection was defined as endoscopic resection of the tumor in a single piece. Complete resection was defined by the absence of tumor cells in microscopic evaluations at the resection margin. The procedure time was counted from the infusion of warm water to the resection of SET. We defined significant bleeding as a reduction of more than 2 g/dL in the hemoglobin level. The patient underwent second-look endoscopy when significant bleeding occurred after endoscopic resection. Perforation was diagnosed by the presence of subdiaphragmatic air or subcutaneous emphysema on chest radiographs after UW-EMR.", "The average age of the patients included in the study was 60.1 years (range, 36-79 years). The mean tumor size was 0.9 cm (range, 0.3-1.5 cm). Seven of the 17 lesions were located in the duodenum (46.6%), 6 were located in the stomach (26.7%), and 4 were present in the esophagus (26.7%) (Table 1). Endoscopic biopsies before UW-EMR diagnosed 9 of the 15 SETs (6 duodenal NETs, 2 gastric NETs, and 1 esophageal GCT).\nPatient and tumor characteristics.\nThe overall en bloc and complete resection rates were 100%, respectively. The mean procedural time was 3.2 min (range, 1.3-8.7 minutes). Histologic assessments of the removed SETs revealed 9 NETs (7 in the duodenum and 2 in the stomach), 2 gastric cystica profunda, 1 gastric follicular lymphoma, 1 gastric fibromyxoma, 3 esophageal granular cell tumors, and 1 esophageal adenoid cystic carcinoma. All NETs were G1 grade and show no lymphovascular invasion. Follow-up endoscopic examinations were performed 3 months after UW-EMR. The biopsies at the UW-EMR site showed no residual tumor. No serious adverse events such as perforation or significant bleeding occurred during the procedures (Table 2).\nClinical outcomes.\nG = grade, GCP = gastritis cystic profunda, GCT = granular cell tumor, NET = neuroendocrine tumor.", "The widespread use of endoscopy for health checkups and the advancements in high-definition endoscopy have increased the rate of detection of upper gastrointestinal SETs. 
Although EUS can provide information regarding SETs, including their echogenicity, size, and layer of origin, SETs originating from the submucosal layer usually require pathologic diagnosis, except for lipomas, cysts, and vascular lesions.[12] The results of the present study suggested that UW-EMR was a safe and effective resection method for upper gastrointestinal SETs originating from the submucosal layer.\nAlthough conventional EMR is a simple endoscopic resection technique for upper gastrointestinal SETs embedded in the submucosal layer, it is not easy to obtain a deep resection margin with this procedure.[13] Moreover, the extensive damage at the tumor resection margin makes it difficult to determine the pathologic margin status. In comparison with conventional EMR, ESD shows a higher en bloc resection rate while securing the resection margin.[14,15] In fact, ESD can achieve en bloc resection even in cases where EMR is difficult due to submucosal fibrosis. However, ESD requires greater technical skill and a prolonged procedure time, and is associated with a higher risk of adverse events, including perforation.\nConsidering these limitations of conventional EMR and ESD, several modified EMR methods have been described as effective treatment modalities. For tumors less than 1 cm in size that are located in the third layer of the esophagus or duodenum, band-ligation–assisted EMR showed high en bloc and complete resection rates.[16,17] EMR with a cap (EMR-C) has also been shown to be an effective method to remove submucosal SETs less than 1 cm in size located in the digestive tract.[18] However, the size of the transparent cap or band limits the usability of these techniques for removing SETs larger than 1 cm.\nUW-EMR has recently emerged as a resection method to secure the margin for non-ampullary duodenal epithelial tumors less than 2 cm in size.[19,20] The water filling during UW-EMR maintains the proper muscle layer and allows tumors to float in the water without increasing tissue tension. The contraction of the superficial layers submerged in water and the lifting movement caused by the fat tissue of the submucosa create a pseudopedicle, making it easier to capture the tumor. Furthermore, lesions greater than 1 cm in size can be captured using a large-diameter snare. Therefore, complete resection was achieved in all 3 cases larger than 1 cm in this study.\nUW-EMR offers advantages over various endoscopic resection methods, including ESD, conventional EMR, band-ligation–assisted EMR, and EMR with a cap. The submucosal injection performed in several endoscopic resection methods causes tumors to sink under the epithelium. The increased tension of the surrounding tissue after submucosal injection makes capture of the tumor more difficult. In addition, lumen distention caused by air insufflation during the procedure causes thinning of the muscle layer. These factors increase the procedure time and the risk of perforation. In particular, the duodenum and esophagus are more difficult sites for endoscopic resection because of their narrow lumen and thin proper muscle layer. However, filling saline into the lumen causes the duodenal or esophageal wall to slope gently without thinning the muscle layer or sinking the gastrointestinal SETs. 
This increases the safety of the procedure and decreases the risk of adverse events such as perforation.\nRecent studies reported that UW-EMR can effectively remove colorectal or duodenal lesions with submucosal fibrosis.[21,22] Mechanical stimulation caused by procedures such as biopsy can cause submucosal fibrosis, resulting in non-lifting of SETs and complicating the various EMR techniques that rely on submucosal injection. Unfortunately, we were unable to assess the effectiveness of UW-EMR for SETs with fibrosis because there was no case of fibrosis in our study.\nThis study had some limitations. First, this was a retrospective study, and the retrospective review of the clinical outcomes may have introduced a potential bias. Second, this was a single-center study with a small sample size, and all procedures were performed by 3 experienced endoscopists, which limited the generalizability of these findings to other centers with less experience. A multicenter study with a larger sample size is required to overcome these limitations.\nIn conclusion, the results of our study suggest that UW-EMR is safe and effective for the resection of upper gastrointestinal SETs located in the submucosal layer, including NETs and GCTs.", "All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Cheol Woong Choi, Hyung Wook Kim, Su Bum Park, and Dae Gon Ryu. Tae Un Kim wrote the first draft of the manuscript. Su Jin Kim reviewed and edited this manuscript. All authors read and approved the final manuscript.\nConceptualization: Su Jin Kim, Tae Un Kim, Cheol Woong Choi, Hyung Wook Kim, Su Bum Park, Dae Gon Ryu.\nWriting – original draft: Tae Un Kim.", "" ]
[ "intro", "methods", null, null, "results", "discussion", null, "supplementary-material" ]
[ "case report", "endoscopic resection", "subepithelial tumor" ]
Research features between Urology and Nephrology authors in articles regarding UTI related to CKD, HD, PD, and renal transplantation.
36254018
A urinary tract infection (UTI) is one of the most common types of infection. When bacteria enter the bladder or kidney and multiply in the urine, a UTI can occur. The urethra is shorter in women than in men, which makes it easier for bacteria to reach the bladder or kidneys and cause infection. A comparison of the research differences between Urology and Nephrology (UN) authors regarding UTI pertaining to the 4 areas (i.e., Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation [CHPR]) is thus necessary. We propose and verify 2 hypotheses: CHPR-related articles on UTI have equal journal impact factors (JIFs) in research achievements (RAs) and UN authors have similar research features (RFs).
BACKGROUND
Based on keywords associated with UTI and CHPR in titles, subject areas, and abstracts since 2013, we obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the Web of Science core collection. There were 1030 corresponding and first (co-first) authors with hT-JIF-indices (i.e., the hT-index was computed using JIFs rather than the citations that are usually used). The following 5 visualizations were used to present the authors' RAs: radar, Sankey, time-to-event, impact beam plot, and choropleth map. The forest plot was used to distinguish RFs between UN authors by comparing the proportional counts of keyword plus terms in the Web of Science core collection.
METHODS
It was observed that CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030) and UN departments had different RFs (Q = 53.24, df = 29, P = .004). In terms of countries, institutes, departments, and authors, the United States (hT-JIF = 38.30), the Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF indices, respectively.
RESULTS
With the aid of visualizations, the hT-JIF-index and keyword plus were shown to be useful for assessing RAs and distinguishing RFs between UN authors. Replication of this study on other topics and in other disciplines, rather than limiting it to UN authors as we did here, is recommended in the future.
CONCLUSION
[ "Female", "Humans", "Kidney Transplantation", "Male", "Nephrology", "Renal Insufficiency, Chronic", "Urinary Tract Infections", "Urology" ]
9575707
1. Introduction
Infections of the urinary tract (UTIs) occur when bacteria from the skin or rectum enter the urethra and infect the urinary tract.[1] Bladder infection (cystitis) is the most common type of infection of the urinary tract. Kidney infection (pyelonephritis) is another type of UTI. There are fewer of them, but they are more serious than bladder infections.[2] UTIs treated by Urology and Nephrology (UN) physicians are required to be known. In some cases, pyelonephritis is not caused by UTIs which are not treated promptly. Several factors contribute to the development of pyelonephritis, and it may occur without a history of cystitis. Nonetheless, when a UTI is not treated promptly, the bacteria can travel up to the kidneys and cause pyelonephritis. Pyelonephritis is an infection of the kidney that produces urine. Fever and back pain may result as a result of this condition.[3] It is estimated that 80% to 90% of UTIs are caused by specialized Escherichia coli (E coli) strains referred to as uropathogenic E coli. Althrough Enterobacteriaceae are bacteria that can be found in the digestive tract, they can be isolated from a variety of sources, and UTIs are not always caused by bacteria in the intestinal. However, if these bacteria live in the intestines, they may occasionally enter the urinary tract system. Other types of bacteria, which are less common, can also cause urinary tract infections.[4] [SUBTITLE] 1.1. Patients at high risk of developing UTI in the 4 groups [SUBSECTION] Chronic kidney disease (CKD) is defined as damage to the kidneys that lasts at least 3 months or more and results in a decrease in the glomerular filtration rate below 60 mL/min/1.73 m2, regardless of the cause.[5] Some chronic diseases, such as diabetes mellitus and hypertension, as well as some primary renal disorders, such as glomerulonephritis, eventually develop CKD as a result of their long-term complications.[6] The estimated prevalence of CKD worldwide is 13.4% (11.7%–15.1%), and the estimated number of patients with end-stage kidney disease (ESKD) requiring renal replacement therapy ranges from 4.902 to 7.083 million in China.[7] End-stage renal disease (ESRD) is defined as when a person’s kidneys cease to function on a permanent basis, requiring long-term dialysis or kidney transplantation to maintain life.[8] Despite the lack of available data for developing countries, it is estimated that 70% of patients with ESRD (stage 5 CKD) will be in developing countries by 2030, which will increase the burden on health care systems’ budgets.[6,9] As the population ages, the incidence of ESRD increases every year. The mortality rate remains high despite the continuous development of medical standards.[10] During the early stages of CKD, medical doctors will try to slow or control the causes of the patient’s kidney disease. It is important to note that treatment options vary according to the cause. However, damage to the kidneys may continue to worsen even when an underlying condition, such as diabetes mellitus or high blood pressure, has been controlled. There’s no cure for CKD, but treatment can help relieve the symptoms and stop it getting worse. 
The treatment will depend on the stage of patients’ CKD, such as lifestyle changes—to help patients stay as healthy as possible, medicine—to control associated problems (e.g., high blood pressure and high cholesterol).[11,12] In the treatment of ESRD, a variety of renal replacement therapies are commonly used, including hemodialysis, peritoneal dialysis, and renal transplantation.[13] The treatment may result in an increase in life expectancy, but it must be maintained for an extended period of time, which has a substantial impact on the patient physically and psychologically.[14] In addition, there is a high risk of developing UTI and/or sepsis (urosepsis) in patients with CKD, hemodialysis, peritoneal dialysis, and renal transplantation (CHPR for short).[15] However, little is known about research achievements (RAs) in CHPR-related articles on UTI, as well as research features (RFs) among UN authors. Chronic kidney disease (CKD) is defined as damage to the kidneys that lasts at least 3 months or more and results in a decrease in the glomerular filtration rate below 60 mL/min/1.73 m2, regardless of the cause.[5] Some chronic diseases, such as diabetes mellitus and hypertension, as well as some primary renal disorders, such as glomerulonephritis, eventually develop CKD as a result of their long-term complications.[6] The estimated prevalence of CKD worldwide is 13.4% (11.7%–15.1%), and the estimated number of patients with end-stage kidney disease (ESKD) requiring renal replacement therapy ranges from 4.902 to 7.083 million in China.[7] End-stage renal disease (ESRD) is defined as when a person’s kidneys cease to function on a permanent basis, requiring long-term dialysis or kidney transplantation to maintain life.[8] Despite the lack of available data for developing countries, it is estimated that 70% of patients with ESRD (stage 5 CKD) will be in developing countries by 2030, which will increase the burden on health care systems’ budgets.[6,9] As the population ages, the incidence of ESRD increases every year. The mortality rate remains high despite the continuous development of medical standards.[10] During the early stages of CKD, medical doctors will try to slow or control the causes of the patient’s kidney disease. It is important to note that treatment options vary according to the cause. However, damage to the kidneys may continue to worsen even when an underlying condition, such as diabetes mellitus or high blood pressure, has been controlled. There’s no cure for CKD, but treatment can help relieve the symptoms and stop it getting worse. The treatment will depend on the stage of patients’ CKD, such as lifestyle changes—to help patients stay as healthy as possible, medicine—to control associated problems (e.g., high blood pressure and high cholesterol).[11,12] In the treatment of ESRD, a variety of renal replacement therapies are commonly used, including hemodialysis, peritoneal dialysis, and renal transplantation.[13] The treatment may result in an increase in life expectancy, but it must be maintained for an extended period of time, which has a substantial impact on the patient physically and psychologically.[14] In addition, there is a high risk of developing UTI and/or sepsis (urosepsis) in patients with CKD, hemodialysis, peritoneal dialysis, and renal transplantation (CHPR for short).[15] However, little is known about research achievements (RAs) in CHPR-related articles on UTI, as well as research features (RFs) among UN authors. [SUBTITLE] 1.2. 
Journal impact factor of articles on CHPR related to UTI [SUBSECTION] The Journal Impact Factor (JIF) is an index annually published by The Journal Citation Reports.[16] It was proposed by Eugene Garfield in 1955 to compare investigators’ and journals’ research influence on its time.[17] Original research and review articles are the only article types that meet the definition of citable articles according to the Institute of Scientific Information.[18,19] Web of Science (Thomson Reuters Inc.) is a citation service accessible through an indexing database and search engine on the Web of Science core collection (WoSCC).[20–24] Factors associated with the JIF for UN journals were investigated.[16] The JIF might be used to compare RAs in CHPR-related articles on UTI instead of the traditional citations that appear in articles on a regular basis. Because UTI most commonly occurs in kidneys and bladders, we proposed 2 hypotheses: The RAs in CHPR-related articles on UTI based on JIFs are equal. The RFs between UN authors based on article keywords in WoSCC are similar. The Journal Impact Factor (JIF) is an index annually published by The Journal Citation Reports.[16] It was proposed by Eugene Garfield in 1955 to compare investigators’ and journals’ research influence on its time.[17] Original research and review articles are the only article types that meet the definition of citable articles according to the Institute of Scientific Information.[18,19] Web of Science (Thomson Reuters Inc.) is a citation service accessible through an indexing database and search engine on the Web of Science core collection (WoSCC).[20–24] Factors associated with the JIF for UN journals were investigated.[16] The JIF might be used to compare RAs in CHPR-related articles on UTI instead of the traditional citations that appear in articles on a regular basis. Because UTI most commonly occurs in kidneys and bladders, we proposed 2 hypotheses: The RAs in CHPR-related articles on UTI based on JIFs are equal. The RFs between UN authors based on article keywords in WoSCC are similar. [SUBTITLE] 1.3. Challenges faced in comparing the 2 issues [SUBSECTION] There are several challenges that need to be overcome in order to validate the 2 hypotheses, such as appropriate bibliometric metrics to measure the RAs are required to determine; statistical methods to examine differences in RAs of CHPR are lacking in dealing with missing data in JIF; RFs defined using medical subject headings (MeSH terms)[25–27] are still not uncovered using the keyword plus in WoSCC in literature; and the visualization used to differentiate RFs between 2 journal authors not found in past research. 
Fortunately, the hT-index (also known as the Tapered h-index)[28,29] takes into account all citations with descending weights when evaluating the RAs and generalizes the h-index.[30] The hT-index more closely related to the h-index than other bibliometric indices (e.g., x-/g-index[31,32]) has been verified.[33] Time-to-event analysis, also known as survival analysis, refers to a set of methods for analyzing the length of time until a well-defined endpoint occurs.[34] It is unique to survival data that not all subjects (e.g., CHPR articles in this study) experience the event (e.g., having JIF denoted by 1 for the event) by the end of the observation period (e.g., JIF observed in this study) so that the exact survival times (denoted by JIF) for some articles are unknown (i.e., JIF = 0 censored with nonevent in time-to-event analysis).[35,36] In addition, data are usually skewed, which limits the usefulness of analysis methods that assume a normal distribution. Thus, nonparametric and semiparametric methods, specifically the Kaplan−Meier estimator, log-rank test, and Cox proportional hazards model, can be applied to the ongoing series of time-to-event JIF data in CHPR articles. By using social network analysis (SNA)[37–39] to replace MeSH terms[25–27] with keyword plus, the RFs between UN authors can be compared using forest plots.[25,36] There are several challenges that need to be overcome in order to validate the 2 hypotheses, such as appropriate bibliometric metrics to measure the RAs are required to determine; statistical methods to examine differences in RAs of CHPR are lacking in dealing with missing data in JIF; RFs defined using medical subject headings (MeSH terms)[25–27] are still not uncovered using the keyword plus in WoSCC in literature; and the visualization used to differentiate RFs between 2 journal authors not found in past research. Fortunately, the hT-index (also known as the Tapered h-index)[28,29] takes into account all citations with descending weights when evaluating the RAs and generalizes the h-index.[30] The hT-index more closely related to the h-index than other bibliometric indices (e.g., x-/g-index[31,32]) has been verified.[33] Time-to-event analysis, also known as survival analysis, refers to a set of methods for analyzing the length of time until a well-defined endpoint occurs.[34] It is unique to survival data that not all subjects (e.g., CHPR articles in this study) experience the event (e.g., having JIF denoted by 1 for the event) by the end of the observation period (e.g., JIF observed in this study) so that the exact survival times (denoted by JIF) for some articles are unknown (i.e., JIF = 0 censored with nonevent in time-to-event analysis).[35,36] In addition, data are usually skewed, which limits the usefulness of analysis methods that assume a normal distribution. Thus, nonparametric and semiparametric methods, specifically the Kaplan−Meier estimator, log-rank test, and Cox proportional hazards model, can be applied to the ongoing series of time-to-event JIF data in CHPR articles. By using social network analysis (SNA)[37–39] to replace MeSH terms[25–27] with keyword plus, the RFs between UN authors can be compared using forest plots.[25,36] [SUBTITLE] 1.4. Aims of this study [SUBSECTION] The purpose of this study is to verify the 2 hypotheses: CHPR-related articles on UTI have equal JIFs in RAs and UN authors have similar RFs. 
The purpose of this study is to verify the 2 hypotheses: CHPR-related articles on UTI have equal JIFs in RAs and UN authors have similar RFs.
2. Methods
[SUBTITLE] 2.1. Data source [SUBSECTION] By searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]). The data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed. By searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]). The data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed. [SUBTITLE] 2.2. First goal: CHPR articles on UTI with JIFs in RAs [SUBSECTION] [SUBTITLE] 2.2.1. Computation of the ht-JIF index. [SUBSECTION] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1. = 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC) where i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1. = 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC) where i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40] [SUBTITLE] 2.2.2. Time-to-event analysis applied to compare RAs in CHPR. [SUBSECTION] Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2. By using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2. Time-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05. Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2. By using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2. Time-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05. [SUBTITLE] 2.2.1. 
2.3. Statistical description

2.3.1. Computation of the hT-JIF index for many articles.

The hT-JIF index is derived from Equations 3 to 5 when the number of articles is greater than 1.[33] The starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (= 1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1. The weights of ten papers, each with ten JIF points, can be computed as in Table 1 following Equations 4 and 5. Across papers 1 to 10, the hT indices increase monotonically [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and that the contribution of the hT core within its Durfee square is unchanged.[28,29]

Table 1. Weights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case). "+" denotes the row sum. JIF = journal impact factor.
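As an illustration of how the index extends to several articles, the sketch below applies the standard tapered (Ferrers-diagram) weighting, in which the j-th JIF point of the i-th ranked article contributes 1/(2 × max(i, j) − 1). Because Equations 3 to 5 are not reproduced in the text, this is our approximation of the authors' procedure; it returns the same single-article value (1.88 for JIF = 6), climbs monotonically for the 10-article example, and reaches exactly 10.00, closely tracking Table 1.

```python
def ht_index(jifs):
    """Tapered hT index over a list of per-article JIFs: the j-th JIF point of the
    i-th ranked article (ranked by descending JIF) fills one Ferrers-diagram cell
    and contributes 1 / (2 * max(i, j) - 1)."""
    ranked = sorted((round(v) for v in jifs), reverse=True)
    return sum(1.0 / (2 * max(i, j) - 1)
               for i, score in enumerate(ranked, start=1)
               for j in range(1, score + 1))

print(round(ht_index([6]), 2))               # 1.88, same as the single-article case
for k in range(1, 11):                       # 10 articles with JIF = 10 each
    print(k, round(ht_index([10] * k), 2))   # rises monotonically and ends at exactly 10.00
```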
2.3.2. Drawing radar plots with hT-JIF indices for entities.

Four types of article-related entities were included, namely countries, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots. The Y-index[41,42] was proposed to evaluate RAs based on the number of publications in the corresponding and first (co-first) author positions (denoted by J = FP and RP). However, previous studies have not illustrated how a radar diagram can be drawn based on the Y-index (used as the radius in the first quadrant),[41,42] and RAs should not be measured by publication counts alone, as the Y-index does. To identify the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index was therefore used, taking into account both publications and JIFs, denoted by bubble size on the radar plot.[43] A choropleth map[44] was used to compare RAs across countries/regions. In particular, the RAs of US states and of Chinese provinces/metropolitan cities were compared with those of other countries/regions on the basis of equal research populations.
2.4. Second goal: RFs in UN authors

2.4.1. Data arrangement.

The RFs of the UN departments were identified and compared using the keyword plus field in WoSCC, in two steps. Step 1: cluster analysis was performed on the keyword plus terms of the UN departments using SNA[37–39] and Pajek software,[45] and the clusters were drawn on Sankey diagrams.[46,47] Step 2: differences in RFs were compared on a forest plot.[27,37,47]

2.4.2. Computation of parameters in a forest plot.

A forest plot presents the degree to which data from multiple studies (here, keyword plus terms) observing the same effect overlap with one another.[48] Results that fail to overlap (or fit) well are described as heterogeneous and are deemed less conclusive; otherwise, the data are said to be homogeneous and more conclusive. Heterogeneity is indicated by the I² statistic[49]; see Equations 6 to 11, where k is the number of keyword plus terms. The P value yielded by the MS Excel function Chidist(T2, df) is identical to that obtained with analysis of variance.[23,46] Here, df is the degrees of freedom (i.e., k − 1), n denotes the sample size (i.e., the event counts and the total observations) in the 2 groups,[50] and SE_i = √Var_i, derived from Equation 10, where Var_i is the within-study variance in study i. The computation of odds ratios and their CIs is addressed in Equations 16 to 20, where the event and non-event counts for the 2 groups (i.e., the UN departments in this study) are denoted {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed as (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equals Odds ± 1.96 × SE_i (Equation 20). When all odds ratios in a series of studies were compared, heterogeneity below 50% (based on I² in Equation 6) was deemed low and indicates a greater degree of similarity between study data (e.g., keywords) than an I² value above 50%, which indicates more dissimilarity.[27,51–53]
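The following sketch illustrates the kind of computation described in 2.4.2, combining the stated odds-ratio formula with the conventional log-scale standard error and the usual Cochran's Q / I² definitions. Because Equations 6 to 20 themselves are not reproduced in the text, the function names, the log-scale confidence interval, and the keyword counts below are our assumptions rather than the authors' exact implementation.

```python
import math

def odds_ratio_ci(n1e, n1n, n2e, n2n, z=1.96):
    """Odds ratio for a 2x2 keyword table and its 95% CI on the usual log scale."""
    or_ = (n1e * n2n) / (n2e * n1n)
    se_log = math.sqrt(1 / n1e + 1 / n1n + 1 / n2e + 1 / n2n)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi, se_log

def q_and_i2(log_ors, variances):
    """Cochran's Q and I^2 across k keyword-level estimates (fixed-effect weights)."""
    weights = [1 / v for v in variances]
    pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
    df = len(log_ors) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical event / non-event keyword counts in the 2 UN departments:
tables = [(12, 88, 20, 80), (30, 70, 25, 75), (8, 92, 15, 85)]
log_ors, variances = [], []
for n1e, n1n, n2e, n2n in tables:
    or_, lo, hi, se = odds_ratio_ci(n1e, n1n, n2e, n2n)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    log_ors.append(math.log(or_))
    variances.append(se ** 2)
q, i2 = q_and_i2(log_ors, variances)
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```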
2.5. Articles with higher JIFs in UN departments

CHPR articles on UTI with higher JIFs in the UN departments were displayed on an impact beam plot (IBP).[54,55] The top-JIF articles, each denoted by a dot, were arranged on the IBP from left to right by study type, with JIFs normalized from 0 to 100 (i.e., using the MS Excel function PercentRank(array, x, 1) × 100). The IBP dashboard was displayed on Google Maps; clicking a dot on the dashboard links directly to the article in PubMed.

2.6. Statistics and tools

A forest plot was used to compare the odds ratios, with the significance level for Type I error set at 0.05. Four visualizations were used to present the authors' RAs: radar plots, Sankey diagrams, time-to-event curves, and a choropleth map. The forest plot was used to distinguish RFs by comparing the proportional counts of keyword plus terms in WoSCC between the UN authors. Google Maps was used to display both the choropleth maps and the radar diagrams. The study flowchart is shown in Figure 1.

Figure 1. Study flowchart.
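A simplified Python analogue of the PERCENTRANK-based normalization used for the IBP in 2.5 is sketched below; Excel's interpolation and tie handling are omitted, and the JIF list is hypothetical.

```python
def percent_rank(values, x):
    """Simplified Excel PERCENTRANK: share of values strictly below x, out of n - 1."""
    below = sum(1 for v in values if v < x)
    return below / (len(values) - 1)

jifs = [0.8, 1.9, 2.4, 3.1, 4.7, 6.0, 9.3, 16.43]          # hypothetical JIF list
for j in jifs:
    print(j, round(percent_rank(jifs, j) * 100, 1))        # the highest JIF maps to 100
```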
3. Results
3.1. First goal: CHPR articles on UTI with JIFs in RAs

We obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the WoSCC for publications since 2013 (Table 2). There were 1030 corresponding and first (co-first) authors with hT-JIF indices (i.e., the hT index was computed from JIFs rather than from citations, as is usually done).

Table 2. Distribution of publications for the 4 study types over the years.

As shown in Figure 2, CHPR-related articles had unequal JIFs (χ² = 13.08, P = .004, df = 3, n = 1030). Having the highest hazard ratio and 95% confidence interval, the renal transplantation study type had the lowest JIFs among the CHPR groups. The first hypothesis, that CHPR-related articles on UTI have equal JIFs across RAs, was therefore not supported.

Figure 2. Time-to-event analysis of CHPR articles on UTI with JIFs in RAs. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, JIF = journal impact factor, RA = research achievement, UTI = urinary tract infection.

3.2. Statistical description

Among countries, institutes, departments, and authors, the United States (38.30), the Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF indices (Figs. 3–6). Notably, when US states and Chinese provinces/metropolitan cities were included in the comparison, New York ranked top in hT-JIF indices, followed by Australia and the UK (Fig. 3).

Figure 3. Geographical distribution of hT-JIF indices in countries/regions. JIF = journal impact factor.

3.3. Second goal: RFs in UN authors

In Figure 7, 3 major clusters (renal transplantation, CKD, and nephrology) are separated by color, and 30 major keywords are displayed. Wider lines between keywords indicate more frequent co-occurrence, and a larger block indicates that more keywords were observed in the SNA. As illustrated in Figure 8, the UN departments had different RFs (Q = 53.24, df = 29, P = .004). The I² value (45.53%, below 50%) indicates a relatively high degree of similarity between keywords.[27,51–53] Nevertheless, the second hypothesis, that UN authors have similar RFs, was not supported.

3.4. Articles with higher JIFs in UN departments

Figure 9 shows the CHPR articles on UTI with higher JIFs in the UN departments. The article published in 2013 with the highest JIF (16.43) was written by Heyns CF from South Africa.[56] The 2 articles[57,58] with the highest JIFs shown in Figure 2 (bottom) were not written by UN authors. Readers are invited to scan the QR code in Figure 9 and click on a dot of interest to read the article on PubMed.

Figure 4. Comparison of the hT-JIF index among research institutes on a radar plot. JIF = journal impact factor.

Figure 5. Comparison of the hT-JIF index among departments on a radar plot. JIF = journal impact factor.

Figure 6. Comparison of the hT-JIF index among authors on a radar plot. JIF = journal impact factor.

Figure 7. Three clusters separated using SNA for keyword plus. SNA = social network analysis.

Figure 8. Comparison of 30 major keywords in proportional counts in UN departments. UN = Urology and Nephrology.

Figure 9. CHPR articles on UTI with higher JIFs in UN departments on the IBP. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, IBP = impact beam plot, JIF = journal impact factor, UN = Urology and Nephrology, UTI = urinary tract infection.

3.5. Online dashboards shown on Google Maps

All the QR codes in the figures are linked to the dashboards. Readers are encouraged to examine the dashboards displayed on Google Maps.
[ "Key points", "1.2. Journal impact factor of articles on CHPR related to UTI", "1.3. Challenges faced in comparing the 2 issues", "1.4. Aims of this study", "2.1. Data source", "2.2. First goal: CHPR articles on UTI with JIFs in RAs", "2.2.1. Computation of the ht-JIF index.", "2.2.2. Time-to-event analysis applied to compare RAs in CHPR.", "2.3. Statistical description", "2.3.1. Computation of the ht-JIF index in many articles.", "2.3.2. Draw radar plots with ht-JIF-indices for entities.", "2.4. Second goal: RFs in UN authors", "2.4.1. Data arrangement.", "2.4.2. Computation of parameters in a forest plot.", "2.5. Articles with higher JIFs in UN departments", "2.6. Statistics and tools", "3.1. First goal: CHPR articles on UTI with JIFs in RAs", "3.2. Statistical description", "3.3. Second goal: RFs in UN authors", "3.4. Articles with higher JIFs in UN departments", "3.5. Online dashboards shown on google maps", "4.1. Additional information", "4.2. High JIF articles regarding CHPR-related to UTI", "4.3. Implications and changes", "4.4. Limitations and suggestions", "5. Conclusion", "Acknowledgments", "Authors’ contributions" ]
[ "The novel hT-JIF index was introduced and proposed for this bibliometric analysis of CHPR pertaining UTI articles using visualizations.\nRadar plots with the hT-JIF index were used to visualize the RAs based on the co-first authors, which is rare in bibliometric studies.\nA time-to-event analysis and forest plot were used in this study to verify the 2 hypotheses. There is a lack of literature demonstrating the effectiveness of the method in detail about verifying the hypotheses as did in this study.\nSupplemental Digital Files contain instructions for conducting this study for readers wishing to replicate it independently.", "The Journal Impact Factor (JIF) is an index annually published by The Journal Citation Reports.[16] It was proposed by Eugene Garfield in 1955 to compare investigators’ and journals’ research influence on its time.[17] Original research and review articles are the only article types that meet the definition of citable articles according to the Institute of Scientific Information.[18,19] Web of Science (Thomson Reuters Inc.) is a citation service accessible through an indexing database and search engine on the Web of Science core collection (WoSCC).[20–24] Factors associated with the JIF for UN journals were investigated.[16] The JIF might be used to compare RAs in CHPR-related articles on UTI instead of the traditional citations that appear in articles on a regular basis.\nBecause UTI most commonly occurs in kidneys and bladders, we proposed 2 hypotheses:\nThe RAs in CHPR-related articles on UTI based on JIFs are equal.\nThe RFs between UN authors based on article keywords in WoSCC are similar.", "There are several challenges that need to be overcome in order to validate the 2 hypotheses, such as appropriate bibliometric metrics to measure the RAs are required to determine; statistical methods to examine differences in RAs of CHPR are lacking in dealing with missing data in JIF; RFs defined using medical subject headings (MeSH terms)[25–27] are still not uncovered using the keyword plus in WoSCC in literature; and the visualization used to differentiate RFs between 2 journal authors not found in past research.\nFortunately, the hT-index (also known as the Tapered h-index)[28,29] takes into account all citations with descending weights when evaluating the RAs and generalizes the h-index.[30] The hT-index more closely related to the h-index than other bibliometric indices (e.g., x-/g-index[31,32]) has been verified.[33]\nTime-to-event analysis, also known as survival analysis, refers to a set of methods for analyzing the length of time until a well-defined endpoint occurs.[34] It is unique to survival data that not all subjects (e.g., CHPR articles in this study) experience the event (e.g., having JIF denoted by 1 for the event) by the end of the observation period (e.g., JIF observed in this study) so that the exact survival times (denoted by JIF) for some articles are unknown (i.e., JIF = 0 censored with nonevent in time-to-event analysis).[35,36] In addition, data are usually skewed, which limits the usefulness of analysis methods that assume a normal distribution. 
Thus, nonparametric and semiparametric methods, specifically the Kaplan−Meier estimator, log-rank test, and Cox proportional hazards model, can be applied to the ongoing series of time-to-event JIF data in CHPR articles.\nBy using social network analysis (SNA)[37–39] to replace MeSH terms[25–27] with keyword plus, the RFs between UN authors can be compared using forest plots.[25,36]", "The purpose of this study is to verify the 2 hypotheses: CHPR-related articles on UTI have equal JIFs in RAs and UN authors have similar RFs.", "By searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]).\nThe data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed.", "[SUBTITLE] 2.2.1. Computation of the ht-JIF index. [SUBSECTION] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\nThe citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\n[SUBTITLE] 2.2.2. Time-to-event analysis applied to compare RAs in CHPR. [SUBSECTION] Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\nStatistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.", "The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. 
For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]", "Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.", "[SUBTITLE] 2.3.1. Computation of the ht-JIF index in many articles. [SUBSECTION] The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\nThe hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\n[SUBTITLE] 2.3.2. Draw radar plots with ht-JIF-indices for entities. [SUBSECTION] Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). 
To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\nFour types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.", "The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.", "Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.", "[SUBTITLE] 2.4.1. Data arrangement. 
[SUBSECTION] The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\nThe RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\n[SUBTITLE] 2.4.2. Computation of parameters in a forest plot. [SUBSECTION] The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\nThe forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. 
Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]", "The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]", "The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]", "CHPR articles on UTIs with higher JIFs in UN departments were displayed on an impact beam plot (IBP).[54,55] The top JIF articles denoted by each dot were displayed on the IBP, from the left to the right side, by study type with normalized JIF from 0 to 100 (i.e., using the MSExcel function of PercentRank (array, x, 1) × 100). The IPB dashboard was shown on Google Maps. The article is immediately linked to PubMed once the dot is clicked on the dashboard.", "A forest plot was used to compare the odds ratios. The significance level for type I error was set at 0.05.\nThe following 4 visualizations were used to present the author’s RA: radar, Sankey, time-to-event, and choropleth map. The forest plot was used to distinguish RFs by observing the proportional counts of keyword plus in WoSCC between UN authors. Google Maps was used to plot both choropleth maps and radar diagrams. The study flowchart is shown in Figure 1.\nStudy flowchart.", "We obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the WoSCC since 2013 (Table 2). 
There were 1030 corresponding and first (co-first) authors with hT-JIF-indices (i.e., JIF was computed using hT-index rather than citations as usual).\nDistribution of publications for the 4 study types over the years.\nAs shown in Figure 2, CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030). As a result of its highest hazard ratios and 95% confidence intervals, the study type of renal transplantation has the lowest JIF among CHPRs. There was no support for the first hypothesis that CHPR-related articles relating to UTI had equal JIFs in RA.\nTime-to-event analysis of CHPR articles on UTI with JIFs in Ras. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, JIF = journal impact factor, RA = research achievement, UTI = urinary tract infection.", "In terms of countries, institutes, departments, and authors, the United States (38.30), Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF index; see Figures 3 to 6. It is worth noting that New York ranks top in hT-JIF indices, followed by Australia and the UK, when the US states and provinces/metropolitan cities in China were involved in comparison (Fig. 3).\nGeographical distribution of hT-JIF indices in countries/regions. JIF = journal impact factor.", "In Figure 7, 3 major clusters (renal transplantation, CKD, and nephrology) are separated by color, and 30 major keywords are displayed. The wider lines between keywords indicate the frequency of relationships between them. A larger block indicates that there are more keywords observed in SNA. As illustrated in Figure 8, UN departments had different RFs (Q = 53.24, df = 29, P = .004). The I2 (=45.53% <50%) indicates a greater degree of similarity between keywords.[27,51–53] However, there is no evidence supporting the second hypothesis that UN authors have similar RFs.", "Figure 9 shows CHPR articles on UTIs with higher JIFs in UN departments. The article published in 2013 with the highest JIF (16.43) was written by Heyns CF from South Africa.[56] The 2 articles[57,58] with the highest JIF shown in Figure 2 (bottom) were not written by UN authors. Readers are invited to scan the QR code in Figure 9 and click on the dot of interest to read the article on PubMed.\nComparison of the hT-JIF index among research institutes on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among departments on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among authors on a radar plot. JIF = journal impact factor.\nThree clusters were separated using SNA for keyword plus. SNA = social network analysis.\nComparison of 30 major keywords in proportional counts in UN departments. UN = Urology and Nephrology.\nCHPR articles on UTI with higher JIFs in UN departments on the IBP. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, IBP = impact beam plot, JIF = journal impact factor, UN = Urology and Nephrology, UTI = urinary tract infection.", "All the QR codes in Figures are linked to the dashboards. 
4. Discussion

4.1. Additional information

A growing number of patients worldwide suffer from CKD, which is associated with an increased risk of progression to ESKD.[15] The risk of developing infections (e.g., UTI) and/or sepsis (urosepsis) is high among patients with CKD stages 2 to 5, patients receiving chronic dialysis treatment (hemodialysis or peritoneal dialysis), and patients with kidney allograft dysfunction. However, little information is available on how many of these patients suffer from urinary tract infections.

The worldwide prevalence of CKD is estimated at 13.4% (11.7%–15.1%), and the estimated number of patients with ESKD needing renal replacement therapy in China is between 4.902 and 7.083 million.[7] The incidence of infections associated with CKD is less than one per 5000 people per year,[59–61] but frequent UTI episodes may increase the risk of developing ESKD.[61] The incidence is higher in infants and young children than in adults but still moderate, at approximately 1%.[62] A number of clinical risk factors and comorbidities other than UTI are suspected of contributing to the development of ESKD. Predisposing factors for UTI in CKD patients include sex, age, genetic disposition, diabetes mellitus, obstructive nephropathy, arteriolosclerosis (microvascular calcification, ischemic nephropathy), nephrolithiasis, cast nephropathy, immunodeficiency syndromes, and immunosuppressive therapy.[59,63] No previous bibliometric analysis had examined whether CHPR-related articles on UTI have unequal JIFs across RAs and whether UN authors have slightly different RFs, as was done in this study.

To measure research output, most institutions surveyed still rely on simple, easily quantifiable metrics such as the JIF or publication count.[64–66] In Taiwan's medical schools, the JIF has an unrivaled role in faculty achievement evaluation and promotion, a major component of which is the CJA (category of article, journal quality, and author order) assessment.[67] Hence, we measured the RAs of authors of CHPR articles on UTI using the hT-JIF index.

Thirty-six percent of the 7618 papers on peritoneal dialysis published in 887 journals (6991 articles and 627 reviews) were written by American authors,[68] and the United States was the most productive country (n = 51) among the top 100 influential papers on peritoneal dialysis.[69] In HD/PD articles, the United States had the highest hT index (37.15) compared with many other countries. These results are consistent with our finding that the United States has dominated articles regarding CHPR-related UTI since 2013 (hT-JIF = 38.30).[33]

4.2. High-JIF articles regarding CHPR related to UTI

The CHPR-related UTI article entitled "Urological aspects of HIV and AIDS" (PMID 24166342), published in 2013 in Nat Rev Urol by Urology authors at Stellenbosch University in South Africa, has only a few citations in WoSCC (n = 5). The article entitled "History of Childhood Kidney Disease and Risk of Adult End-Stage Renal Disease,"[57] written by Ronit Calderon-Margalit (Israel) and published in N Engl J Med in 2018, has 75 citations in WoSCC. Manikkam Suthanthiran (US) also published a study, "Urinary-cell mRNA profile and acute cellular rejection in kidney allografts," in N Engl J Med in 2013, which has 225 citations in WoSCC.

4.3. Implications and changes

The study has several distinctive features. First, the hT-JIF index, with its decimal places, can complement the hT-/h-index[28–30] by enhancing the discrimination power for identifying RAs and ranking within a group.[70]

Second, time-to-event analysis[34] is appropriate because not all subjects (here, CHPR articles) experience the event (having a JIF, coded 1) by the end of the observation period, so the exact JIF for some articles remains unknown (censored) during analysis.[35,36] Additionally, such data are usually skewed, limiting the usefulness of methods that assume a normal distribution. Thus, the Kaplan–Meier estimator and log-rank test can be adapted to CHPR articles with time-to-event JIF data.

Third, IBPs[54,55] allow authors to examine articles in a completely new manner, particularly through links to PubMed. In addition, Supplemental Digital File 2, http://links.lww.com/MD/H562 describes how to draw the IBP on Google Maps.

Additionally, the study provides 3 further visual representations: a choropleth map based on the hT-JIF index, a forest plot to identify the odds ratios in pairwise comparisons, and a Sankey diagram showing differences in RFs between the 2 UN departments.

Calculating the hT index in combination with the JIF is more complex than calculating the h-index, but this can be resolved with a dedicated software program. The hT-index computation is demonstrated at the link in reference,[40] which provides readers with programming code showing how the hT index can be calculated within a second. The hT-JIF index can thus be supplemented with a coordinate in Figures 4 to 6, such as p(FP, RP), with an additional bubble size. A dedicated software program can be used to overcome the potential problem of computation time when identifying author RAs.

4.4. Limitations and suggestions

Several issues need to be addressed in further research. First, only articles pertaining to CHPR-related UTI were included. Future studies should examine a wider range of UTI-related articles and their RAs and RFs.

Second, although the Y-index[41,42] and hT-index[28,29,32] are considered fair measures of RA contributions, they assume that co-first authors contribute equally to their articles. If authorship does not follow this rule, the results regarding the authors who contributed most to CHPR-related UTI articles will be biased.

Third, calculating the hT index from the sum of weights in the Ferrers tableau (i.e., over all citations or JIF points received) takes some time. Advances in hardware have made this task trivial, comparable to computing other bibliometric metrics (e.g., the h-/x-/g-indices[30–32]), using a dedicated software program, as shown in reference.[40]

Fourth, an hT-JIF index was proposed in this study instead of the usual citation-based index; however, RA is determined by many other factors (e.g., multiplication by CJA[67]) that should be considered when calculating the hT index (e.g., using citations to compute the hT index[67]).

Fifth, according to Figure 3, only countries/regions with higher hT-JIF indices were compared. Readers may also be interested in a list of countries and regions with the Y-index[41,42] shown on a radar plot. Displaying influential countries/regions in this way should be considered in a future study (see how to draw the radar plot in Supplemental Digital File 3, http://links.lww.com/MD/H563).

Finally, although the hT-JIF index is considered useful and applicable, it should be used with caution when comparing differences between groups because it does not always follow a normal distribution. Readers are advised to use the bootstrapping method[71–73] or time-to-event analysis, particularly with 95% confidence intervals, when comparing RAs between groups.
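Because the index is rarely normally distributed, the limitation above recommends bootstrapping for group comparisons; a minimal percentile-bootstrap sketch (with made-up hT-JIF values, not study data) is given below.

```python
import random

def bootstrap_diff_ci(a, b, n_boot=10_000, alpha=0.05, seed=1):
    """Percentile-bootstrap CI for the difference in mean hT-JIF between 2 groups."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(n_boot):
        ra = [rng.choice(a) for _ in a]          # resample each group with replacement
        rb = [rng.choice(b) for _ in b]
        diffs.append(sum(ra) / len(ra) - sum(rb) / len(rb))
    diffs.sort()
    lo = diffs[int(n_boot * alpha / 2)]
    hi = diffs[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

urology = [0.5, 0.9, 1.2, 1.9, 2.4, 3.8, 6.1]        # made-up hT-JIF values
nephrology = [0.7, 1.1, 1.5, 2.2, 3.0, 4.9, 10.3]    # made-up hT-JIF values
print(bootstrap_diff_ci(urology, nephrology))        # a CI covering 0 suggests no clear difference
```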
Using the radar plot to display such influential countries/regions should be explored in a future study (see how to draw the radar plot in Supplemental Digital File 3, http://links.lww.com/MD/H563).\nFinally, even though the hT-JIF index is considered useful and applicable, it should be used with caution when comparing differences between groups since it does not always follow a normal distribution. Readers are recommended to use the bootstrapping method[71–73] or time-to-event analysis, ideally with 95% confidence intervals, when comparing RAs between groups.", "In this study, we used radar plots with the hT-JIF index, based on the number of publications by co-first authors, to demonstrate that the hT-JIF index generalizes the citation-based h-index for assessing the quality and quantity of author contributions across all listed articles. Future bibliometric analyses of academic disciplines or specific research topics, rather than only articles on CHPR-related UTI as in this study, should take the hT-JIF index and the IBP into account.", "We thank Enago (www.enago.tw) for the English language review of this manuscript.", "KK, HY and WCK provided the concept and designed this study. CY and WCK interpreted the data. WC monitored the process and the manuscript. TW and HY drafted the manuscript. All authors read the manuscript and approved the final manuscript.\nConceptualization: Keng-Kok Tan, Chen-Yu Wang, Hsien-Yi Wang.\nData curation: Wei-Chih Kan.\nInvestigation: Willy Chou.\nMethodology: Tsair-Wei Chien." ]
[ "Key points", "1. Introduction", "1.1. Patients at high risk of developing UTI in the 4 groups", "1.2. Journal impact factor of articles on CHPR related to UTI", "1.3. Challenges faced in comparing the 2 issues", "1.4. Aims of this study", "2. Methods", "2.1. Data source", "2.2. First goal: CHPR articles on UTI with JIFs in RAs", "2.2.1. Computation of the ht-JIF index.", "2.2.2. Time-to-event analysis applied to compare RAs in CHPR.", "2.3. Statistical description", "2.3.1. Computation of the ht-JIF index in many articles.", "2.3.2. Draw radar plots with ht-JIF-indices for entities.", "2.4. Second goal: RFs in UN authors", "2.4.1. Data arrangement.", "2.4.2. Computation of parameters in a forest plot.", "2.5. Articles with higher JIFs in UN departments", "2.6. Statistics and tools", "3. Results", "3.1. First goal: CHPR articles on UTI with JIFs in RAs", "3.2. Statistical description", "3.3. Second goal: RFs in UN authors", "3.4. Articles with higher JIFs in UN departments", "3.5. Online dashboards shown on google maps", "4. Discussion", "4.1. Additional information", "4.2. High JIF articles regarding CHPR-related to UTI", "4.3. Implications and changes", "4.4. Limitations and suggestions", "5. Conclusion", "Acknowledgments", "Authors’ contributions", "Supplementary Material" ]
[ "The novel hT-JIF index was introduced and proposed for this bibliometric analysis of CHPR pertaining UTI articles using visualizations.\nRadar plots with the hT-JIF index were used to visualize the RAs based on the co-first authors, which is rare in bibliometric studies.\nA time-to-event analysis and forest plot were used in this study to verify the 2 hypotheses. There is a lack of literature demonstrating the effectiveness of the method in detail about verifying the hypotheses as did in this study.\nSupplemental Digital Files contain instructions for conducting this study for readers wishing to replicate it independently.", "Infections of the urinary tract (UTIs) occur when bacteria from the skin or rectum enter the urethra and infect the urinary tract.[1] Bladder infection (cystitis) is the most common type of infection of the urinary tract. Kidney infection (pyelonephritis) is another type of UTI. There are fewer of them, but they are more serious than bladder infections.[2] UTIs treated by Urology and Nephrology (UN) physicians are required to be known.\nIn some cases, pyelonephritis is not caused by UTIs which are not treated promptly. Several factors contribute to the development of pyelonephritis, and it may occur without a history of cystitis. Nonetheless, when a UTI is not treated promptly, the bacteria can travel up to the kidneys and cause pyelonephritis. Pyelonephritis is an infection of the kidney that produces urine. Fever and back pain may result as a result of this condition.[3] It is estimated that 80% to 90% of UTIs are caused by specialized Escherichia coli (E coli) strains referred to as uropathogenic E coli. Althrough Enterobacteriaceae are bacteria that can be found in the digestive tract, they can be isolated from a variety of sources, and UTIs are not always caused by bacteria in the intestinal. However, if these bacteria live in the intestines, they may occasionally enter the urinary tract system. Other types of bacteria, which are less common, can also cause urinary tract infections.[4]\n[SUBTITLE] 1.1. Patients at high risk of developing UTI in the 4 groups [SUBSECTION] Chronic kidney disease (CKD) is defined as damage to the kidneys that lasts at least 3 months or more and results in a decrease in the glomerular filtration rate below 60 mL/min/1.73 m2, regardless of the cause.[5] Some chronic diseases, such as diabetes mellitus and hypertension, as well as some primary renal disorders, such as glomerulonephritis, eventually develop CKD as a result of their long-term complications.[6] The estimated prevalence of CKD worldwide is 13.4% (11.7%–15.1%), and the estimated number of patients with end-stage kidney disease (ESKD) requiring renal replacement therapy ranges from 4.902 to 7.083 million in China.[7] End-stage renal disease (ESRD) is defined as when a person’s kidneys cease to function on a permanent basis, requiring long-term dialysis or kidney transplantation to maintain life.[8]\nDespite the lack of available data for developing countries, it is estimated that 70% of patients with ESRD (stage 5 CKD) will be in developing countries by 2030, which will increase the burden on health care systems’ budgets.[6,9]\nAs the population ages, the incidence of ESRD increases every year. The mortality rate remains high despite the continuous development of medical standards.[10] During the early stages of CKD, medical doctors will try to slow or control the causes of the patient’s kidney disease. 
It is important to note that treatment options vary according to the cause. However, damage to the kidneys may continue to worsen even when an underlying condition, such as diabetes mellitus or high blood pressure, has been controlled. There’s no cure for CKD, but treatment can help relieve the symptoms and stop it getting worse. The treatment will depend on the stage of patients’ CKD, such as lifestyle changes—to help patients stay as healthy as possible, medicine—to control associated problems (e.g., high blood pressure and high cholesterol).[11,12]\nIn the treatment of ESRD, a variety of renal replacement therapies are commonly used, including hemodialysis, peritoneal dialysis, and renal transplantation.[13] The treatment may result in an increase in life expectancy, but it must be maintained for an extended period of time, which has a substantial impact on the patient physically and psychologically.[14] In addition, there is a high risk of developing UTI and/or sepsis (urosepsis) in patients with CKD, hemodialysis, peritoneal dialysis, and renal transplantation (CHPR for short).[15] However, little is known about research achievements (RAs) in CHPR-related articles on UTI, as well as research features (RFs) among UN authors.\n[SUBTITLE] 1.2. Journal impact factor of articles on CHPR related to UTI [SUBSECTION] The Journal Impact Factor (JIF) is an index annually published by The Journal Citation Reports.[16] It was proposed by Eugene Garfield in 1955 to compare investigators’ and journals’ research influence on its time.[17] Original research and review articles are the only article types that meet the definition of citable articles according to the Institute of Scientific Information.[18,19] Web of Science (Thomson Reuters Inc.) is a citation service accessible through an indexing database and search engine on the Web of Science core collection (WoSCC).[20–24] Factors associated with the JIF for UN journals were investigated.[16] The JIF might be used to compare RAs in CHPR-related articles on UTI instead of the traditional citations that appear in articles on a regular basis.\nBecause UTI most commonly occurs in kidneys and bladders, we proposed 2 hypotheses:\nThe RAs in CHPR-related articles on UTI based on JIFs are equal.\nThe RFs between UN authors based on article keywords in WoSCC are similar.\n[SUBTITLE] 1.3. Challenges faced in comparing the 2 issues [SUBSECTION] There are several challenges that need to be overcome in order to validate the 2 hypotheses, such as appropriate bibliometric metrics to measure the RAs are required to determine; statistical methods to examine differences in RAs of CHPR are lacking in dealing with missing data in JIF; RFs defined using medical subject headings (MeSH terms)[25–27] are still not uncovered using the keyword plus in WoSCC in literature; and the visualization used to differentiate RFs between 2 journal authors not found in past research.\nFortunately, the hT-index (also known as the Tapered h-index)[28,29] takes into account all citations with descending weights when evaluating the RAs and generalizes the h-index.[30] The hT-index more closely related to the h-index than other bibliometric indices (e.g., x-/g-index[31,32]) has been verified.[33]\nTime-to-event analysis, also known as survival analysis, refers to a set of methods for analyzing the length of time until a well-defined endpoint occurs.[34] It is unique to survival data that not all subjects (e.g., CHPR articles in this study) experience the event (e.g., having JIF denoted by 1 for the event) by the end of the observation period (e.g., JIF observed in this study) so that the exact survival times (denoted by JIF) for some articles are unknown (i.e., JIF = 0 censored with nonevent in time-to-event analysis).[35,36] In addition, data are usually skewed, which limits the usefulness of analysis methods that assume a normal distribution. Thus, nonparametric and semiparametric methods, specifically the Kaplan−Meier estimator, log-rank test, and Cox proportional hazards model, can be applied to the ongoing series of time-to-event JIF data in CHPR articles.\nBy using social network analysis (SNA)[37–39] to replace MeSH terms[25–27] with keyword plus, the RFs between UN authors can be compared using forest plots.[25,36]\n[SUBTITLE] 1.4. Aims of this study [SUBSECTION] The purpose of this study is to verify the 2 hypotheses: CHPR-related articles on UTI have equal JIFs in RAs and UN authors have similar RFs.", "Chronic kidney disease (CKD) is defined as damage to the kidneys that lasts at least 3 months or more and results in a decrease in the glomerular filtration rate below 60 mL/min/1.73 m2, regardless of the cause.[5] Some chronic diseases, such as diabetes mellitus and hypertension, as well as some primary renal disorders, such as glomerulonephritis, eventually develop CKD as a result of their long-term complications.[6] The estimated prevalence of CKD worldwide is 13.4% (11.7%–15.1%), and the estimated number of patients with end-stage kidney disease (ESKD) requiring renal replacement therapy ranges from 4.902 to 7.083 million in China.[7] End-stage renal disease (ESRD) is defined as when a person’s kidneys cease to function on a permanent basis, requiring long-term dialysis or kidney transplantation to maintain life.[8]\nDespite the lack of available data for developing countries, it is estimated that 70% of patients with ESRD (stage 5 CKD) will be in developing countries by 2030, which will increase the burden on health care systems’ budgets.[6,9]\nAs the population ages, the incidence of ESRD increases every year. The mortality rate remains high despite the continuous development of medical standards.[10] During the early stages of CKD, medical doctors will try to slow or control the causes of the patient’s kidney disease. It is important to note that treatment options vary according to the cause. However, damage to the kidneys may continue to worsen even when an underlying condition, such as diabetes mellitus or high blood pressure, has been controlled. There’s no cure for CKD, but treatment can help relieve the symptoms and stop it getting worse.
The treatment will depend on the stage of patients’ CKD, such as lifestyle changes—to help patients stay as healthy as possible, medicine—to control associated problems (e.g., high blood pressure and high cholesterol).[11,12]\nIn the treatment of ESRD, a variety of renal replacement therapies are commonly used, including hemodialysis, peritoneal dialysis, and renal transplantation.[13] The treatment may result in an increase in life expectancy, but it must be maintained for an extended period of time, which has a substantial impact on the patient physically and psychologically.[14] In addition, there is a high risk of developing UTI and/or sepsis (urosepsis) in patients with CKD, hemodialysis, peritoneal dialysis, and renal transplantation (CHPR for short).[15] However, little is known about research achievements (RAs) in CHPR-related articles on UTI, as well as research features (RFs) among UN authors.", "The Journal Impact Factor (JIF) is an index annually published by The Journal Citation Reports.[16] It was proposed by Eugene Garfield in 1955 to compare investigators’ and journals’ research influence on its time.[17] Original research and review articles are the only article types that meet the definition of citable articles according to the Institute of Scientific Information.[18,19] Web of Science (Thomson Reuters Inc.) is a citation service accessible through an indexing database and search engine on the Web of Science core collection (WoSCC).[20–24] Factors associated with the JIF for UN journals were investigated.[16] The JIF might be used to compare RAs in CHPR-related articles on UTI instead of the traditional citations that appear in articles on a regular basis.\nBecause UTI most commonly occurs in kidneys and bladders, we proposed 2 hypotheses:\nThe RAs in CHPR-related articles on UTI based on JIFs are equal.\nThe RFs between UN authors based on article keywords in WoSCC are similar.", "There are several challenges that need to be overcome in order to validate the 2 hypotheses, such as appropriate bibliometric metrics to measure the RAs are required to determine; statistical methods to examine differences in RAs of CHPR are lacking in dealing with missing data in JIF; RFs defined using medical subject headings (MeSH terms)[25–27] are still not uncovered using the keyword plus in WoSCC in literature; and the visualization used to differentiate RFs between 2 journal authors not found in past research.\nFortunately, the hT-index (also known as the Tapered h-index)[28,29] takes into account all citations with descending weights when evaluating the RAs and generalizes the h-index.[30] The hT-index more closely related to the h-index than other bibliometric indices (e.g., x-/g-index[31,32]) has been verified.[33]\nTime-to-event analysis, also known as survival analysis, refers to a set of methods for analyzing the length of time until a well-defined endpoint occurs.[34] It is unique to survival data that not all subjects (e.g., CHPR articles in this study) experience the event (e.g., having JIF denoted by 1 for the event) by the end of the observation period (e.g., JIF observed in this study) so that the exact survival times (denoted by JIF) for some articles are unknown (i.e., JIF = 0 censored with nonevent in time-to-event analysis).[35,36] In addition, data are usually skewed, which limits the usefulness of analysis methods that assume a normal distribution. 
Thus, nonparametric and semiparametric methods, specifically the Kaplan−Meier estimator, log-rank test, and Cox proportional hazards model, can be applied to the ongoing series of time-to-event JIF data in CHPR articles.\nBy using social network analysis (SNA)[37–39] to replace MeSH terms[25–27] with keyword plus, the RFs between UN authors can be compared using forest plots.[25,36]", "The purpose of this study is to verify the 2 hypotheses: CHPR-related articles on UTI have equal JIFs in RAs and UN authors have similar RFs.", "[SUBTITLE] 2.1. Data source [SUBSECTION] By searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]).\nThe data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed.\nBy searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]).\nThe data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed.\n[SUBTITLE] 2.2. First goal: CHPR articles on UTI with JIFs in RAs [SUBSECTION] [SUBTITLE] 2.2.1. Computation of the ht-JIF index. [SUBSECTION] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\nThe citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\n[SUBTITLE] 2.2.2. Time-to-event analysis applied to compare RAs in CHPR. [SUBSECTION] Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\nStatistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). 
Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\n[SUBTITLE] 2.2.1. Computation of the ht-JIF index. [SUBSECTION] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\nThe citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\n[SUBTITLE] 2.2.2. Time-to-event analysis applied to compare RAs in CHPR. [SUBSECTION] Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\nStatistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\n[SUBTITLE] 2.3. Statistical description [SUBSECTION] [SUBTITLE] 2.3.1. Computation of the ht-JIF index in many articles. [SUBSECTION] The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. 
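As an illustration only (not the authors' program at reference [40]; the function name below is ours), a minimal Python sketch of the Ferrers-tableau weighting behind Equations 3 to 5, in which the j-th JIF unit of the paper ranked i (by descending JIF) receives the weight 1/(2·max(i, j) − 1):

```python
def tapered_ht_jif(jifs):
    """Tapered hT-JIF over a set of articles (Equations 3-5, sketched).

    Articles are ranked by descending JIF; the j-th JIF unit of the article
    ranked i (1-based) contributes 1 / (2 * max(i, j) - 1), and the weights
    are summed over the whole Ferrers tableau.
    """
    ranked = sorted((int(round(j)) for j in jifs), reverse=True)
    total = 0.0
    for i, jif in enumerate(ranked, start=1):
        for j in range(1, jif + 1):
            total += 1.0 / (2 * max(i, j) - 1)
    return total

# Ten articles with JIF = 10 each: the cumulative index rises monotonically
# from about 2.13 (one article) to exactly 10.00 (all ten), the pattern that
# Table 1 tabulates (intermediate values may differ slightly from the table
# because of rounding).
print([round(tapered_ht_jif([10] * (n + 1)), 2) for n in range(10)])
```
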
In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\nThe hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\n[SUBTITLE] 2.3.2. Draw radar plots with ht-JIF-indices for entities. [SUBSECTION] Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\nFour types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\n[SUBTITLE] 2.3.1. 
Computation of the ht-JIF index in many articles. [SUBSECTION] The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\nThe hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\n[SUBTITLE] 2.3.2. Draw radar plots with ht-JIF-indices for entities. [SUBSECTION] Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\nFour types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). 
To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\n[SUBTITLE] 2.4. Second goal: RFs in UN authors [SUBSECTION] [SUBTITLE] 2.4.1. Data arrangement. [SUBSECTION] The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\nThe RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\n[SUBTITLE] 2.4.2. Computation of parameters in a forest plot. [SUBSECTION] The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\nThe forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. 
The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\n[SUBTITLE] 2.4.1. Data arrangement. [SUBSECTION] The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\nThe RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\n[SUBTITLE] 2.4.2. Computation of parameters in a forest plot. [SUBSECTION] The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. 
Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\nThe forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\n[SUBTITLE] 2.5. Articles with higher JIFs in UN departments [SUBSECTION] CHPR articles on UTIs with higher JIFs in UN departments were displayed on an impact beam plot (IBP).[54,55] The top JIF articles denoted by each dot were displayed on the IBP, from the left to the right side, by study type with normalized JIF from 0 to 100 (i.e., using the MSExcel function of PercentRank (array, x, 1) × 100). The IPB dashboard was shown on Google Maps. The article is immediately linked to PubMed once the dot is clicked on the dashboard.\nCHPR articles on UTIs with higher JIFs in UN departments were displayed on an impact beam plot (IBP).[54,55] The top JIF articles denoted by each dot were displayed on the IBP, from the left to the right side, by study type with normalized JIF from 0 to 100 (i.e., using the MSExcel function of PercentRank (array, x, 1) × 100). The IPB dashboard was shown on Google Maps. The article is immediately linked to PubMed once the dot is clicked on the dashboard.\n[SUBTITLE] 2.6. Statistics and tools [SUBSECTION] A forest plot was used to compare the odds ratios. The significance level for type I error was set at 0.05.\nThe following 4 visualizations were used to present the author’s RA: radar, Sankey, time-to-event, and choropleth map. The forest plot was used to distinguish RFs by observing the proportional counts of keyword plus in WoSCC between UN authors. Google Maps was used to plot both choropleth maps and radar diagrams. The study flowchart is shown in Figure 1.\nStudy flowchart.\nA forest plot was used to compare the odds ratios. 
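As a worked illustration of the odds-ratio and heterogeneity computations that the forest plot summarizes (the keyword counts below are hypothetical, and the log-odds standard error sqrt(1/n1e + 1/n1n + 1/n2e + 1/n2n) is the standard textbook form, used here because Equations 6 to 20 are not legible in this extract):

```python
import math

def odds_ratio_ci(n1e, n1n, n2e, n2n, z=1.96):
    """Odds ratio (n1e*n2n)/(n2e*n1n) with a 95% CI built on the log-odds scale."""
    or_ = (n1e * n2n) / (n2e * n1n)
    se = math.sqrt(1 / n1e + 1 / n1n + 1 / n2e + 1 / n2n)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se), se

def cochran_q_i2(log_ors, ses):
    """Cochran's Q over k keyword comparisons and I^2 = max(0, (Q - df) / Q) * 100."""
    w = [1.0 / s ** 2 for s in ses]
    pooled = sum(wi * t for wi, t in zip(w, log_ors)) / sum(w)
    q = sum(wi * (t - pooled) ** 2 for wi, t in zip(w, log_ors))
    df = len(log_ors) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical counts per keyword: (Urology with/without keyword, Nephrology with/without).
keywords = {"risk-factors": (25, 175, 40, 160), "infection": (30, 170, 28, 172)}
results = {k: odds_ratio_ci(*v) for k, v in keywords.items()}
q, i2 = cochran_q_i2([math.log(r[0]) for r in results.values()],
                     [r[3] for r in results.values()])
for k, (or_, lo, hi, _) in results.items():
    print(f"{k}: OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
print(f"Q = {q:.2f}, I^2 = {i2:.1f}% (below 50% suggests homogeneous keyword data)")
```
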
The significance level for type I error was set at 0.05.\nThe following 4 visualizations were used to present the author’s RA: radar, Sankey, time-to-event, and choropleth map. The forest plot was used to distinguish RFs by observing the proportional counts of keyword plus in WoSCC between UN authors. Google Maps was used to plot both choropleth maps and radar diagrams. The study flowchart is shown in Figure 1.\nStudy flowchart.", "By searching the WoSCC with keywords involving CHPR-related articles on UTI (see Supplemental Digital Content 1, http://links.lww.com/MD/H561) with articles and review articles only since 2013, we obtained 1284 abstracts and their corresponding metadata (e.g., citations, country of origin, research institutes, authors placed in the first and corresponding positions, and JIFs of The Journal Citation Reports in 2022[16]).\nThe data deposited in Supplemental Digital Content 2, http://links.lww.com/MD/H562 are publicly available on the WoSCC’s website. Therefore, ethical approval was not needed.", "[SUBTITLE] 2.2.1. Computation of the ht-JIF index. [SUBSECTION] The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\nThe citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). The algorithm of hT computation is at the link.[40]\n[SUBTITLE] 2.2.2. Time-to-event analysis applied to compare RAs in CHPR. [SUBSECTION] Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.\nStatistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.", "The citations were replaced with JIFs in computing the hT-JIF index for each article using Equation 1.\n= 0, JIF = 0 (e.g., Emerging Sources Citation Index in WoSCC)\nwhere i is within 1 to JIF. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11). 
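A minimal Python sketch of this single-article weighting, together with the three-column arrangement described for Equation 2 (illustrative only; the helper names and group labels are ours, not part of the original MedCalc/Excel workflow):

```python
def ht_jif_single(jif):
    """Equation 1 (sketched): hT-JIF of one article = sum of 1/(2i - 1), i = 1..JIF; 0 when JIF = 0."""
    return sum(1.0 / (2 * i - 1) for i in range(1, int(round(jif)) + 1))

print(round(ht_jif_single(6), 2))   # 1.88, matching the worked example above

def time_to_event_row(jif, study_type):
    """Equation 2 layout: integer duration, event flag, and group for the time-to-event analysis."""
    duration = int(round(jif + 1)) if jif > 0 else 0   # Excel ROUND(JIF + 1, 0) when JIF > 0
    event = 0 if jif == 0 else 1                       # Excel IF(JIF = 0, 0, 1); 0 = censored
    return duration, event, study_type

print([time_to_event_row(j, g) for j, g in [(3.7, "CKD"), (0.0, "renal transplantation")]])
# [(5, 1, 'CKD'), (0, 0, 'renal transplantation')]
```
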
The algorithm of hT computation is at the link.[40]", "Statistical analyses were performed using MedCalc for Windows, version 19.4 (MedCalc Software, Ostend, Belgium). Data were arranged in 3 sequential columns, as shown in Equation 2.\nBy using Microsoft Excel, JIFs are integers using the round (JIF + 1, 0) function if JIF > 0 and IF (JIF = 0, 0, 1) in the columns of JIR and event, respectively, where the censored event is indicated by 0 in column 2.\nTime-to-event analysis was performed (see Supplemental Digital Content 3, http://links.lww.com/MD/H563). Hazard ratios with 95% confidence intervals (CIs) were used to examine the difference between groups if the log-rank test was significant when the Type I error was set at α = 0.05.", "[SUBTITLE] 2.3.1. Computation of the ht-JIF index in many articles. [SUBSECTION] The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\nThe hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.\n[SUBTITLE] 2.3.2. Draw radar plots with ht-JIF-indices for entities. [SUBSECTION] Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. 
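Under the assumption of per-article JIFs grouped by country (the records below are hypothetical), a short sketch of how entity-level hT-JIF scores could be aggregated to feed such a country/region comparison:

```python
from collections import defaultdict

def tapered_ht_jif(jifs):
    """Same tapered weighting as sketched earlier: 1/(2*max(rank, j) - 1) per JIF unit, summed."""
    total = 0.0
    for i, jif in enumerate(sorted((int(round(j)) for j in jifs), reverse=True), start=1):
        total += sum(1.0 / (2 * max(i, j) - 1) for j in range(1, jif + 1))
    return total

# Hypothetical (country, JIF) pairs taken from article metadata.
records = [("USA", 8.0), ("USA", 3.2), ("China", 5.1), ("Sweden", 10.6), ("China", 2.0)]
by_country = defaultdict(list)
for country, jif in records:
    by_country[country].append(jif)

for country, score in sorted(((c, tapered_ht_jif(js)) for c, js in by_country.items()),
                             key=lambda x: x[1], reverse=True):
    print(f"{country}: hT-JIF = {score:.2f}")   # ranking used for the choropleth/radar display
```
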
In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.\nFour types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.", "The hT-JIF index is derived from Equations 3 to 5 if the number of articles is greater than 1.[33]\nThe starting weight in an article is determined by Equation 3, where j = 1. For example, an article with JIF = 6 has hT-JIF = 1.88 (=1 + 1/3 + 1/5 + 1/7 + 1/9 + 1/11), as computed by Equation 1.\nThe weights of ten papers, for instance, with ten JIFs each, can be computed in Table 1 following Equations 4 and 5. In papers 1 to 10, the hT indices monotonically increase [2.13, 3.59, 4.79, 5.81, 6.71, 7.51, 8.25, 8.82, 9.51, 10.00], suggesting that the h-core articles are identical to those in the hT core and the contribution of the hT core is not changed in the hT core Durfee square.[28,29]\nWeights allocated to the 10 articles with 10 JIFs each (hT = 10 in this case).\n+ means the summation on rows.\nJIF = journal impact factor.", "Four types of article-related entities were included; namely, counties, research institutes, departments, and authors, to compare their RAs using the hT-JIF index and draw them on radar plots.\nThe Y-index[41,42] was proposed to evaluate the RAs based on the number of publications in the positions of corresponding and first (co-first) authors (denoted by J = FP and RP). Unfortunately, previous studies have not illustrated the way in which the radar diagram can be drawn based on the Y-index (=as the radius in the first quadrant).[41,42] The RAs should not be measured solely by publications (e.g., the Y-index). To select the entities that contributed most to the CHPR-related articles on UTI in this study, the hT-JIF index can be used by taking into account both publications and JIFs denoted by bubble size on the radar plot.[43]\nA choropleth map[44] was used to compare the RA across countries/regions. In particular, the RAs in the US states and provinces/metropolitan cities in China were compared with those in other countries/regions based on equal research populations.", "[SUBTITLE] 2.4.1. Data arrangement. 
[SUBSECTION] The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\nThe RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]\n[SUBTITLE] 2.4.2. Computation of parameters in a forest plot. [SUBSECTION] The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]\nThe forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. 
Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]", "The RFs in UN departments were selected and compared by using the keyword plus in WoSCC through the following steps:\nStep 1: Cluster analysis was performed using SNA[37–39] and Pajek software[45] for the keyword plus in UN departments and drawing them on Sankey diagrams.[46,47]\nStep 2: A comparison of differences in RFs was made on the forest plot.[27,37,47]", "The forest plot presents the degree to which data from multiple studies (e.g., keyword plus in this study) observing the same effect overlap with one another.[48] The results that fail to overlap (or fit) well are given the term heterogeneity of the data, which are deemed less conclusive. Otherwise, the data are said to be homogeneous and more conclusive. The heterogeneity is indicated by the I2 statistics[49]; see Equations 6 to 11 below.\nwhere k is the number of keywords plus. The P value yielded by the function in MS Excel [i.e., Chidist (T2, df)] is identical to the approach using analysis of variance.[23,46] The df is the degree of freedom (i.e., k − 1), n denotes the sample size (i.e., the even counts and the total observations) in the 2 groups,[50] and SEi=Vari derived from Equation 10. The Vari is the within-study variance in study i.\nThe computation of odds ratios and their CIs are addressed in Equations 16 to 20, where the even and noneven numbers for 2 groups (i.e., UN departments in this study) were set as {n1e, n1n} and {n2e, n2n}. Accordingly, the odds ratio is computed by the formula (n1e × n2n)/(n2e × n1n)[47] in Equation 16, and the 95% CI equal to Odds +/- 1.96 × SEi, see Equation 20.\nIf all odds ratios in a series of studies were compared, a heterogeneity of less than 50% was deemed low based on I2 in Equation 6 and indicates a greater degree of similarity between study data (e.g., keywords) than an I2 value above 50%, which indicates more dissimilarity.[27,51–53]", "CHPR articles on UTIs with higher JIFs in UN departments were displayed on an impact beam plot (IBP).[54,55] The top JIF articles denoted by each dot were displayed on the IBP, from the left to the right side, by study type with normalized JIF from 0 to 100 (i.e., using the MSExcel function of PercentRank (array, x, 1) × 100). The IPB dashboard was shown on Google Maps. The article is immediately linked to PubMed once the dot is clicked on the dashboard.", "A forest plot was used to compare the odds ratios. The significance level for type I error was set at 0.05.\nThe following 4 visualizations were used to present the author’s RA: radar, Sankey, time-to-event, and choropleth map. The forest plot was used to distinguish RFs by observing the proportional counts of keyword plus in WoSCC between UN authors. Google Maps was used to plot both choropleth maps and radar diagrams. The study flowchart is shown in Figure 1.\nStudy flowchart.", "[SUBTITLE] 3.1. First goal: CHPR articles on UTI with JIFs in RAs [SUBSECTION] We obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the WoSCC since 2013 (Table 2). 
There were 1030 corresponding and first (co-first) authors with hT-JIF-indices (i.e., JIF was computed using hT-index rather than citations as usual).\nDistribution of publications for the 4 study types over the years.\nAs shown in Figure 2, CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030). As a result of its highest hazard ratios and 95% confidence intervals, the study type of renal transplantation has the lowest JIF among CHPRs. There was no support for the first hypothesis that CHPR-related articles relating to UTI had equal JIFs in RA.\nTime-to-event analysis of CHPR articles on UTI with JIFs in Ras. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, JIF = journal impact factor, RA = research achievement, UTI = urinary tract infection.\nWe obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the WoSCC since 2013 (Table 2). There were 1030 corresponding and first (co-first) authors with hT-JIF-indices (i.e., JIF was computed using hT-index rather than citations as usual).\nDistribution of publications for the 4 study types over the years.\nAs shown in Figure 2, CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030). As a result of its highest hazard ratios and 95% confidence intervals, the study type of renal transplantation has the lowest JIF among CHPRs. There was no support for the first hypothesis that CHPR-related articles relating to UTI had equal JIFs in RA.\nTime-to-event analysis of CHPR articles on UTI with JIFs in Ras. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, JIF = journal impact factor, RA = research achievement, UTI = urinary tract infection.\n[SUBTITLE] 3.2. Statistical description [SUBSECTION] In terms of countries, institutes, departments, and authors, the United States (38.30), Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF index; see Figures 3 to 6. It is worth noting that New York ranks top in hT-JIF indices, followed by Australia and the UK, when the US states and provinces/metropolitan cities in China were involved in comparison (Fig. 3).\nGeographical distribution of hT-JIF indices in countries/regions. JIF = journal impact factor.\nIn terms of countries, institutes, departments, and authors, the United States (38.30), Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF index; see Figures 3 to 6. It is worth noting that New York ranks top in hT-JIF indices, followed by Australia and the UK, when the US states and provinces/metropolitan cities in China were involved in comparison (Fig. 3).\nGeographical distribution of hT-JIF indices in countries/regions. JIF = journal impact factor.\n[SUBTITLE] 3.3. Second goal: RFs in UN authors [SUBSECTION] In Figure 7, 3 major clusters (renal transplantation, CKD, and nephrology) are separated by color, and 30 major keywords are displayed. The wider lines between keywords indicate the frequency of relationships between them. A larger block indicates that there are more keywords observed in SNA. As illustrated in Figure 8, UN departments had different RFs (Q = 53.24, df = 29, P = .004). 
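The I2 figure quoted in the next sentence can be reproduced from this Q statistic and its degrees of freedom. A minimal sketch, assuming the usual Higgins definition I2 = max(0, (Q − df)/Q) × 100 rather than the paper's own Equations 6 to 11:

# Reproduce the reported keyword-heterogeneity figure from Q and df.
# Assumes the standard Higgins I^2 definition; the source's Equations 6-11
# may differ in detail.
def i_squared(q, df):
    return max(0.0, (q - df) / q) * 100.0

q_stat, df = 53.24, 29                   # values reported for the 30 keywords (df = k - 1)
print(round(i_squared(q_stat, df), 2))   # -> 45.53, i.e., below the 50% threshold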
The I2 (=45.53% <50%) indicates a greater degree of similarity between keywords.[27,51–53] However, there is no evidence supporting the second hypothesis that UN authors have similar RFs.\nIn Figure 7, 3 major clusters (renal transplantation, CKD, and nephrology) are separated by color, and 30 major keywords are displayed. The wider lines between keywords indicate the frequency of relationships between them. A larger block indicates that there are more keywords observed in SNA. As illustrated in Figure 8, UN departments had different RFs (Q = 53.24, df = 29, P = .004). The I2 (=45.53% <50%) indicates a greater degree of similarity between keywords.[27,51–53] However, there is no evidence supporting the second hypothesis that UN authors have similar RFs.\n[SUBTITLE] 3.4. Articles with higher JIFs in UN departments [SUBSECTION] Figure 9 shows CHPR articles on UTIs with higher JIFs in UN departments. The article published in 2013 with the highest JIF (16.43) was written by Heyns CF from South Africa.[56] The 2 articles[57,58] with the highest JIF shown in Figure 2 (bottom) were not written by UN authors. Readers are invited to scan the QR code in Figure 9 and click on the dot of interest to read the article on PubMed.\nComparison of the hT-JIF index among research institutes on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among departments on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among authors on a radar plot. JIF = journal impact factor.\nThree clusters were separated using SNA for keyword plus. SNA = social network analysis.\nComparison of 30 major keywords in proportional counts in UN departments. UN = Urology and Nephrology.\nCHPR articles on UTI with higher JIFs in UN departments on the IBP. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, IBP = impact beam plot, JIF = journal impact factor, UN = Urology and Nephrology, UTI = urinary tract infection.\nFigure 9 shows CHPR articles on UTIs with higher JIFs in UN departments. The article published in 2013 with the highest JIF (16.43) was written by Heyns CF from South Africa.[56] The 2 articles[57,58] with the highest JIF shown in Figure 2 (bottom) were not written by UN authors. Readers are invited to scan the QR code in Figure 9 and click on the dot of interest to read the article on PubMed.\nComparison of the hT-JIF index among research institutes on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among departments on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among authors on a radar plot. JIF = journal impact factor.\nThree clusters were separated using SNA for keyword plus. SNA = social network analysis.\nComparison of 30 major keywords in proportional counts in UN departments. UN = Urology and Nephrology.\nCHPR articles on UTI with higher JIFs in UN departments on the IBP. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, IBP = impact beam plot, JIF = journal impact factor, UN = Urology and Nephrology, UTI = urinary tract infection.\n[SUBTITLE] 3.5. Online dashboards shown on google maps [SUBSECTION] All the QR codes in Figures are linked to the dashboards. Readers are suggested to examine the displayed dashboards on Google Maps.\nAll the QR codes in Figures are linked to the dashboards. 
Readers are suggested to examine the displayed dashboards on Google Maps.", "We obtained 1284 abstracts and their associated metadata (e.g., citations, authors, research institutes, departments, countries of origin) from the WoSCC since 2013 (Table 2). There were 1030 corresponding and first (co-first) authors with hT-JIF-indices (i.e., JIF was computed using hT-index rather than citations as usual).\nDistribution of publications for the 4 study types over the years.\nAs shown in Figure 2, CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030). As a result of its highest hazard ratios and 95% confidence intervals, the study type of renal transplantation has the lowest JIF among CHPRs. There was no support for the first hypothesis that CHPR-related articles relating to UTI had equal JIFs in RA.\nTime-to-event analysis of CHPR articles on UTI with JIFs in Ras. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, JIF = journal impact factor, RA = research achievement, UTI = urinary tract infection.", "In terms of countries, institutes, departments, and authors, the United States (38.30), Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF index; see Figures 3 to 6. It is worth noting that New York ranks top in hT-JIF indices, followed by Australia and the UK, when the US states and provinces/metropolitan cities in China were involved in comparison (Fig. 3).\nGeographical distribution of hT-JIF indices in countries/regions. JIF = journal impact factor.", "In Figure 7, 3 major clusters (renal transplantation, CKD, and nephrology) are separated by color, and 30 major keywords are displayed. The wider lines between keywords indicate the frequency of relationships between them. A larger block indicates that there are more keywords observed in SNA. As illustrated in Figure 8, UN departments had different RFs (Q = 53.24, df = 29, P = .004). The I2 (=45.53% <50%) indicates a greater degree of similarity between keywords.[27,51–53] However, there is no evidence supporting the second hypothesis that UN authors have similar RFs.", "Figure 9 shows CHPR articles on UTIs with higher JIFs in UN departments. The article published in 2013 with the highest JIF (16.43) was written by Heyns CF from South Africa.[56] The 2 articles[57,58] with the highest JIF shown in Figure 2 (bottom) were not written by UN authors. Readers are invited to scan the QR code in Figure 9 and click on the dot of interest to read the article on PubMed.\nComparison of the hT-JIF index among research institutes on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among departments on a radar plot. JIF = journal impact factor.\nComparison of the hT-JIF index among authors on a radar plot. JIF = journal impact factor.\nThree clusters were separated using SNA for keyword plus. SNA = social network analysis.\nComparison of 30 major keywords in proportional counts in UN departments. UN = Urology and Nephrology.\nCHPR articles on UTI with higher JIFs in UN departments on the IBP. CHPR = Chronic Kidney Disease, Hemodialysis, Peritoneal Dialysis, and Renal Transplantation, IBP = impact beam plot, JIF = journal impact factor, UN = Urology and Nephrology, UTI = urinary tract infection.", "All the QR codes in Figures are linked to the dashboards. 
Readers are suggested to examine the displayed dashboards on Google Maps.", "We observed that CHPR-related articles had unequal JIFs (χ2 = 13.08, P = .004, df = 3, n = 1030) and UN departments had different RFs (Q = 53.24, df = 29, P = .004). In terms of countries, institutes, departments, and authors, the United States (hT-JIF = 38.30), Mayo Clinic (12.9), Nephrology (19.14), and Diana Karpman (10.34) from Sweden had the highest hT-JIF index.\n[SUBTITLE] 4.1. Additional information [SUBSECTION] A growing number of patients worldwide suffer from CKD, which is associated with an increased risk of progression to ESKD.[15] The risk of developing infections (e.g., UTI) and/or sepsis (urosepsis) is high among patients with CKD stage 2 to 5, patients receiving chronic dialysis treatment (hemodialysis or peritoneal dialysis), and patients with kidney allograft dysfunction. However, little information is available regarding how many of these patients suffer from urinary tract infections.\nThe prevalence of CKD worldwide is estimated at 13.4% (11.7%–15.1%), and the estimated number of patients with ESKD needing renal replacement therapy in China is between 4.902 and 7.083 million.[7] The incidence of infections associated with CKD is less than one per 5000 people per year,[59–61] but frequent UTI episodes may increase the risk of developing ESKD.[61] It is higher in infants and young children than in adults but still moderate at approximately 1%.[62] A number of clinical risk factors and comorbidities other than UTI are suspected of contributing to the development of ESKD. Predisposing factors for UTI development in CKD patients include sex, age, genetic disposition, diabetes mellitus, obstructive nephropathy, arteriolosclerosis (microvascular calcification, ischemic nephropathy), nephrolithiasis, cast nephropathy, immunodeficiency syndromes, and immunosuppressive therapy.[59,63] No bibliometric analysis has been undertaken to determine whether CHPR-related articles on UTI have unequal JIFs between RAs, and UN authors have slightly different RFs, as we did in this study.\nTo measure research output, most institutions surveyed still rely on simple, easily quantifiable metrics, such as the JIF or publication count.[64–66] In Taiwan’s medical schools, the JIF has an unrivaled role in determining the faculty’s achievement evaluation and promotion. A major component of the evaluation is the CJA (category of article, journal quality, and author order) assessment.[67] Hence, we measured the RAs of authors with CHPR articles on UTI using the hT-JIF index.\nThirty-six percent of the 7618 papers on peritoneal dialysis published in 887 journals were written by American authors (6991 articles and 627 reviews),[68] and the United States was the most productive country (n = 51) among the top 100 influential papers on peritoneal dialysis.[69] In HD/PD articles, the United States had the highest hT index (=37.15) compared to many other countries. These results are consistent with our findings regarding the dominance of the United States in articles regarding CHPR-related UTI since 2013 (hT-JIF = 38.30).[33]\nA growing number of patients worldwide suffer from CKD, which is associated with an increased risk of progression to ESKD.[15] The risk of developing infections (e.g., UTI) and/or sepsis (urosepsis) is high among patients with CKD stage 2 to 5, patients receiving chronic dialysis treatment (hemodialysis or peritoneal dialysis), and patients with kidney allograft dysfunction. 
However, little information is available regarding how many of these patients suffer from urinary tract infections.\nThe prevalence of CKD worldwide is estimated at 13.4% (11.7%–15.1%), and the estimated number of patients with ESKD needing renal replacement therapy in China is between 4.902 and 7.083 million.[7] The incidence of infections associated with CKD is less than one per 5000 people per year,[59–61] but frequent UTI episodes may increase the risk of developing ESKD.[61] It is higher in infants and young children than in adults but still moderate at approximately 1%.[62] A number of clinical risk factors and comorbidities other than UTI are suspected of contributing to the development of ESKD. Predisposing factors for UTI development in CKD patients include sex, age, genetic disposition, diabetes mellitus, obstructive nephropathy, arteriolosclerosis (microvascular calcification, ischemic nephropathy), nephrolithiasis, cast nephropathy, immunodeficiency syndromes, and immunosuppressive therapy.[59,63] No bibliometric analysis has been undertaken to determine whether CHPR-related articles on UTI have unequal JIFs between RAs, and UN authors have slightly different RFs, as we did in this study.\nTo measure research output, most institutions surveyed still rely on simple, easily quantifiable metrics, such as the JIF or publication count.[64–66] In Taiwan’s medical schools, the JIF has an unrivaled role in determining the faculty’s achievement evaluation and promotion. A major component of the evaluation is the CJA (category of article, journal quality, and author order) assessment.[67] Hence, we measured the RAs of authors with CHPR articles on UTI using the hT-JIF index.\nThirty-six percent of the 7618 papers on peritoneal dialysis published in 887 journals were written by American authors (6991 articles and 627 reviews),[68] and the United States was the most productive country (n = 51) among the top 100 influential papers on peritoneal dialysis.[69] In HD/PD articles, the United States had the highest hT index (=37.15) compared to many other countries. These results are consistent with our findings regarding the dominance of the United States in articles regarding CHPR-related UTI since 2013 (hT-JIF = 38.30).[33]\n[SUBTITLE] 4.2. High JIF articles regarding CHPR-related to UTI [SUBSECTION] An article that discusses CHPR-related UTIs with PMID = 24166342, entitled Urological aspects of HIV and AIDS, was published in 2013 in Nat. Rev. Urol. and written by Urology authors in University Stellenbosch in South Africa, has a few citations in WoSCC (=5).\nAn article with PMID = 24166342,[57] entitled History of Childhood Kidney Disease and Risk of Adult End-Stage Renal Disease, written by Ronit Calderon-Margalit (Israel) and published in N Engl J Med. in 2018, has 75 citations in WoSCC.\nManikkam Suthanthiran (U.S.) also published a study, Urinary-cell mRNA profile and acute cellular rejection in kidney allografts, in N Engl J Med. in 2013, which has 225 citations in WoSCC.\nAn article that discusses CHPR-related UTIs with PMID = 24166342, entitled Urological aspects of HIV and AIDS, was published in 2013 in Nat. Rev. Urol. and written by Urology authors in University Stellenbosch in South Africa, has a few citations in WoSCC (=5).\nAn article with PMID = 24166342,[57] entitled History of Childhood Kidney Disease and Risk of Adult End-Stage Renal Disease, written by Ronit Calderon-Margalit (Israel) and published in N Engl J Med. 
in 2018, has 75 citations in WoSCC.\nManikkam Suthanthiran (U.S.) also published a study, Urinary-cell mRNA profile and acute cellular rejection in kidney allografts, in N Engl J Med. in 2013, which has 225 citations in WoSCC.\n[SUBTITLE] 4.3. Implications and changes [SUBSECTION] There are several distinctive features of the study. First, the hT-JIF index with decimal places can be used to complement the hT-/h-index[28–30] to enhance the discrimination power for identifying RAs and ranking within a group.[70]\nSecond, time-to-event analysis[34] is unique because not all subjects (e.g., CHPR articles in this study) have experienced the event (e.g., having a JIF of 1 for the event) by the end of the observed JIF, so the exact JIF for some articles remains unknown during analysis.[35,36] Additionally, data are usually skewed, limiting the usefulness of methods that assume a normal distribution. Thus, the Kaplan−Meier estimator and log-rank test can be adapted to CHPR articles that contain time-to-event JIF data.\nThird, IBPs[54,55] allow authors to examine articles in a completely new manner, particularly with links to PubMed. In addition, Supplemental Digital File 2, http://links.lww.com/MD/H562 describes how to draw the IBP on Google Maps.\nAdditionally, the study provides 3 visual representations, including a choropleth map based on the hT-JIF index, a forest plot to identify the odds ratio in pair comparisons, and a Sankey diagram showing differences in RFs among the 2 UD authors.\nIt is more complex to calculate the hT index in combination with the JIF than the h index, but this issue can be resolved by a dedicated software program. The hT-index computation is demonstrated on the link,[40] which provides readers with the programming codes for understanding how the hT-index is calculated within a second. Thus, the hT-JIF index can be supplemented with a coordinate in Figures 4 to 6, such as p (FP, RP) with another bubble size. A dedicated software program can be used to overcome the potential problem of computation time by identifying the author RAs.\nThere are several distinctive features of the study. First, the hT-JIF index with decimal places can be used to complement the hT-/h-index[28–30] to enhance the discrimination power for identifying RAs and ranking within a group.[70]\nSecond, time-to-event analysis[34] is unique because not all subjects (e.g., CHPR articles in this study) have experienced the event (e.g., having a JIF of 1 for the event) by the end of the observed JIF, so the exact JIF for some articles remains unknown during analysis.[35,36] Additionally, data are usually skewed, limiting the usefulness of methods that assume a normal distribution. Thus, the Kaplan−Meier estimator and log-rank test can be adapted to CHPR articles that contain time-to-event JIF data.\nThird, IBPs[54,55] allow authors to examine articles in a completely new manner, particularly with links to PubMed. In addition, Supplemental Digital File 2, http://links.lww.com/MD/H562 describes how to draw the IBP on Google Maps.\nAdditionally, the study provides 3 visual representations, including a choropleth map based on the hT-JIF index, a forest plot to identify the odds ratio in pair comparisons, and a Sankey diagram showing differences in RFs among the 2 UD authors.\nIt is more complex to calculate the hT index in combination with the JIF than the h index, but this issue can be resolved by a dedicated software program. 
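Before pointing to that program, a minimal sketch of the tapered-weight arithmetic behind the worked examples given earlier in the Methods (a single article with JIF = 6 yielding hT-JIF = 1.88, and ten articles with ten JIFs each yielding hT = 10). It assumes the standard tapered weighting of 1/(2·max(i, j) − 1) over the Ferrers tableau and integer-valued scores; the paper's Equations 3 to 5 govern the general case:

def tapered_h(scores):
    # scores: one value per article (citations, or JIFs for the hT-JIF variant).
    # Articles are ranked in decreasing order; cell (i, j) of the Ferrers
    # tableau contributes 1 / (2 * max(i, j) - 1).
    ranked = sorted((int(s) for s in scores), reverse=True)
    total = 0.0
    for i, c in enumerate(ranked, start=1):
        if c <= i:
            total += c / (2 * i - 1)
        else:
            total += i / (2 * i - 1)
            total += sum(1.0 / (2 * j - 1) for j in range(i + 1, c + 1))
    return total

print(round(tapered_h([6]), 2))        # 1.88, the single-article example
print(round(tapered_h([10] * 10), 2))  # 10.0, the Table 1 example (hT = 10)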
The hT-index computation is demonstrated on the link,[40] which provides readers with the programming codes for understanding how the hT-index is calculated within a second. Thus, the hT-JIF index can be supplemented with a coordinate in Figures 4 to 6, such as p (FP, RP) with another bubble size. A dedicated software program can be used to overcome the potential problem of computation time by identifying the author RAs.\n[SUBTITLE] 4.4. Limitations and suggestions [SUBSECTION] There are a number of issues that need to be addressed in detail in further research. As a first concern, only articles pertaining to CHPR-related UTI are included. It is recommended that future studies include a wider range of articles related to UTIs than their RAs and RFs.\nThe second point is that although the Y-index[41,42] and hT-index[28,29,32] have been considered to be fair measures of RA contributions, it is assumed that the co-first authors contribute equally to the articles. If authorship does not follow the rule as designed, the results regarding the authors who contributed the most to articles regarding CHPR-related UTI will be biased.\nThird, it takes some time to calculate the hT index using the sum of weights in the Ferrers tableau (that is, all papers cited or JIF received in the list). The advancement in hardware has made this task trivial, comparable to computing other bibliometric metrics (e.g., h-/x-/g-indices[30–32]) using a dedicated software program, as shown in reference.[40]\nFourth, an hT-JIF index was proposed in this study instead of citations as usual; however, the RA is determined by many other factors (e.g., multiply CJA[67]) that need to be considered when calculating the hT index (e.g., using citations to compute the hT index[67]).\nFifth, according to Figure 3, only countries/regions with higher hT-JIF indices are compared. Readers may also be interested in the list of countries and regions with the Y-index[41,42] shown on the radar plot. Using the radar plot to display this type of influential country/region should be involved in a future study (see how to draw the radar plot in Supplemental Digital File 3, http://links.lww.com/MD/H563).\nFinally, even though the hT-JIF index is considered useful and applicable, it should be used with caution when comparing the differences between groups since it does not always follow a normal distribution. It was recommended that readers use the bootstrapping method[71–73] or the time-to-event analysis when comparing RAs between groups, particularly with 95% confidence intervals.\nThere are a number of issues that need to be addressed in detail in further research. As a first concern, only articles pertaining to CHPR-related UTI are included. It is recommended that future studies include a wider range of articles related to UTIs than their RAs and RFs.\nThe second point is that although the Y-index[41,42] and hT-index[28,29,32] have been considered to be fair measures of RA contributions, it is assumed that the co-first authors contribute equally to the articles. If authorship does not follow the rule as designed, the results regarding the authors who contributed the most to articles regarding CHPR-related UTI will be biased.\nThird, it takes some time to calculate the hT index using the sum of weights in the Ferrers tableau (that is, all papers cited or JIF received in the list). 
The advancement in hardware has made this task trivial, comparable to computing other bibliometric metrics (e.g., h-/x-/g-indices[30–32]) using a dedicated software program, as shown in reference.[40]\nFourth, an hT-JIF index was proposed in this study instead of citations as usual; however, the RA is determined by many other factors (e.g., multiply CJA[67]) that need to be considered when calculating the hT index (e.g., using citations to compute the hT index[67]).\nFifth, according to Figure 3, only countries/regions with higher hT-JIF indices are compared. Readers may also be interested in the list of countries and regions with the Y-index[41,42] shown on the radar plot. Using the radar plot to display this type of influential country/region should be involved in a future study (see how to draw the radar plot in Supplemental Digital File 3, http://links.lww.com/MD/H563).\nFinally, even though the hT-JIF index is considered useful and applicable, it should be used with caution when comparing the differences between groups since it does not always follow a normal distribution. It was recommended that readers use the bootstrapping method[71–73] or the time-to-event analysis when comparing RAs between groups, particularly with 95% confidence intervals.", "A growing number of patients worldwide suffer from CKD, which is associated with an increased risk of progression to ESKD.[15] The risk of developing infections (e.g., UTI) and/or sepsis (urosepsis) is high among patients with CKD stage 2 to 5, patients receiving chronic dialysis treatment (hemodialysis or peritoneal dialysis), and patients with kidney allograft dysfunction. However, little information is available regarding how many of these patients suffer from urinary tract infections.\nThe prevalence of CKD worldwide is estimated at 13.4% (11.7%–15.1%), and the estimated number of patients with ESKD needing renal replacement therapy in China is between 4.902 and 7.083 million.[7] The incidence of infections associated with CKD is less than one per 5000 people per year,[59–61] but frequent UTI episodes may increase the risk of developing ESKD.[61] It is higher in infants and young children than in adults but still moderate at approximately 1%.[62] A number of clinical risk factors and comorbidities other than UTI are suspected of contributing to the development of ESKD. Predisposing factors for UTI development in CKD patients include sex, age, genetic disposition, diabetes mellitus, obstructive nephropathy, arteriolosclerosis (microvascular calcification, ischemic nephropathy), nephrolithiasis, cast nephropathy, immunodeficiency syndromes, and immunosuppressive therapy.[59,63] No bibliometric analysis has been undertaken to determine whether CHPR-related articles on UTI have unequal JIFs between RAs, and UN authors have slightly different RFs, as we did in this study.\nTo measure research output, most institutions surveyed still rely on simple, easily quantifiable metrics, such as the JIF or publication count.[64–66] In Taiwan’s medical schools, the JIF has an unrivaled role in determining the faculty’s achievement evaluation and promotion. 
A major component of the evaluation is the CJA (category of article, journal quality, and author order) assessment.[67] Hence, we measured the RAs of authors with CHPR articles on UTI using the hT-JIF index.\nThirty-six percent of the 7618 papers on peritoneal dialysis published in 887 journals were written by American authors (6991 articles and 627 reviews),[68] and the United States was the most productive country (n = 51) among the top 100 influential papers on peritoneal dialysis.[69] In HD/PD articles, the United States had the highest hT index (=37.15) compared to many other countries. These results are consistent with our findings regarding the dominance of the United States in articles regarding CHPR-related UTI since 2013 (hT-JIF = 38.30).[33]", "An article that discusses CHPR-related UTIs with PMID = 24166342, entitled Urological aspects of HIV and AIDS, was published in 2013 in Nat. Rev. Urol. and written by Urology authors in University Stellenbosch in South Africa, has a few citations in WoSCC (=5).\nAn article with PMID = 24166342,[57] entitled History of Childhood Kidney Disease and Risk of Adult End-Stage Renal Disease, written by Ronit Calderon-Margalit (Israel) and published in N Engl J Med. in 2018, has 75 citations in WoSCC.\nManikkam Suthanthiran (U.S.) also published a study, Urinary-cell mRNA profile and acute cellular rejection in kidney allografts, in N Engl J Med. in 2013, which has 225 citations in WoSCC.", "There are several distinctive features of the study. First, the hT-JIF index with decimal places can be used to complement the hT-/h-index[28–30] to enhance the discrimination power for identifying RAs and ranking within a group.[70]\nSecond, time-to-event analysis[34] is unique because not all subjects (e.g., CHPR articles in this study) have experienced the event (e.g., having a JIF of 1 for the event) by the end of the observed JIF, so the exact JIF for some articles remains unknown during analysis.[35,36] Additionally, data are usually skewed, limiting the usefulness of methods that assume a normal distribution. Thus, the Kaplan−Meier estimator and log-rank test can be adapted to CHPR articles that contain time-to-event JIF data.\nThird, IBPs[54,55] allow authors to examine articles in a completely new manner, particularly with links to PubMed. In addition, Supplemental Digital File 2, http://links.lww.com/MD/H562 describes how to draw the IBP on Google Maps.\nAdditionally, the study provides 3 visual representations, including a choropleth map based on the hT-JIF index, a forest plot to identify the odds ratio in pair comparisons, and a Sankey diagram showing differences in RFs among the 2 UD authors.\nIt is more complex to calculate the hT index in combination with the JIF than the h index, but this issue can be resolved by a dedicated software program. The hT-index computation is demonstrated on the link,[40] which provides readers with the programming codes for understanding how the hT-index is calculated within a second. Thus, the hT-JIF index can be supplemented with a coordinate in Figures 4 to 6, such as p (FP, RP) with another bubble size. A dedicated software program can be used to overcome the potential problem of computation time by identifying the author RAs.", "There are a number of issues that need to be addressed in detail in further research. As a first concern, only articles pertaining to CHPR-related UTI are included. 
It is recommended that future studies include a wider range of articles related to UTIs than their RAs and RFs.\nThe second point is that although the Y-index[41,42] and hT-index[28,29,32] have been considered to be fair measures of RA contributions, it is assumed that the co-first authors contribute equally to the articles. If authorship does not follow the rule as designed, the results regarding the authors who contributed the most to articles regarding CHPR-related UTI will be biased.\nThird, it takes some time to calculate the hT index using the sum of weights in the Ferrers tableau (that is, all papers cited or JIF received in the list). The advancement in hardware has made this task trivial, comparable to computing other bibliometric metrics (e.g., h-/x-/g-indices[30–32]) using a dedicated software program, as shown in reference.[40]\nFourth, an hT-JIF index was proposed in this study instead of citations as usual; however, the RA is determined by many other factors (e.g., multiply CJA[67]) that need to be considered when calculating the hT index (e.g., using citations to compute the hT index[67]).\nFifth, according to Figure 3, only countries/regions with higher hT-JIF indices are compared. Readers may also be interested in the list of countries and regions with the Y-index[41,42] shown on the radar plot. Using the radar plot to display this type of influential country/region should be involved in a future study (see how to draw the radar plot in Supplemental Digital File 3, http://links.lww.com/MD/H563).\nFinally, even though the hT-JIF index is considered useful and applicable, it should be used with caution when comparing the differences between groups since it does not always follow a normal distribution. It was recommended that readers use the bootstrapping method[71–73] or the time-to-event analysis when comparing RAs between groups, particularly with 95% confidence intervals.", "In this study, we used the radar plot with the hT-JIF-index based on the number of publications in co-first authors to demonstrate that the hT-JIF-index generalized the h-index on citations for assessing author contributions in quality and quantity from all listed articles. It is important to take into account the hT-JIF-index and the IBP in future relevant bibliometric analyses of academic disciplines or specific research topics, rather than focusing only on articles related to CHPR-related UTI, as we did in this study.", "We thank Enago (www.enago.tw) for the English language review of this manuscript.", "KK, HY and WCK provided the concept and designed this study. CY and WCK interpreted the data. WC monitored the process and the manuscript. TW and HY drafted the manuscript. All authors read the manuscript and approved the final manuscript.\nConceptualization: Keng-Kok Tan, Chen-Yu Wang, Hsien-Yi Wang.\nData curation: Wei-Chih Kan.\nInvestigation: Willy Chou.\nMethodology: Tsair-Wei Chien.", "" ]
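As a companion to the forest-plot description in Section 2.4.2 above, the odds ratio and its confidence interval for a single keyword can be sketched as follows. The 2 × 2 counts are hypothetical placeholders, and the interval is formed on the log-odds scale (the usual convention); the paper's Equations 16 to 20 state its own formulation:

import math

def odds_ratio_ci(n1e, n1n, n2e, n2n, z=1.96):
    # n1e/n1n: event and non-event counts in group 1 (e.g., Urology);
    # n2e/n2n: the same counts in group 2 (e.g., Nephrology).
    or_ = (n1e * n2n) / (n2e * n1n)
    se_log = math.sqrt(1 / n1e + 1 / n1n + 1 / n2e + 1 / n2n)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts for one keyword in the two departments.
print(odds_ratio_ci(12, 88, 7, 93))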
[ null, "intro", "subjects", null, null, null, "methods", null, null, null, null, null, null, null, null, null, null, null, null, "results", null, null, null, null, null, "discussion", null, null, null, null, null, null, null, "supplementary-material" ]
[ "bibliometric analysis", "hemodialysis", "hT-index", "impact beam plot", "journal impact factor", "peritoneal dialysis", "research achievement", "research feature" ]
Intervention for depression among undergraduate religious education students: A randomized controlled trial.
36254029
This research was designed to investigate the management of depression among undergraduate religious education students and identify the research implications for school-based religious intervention.
BACKGROUND
This research is a randomized controlled trial. The treatment condition had 34 undergraduate religious education students, while 33 were in the control condition. The treatment process involved a 12-week application of religious rational emotive behavior therapy (RREBT). Data were collected with Beck's depression inventory, version 2 (BDI-II).
METHODS
Compared to students in the control condition, undergraduate religious education students in the treatment condition demonstrated a significant drop in mean BDI-II scores at post-test (F [1, 65] = 592.043, P < .05, η2p = .90). The effect of RREBT among students in the treatment condition stayed consistent at 2 weeks follow-up (F [1, 65] = 786.396, P < .05, η2p = .92, ΔR2 = .922).
RESULTS
The effect of RREBT on depression treatment among undergraduate religious education students was positive and remained consistent at follow-up. The study results underscore the importance of expanding this treatment approach for these undergraduate education students in Nigeria.
CONCLUSION
[ "Depression", "Humans", "Nigeria", "Schools", "Students" ]
9575794
1. Introduction
Depression is a common mental health concern among Nigerian university students.[1,2] A previous cross-sectional survey reported a 25.2% prevalence rate for moderate to severe depression among a sample of 820 Nigerian university students.[3] Another study of a sample of 408 Nigerian undergraduate students from a specific field of study showed a prevalence of 44.6% for depression.[4] A study of 352 Nigerian medical students sampled from two universities reported that students’ age, socioeconomic class and gender were not significantly related to depression.[5] Undergraduate education students in Nigeria are also susceptible to depression.[6,7] Mean depression scores of 40.84 ± 9.10 and 36.75 ± 9.48 were reported among first-year male and female undergraduate education students respectively (total sample = 560) in a Nigerian study that employed ex post facto research design.[7] Undergraduate religious education students in Nigeria are, thus, not immune to depression. Religious education students are exposed to a religious education curriculum that equips them with the knowledge and skills for analytic and ethical thinking in a multidisciplinary and contextualized manner about religion, faith, personal beliefs, and institutional religious practices.[8,9] Exposure to the religious education curriculum enables the students to also acquire the knowledge and skills required for examining ways of living and practising religions locally and globally.[8,9] Upon graduation, these students could work as religious teachers or engage in ministerial works.[8,9] Several studies have looked at how psychological interventions[10–12] can help to manage depression among Nigerian students in other fields of learning, but not among undergraduate religious education students. Therefore, the aim of this randomized controlled trial (RCT) was to investigate the management of depression among undergraduate religious education students by employing religious rational emotive behavior therapy (RREBT) and to identify the research implications for school-based religious intervention in Nigeria. The RREBT is a faith-based mental health intervention created following the principles and practice of rational emotive behavior therapy (REBT), a therapy started by Ellis.[13,14] In RREBT it is assumed that emotionally healthy behavior can be actualized by harnessing an individual’s absolutistic religious philosophies.[14–16] Therefore, the incorporation of scriptural contents and other religious resources relevant to an individual’s religious traditions and orientations into the treatment process for resolving the emotional and/or behavioral problems of the individual is a common feature of RREBT.[15,17] Against this backdrop, the study hypothesis is that, compared to the students in the control condition, the management of depression using RREBT will be significantly more effective among undergraduate religious education students in the treatment condition.
2.5. Procedure
With the BDI-II, data collection was made possible at pretest, post-test, and two weeks follow-up (conducted 3 months after the post-test). At the beginning of the study, 350 undergraduate religious education students were screened for the presence of moderate to severe depression. As such, undergraduate religious education students with moderate to severe depression were considered eligible for the study. Other eligibility criteria were being a first-year undergraduate religious education student, submission of a consent form, and noninvolvement in other depression treatment programs. The treatment process involved a 12-week application of RREBT. The RREBT techniques[15] in addition to general REBT techniques for treatment of depression[22] were used in the current intervention to render therapeutic assistance to undergraduate religious education students. One session was held per week and lasted for two hours. The students in the control condition were waitlisted to commence their treatment sessions a week after the study follow-up evaluation.
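A minimal sketch of the screening step described above, using the BDI-II cut-offs given in Section 2.4 (0–13 minimal, 14–19 mild, 20–28 moderate, 29–63 severe); the score values below are hypothetical, not study data:

def bdi_ii_severity(total):
    # BDI-II totals range 0-63 (21 items scored 0-3).
    if not 0 <= total <= 63:
        raise ValueError("BDI-II totals range from 0 to 63")
    if total <= 13:
        return "minimal"
    if total <= 19:
        return "mild"
    if total <= 28:
        return "moderate"
    return "severe"

def eligible(total):
    # Eligibility in this trial required moderate to severe depression.
    return bdi_ii_severity(total) in ("moderate", "severe")

for score in (9, 17, 24, 35):   # hypothetical screening totals
    print(score, bdi_ii_severity(score), eligible(score))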
3. Results
The students’ mean age in the treatment condition was 18.35 ± .08 years while the mean age for students in the control condition was 19.06 ± 2.15 years. Information with respect to gender of participants showed that in RREBT group, 28.4% were males while 22.4% were females. Also, in the control group, 32.3% were males while 14.9% were females. Furthermore, based on ethnicity, in RREBT 35.8% were Igbo, 1.5% were Yoruba, and 13.4% were other ethnic groups. Likewise, in the control group, 38.8% were Igbo, 3.0% were Yoruba while 6.0% were other ethnic groups. Table 2 show the descriptive statistics for each group by time points. Univariate analysis of the pretest data shows that undergraduate religious education students in the RREBT and control groups had comparable BDI-II scores (F [1, 65] = 2.645, P = .109). Participants demographic characteristics. χ2 = Chi-square, *Mean age ± SD of participants = mean and standard deviation, n = number of participants in each group; at-test result for age comparison, RREBT = religious rational emotive behavior therapy, t = t test. Descriptive statistics. BDI-II = Beck’s depression inventory, version 2, RREBT = religious rational emotive behavior therapy. Posttest results (Greenhouse-Geisser corrected) revealed a significant effect of Time (F [1.259, 81.860] = 200.953, P < .05, η2p = .76), Group (F [1, 65] = 592.043, P < .05, η2p = .90), and Time by Group interaction (F [1.259, 81.860] = 294.766, P < .05, η2p = .82) on depression severity among undergraduate religious education students. The results suggest that RREBT significantly reduced depression severity among the undergraduate religious education students (see Fig. 2 also). Mean changes in BDI-II scores of students across the study groups. BDI-II = Beck’s depression inventory, version 2. Univariate analysis of the follow-up data show that the effect of RREBT among students in the treatment condition remained consistent at 2 weeks follow-up (F [1, 65] = 786.396, P < .05, η2p = .92, sΔR2 = .922). In Table 3, the pairwise comparisons regarding the main effect of Time revealed a significant decrease in students’ BDI-II score from time 1 to time 2 (MD = 12.406, SE = .899, P < .05, 95%CI = 10.203, −14.608), and from time 1 to time 3 (MD = 12.473, SE = .770, P < .05, 95%CI = 10.586, −14.360). Likewise, the significant decrease at time 2 was sustained at time 3 (MD = .68, SE = .374, P > .05, 95%CI = −.848, −.984). In addition, pairwise comparisons regarding the main effect of Group revealed that the undergraduate religious education students in the RREBT condition reported lower BDI-II scores than students in the control condition (MD = 21.903, SE = .900, P < .05, 95%CI = 20.106 −23.701). Pairwise comparisons on the effect of Time and Group on students’ BDI-II scores. Adjustment for multiple comparisons: Sidak. MD = mean difference, RREBT = RREBT = religious rational emotive behavior therapy, SE = standard error.
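The partial eta-squared values reported above can be recovered from the F statistics and their degrees of freedom with the usual identity η²p = (F × df1) / (F × df1 + df2); a minimal sketch using the figures quoted in this section:

def partial_eta_squared(f, df1, df2):
    # Standard conversion from an F statistic to partial eta squared.
    return (f * df1) / (f * df1 + df2)

print(round(partial_eta_squared(200.953, 1.259, 81.860), 2))  # Time effect        -> 0.76
print(round(partial_eta_squared(592.043, 1, 65), 2))          # Group effect       -> 0.90
print(round(partial_eta_squared(294.766, 1.259, 81.860), 2))  # Time x Group       -> 0.82
print(round(partial_eta_squared(786.396, 1, 65), 2))          # Follow-up contrast -> 0.92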
null
null
[ "2. Methods", "2.1. Ethical statements", "2.2. Participants", "2.3. Randomization, allocation concealment, and blinding procedure", "2.4. Measures", "2.6. Data analyses", "5. Conclusion", "Author contributions" ]
[ "[SUBTITLE] 2.1. Ethical statements [SUBSECTION] The ethical approval of the study protocol for this RCT was issued by the Education Faculty Research Ethics Committee, University of Nigeria (REC/UNN/FE/2019/000017). The undergraduate religious education students sampled for the study filled out the informed consent form. The research study was developed and delivered as per the WMA Declaration of Helsinki. This RCT’s registration was done in the Pan African Clinical Trials Registry (unique identification number for the registry is: PACTR202209591337370).\nThe ethical approval of the study protocol for this RCT was issued by the Education Faculty Research Ethics Committee, University of Nigeria (REC/UNN/FE/2019/000017). The undergraduate religious education students sampled for the study filled out the informed consent form. The research study was developed and delivered as per the WMA Declaration of Helsinki. This RCT’s registration was done in the Pan African Clinical Trials Registry (unique identification number for the registry is: PACTR202209591337370).\n[SUBTITLE] 2.2. Participants [SUBSECTION] Participants were 67 first-year undergraduate religious education students sampled from four universities located in the southern part of Nigeria (see Fig. 1). With GPower 3.1 software, we were able to conduct the sample calculation (α = 0.05; actual power = 0.76; effect size = .20; suggested sample = 38).[18,19] The treatment condition had 34 undergraduate religious education students but 33 undergraduate religious education students were in the control condition as achieved using randomization software.[20]\nParticipant flowchart.\nParticipants were 67 first-year undergraduate religious education students sampled from four universities located in the southern part of Nigeria (see Fig. 1). With GPower 3.1 software, we were able to conduct the sample calculation (α = 0.05; actual power = 0.76; effect size = .20; suggested sample = 38).[18,19] The treatment condition had 34 undergraduate religious education students but 33 undergraduate religious education students were in the control condition as achieved using randomization software.[20]\nParticipant flowchart.\n[SUBTITLE] 2.3. Randomization, allocation concealment, and blinding procedure [SUBSECTION] The allocation sequence in this study was generated through simple randomization using a randomization table generated by random allocation software.[20] The allocation sequence was concealed from the individual allocating participants to study groups through the use of sealed opaque envelopes. In order to improve the blinding process, certain information concerning participants and group was not disclosed to the data analyst.\nThe allocation sequence in this study was generated through simple randomization using a randomization table generated by random allocation software.[20] The allocation sequence was concealed from the individual allocating participants to study groups through the use of sealed opaque envelopes. In order to improve the blinding process, certain information concerning participants and group was not disclosed to the data analyst.\n[SUBTITLE] 2.4. 
Measures [SUBSECTION] The Students’ Demographic Questionnaire was employed to get personal details like age (uncategorized), gender (1 = male; 2 = female), and ethnicity (1 = Igbo; 2 = Yoruba; 3 = others).\nThe Beck’s depression inventory-II (BDI-II)[21] which has 21 items with 4-point self-rating options ranging from 0 to 3 was used to evaluate the severity of depression among the undergraduate religious education students. The BDI-II scores are interpreted in the following manner: 0 to 13 for minimal depression, 14 to 19 for mild depression, 20 to 28 for moderate depression, and 29 to 63 for severe depression. The BDI-II was shown to be consistent for this research (α = .84).\nThe Students’ Demographic Questionnaire was employed to get personal details like age (uncategorized), gender (1 = male; 2 = female), and ethnicity (1 = Igbo; 2 = Yoruba; 3 = others).\nThe Beck’s depression inventory-II (BDI-II)[21] which has 21 items with 4-point self-rating options ranging from 0 to 3 was used to evaluate the severity of depression among the undergraduate religious education students. The BDI-II scores are interpreted in the following manner: 0 to 13 for minimal depression, 14 to 19 for mild depression, 20 to 28 for moderate depression, and 29 to 63 for severe depression. The BDI-II was shown to be consistent for this research (α = .84).\n[SUBTITLE] 2.5. Procedure [SUBSECTION] With the BDI-II, data collection was made possible at pretest, post-test, and two weeks follow-up (conducted 3 months after the post-test). At the beginning of the study, 350 undergraduate religious education students were screened for the presence of moderate to severe depression. As such, undergraduate religious education students with moderate to severe depression were considered eligible for the study. Other eligibility criteria were being a first-year undergraduate religious education student, submission of a consent form, and noninvolvement in other depression treatment programs. The treatment process involved a 12-week application of RREBT. The RREBT techniques[15] in addition to general REBT techniques for treatment of depression[22] were used in the current intervention to render therapeutic assistance to undergraduate religious education students. One session was held per week and lasted for two hours. The students in the control condition were waitlisted to commence their treatment sessions a week after the study follow-up evaluation.\nWith the BDI-II, data collection was made possible at pretest, post-test, and two weeks follow-up (conducted 3 months after the post-test). At the beginning of the study, 350 undergraduate religious education students were screened for the presence of moderate to severe depression. As such, undergraduate religious education students with moderate to severe depression were considered eligible for the study. Other eligibility criteria were being a first-year undergraduate religious education student, submission of a consent form, and noninvolvement in other depression treatment programs. The treatment process involved a 12-week application of RREBT. The RREBT techniques[15] in addition to general REBT techniques for treatment of depression[22] were used in the current intervention to render therapeutic assistance to undergraduate religious education students. One session was held per week and lasted for two hours. The students in the control condition were waitlisted to commence their treatment sessions a week after the study follow-up evaluation.\n[SUBTITLE] 2.6. 
Data analyses [SUBSECTION] A balance test was carried out before the main analyses. By employing repeated measures ANOVA (at 95%CI), analyses of data were made possible in the Statistical Package for Social Sciences software.[23] In the analysis, within-subjects factor was Time, whereas between-subjects factor was Group. Univariate analyses were carried out to find out any if there were mean differences in the pretest/follow-up depression scores of students in the two groups. Data screening for missing values and test of assumption violations were also carried out.\nA balance test was carried out before the main analyses. By employing repeated measures ANOVA (at 95%CI), analyses of data were made possible in the Statistical Package for Social Sciences software.[23] In the analysis, within-subjects factor was Time, whereas between-subjects factor was Group. Univariate analyses were carried out to find out any if there were mean differences in the pretest/follow-up depression scores of students in the two groups. Data screening for missing values and test of assumption violations were also carried out.", "The ethical approval of the study protocol for this RCT was issued by the Education Faculty Research Ethics Committee, University of Nigeria (REC/UNN/FE/2019/000017). The undergraduate religious education students sampled for the study filled out the informed consent form. The research study was developed and delivered as per the WMA Declaration of Helsinki. This RCT’s registration was done in the Pan African Clinical Trials Registry (unique identification number for the registry is: PACTR202209591337370).", "Participants were 67 first-year undergraduate religious education students sampled from four universities located in the southern part of Nigeria (see Fig. 1). With GPower 3.1 software, we were able to conduct the sample calculation (α = 0.05; actual power = 0.76; effect size = .20; suggested sample = 38).[18,19] The treatment condition had 34 undergraduate religious education students but 33 undergraduate religious education students were in the control condition as achieved using randomization software.[20]\nParticipant flowchart.", "The allocation sequence in this study was generated through simple randomization using a randomization table generated by random allocation software.[20] The allocation sequence was concealed from the individual allocating participants to study groups through the use of sealed opaque envelopes. In order to improve the blinding process, certain information concerning participants and group was not disclosed to the data analyst.", "The Students’ Demographic Questionnaire was employed to get personal details like age (uncategorized), gender (1 = male; 2 = female), and ethnicity (1 = Igbo; 2 = Yoruba; 3 = others).\nThe Beck’s depression inventory-II (BDI-II)[21] which has 21 items with 4-point self-rating options ranging from 0 to 3 was used to evaluate the severity of depression among the undergraduate religious education students. The BDI-II scores are interpreted in the following manner: 0 to 13 for minimal depression, 14 to 19 for mild depression, 20 to 28 for moderate depression, and 29 to 63 for severe depression. The BDI-II was shown to be consistent for this research (α = .84).", "A balance test was carried out before the main analyses. 
By employing repeated measures ANOVA (at 95%CI), analyses of data were made possible in the Statistical Package for Social Sciences software.[23] In the analysis, within-subjects factor was Time, whereas between-subjects factor was Group. Univariate analyses were carried out to find out if there were any mean differences in the pretest/follow-up depression scores of students in the two groups. Data screening for missing values and test of assumption violations were also carried out.", "The effect of RREBT on depression treatment among undergraduate religious education students was positive and remained consistent at follow-up. The study results underscore the importance of expanding this treatment approach for these undergraduate education students in Nigerian universities.", "Conceptualization: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nData curation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nFormal analysis: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nFunding acquisition: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nInvestigation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Amos Nnaemeka Amedu.\nMethodology: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nProject administration: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nResources: Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn.\nSoftware: Chiedu Eseadi, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nSupervision: Leonard Chidi Ilechukwu, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nValidation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nVisualization: Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn.\nWriting – original draft: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nWriting – review & editing: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu." ]
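The internal-consistency figure quoted for the BDI-II in Section 2.4 (α = .84) is a Cronbach's alpha. A minimal sketch of that computation on item-level data; the respondents' item scores below are hypothetical, not study data:

def cronbach_alpha(item_scores):
    # item_scores: list of respondents, each a list of item scores (0-3 for the BDI-II).
    k = len(item_scores[0])
    n = len(item_scores)
    item_vars = []
    for j in range(k):
        col = [row[j] for row in item_scores]
        mean = sum(col) / n
        item_vars.append(sum((x - mean) ** 2 for x in col) / (n - 1))
    totals = [sum(row) for row in item_scores]
    mean_t = sum(totals) / n
    total_var = sum((t - mean_t) ** 2 for t in totals) / (n - 1)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [   # hypothetical 21-item responses for three respondents
    [1, 2, 1, 0, 2, 1, 1, 0, 2, 1, 1, 2, 1, 0, 1, 2, 1, 1, 0, 2, 1],
    [2, 3, 2, 1, 3, 2, 2, 1, 3, 2, 2, 3, 2, 1, 2, 3, 2, 2, 1, 3, 2],
    [0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0],
]
print(round(cronbach_alpha(data), 2))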
[ "1. Introduction", "2. Methods", "2.1. Ethical statements", "2.2. Participants", "2.3. Randomization, allocation concealment, and blinding procedure", "2.4. Measures", "2.5. Procedure", "2.6. Data analyses", "3. Results", "4. Discussion", "5. Conclusion", "Author contributions" ]
[ "Depression is a common mental health concern among Nigerian university students.[1,2] A previous cross-sectional survey reported a 25.2% prevalence rate for moderate to severe depression among a sample of 820 Nigerian university students.[3] Another study of a sample of 408 Nigerian undergraduate students from a specific field of study showed a prevalence of 44.6% for depression.[4] A study of 352 Nigerian medical students sampled from two universities reported that students’ age, socioeconomic class and gender were not significantly related to depression.[5]\nUndergraduate education students in Nigeria are also susceptible to depression.[6,7] A mean depression score of 40.84 ± 9.10 and 36.75 ± 9.48 were reported among first-year male and female undergraduate education students respectively (total sample = 560) in a Nigerian study that employed an ex post facto research design.[7] Undergraduate religious education students in Nigeria are, thus, not immune to depression. Religious education students are exposed to a religious education curriculum that equips them with the knowledge and skills for analytic and ethical thinking in a multidisciplinary and contextualized manner about religion, faith, personal beliefs, and institutional religious practices.[8,9] Exposure to the religious education curriculum enables the students to also acquire the knowledge and skills required for examining ways of living and practising religions locally and globally.[8,9] Upon graduation, these students could work as religious teachers or engage in ministerial works.[8,9]\nSeveral studies have looked at how psychological interventions[10–12] can help to manage depression among Nigerian students in other fields of learning, but not among undergraduate religious education students. Therefore, the aim of this randomized controlled trial (RCT) was to investigate the management of depression among undergraduate religious education students by employing religious rational emotive behavior therapy (RREBT) and to identify the research implications for school-based religious intervention in Nigeria.\nThe RREBT is a faith-based mental health intervention created following the principles and practice of rational emotive behavior therapy (REBT), a therapy started by Ellis.[13,14] In RREBT it is assumed that emotionally healthy behavior can be actualized by harnessing an individual’s absolutistic religious philosophies.[14–16] Therefore, the incorporation of scriptural contents and other religious resources relevant to an individual’s religious traditions and orientations into the treatment process for resolving the emotional and/or behavioral problems of the individual is a common feature of RREBT.[15,17] Against this backdrop, the study hypothesis is that, compared to the students in the control condition, RREBT will lead to a significantly greater reduction in depression among undergraduate religious education students in the treatment condition.", "[SUBTITLE] 2.1. Ethical statements [SUBSECTION] The ethical approval of the study protocol for this RCT was issued by the Education Faculty Research Ethics Committee, University of Nigeria (REC/UNN/FE/2019/000017). The undergraduate religious education students sampled for the study filled out the informed consent form. The research study was developed and delivered as per the WMA Declaration of Helsinki. This RCT’s registration was done in the Pan African Clinical Trials Registry (unique identification number for the registry is: PACTR202209591337370).\n[SUBTITLE] 2.2. Participants [SUBSECTION] Participants were 67 first-year undergraduate religious education students sampled from four universities located in the southern part of Nigeria (see Fig. 1). With GPower 3.1 software, we were able to conduct the sample calculation (α = 0.05; actual power = 0.76; effect size = .20; suggested sample = 38).[18,19] The treatment condition had 34 undergraduate religious education students while 33 undergraduate religious education students were in the control condition, as achieved using randomization software.[20]\nParticipant flowchart.\n[SUBTITLE] 2.3. Randomization, allocation concealment, and blinding procedure [SUBSECTION] The allocation sequence in this study was generated through simple randomization using a randomization table generated by random allocation software.[20] The allocation sequence was concealed from the individual allocating participants to study groups through the use of sealed opaque envelopes. In order to improve the blinding process, certain information concerning participants and group was not disclosed to the data analyst.
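As an illustration of the allocation step just described, the sketch below builds a near-balanced 1:1 allocation list of the kind a random allocation table yields; the seed and arm labels are assumptions, not the software the authors actually used.

# Illustrative allocation sequence for 67 participants (34 RREBT, 33 control).
import random

def allocation_sequence(n_participants, seed=2019):
    rng = random.Random(seed)
    n_treatment = (n_participants + 1) // 2
    arms = ["RREBT"] * n_treatment + ["control"] * (n_participants - n_treatment)
    rng.shuffle(arms)
    return arms

sequence = allocation_sequence(67)
print(sequence.count("RREBT"), sequence.count("control"))  # 34 33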
[SUBTITLE] 2.4. Measures [SUBSECTION] The Students’ Demographic Questionnaire was employed to get personal details like age (uncategorized), gender (1 = male; 2 = female), and ethnicity (1 = Igbo; 2 = Yoruba; 3 = others).\nThe Beck’s depression inventory-II (BDI-II)[21] which has 21 items with 4-point self-rating options ranging from 0 to 3 was used to evaluate the severity of depression among the undergraduate religious education students. The BDI-II scores are interpreted in the following manner: 0 to 13 for minimal depression, 14 to 19 for mild depression, 20 to 28 for moderate depression, and 29 to 63 for severe depression. The BDI-II was shown to be consistent for this research (α = .84).
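To make the BDI-II cut-offs quoted above explicit, here is a small scoring helper; the function name and its use on plain integer totals are assumptions for the sketch.

# BDI-II severity bands: 0-13 minimal, 14-19 mild, 20-28 moderate, 29-63 severe.
def bdi_ii_severity(total_score):
    if not 0 <= total_score <= 63:
        raise ValueError("BDI-II totals range from 0 to 63")
    if total_score <= 13:
        return "minimal"
    if total_score <= 19:
        return "mild"
    if total_score <= 28:
        return "moderate"
    return "severe"

# Students scoring moderate or severe met the screening threshold used in this trial.
print(bdi_ii_severity(24))  # moderate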
[SUBTITLE] 2.5. Procedure [SUBSECTION] With the BDI-II, data collection was made possible at pretest, post-test, and two weeks follow-up (conducted 3 months after the post-test). At the beginning of the study, 350 undergraduate religious education students were screened for the presence of moderate to severe depression. As such, undergraduate religious education students with moderate to severe depression were considered eligible for the study. Other eligibility criteria were being a first-year undergraduate religious education student, submission of a consent form, and noninvolvement in other depression treatment programs. The treatment process involved a 12-week application of RREBT. The RREBT techniques[15] in addition to general REBT techniques for treatment of depression[22] were used in the current intervention to render therapeutic assistance to undergraduate religious education students. One session was held per week and lasted for two hours. The students in the control condition were waitlisted to commence their treatment sessions a week after the study follow-up evaluation.", "The students’ mean age in the treatment condition was 18.35 ± .08 years while the mean age for students in the control condition was 19.06 ± 2.15 years. Information with respect to gender of participants showed that in the RREBT group, 28.4% were males while 22.4% were females. Also, in the control group, 32.3% were males while 14.9% were females. Furthermore, based on ethnicity, in RREBT 35.8% were Igbo, 1.5% were Yoruba, and 13.4% were other ethnic groups. Likewise, in the control group, 38.8% were Igbo, 3.0% were Yoruba while 6.0% were other ethnic groups.\nTable 2 shows the descriptive statistics for each group by time points. Univariate analysis of the pretest data shows that undergraduate religious education students in the RREBT and control groups had comparable BDI-II scores (F [1, 65] = 2.645, P = .109).\nParticipants demographic characteristics.\nχ2 = Chi-square, *Mean age ± SD of participants = mean and standard deviation, n = number of participants in each group; a t-test result for age comparison, RREBT = religious rational emotive behavior therapy, t = t test.\nDescriptive statistics.\nBDI-II = Beck’s depression inventory, version 2, RREBT = religious rational emotive behavior therapy.\nPosttest results (Greenhouse-Geisser corrected) revealed a significant effect of Time (F [1.259, 81.860] = 200.953, P < .05, η2p = .76), Group (F [1, 65] = 592.043, P < .05, η2p = .90), and Time by Group interaction (F [1.259, 81.860] = 294.766, P < .05, η2p = .82) on depression severity among undergraduate religious education students. The results suggest that RREBT significantly reduced depression severity among the undergraduate religious education students (see Fig. 2 also).\nMean changes in BDI-II scores of students across the study groups. BDI-II = Beck’s depression inventory, version 2.\nUnivariate analysis of the follow-up data shows that the effect of RREBT among students in the treatment condition remained consistent at 2 weeks follow-up (F [1, 65] = 786.396, P < .05, η2p = .92, sΔR2 = .922).\nIn Table 3, the pairwise comparisons regarding the main effect of Time revealed a significant decrease in students’ BDI-II score from time 1 to time 2 (MD = 12.406, SE = .899, P < .05, 95%CI = 10.203, −14.608), and from time 1 to time 3 (MD = 12.473, SE = .770, P < .05, 95%CI = 10.586, −14.360). Likewise, the significant decrease at time 2 was sustained at time 3 (MD = .68, SE = .374, P > .05, 95%CI = −.848, −.984).
In addition, pairwise comparisons regarding the main effect of Group revealed that the undergraduate religious education students in the RREBT condition reported lower BDI-II scores than students in the control condition (MD = 21.903, SE = .900, P < .05, 95%CI = 20.106–23.701).\nPairwise comparisons on the effect of Time and Group on students’ BDI-II scores.\nAdjustment for multiple comparisons: Sidak.\nMD = mean difference, RREBT = religious rational emotive behavior therapy, SE = standard error.", "The objective of this RCT was to investigate the management of depression among undergraduate religious education students through the application of RREBT and to highlight the research implications for school-based religious intervention in Nigeria. The study found that, compared to students in the control condition, undergraduate religious education students in the treatment condition demonstrated a significant drop in mean BDI-II scores at post-test. The result of RREBT among students in the treatment condition stayed consistent at follow-up. This aligns with some past studies that showed that REBT techniques can be effective for treating depression.[10,12,24–26] The implications of this research for school-based religious intervention in Nigeria cannot be overstated. The majority of Nigerian university students identify with one religion or the other. As such, exploring their religious philosophies, which are similar to those underlying RREBT, can help in creating a student-friendly intervention for the treatment of mental health problems like depression. Also, religious educators and religious counselors working in the university environment may collaborate to further investigate the usefulness of RREBT in helping university undergraduate students to cope with other mental health problems like suicidal ideation and posttraumatic stress disorder. When developing school-based mental health interventions, students’ religious orientations should be taken into account[27] and their religious traditions should be incorporated into the intervention process to address their peculiar mental health needs.\nIt is important to state that this study has its limitations, such as not making use of a qualitative approach for data collection and addressing this public health problem among undergraduate religious education students only, thereby excluding undergraduate students from other academic disciplines. Also, the role played by students’ demographic characteristics in predicting the extent to which RREBT impacts depression among the students was not explored in this study. In future studies, the use of qualitative approaches is suggested and inclusion of students across the various academic disciplines is also recommended. In future studies, researchers should also recognize the extent to which students’ demographics, for instance, their ethnicity, age, gender, and socioeconomic class, could affect how much the RREBT program would reduce the severity of depression experienced by students.", "The effect of RREBT on depression treatment among undergraduate religious education students was positive and remained consistent at follow-up.
The study results underscore the importance of expanding this treatment approach for these undergraduate education students in Nigerian universities.", "Conceptualization: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nData curation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nFormal analysis: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nFunding acquisition: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nInvestigation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Amos Nnaemeka Amedu.\nMethodology: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nProject administration: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nResources: Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn.\nSoftware: Chiedu Eseadi, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nSupervision: Leonard Chidi Ilechukwu, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nValidation: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nVisualization: Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn.\nWriting – original draft: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu.\nWriting – review & editing: Chiedu Eseadi, Leonard Chidi Ilechukwu, Vera Victor-Aigbodion, Abatihun Alehegn Sewagegn, Amos Nnaemeka Amedu." ]
[ "BDI-II", "depression", "RREBT", "undergraduate religious education students" ]
Network pharmacology study on the potential effect mechanism of Chuanzhi Tongluo Capsule in the treatment of cerebral infarction.
36254030
Chuanzhi Tongluo capsules have been widely used to treat stroke in the recovery period and cerebral infarction, but their specific therapeutic mechanism is not well understood.
BACKGROUND
This study aims to investigate the mechanism of action of Chuanzhi Tongluo capsule in cerebral infarction based on a network pharmacology approach. The chemical composition of Chuanzhi Tongluo capsules was collected from the TCMSP platform. Its potential targets were predicted by SwissTargetPrediction and standardized using the Uniprot database for gene normalization. Meanwhile, the OMIM, GeneCards, and TTD databases were used to obtain the targets related to cerebral infarction. The standard targets of Chuanzhi Tongluo capsule and cerebral infarction were uploaded to the STRING database to construct protein-protein interaction networks. Topological methods were used to analyze the key targets and components in the drug-component-disease-target network. Gene ontology function and Kyoto Encyclopedia of Genes and Genomes pathway enrichment analysis of the shared targets were performed using the DAVID database.
METHODS
A total of 105 active ingredients and 427 targets were associated with Chuanzhi Tongluo capsule, and there were 3055 targets related to cerebral infarction disease and 240 common targets between the two keywords. The key targets included INS, ALB, IL-6, VEGFA, TNF, and TP53. The conduction pathways involved include the calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, and TNF signaling pathway.
RESULTS
The active ingredients in Chuanzhi Tongluo capsule may participate in the therapeutic process of cerebral infarction by regulating the calcium, cAMP, cGMP-PKG, and TNF signaling pathway through critical targets such as INS, ALB, IL-6, VEGFA, TNF, and TP53.
CONCLUSION
[ "Calcium", "Cerebral Infarction", "Drugs, Chinese Herbal", "Humans", "Interleukin-6", "Network Pharmacology" ]
9575740
1. Introduction
Cerebral infarction, also known as ischemic stroke, is a common clinical cerebrovascular disease with high mortality and disability rates. Patients with this disease mostly have underlying conditions that induce local blood circulation disorders in the brain, such as hypertension and abnormal platelet function. These causes predispose patients to repeated ischemia, hypoxia, softening and necrosis of brain tissue, and eventually neurological impairment.[1] Epidemiological data show that the disease is characterized by rapid onset and high disability, mortality, and recurrence rates, making it the single disease responsible for the highest rate of disability in the world.[2] Atherosclerosis, hypertension, coronary heart disease, hyperlipidemia, diabetes mellitus, obesity, alcohol consumption, and smoking are common risk factors for cerebral infarction. The American Heart Association/American Stroke Association recommends that stroke prevention can be achieved by controlling these causative risk factors.[3,4] Currently, intravenous recombinant tissue plasminogen activator is the only FDA-approved drug for the treatment of ischemic stroke. However, the drug must be administered within 4.5 hours of the patient’s stroke onset; otherwise, there is a risk of intracranial hemorrhage. Endovascular mechanical thrombectomy is one of the most important methods for the treatment of acute ischemic stroke in recent years and is a more effective treatment recommended in the guidelines for this disease. However, the implementation of this method requires a more specialized team of physicians and a rigorous evaluation of the patient.[5] These drawbacks limit the clinical application of this treatment method. Current research shows that TCM can regulate the pathophysiological processes of patients with cerebral infarction through its multi-component, multi-target, and multi-effect characteristics, and can play a more scientific, rational, and effective role in comprehensive treatment. In recent years, with the development of modern genomics, proteomics, metabolomics and other theories, as well as the introduction of a systems biology perspective and the application of bioinformatics, the concept of network pharmacology has been proposed.[6,7] The principle of network pharmacology is to transform the traditional “one drug, one target” model into a holistic “multi-component, multi-target” model by constructing a network relationship between active ingredients and targets. The approach of elaborating the systemic mechanism of action of complex drugs at the molecular level is in line with the holistic concept emphasized in Chinese medicine.[8] The complexity of compound components in traditional Chinese medicine formulations and the lack of research methods often make it difficult for researchers to achieve a detailed study of the mechanisms of action exerted by their components. Therefore, network pharmacology research on TCM provides a scientific basis for explaining the differences in the way of thinking between Chinese and Western medicine and points the way to the worldwide promotion of TCM. The Chinese patent medicine Chuanzhi Tongluo Capsule is composed of the leech, Chuanxiong (Szechuan lovage rhizome), Danshen (Dan-Shen Root), and Huangqi (Milkvetch Root). This drug has the effect of clearing away heat and toxic material and is widely used in the treatment of stroke and cerebral infarction in the recovery period.
The present study initially elucidated the mechanism of action of Chuanzhi Tongluo Capsule in the treatment of cerebral infarction and laid a good foundation for further research on the pharmacological basis and mechanism of action of this Chinese medicine.
2. Materials and Methods
[SUBTITLE] 2.1. Collection of active ingredients of Chuanzhi Tongluo Capsule [SUBSECTION] The compound components of Chuanzhi Tongluo Capsule were searched in the TCMSP and BATMAN databases with the keywords “Huangqi”, “Dan Shen”, “leech”, and “Chuanxiong”. On the premise of pharmacokinetic principles, OB ≥ 30% and DL ≥ 0.18 were used as the screening conditions,[9] and the eligible components of Astragalus, Salvia, Leech, and Chuanxiong were selected as the active ingredients. This study is a summary of data from published articles and does not address issues related to patient ethics; therefore, it does not require approval by an ethics committee. [SUBTITLE] 2.2. Prediction of potential targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsules [SUBSECTION] A search was conducted in the human gene database GeneCards (https://www.genecards.org/) using the keyword “Cerebral Infarction (CI)”, and the disease-related genes were screened. The potential targets for the treatment of CI with Chuanzhi Tongluo Capsules were obtained by mapping the obtained targets to the compound targets screened in section 2.1. [SUBTITLE] 2.3. Target screening and construction of “drug-compound-target” network of Chuanzhi Tongluo Capsule [SUBSECTION] The active ingredients and their corresponding targets were searched and screened in the TCMSP database, and the gene names of the targets were corrected with the Uniprot database.[10] The target proteins were imported into Cytoscape 3.7.2 software to construct a “drug-compound-target” network for analysis. [SUBTITLE] 2.4. Protein–protein interaction (PPI) network construction and analysis [SUBSECTION] The potential targets of the therapeutic CI effect of Chuanzhi Tongluo Capsule screened under section 2.3 were entered into the STRING database, the species was set to “Homo sapiens” (human), and the protein interaction relationships were obtained. The data files were saved in TSV format. The obtained data were imported into Cytoscape 3.7.2 software to construct PPI networks and analyze the core targets of the conditional screening.
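The degree-based screening described in 2.4 was carried out in Cytoscape; purely as an illustration, the same ranking can be sketched in Python with networkx. The file name and the node1/node2 column names of the STRING TSV export are assumptions for the sketch.

# Illustrative degree ranking of a STRING-style edge list; not the authors' actual pipeline.
import pandas as pd
import networkx as nx

edges = pd.read_csv("string_interactions.tsv", sep="\t")  # assumed columns: node1, node2

G = nx.Graph()
G.add_edges_from(zip(edges["node1"], edges["node2"]))

# Rank targets by degree (number of interaction partners) and keep the top hubs.
for gene, degree in sorted(G.degree, key=lambda pair: pair[1], reverse=True)[:10]:
    print(gene, degree)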
[SUBTITLE] 2.5. Gene ontology (GO) function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis [SUBSECTION] To gain insight into the functions of the potential target genes and their roles in the signaling pathways of the CI targets screened above, we performed GO analysis using the ClueGO plug-in and KEGG pathway enrichment analysis using the DAVID database. We set the P-value < .05 and defined the species as “Homo sapiens.” The top 20 entries of the KEGG pathway analysis were selected for visualization and ranked according to the number of enriched targets.
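DAVID performs these enrichment calls through its web interface; to make the statistic behind such an over-representation test concrete, here is a hedged sketch using scipy's hypergeometric distribution. All counts are placeholders rather than values taken from the study.

# Over-representation (enrichment) p-value for a single pathway; placeholder counts.
from scipy.stats import hypergeom

background = 20000   # assumed number of annotated background genes
in_pathway = 180     # genes annotated to the pathway of interest
submitted = 240      # intersection targets submitted for enrichment
overlap = 15         # submitted targets that fall in the pathway

# P(X >= overlap) under the hypergeometric null model
p_value = hypergeom.sf(overlap - 1, background, in_pathway, submitted)
print(f"enrichment P-value: {p_value:.3g}")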
3. Results
[SUBTITLE] 3.1. Screening of active ingredients in Chuanzhi Tongluo capsules [SUBSECTION] A total of 105 compounds in Chuanzhi Tongluo Capsules were screened from the TCMSP and Batman-TCM databases. Among them, 20 were from Astragalus, 65 from Salvia, 13 from the leech, and 7 from Chuanxiong. A total of 427 targets were predicted. For more information, see Table 1. The information on the active compounds and the targets of Chuanzhi Tongluo Capsules. [SUBTITLE] 3.2. Prediction and screening of core targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule [SUBSECTION] A total of 3055 potential CI targets were screened using the GeneCards database. The online tool Venny 2.1 (http://bioinfogp.cnb.csic.es/tools/venny/index.html) was used to draw the Venn diagram (Fig. 1) of the active ingredient targets and CI targets of Chuanzhi Tongluo Capsule. A total of 240 potential predicted intersection targets were obtained for CI treatment with Chuanzhi Tongluo Capsule. Venn diagram of the intersectional targets of Chuanzhi Tongluo Capsule for cerebral infarction.
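The 240 common targets come from a simple set intersection of the drug-target and disease-target lists; a minimal sketch follows, with the input file names assumed for illustration.

# Intersection behind the Venn diagram; file names are assumptions.
with open("chuanzhi_targets.txt") as f:
    drug_targets = {line.strip() for line in f if line.strip()}        # 427 predicted targets
with open("cerebral_infarction_targets.txt") as f:
    disease_targets = {line.strip() for line in f if line.strip()}     # 3055 GeneCards hits

shared = drug_targets & disease_targets
print(len(shared), "common targets")  # the study reports 240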
[SUBTITLE] 3.3. Drug-component-target-disease network diagram construction [SUBSECTION] Cytoscape 3.7.2 software was used to construct the “drug-component-target-disease” network diagram (Fig. 2). A total of 550 nodes (including 4 drug nodes, 1 disease node, 441 target nodes, and 104 active compound nodes) and 956 edges were analyzed in the network diagram. In the network, triangles represent the component drugs of Chuanzhi Tongluo Capsule and cerebral infarction, circles represent the active compounds, and target genes are represented by V-shapes. Each edge represents an interaction between nodes, and the degree value represents the number of connections between a node and other nodes. The data were analyzed using the “NetworkAnalyzer” plug-in. The nodes with significant rank and centrality values in the network were screened, and the resulting nodes may play a key role in the network.[1] Thus, these multiple correspondences between the active compounds and the targets reflect the complexity of the herbal compound, while the interaction between the components and the targets may be a reflection of the holistic therapeutic effect of the herbal compound. Based on the screening of the rank and centrality values of the active compounds, the top 5 active ingredients were crocetin, ursolic acid, D-Mannitol, quercetin, and hederagenin. This suggests that these active ingredients may be key factors in the treatment of cerebral infarction with Chuanzhi Tongluo Capsule. The drug-component-target-disease network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448. [SUBTITLE] 3.4. Prediction and analysis of core targets of Chuanzhi Tongluo Capsule for cerebral infarction [SUBSECTION] The intersecting targets of Chuanzhi Tongluo Capsule and cerebral infarction were imported into the STRING 11.0 database to obtain the protein interaction network map (Fig. 3). The obtained data were imported into Cytoscape 3.7.2 software in TSV format to construct the PPI network (Fig. 3). The network contains a total of 238 nodes and 3086 edges. The top 10 targets by degree value, in descending order, were INS, ALB, IL-6, VEGFA, TNF, TP53, APP, MAPK1, EGFR, and PTGS2, and these may be the key targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule (Table 2). Result of core target network information. Protein–protein interaction network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.
[SUBTITLE] 3.5. GO functional enrichment analysis [SUBSECTION] A total of 142 GO entries (P < .05) were obtained using the DAVID online platform for GO functional enrichment analysis of the intersecting targets. Among them, 96 entries were obtained for biological processes (Fig. 4A), mainly related to response to drug and response to hypoxia; 21 entries were obtained for cellular component, mainly related to the plasma membrane and integral components of the plasma membrane (Fig. 4B); and 25 entries were obtained for molecular function (Fig. 4C), mainly related to steroid hormone receptor activity and drug binding (Fig. 5). The chart of GO functional enrichment analysis. (A) Biological processes; (B) cellular component; and (C) molecular function. GO = gene ontology. The target-pathway map of Chuanzhi Tongluo Capsule. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448. [SUBTITLE] 3.6. KEGG pathway enrichment analysis [SUBSECTION] The DAVID database was used for enrichment analysis of the intersection targets, and P < .05 was used as the screening criterion. A total of 25 pathways were screened, mainly involving the calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, and HIF-1 signaling pathway (Table 3). This suggests that the active ingredients in Chuanzhi Tongluo Capsule may be involved in the treatment of patients with cerebral infarction through the above signaling pathways. Results of KEGG pathway enrichment analysis. KEGG = Kyoto Encyclopedia of Genes and Genomes.
[ "2.1. Collection of active ingredients of Chuanzhi Tongluo Capsule", "2.2. Prediction of potential targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsules", "2.3. Target screening and construction of “drug-compound-target” network of Chuanzhi Tongluo Capsule", "2.4. Protein–protein interaction (PPI) network construction and analysis", "2.5. Gene ontology (GO) function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis", "3.1. Screening of active ingredients in Chuanzhi Tongluo capsules", "3.2. Prediction and screening of core targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule", "3.3. Drug-component-target-disease network diagram construction", "3.4. Prediction and analysis of core targets of Chuanzhi Tongluo Capsule for cerebral infarction", "3.5. GO functional enrichment analysis", "3.6. KEGG pathway enrichment analysis", "5. Conclusion", "Author contributions" ]
[ "In summary, this paper investigated the pharmacological basis and potential biological mechanisms of Chuanzhi Tongluo Capsules in the treatment of cerebral infarction using a network pharmacology approach. The treatment of cerebral infarction with Chuanzhi Tongluo Capsule is a complex process involving multiple components, multiple targets and multiple pathways. The active ingredients in Chuanzhi Tongluo Capsule, such as crocetin, ursolic acid, D-Mannitol, quercetin, and hederagenin, may be involved in the therapeutic process of cerebral infarction by regulating the calcium, cAMP, cGMP-PKG, and TNF signaling pathways through key targets such as INS, ALB, IL-6, VEGFA, TNF, and TP53.", "Conceptualization: Shan Ma, Jianxin Zhang.\nData curation: Shan Ma.\nFormal analysis: Shan Ma.\nInvestigation: Wenhui Fan.\nMethodology: Wenhui Fan.\nProject administration: Jianxin Zhang.\nResources: Wenhui Fan.\nSoftware: Wenhui Fan.\nSupervision: Wenhui Fan.\nValidation: Wenhui Fan.\nWriting – original draft: Shan Ma, Jianxin Zhang.\nWriting – review & editing: Shan Ma, Jianxin Zhang." ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Collection of active ingredients of Chuanzhi Tongluo Capsule", "2.2. Prediction of potential targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsules", "2.3. Target screening and construction of “drug-compound-target” network of Chuanzhi Tongluo Capsule", "2.4. Protein–protein interaction (PPI) network construction and analysis", "2.5. Gene ontology (GO) function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis", "3. Results", "3.1. Screening of active ingredients in Chuanzhi Tongluo capsules", "3.2. Prediction and screening of core targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule", "3.3. Drug-component-target-disease network diagram construction", "3.4. Prediction and analysis of core targets of Chuanzhi Tongluo Capsule for cerebral infarction", "3.5. GO functional enrichment analysis", "3.6. KEGG pathway enrichment analysis", "4. Discussion", "5. Conclusion", "Author contributions", "Supplementary Material" ]
[ "Cerebral infarction, also known as ischemic stroke, is a common clinical cerebrovascular disease with high mortality and disability rates. Patients with this disease mostly have underlying conditions that induce local blood circulation disorders in the brain, such as hypertension and abnormal platelet function. These causes predispose patients to repeated ischemia, hypoxia, softening and necrosis of brain tissue, and eventually neurological impairment.[1] Epidemiological data show that the disease is characterized by rapid onset and high disability, mortality, and recurrence rates, making it the single disease responsible for the highest rate of disability in the world.[2] Atherosclerosis, hypertension, coronary heart disease, hyperlipidemia, diabetes mellitus, obesity, alcohol consumption, and smoking are common risk factors for cerebral infarction. The American Heart Association/American Stroke Association recommends that stroke prevention can be achieved by controlling these causative risk factors.[3,4] Currently, intravenous recombinant tissue plasminogen activator is the only FDA-approved drug for the treatment of ischemic stroke. However, the drug must be administered within 4.5 hours of the patient’s stroke onset; otherwise, there is a risk of intracranial hemorrhage.\nEndovascular mechanical thrombectomy is one of the most important methods for the treatment of acute ischemic stroke in recent years and is a more effective treatment recommended in the guidelines for this disease. However, the implementation of this method requires a more specialized team of physicians and a rigorous evaluation of the patient.[5] These drawbacks limit the clinical application of this treatment method. Current research shows that TCM can regulate the pathophysiological processes of patients with cerebral infarction through its multi-component, multi-target, and multi-effect characteristics, and can play a more scientific, rational, and effective role in comprehensive treatment.\nIn recent years, with the development of modern genomics, proteomics, metabolomics and other theories, as well as the introduction of a systems biology perspective and the application of bioinformatics, the concept of network pharmacology has been proposed.[6,7] The principle of network pharmacology is to transform the traditional “one drug, one target” model into a holistic “multi-component, multi-target” model by constructing a network relationship between active ingredients and targets. The approach of elaborating the systemic mechanism of action of complex drugs at the molecular level is in line with the holistic concept emphasized in Chinese medicine.[8] The complexity of compound components in traditional Chinese medicine formulations and the lack of research methods often make it difficult for researchers to achieve a detailed study of the mechanisms of action exerted by their components. Therefore, network pharmacology research on TCM provides a scientific basis for explaining the differences in the way of thinking between Chinese and Western medicine and points the way to the worldwide promotion of TCM.\nThe Chinese patent medicine Chuanzhi Tongluo Capsule is composed of the leech, Chuanxiong (Szechuan lovage rhizome), Danshen (Dan-Shen Root), and Huangqi (Milkvetch Root). This drug has the effect of clearing away heat and toxic material and is widely used in the treatment of stroke and cerebral infarction in the recovery period.
The present study initially elucidated the mechanism of action of Chuanzhi Tongluo Capsule in the treatment of cerebral infarction and laid a good foundation for further research on the pharmacological basis and mechanism of action of this Chinese medicine.", "[SUBTITLE] 2.1. Collection of active ingredients of Chuanzhi Tongluo Capsule [SUBSECTION] The compound components of Chuanzhi Tongluo Capsule were searched in the database of TCMSP and BATMAN with the keywords of “Huangqi”, “Dan Shen”, “leech”, and “Chuanxiong”. On the premise of pharmacokinetic principle, OB ≥ 30% and DL ≥ 0.18 were used as the screening conditions,[9] and the eligible components of Astragalus, Salvia, Leech, and Chuanxiong were selected as the active ingredients. This study is a summary of data from published articles and does not address issues related to patient ethics, etc. Therefore, this study does not require approval by the ethics committee.\nThe compound components of Chuanzhi Tongluo Capsule were searched in the database of TCMSP and BATMAN with the keywords of “Huangqi”, “Dan Shen”, “leech”, and “Chuanxiong”. On the premise of pharmacokinetic principle, OB ≥ 30% and DL ≥ 0.18 were used as the screening conditions,[9] and the eligible components of Astragalus, Salvia, Leech, and Chuanxiong were selected as the active ingredients. This study is a summary of data from published articles and does not address issues related to patient ethics, etc. Therefore, this study does not require approval by the ethics committee.\n[SUBTITLE] 2.2. Prediction of potential targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsules [SUBSECTION] A search was conducted in the human genetic database (Genecards https://www.genecards.org/) using the keyword “Cerebral Infarction, (CI)”, and the disease-related genes were screened. The potential targets for the treatment of CI disease with Chuanzhi Tongluo Capsules were obtained by mapping the obtained targets to the compound targets screened in 1.1.\nA search was conducted in the human genetic database (Genecards https://www.genecards.org/) using the keyword “Cerebral Infarction, (CI)”, and the disease-related genes were screened. The potential targets for the treatment of CI disease with Chuanzhi Tongluo Capsules were obtained by mapping the obtained targets to the compound targets screened in 1.1.\n[SUBTITLE] 2.3. Target screening and construction of “drug-compound-target” network of Chuanzhi Tongluo Capsule [SUBSECTION] The active ingredients and their corresponding targets were searched and screened in the TCMSP database, and the gene names of the targets were corrected with the Uniprot database.[10] The target proteins were imported into Cytoscape 3.7.2 software to construct a “drug-compound-target” network for analysis.\nThe active ingredients and their corresponding targets were searched and screened in the TCMSP database, and the gene names of the targets were corrected with the Uniprot database.[10] The target proteins were imported into Cytoscape 3.7.2 software to construct a “drug-compound-target” network for analysis.\n[SUBTITLE] 2.4. Protein–protein interaction (PPI) network construction and analysis [SUBSECTION] The potential targets of the therapeutic CI effect of Chuanzhi Tongluo Capsule screened under 1.3 were entered into the STRING database, and the genus “Homo Sapiens” (Human) was set, and the protein interaction relationships were obtained. The data files were saved in TSV format. 
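The ADME screening (OB ≥ 30%, DL ≥ 0.18) and the drug–disease target mapping described in sections 2.1–2.2 above reduce, in outline, to a threshold filter followed by a set intersection. The following is a minimal sketch; the compound records, their OB/DL values and the small disease gene set are illustrative placeholders, not actual TCMSP or GeneCards exports.

```python
# Minimal sketch of the ADME screening and target-intersection steps described
# above. The compound records, their OB/DL values and the small disease gene
# set are illustrative placeholders, not actual TCMSP or GeneCards exports.

compounds = [
    {"name": "quercetin",   "ob": 46.4, "dl": 0.28, "targets": {"IL6", "TNF", "PTGS2"}},
    {"name": "compound_x",  "ob": 12.0, "dl": 0.05, "targets": {"ALB"}},
    {"name": "hederagenin", "ob": 36.9, "dl": 0.75, "targets": {"PTGS2", "NOS2"}},
]

# Screening rule stated in section 2.1: keep compounds with OB >= 30% and DL >= 0.18
active = [c for c in compounds if c["ob"] >= 30 and c["dl"] >= 0.18]

# Union of targets hit by the retained active ingredients
drug_targets = set().union(*(c["targets"] for c in active))

# Stand-in for the GeneCards "cerebral infarction" gene list (section 2.2)
disease_targets = {"IL6", "TNF", "VEGFA", "ALB"}

# The Venn overlap used downstream as the candidate target set
candidate_targets = drug_targets & disease_targets
print(sorted(c["name"] for c in active))
print(sorted(candidate_targets))
```

In practice the same filter is applied to the full TCMSP export for the four herbs and the disease set is the full GeneCards hit list for "Cerebral Infarction".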
The obtained data were imported into Cytoscape 3.7.2 software to construct PPI networks and analyze the core targets of the conditional screening.\nThe potential targets of the therapeutic CI effect of Chuanzhi Tongluo Capsule screened under 1.3 were entered into the STRING database, and the genus “Homo Sapiens” (Human) was set, and the protein interaction relationships were obtained. The data files were saved in TSV format. The obtained data were imported into Cytoscape 3.7.2 software to construct PPI networks and analyze the core targets of the conditional screening.\n[SUBTITLE] 2.5. Gene ontology (GO) function and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analysis [SUBSECTION] To gain insight into the functions of the potential target genes and their roles in the signaling pathways of the above screened CIs, we performed GO analysis using the ClueGO plug-in and KEGG pathway enrichment analysis using the DAVID database. We set the P-value < .05 and defined the species as “Homo Sapiens.” The top 20 entries of the KEGG pathway were selected for visualization and ranked according to the number of enriched targets.\nTo gain insight into the functions of the potential target genes and their roles in the signaling pathways of the above screened CIs, we performed GO analysis using the ClueGO plug-in and KEGG pathway enrichment analysis using the DAVID database. We set the P-value < .05 and defined the species as “Homo Sapiens.” The top 20 entries of the KEGG pathway were selected for visualization and ranked according to the number of enriched targets.", "The compound components of Chuanzhi Tongluo Capsule were searched in the database of TCMSP and BATMAN with the keywords of “Huangqi”, “Dan Shen”, “leech”, and “Chuanxiong”. On the premise of pharmacokinetic principle, OB ≥ 30% and DL ≥ 0.18 were used as the screening conditions,[9] and the eligible components of Astragalus, Salvia, Leech, and Chuanxiong were selected as the active ingredients. This study is a summary of data from published articles and does not address issues related to patient ethics, etc. Therefore, this study does not require approval by the ethics committee.", "A search was conducted in the human genetic database (Genecards https://www.genecards.org/) using the keyword “Cerebral Infarction, (CI)”, and the disease-related genes were screened. The potential targets for the treatment of CI disease with Chuanzhi Tongluo Capsules were obtained by mapping the obtained targets to the compound targets screened in 1.1.", "The active ingredients and their corresponding targets were searched and screened in the TCMSP database, and the gene names of the targets were corrected with the Uniprot database.[10] The target proteins were imported into Cytoscape 3.7.2 software to construct a “drug-compound-target” network for analysis.", "The potential targets of the therapeutic CI effect of Chuanzhi Tongluo Capsule screened under 1.3 were entered into the STRING database, and the genus “Homo Sapiens” (Human) was set, and the protein interaction relationships were obtained. The data files were saved in TSV format. The obtained data were imported into Cytoscape 3.7.2 software to construct PPI networks and analyze the core targets of the conditional screening.", "To gain insight into the functions of the potential target genes and their roles in the signaling pathways of the above screened CIs, we performed GO analysis using the ClueGO plug-in and KEGG pathway enrichment analysis using the DAVID database. 
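The GO/KEGG enrichment step just described (ClueGO and DAVID, P < .05, top 20 pathways ranked by enriched-target count) is, at its core, an over-representation test. A minimal sketch of one such test, the hypergeometric test, follows; all gene counts are assumed for illustration, and DAVID's own statistic (a modified Fisher exact test, the EASE score) differs in detail.

```python
# Sketch of an over-representation (enrichment) test of the kind used for the
# GO/KEGG step above. Gene counts are assumed for illustration only.
from scipy.stats import hypergeom

N_background = 20000   # assumed number of annotated background genes
K_in_pathway = 150     # assumed genes annotated to one pathway
n_query      = 240     # size of the intersection target set reported in the results
k_overlap    = 12      # assumed query genes falling in the pathway

# P(X >= k) under sampling without replacement; small values indicate enrichment
p_value = hypergeom.sf(k_overlap - 1, N_background, K_in_pathway, n_query)
print(f"enrichment p-value: {p_value:.3g}")

# The study keeps terms with P < .05 and ranks the top 20 by enriched-target
# count; given a list of (term, count, p) tuples, that step reduces to:
rows = [("HIF-1 signaling pathway", k_overlap, p_value), ("example pathway", 3, 0.2)]
kept = sorted((r for r in rows if r[2] < 0.05), key=lambda r: r[1], reverse=True)[:20]
print(kept)
```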
We set the P-value < .05 and defined the species as “Homo Sapiens.” The top 20 entries of the KEGG pathway were selected for visualization and ranked according to the number of enriched targets.", "[SUBTITLE] 3.1. Screening of active ingredients in Chuanhuitongluo capsules [SUBSECTION] A total of 105 compounds were screened by TCMSP and Batman-TCM databases in Chuanzhi Tongluo Capsules. Among them, 20 were from Astragalus, 65 from Salvia, 13 from the leech, and 7 from Chuanxiong. A total of 427 targets were predicted. For more information, see Table 1.\nThe information on the active compounds and the targets of Chuanzhi Tongluo Capsules.\nA total of 105 compounds were screened by TCMSP and Batman-TCM databases in Chuanzhi Tongluo Capsules. Among them, 20 were from Astragalus, 65 from Salvia, 13 from the leech, and 7 from Chuanxiong. A total of 427 targets were predicted. For more information, see Table 1.\nThe information on the active compounds and the targets of Chuanzhi Tongluo Capsules.\n[SUBTITLE] 3.2. Prediction and screening of core targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule [SUBSECTION] A total of 3055 potential CI targets were screened using the GeneCards database. The online tool Venny 2. 1 (http://bioinfogp.cnb.csic.es/tools/venny/index.html) was used to draw the Venn diagram (Fig. 1) of the active ingredient targets and CI targets of Chuanzhi Tongluo Capsule. A total of 240 potential predicted intersection targets were obtained for CI treatment with Chuanzhi Tongluo Capsule.\nVenn diagram of the intersectional targets of Chuanzhi Tongluo Capsule for cerebral infarction.\nA total of 3055 potential CI targets were screened using the GeneCards database. The online tool Venny 2. 1 (http://bioinfogp.cnb.csic.es/tools/venny/index.html) was used to draw the Venn diagram (Fig. 1) of the active ingredient targets and CI targets of Chuanzhi Tongluo Capsule. A total of 240 potential predicted intersection targets were obtained for CI treatment with Chuanzhi Tongluo Capsule.\nVenn diagram of the intersectional targets of Chuanzhi Tongluo Capsule for cerebral infarction.\n[SUBTITLE] 3.3. Drug-component-target-disease network diagram construction [SUBSECTION] Cytoscape 3.7.2 software was used to construct the “drug-component-target-disease” network diagram (Fig. 2). A total of 550 nodes (including 4 drug nodes, 1 disease target, 441 targets, and 104 active compound nodes) and 956 edges were analyzed in the network diagram. In the network, triangles represent the drug and cerebral infarcts of Chuanzhi Tongluo Capsule, circles represent the compound active compounds, and target genes are represented by V-shaped. In the network, each edge represents the interactions between nodes, and the degree value represents the number of connections between nodes and other nodes. The data are analyzed using the “Network analyze” plug-in. The nodes with significant rank and centrality values in the network are screened, and the resulting nodes may play a key role in the network.[1] Thus, these multiple correspondences between the active compounds and the targets reflect the complexity of the herbal compound, while the interaction between the components and the targets may be a reflection of the holistic therapeutic effect of the herbal compound. Based on the screening of the rank and centrality values of the active compounds, the top 5 active ingredients were crocetin, ursolic acid, D-Mannitol, quercetin, and hederagenin, respectively. 
This suggests that several of the above active ingredients may be key factors in the treatment of cerebral infarction with Chuanzhi Tongluo Capsule.\nThe drug-component-target-disease network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\nCytoscape 3.7.2 software was used to construct the “drug-component-target-disease” network diagram (Fig. 2). A total of 550 nodes (including 4 drug nodes, 1 disease target, 441 targets, and 104 active compound nodes) and 956 edges were analyzed in the network diagram. In the network, triangles represent the drug and cerebral infarcts of Chuanzhi Tongluo Capsule, circles represent the compound active compounds, and target genes are represented by V-shaped. In the network, each edge represents the interactions between nodes, and the degree value represents the number of connections between nodes and other nodes. The data are analyzed using the “Network analyze” plug-in. The nodes with significant rank and centrality values in the network are screened, and the resulting nodes may play a key role in the network.[1] Thus, these multiple correspondences between the active compounds and the targets reflect the complexity of the herbal compound, while the interaction between the components and the targets may be a reflection of the holistic therapeutic effect of the herbal compound. Based on the screening of the rank and centrality values of the active compounds, the top 5 active ingredients were crocetin, ursolic acid, D-Mannitol, quercetin, and hederagenin, respectively. This suggests that several of the above active ingredients may be key factors in the treatment of cerebral infarction with Chuanzhi Tongluo Capsule.\nThe drug-component-target-disease network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\n[SUBTITLE] 3.4. Prediction and analysis of core targets of Chuanzhi Tongluo Capsule for cerebral infarction [SUBSECTION] The intersecting targets of Chuanzhi Tongluo Capsule and cerebral infarction were imported into the STRING 11.0 database to obtain the protein interaction network map (Fig. 3). The obtained data were imported into Cytoscape 3.7.2 software in TSV format to construct the PPI network (Fig. 3). The network contains a total of 238 nodes and 3086 edges. The top 10 targets, such as INS, ALB, IL-6, VEGFA, TNF, TP53, APP, MAPK1, EGFR, and FTGS2, were ranked in descending order of degree value, and the top 10 targets may be the key targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule (Table 2).\nResult of core target network information.\nProtein–protein interaction network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\nThe intersecting targets of Chuanzhi Tongluo Capsule and cerebral infarction were imported into the STRING 11.0 database to obtain the protein interaction network map (Fig. 3). The obtained data were imported into Cytoscape 3.7.2 software in TSV format to construct the PPI network (Fig. 3). The network contains a total of 238 nodes and 3086 edges. The top 10 targets, such as INS, ALB, IL-6, VEGFA, TNF, TP53, APP, MAPK1, EGFR, and FTGS2, were ranked in descending order of degree value, and the top 10 targets may be the key targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule (Table 2).\nResult of core target network information.\nProtein–protein interaction network diagram. 
For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\n[SUBTITLE] 3.5. GO functional enrichment analysis [SUBSECTION] A total of 142 GO entries (P < .05) were obtained using the DAVID online platform for GO functional enrichment analysis of the intersecting targets. Among them, 96 entries were obtained for biological processes (Fig. 4A), mainly related to response to drug and response to hypoxia, and 21 entries were obtained for cell composition, mainly related to the plasma membrane and integral component of the plasma membrane (Fig. 4B); 25 articles of molecular function (Fig. 4C), mainly related to steroid hormone receptor activity (Fig. 5). The molecular functions are mainly related to steroid hormone receptor activity and drug binding.\nThe chart of GO functional enrichment analysis. (A) Biological processes; (B) cell composition; and (C) molecular function. GO = gene ontology.\nThe target-pathway map of Chuanzhi Tongluo Capsule. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\nA total of 142 GO entries (P < .05) were obtained using the DAVID online platform for GO functional enrichment analysis of the intersecting targets. Among them, 96 entries were obtained for biological processes (Fig. 4A), mainly related to response to drug and response to hypoxia, and 21 entries were obtained for cell composition, mainly related to the plasma membrane and integral component of the plasma membrane (Fig. 4B); 25 articles of molecular function (Fig. 4C), mainly related to steroid hormone receptor activity (Fig. 5). The molecular functions are mainly related to steroid hormone receptor activity and drug binding.\nThe chart of GO functional enrichment analysis. (A) Biological processes; (B) cell composition; and (C) molecular function. GO = gene ontology.\nThe target-pathway map of Chuanzhi Tongluo Capsule. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.\n[SUBTITLE] 3.6. KEGG pathway enrichment analysis [SUBSECTION] The DAVID database was used for enrichment analysis of the intersection targets, and P < .05 was used as the screening criterion. A total of 25 pathways were screened, mainly involving calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, and HIF-1 signaling pathway (Table 3). This suggests that the active ingredients in Chuanzhi Tongluo Capsule may be involved in the treatment of patients with cerebral infarction through the above signaling pathways.\nResults of KEGG pathway enrichment analysis.\nKEGG = Kyoto Encyclopedia of Genes and Genomes.\nThe DAVID database was used for enrichment analysis of the intersection targets, and P < .05 was used as the screening criterion. A total of 25 pathways were screened, mainly involving calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, and HIF-1 signaling pathway (Table 3). This suggests that the active ingredients in Chuanzhi Tongluo Capsule may be involved in the treatment of patients with cerebral infarction through the above signaling pathways.\nResults of KEGG pathway enrichment analysis.\nKEGG = Kyoto Encyclopedia of Genes and Genomes.", "A total of 105 compounds were screened by TCMSP and Batman-TCM databases in Chuanzhi Tongluo Capsules. 
Among them, 20 were from Astragalus, 65 from Salvia, 13 from the leech, and 7 from Chuanxiong. A total of 427 targets were predicted. For more information, see Table 1.\nThe information on the active compounds and the targets of Chuanzhi Tongluo Capsules.", "A total of 3055 potential CI targets were screened using the GeneCards database. The online tool Venny 2. 1 (http://bioinfogp.cnb.csic.es/tools/venny/index.html) was used to draw the Venn diagram (Fig. 1) of the active ingredient targets and CI targets of Chuanzhi Tongluo Capsule. A total of 240 potential predicted intersection targets were obtained for CI treatment with Chuanzhi Tongluo Capsule.\nVenn diagram of the intersectional targets of Chuanzhi Tongluo Capsule for cerebral infarction.", "Cytoscape 3.7.2 software was used to construct the “drug-component-target-disease” network diagram (Fig. 2). A total of 550 nodes (including 4 drug nodes, 1 disease target, 441 targets, and 104 active compound nodes) and 956 edges were analyzed in the network diagram. In the network, triangles represent the drug and cerebral infarcts of Chuanzhi Tongluo Capsule, circles represent the compound active compounds, and target genes are represented by V-shaped. In the network, each edge represents the interactions between nodes, and the degree value represents the number of connections between nodes and other nodes. The data are analyzed using the “Network analyze” plug-in. The nodes with significant rank and centrality values in the network are screened, and the resulting nodes may play a key role in the network.[1] Thus, these multiple correspondences between the active compounds and the targets reflect the complexity of the herbal compound, while the interaction between the components and the targets may be a reflection of the holistic therapeutic effect of the herbal compound. Based on the screening of the rank and centrality values of the active compounds, the top 5 active ingredients were crocetin, ursolic acid, D-Mannitol, quercetin, and hederagenin, respectively. This suggests that several of the above active ingredients may be key factors in the treatment of cerebral infarction with Chuanzhi Tongluo Capsule.\nThe drug-component-target-disease network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.", "The intersecting targets of Chuanzhi Tongluo Capsule and cerebral infarction were imported into the STRING 11.0 database to obtain the protein interaction network map (Fig. 3). The obtained data were imported into Cytoscape 3.7.2 software in TSV format to construct the PPI network (Fig. 3). The network contains a total of 238 nodes and 3086 edges. The top 10 targets, such as INS, ALB, IL-6, VEGFA, TNF, TP53, APP, MAPK1, EGFR, and FTGS2, were ranked in descending order of degree value, and the top 10 targets may be the key targets for the treatment of cerebral infarction with Chuanzhi Tongluo Capsule (Table 2).\nResult of core target network information.\nProtein–protein interaction network diagram. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.", "A total of 142 GO entries (P < .05) were obtained using the DAVID online platform for GO functional enrichment analysis of the intersecting targets. Among them, 96 entries were obtained for biological processes (Fig. 
4A), mainly related to response to drug and response to hypoxia, and 21 entries were obtained for cell composition, mainly related to the plasma membrane and integral component of the plasma membrane (Fig. 4B); 25 articles of molecular function (Fig. 4C), mainly related to steroid hormone receptor activity (Fig. 5). The molecular functions are mainly related to steroid hormone receptor activity and drug binding.\nThe chart of GO functional enrichment analysis. (A) Biological processes; (B) cell composition; and (C) molecular function. GO = gene ontology.\nThe target-pathway map of Chuanzhi Tongluo Capsule. For definition of abbreviations, see Appendix 1, Supplemental Digital Content, http://links.lww.com/MD/H448.", "The DAVID database was used for enrichment analysis of the intersection targets, and P < .05 was used as the screening criterion. A total of 25 pathways were screened, mainly involving calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, cGMP-PKG signaling pathway, TNF signaling pathway, and HIF-1 signaling pathway (Table 3). This suggests that the active ingredients in Chuanzhi Tongluo Capsule may be involved in the treatment of patients with cerebral infarction through the above signaling pathways.\nResults of KEGG pathway enrichment analysis.\nKEGG = Kyoto Encyclopedia of Genes and Genomes.", "Cerebral infarction is one of the most common cerebrovascular diseases in clinical practice. The pathogenesis of this disease is mostly due to the obstruction of cerebrovascular blood circulation in patients, which causes the insufficient blood supply to brain tissue, thus triggering ischemia and hypoxia in brain tissue and leading to impaired brain function. The clinical symptoms of patients mainly include coma, unconsciousness, hemiplegia, and numbness of the limbs.[11,12] The etiology, pathological process and related pathogenic mechanisms of this disease are complex and diverse, and patients have a long course. Current studies have shown that Chinese medicine has multi-target and multi-faceted advantages in the treatment of this disease.[13]\nIn Chinese medicine, cerebral infarction is called “stroke.” According to TCM, the pathogenesis of stroke patients is closely related to the pathological factors of wind, fire, phlegm, qi and stasis in TCM, among which “stasis” is involved in the whole process of ischemic stroke.[14] Patients with this disease are prone to sequelae during the recovery period due to an imbalance of Qi and blood, and blood vessels are not smooth. 
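The hub-target ranking reported in section 3.4 above (a STRING-derived network of 238 nodes and 3086 edges, with nodes ranked by degree) can be outlined with networkx in place of Cytoscape's Network Analyzer. The edge list below is a toy placeholder rather than the actual STRING export.

```python
# Outline of the degree-based hub ranking described in section 3.4, using
# networkx in place of Cytoscape's Network Analyzer. Edges are toy examples.
import networkx as nx

edges = [
    ("IL6", "TNF"), ("IL6", "ALB"), ("IL6", "VEGFA"),
    ("TNF", "TP53"), ("ALB", "INS"), ("VEGFA", "EGFR"),
    ("INS", "IL6"), ("MAPK1", "EGFR"), ("MAPK1", "TP53"),
]

G = nx.Graph()
G.add_edges_from(edges)

# Rank nodes by degree (number of interaction partners), descending; with the
# real network this is the step that places genes such as INS, ALB and IL6 on top
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
for gene, degree in hubs[:10]:
    print(gene, degree)

# Other centralities can be combined with degree if a broader screen is wanted
betweenness = nx.betweenness_centrality(G)
```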
Therefore, activating blood circulation and removing blood stasis is quite important in the treatment of stroke disease.[15] Chuanzhi Tongluo Capsule is an enteric-soluble hard capsule of Chinese herbal medicine developed from four herbs, namely, Astragalus, Salvia, leech, and Chuanxiong, and supplemented by Buyang Huanwu Tang.[16] This Chinese medicine uses leech as the main drug component to release the patient’s blood stasis; Chuanxiong is a supplementary drug that can work with a leech to reach the brain and activate blood circulation; it can also make the overall qi and blood flow smoothly and the functions of internal organs work well; as a supplementary drug, Huangqi’s function of tonifying Qi can make the stagnant blood work again and maintain the meridians; Danshen has the function of activating blood to remove blood stasis, calming the mind and nourishing the heart; this Chinese medicine can be used for patients with cerebral infarction The herbal medicine can activate blood circulation, remove blood stasis, benefit qi, and promote circulation.[17] Studies have shown that quercetin, a major component of the Astragalus, can reduce neuronal cell damage through mechanisms such as inhibition of oxidative stress and inflammatory response, and play a role in protecting the nervous system in stroke.[18] Hirudin in leeches is the most potent natural thrombin inhibitor found to date, with significant anti-platelet aggregation, anticoagulation, antithrombotic, vasodilatory and viscosity-lowering effects.[19] Hirudin is less stable and its enteric capsules prevent the destruction of this active ingredient by gastric acid and pepsin, which further enhances its anticoagulant and antithrombotic effects.[20] Salvia can up-regulate the expression of fibroblast growth factor bFGF and has a protective effect on nerve cells undergoing ischemia-reperfusion; salvia can also further reduce PG concentration by reducing superoxide dismutase activity and mitigating ischemia-reperfusion injury.[21] Among them, danshenin can inhibit the release of thromboxane from endothelial cells.[22] Chuanxiongzin in chuanxiong can reduce ischemia-reperfusion injury in patients by reducing oxygen stress and promoting cellular mitochondrial energy metabolism.[23,24] In addition, Chuan Xiong has inhibited platelet aggregation and antithrombotic effects. Chuanxiongzin also reduces leukocyte adhesion to the venous wall, inhibits erythrocyte aggregation, accelerates erythrocyte electrophoresis, reduces platelet adhesion rate, and prevents elevated blood viscosity, among other effects.[25,26]\nBased on the network diagram of drug-component-disease-target, the screening of key ingredients reveals that the main ingredients in Chuanzhi Tongluo Capsule for the treatment of cerebral infarction include, crocetin, ursolic acid, dextro-mannitol, quercetin, ivy saponin element (hederagenin), etc. There are reports in the literature showing various pharmacological effects of saffron acid such as anti-tumor[27] and anti-atherosclerosis.[28] In addition, the experimental results of Zhang et al[29] showed that saffron acid could significantly prolong coagulation time in mice and tail bleeding time in mice; for in vitro thrombosis and venous thrombosis in rats, saffron acid significantly reduced the wet and dry mass of thrombus; and significantly prolonged TT, PT, and APTT in hemorrhagic rats. Studies showed that saffron glucoside was not absorbed in the small intestine of rats. 
The glycosides were also undetectable in the blood of rats, but their sapogenins, saffron acid, were detected.[30] These studies suggest that the anticoagulant and antithrombotic effects of saffron glucoside may be mediated through its sapogenin, saffron acid. In addition, saffron acid has several beneficial functions in the nervous system, such as neuronal survival, prominent plasticity, and microglia activation.[31] Microglia play a key regulatory role in the anti-inflammatory and immune response of the CNS. In the case of disease, the active function of the glial can help to restore homeostasis within the central nervous system. Ursolic acid, on the other hand, is a triterpenoid found in natural plants and has a variety of biological effects such as sedative, anti-inflammatory, antibacterial, anti-diabetic, anti-ulcer, and blood sugar lowering. Ursolic acid also has significant antioxidant properties. Bijuanjuan et al[32] reported that ursolic acid provides energy to cells by inducing the autophagic process, sustains their survival, and acts as a protective agent for vascular endothelial cells. Studies have shown that mannitol may contribute to the scavenging of oxidative free radicals from brain tissue in the ischemic zone and participate in altering the hemodynamics of the ischemic zone.[33] Also, neurons in the final area of the parietal nucleus of the cerebellum may be affected by the drug and thus involved in the regulation of cardiovascular function.[34,35] It has been shown that quercetin may improve its damaged nerves by inhibiting the release of inflammatory factors such as TNF-α and IL-1β in a rat model of stroke and further inhibiting apoptosis and oxidative stress levels of cells.[36] Ivy saponin elements are widely distributed in a variety of medicinal plants. It has been found that ivy saponin elements have various pharmacological effects such as antitumor, antidepressant, antibacterial and anti-inflammatory, and antidiabetic. Wu et al[37] showed that ivy saponin elements can improve motor impairment in PD mouse models through their neuroprotection. One study[38] found that ivy saponin elements had preventive effects on hyperlipidemia in both experimental rats and mice; also, its blood rheological properties of hyperlipidemia in experimental rats were significantly improved.\nTopological analysis of the key targets showed that 10 targets such as INS, ALB, IL-6, VEGFA, TNF and TP53 were ranked among the top targets. This suggests that these targets may play a key role in the treatment of ischemic stroke with Chuanzhi Tongluo Capsule. Among them, insulin can achieve the function of dilating small pre-capillary arteries, enhancing the compliance of ductus arteriosus and expanding microvascular blood volume.[39] Albumin can maintain cellular metabolism by increasing the transport of pyruvate with neurons, thus achieving neuronal protection.[40] It has been shown that serum albumin levels gradually decrease in acute stroke patients; and the greater the decrease in albumin in patients, the larger the infarct area and the more the blood-brain barrier is disrupted.[41] interleukin-6 is one of the systemic inflammatory markers and plays an important role in the body’s immune response against infection. il-6 may have a role in regulating the immune response, the acute phase response, and promoting hematopoiesis. 
Elevated IL-6 concentrations are closely associated with poor prognosis and high mortality in patients after stroke.[42] Rong Wei et al[43] demonstrated that the active ingredient in ginseng, ginsenoside Rg1, could achieve neuroprotective effects by decreasing the expression level of IL-6 in rats with ischemic stroke. VEGFA, a vascular endothelial growth factor, promotes cerebral neurovascular repair in the stroke region and improves the prognosis of stroke.[44]VEGFA has an important role in the process of angiogenesis after cerebral ischemia.[45] When VEGFA is given to the MCAO rat model, it increases the density of biological microvessels in the ischemic semidark zone and promotes angiogenesis.[46] TNF-α has the effect of activating microglia and promoting the expression of adhesion and chemokines. Improving the migration capacity of inflammation-related cells is one of the key causes of neuronal cell injury after ischemic stroke.[47] In addition, animal studies found that the volume of cerebral infarcts and the degree of brain damage after cerebral ischemia were significantly higher in mice deficient in the TNF receptor gene than in wild-type mice. The above studies suggest that TNF has certain neuroprotective effects.[48] As one of the inflammatory cytokines, TNF enhances the degree of neuronal damage and thrombus formation in ischemic stroke. TP53 plays an important role in cell proliferation and apoptosis by regulating the synthesis of cell cycle-related proteins. In addition, there are reports in the literature showing that TP53 can be used as a genetic marker to predict the prognosis of stroke patients.[49]\nThe GO enrichment analysis of the targets of Chuanzhi Tongluo Capsule in the treatment of ischemic stroke suggests that biological processes such as drug response, response to hypoxia, the composition of the plasma membrane, steroid hormone receptor activity, and drug binding may play a major role. The KEGG enrichment pathway analysis showed that the potential targets of Chuanzhi Tongluo Capsules in the treatment of cerebral infarction mainly involve calcium signaling pathway, cAMP signaling pathway, cGMP-PKG signaling pathway, and TNF signaling pathway. Among them, the calcium signaling pathway stabilizes blood pressure by regulating Ca2+ channels, which can lead to an increase in peripheral vascular resistance, thus causing an increase in blood pressure. 
It has also been suggested that Ca2+ increases the stability of vascular smooth muscle cell membranes to alleviate stressful blood pressure elevation and decreases the endogenous vascular pressor response, which in turn promotes urinary sodium excretion and reduces blood volume; Ca2+ further enhances the active transport of sodium and potassium by relieving the inhibition of Na-K-ATPase activity, relaxes vascular smooth muscle, and achieves blood pressure reduction.[50] cAMP is one of the important intracellular second messengers that can exert a cerebral protective effect by activating the PKA signaling pathway and mediating the formation of neuronal regenerative synapses by binding its response element to the protein CREB.[51] In addition, it has been shown that the cGMP-PKG signaling pathway can be involved in regulating neurogenesis and cell proliferation processes.[52] Meanwhile, activation of the cGMP-PKG signaling pathway promotes neurogenesis and improves neural structure and function in post-ischemic mice.[53] And selective inhibition of the TNF signaling pathway has potential applications in reducing blood-brain barrier disruption and improving neurological prognosis.[47]", "In summary, this paper investigated the pharmacological basis and potential biological mechanisms of Chuanzhi Tongluo Capsules in the treatment of cerebral infarction using a network pharmacology approach. The treatment of cerebral infarction with Chuanzhi Tongluo Capsule is a complex process involving multiple components, multiple targets and multiple pathways. The active ingredients in Chuanzhi Tongluo Capsule, such as saffron acid, ursolic acid, mannitol, quercetin, and ivy saponin element, may be involved in the therapeutic process of cerebral infarction by regulating calcium, cAMP, cGMP-PKG, and TNF signaling pathway through key targets, such as INS, ALB, IL-6, VEGFA, TNF, and TP53.", "Conceptualization: Shan Ma, Jianxin Zhang.\nData curation: Shan Ma.\nFormal analysis: Shan Ma.\nInvestigation: Wenhui Fan.\nMethodology: Wenhui Fan.\nProject administration: Jianxin Zhang.\nResources: Wenhui Fan.\nSoftware: Wenhui Fan.\nSupervision: Wenhui Fan.\nValidation: Wenhui Fan.\nWriting – original draft: Shan Ma, Jianxin Zhang.\nWriting – review & editing: Shan Ma, Jianxin Zhang.", "" ]
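The drug–component–target network of section 3.3 is assembled in Cytoscape from an edge list. A minimal sketch of writing such a list in SIF format is given below; the herb–compound and compound–target rows, the relation labels, and the output file name are illustrative assumptions, not the study's actual data.

```python
# Minimal sketch of preparing a herb-compound-target edge list in SIF format
# for import into Cytoscape, as in the network of section 3.3. Rows, relation
# labels and the file name are illustrative assumptions.

herb_to_compound = [
    ("Huangqi", "quercetin"),
    ("Danshen", "tanshinone_iia"),
    ("Chuanxiong", "ligustrazine"),
]
compound_to_target = [
    ("quercetin", "TNF"),
    ("quercetin", "IL6"),
    ("tanshinone_iia", "PTGS2"),
]

# Each SIF row is "source <tab> relation <tab> target"; Cytoscape's network
# import dialog accepts this file directly.
with open("czt_network.sif", "w", encoding="utf-8") as fh:
    for herb, compound in herb_to_compound:
        fh.write(f"{herb}\tcontains\t{compound}\n")
    for compound, target in compound_to_target:
        fh.write(f"{compound}\ttargets\t{target}\n")
```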
[ "intro", "methods", null, null, null, null, null, "results", null, null, null, null, null, null, "discussion", null, null, "supplementary-material" ]
[ "action mechanism", "cerebral infarction", "Chuanzhi Tongluo Capsule", "network pharmacology" ]
The angiotensin receptor and neprilysin inhibitor, LCZ696, in heart failure: A meta-analysis of randomized controlled trials.
36254034
LCZ696 is a novel neuroendocrine inhibitor that has been widely used in heart failure (HF). However, its advantage over other neuroendocrine inhibitors, such as angiotensin-converting enzyme inhibitors (ACEis) and angiotensin-receptor blockers (ARBs) has not been fully elucidated. This study aimed to provide the latest evidence regarding the efficacy and safety of LCZ696 as compared to other ACEis and ARBs with regards to the treatment of HF.
BACKGROUND
We systematically searched databases, including PubMed, Embase, and the Cochrane Library, for relevant randomized controlled trials (RCTs). The outcome measures included all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in N-terminal pro-brain natriuretic peptide (NT-proBNP) levels, and decline of renal function.
METHODS
Five RCTs involving 19,078 patients were identified. The meta-analysis indicated that LCZ696 was associated with a significant reduction in all-cause mortality (hazard ratio [HR] = 0.84; 95% confidence interval [CI], 0.76-0.93; P = .0005), rate of hospitalizations for HF (HR = 0.80; 95% CI, 0.73-0.87; P < .00001), reduction in NT-proBNP levels (rate ratio = 0.78; 95% CI, 0.70-0.88; P < .0001), and decline in renal function (odds ratio = 0.77; 95% CI, 0.68-0.88; P < .0001) compared with ACEis and ARBs. However, there was no statistical difference in the rate of death from cardiovascular causes (HR = 0.86; 95% CI, 0.72-1.03; P = .09) between LCZ696 and ACEis and ARBs.
RESULTS
LCZ696 is superior to ACEis and ARBs in the treatment of HF. Hence, it should be more widely used clinically.
CONCLUSION
[ "Aminobutyrates", "Angiotensin Receptor Antagonists", "Angiotensin-Converting Enzyme Inhibitors", "Angiotensins", "Antihypertensive Agents", "Biphenyl Compounds", "Drug Combinations", "Heart Failure", "Humans", "Neprilysin", "Randomized Controlled Trials as Topic", "Receptors, Angiotensin", "Valsartan" ]
9575833
1. Introduction
Heart failure (HF) is a common multifactorial and complex clinical syndrome characterized by impaired cardiac function. It is the end stage of a chain of cardiovascular events and results in a heavy burden of disease; thus, it calls for early intervention.[1,2] The treatment concept of HF has continued to evolve, from the hemodynamic stage in the 1970s, through the neuroendocrine stage in the 1990s, to the current stage of overall regulation and multi-target action.[3–5] The Golden Triangle of HF drugs includes angiotensin-converting enzyme inhibitors (ACEis), angiotensin-receptor blockers (ARBs), aldosterone receptor antagonists, and β-blockers. They have been recommended by the treatment guidelines as the standard treatment for HF and have changed the treatment landscape for patients with HF in the last 2 decades.[6,7] However, despite treatment advancement, the overall prognosis of HF is poor.[2] With the aging global population, the prevalence of HF continues to increase. The mortality and rehospitalization rates of HF are still high; hence, improvement in treatment modalities should be seriously considered.[1,2] Currently, it is believed that activation of the neuroendocrine system, leading to myocardial remodeling, is the key factor driving the occurrence and development of HF. The long-term activation of the neuroendocrine system and cytokines is the pathological basis of the occurrence and development of HF. Neuroendocrine inhibitors are considered the cornerstone of the management of HF.[8,9] Nevertheless, the role of traditional neuroendocrine inhibitors in improving exercise tolerance and reducing mortality in patients with HF is generally limited. On the other hand, the use of newer neuroendocrine inhibitors is becoming more widely accepted. LCZ696 is a new neuroendocrine inhibitor that inhibits the occurrence and delays the progression of HF by suppressing myocardial remodeling.[10,11] However, there is a lack of comprehensive comparative studies on the efficacy and safety of LCZ696 compared with traditional neuroendocrine inhibitors, such as ACEis and ARBs, in the treatment of HF. Moreover, whether LCZ696 is superior to ACEis and ARBs remains a matter of great concern. Drug safety can be evaluated in terms of the side effects that a drug confers. The side effects associated with neuroendocrine inhibitors include nephrotoxicity, hyperkalemia, symptomatic hypotension, and angioedema. Among these effects, nephrotoxicity is the main index used for drug safety evaluation.[12–14] Thus, in recent years, several randomized controlled trials (RCTs) have investigated the efficacy and safety of LCZ696 in treating HF and compared them with those of ACEis and ARBs. The PARAGON-HF trial[15] reported that LCZ696 did not result in a significantly lower rate of total hospitalizations (rate ratio [RR] = 0.85; 95% confidence interval [CI] 0.72–1.00) compared with ACEis and ARBs among patients with HF and an ejection fraction of 45% or higher. Additionally, the PIONEER-HF trial[16] reported that, although LCZ696 had advantages over ACEis and ARBs with regard to reducing the rehospitalization rate and nephrotoxicity in patients with HF, it had no advantage in controlling total mortality. On the other hand, other studies yielded contradicting results. The PARADIGM-HF trial[17] reported that LCZ696 led to a greater reduction (hazard ratio [HR] = 0.84; 95% CI, 0.76–0.93) in all-cause mortality among HF patients compared with enalapril.
Therefore, the literature is still inconsistent. Thus, whether LCZ696 is more effective and safer than ACEis and ARBs still needs further investigations. To evaluate the efficacy and safety of LCZ696 in HF, we performed this meta-analysis and examined several important clinical outcomes, including all-cause mortality, rate of hospitalizations, rate of death from cardiovascular causes, change in N-terminal pro-brain natriuretic peptide (NT-proBNP) levels, and decline in renal function. Additionally, we tried to provide clinical evidence of the effects of LCZ696.
2. Materials and Methods
[SUBTITLE] 2.1. Search strategy [SUBSECTION] We systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included. We systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included. [SUBTITLE] 2.2. Inclusion and exclusion criteria [SUBSECTION] The inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes). The exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies. The trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. Any disagreements were arbitrated by a third author, namely, M.D.C. The inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes). The exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies. The trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. Any disagreements were arbitrated by a third author, namely, M.D.C. [SUBTITLE] 2.3. Data extraction and quality assessment [SUBSECTION] Two researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. 
The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18] Two researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18] [SUBTITLE] 2.4. Statistical analysis [SUBSECTION] We performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant. We performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant.
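The pooling and heterogeneity statistics referred to in the statistical-analysis section (inverse-variance pooling of hazard ratios, Cochran's Q, I2, and the switch to a random-effects model when I2 ≥ 50% with P < .1) can be sketched as follows. The per-study hazard ratios and confidence intervals in the example are placeholders rather than data from the included trials, and RevMan's implementation details may differ.

```python
# Sketch of inverse-variance pooling of hazard ratios with Cochran's Q and I2,
# and the fixed/random-effects choice described above (random if I2 >= 50%).
# The per-study HRs and 95% CIs below are illustrative, not trial data.
import math

studies = [  # (HR, lower 95% CI, upper 95% CI) -- placeholders
    (0.82, 0.71, 0.95),
    (0.88, 0.74, 1.05),
]

# Work on the log scale; SE is recovered from the CI width: (ln U - ln L) / (2 * 1.96)
logs = [(math.log(hr), (math.log(u) - math.log(l)) / (2 * 1.96)) for hr, l, u in studies]
w = [1 / se**2 for _, se in logs]                        # fixed-effect weights
fixed = sum(wi * y for wi, (y, _) in zip(w, logs)) / sum(w)

# Cochran's Q and I2 quantify between-study heterogeneity
Q = sum(wi * (y - fixed) ** 2 for wi, (y, _) in zip(w, logs))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

if I2 >= 50:  # DerSimonian-Laird random-effects model
    C = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / C) if C > 0 else 0.0
    w_re = [1 / (se**2 + tau2) for _, se in logs]
    pooled = sum(wi * y for wi, (y, _) in zip(w_re, logs)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
else:          # fixed-effect model
    pooled = fixed
    se_pooled = math.sqrt(1 / sum(w))

hr, lo, hi = (math.exp(x) for x in (pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled))
print(f"pooled HR {hr:.2f} (95% CI {lo:.2f}-{hi:.2f}), I2 = {I2:.0f}%")
```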
3. Results
[SUBTITLE] 3.1. Study selection and characteristics of eligible studies [SUBSECTION] We identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. Important details about the included studies are shown in Table 1. The main characteristics and outcomes of included studies. NT-proBNP, N-terminal pro-brain natriuretic peptide. Flow chart about the article selection process. We identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. Important details about the included studies are shown in Table 1. The main characteristics and outcomes of included studies. NT-proBNP, N-terminal pro-brain natriuretic peptide. Flow chart about the article selection process. [SUBTITLE] 3.2. All-cause mortality [SUBSECTION] Two studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The pooled analysis showed a significantly lower mortality rate with LCZ696 than with ACEis/ARBs (HR = 0.84; 95% CI, 0.76–0.93; P = .0005, Fig. 2). Meta-analysis for the all-cause mortality. Two studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The pooled analysis showed a significantly lower mortality rate with LCZ696 than with ACEis/ARBs (HR = 0.84; 95% CI, 0.76–0.93; P = .0005, Fig. 2). Meta-analysis for the all-cause mortality. [SUBTITLE] 3.3. Rate of hospitalizations [SUBSECTION] Three articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed a significantly lower rate of hospitalizations in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%). Meta-analysis for the rate of hospitalizations due to HF. HF = heart failure. 
Three articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed a significantly lower rate of hospitalizations in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%). Meta-analysis for the rate of hospitalizations due to HF. HF = heart failure. [SUBTITLE] 3.4. Death from cardiovascular causes [SUBSECTION] Two studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 4). Meta-analysis for death from cardiovascular causes. Two studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 4). Meta-analysis for death from cardiovascular causes. [SUBTITLE] 3.5. Reduction in NT-proBNP levels [SUBSECTION] Three studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for pooled effect (I2 = 67%). Meta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide. Three studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for pooled effect (I2 = 67%). Meta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide. [SUBTITLE] 3.6. Decline in renal function [SUBSECTION] Three studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that a decline in renal function occurred significantly less often in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%). Meta-analysis for decline in renal function. Three studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that a decline in renal function occurred significantly less often in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%). Meta-analysis for decline in renal function. [SUBTITLE] 3.7. Risk of bias assessment [SUBSECTION] The risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs. 
Quality evaluation of included articles. The risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs. Quality evaluation of included articles.
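A reported hazard ratio and its 95% CI determine the corresponding two-sided Wald z-test P value, which is a quick way to read pooled estimates such as those above. The short check below uses the pooled all-cause mortality figures quoted in the results (HR 0.84, 95% CI 0.76–0.93); the formula is generic and independent of the meta-analysis software, and the rounding of the CI endpoints affects the result.

```python
# Relating a pooled hazard ratio and its 95% CI to the two-sided Wald z-test
# P value (inputs taken from the pooled all-cause mortality estimate above).
import math
from statistics import NormalDist

hr, lo, hi = 0.84, 0.76, 0.93
log_hr = math.log(hr)
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # SE of log HR from the CI width
z = log_hr / se
p = 2 * NormalDist().cdf(-abs(z))                 # two-sided P value
print(f"z = {z:.2f}, P = {p:.1g}")                # roughly 7e-4 with these rounded inputs
```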
null
null
[ "2.1. Search strategy", "2.2. Inclusion and exclusion criteria", "2.3. Data extraction and quality assessment", "2.4. Statistical analysis", "3.1. Study selection and characteristics of eligible studies", "3.2. All-cause mortality", "3.3. Rate of hospitalizations", "3.4. Death from cardiovascular causes", "3.5. Reduction in NT-proBNP levels", "3.6. Decline in renal function", "3.7. Risk of bias assessment", "5. Conclusion", "Authors’ contributions" ]
[ "We systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included.", "The inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes).\nThe exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies.\nThe trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. Any disagreements were arbitrated by a third author, namely, M.D.C.", "Two researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18]", "We performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant.", "We identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. 
All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. Important details about the included studies are shown in Table 1.\nThe main characteristics and outcomes of included studies.\nNT-proBNP, N-terminal pro-brain natriuretic peptide.\nFlow chart about the article selection process.", "Two studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The mortality rate was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.84; 95% CI, 0.76–0.93; P = .005, Fig. 2).\nMeta-analysis for the all-cause mortality.", "Three articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed that the rate of hospitalizations for HF was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%).\nMeta-analysis for the rate of hospitalizations due to HF. HF = heart failure.", "Two studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 4).\nMeta-analysis for death from cardiovascular causes.", "Three studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for the pooled effect (I2 = 67%).\nMeta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide.", "Three studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that the risk of decline in renal function was significantly lower in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%).\nMeta-analysis for decline in renal function.", "The risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs.\nQuality evaluation of included articles.", "The current meta-analysis demonstrated that, compared with ACEis/ARBs, LCZ696 was associated with significant reductions in overall mortality, the rate of hospitalizations for HF, NT-proBNP levels, and the risk of decline in renal function in patients with HF. Moreover, it did not increase the risk of death from cardiovascular causes. Due to the limitations of this study, further investigations are required.", "L.Y. and H.Q. contributed to the study design and writing. M.D.C., C.L., and L.J.L. carried out the data collection and selection. L.R.X. and H.J. carried out the data analysis. 
All authors read and approved the final manuscript.\nConceptualization: Yan Chen, Qian He.\nData curation: Dun-Chang Mo, Long Chen, Jia-Lu Lu.\nFormal analysis: Rui-Xing Li, Jie Huang.\nInvestigation: Long Chen.\nSoftware: Rui-Xing Li, Jie Huang.\nWriting – review & editing: Yan Chen, Qian He." ]
[ null, null, null, null, null, null, null, null, null, null, null, null, null ]
[ "1. Introduction", "2. Materials and Methods", "2.1. Search strategy", "2.2. Inclusion and exclusion criteria", "2.3. Data extraction and quality assessment", "2.4. Statistical analysis", "3. Results", "3.1. Study selection and characteristics of eligible studies", "3.2. All-cause mortality", "3.3. Rate of hospitalizations", "3.4. Death from cardiovascular causes", "3.5. Reduction in NT-proBNP levels", "3.6. Decline in renal function", "3.7. Risk of bias assessment", "4. Discussion", "5. Conclusion", "Authors’ contributions" ]
[ "Heart failure (HF) is a common multifactorial and complex clinical syndrome characterized by impaired cardiac function. It is the end stage of a chain of cardiovascular events that results in a heavy burden of disease. Thus, it needs early intervention.[1,2] The treatment concept of HF continues to change from the hemodynamic stage in the 1970s to the neuroendocrine stage in the 1990s to the current stage of overall regulation and multi-target action.[3–5] The Golden Triangle drugs include angiotensin-converting enzyme inhibitors (ACEis), angiotensin-receptor blockers (ARBs), and aldosterone receptor antagonists β. They have been recommended by the treatment guidelines as the standard treatment for HF and have changed the treatment landscape for patients with HF in the last 2 decades.[6,7] However, despite treatment advancement, the overall prognosis of HF is poor.[2] With the aging global population, the prevalence of HF continues to increase. The mortality and rehospitalization rates of HF are still high; hence, improvement in treatment modalities should be seriously considered.[1,2] Currently, it is believed that the activation of the neuroendocrine system leading to the myocardial remodeling is the key factor that causes the occurrence and development of HF. The long-term activation of the neuroendocrine system and cytokines is the pathological basis leading to the occurrence and development of HF. Neuroendocrine inhibitors are considered as the cornerstone of the management of HF.[8,9] Nevertheless, the role of traditional neuroendocrine inhibitors in improving exercise tolerance and reducing mortality in patients with HF is generally limited. On the other hand, usage of new neuroendocrine inhibitors is becoming more widely accepted.\nLCZ696 is a new neuroendocrine inhibitor that inhibits the occurrence and delays the progression of HF by suppressing myocardial remodeling.[10,11] However, there is a lack of comprehensive comparative study on the efficacy and safety of LCZ696 compared with the traditional neuroendocrine inhibitors, such as ACEis and ARBs, in the treatment of HF. Moreover, whether LCZ696 is superior to ACEis and ARBs remains a matter of great concern. The drug safety can be evaluated in terms of the side effects that the drug confers. The side effects that are associated with neuroendocrine inhibitors include nephrotoxicity, hyperkalemia, symptomatic hypotension, and vascular edema. Among the aforementioned effects, nephrotoxicity is the main index for safety evaluation for drugs.[12–14] Thus, in recent years, several randomized controlled trials (RCTs) investigated the efficacy and safety of LCZ696 in treating HF and compared it with those of ACEis and ARBs. The PARAGON-HF trial[15] reported that LCZ696 did not result in a significantly lower rate of total hospitalizations (rate ratio [RR] = 0.85; 95% confidence interval [CI] 0.72–1.00) compared with ACEis and ARBs among patients with HF and an ejection fraction of 45% or higher. Additionally, the PIONEER-HF trial[16] reported that, although LCZ696 had more advantages than ACEis and ARBs with regards to the reduction of the rehospitalization rate and nephrotoxicity in patients with HF, it had no advantages in controlling the total mortality. On the other hand, other studies yielded contradicting results. The PARADIGM-HF[17] reported that LCZ696 led to a greater reduction (hazard ratio [HR] = 0.84; 95% CI, 0.76–0.93) in all-cause mortality among HF patients compared to that of enalapril. 
Therefore, the literature is still inconsistent. Thus, whether LCZ696 is more effective and safer than ACEis and ARBs still needs further investigations.\nTo evaluate the efficacy and safety of LCZ696 in HF, we performed this meta-analysis and examined several important clinical outcomes, including all-cause mortality, rate of hospitalizations, rate of death from cardiovascular causes, change in N-terminal pro-brain natriuretic peptide (NT-proBNP) levels, and decline in renal function. Additionally, we tried to provide clinical evidence of the effects of LCZ696.", "[SUBTITLE] 2.1. Search strategy [SUBSECTION] We systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included.\nWe systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included.\n[SUBTITLE] 2.2. Inclusion and exclusion criteria [SUBSECTION] The inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes).\nThe exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies.\nThe trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. Any disagreements were arbitrated by a third author, namely, M.D.C.\nThe inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes).\nThe exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies.\nThe trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. 
Any disagreements were arbitrated by a third author, namely, M.D.C.\n[SUBTITLE] 2.3. Data extraction and quality assessment [SUBSECTION] Two researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18]\nTwo researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18]\n[SUBTITLE] 2.4. Statistical analysis [SUBSECTION] We performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant.\nWe performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant.", "We systematically searched the current mainstream medical databases, including PubMed, Embase, and the Cochrane Library, with the inclusive dates being set from inception to February 2022. We covered a vast majority of medical literatures. 
The search terms used were as follows: “LCZ696,” “sacubitril/valsartan,” “angiotensin receptor neprilysin inhibitor,” “angiotensin-neprilysin inhibitor,” and “heart failure.” We also did a manual search using the reference lists of identified studies to include other potentially eligible literatures. We did not include unpublished papers. When duplicate trials were identified, only the most complete and updated data of the studies were included.", "The inclusion criteria of the RCTs hereby included in the meta-analyses were as follows: patients diagnosed with HF (population), treatment with LCZ696 (intervention), ACEIs and ARBs (comparison), and one or more outcomes of all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function (outcomes).\nThe exclusion criteria of were as follows: non-English articles; non-RCTs (reviews, meta-analysis, letters, or case reports), and basic experiments or animal studies.\nThe trials identified via the search were independently screened for inclusion by 2 authors, namely, C.Y. and H.Q. Any disagreements were arbitrated by a third author, namely, M.D.C.", "Two researchers (C.L. and L.J.L.) with knowledge on systematic evaluation independently carried out literature screening and quality evaluation to ensure the objectivity of the process and results. At times of disagreement, a third coauthor (L.R.X.) intervened to reach a final conclusion. The data extracted for each trial were: author, year, trial number, study design, country, number of patients, regimen, and available outcomes for analysis. The HR/RR for the main outcome measures with the relative 95% CI were extracted or calculated from each trial. Two coauthors (L.R.X, H.J.) assessed the methodological quality of selected literatures using the Cochrane Collaboration’s tool.[18]", "We performed the meta-analysis for the extracted data using the Review Manager software (RevMan version 5.4) provided by the Cochrane Collaboration. The data on all-cause mortality, rate of hospitalizations for HF, and rate of death from cardiovascular causes were pooled as HR with a 95% CI, while the data of change in NT-proBNP levels and decline in renal function were pooled as RR and OR with a 95% CI, respectively. The Cochran Q test and I2 test were used to assess the heterogeneity between studies. We used a random-effects model for the meta-analysis when the heterogeneity test was statistically significant (I2 ≥ 50%, P < .1). Otherwise, a fixed-effect model was used. A P-value of <.05 was considered statistically significant.", "[SUBTITLE] 3.1. Study selection and characteristics of eligible studies [SUBSECTION] We identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. 
Important details about the included studies are shown in Table 1.\nThe main characteristics and outcomes of included studies.\nNT-proBNP, N-terminal pro-brain natriuretic peptide.\nFlow chart about the article selection process.\nWe identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. Important details about the included studies are shown in Table 1.\nThe main characteristics and outcomes of included studies.\nNT-proBNP, N-terminal pro-brain natriuretic peptide.\nFlow chart about the article selection process.\n[SUBTITLE] 3.2. All-cause mortality [SUBSECTION] Two studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The mortality rate was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.84; 95% CI, 0.76–0.93; P = .005, Fig. 2).\nMeta-analysis for the all-cause mortality.\nTwo studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The mortality rate was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.84; 95% CI, 0.76–0.93; P = .005, Fig. 2).\nMeta-analysis for the all-cause mortality.\n[SUBTITLE] 3.3. Rate of hospitalizations [SUBSECTION] Three articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed that the rate of hospitalizations for HF was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%).\nMeta-analysis for the rate of hospitalizations due to HF. HF = heart failure.\nThree articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed that the rate of hospitalizations for HF was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%).\nMeta-analysis for the rate of hospitalizations due to HF. HF = heart failure.\n[SUBTITLE] 3.4. Death from cardiovascular causes [SUBSECTION] Two studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 
4).\nMeta-analysis for death from cardiovascular causes.\nTwo studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 4).\nMeta-analysis for death from cardiovascular causes.\n[SUBTITLE] 3.5. Reduction in NT-proBNP levels [SUBSECTION] Three studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for the pooled effect (I2 = 67%).\nMeta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide.\nThree studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for the pooled effect (I2 = 67%).\nMeta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide.\n[SUBTITLE] 3.6. Decline in renal function [SUBSECTION] Three studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that the risk of decline in renal function was significantly lower in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%).\nMeta-analysis for decline in renal function.\nThree studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that the risk of decline in renal function was significantly lower in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%).\nMeta-analysis for decline in renal function.\n[SUBTITLE] 3.7. Risk of bias assessment [SUBSECTION] The risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs.\nQuality evaluation of included articles.\nThe risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs.\nQuality evaluation of included articles.", "We identified 451 articles through the databases. After duplicate removal, 5 RCTs[15–17,19,20] involving 19,078 participants were eligible for the meta-analysis (Fig. 1 and Table 1). The number of participants in an individual trial ranged from 301 to 8442. The follow-up time ranged from 8 weeks to 34 months. Among the 5 studies, all patients in the experimental group were diagnosed with HF (4 chronic HF[15,17,19,20] and 1 acute HF[16]) and received LCZ696, while those in the control group who were diagnosed with HF received ACEis or ARBs. All articles were published between 2012 and 2021. 
All 5 studies assessed at least one or more outcomes, namely, all-cause mortality, rate of hospitalizations for HF, rate of death from cardiovascular causes, change in NT-proBNP levels, and decline in renal function. Important details about the included studies are shown in Table 1.\nThe main characteristics and outcomes of included studies.\nNT-proBNP, N-terminal pro-brain natriuretic peptide.\nFlow chart about the article selection process.", "Two studies[16,17] reported the outcome of all-cause mortality, with a total of 9323 patients who received either LCZ696 or ACEis or ARBs. The statistical heterogeneity was deemed low in the pooled effect (I2 = 0%). The mortality rate was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.84; 95% CI, 0.76–0.93; P = .005, Fig. 2).\nMeta-analysis for the all-cause mortality.", "Three articles[15–17] examined the hospitalizations for HF, with a total of 14,145 patients. The meta-analysis showed that the rate of hospitalizations for HF was significantly lower in the LCZ696 group than in the ACEis/ARBs group (HR = 0.80; 95% CI, 0.73–0.87; P < .00001, Fig. 3). Statistical heterogeneity was deemed low across the included studies (I2 = 43%).\nMeta-analysis for the rate of hospitalizations due to HF. HF = heart failure.", "Two studies[15,17] with a total of 12,364 patients investigated the changes in death from cardiovascular causes. The statistical heterogeneity was considered high (I2 = 63%). There was no statistical difference in death from cardiovascular causes between patients who received either LCZ696 or ACEis/ARBs (HR = 0.86; 95% CI, 0.72–1.03; P = .09, Fig. 4).\nMeta-analysis for death from cardiovascular causes.", "Three studies[16,19,20] with a total of 5814 patients reported the changes in NT-proBNP levels. Pooled analysis indicated that administration of LCZ696 resulted in a greater reduction of NT-proBNP level compared with that of ACEis/ARBs (RR = 0.78; 95% CI, 0.70–0.88; P < .0001, Fig. 5). High heterogeneity was detected for the pooled effect (I2 = 67%).\nMeta-analysis for change in NT-proBNP level. NT-proBNP = N-terminal pro-brain natriuretic peptide.", "Three studies[15,16,20] with a total of 14,145 patients investigated the changes in renal function in both the LCZ696 and ACEis/ARBs groups. The analysis indicated that the risk of decline in renal function was significantly lower in the LCZ696 group than in the ACEis/ARBs group (odds ratio = 0.77; 95% CI, 0.68–0.88; P < .0001, Fig. 6). No heterogeneity was found among these studies (I2 = 0%).\nMeta-analysis for decline in renal function.", "The risks of bias of the included studies in this meta-analysis are summarized in Figure 7. The methodological quality was assessed as high in all of the 5 included RCTs.\nQuality evaluation of included articles.", "For decades, HF patients have had limited effective therapeutic options and a poor prognosis.[1,3–5] The traditional neuroendocrine inhibitors, ACEis and ARBs, inhibit the activity of ACE, reduce the retention of water and sodium, suppress bradykinin degradation, and relax the blood vessels. They can improve the symptoms and the activity tolerance of patients by reducing the load on the heart, inhibiting myocardial remodeling and sympathetic activity, and protecting the vascular endothelial cells, thereby reducing the risk of hospitalizations and mortality.[21] However, ACEis and ARBs have adverse reactions, such as nephrotoxicity and hyperkalemia. 
In particular, ACEis can cause an irritating dry cough, and they are often poorly tolerated.[22–24] By comparison, LCZ696 combines an ARB with an inhibitor of neprilysin (enkephalinase), which increases the levels of natriuretic peptides, bradykinin, adrenomedullin, and other endogenous vasoactive peptides.[10,25,26] The advent of the angiotensin receptor neprilysin inhibitor, LCZ696, brings new hope for HF patients. However, the current literature is not consistent with regard to the superiority of LCZ696 over the traditional drugs, such as ACEis and ARBs. Thus, we performed this meta-analysis to evaluate the efficacy and safety of LCZ696 in HF and sought to find more evidence for the clinical use of LCZ696.\nTo the best of our knowledge, this is the first meta-analysis that comprehensively compared the efficacy and safety of LCZ696 with those of ACEis and ARBs from various aspects in the treatment of HF. The meta-analysis showed that LCZ696 was superior to ACEis and ARBs in reducing the overall mortality, the rate of hospitalizations for HF, the decline in renal function, and NT-proBNP levels in patients with HF. These results were consistent with those in Huang et al’s study,[27] in which it was reported that, compared with ACEI/ARB, LCZ696 decreased the risk of death, discontinuation due to adverse events (AEs), and decline in renal function. Moreover, Kang et al[28] reported that LCZ696 significantly increased the estimated glomerular filtration rate (P = .02) and reduced the NT-proBNP level (P < .001) compared with irbesartan, valsartan, and enalapril. Another study by Chen et al[29] reported that LCZ696 was associated with a significantly reduced risk of renal function deterioration (P = .02) compared with ACEI/ARB. Furthermore, NT-proBNP is related to the adverse outcomes of HF, and reducing NT-proBNP levels and nephrotoxicity helps improve the survival of patients with HF.[30] In addition, evidence shows that renal dysfunction is associated with mortality in HF patients. Moreover, nephrotoxicity is a common adverse reaction of all neuroendocrine inhibitors; an agent that causes less nephrotoxicity improves drug tolerance and may indirectly reduce mortality.[8,9,31] Therefore, nephrotoxicity is regarded as the primary index to evaluate drug safety in HF. In this study, we found that, except for the similar rate of death from cardiovascular causes, LCZ696 was superior to ACEis/ARBs in terms of the other efficacy and safety outcomes, including less decline in renal function and a greater reduction in NT-proBNP levels. These findings support the evidence that LCZ696 is superior to ACEis/ARBs in the treatment of HF and is worthy of clinical application.\nThis meta-analysis has several limitations. First, the number of included studies is relatively small. Nevertheless, the sample sizes of the included studies are very large; hence, the results are quite convincing. Second, due to limited data, we could not perform subgroup analysis according to the types of HF (acute or chronic) or the classification of renal dysfunction; thus, we assessed nephrotoxicity as an increase in blood creatinine to ≥ 2 mg/dL and used this as the focus of the meta-analysis, which limits the evaluation of results in this study. However, we consider that, viewed from a broader perspective, the results are reliable and support the conclusion that LCZ696 is superior to ACEis/ARBs in the treatment of HF regardless of the type of HF. 
Third, angioedema is a well-recognized adverse event in patients treated with either LCZ696 or ACEis/ARBs. Since the incidences of angioedema in patients taking LCZ696 and in those taking ACEis/ARBs were reported to be very low (0.2%–0.6% vs 0.2%–1.4%) in the included trials (the PARAGON-HF,[15] PIONEER-HF,[16] and PARADIGM-HF[17] trials), we did not analyze them. However, we consider this toxicity important, and it requires attention in future research. Finally, the follow-up durations differed across trials, which may have affected the outcome assessments.", "The current meta-analysis demonstrated that, compared with ACEis/ARBs, LCZ696 was associated with significant reductions in overall mortality, the rate of hospitalizations for HF, NT-proBNP levels, and the risk of decline in renal function in patients with HF. Moreover, it did not increase the risk of death from cardiovascular causes. Due to the limitations of this study, further investigations are required.", "L.Y. and H.Q. contributed to the study design and writing. M.D.C., C.L., and L.J.L. carried out the data collection and selection. L.R.X. and H.J. carried out the data analysis. All authors read and approved the final manuscript.\nConceptualization: Yan Chen, Qian He.\nData curation: Dun-Chang Mo, Long Chen, Jia-Lu Lu.\nFormal analysis: Rui-Xing Li, Jie Huang.\nInvestigation: Long Chen.\nSoftware: Rui-Xing Li, Jie Huang.\nWriting – review & editing: Yan Chen, Qian He." ]
[ "intro", "methods", null, null, null, null, "results", null, null, null, null, null, null, null, "discussion", null, null ]
[ "ACEI/ARB", "heart failure", "LCZ696", "meta-analysis", "neuroendocrine inhibitor" ]